Result | FAILURE
Tests | 1 failed / 0 succeeded
Started |
Elapsed | 2h1m
Revision | release-1.1
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=capi\-e2e\sWhen\supgrading\sa\sworkload\scluster\susing\sClusterClass\sand\stesting\sK8S\sconformance\s\[Conformance\]\s\[K8s\-Upgrade\]\sShould\screate\sand\supgrade\sa\sworkload\scluster\sand\srun\skubetest$'
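The --ginkgo.focus value is an escaped regular expression selecting a single spec: "capi-e2e When upgrading a workload cluster using ClusterClass and testing K8S conformance [Conformance] [K8s-Upgrade] Should create and upgrade a workload cluster and run kubetest". A minimal sketch of re-running only this spec against a local checkout of sigs.k8s.io/cluster-api at release-1.1 (the GINKGO_FOCUS variable and make target are assumptions rather than something taken from this job; check the repository's e2e documentation for the exact entry point):

  # Assumed local reproduction of the focused spec (not this job's actual invocation).
  git clone --branch release-1.1 https://github.com/kubernetes-sigs/cluster-api && cd cluster-api
  GINKGO_FOCUS="When upgrading a workload cluster using ClusterClass and testing K8S conformance" make test-e2e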
/home/prow/go/src/sigs.k8s.io/cluster-api/test/e2e/cluster_upgrade.go:115
Failed to run Kubernetes conformance
Unexpected error:
    <*errors.withStack | 0xc000ad4de0>: {
        error: <*errors.withMessage | 0xc0000ac9a0>{
            cause: <*errors.errorString | 0xc0019dd2c0>{
                s: "error container run failed with exit code 137",
            },
            msg: "Unable to run conformance tests",
        },
        stack: [0x1a98018, 0x1adc429, 0x7b9731, 0x7b9125, 0x7b87fb, 0x7be569, 0x7bdf52, 0x7df031, 0x7ded56, 0x7de3a5, 0x7e07e5, 0x7ec9c9, 0x7ec7de, 0x1af7d32, 0x523bab, 0x46e1e1],
    }
Unable to run conformance tests: error container run failed with exit code 137
occurred
/home/prow/go/src/sigs.k8s.io/cluster-api/test/e2e/cluster_upgrade.go:232
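Exit code 137 is 128 + 9, i.e. the kubetest conformance container was terminated with SIGKILL before it could report a result; on CI hosts this usually points to the kernel OOM killer or an external kill rather than a failing assertion inside the suite. A minimal sketch of confirming an OOM kill on the host that ran the container (the container ID is a placeholder to be taken from the job's docker output):

  # Did docker record an OOM kill and the 137 exit code for the conformance container?
  docker inspect --format '{{.State.OOMKilled}} {{.State.ExitCode}}' <conformance-container-id>
  # Kernel-side evidence of the OOM killer firing around the failure time.
  dmesg -T | grep -i 'killed process'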
STEP: Creating a namespace for hosting the "k8s-upgrade-and-conformance" test spec
INFO: Creating namespace k8s-upgrade-and-conformance-z2xsbw
INFO: Creating event watcher for namespace "k8s-upgrade-and-conformance-z2xsbw"
STEP: Creating a workload cluster
INFO: Creating the workload cluster with name "k8s-upgrade-and-conformance-viu2kk" using the "upgrades-cgroupfs" template (Kubernetes v1.22.17, 1 control-plane machines, 2 worker machines)
INFO: Getting the cluster template yaml
INFO: clusterctl config cluster k8s-upgrade-and-conformance-viu2kk --infrastructure (default) --kubernetes-version v1.22.17 --control-plane-machine-count 1 --worker-machine-count 2 --flavor upgrades-cgroupfs
INFO: Applying the cluster template yaml to the cluster
clusterclass.cluster.x-k8s.io/quick-start created
dockerclustertemplate.infrastructure.cluster.x-k8s.io/quick-start-cluster created
kubeadmcontrolplanetemplate.controlplane.cluster.x-k8s.io/quick-start-control-plane created
dockermachinetemplate.infrastructure.cluster.x-k8s.io/quick-start-control-plane created
dockermachinetemplate.infrastructure.cluster.x-k8s.io/quick-start-default-worker-machinetemplate created
kubeadmconfigtemplate.bootstrap.cluster.x-k8s.io/quick-start-default-worker-bootstraptemplate created
configmap/cni-k8s-upgrade-and-conformance-viu2kk-crs-0 created
clusterresourceset.addons.cluster.x-k8s.io/k8s-upgrade-and-conformance-viu2kk-crs-0 created
kubeadmconfig.bootstrap.cluster.x-k8s.io/k8s-upgrade-and-conformance-viu2kk-mp-0-config created
kubeadmconfig.bootstrap.cluster.x-k8s.io/k8s-upgrade-and-conformance-viu2kk-mp-0-config-cgroupfs created
cluster.cluster.x-k8s.io/k8s-upgrade-and-conformance-viu2kk created
machinepool.cluster.x-k8s.io/k8s-upgrade-and-conformance-viu2kk-mp-0 created
dockermachinepool.infrastructure.cluster.x-k8s.io/k8s-upgrade-and-conformance-viu2kk-dmp-0 created
INFO: Waiting for the cluster infrastructure to be provisioned
STEP: Waiting for cluster to enter the provisioned phase
INFO: Waiting for control plane to be initialized
INFO: Waiting for the first control plane machine managed by k8s-upgrade-and-conformance-z2xsbw/k8s-upgrade-and-conformance-viu2kk-9mv29 to be provisioned
STEP: Waiting for one control plane node to exist
INFO: Waiting for control plane to be ready
INFO: Waiting for control plane k8s-upgrade-and-conformance-z2xsbw/k8s-upgrade-and-conformance-viu2kk-9mv29 to be ready (implies underlying nodes to be ready as well)
STEP: Waiting for the control plane to be ready
INFO: Waiting for the machine deployments to be provisioned
STEP: Waiting for the workload nodes to exist
INFO: Waiting for the machine pools to be provisioned
STEP: Waiting for the machine pool workload nodes
STEP: Upgrading the Cluster topology
INFO: Patching the new Kubernetes version to Cluster topology
INFO: Waiting for control-plane machines to have the upgraded Kubernetes version
STEP: Ensuring all control-plane machines have upgraded kubernetes version v1.23.15
INFO: Waiting for kube-proxy to have the upgraded Kubernetes version
STEP: Ensuring kube-proxy has the correct image
INFO: Waiting for CoreDNS to have the upgraded image tag
STEP: Ensuring CoreDNS has the correct image
INFO: Waiting for etcd to have the upgraded image tag
INFO: Waiting for Kubernetes versions of machines in MachineDeployment k8s-upgrade-and-conformance-z2xsbw/k8s-upgrade-and-conformance-viu2kk-md-0-c5nrs to be upgraded to v1.23.15
INFO: Ensuring all MachineDeployment Machines have upgraded kubernetes version v1.23.15
STEP: Upgrading the machinepool instances
INFO: Patching the new Kubernetes version to Machine Pool k8s-upgrade-and-conformance-z2xsbw/k8s-upgrade-and-conformance-viu2kk-mp-0
INFO: Waiting for Kubernetes versions of machines in MachinePool k8s-upgrade-and-conformance-z2xsbw/k8s-upgrade-and-conformance-viu2kk-mp-0 to be upgraded from v1.22.17 to v1.23.15
INFO: Ensuring all MachinePool Instances have upgraded kubernetes version v1.23.15
STEP: Waiting until nodes are ready
STEP: Running conformance tests
STEP: Running e2e test: dir=/home/prow/go/src/sigs.k8s.io/cluster-api/test/e2e, command=["-nodes=4" "-slowSpecThreshold=120" "/usr/local/bin/e2e.test" "--" "--num-nodes=4" "--kubeconfig=/tmp/kubeconfig" "--provider=skeleton" "--report-dir=/output" "--e2e-output-dir=/output/e2e-output" "--dump-logs-on-failure=false" "--report-prefix=kubetest." "-ginkgo.focus=\\[Conformance\\]" "-ginkgo.progress=true" "-ginkgo.skip=\\[Serial\\]" "-ginkgo.slowSpecThreshold=120" "-ginkgo.trace=true" "-ginkgo.v=true" "-disable-log-dump=true" "-ginkgo.flakeAttempts=3"]
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1673276546 - Will randomize all specs
Will run 7052 specs
Running in parallel across 4 nodes
Jan 9 15:02:30.032: INFO: >>> kubeConfig: /tmp/kubeconfig
Jan 9 15:02:30.033: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Jan 9 15:02:30.048: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Jan 9 15:02:30.089: INFO: 20 / 20 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Jan 9 15:02:30.089: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
Jan 9 15:02:30.089: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Jan 9 15:02:30.095: INFO: 5 / 5 pods ready in namespace 'kube-system' in daemonset 'kindnet' (0 seconds elapsed)
Jan 9 15:02:30.095: INFO: 5 / 5 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Jan 9 15:02:30.095: INFO: e2e test version: v1.23.15
Jan 9 15:02:30.097: INFO: kube-apiserver version: v1.23.15
Jan 9 15:02:30.098: INFO: >>> kubeConfig: /tmp/kubeconfig
Jan 9 15:02:30.103: INFO: Cluster IP family: ipv4
------------------------------
Jan 9 15:02:30.116: INFO: >>> kubeConfig: /tmp/kubeconfig
Jan 9 15:02:30.132: INFO: Cluster IP family: ipv4
------------------------------
Jan 9 15:02:30.140: INFO: >>> kubeConfig: /tmp/kubeconfig
Jan 9 15:02:30.155: INFO: Cluster IP family: ipv4
------------------------------
Jan 9 15:02:30.144: INFO: >>> kubeConfig: /tmp/kubeconfig
Jan 9 15:02:30.162: INFO: Cluster IP family: ipv4
------------------------------
[BeforeEach] [sig-api-machinery] server version
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 9 15:02:30.202: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename server-version
W0109 15:02:30.243961 15 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Jan 9 15:02:30.244: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should find the server version [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Request ServerVersion
STEP: Confirm major version
Jan 9 15:02:30.258: INFO: Major version: 1
STEP: Confirm minor version
Jan 9 15:02:30.258: INFO: cleanMinorVersion: 23
Jan 9 15:02:30.258: INFO: Minor version: 23
[AfterEach] [sig-api-machinery] server version
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 9 15:02:30.258: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "server-version-5375" for this suite.
•
------------------------------
{"msg":"PASSED [sig-api-machinery] server version should find the server version [Conformance]","total":-1,"completed":1,"skipped":17,"failed":0}
------------------------------
[BeforeEach] [sig-node] Pods
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 9 15:02:30.164: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename pods
W0109 15:02:30.204662 16 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Jan 9 15:02:30.204: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-node] Pods
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:189
[It] should get a host IP [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: creating pod
Jan 9 15:02:30.232: INFO: The status of Pod pod-hostip-35f963b2-03b0-4fe6-ab66-fcd7cc5319ef is Pending, waiting for it to be Running (with Ready = true)
Jan 9 15:02:32.240: INFO: The status of Pod pod-hostip-35f963b2-03b0-4fe6-ab66-fcd7cc5319ef is Running (Ready = true)
Jan 9 15:02:32.249: INFO: Pod pod-hostip-35f963b2-03b0-4fe6-ab66-fcd7cc5319ef has hostIP: 172.18.0.5
[AfterEach] [sig-node] Pods
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 9 15:02:32.249: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-7944" for this suite.
•
------------------------------
{"msg":"PASSED [sig-node] Pods should get a host IP [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":11,"failed":0}
------------------------------
[BeforeEach] [sig-node] PodTemplates
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 9 15:02:32.286: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename podtemplate
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should run the lifecycle of PodTemplates [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[AfterEach] [sig-node] PodTemplates
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 9 15:02:32.389: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "podtemplate-4355" for this suite.
•
------------------------------
{"msg":"PASSED [sig-node] PodTemplates should run the lifecycle of PodTemplates [Conformance]","total":-1,"completed":2,"skipped":23,"failed":0}
------------------------------
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 9 15:02:30.292: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
Jan 9 15:02:31.043: INFO: role binding webhook-auth-reader already exists
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Jan 9 15:02:31.066: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Jan 9 15:02:33.091: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.January, 9, 15, 2, 31, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 9, 15, 2, 31, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.January, 9, 15, 2, 31, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 9, 15, 2, 31, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6c69dbd86b\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 9 15:02:35.097: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.January, 9, 15, 2, 31, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 9, 15, 2, 31, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.January, 9, 15, 2, 31, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 9, 15, 2, 31, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6c69dbd86b\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jan 9 15:02:38.107: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should deny crd creation [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Registering the crd webhook via the AdmissionRegistration API
STEP: Creating a custom resource definition that should be denied by the webhook
Jan 9 15:02:38.291: INFO: >>> kubeConfig: /tmp/kubeconfig
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 9 15:02:38.430: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-7044" for this suite.
STEP: Destroying namespace "webhook-7044-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
•
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","total":-1,"completed":2,"skipped":21,"failed":0}
------------------------------
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 9 15:02:30.194: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename crd-publish-openapi
W0109 15:02:30.224970 18 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Jan 9 15:02:30.225: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
�[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace �[1mSTEP�[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [It] works for CRD with validation schema [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 Jan 9 15:02:30.240: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: client-side validation (kubectl create and apply) allows request with known and required properties Jan 9 15:02:32.926: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=crd-publish-openapi-8292 --namespace=crd-publish-openapi-8292 create -f -' Jan 9 15:02:34.313: INFO: stderr: "" Jan 9 15:02:34.313: INFO: stdout: "e2e-test-crd-publish-openapi-2662-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n" Jan 9 15:02:34.313: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=crd-publish-openapi-8292 --namespace=crd-publish-openapi-8292 delete e2e-test-crd-publish-openapi-2662-crds test-foo' Jan 9 15:02:34.434: INFO: stderr: "" Jan 9 15:02:34.434: INFO: stdout: "e2e-test-crd-publish-openapi-2662-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n" Jan 9 15:02:34.434: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=crd-publish-openapi-8292 --namespace=crd-publish-openapi-8292 apply -f -' Jan 9 15:02:34.690: INFO: stderr: "" Jan 9 15:02:34.690: INFO: stdout: "e2e-test-crd-publish-openapi-2662-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n" Jan 9 15:02:34.690: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=crd-publish-openapi-8292 --namespace=crd-publish-openapi-8292 delete e2e-test-crd-publish-openapi-2662-crds test-foo' Jan 9 15:02:34.793: INFO: stderr: "" Jan 9 15:02:34.793: INFO: stdout: "e2e-test-crd-publish-openapi-2662-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n" �[1mSTEP�[0m: client-side validation (kubectl create and apply) rejects request with value outside defined enum values Jan 9 15:02:34.793: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=crd-publish-openapi-8292 --namespace=crd-publish-openapi-8292 create -f -' Jan 9 15:02:35.009: INFO: rc: 1 �[1mSTEP�[0m: client-side validation (kubectl create and apply) rejects request with unknown properties when disallowed by the schema Jan 9 15:02:35.010: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=crd-publish-openapi-8292 --namespace=crd-publish-openapi-8292 create -f -' Jan 9 15:02:35.222: INFO: rc: 1 Jan 9 15:02:35.222: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=crd-publish-openapi-8292 --namespace=crd-publish-openapi-8292 apply -f -' Jan 9 15:02:35.469: INFO: rc: 1 �[1mSTEP�[0m: client-side validation (kubectl create and apply) rejects request without required properties Jan 9 15:02:35.469: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=crd-publish-openapi-8292 --namespace=crd-publish-openapi-8292 create -f -' Jan 9 15:02:35.707: INFO: rc: 1 Jan 9 15:02:35.707: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=crd-publish-openapi-8292 --namespace=crd-publish-openapi-8292 apply -f -' Jan 9 15:02:35.915: INFO: rc: 1 �[1mSTEP�[0m: kubectl explain works to explain CR properties Jan 9 15:02:35.916: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=crd-publish-openapi-8292 explain 
e2e-test-crd-publish-openapi-2662-crds' Jan 9 15:02:36.131: INFO: stderr: "" Jan 9 15:02:36.131: INFO: stdout: "KIND: e2e-test-crd-publish-openapi-2662-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nDESCRIPTION:\n Foo CRD for Testing\n\nFIELDS:\n apiVersion\t<string>\n APIVersion defines the versioned schema of this representation of an\n object. Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t<string>\n Kind is a string value representing the REST resource this object\n represents. Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t<Object>\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t<Object>\n Specification of Foo\n\n status\t<Object>\n Status of Foo\n\n" �[1mSTEP�[0m: kubectl explain works to explain CR properties recursively Jan 9 15:02:36.131: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=crd-publish-openapi-8292 explain e2e-test-crd-publish-openapi-2662-crds.metadata' Jan 9 15:02:36.319: INFO: stderr: "" Jan 9 15:02:36.319: INFO: stdout: "KIND: e2e-test-crd-publish-openapi-2662-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: metadata <Object>\n\nDESCRIPTION:\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n ObjectMeta is metadata that all persisted resources must have, which\n includes all objects users must create.\n\nFIELDS:\n annotations\t<map[string]string>\n Annotations is an unstructured key value map stored with a resource that\n may be set by external tools to store and retrieve arbitrary metadata. They\n are not queryable and should be preserved when modifying objects. More\n info: http://kubernetes.io/docs/user-guide/annotations\n\n clusterName\t<string>\n The name of the cluster which the object belongs to. This is used to\n distinguish resources with same name and namespace in different clusters.\n This field is not set anywhere right now and apiserver is going to ignore\n it if set in create or update request.\n\n creationTimestamp\t<string>\n CreationTimestamp is a timestamp representing the server time when this\n object was created. It is not guaranteed to be set in happens-before order\n across separate operations. Clients may not set this value. It is\n represented in RFC3339 form and is in UTC.\n\n Populated by the system. Read-only. Null for lists. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n deletionGracePeriodSeconds\t<integer>\n Number of seconds allowed for this object to gracefully terminate before it\n will be removed from the system. Only set when deletionTimestamp is also\n set. May only be shortened. Read-only.\n\n deletionTimestamp\t<string>\n DeletionTimestamp is RFC 3339 date and time at which this resource will be\n deleted. This field is set by the server when a graceful deletion is\n requested by the user, and is not directly settable by a client. 
The\n resource is expected to be deleted (no longer visible from resource lists,\n and not reachable by name) after the time in this field, once the\n finalizers list is empty. As long as the finalizers list contains items,\n deletion is blocked. Once the deletionTimestamp is set, this value may not\n be unset or be set further into the future, although it may be shortened or\n the resource may be deleted prior to this time. For example, a user may\n request that a pod is deleted in 30 seconds. The Kubelet will react by\n sending a graceful termination signal to the containers in the pod. After\n that 30 seconds, the Kubelet will send a hard termination signal (SIGKILL)\n to the container and after cleanup, remove the pod from the API. In the\n presence of network partitions, this object may still exist after this\n timestamp, until an administrator or automated process can determine the\n resource is fully terminated. If not set, graceful deletion of the object\n has not been requested.\n\n Populated by the system when a graceful deletion is requested. Read-only.\n More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n finalizers\t<[]string>\n Must be empty before the object is deleted from the registry. Each entry is\n an identifier for the responsible component that will remove the entry from\n the list. If the deletionTimestamp of the object is non-nil, entries in\n this list can only be removed. Finalizers may be processed and removed in\n any order. Order is NOT enforced because it introduces significant risk of\n stuck finalizers. finalizers is a shared field, any actor with permission\n can reorder it. If the finalizer list is processed in order, then this can\n lead to a situation in which the component responsible for the first\n finalizer in the list is waiting for a signal (field value, external\n system, or other) produced by a component responsible for a finalizer later\n in the list, resulting in a deadlock. Without enforced ordering finalizers\n are free to order amongst themselves and are not vulnerable to ordering\n changes in the list.\n\n generateName\t<string>\n GenerateName is an optional prefix, used by the server, to generate a\n unique name ONLY IF the Name field has not been provided. If this field is\n used, the name returned to the client will be different than the name\n passed. This value will also be combined with a unique suffix. The provided\n value has the same validation rules as the Name field, and may be truncated\n by the length of the suffix required to make the value unique on the\n server.\n\n If this field is specified and the generated name exists, the server will\n NOT return a 409 - instead, it will either return 201 Created or 500 with\n Reason ServerTimeout indicating a unique name could not be found in the\n time allotted, and the client should retry (optionally after the time\n indicated in the Retry-After header).\n\n Applied only if Name is not specified. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#idempotency\n\n generation\t<integer>\n A sequence number representing a specific generation of the desired state.\n Populated by the system. Read-only.\n\n labels\t<map[string]string>\n Map of string keys and values that can be used to organize and categorize\n (scope and select) objects. May match selectors of replication controllers\n and services. 
More info: http://kubernetes.io/docs/user-guide/labels\n\n managedFields\t<[]Object>\n ManagedFields maps workflow-id and version to the set of fields that are\n managed by that workflow. This is mostly for internal housekeeping, and\n users typically shouldn't need to set or understand this field. A workflow\n can be the user's name, a controller's name, or the name of a specific\n apply path like \"ci-cd\". The set of fields is always in the version that\n the workflow used when modifying the object.\n\n name\t<string>\n Name must be unique within a namespace. Is required when creating\n resources, although some resources may allow a client to request the\n generation of an appropriate name automatically. Name is primarily intended\n for creation idempotence and configuration definition. Cannot be updated.\n More info: http://kubernetes.io/docs/user-guide/identifiers#names\n\n namespace\t<string>\n Namespace defines the space within which each name must be unique. An empty\n namespace is equivalent to the \"default\" namespace, but \"default\" is the\n canonical representation. Not all objects are required to be scoped to a\n namespace - the value of this field for those objects will be empty.\n\n Must be a DNS_LABEL. Cannot be updated. More info:\n http://kubernetes.io/docs/user-guide/namespaces\n\n ownerReferences\t<[]Object>\n List of objects depended by this object. If ALL objects in the list have\n been deleted, this object will be garbage collected. If this object is\n managed by a controller, then an entry in this list will point to this\n controller, with the controller field set to true. There cannot be more\n than one managing controller.\n\n resourceVersion\t<string>\n An opaque value that represents the internal version of this object that\n can be used by clients to determine when objects have changed. May be used\n for optimistic concurrency, change detection, and the watch operation on a\n resource or set of resources. Clients must treat these values as opaque and\n passed unmodified back to the server. They may only be valid for a\n particular resource or set of resources.\n\n Populated by the system. Read-only. Value must be treated as opaque by\n clients and . More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency\n\n selfLink\t<string>\n SelfLink is a URL representing this object. Populated by the system.\n Read-only.\n\n DEPRECATED Kubernetes will stop propagating this field in 1.20 release and\n the field is planned to be removed in 1.21 release.\n\n uid\t<string>\n UID is the unique in time and space value for this object. It is typically\n generated by the server on successful creation of a resource and is not\n allowed to change on PUT operations.\n\n Populated by the system. Read-only. 
More info:\n http://kubernetes.io/docs/user-guide/identifiers#uids\n\n" Jan 9 15:02:36.319: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=crd-publish-openapi-8292 explain e2e-test-crd-publish-openapi-2662-crds.spec' Jan 9 15:02:36.500: INFO: stderr: "" Jan 9 15:02:36.501: INFO: stdout: "KIND: e2e-test-crd-publish-openapi-2662-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: spec <Object>\n\nDESCRIPTION:\n Specification of Foo\n\nFIELDS:\n bars\t<[]Object>\n List of Bars and their specs.\n\n" Jan 9 15:02:36.501: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=crd-publish-openapi-8292 explain e2e-test-crd-publish-openapi-2662-crds.spec.bars' Jan 9 15:02:36.684: INFO: stderr: "" Jan 9 15:02:36.684: INFO: stdout: "KIND: e2e-test-crd-publish-openapi-2662-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: bars <[]Object>\n\nDESCRIPTION:\n List of Bars and their specs.\n\nFIELDS:\n age\t<string>\n Age of Bar.\n\n bazs\t<[]string>\n List of Bazs.\n\n feeling\t<string>\n Whether Bar is feeling great.\n\n name\t<string> -required-\n Name of Bar.\n\n" �[1mSTEP�[0m: kubectl explain works to return error when explain is called on property that doesn't exist Jan 9 15:02:36.684: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=crd-publish-openapi-8292 explain e2e-test-crd-publish-openapi-2662-crds.spec.bars2' Jan 9 15:02:36.845: INFO: rc: 1 [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 9 15:02:40.802: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "crd-publish-openapi-8292" for this suite. �[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance]","total":-1,"completed":1,"skipped":9,"failed":0} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-node] Container Lifecycle Hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Jan 9 15:02:32.478: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename container-lifecycle-hook �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace �[1mSTEP�[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/lifecycle_hook.go:53 �[1mSTEP�[0m: create the container to handle the HTTPGet hook request. 
Jan 9 15:02:32.549: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) Jan 9 15:02:34.556: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) Jan 9 15:02:36.560: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) Jan 9 15:02:38.559: INFO: The status of Pod pod-handle-http-request is Running (Ready = true) [It] should execute poststart exec hook properly [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 �[1mSTEP�[0m: create the pod with lifecycle hook Jan 9 15:02:38.576: INFO: The status of Pod pod-with-poststart-exec-hook is Pending, waiting for it to be Running (with Ready = true) Jan 9 15:02:40.581: INFO: The status of Pod pod-with-poststart-exec-hook is Running (Ready = true) �[1mSTEP�[0m: check poststart hook �[1mSTEP�[0m: delete the pod with lifecycle hook Jan 9 15:02:40.620: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jan 9 15:02:40.626: INFO: Pod pod-with-poststart-exec-hook still exists Jan 9 15:02:42.627: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jan 9 15:02:42.631: INFO: Pod pod-with-poststart-exec-hook no longer exists [AfterEach] [sig-node] Container Lifecycle Hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 9 15:02:42.631: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "container-lifecycle-hook-6058" for this suite. �[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":49,"failed":0} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Jan 9 15:02:38.521: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename emptydir �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace �[1mSTEP�[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 �[1mSTEP�[0m: Creating a pod to test emptydir 0666 on tmpfs Jan 9 15:02:38.572: INFO: Waiting up to 5m0s for pod "pod-90c0ed02-c165-4f74-b212-ac570bc1477d" in namespace "emptydir-4746" to be "Succeeded or Failed" Jan 9 15:02:38.580: INFO: Pod "pod-90c0ed02-c165-4f74-b212-ac570bc1477d": Phase="Pending", Reason="", readiness=false. Elapsed: 7.522255ms Jan 9 15:02:40.583: INFO: Pod "pod-90c0ed02-c165-4f74-b212-ac570bc1477d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010982957s Jan 9 15:02:42.587: INFO: Pod "pod-90c0ed02-c165-4f74-b212-ac570bc1477d": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.01522726s Jan 9 15:02:44.592: INFO: Pod "pod-90c0ed02-c165-4f74-b212-ac570bc1477d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.020163883s �[1mSTEP�[0m: Saw pod success Jan 9 15:02:44.592: INFO: Pod "pod-90c0ed02-c165-4f74-b212-ac570bc1477d" satisfied condition "Succeeded or Failed" Jan 9 15:02:44.595: INFO: Trying to get logs from node k8s-upgrade-and-conformance-viu2kk-md-0-c5nrs-77b68b5644-2bgsv pod pod-90c0ed02-c165-4f74-b212-ac570bc1477d container test-container: <nil> �[1mSTEP�[0m: delete the pod Jan 9 15:02:44.621: INFO: Waiting for pod pod-90c0ed02-c165-4f74-b212-ac570bc1477d to disappear Jan 9 15:02:44.624: INFO: Pod pod-90c0ed02-c165-4f74-b212-ac570bc1477d no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 9 15:02:44.624: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "emptydir-4746" for this suite. �[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":33,"failed":0} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Jan 9 15:02:40.838: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename projected �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace �[1mSTEP�[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41 [It] should update labels on modification [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 �[1mSTEP�[0m: Creating the pod Jan 9 15:02:40.876: INFO: The status of Pod labelsupdate6ce12dca-3e14-46e0-aab0-9248b04a49fa is Pending, waiting for it to be Running (with Ready = true) Jan 9 15:02:42.881: INFO: The status of Pod labelsupdate6ce12dca-3e14-46e0-aab0-9248b04a49fa is Running (Ready = true) Jan 9 15:02:43.419: INFO: Successfully updated pod "labelsupdate6ce12dca-3e14-46e0-aab0-9248b04a49fa" [AfterEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 9 15:02:45.443: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "projected-4643" for this suite. 
�[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":14,"failed":0} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Jan 9 15:02:42.691: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename emptydir �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace �[1mSTEP�[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 �[1mSTEP�[0m: Creating a pod to test emptydir 0644 on tmpfs Jan 9 15:02:42.726: INFO: Waiting up to 5m0s for pod "pod-55233d16-855c-44d1-b400-d82f4ef88d63" in namespace "emptydir-183" to be "Succeeded or Failed" Jan 9 15:02:42.734: INFO: Pod "pod-55233d16-855c-44d1-b400-d82f4ef88d63": Phase="Pending", Reason="", readiness=false. Elapsed: 7.940214ms Jan 9 15:02:44.739: INFO: Pod "pod-55233d16-855c-44d1-b400-d82f4ef88d63": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012292814s Jan 9 15:02:46.743: INFO: Pod "pod-55233d16-855c-44d1-b400-d82f4ef88d63": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.016828139s �[1mSTEP�[0m: Saw pod success Jan 9 15:02:46.743: INFO: Pod "pod-55233d16-855c-44d1-b400-d82f4ef88d63" satisfied condition "Succeeded or Failed" Jan 9 15:02:46.747: INFO: Trying to get logs from node k8s-upgrade-and-conformance-viu2kk-md-0-c5nrs-77b68b5644-t8jmv pod pod-55233d16-855c-44d1-b400-d82f4ef88d63 container test-container: <nil> �[1mSTEP�[0m: delete the pod Jan 9 15:02:46.767: INFO: Waiting for pod pod-55233d16-855c-44d1-b400-d82f4ef88d63 to disappear Jan 9 15:02:46.772: INFO: Pod pod-55233d16-855c-44d1-b400-d82f4ef88d63 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 9 15:02:46.773: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "emptydir-183" for this suite. 
�[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":4,"skipped":75,"failed":0} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Jan 9 15:02:45.635: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename pods �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace �[1mSTEP�[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:189 [It] should be submitted and removed [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 �[1mSTEP�[0m: creating the pod �[1mSTEP�[0m: setting up watch �[1mSTEP�[0m: submitting the pod to kubernetes Jan 9 15:02:45.675: INFO: observed the pod list �[1mSTEP�[0m: verifying the pod is in kubernetes �[1mSTEP�[0m: verifying pod creation was observed �[1mSTEP�[0m: deleting the pod gracefully �[1mSTEP�[0m: verifying pod deletion was observed [AfterEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 9 15:02:51.416: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "pods-811" for this suite. 
�[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-node] Pods should be submitted and removed [NodeConformance] [Conformance]","total":-1,"completed":3,"skipped":103,"failed":0} �[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Jan 9 15:02:46.906: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename crd-webhook �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace �[1mSTEP�[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:126 �[1mSTEP�[0m: Setting up server cert �[1mSTEP�[0m: Create role binding to let cr conversion webhook read extension-apiserver-authentication �[1mSTEP�[0m: Deploying the custom resource conversion webhook pod �[1mSTEP�[0m: Wait for the deployment to be ready Jan 9 15:02:47.628: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set �[1mSTEP�[0m: Deploying the webhook service �[1mSTEP�[0m: Verifying the service has paired with the endpoint Jan 9 15:02:50.656: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1 [It] should be able to convert a non homogeneous list of CRs [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 Jan 9 15:02:50.664: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Creating a v1 custom resource �[1mSTEP�[0m: Create a v2 custom resource �[1mSTEP�[0m: List CRs in v1 �[1mSTEP�[0m: List CRs in v2 [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 9 15:02:53.885: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "crd-webhook-7230" for this suite. 
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:137 �[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","total":-1,"completed":5,"skipped":144,"failed":0} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Jan 9 15:02:44.659: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename gc �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace �[1mSTEP�[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should delete pods created by rc when not orphaning [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 �[1mSTEP�[0m: create the rc �[1mSTEP�[0m: delete the rc �[1mSTEP�[0m: wait for all pods to be garbage collected �[1mSTEP�[0m: Gathering metrics Jan 9 15:02:54.763: INFO: The status of Pod kube-controller-manager-k8s-upgrade-and-conformance-viu2kk-9mv29-nxqn7 is Running (Ready = true) Jan 9 15:02:54.824: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 9 15:02:54.824: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "gc-7246" for this suite. 
�[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance]","total":-1,"completed":4,"skipped":52,"failed":0} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-api-machinery] Watchers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Jan 9 15:02:54.905: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename watch �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace �[1mSTEP�[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should observe add, update, and delete watch notifications on configmaps [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 �[1mSTEP�[0m: creating a watch on configmaps with label A �[1mSTEP�[0m: creating a watch on configmaps with label B �[1mSTEP�[0m: creating a watch on configmaps with label A or B �[1mSTEP�[0m: creating a configmap with label A and ensuring the correct watchers observe the notification Jan 9 15:02:54.943: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-943 e18074a3-6dc6-4725-b9d1-3096233c4d2d 2427 0 2023-01-09 15:02:54 +0000 UTC <nil> <nil> map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2023-01-09 15:02:54 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} Jan 9 15:02:54.943: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-943 e18074a3-6dc6-4725-b9d1-3096233c4d2d 2427 0 2023-01-09 15:02:54 +0000 UTC <nil> <nil> map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2023-01-09 15:02:54 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} �[1mSTEP�[0m: modifying configmap A and ensuring the correct watchers observe the notification Jan 9 15:02:54.952: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-943 e18074a3-6dc6-4725-b9d1-3096233c4d2d 2428 0 2023-01-09 15:02:54 +0000 UTC <nil> <nil> map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2023-01-09 15:02:54 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} Jan 9 15:02:54.952: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-943 e18074a3-6dc6-4725-b9d1-3096233c4d2d 2428 0 2023-01-09 15:02:54 +0000 UTC <nil> <nil> map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2023-01-09 15:02:54 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} 
}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} �[1mSTEP�[0m: modifying configmap A again and ensuring the correct watchers observe the notification Jan 9 15:02:54.959: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-943 e18074a3-6dc6-4725-b9d1-3096233c4d2d 2429 0 2023-01-09 15:02:54 +0000 UTC <nil> <nil> map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2023-01-09 15:02:54 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} Jan 9 15:02:54.959: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-943 e18074a3-6dc6-4725-b9d1-3096233c4d2d 2429 0 2023-01-09 15:02:54 +0000 UTC <nil> <nil> map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2023-01-09 15:02:54 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} �[1mSTEP�[0m: deleting configmap A and ensuring the correct watchers observe the notification Jan 9 15:02:54.964: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-943 e18074a3-6dc6-4725-b9d1-3096233c4d2d 2430 0 2023-01-09 15:02:54 +0000 UTC <nil> <nil> map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2023-01-09 15:02:54 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} Jan 9 15:02:54.965: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-943 e18074a3-6dc6-4725-b9d1-3096233c4d2d 2430 0 2023-01-09 15:02:54 +0000 UTC <nil> <nil> map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2023-01-09 15:02:54 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} �[1mSTEP�[0m: creating a configmap with label B and ensuring the correct watchers observe the notification Jan 9 15:02:54.969: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-943 813aaa8c-96a6-453d-9bfa-d68c713c26e3 2431 0 2023-01-09 15:02:54 +0000 UTC <nil> <nil> map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2023-01-09 15:02:54 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} Jan 9 15:02:54.969: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-943 813aaa8c-96a6-453d-9bfa-d68c713c26e3 2431 0 2023-01-09 15:02:54 +0000 UTC <nil> <nil> map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2023-01-09 15:02:54 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} �[1mSTEP�[0m: deleting configmap B and ensuring the correct watchers observe the notification Jan 9 15:03:04.977: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-943 813aaa8c-96a6-453d-9bfa-d68c713c26e3 2533 0 2023-01-09 15:02:54 +0000 UTC <nil> <nil> 
map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2023-01-09 15:02:54 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} Jan 9 15:03:04.977: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-943 813aaa8c-96a6-453d-9bfa-d68c713c26e3 2533 0 2023-01-09 15:02:54 +0000 UTC <nil> <nil> map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2023-01-09 15:02:54 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 9 15:03:14.977: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "watch-943" for this suite. �[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance]","total":-1,"completed":5,"skipped":95,"failed":0} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Jan 9 15:02:54.024: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename resourcequota �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace �[1mSTEP�[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a configMap. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 �[1mSTEP�[0m: Counting existing ResourceQuota �[1mSTEP�[0m: Creating a ResourceQuota �[1mSTEP�[0m: Ensuring resource quota status is calculated �[1mSTEP�[0m: Creating a ConfigMap �[1mSTEP�[0m: Ensuring resource quota status captures configMap creation �[1mSTEP�[0m: Deleting a ConfigMap �[1mSTEP�[0m: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 9 15:03:22.106: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "resourcequota-4485" for this suite. 
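
For reference, the watch behaviour exercised by the Watchers test above can be reproduced with a short client-go program. This is an illustrative sketch only, reusing the namespace, label, and kubeconfig path that appear in the log; it is not code from the e2e suite.

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Client built from the same kubeconfig the conformance run uses.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// Watch only ConfigMaps carrying the test's "label A".
	w, err := cs.CoreV1().ConfigMaps("watch-943").Watch(context.TODO(), metav1.ListOptions{
		LabelSelector: "watch-this-configmap=multiple-watchers-A",
	})
	if err != nil {
		panic(err)
	}
	defer w.Stop()

	// Each event corresponds to one "Got : ADDED/MODIFIED/DELETED" line in the log above.
	for ev := range w.ResultChan() {
		fmt.Printf("Got : %s %v\n", ev.Type, ev.Object)
	}
}
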
•
------------------------------
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 9 15:03:15.065: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:244
[It] should create services for rc [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: creating Agnhost RC
Jan 9 15:03:15.098: INFO: namespace kubectl-8439
Jan 9 15:03:15.098: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-8439 create -f -'
Jan 9 15:03:16.049: INFO: stderr: ""
Jan 9 15:03:16.049: INFO: stdout: "replicationcontroller/agnhost-primary created\n"
STEP: Waiting for Agnhost primary to start.
Jan 9 15:03:17.053: INFO: Selector matched 1 pods for map[app:agnhost]
Jan 9 15:03:17.053: INFO: Found 0 / 1
Jan 9 15:03:18.054: INFO: Selector matched 1 pods for map[app:agnhost]
Jan 9 15:03:18.054: INFO: Found 1 / 1
Jan 9 15:03:18.054: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1
Jan 9 15:03:18.058: INFO: Selector matched 1 pods for map[app:agnhost]
Jan 9 15:03:18.058: INFO: ForEach: Found 1 pods from the filter. Now looping through them.
Jan 9 15:03:18.058: INFO: wait on agnhost-primary startup in kubectl-8439
Jan 9 15:03:18.058: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-8439 logs agnhost-primary-jmzxq agnhost-primary'
Jan 9 15:03:18.137: INFO: stderr: ""
Jan 9 15:03:18.137: INFO: stdout: "Paused\n"
STEP: exposing RC
Jan 9 15:03:18.137: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-8439 expose rc agnhost-primary --name=rm2 --port=1234 --target-port=6379'
Jan 9 15:03:18.230: INFO: stderr: ""
Jan 9 15:03:18.230: INFO: stdout: "service/rm2 exposed\n"
Jan 9 15:03:18.239: INFO: Service rm2 in namespace kubectl-8439 found.
STEP: exposing service
Jan 9 15:03:20.247: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-8439 expose service rm2 --name=rm3 --port=2345 --target-port=6379'
Jan 9 15:03:20.345: INFO: stderr: ""
Jan 9 15:03:20.345: INFO: stdout: "service/rm3 exposed\n"
Jan 9 15:03:20.351: INFO: Service rm3 in namespace kubectl-8439 found.
[AfterEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 9 15:03:22.358: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-8439" for this suite.
•
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl expose should create services for rc [Conformance]","total":-1,"completed":6,"skipped":142,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap.
[Conformance]","total":-1,"completed":6,"skipped":168,"failed":0} [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Jan 9 15:03:22.116: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename custom-resource-definition �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace �[1mSTEP�[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [It] getting/updating/patching custom resource definition status sub-resource works [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 Jan 9 15:03:22.138: INFO: >>> kubeConfig: /tmp/kubeconfig [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 9 15:03:22.683: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "custom-resource-definition-7071" for this suite. �[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works [Conformance]","total":-1,"completed":7,"skipped":168,"failed":0} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Jan 9 15:03:22.398: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename gc �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace �[1mSTEP�[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should delete RS created by deployment when not orphaning [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 �[1mSTEP�[0m: create the deployment �[1mSTEP�[0m: Wait for the Deployment to create new ReplicaSet �[1mSTEP�[0m: delete the deployment �[1mSTEP�[0m: wait for all rs to be garbage collected �[1mSTEP�[0m: expected 0 rs, got 1 rs �[1mSTEP�[0m: expected 0 pods, got 2 pods �[1mSTEP�[0m: Gathering metrics Jan 9 15:03:23.492: INFO: The status of Pod kube-controller-manager-k8s-upgrade-and-conformance-viu2kk-9mv29-nxqn7 is Running (Ready = true) Jan 9 15:03:23.562: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For 
namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 9 15:03:23.562: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "gc-6210" for this suite. �[32m•�[0m �[90m------------------------------�[0m [BeforeEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Jan 9 15:03:22.723: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename configmap �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace �[1mSTEP�[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 �[1mSTEP�[0m: Creating configMap with name configmap-test-volume-2e4ae2a6-93ea-4d47-bb4d-f17a90a6537c �[1mSTEP�[0m: Creating a pod to test consume configMaps Jan 9 15:03:22.768: INFO: Waiting up to 5m0s for pod "pod-configmaps-a8892644-24e3-461b-a3a1-dad16d5e624d" in namespace "configmap-3418" to be "Succeeded or Failed" Jan 9 15:03:22.773: INFO: Pod "pod-configmaps-a8892644-24e3-461b-a3a1-dad16d5e624d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.363293ms Jan 9 15:03:24.784: INFO: Pod "pod-configmaps-a8892644-24e3-461b-a3a1-dad16d5e624d": Phase="Running", Reason="", readiness=false. Elapsed: 2.015251043s Jan 9 15:03:26.790: INFO: Pod "pod-configmaps-a8892644-24e3-461b-a3a1-dad16d5e624d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.021910983s �[1mSTEP�[0m: Saw pod success Jan 9 15:03:26.790: INFO: Pod "pod-configmaps-a8892644-24e3-461b-a3a1-dad16d5e624d" satisfied condition "Succeeded or Failed" Jan 9 15:03:26.795: INFO: Trying to get logs from node k8s-upgrade-and-conformance-viu2kk-md-0-c5nrs-77b68b5644-t8jmv pod pod-configmaps-a8892644-24e3-461b-a3a1-dad16d5e624d container configmap-volume-test: <nil> �[1mSTEP�[0m: delete the pod Jan 9 15:03:26.819: INFO: Waiting for pod pod-configmaps-a8892644-24e3-461b-a3a1-dad16d5e624d to disappear Jan 9 15:03:26.822: INFO: Pod pod-configmaps-a8892644-24e3-461b-a3a1-dad16d5e624d no longer exists [AfterEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 9 15:03:26.822: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "configmap-3418" for this suite. 
�[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":-1,"completed":8,"skipped":177,"failed":0} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Jan 9 15:02:30.203: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename webhook W0109 15:02:30.238289 20 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+ Jan 9 15:02:30.238: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled. �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace �[1mSTEP�[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 �[1mSTEP�[0m: Setting up server cert �[1mSTEP�[0m: Create role binding to let webhook read extension-apiserver-authentication �[1mSTEP�[0m: Deploying the webhook pod �[1mSTEP�[0m: Wait for the deployment to be ready Jan 9 15:02:30.822: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Jan 9 15:02:32.840: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.January, 9, 15, 2, 30, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 9, 15, 2, 30, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.January, 9, 15, 2, 30, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 9, 15, 2, 30, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6c69dbd86b\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 9 15:02:34.848: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.January, 9, 15, 2, 30, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 9, 15, 2, 30, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.January, 9, 15, 2, 30, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 9, 15, 2, 30, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6c69dbd86b\" is progressing."}}, CollisionCount:(*int32)(nil)} �[1mSTEP�[0m: Deploying the webhook service �[1mSTEP�[0m: Verifying the service has paired with the endpoint Jan 9 15:02:37.856: INFO: Waiting for 
amount of service:e2e-test-webhook endpoints to be 1
[It] should be able to deny custom resource creation, update and deletion [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
Jan 9 15:02:37.859: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Registering the custom resource webhook via the AdmissionRegistration API
Jan 9 15:02:48.436: INFO: Waiting for webhook configuration to be ready...
Jan 9 15:02:58.550: INFO: Waiting for webhook configuration to be ready...
Jan 9 15:03:08.651: INFO: Waiting for webhook configuration to be ready...
Jan 9 15:03:18.751: INFO: Waiting for webhook configuration to be ready...
Jan 9 15:03:28.765: INFO: Waiting for webhook configuration to be ready...
Jan 9 15:03:28.765: FAIL: waiting for webhook configuration to be ready
Unexpected error:
    <*errors.errorString | 0xc0002482c0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred

Full Stack Trace
k8s.io/kubernetes/test/e2e/apimachinery.registerWebhookForCustomResource(0xc0006cd080, {0xc001f3bb40, 0xc}, 0xc003f86d70, 0xc0026c0e80, 0x0)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:1727 +0x7ea
k8s.io/kubernetes/test/e2e/apimachinery.glob..func27.6()
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:224 +0xad
k8s.io/kubernetes/test/e2e.RunE2ETests(0x24c66d7)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:133 +0x697
k8s.io/kubernetes/test/e2e.TestE2E(0x0)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:136 +0x19
testing.tRunner(0xc00010ba00, 0x735e880)
	/usr/local/go/src/testing/testing.go:1259 +0x102
created by testing.(*T).Run
	/usr/local/go/src/testing/testing.go:1306 +0x35a
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 9 15:03:29.283: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-5817" for this suite.
STEP: Destroying namespace "webhook-5817-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• Failure [59.175 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to deny custom resource creation, update and deletion [Conformance] [It]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633

  Jan 9 15:03:28.765: waiting for webhook configuration to be ready
  Unexpected error:
      <*errors.errorString | 0xc0002482c0>: {
          s: "timed out waiting for the condition",
      }
      timed out waiting for the condition
  occurred

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:1727
------------------------------
[BeforeEach] [sig-apps] DisruptionController
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 9 15:03:26.884: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename disruption
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-apps] DisruptionController
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/disruption.go:69
[It] should update/patch PodDisruptionBudget status [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Waiting for the pdb to be processed
STEP: Updating PodDisruptionBudget status
STEP: Waiting for all pods to be running
Jan 9 15:03:28.994: INFO: running pods: 0 < 1
STEP: locating a running pod
STEP: Waiting for the pdb to be processed
STEP: Patching PodDisruptionBudget status
STEP: Waiting for the pdb to be processed
[AfterEach] [sig-apps] DisruptionController
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 9 15:03:31.037: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "disruption-3959" for this suite.
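
The webhook failure above is the test giving up after repeated "Waiting for webhook configuration to be ready..." probes in registerWebhookForCustomResource. As a rough illustration of what that registration involves (this is not the e2e helper itself; the webhook name, path, rules, and timeout below are assumptions), a ValidatingWebhookConfiguration is created and then polled until it becomes visible:

package main

import (
	"context"
	"time"

	admissionregistrationv1 "k8s.io/api/admissionregistration/v1"
	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	fail := admissionregistrationv1.Fail
	none := admissionregistrationv1.SideEffectClassNone
	path := "/crd" // must match the handler path served by the webhook pod

	whCfg := &admissionregistrationv1.ValidatingWebhookConfiguration{
		ObjectMeta: metav1.ObjectMeta{Name: "deny-crd-webhook-example"},
		Webhooks: []admissionregistrationv1.ValidatingWebhook{{
			Name: "deny-crd.example.com",
			ClientConfig: admissionregistrationv1.WebhookClientConfig{
				// Service name and namespace reuse values from the log; CABundle would
				// hold the CA that signed the webhook server certificate.
				Service: &admissionregistrationv1.ServiceReference{
					Namespace: "webhook-5817", Name: "e2e-test-webhook", Path: &path,
				},
			},
			Rules: []admissionregistrationv1.RuleWithOperations{{
				Operations: []admissionregistrationv1.OperationType{admissionregistrationv1.Create},
				Rule: admissionregistrationv1.Rule{
					APIGroups: []string{"*"}, APIVersions: []string{"*"}, Resources: []string{"*"},
				},
			}},
			FailurePolicy:           &fail,
			SideEffects:             &none,
			AdmissionReviewVersions: []string{"v1"},
		}},
	}
	if _, err := cs.AdmissionregistrationV1().ValidatingWebhookConfigurations().
		Create(context.TODO(), whCfg, metav1.CreateOptions{}); err != nil && !apierrors.IsAlreadyExists(err) {
		panic(err)
	}

	// The real helper goes further: it creates marker objects and retries until the
	// API server actually routes them to the webhook, which is the loop that timed out above.
	_ = wait.PollImmediate(time.Second, 2*time.Minute, func() (bool, error) {
		_, err := cs.AdmissionregistrationV1().ValidatingWebhookConfigurations().
			Get(context.TODO(), "deny-crd-webhook-example", metav1.GetOptions{})
		return err == nil, nil
	})
}
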
�[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-apps] DisruptionController should update/patch PodDisruptionBudget status [Conformance]","total":-1,"completed":9,"skipped":192,"failed":0} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Jan 9 15:03:31.088: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename deployment �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace �[1mSTEP�[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:89 [It] deployment should delete old replica sets [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 Jan 9 15:03:31.121: INFO: Pod name cleanup-pod: Found 0 pods out of 1 Jan 9 15:03:36.129: INFO: Pod name cleanup-pod: Found 1 pods out of 1 �[1mSTEP�[0m: ensuring each pod is running Jan 9 15:03:36.129: INFO: Creating deployment test-cleanup-deployment �[1mSTEP�[0m: Waiting for deployment test-cleanup-deployment history to be cleaned up [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:83 Jan 9 15:03:36.151: INFO: Deployment "test-cleanup-deployment": &Deployment{ObjectMeta:{test-cleanup-deployment deployment-6505 9c2a913f-73f8-4e70-9129-5ec881b73c5e 2931 1 2023-01-09 15:03:36 +0000 UTC <nil> <nil> map[name:cleanup-pod] map[] [] [] [{e2e.test Update apps/v1 2023-01-09 15:03:36 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} }]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[name:cleanup-pod] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.39 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc004e32c18 <nil> ClusterFirst map[] <nil> false false false <nil> 
&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] <nil> nil [] <nil> <nil> <nil> map[] [] <nil> nil}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:0,Replicas:0,UpdatedReplicas:0,AvailableReplicas:0,UnavailableReplicas:0,Conditions:[]DeploymentCondition{},ReadyReplicas:0,CollisionCount:nil,},} Jan 9 15:03:36.157: INFO: New ReplicaSet "test-cleanup-deployment-5dbdbf94dc" of Deployment "test-cleanup-deployment": &ReplicaSet{ObjectMeta:{test-cleanup-deployment-5dbdbf94dc deployment-6505 1b1043be-4b41-4810-98c9-b08f2e823224 2935 1 2023-01-09 15:03:36 +0000 UTC <nil> <nil> map[name:cleanup-pod pod-template-hash:5dbdbf94dc] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-cleanup-deployment 9c2a913f-73f8-4e70-9129-5ec881b73c5e 0xc004d54127 0xc004d54128}] [] [{kube-controller-manager Update apps/v1 2023-01-09 15:03:36 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"9c2a913f-73f8-4e70-9129-5ec881b73c5e\"}":{}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} }]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod-template-hash: 5dbdbf94dc,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[name:cleanup-pod pod-template-hash:5dbdbf94dc] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.39 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc004d541b8 <nil> ClusterFirst map[] <nil> false false false <nil> &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] <nil> nil [] <nil> <nil> <nil> map[] [] <nil> nil}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:0,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Jan 9 15:03:36.158: INFO: All old ReplicaSets of Deployment "test-cleanup-deployment": Jan 9 15:03:36.158: INFO: &ReplicaSet{ObjectMeta:{test-cleanup-controller deployment-6505 
2562ddf6-d4d5-4b14-9202-d0ed997d12a6 2933 1 2023-01-09 15:03:31 +0000 UTC <nil> <nil> map[name:cleanup-pod pod:httpd] map[] [{apps/v1 Deployment test-cleanup-deployment 9c2a913f-73f8-4e70-9129-5ec881b73c5e 0xc004c57fff 0xc004d54010}] [] [{e2e.test Update apps/v1 2023-01-09 15:03:31 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2023-01-09 15:03:32 +0000 UTC FieldsV1 {"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}} status} {kube-controller-manager Update apps/v1 2023-01-09 15:03:36 +0000 UTC FieldsV1 {"f:metadata":{"f:ownerReferences":{".":{},"k:{\"uid\":\"9c2a913f-73f8-4e70-9129-5ec881b73c5e\"}":{}}}} }]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[name:cleanup-pod pod:httpd] map[] [] [] []} {[] [] [{httpd k8s.gcr.io/e2e-test-images/httpd:2.4.38-2 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc004d540c8 <nil> ClusterFirst map[] <nil> false false false <nil> PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] <nil> nil [] <nil> <nil> <nil> map[] [] <nil> nil}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Jan 9 15:03:36.165: INFO: Pod "test-cleanup-controller-kjfql" is available: &Pod{ObjectMeta:{test-cleanup-controller-kjfql test-cleanup-controller- deployment-6505 b94e771c-1d7f-4fab-b33c-c618de1fe922 2880 0 2023-01-09 15:03:31 +0000 UTC <nil> <nil> map[name:cleanup-pod pod:httpd] map[] [{apps/v1 ReplicaSet test-cleanup-controller 2562ddf6-d4d5-4b14-9202-d0ed997d12a6 0xc004d5472f 0xc004d54740}] [] [{kube-controller-manager Update v1 2023-01-09 15:03:31 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"2562ddf6-d4d5-4b14-9202-d0ed997d12a6\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2023-01-09 15:03:32 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"192.168.1.12\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-vxl5n,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-vxl5n,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k8s-upgrade-and-conformance-viu2kk-md-0-c5nrs-77b68b5644-t8jmv,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,Readi
nessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-09 15:03:31 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-09 15:03:32 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-09 15:03:32 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-09 15:03:31 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.6,PodIP:192.168.1.12,StartTime:2023-01-09 15:03:31 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2023-01-09 15:03:31 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,ImageID:k8s.gcr.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3,ContainerID:containerd://63d08ffe87bd4adacb63f953a86211bee9be0c612f2cf49de1d3db987d66f3ac,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:192.168.1.12,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 9 15:03:36.165: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "deployment-6505" for this suite. 
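
The "deployment should delete old replica sets" test above depends on the Deployment's RevisionHistoryLimit being 0 (visible in the dump as RevisionHistoryLimit:*0), which makes the controller garbage-collect superseded ReplicaSets after a rollout. A minimal sketch of such a spec, reusing names from the log and showing only the cleanup-relevant fields (illustrative, not the test's generated object):

package main

import (
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func cleanupDeployment() *appsv1.Deployment {
	replicas := int32(1)
	historyLimit := int32(0) // keep no old ReplicaSets once the rollout completes
	labels := map[string]string{"name": "cleanup-pod"}
	return &appsv1.Deployment{
		ObjectMeta: metav1.ObjectMeta{Name: "test-cleanup-deployment"},
		Spec: appsv1.DeploymentSpec{
			Replicas:             &replicas,
			RevisionHistoryLimit: &historyLimit,
			Selector:             &metav1.LabelSelector{MatchLabels: labels},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:  "agnhost",
						Image: "k8s.gcr.io/e2e-test-images/agnhost:2.39",
					}},
				},
			},
		},
	}
}

func main() {
	fmt.Println(cleanupDeployment().Name)
}
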
�[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-apps] Deployment deployment should delete old replica sets [Conformance]","total":-1,"completed":10,"skipped":216,"failed":0} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m {"msg":"FAILED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","total":-1,"completed":0,"skipped":41,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]"]} [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Jan 9 15:03:29.384: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename webhook �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace �[1mSTEP�[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 �[1mSTEP�[0m: Setting up server cert �[1mSTEP�[0m: Create role binding to let webhook read extension-apiserver-authentication �[1mSTEP�[0m: Deploying the webhook pod �[1mSTEP�[0m: Wait for the deployment to be ready Jan 9 15:03:30.193: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set �[1mSTEP�[0m: Deploying the webhook service �[1mSTEP�[0m: Verifying the service has paired with the endpoint Jan 9 15:03:33.219: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny custom resource creation, update and deletion [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 Jan 9 15:03:33.223: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Registering the custom resource webhook via the AdmissionRegistration API �[1mSTEP�[0m: Creating a custom resource that should be denied by the webhook �[1mSTEP�[0m: Creating a custom resource whose deletion would be denied by the webhook �[1mSTEP�[0m: Updating the custom resource with disallowed data should be denied �[1mSTEP�[0m: Deleting the custom resource should be denied �[1mSTEP�[0m: Remove the offending key and value from the custom resource data �[1mSTEP�[0m: Deleting the updated custom resource should be successful [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 9 15:03:36.327: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "webhook-1515" for this suite. �[1mSTEP�[0m: Destroying namespace "webhook-1515-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 �[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","total":-1,"completed":1,"skipped":41,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]"]} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-node] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Jan 9 15:03:36.435: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename kubelet-test �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace �[1mSTEP�[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-node] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/kubelet.go:38 [It] should print the output to logs [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 Jan 9 15:03:36.526: INFO: The status of Pod busybox-scheduling-2c5845eb-95c0-41c7-95d1-6340be02959e is Pending, waiting for it to be Running (with Ready = true) Jan 9 15:03:38.532: INFO: The status of Pod busybox-scheduling-2c5845eb-95c0-41c7-95d1-6340be02959e is Pending, waiting for it to be Running (with Ready = true) Jan 9 15:03:40.532: INFO: The status of Pod busybox-scheduling-2c5845eb-95c0-41c7-95d1-6340be02959e is Running (Ready = true) [AfterEach] [sig-node] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 9 15:03:40.557: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "kubelet-test-8705" for this suite. 
�[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-node] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":45,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]"]} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Jan 9 15:03:36.220: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename pods �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace �[1mSTEP�[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:189 [It] should delete a collection of pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 �[1mSTEP�[0m: Create set of pods Jan 9 15:03:36.254: INFO: created test-pod-1 Jan 9 15:03:36.264: INFO: created test-pod-2 Jan 9 15:03:36.269: INFO: created test-pod-3 �[1mSTEP�[0m: waiting for all 3 pods to be running Jan 9 15:03:36.269: INFO: Waiting up to 5m0s for all pods (need at least 3) in namespace 'pods-5570' to be running and ready Jan 9 15:03:36.315: INFO: The status of Pod test-pod-1 is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Jan 9 15:03:36.315: INFO: The status of Pod test-pod-2 is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Jan 9 15:03:36.315: INFO: The status of Pod test-pod-3 is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed Jan 9 15:03:36.315: INFO: 0 / 3 pods in namespace 'pods-5570' are running and ready (0 seconds elapsed) Jan 9 15:03:36.315: INFO: expected 0 pod replicas in namespace 'pods-5570', 0 are Running and Ready. 
Jan 9 15:03:36.315: INFO: POD NODE PHASE GRACE CONDITIONS Jan 9 15:03:36.315: INFO: test-pod-1 k8s-upgrade-and-conformance-viu2kk-worker-qb9e9b Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-09 15:03:36 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-09 15:03:36 +0000 UTC ContainersNotReady containers with unready status: [token-test]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-09 15:03:36 +0000 UTC ContainersNotReady containers with unready status: [token-test]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-09 15:03:36 +0000 UTC }] Jan 9 15:03:36.315: INFO: test-pod-2 k8s-upgrade-and-conformance-viu2kk-md-0-c5nrs-77b68b5644-2bgsv Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-09 15:03:36 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-09 15:03:36 +0000 UTC ContainersNotReady containers with unready status: [token-test]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-09 15:03:36 +0000 UTC ContainersNotReady containers with unready status: [token-test]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-09 15:03:36 +0000 UTC }] Jan 9 15:03:36.315: INFO: test-pod-3 k8s-upgrade-and-conformance-viu2kk-worker-1r6syi Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-09 15:03:36 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-09 15:03:36 +0000 UTC ContainersNotReady containers with unready status: [token-test]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-09 15:03:36 +0000 UTC ContainersNotReady containers with unready status: [token-test]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-09 15:03:36 +0000 UTC }] Jan 9 15:03:36.315: INFO: Jan 9 15:03:38.326: INFO: 3 / 3 pods in namespace 'pods-5570' are running and ready (2 seconds elapsed) Jan 9 15:03:38.326: INFO: expected 0 pod replicas in namespace 'pods-5570', 0 are Running and Ready. �[1mSTEP�[0m: waiting for all pods to be deleted Jan 9 15:03:38.343: INFO: Pod quantity 3 is different from expected quantity 0 Jan 9 15:03:39.349: INFO: Pod quantity 3 is different from expected quantity 0 Jan 9 15:03:40.348: INFO: Pod quantity 2 is different from expected quantity 0 [AfterEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 9 15:03:41.349: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "pods-5570" for this suite. 
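
The bulk deletion driven by the test above maps to a single DeleteCollection call, after which the test polls until the pod count reaches zero ("Pod quantity N is different from expected quantity 0"). A minimal client-go sketch, reusing the namespace and kubeconfig path from the log (illustrative only):

package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// Delete every pod in the namespace in one request; scope with a
	// LabelSelector in ListOptions if only a subset should go.
	if err := cs.CoreV1().Pods("pods-5570").DeleteCollection(
		context.TODO(), metav1.DeleteOptions{}, metav1.ListOptions{},
	); err != nil {
		panic(err)
	}
}
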
�[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-node] Pods should delete a collection of pods [Conformance]","total":-1,"completed":11,"skipped":234,"failed":0} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Jan 9 15:03:40.612: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename dns �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace �[1mSTEP�[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should provide DNS for the cluster [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 �[1mSTEP�[0m: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;sleep 1; done �[1mSTEP�[0m: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;sleep 1; done �[1mSTEP�[0m: creating a pod to probe DNS �[1mSTEP�[0m: submitting the pod to kubernetes �[1mSTEP�[0m: retrieving the pod �[1mSTEP�[0m: looking for the results for each expected name from probers Jan 9 15:03:48.681: INFO: DNS probes using dns-7612/dns-test-c1251b31-db57-4677-bdea-c5d42a1f5b7f succeeded �[1mSTEP�[0m: deleting the pod [AfterEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 9 15:03:48.691: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "dns-7612" for this suite. 
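
The dig loops above verify that cluster DNS resolves kubernetes.default.svc.cluster.local over UDP and TCP from inside the probe pod. A loose Go equivalent of the same check, assuming it runs in-cluster (illustrative only; it does not distinguish UDP from TCP):

package main

import (
	"fmt"
	"net"
)

func main() {
	// Resolve the API server's service name through the cluster DNS;
	// success corresponds to the "OK" markers the probe containers write.
	addrs, err := net.LookupHost("kubernetes.default.svc.cluster.local")
	if err != nil {
		panic(err)
	}
	fmt.Println("OK", addrs)
}
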
�[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-network] DNS should provide DNS for the cluster [Conformance]","total":-1,"completed":3,"skipped":64,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]"]} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Jan 9 15:03:48.713: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename deployment �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace �[1mSTEP�[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:89 [It] deployment should support proportional scaling [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 Jan 9 15:03:48.734: INFO: Creating deployment "webserver-deployment" Jan 9 15:03:48.740: INFO: Waiting for observed generation 1 Jan 9 15:03:50.765: INFO: Waiting for all required pods to come up Jan 9 15:03:50.781: INFO: Pod name httpd: Found 10 pods out of 10 �[1mSTEP�[0m: ensuring each pod is running Jan 9 15:03:56.808: INFO: Waiting for deployment "webserver-deployment" to complete Jan 9 15:03:56.818: INFO: Updating deployment "webserver-deployment" with a non-existent image Jan 9 15:03:56.831: INFO: Updating deployment webserver-deployment Jan 9 15:03:56.831: INFO: Waiting for observed generation 2 Jan 9 15:03:58.840: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8 Jan 9 15:03:58.843: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8 Jan 9 15:03:58.846: INFO: Waiting for the first rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas Jan 9 15:03:58.854: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0 Jan 9 15:03:58.854: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5 Jan 9 15:03:58.857: INFO: Waiting for the second rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas Jan 9 15:03:58.863: INFO: Verifying that deployment "webserver-deployment" has minimum required number of available replicas Jan 9 15:03:58.863: INFO: Scaling up the deployment "webserver-deployment" from 10 to 30 Jan 9 15:03:58.872: INFO: Updating deployment webserver-deployment Jan 9 15:03:58.872: INFO: Waiting for the replicasets of deployment "webserver-deployment" to have desired number of replicas Jan 9 15:03:58.879: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20 Jan 9 15:03:58.882: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13 [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:83 Jan 9 15:03:58.904: INFO: Deployment "webserver-deployment": &Deployment{ObjectMeta:{webserver-deployment deployment-3961 39996364-d939-41d1-b984-bf5df788bc04 3489 3 2023-01-09 15:03:48 +0000 UTC <nil> <nil> map[name:httpd] map[deployment.kubernetes.io/revision:2] [] [] 
[{e2e.test Update apps/v1 2023-01-09 15:03:48 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2023-01-09 15:03:56 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:unavailableReplicas":{},"f:updatedReplicas":{}}} status}]},Spec:DeploymentSpec{Replicas:*30,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[name:httpd] map[] [] [] []} {[] [] [{httpd webserver:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc004b99238 <nil> ClusterFirst map[] <nil> false false false <nil> &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] <nil> nil [] <nil> <nil> <nil> map[] [] <nil> nil}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:13,UpdatedReplicas:5,AvailableReplicas:8,UnavailableReplicas:5,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "webserver-deployment-566f96c878" is progressing.,LastUpdateTime:2023-01-09 15:03:56 +0000 UTC,LastTransitionTime:2023-01-09 15:03:48 +0000 UTC,},DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2023-01-09 15:03:58 +0000 UTC,LastTransitionTime:2023-01-09 15:03:58 +0000 UTC,},},ReadyReplicas:8,CollisionCount:nil,},} Jan 9 15:03:58.924: INFO: New ReplicaSet "webserver-deployment-566f96c878" of Deployment "webserver-deployment": &ReplicaSet{ObjectMeta:{webserver-deployment-566f96c878 deployment-3961 2695fd81-ef22-436f-83c9-4ec57ac8bfd3 3483 3 2023-01-09 15:03:56 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:566f96c878] map[deployment.kubernetes.io/desired-replicas:30 
deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment webserver-deployment 39996364-d939-41d1-b984-bf5df788bc04 0xc004549e07 0xc004549e08}] [] [{kube-controller-manager Update apps/v1 2023-01-09 15:03:56 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"39996364-d939-41d1-b984-bf5df788bc04\"}":{}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2023-01-09 15:03:56 +0000 UTC FieldsV1 {"f:status":{"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:replicas":{}}} status}]},Spec:ReplicaSetSpec{Replicas:*13,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 566f96c878,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:566f96c878] map[] [] [] []} {[] [] [{httpd webserver:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc004549ea8 <nil> ClusterFirst map[] <nil> false false false <nil> &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] <nil> nil [] <nil> <nil> <nil> map[] [] <nil> nil}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:5,FullyLabeledReplicas:5,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Jan 9 15:03:58.924: INFO: All old ReplicaSets of Deployment "webserver-deployment": Jan 9 15:03:58.924: INFO: &ReplicaSet{ObjectMeta:{webserver-deployment-5d9fdcc779 deployment-3961 c7be4a63-665e-44cd-9a2b-a4589814b048 3480 3 2023-01-09 15:03:48 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:5d9fdcc779] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment webserver-deployment 39996364-d939-41d1-b984-bf5df788bc04 0xc004549f07 0xc004549f08}] [] [{kube-controller-manager Update apps/v1 2023-01-09 15:03:48 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"39996364-d939-41d1-b984-bf5df788bc04\"}":{}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2023-01-09 15:03:50 +0000 UTC FieldsV1 {"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}} status}]},Spec:ReplicaSetSpec{Replicas:*20,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 5d9fdcc779,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:5d9fdcc779] map[] [] [] []} {[] [] [{httpd k8s.gcr.io/e2e-test-images/httpd:2.4.38-2 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc004549f98 <nil> ClusterFirst map[] <nil> false false false <nil> &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] <nil> nil [] <nil> <nil> <nil> map[] [] <nil> nil}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:8,FullyLabeledReplicas:8,ObservedGeneration:2,ReadyReplicas:8,AvailableReplicas:8,Conditions:[]ReplicaSetCondition{},},} Jan 9 15:03:58.966: INFO: Pod "webserver-deployment-566f96c878-49rxb" is not available: &Pod{ObjectMeta:{webserver-deployment-566f96c878-49rxb webserver-deployment-566f96c878- deployment-3961 671777fa-d5aa-4ae9-acaf-b6d9b7b0b878 3416 0 2023-01-09 15:03:56 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:566f96c878] map[] [{apps/v1 ReplicaSet webserver-deployment-566f96c878 2695fd81-ef22-436f-83c9-4ec57ac8bfd3 0xc004ce6420 0xc004ce6421}] [] [{kube-controller-manager Update v1 2023-01-09 15:03:56 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"2695fd81-ef22-436f-83c9-4ec57ac8bfd3\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2023-01-09 15:03:56 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-bd5jl,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-bd5jl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k8s-upgrade-and-conformance-viu2kk-worker-1r6syi,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effec
t:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-09 15:03:56 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-09 15:03:56 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-09 15:03:56 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-09 15:03:56 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.5,PodIP:,StartTime:2023-01-09 15:03:56 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 9 15:03:58.966: INFO: Pod "webserver-deployment-566f96c878-72pkz" is not available: &Pod{ObjectMeta:{webserver-deployment-566f96c878-72pkz webserver-deployment-566f96c878- deployment-3961 2b4b9797-a18a-4a51-aa70-f7c16fabc426 3512 0 2023-01-09 15:03:58 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:566f96c878] map[] [{apps/v1 ReplicaSet webserver-deployment-566f96c878 2695fd81-ef22-436f-83c9-4ec57ac8bfd3 0xc004ce65f0 0xc004ce65f1}] [] [{kube-controller-manager Update v1 2023-01-09 15:03:58 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"2695fd81-ef22-436f-83c9-4ec57ac8bfd3\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} 
}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-drfnz,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-drfnz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k8s-upgrade-and-conformance-viu2kk-md-0-c5nrs-77b68b5644-2bgsv,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondi
tion{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-09 15:03:58 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:<nil>,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 9 15:03:58.967: INFO: Pod "webserver-deployment-566f96c878-7ln9x" is not available: &Pod{ObjectMeta:{webserver-deployment-566f96c878-7ln9x webserver-deployment-566f96c878- deployment-3961 1c699711-4086-4c23-b627-41a9e59c3191 3466 0 2023-01-09 15:03:56 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:566f96c878] map[] [{apps/v1 ReplicaSet webserver-deployment-566f96c878 2695fd81-ef22-436f-83c9-4ec57ac8bfd3 0xc004ce6750 0xc004ce6751}] [] [{kube-controller-manager Update v1 2023-01-09 15:03:56 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"2695fd81-ef22-436f-83c9-4ec57ac8bfd3\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2023-01-09 15:03:58 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"192.168.0.12\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-dmgnm,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-dmgnm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k8s-upgrade-and-conformance-viu2kk-md-0-c5nrs-77b68b5644-2bgsv,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{Po
dCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-09 15:03:56 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-09 15:03:56 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-09 15:03:56 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-09 15:03:56 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.4,PodIP:192.168.0.12,StartTime:2023-01-09 15:03:56 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ErrImagePull,Message:rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/library/webserver:404": failed to resolve reference "docker.io/library/webserver:404": pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:192.168.0.12,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 9 15:03:58.967: INFO: Pod "webserver-deployment-566f96c878-7nqxl" is not available: &Pod{ObjectMeta:{webserver-deployment-566f96c878-7nqxl webserver-deployment-566f96c878- deployment-3961 fb31ac90-d3ef-45c6-9a15-59a2a296181b 3507 0 2023-01-09 15:03:58 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:566f96c878] map[] [{apps/v1 ReplicaSet webserver-deployment-566f96c878 2695fd81-ef22-436f-83c9-4ec57ac8bfd3 0xc004ce6950 0xc004ce6951}] [] [{kube-controller-manager Update v1 2023-01-09 15:03:58 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"2695fd81-ef22-436f-83c9-4ec57ac8bfd3\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} 
}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-wjtbj,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-wjtbj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k8s-upgrade-and-conformance-viu2kk-worker-1r6syi,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodS
cheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-09 15:03:58 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:<nil>,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 9 15:03:58.967: INFO: Pod "webserver-deployment-566f96c878-9qg5g" is not available: &Pod{ObjectMeta:{webserver-deployment-566f96c878-9qg5g webserver-deployment-566f96c878- deployment-3961 bc94c797-bf33-47ff-8abe-8fa213b9b7a4 3475 0 2023-01-09 15:03:56 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:566f96c878] map[] [{apps/v1 ReplicaSet webserver-deployment-566f96c878 2695fd81-ef22-436f-83c9-4ec57ac8bfd3 0xc004ce6ab0 0xc004ce6ab1}] [] [{kube-controller-manager Update v1 2023-01-09 15:03:56 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"2695fd81-ef22-436f-83c9-4ec57ac8bfd3\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2023-01-09 15:03:58 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"192.168.1.17\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-phvl6,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-phvl6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k8s-upgrade-and-conformance-viu2kk-md-0-c5nrs-77b68b5644-t8jmv,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{Po
dCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-09 15:03:56 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-09 15:03:56 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-09 15:03:56 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-09 15:03:56 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.6,PodIP:192.168.1.17,StartTime:2023-01-09 15:03:56 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ErrImagePull,Message:rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/library/webserver:404": failed to resolve reference "docker.io/library/webserver:404": pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:192.168.1.17,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 9 15:03:58.968: INFO: Pod "webserver-deployment-566f96c878-ctrnj" is not available: &Pod{ObjectMeta:{webserver-deployment-566f96c878-ctrnj webserver-deployment-566f96c878- deployment-3961 7508eb0c-4723-40cd-b377-0e6dac54f80c 3478 0 2023-01-09 15:03:56 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:566f96c878] map[] [{apps/v1 ReplicaSet webserver-deployment-566f96c878 2695fd81-ef22-436f-83c9-4ec57ac8bfd3 0xc004ce6cb0 0xc004ce6cb1}] [] [{kube-controller-manager Update v1 2023-01-09 15:03:56 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"2695fd81-ef22-436f-83c9-4ec57ac8bfd3\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2023-01-09 15:03:58 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"192.168.2.14\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-zbvxk,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-zbvxk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k8s-upgrade-and-conformance-viu2kk-worker-qb9e9b,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Typ
e:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-09 15:03:56 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-09 15:03:56 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-09 15:03:56 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-09 15:03:56 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.7,PodIP:192.168.2.14,StartTime:2023-01-09 15:03:56 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ErrImagePull,Message:rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/library/webserver:404": failed to resolve reference "docker.io/library/webserver:404": pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:192.168.2.14,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 9 15:03:58.968: INFO: Pod "webserver-deployment-566f96c878-fj8c2" is not available: &Pod{ObjectMeta:{webserver-deployment-566f96c878-fj8c2 webserver-deployment-566f96c878- deployment-3961 36daeb40-92ba-4c2b-9b29-e415215eca30 3516 0 2023-01-09 15:03:58 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:566f96c878] map[] [{apps/v1 ReplicaSet webserver-deployment-566f96c878 2695fd81-ef22-436f-83c9-4ec57ac8bfd3 0xc004ce6eb0 0xc004ce6eb1}] [] [{kube-controller-manager Update v1 2023-01-09 15:03:58 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"2695fd81-ef22-436f-83c9-4ec57ac8bfd3\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} 
}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-cq9t5,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-cq9t5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:<nil>,ContainerStatuses:[]
ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 9 15:03:58.968: INFO: Pod "webserver-deployment-566f96c878-ljj7k" is not available: &Pod{ObjectMeta:{webserver-deployment-566f96c878-ljj7k webserver-deployment-566f96c878- deployment-3961 d05d7115-a464-42da-8189-a3685ba16c46 3495 0 2023-01-09 15:03:58 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:566f96c878] map[] [{apps/v1 ReplicaSet webserver-deployment-566f96c878 2695fd81-ef22-436f-83c9-4ec57ac8bfd3 0xc004ce6ff7 0xc004ce6ff8}] [] [{kube-controller-manager Update v1 2023-01-09 15:03:58 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"2695fd81-ef22-436f-83c9-4ec57ac8bfd3\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} }]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-zd9wf,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-zd9wf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:
default,DeprecatedServiceAccount:default,NodeName:k8s-upgrade-and-conformance-viu2kk-worker-qb9e9b,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-09 15:03:58 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:<nil>,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 9 15:03:58.969: INFO: Pod "webserver-deployment-566f96c878-xpqnh" is not available: &Pod{ObjectMeta:{webserver-deployment-566f96c878-xpqnh webserver-deployment-566f96c878- deployment-3961 8de2dbdf-094e-4bc8-aa8a-00dd005a5a86 3471 0 2023-01-09 15:03:56 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:566f96c878] map[] [{apps/v1 ReplicaSet webserver-deployment-566f96c878 2695fd81-ef22-436f-83c9-4ec57ac8bfd3 0xc004ce7160 0xc004ce7161}] [] [{kube-controller-manager Update v1 2023-01-09 15:03:56 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"2695fd81-ef22-436f-83c9-4ec57ac8bfd3\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2023-01-09 15:03:58 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"192.168.1.18\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-bbq5d,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-bbq5d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k8s-upgrade-and-conformance-viu2kk-md-0-c5nrs-77b68b5644-t8jmv,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{Po
dCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-09 15:03:56 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-09 15:03:56 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-09 15:03:56 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-09 15:03:56 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.6,PodIP:192.168.1.18,StartTime:2023-01-09 15:03:56 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ErrImagePull,Message:rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/library/webserver:404": failed to resolve reference "docker.io/library/webserver:404": pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:192.168.1.18,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 9 15:03:58.969: INFO: Pod "webserver-deployment-566f96c878-xrp7f" is not available: &Pod{ObjectMeta:{webserver-deployment-566f96c878-xrp7f webserver-deployment-566f96c878- deployment-3961 24e7139a-d2b1-4ec3-8abf-7963a8ce5ece 3511 0 2023-01-09 15:03:58 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:566f96c878] map[] [{apps/v1 ReplicaSet webserver-deployment-566f96c878 2695fd81-ef22-436f-83c9-4ec57ac8bfd3 0xc004ce7360 0xc004ce7361}] [] [{kube-controller-manager Update v1 2023-01-09 15:03:58 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"2695fd81-ef22-436f-83c9-4ec57ac8bfd3\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} 
}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-w9nsl,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-w9nsl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:<nil>,ContainerStatuses:[]
ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 9 15:03:58.969: INFO: Pod "webserver-deployment-5d9fdcc779-266mq" is available: &Pod{ObjectMeta:{webserver-deployment-5d9fdcc779-266mq webserver-deployment-5d9fdcc779- deployment-3961 d00b4d3f-b49f-4a6e-bb46-cf7b21bbf638 3342 0 2023-01-09 15:03:48 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:5d9fdcc779] map[] [{apps/v1 ReplicaSet webserver-deployment-5d9fdcc779 c7be4a63-665e-44cd-9a2b-a4589814b048 0xc004ce74a7 0xc004ce74a8}] [] [{kube-controller-manager Update v1 2023-01-09 15:03:48 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c7be4a63-665e-44cd-9a2b-a4589814b048\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2023-01-09 15:03:54 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"192.168.2.11\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-9472s,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-9472s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k8s-upgrade-and-conformance-viu2kk-worker-qb9e9b,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Running,Conditions:[]P
odCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-09 15:03:48 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-09 15:03:54 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-09 15:03:54 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-09 15:03:48 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.7,PodIP:192.168.2.11,StartTime:2023-01-09 15:03:48 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2023-01-09 15:03:54 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,ImageID:k8s.gcr.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3,ContainerID:containerd://852e907aee2a947eb2cdd49479c81c99f39b81dd7b3bab3399ebba52938d5ce4,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:192.168.2.11,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 9 15:03:58.969: INFO: Pod "webserver-deployment-5d9fdcc779-57zwh" is not available: &Pod{ObjectMeta:{webserver-deployment-5d9fdcc779-57zwh webserver-deployment-5d9fdcc779- deployment-3961 55f05a7b-1ed8-4d65-b938-9b3ee6ec51b4 3510 0 2023-01-09 15:03:58 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:5d9fdcc779] map[] [{apps/v1 ReplicaSet webserver-deployment-5d9fdcc779 c7be4a63-665e-44cd-9a2b-a4589814b048 0xc004ce7680 0xc004ce7681}] [] [{kube-controller-manager Update v1 2023-01-09 15:03:58 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c7be4a63-665e-44cd-9a2b-a4589814b048\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} 
}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-t7hwz,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-t7hwz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k8s-upgrade-and-conformance-viu2kk-worker-1r6syi,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCond
ition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-09 15:03:58 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:<nil>,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 9 15:03:58.970: INFO: Pod "webserver-deployment-5d9fdcc779-5grkp" is available: &Pod{ObjectMeta:{webserver-deployment-5d9fdcc779-5grkp webserver-deployment-5d9fdcc779- deployment-3961 4122f393-f9c2-4ef7-88b6-7d89a5635f1c 3335 0 2023-01-09 15:03:48 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:5d9fdcc779] map[] [{apps/v1 ReplicaSet webserver-deployment-5d9fdcc779 c7be4a63-665e-44cd-9a2b-a4589814b048 0xc004ce77d0 0xc004ce77d1}] [] [{kube-controller-manager Update v1 2023-01-09 15:03:48 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c7be4a63-665e-44cd-9a2b-a4589814b048\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2023-01-09 15:03:54 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"192.168.0.10\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-b5nv6,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-b5nv6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k8s-upgrade-and-conformance-viu2kk-md-0-c5nrs-77b68b5644-2bgsv,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Running,
Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-09 15:03:48 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-09 15:03:54 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-09 15:03:54 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-09 15:03:48 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.4,PodIP:192.168.0.10,StartTime:2023-01-09 15:03:48 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2023-01-09 15:03:54 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,ImageID:k8s.gcr.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3,ContainerID:containerd://5dcf5b8d071113ee45b0e3f38ad9e467b535f1992a95907fc29619370537df7f,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:192.168.0.10,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 9 15:03:58.970: INFO: Pod "webserver-deployment-5d9fdcc779-8hs66" is not available: &Pod{ObjectMeta:{webserver-deployment-5d9fdcc779-8hs66 webserver-deployment-5d9fdcc779- deployment-3961 eae80cda-09ea-40d6-b9ee-278d742d448a 3514 0 2023-01-09 15:03:58 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:5d9fdcc779] map[] [{apps/v1 ReplicaSet webserver-deployment-5d9fdcc779 c7be4a63-665e-44cd-9a2b-a4589814b048 0xc004ce79a0 0xc004ce79a1}] [] [{kube-controller-manager Update v1 2023-01-09 15:03:58 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c7be4a63-665e-44cd-9a2b-a4589814b048\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} 
}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-5nszf,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-5nszf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTim
e:<nil>,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 9 15:03:58.970: INFO: Pod "webserver-deployment-5d9fdcc779-97r7k" is not available: &Pod{ObjectMeta:{webserver-deployment-5d9fdcc779-97r7k webserver-deployment-5d9fdcc779- deployment-3961 e577e06a-72fe-40ae-8fb6-e904c92cbe6f 3494 0 2023-01-09 15:03:58 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:5d9fdcc779] map[] [{apps/v1 ReplicaSet webserver-deployment-5d9fdcc779 c7be4a63-665e-44cd-9a2b-a4589814b048 0xc004ce7ad7 0xc004ce7ad8}] [] [{kube-controller-manager Update v1 2023-01-09 15:03:58 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c7be4a63-665e-44cd-9a2b-a4589814b048\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} }]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-v8tw2,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-v8tw2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFi
rst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k8s-upgrade-and-conformance-viu2kk-md-0-c5nrs-77b68b5644-t8jmv,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-09 15:03:58 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:<nil>,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 9 15:03:58.971: INFO: Pod "webserver-deployment-5d9fdcc779-9ln4h" is not available: &Pod{ObjectMeta:{webserver-deployment-5d9fdcc779-9ln4h webserver-deployment-5d9fdcc779- deployment-3961 7474b5ef-2a81-4d8a-b06f-e193d832d064 3517 0 2023-01-09 15:03:58 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:5d9fdcc779] map[] [{apps/v1 ReplicaSet webserver-deployment-5d9fdcc779 c7be4a63-665e-44cd-9a2b-a4589814b048 0xc004ce7c30 0xc004ce7c31}] [] [{kube-controller-manager Update v1 2023-01-09 15:03:58 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c7be4a63-665e-44cd-9a2b-a4589814b048\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} 
}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-snxtv,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-snxtv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTim
e:<nil>,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 9 15:03:58.971: INFO: Pod "webserver-deployment-5d9fdcc779-cpzns" is not available: &Pod{ObjectMeta:{webserver-deployment-5d9fdcc779-cpzns webserver-deployment-5d9fdcc779- deployment-3961 c22ee233-7c3e-4289-8195-c3bcff768972 3508 0 2023-01-09 15:03:58 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:5d9fdcc779] map[] [{apps/v1 ReplicaSet webserver-deployment-5d9fdcc779 c7be4a63-665e-44cd-9a2b-a4589814b048 0xc004ce7d67 0xc004ce7d68}] [] [{kube-controller-manager Update v1 2023-01-09 15:03:58 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c7be4a63-665e-44cd-9a2b-a4589814b048\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2023-01-09 15:03:58 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-rd9cq,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-rd9cq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k8s-upgrade-and-conformance-viu2kk-md-0-c5nrs-77b68b5644-t8jmv,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Pending,
Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-09 15:03:58 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-09 15:03:58 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-09 15:03:58 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-09 15:03:58 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.6,PodIP:,StartTime:2023-01-09 15:03:58 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 9 15:03:58.981: INFO: Pod "webserver-deployment-5d9fdcc779-fjnw9" is available: &Pod{ObjectMeta:{webserver-deployment-5d9fdcc779-fjnw9 webserver-deployment-5d9fdcc779- deployment-3961 f46839ad-96f8-4ad3-8ad4-c5bdf0ce4a4e 3381 0 2023-01-09 15:03:48 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:5d9fdcc779] map[] [{apps/v1 ReplicaSet webserver-deployment-5d9fdcc779 c7be4a63-665e-44cd-9a2b-a4589814b048 0xc004ce7f20 0xc004ce7f21}] [] [{kube-controller-manager Update v1 2023-01-09 15:03:48 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c7be4a63-665e-44cd-9a2b-a4589814b048\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2023-01-09 15:03:55 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"192.168.2.12\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-jrk5j,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-jrk5j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k8s-upgrade-and-conformance-viu2kk-worker-qb9e9b,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Running,Conditions:[]P
odCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-09 15:03:48 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-09 15:03:55 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-09 15:03:55 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-09 15:03:48 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.7,PodIP:192.168.2.12,StartTime:2023-01-09 15:03:48 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2023-01-09 15:03:54 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,ImageID:k8s.gcr.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3,ContainerID:containerd://604bdcf544155f9b4bf241ddc147e520c2aba80254a6700495c9a2231c516a73,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:192.168.2.12,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 9 15:03:58.981: INFO: Pod "webserver-deployment-5d9fdcc779-fn9v8" is not available: &Pod{ObjectMeta:{webserver-deployment-5d9fdcc779-fn9v8 webserver-deployment-5d9fdcc779- deployment-3961 7a4f26cc-756c-4cec-9fa2-51d9c2c86ead 3501 0 2023-01-09 15:03:58 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:5d9fdcc779] map[] [{apps/v1 ReplicaSet webserver-deployment-5d9fdcc779 c7be4a63-665e-44cd-9a2b-a4589814b048 0xc004e1c0f0 0xc004e1c0f1}] [] [{kube-controller-manager Update v1 2023-01-09 15:03:58 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c7be4a63-665e-44cd-9a2b-a4589814b048\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} 
}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-gccp6,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-gccp6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k8s-upgrade-and-conformance-viu2kk-worker-1r6syi,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCond
ition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-09 15:03:58 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:<nil>,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 9 15:03:58.982: INFO: Pod "webserver-deployment-5d9fdcc779-kt45t" is available: &Pod{ObjectMeta:{webserver-deployment-5d9fdcc779-kt45t webserver-deployment-5d9fdcc779- deployment-3961 42cb3a78-7acf-490f-b871-bc7eec524139 3360 0 2023-01-09 15:03:48 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:5d9fdcc779] map[] [{apps/v1 ReplicaSet webserver-deployment-5d9fdcc779 c7be4a63-665e-44cd-9a2b-a4589814b048 0xc004e1c240 0xc004e1c241}] [] [{kube-controller-manager Update v1 2023-01-09 15:03:48 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c7be4a63-665e-44cd-9a2b-a4589814b048\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2023-01-09 15:03:55 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"192.168.6.11\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-wwgdq,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-wwgdq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k8s-upgrade-and-conformance-viu2kk-worker-1r6syi,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Running,Conditions:[]P
odCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-09 15:03:48 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-09 15:03:55 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-09 15:03:55 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-09 15:03:48 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.5,PodIP:192.168.6.11,StartTime:2023-01-09 15:03:48 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2023-01-09 15:03:54 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,ImageID:k8s.gcr.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3,ContainerID:containerd://f7d911f2b91470a84028f3f8fcb5be2fae3604dc236ddd68495a55f546c10d22,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:192.168.6.11,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 9 15:03:58.982: INFO: Pod "webserver-deployment-5d9fdcc779-mrmp2" is available: &Pod{ObjectMeta:{webserver-deployment-5d9fdcc779-mrmp2 webserver-deployment-5d9fdcc779- deployment-3961 5f2cc1d8-fc7c-4bad-9711-e4aceddd5877 3358 0 2023-01-09 15:03:48 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:5d9fdcc779] map[] [{apps/v1 ReplicaSet webserver-deployment-5d9fdcc779 c7be4a63-665e-44cd-9a2b-a4589814b048 0xc004e1c410 0xc004e1c411}] [] [{kube-controller-manager Update v1 2023-01-09 15:03:48 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c7be4a63-665e-44cd-9a2b-a4589814b048\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2023-01-09 15:03:55 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"192.168.6.10\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-v2v7k,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-v2v7k,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k8s-upgrade-and-conformance-viu2kk-worker-1r6syi,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Running,Conditions:[]P
odCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-09 15:03:48 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-09 15:03:55 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-09 15:03:55 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-09 15:03:48 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.5,PodIP:192.168.6.10,StartTime:2023-01-09 15:03:48 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2023-01-09 15:03:54 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,ImageID:k8s.gcr.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3,ContainerID:containerd://3408f1a329fcde1dc284503c066358d4de860046299be21711712cb46be09aa8,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:192.168.6.10,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 9 15:03:58.982: INFO: Pod "webserver-deployment-5d9fdcc779-mzbdf" is not available: &Pod{ObjectMeta:{webserver-deployment-5d9fdcc779-mzbdf webserver-deployment-5d9fdcc779- deployment-3961 0d5e8ebc-5514-400a-83cc-6b491a800842 3513 0 2023-01-09 15:03:58 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:5d9fdcc779] map[] [{apps/v1 ReplicaSet webserver-deployment-5d9fdcc779 c7be4a63-665e-44cd-9a2b-a4589814b048 0xc004e1c5e0 0xc004e1c5e1}] [] [{kube-controller-manager Update v1 2023-01-09 15:03:58 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c7be4a63-665e-44cd-9a2b-a4589814b048\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} 
}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-thnrv,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-thnrv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k8s-upgrade-and-conformance-viu2kk-worker-qb9e9b,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCond
ition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-09 15:03:58 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:<nil>,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 9 15:03:58.982: INFO: Pod "webserver-deployment-5d9fdcc779-n2qrg" is not available: &Pod{ObjectMeta:{webserver-deployment-5d9fdcc779-n2qrg webserver-deployment-5d9fdcc779- deployment-3961 6d3126d3-f1a8-4ed3-8d06-986b9ecb8272 3515 0 2023-01-09 15:03:58 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:5d9fdcc779] map[] [{apps/v1 ReplicaSet webserver-deployment-5d9fdcc779 c7be4a63-665e-44cd-9a2b-a4589814b048 0xc004e1c730 0xc004e1c731}] [] [{kube-controller-manager Update v1 2023-01-09 15:03:58 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c7be4a63-665e-44cd-9a2b-a4589814b048\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} }]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-cw8lz,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-cw8lz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvF
rom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:<nil>,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 9 15:03:58.982: INFO: Pod "webserver-deployment-5d9fdcc779-nh7c2" is available: &Pod{ObjectMeta:{webserver-deployment-5d9fdcc779-nh7c2 webserver-deployment-5d9fdcc779- deployment-3961 f3034d72-163d-47e3-b50b-43cdb6da3d58 3367 0 2023-01-09 15:03:48 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:5d9fdcc779] map[] [{apps/v1 ReplicaSet webserver-deployment-5d9fdcc779 c7be4a63-665e-44cd-9a2b-a4589814b048 0xc004e1c867 0xc004e1c868}] [] [{kube-controller-manager Update v1 2023-01-09 15:03:48 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c7be4a63-665e-44cd-9a2b-a4589814b048\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2023-01-09 15:03:55 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"192.168.0.9\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-sdtvl,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-sdtvl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k8s-upgrade-and-conformance-viu2kk-md-0-c5nrs-77b68b5644-2bgsv,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Running,
Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-09 15:03:48 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-09 15:03:55 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-09 15:03:55 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-09 15:03:48 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.4,PodIP:192.168.0.9,StartTime:2023-01-09 15:03:48 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2023-01-09 15:03:54 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,ImageID:k8s.gcr.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3,ContainerID:containerd://4e02ef3877196e8600059a2d12ccfbe863032905d9871a9efcf1eb301ca4ae24,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:192.168.0.9,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 9 15:03:58.983: INFO: Pod "webserver-deployment-5d9fdcc779-qlkjh" is available: &Pod{ObjectMeta:{webserver-deployment-5d9fdcc779-qlkjh webserver-deployment-5d9fdcc779- deployment-3961 ca53b5aa-47d3-4574-a902-c9ccb2cc8336 3283 0 2023-01-09 15:03:48 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:5d9fdcc779] map[] [{apps/v1 ReplicaSet webserver-deployment-5d9fdcc779 c7be4a63-665e-44cd-9a2b-a4589814b048 0xc004e1ca40 0xc004e1ca41}] [] [{kube-controller-manager Update v1 2023-01-09 15:03:48 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c7be4a63-665e-44cd-9a2b-a4589814b048\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2023-01-09 15:03:50 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"192.168.1.16\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-h677c,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-h677c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k8s-upgrade-and-conformance-viu2kk-md-0-c5nrs-77b68b5644-t8jmv,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Running,
Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-09 15:03:48 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-09 15:03:50 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-09 15:03:50 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-09 15:03:48 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.6,PodIP:192.168.1.16,StartTime:2023-01-09 15:03:48 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2023-01-09 15:03:49 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,ImageID:k8s.gcr.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3,ContainerID:containerd://db7318f8123459869314d147aecfaa151e4db1e9ed3898a2bec29df450c7f927,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:192.168.1.16,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 9 15:03:58.983: INFO: Pod "webserver-deployment-5d9fdcc779-sw4h5" is available: &Pod{ObjectMeta:{webserver-deployment-5d9fdcc779-sw4h5 webserver-deployment-5d9fdcc779- deployment-3961 a8d10125-c04f-47e2-8e91-e5863cf3f98f 3345 0 2023-01-09 15:03:48 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:5d9fdcc779] map[] [{apps/v1 ReplicaSet webserver-deployment-5d9fdcc779 c7be4a63-665e-44cd-9a2b-a4589814b048 0xc004e1cc10 0xc004e1cc11}] [] [{kube-controller-manager Update v1 2023-01-09 15:03:48 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c7be4a63-665e-44cd-9a2b-a4589814b048\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2023-01-09 15:03:54 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"192.168.2.10\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-8d8mm,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-8d8mm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k8s-upgrade-and-conformance-viu2kk-worker-qb9e9b,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Running,Conditions:[]P
odCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-09 15:03:48 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-09 15:03:54 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-09 15:03:54 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-09 15:03:48 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.7,PodIP:192.168.2.10,StartTime:2023-01-09 15:03:48 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2023-01-09 15:03:54 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,ImageID:k8s.gcr.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3,ContainerID:containerd://3333880210eb5021c4b12b92d7b7eefdb3351873069b08d91f3045d788e5c32f,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:192.168.2.10,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 9 15:03:58.983: INFO: Pod "webserver-deployment-5d9fdcc779-w6gcd" is not available: &Pod{ObjectMeta:{webserver-deployment-5d9fdcc779-w6gcd webserver-deployment-5d9fdcc779- deployment-3961 f9641921-78a5-4ec6-94e0-ddc0aead7c59 3509 0 2023-01-09 15:03:58 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:5d9fdcc779] map[] [{apps/v1 ReplicaSet webserver-deployment-5d9fdcc779 c7be4a63-665e-44cd-9a2b-a4589814b048 0xc004e1cde0 0xc004e1cde1}] [] [{kube-controller-manager Update v1 2023-01-09 15:03:58 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c7be4a63-665e-44cd-9a2b-a4589814b048\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} 
}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-jvb9m,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-jvb9m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k8s-upgrade-and-conformance-viu2kk-md-0-c5nrs-77b68b5644-t8jmv,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Pending,Condit
ions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-09 15:03:58 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:<nil>,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 9 15:03:58.983: INFO: Pod "webserver-deployment-5d9fdcc779-wtmrc" is not available: &Pod{ObjectMeta:{webserver-deployment-5d9fdcc779-wtmrc webserver-deployment-5d9fdcc779- deployment-3961 3e36a5b4-f36a-47e7-9159-2a1b375fa8e4 3493 0 2023-01-09 15:03:58 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:5d9fdcc779] map[] [{apps/v1 ReplicaSet webserver-deployment-5d9fdcc779 c7be4a63-665e-44cd-9a2b-a4589814b048 0xc004e1cf30 0xc004e1cf31}] [] [{kube-controller-manager Update v1 2023-01-09 15:03:58 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"c7be4a63-665e-44cd-9a2b-a4589814b048\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} }]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-lmb86,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-lmb86,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,
TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k8s-upgrade-and-conformance-viu2kk-md-0-c5nrs-77b68b5644-2bgsv,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-09 15:03:58 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:<nil>,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 9 15:03:58.983: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-3961" for this suite.
•
------------------------------
{"msg":"PASSED [sig-apps] Deployment deployment should support proportional scaling [Conformance]","total":-1,"completed":4,"skipped":70,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]"]}
SSSSSSS
------------------------------
[BeforeEach] [sig-storage] ConfigMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 9 15:03:59.066: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating configMap with name configmap-test-volume-map-7dc3956e-229a-4255-a53c-0709c2ac456e
STEP: Creating a pod to test consume configMaps
Jan 9 15:03:59.139: INFO: Waiting up to 5m0s for pod "pod-configmaps-2e3c3a24-9795-4f0d-87fe-c3eb6543b0dc" in namespace "configmap-6471" to be "Succeeded or Failed"
Jan 9 15:03:59.146: INFO: Pod "pod-configmaps-2e3c3a24-9795-4f0d-87fe-c3eb6543b0dc": Phase="Pending", Reason="", readiness=false. Elapsed: 6.356629ms
Jan 9 15:04:01.155: INFO: Pod "pod-configmaps-2e3c3a24-9795-4f0d-87fe-c3eb6543b0dc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015291s
Jan 9 15:04:03.160: INFO: Pod "pod-configmaps-2e3c3a24-9795-4f0d-87fe-c3eb6543b0dc": Phase="Pending", Reason="", readiness=false. Elapsed: 4.020642292s
Jan 9 15:04:05.170: INFO: Pod "pod-configmaps-2e3c3a24-9795-4f0d-87fe-c3eb6543b0dc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.030452209s
STEP: Saw pod success
Jan 9 15:04:05.170: INFO: Pod "pod-configmaps-2e3c3a24-9795-4f0d-87fe-c3eb6543b0dc" satisfied condition "Succeeded or Failed"
Jan 9 15:04:05.176: INFO: Trying to get logs from node k8s-upgrade-and-conformance-viu2kk-worker-qb9e9b pod pod-configmaps-2e3c3a24-9795-4f0d-87fe-c3eb6543b0dc container agnhost-container: <nil>
STEP: delete the pod
Jan 9 15:04:05.269: INFO: Waiting for pod pod-configmaps-2e3c3a24-9795-4f0d-87fe-c3eb6543b0dc to disappear
Jan 9 15:04:05.281: INFO: Pod pod-configmaps-2e3c3a24-9795-4f0d-87fe-c3eb6543b0dc no longer exists
[AfterEach] [sig-storage] ConfigMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 9 15:04:05.281: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-6471" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":-1,"completed":5,"skipped":77,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]"]}
SSSSSSSSSS
------------------------------
[BeforeEach] [sig-storage] EmptyDir volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 9 15:04:05.367: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating a pod to test emptydir volume type on tmpfs
Jan 9 15:04:05.484: INFO: Waiting up to 5m0s for pod "pod-7a07402a-daf0-42d4-8bcf-c4efe3d38b12" in namespace "emptydir-5365" to be "Succeeded or Failed"
Jan 9 15:04:05.500: INFO: Pod "pod-7a07402a-daf0-42d4-8bcf-c4efe3d38b12": Phase="Pending", Reason="", readiness=false. Elapsed: 15.282184ms
Jan 9 15:04:07.503: INFO: Pod "pod-7a07402a-daf0-42d4-8bcf-c4efe3d38b12": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018797172s
Jan 9 15:04:09.509: INFO: Pod "pod-7a07402a-daf0-42d4-8bcf-c4efe3d38b12": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.024636615s
STEP: Saw pod success
Jan 9 15:04:09.509: INFO: Pod "pod-7a07402a-daf0-42d4-8bcf-c4efe3d38b12" satisfied condition "Succeeded or Failed"
Jan 9 15:04:09.513: INFO: Trying to get logs from node k8s-upgrade-and-conformance-viu2kk-md-0-c5nrs-77b68b5644-t8jmv pod pod-7a07402a-daf0-42d4-8bcf-c4efe3d38b12 container test-container: <nil>
STEP: delete the pod
Jan 9 15:04:09.530: INFO: Waiting for pod pod-7a07402a-daf0-42d4-8bcf-c4efe3d38b12 to disappear
Jan 9 15:04:09.533: INFO: Pod pod-7a07402a-daf0-42d4-8bcf-c4efe3d38b12 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 9 15:04:09.533: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-5365" for this suite.
• ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":6,"skipped":87,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]"]} SSSS ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance]","total":-1,"completed":7,"skipped":157,"failed":0} [BeforeEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 9 15:03:23.577: INFO: >>> kubeConfig: /tmp/kubeconfig STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:94 [BeforeEach] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:109 STEP: Creating service test in namespace statefulset-8106 [It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: Initializing watcher for selector baz=blah,foo=bar STEP: Creating stateful set ss in namespace statefulset-8106 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-8106 Jan 9 15:03:23.641: INFO: Found 0 stateful pods, waiting for 1 Jan 9 15:03:33.648: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod Jan 9 15:03:33.652: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-8106 exec ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Jan 9 15:03:33.829: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" Jan 9 15:03:33.829: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Jan 9 15:03:33.829: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Jan 9 15:03:33.835: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Jan 9 15:03:43.841: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Jan 9 15:03:43.841: INFO: Waiting for statefulset status.replicas updated to 0 Jan 9 15:03:43.857: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999999568s Jan 9 15:03:44.866: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.994234945s Jan 9 15:03:45.880: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.986119676s Jan 9 15:03:46.884: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.972715022s Jan 9 15:03:47.890: INFO: Verifying statefulset ss doesn't scale past 1 for 
another 5.967951275s Jan 9 15:03:48.899: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.961734843s Jan 9 15:03:49.938: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.9537063s Jan 9 15:03:50.944: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.914041508s Jan 9 15:03:51.953: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.907522472s Jan 9 15:03:52.958: INFO: Verifying statefulset ss doesn't scale past 1 for another 899.097391ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-8106 Jan 9 15:03:53.967: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-8106 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 9 15:03:54.174: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n" Jan 9 15:03:54.174: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Jan 9 15:03:54.174: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Jan 9 15:03:54.182: INFO: Found 1 stateful pods, waiting for 3 Jan 9 15:04:04.192: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Jan 9 15:04:04.192: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Jan 9 15:04:04.192: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Verifying that stateful set ss was scaled up in order STEP: Scale down will halt with unhealthy stateful pod Jan 9 15:04:04.202: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-8106 exec ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Jan 9 15:04:04.387: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" Jan 9 15:04:04.387: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Jan 9 15:04:04.388: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Jan 9 15:04:04.388: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-8106 exec ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Jan 9 15:04:04.582: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" Jan 9 15:04:04.582: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Jan 9 15:04:04.582: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Jan 9 15:04:04.582: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-8106 exec ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Jan 9 15:04:04.783: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" Jan 9 15:04:04.784: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Jan 9 15:04:04.784: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Jan 9 15:04:04.784: INFO: Waiting for statefulset status.replicas updated to 0 Jan 9 15:04:04.792: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 3 Jan 9 15:04:14.802: 
INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Jan 9 15:04:14.802: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Jan 9 15:04:14.802: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Jan 9 15:04:14.814: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999999546s Jan 9 15:04:15.828: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.995601449s Jan 9 15:04:16.833: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.981809903s Jan 9 15:04:17.840: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.976881028s Jan 9 15:04:18.846: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.969908201s Jan 9 15:04:19.855: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.963333158s Jan 9 15:04:20.864: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.954153198s Jan 9 15:04:21.872: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.945009631s Jan 9 15:04:22.876: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.938012574s Jan 9 15:04:23.881: INFO: Verifying statefulset ss doesn't scale past 3 for another 933.480934ms STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespace statefulset-8106 Jan 9 15:04:24.887: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-8106 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 9 15:04:25.030: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n" Jan 9 15:04:25.030: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Jan 9 15:04:25.030: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Jan 9 15:04:25.030: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-8106 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 9 15:04:25.177: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n" Jan 9 15:04:25.177: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Jan 9 15:04:25.177: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Jan 9 15:04:25.177: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-8106 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 9 15:04:25.331: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n" Jan 9 15:04:25.331: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Jan 9 15:04:25.331: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Jan 9 15:04:25.331: INFO: Scaling statefulset ss to 0 STEP: Verifying that stateful set ss was scaled down in reverse order [AfterEach] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:120 Jan 9 15:04:35.352: INFO: Deleting all statefulset in ns statefulset-8106 Jan 9 15:04:35.356: INFO: Scaling statefulset ss to 0 Jan 9 15:04:35.367: INFO: Waiting for statefulset 
status.replicas updated to 0 Jan 9 15:04:35.370: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 9 15:04:35.380: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-8106" for this suite. • ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]","total":-1,"completed":8,"skipped":157,"failed":0} SSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 9 15:02:51.440: INFO: >>> kubeConfig: /tmp/kubeconfig STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:752 [It] should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: creating service in namespace services-64 STEP: creating service affinity-clusterip in namespace services-64 STEP: creating replication controller affinity-clusterip in namespace services-64 I0109 15:02:51.490727 18 runners.go:193] Created replication controller with name: affinity-clusterip, namespace: services-64, replica count: 3 I0109 15:02:54.542684 18 runners.go:193] affinity-clusterip Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Jan 9 15:02:54.549: INFO: Creating new exec pod Jan 9 15:02:57.561: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-64 exec execpod-affinitycxp7t -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80' Jan 9 15:02:59.733: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 affinity-clusterip 80\nConnection to affinity-clusterip 80 port [tcp/http] succeeded!\n" Jan 9 15:02:59.733: INFO: stdout: "" Jan 9 15:03:00.733: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-64 exec execpod-affinitycxp7t -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80' Jan 9 15:03:02.906: INFO: stderr: "+ echo+ hostName\nnc -v -t -w 2 affinity-clusterip 80\nConnection to affinity-clusterip 80 port [tcp/http] succeeded!\n" Jan 9 15:03:02.906: INFO: stdout: "" Jan 9 15:03:03.733: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-64 exec execpod-affinitycxp7t -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80' Jan 9 15:03:05.882: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 affinity-clusterip 80\nConnection to affinity-clusterip 80 port [tcp/http] succeeded!\n" Jan 9 15:03:05.882: INFO: stdout: "" Jan 9 15:03:06.733: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig 
--namespace=services-64 exec execpod-affinitycxp7t -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80' Jan 9 15:03:08.925: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 affinity-clusterip 80\nConnection to affinity-clusterip 80 port [tcp/http] succeeded!\n" Jan 9 15:03:08.925: INFO: stdout: "" Jan 9 15:03:09.734: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-64 exec execpod-affinitycxp7t -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80' Jan 9 15:03:11.942: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 affinity-clusterip 80\nConnection to affinity-clusterip 80 port [tcp/http] succeeded!\n" Jan 9 15:03:11.943: INFO: stdout: "" Jan 9 15:03:12.733: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-64 exec execpod-affinitycxp7t -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80' Jan 9 15:03:14.889: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 affinity-clusterip 80\nConnection to affinity-clusterip 80 port [tcp/http] succeeded!\n" Jan 9 15:03:14.889: INFO: stdout: "" Jan 9 15:03:15.733: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-64 exec execpod-affinitycxp7t -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80' Jan 9 15:03:17.885: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 affinity-clusterip 80\nConnection to affinity-clusterip 80 port [tcp/http] succeeded!\n" Jan 9 15:03:17.885: INFO: stdout: "" Jan 9 15:03:18.734: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-64 exec execpod-affinitycxp7t -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80' Jan 9 15:03:20.878: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 affinity-clusterip 80\nConnection to affinity-clusterip 80 port [tcp/http] succeeded!\n" Jan 9 15:03:20.878: INFO: stdout: "" Jan 9 15:03:21.734: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-64 exec execpod-affinitycxp7t -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80' Jan 9 15:03:23.908: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 affinity-clusterip 80\nConnection to affinity-clusterip 80 port [tcp/http] succeeded!\n" Jan 9 15:03:23.908: INFO: stdout: "" Jan 9 15:03:24.733: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-64 exec execpod-affinitycxp7t -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80' Jan 9 15:03:26.951: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 affinity-clusterip 80\nConnection to affinity-clusterip 80 port [tcp/http] succeeded!\n" Jan 9 15:03:26.951: INFO: stdout: "" Jan 9 15:03:27.734: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-64 exec execpod-affinitycxp7t -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80' Jan 9 15:03:30.077: INFO: stderr: "+ nc -v -t -w 2 affinity-clusterip 80\n+ echo hostName\nConnection to affinity-clusterip 80 port [tcp/http] succeeded!\n" Jan 9 15:03:30.077: INFO: stdout: "" Jan 9 15:03:30.733: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-64 exec execpod-affinitycxp7t -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80' Jan 9 15:03:32.944: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 affinity-clusterip 80\nConnection to affinity-clusterip 80 port [tcp/http] succeeded!\n" Jan 9 15:03:32.944: INFO: stdout: "" Jan 9 15:03:33.733: 
INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-64 exec execpod-affinitycxp7t -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80' Jan 9 15:03:35.917: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 affinity-clusterip 80\nConnection to affinity-clusterip 80 port [tcp/http] succeeded!\n" Jan 9 15:03:35.917: INFO: stdout: "" Jan 9 15:03:36.733: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-64 exec execpod-affinitycxp7t -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80' Jan 9 15:03:38.961: INFO: stderr: "+ + echonc -v hostName -t\n -w 2 affinity-clusterip 80\nConnection to affinity-clusterip 80 port [tcp/http] succeeded!\n" Jan 9 15:03:38.961: INFO: stdout: "" Jan 9 15:03:39.734: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-64 exec execpod-affinitycxp7t -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80' Jan 9 15:03:41.901: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 affinity-clusterip 80\nConnection to affinity-clusterip 80 port [tcp/http] succeeded!\n" Jan 9 15:03:41.902: INFO: stdout: "" Jan 9 15:03:42.733: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-64 exec execpod-affinitycxp7t -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80' Jan 9 15:03:44.904: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 affinity-clusterip 80\nConnection to affinity-clusterip 80 port [tcp/http] succeeded!\n" Jan 9 15:03:44.904: INFO: stdout: "" Jan 9 15:03:45.733: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-64 exec execpod-affinitycxp7t -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80' Jan 9 15:03:47.930: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 affinity-clusterip 80\nConnection to affinity-clusterip 80 port [tcp/http] succeeded!\n" Jan 9 15:03:47.930: INFO: stdout: "" Jan 9 15:03:48.734: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-64 exec execpod-affinitycxp7t -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80' Jan 9 15:03:50.961: INFO: stderr: "+ echo+ hostName\nnc -v -t -w 2 affinity-clusterip 80\nConnection to affinity-clusterip 80 port [tcp/http] succeeded!\n" Jan 9 15:03:50.961: INFO: stdout: "" Jan 9 15:03:51.734: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-64 exec execpod-affinitycxp7t -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80' Jan 9 15:03:53.961: INFO: stderr: "+ nc -v -t -w 2 affinity-clusterip 80\n+ echo hostName\nConnection to affinity-clusterip 80 port [tcp/http] succeeded!\n" Jan 9 15:03:53.961: INFO: stdout: "" Jan 9 15:03:54.733: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-64 exec execpod-affinitycxp7t -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80' Jan 9 15:03:56.924: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 affinity-clusterip 80\nConnection to affinity-clusterip 80 port [tcp/http] succeeded!\n" Jan 9 15:03:56.924: INFO: stdout: "" Jan 9 15:03:57.734: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-64 exec execpod-affinitycxp7t -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80' Jan 9 15:04:00.017: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 affinity-clusterip 80\nConnection to affinity-clusterip 80 port [tcp/http] 
succeeded!\n" Jan 9 15:04:00.017: INFO: stdout: "" Jan 9 15:04:00.734: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-64 exec execpod-affinitycxp7t -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80' Jan 9 15:04:02.973: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 affinity-clusterip 80\nConnection to affinity-clusterip 80 port [tcp/http] succeeded!\n" Jan 9 15:04:02.973: INFO: stdout: "" Jan 9 15:04:03.733: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-64 exec execpod-affinitycxp7t -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80' Jan 9 15:04:05.893: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 affinity-clusterip 80\nConnection to affinity-clusterip 80 port [tcp/http] succeeded!\n" Jan 9 15:04:05.893: INFO: stdout: "" Jan 9 15:04:06.734: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-64 exec execpod-affinitycxp7t -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80' Jan 9 15:04:08.908: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 affinity-clusterip 80\nConnection to affinity-clusterip 80 port [tcp/http] succeeded!\n" Jan 9 15:04:08.908: INFO: stdout: "" Jan 9 15:04:09.733: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-64 exec execpod-affinitycxp7t -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80' Jan 9 15:04:11.938: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 affinity-clusterip 80\nConnection to affinity-clusterip 80 port [tcp/http] succeeded!\n" Jan 9 15:04:11.938: INFO: stdout: "" Jan 9 15:04:12.733: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-64 exec execpod-affinitycxp7t -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80' Jan 9 15:04:14.890: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 affinity-clusterip 80\nConnection to affinity-clusterip 80 port [tcp/http] succeeded!\n" Jan 9 15:04:14.890: INFO: stdout: "" Jan 9 15:04:15.734: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-64 exec execpod-affinitycxp7t -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80' Jan 9 15:04:17.914: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 affinity-clusterip 80\nConnection to affinity-clusterip 80 port [tcp/http] succeeded!\n" Jan 9 15:04:17.914: INFO: stdout: "" Jan 9 15:04:18.733: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-64 exec execpod-affinitycxp7t -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80' Jan 9 15:04:20.941: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 affinity-clusterip 80\nConnection to affinity-clusterip 80 port [tcp/http] succeeded!\n" Jan 9 15:04:20.941: INFO: stdout: "" Jan 9 15:04:21.734: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-64 exec execpod-affinitycxp7t -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80' Jan 9 15:04:23.929: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 affinity-clusterip 80\nConnection to affinity-clusterip 80 port [tcp/http] succeeded!\n" Jan 9 15:04:23.929: INFO: stdout: "" Jan 9 15:04:24.734: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-64 exec execpod-affinitycxp7t -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80' Jan 9 15:04:26.874: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 
affinity-clusterip 80\nConnection to affinity-clusterip 80 port [tcp/http] succeeded!\n" Jan 9 15:04:26.874: INFO: stdout: "" Jan 9 15:04:27.734: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-64 exec execpod-affinitycxp7t -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80' Jan 9 15:04:29.895: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 affinity-clusterip 80\nConnection to affinity-clusterip 80 port [tcp/http] succeeded!\n" Jan 9 15:04:29.895: INFO: stdout: "" Jan 9 15:04:30.733: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-64 exec execpod-affinitycxp7t -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80' Jan 9 15:04:32.880: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 affinity-clusterip 80\nConnection to affinity-clusterip 80 port [tcp/http] succeeded!\n" Jan 9 15:04:32.880: INFO: stdout: "" Jan 9 15:04:33.733: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-64 exec execpod-affinitycxp7t -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80' Jan 9 15:04:35.899: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 affinity-clusterip 80\nConnection to affinity-clusterip 80 port [tcp/http] succeeded!\n" Jan 9 15:04:35.899: INFO: stdout: "" Jan 9 15:04:36.734: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-64 exec execpod-affinitycxp7t -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80' Jan 9 15:04:38.922: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 affinity-clusterip 80\nConnection to affinity-clusterip 80 port [tcp/http] succeeded!\n" Jan 9 15:04:38.922: INFO: stdout: "" Jan 9 15:04:39.733: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-64 exec execpod-affinitycxp7t -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80' Jan 9 15:04:41.898: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 affinity-clusterip 80\nConnection to affinity-clusterip 80 port [tcp/http] succeeded!\n" Jan 9 15:04:41.898: INFO: stdout: "" Jan 9 15:04:42.734: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-64 exec execpod-affinitycxp7t -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80' Jan 9 15:04:44.913: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 affinity-clusterip 80\nConnection to affinity-clusterip 80 port [tcp/http] succeeded!\n" Jan 9 15:04:44.913: INFO: stdout: "" Jan 9 15:04:45.733: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-64 exec execpod-affinitycxp7t -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80' Jan 9 15:04:47.899: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 affinity-clusterip 80\nConnection to affinity-clusterip 80 port [tcp/http] succeeded!\n" Jan 9 15:04:47.899: INFO: stdout: "" Jan 9 15:04:48.733: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-64 exec execpod-affinitycxp7t -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80' Jan 9 15:04:50.898: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 affinity-clusterip 80\nConnection to affinity-clusterip 80 port [tcp/http] succeeded!\n" Jan 9 15:04:50.898: INFO: stdout: "" Jan 9 15:04:51.734: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-64 exec execpod-affinitycxp7t -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80' Jan 
9 15:04:53.940: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 affinity-clusterip 80\nConnection to affinity-clusterip 80 port [tcp/http] succeeded!\n" Jan 9 15:04:53.940: INFO: stdout: "" Jan 9 15:04:54.733: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-64 exec execpod-affinitycxp7t -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80' Jan 9 15:04:56.915: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 affinity-clusterip 80\nConnection to affinity-clusterip 80 port [tcp/http] succeeded!\n" Jan 9 15:04:56.916: INFO: stdout: "" Jan 9 15:04:57.733: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-64 exec execpod-affinitycxp7t -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80' Jan 9 15:04:59.890: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 affinity-clusterip 80\nConnection to affinity-clusterip 80 port [tcp/http] succeeded!\n" Jan 9 15:04:59.890: INFO: stdout: "" Jan 9 15:04:59.890: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-64 exec execpod-affinitycxp7t -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80' Jan 9 15:05:02.090: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 affinity-clusterip 80\nConnection to affinity-clusterip 80 port [tcp/http] succeeded!\n" Jan 9 15:05:02.091: INFO: stdout: "" Jan 9 15:05:02.091: FAIL: Unexpected error: <*errors.errorString | 0xc0042902b0>: { s: "service is not reachable within 2m0s timeout on endpoint affinity-clusterip:80 over TCP protocol", } service is not reachable within 2m0s timeout on endpoint affinity-clusterip:80 over TCP protocol occurred Full Stack Trace k8s.io/kubernetes/test/e2e/network.execAffinityTestForNonLBServiceWithOptionalTransition(0x7101987, {0x7b06bd0, 0xc006ac0c00}, 0xc006c68000, 0x0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:3311 +0x669 k8s.io/kubernetes/test/e2e/network.execAffinityTestForNonLBService(...) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:3266 k8s.io/kubernetes/test/e2e/network.glob..func24.24() /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:2067 +0x8d k8s.io/kubernetes/test/e2e.RunE2ETests(0x24c66d7) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:133 +0x697 k8s.io/kubernetes/test/e2e.TestE2E(0x243a8f9) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:136 +0x19 testing.tRunner(0xc000277d40, 0x735e880) /usr/local/go/src/testing/testing.go:1259 +0x102 created by testing.(*T).Run /usr/local/go/src/testing/testing.go:1306 +0x35a Jan 9 15:05:02.091: INFO: Cleaning up the exec pod STEP: deleting ReplicationController affinity-clusterip in namespace services-64, will wait for the garbage collector to delete the pods Jan 9 15:05:02.170: INFO: Deleting ReplicationController affinity-clusterip took: 6.677197ms Jan 9 15:05:02.270: INFO: Terminating ReplicationController affinity-clusterip pods took: 100.368739ms [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 9 15:05:04.592: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-64" for this suite. 
[AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:756 • Failure [133.167 seconds] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance] [It] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 Jan 9 15:05:02.091: Unexpected error: <*errors.errorString | 0xc0042902b0>: { s: "service is not reachable within 2m0s timeout on endpoint affinity-clusterip:80 over TCP protocol", } service is not reachable within 2m0s timeout on endpoint affinity-clusterip:80 over TCP protocol occurred /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:3311 ------------------------------ {"msg":"FAILED [sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","total":-1,"completed":3,"skipped":106,"failed":1,"failures":["[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]"]} [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 9 15:05:04.610: INFO: >>> kubeConfig: /tmp/kubeconfig STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:752 [It] should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: creating service in namespace services-415 STEP: creating service affinity-clusterip in namespace services-415 STEP: creating replication controller affinity-clusterip in namespace services-415 I0109 15:05:04.655527 18 runners.go:193] Created replication controller with name: affinity-clusterip, namespace: services-415, replica count: 3 I0109 15:05:07.707879 18 runners.go:193] affinity-clusterip Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Jan 9 15:05:07.714: INFO: Creating new exec pod Jan 9 15:05:10.726: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-415 exec execpod-affinity8qp4w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80' Jan 9 15:05:12.908: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 affinity-clusterip 80\nConnection to affinity-clusterip 80 port [tcp/http] succeeded!\n" Jan 9 15:05:12.908: INFO: stdout: "" Jan 9 15:05:13.908: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-415 exec execpod-affinity8qp4w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80' Jan 9 15:05:16.107: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 affinity-clusterip 80\nConnection to affinity-clusterip 80 port [tcp/http] succeeded!\n" Jan 9 15:05:16.107: INFO: stdout: "" 
Jan 9 15:05:16.908: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-415 exec execpod-affinity8qp4w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80' Jan 9 15:05:19.110: INFO: stderr: "+ nc -v -t -w 2 affinity-clusterip 80\nConnection to affinity-clusterip 80 port [tcp/http] succeeded!\n+ echo hostName\n" Jan 9 15:05:19.110: INFO: stdout: "" Jan 9 15:05:19.909: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-415 exec execpod-affinity8qp4w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80' Jan 9 15:05:22.086: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 affinity-clusterip 80\nConnection to affinity-clusterip 80 port [tcp/http] succeeded!\n" Jan 9 15:05:22.086: INFO: stdout: "" Jan 9 15:05:22.909: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-415 exec execpod-affinity8qp4w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80' Jan 9 15:05:25.087: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 affinity-clusterip 80\nConnection to affinity-clusterip 80 port [tcp/http] succeeded!\n" Jan 9 15:05:25.087: INFO: stdout: "" Jan 9 15:05:25.909: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-415 exec execpod-affinity8qp4w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80' Jan 9 15:05:28.071: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 affinity-clusterip 80\nConnection to affinity-clusterip 80 port [tcp/http] succeeded!\n" Jan 9 15:05:28.071: INFO: stdout: "" Jan 9 15:05:28.909: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-415 exec execpod-affinity8qp4w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80' Jan 9 15:05:31.083: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 affinity-clusterip 80\nConnection to affinity-clusterip 80 port [tcp/http] succeeded!\n" Jan 9 15:05:31.083: INFO: stdout: "" Jan 9 15:05:31.908: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-415 exec execpod-affinity8qp4w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80' Jan 9 15:05:34.085: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 affinity-clusterip 80\nConnection to affinity-clusterip 80 port [tcp/http] succeeded!\n" Jan 9 15:05:34.085: INFO: stdout: "" Jan 9 15:05:34.908: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-415 exec execpod-affinity8qp4w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80' Jan 9 15:05:37.077: INFO: stderr: "+ + nc -v -t -wecho 2 affinity-clusterip 80 hostName\n\nConnection to affinity-clusterip 80 port [tcp/http] succeeded!\n" Jan 9 15:05:37.077: INFO: stdout: "" Jan 9 15:05:37.908: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-415 exec execpod-affinity8qp4w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80' Jan 9 15:05:40.093: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 affinity-clusterip 80\nConnection to affinity-clusterip 80 port [tcp/http] succeeded!\n" Jan 9 15:05:40.093: INFO: stdout: "" Jan 9 15:05:40.908: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-415 exec execpod-affinity8qp4w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80' Jan 9 15:05:43.090: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 affinity-clusterip 80\nConnection to 
affinity-clusterip 80 port [tcp/http] succeeded!\n" Jan 9 15:05:43.090: INFO: stdout: "" Jan 9 15:05:43.909: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-415 exec execpod-affinity8qp4w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80' Jan 9 15:05:46.083: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 affinity-clusterip 80\nConnection to affinity-clusterip 80 port [tcp/http] succeeded!\n" Jan 9 15:05:46.083: INFO: stdout: "" Jan 9 15:05:46.908: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-415 exec execpod-affinity8qp4w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80' Jan 9 15:05:49.095: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 affinity-clusterip 80\nConnection to affinity-clusterip 80 port [tcp/http] succeeded!\n" Jan 9 15:05:49.095: INFO: stdout: "" Jan 9 15:05:49.909: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-415 exec execpod-affinity8qp4w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80' Jan 9 15:05:52.109: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 affinity-clusterip 80\nConnection to affinity-clusterip 80 port [tcp/http] succeeded!\n" Jan 9 15:05:52.109: INFO: stdout: "" Jan 9 15:05:52.908: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-415 exec execpod-affinity8qp4w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80' Jan 9 15:05:55.072: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 affinity-clusterip 80\nConnection to affinity-clusterip 80 port [tcp/http] succeeded!\n" Jan 9 15:05:55.072: INFO: stdout: "" Jan 9 15:05:55.908: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-415 exec execpod-affinity8qp4w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80' Jan 9 15:05:58.062: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 affinity-clusterip 80\nConnection to affinity-clusterip 80 port [tcp/http] succeeded!\n" Jan 9 15:05:58.062: INFO: stdout: "" Jan 9 15:05:58.908: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-415 exec execpod-affinity8qp4w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80' Jan 9 15:06:01.082: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 affinity-clusterip 80\nConnection to affinity-clusterip 80 port [tcp/http] succeeded!\n" Jan 9 15:06:01.082: INFO: stdout: "" Jan 9 15:06:01.908: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-415 exec execpod-affinity8qp4w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80' Jan 9 15:06:04.089: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 affinity-clusterip 80\nConnection to affinity-clusterip 80 port [tcp/http] succeeded!\n" Jan 9 15:06:04.089: INFO: stdout: "" Jan 9 15:06:04.909: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-415 exec execpod-affinity8qp4w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80' Jan 9 15:06:07.086: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 affinity-clusterip 80\nConnection to affinity-clusterip 80 port [tcp/http] succeeded!\n" Jan 9 15:06:07.086: INFO: stdout: "" Jan 9 15:06:07.908: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-415 exec execpod-affinity8qp4w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80' Jan 9 15:06:10.067: INFO: 
stderr: "+ echo hostName\n+ nc -v -t -w 2 affinity-clusterip 80\nConnection to affinity-clusterip 80 port [tcp/http] succeeded!\n" Jan 9 15:06:10.067: INFO: stdout: "" Jan 9 15:06:10.909: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-415 exec execpod-affinity8qp4w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80' Jan 9 15:06:13.060: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 affinity-clusterip 80\nConnection to affinity-clusterip 80 port [tcp/http] succeeded!\n" Jan 9 15:06:13.060: INFO: stdout: "" Jan 9 15:06:13.909: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-415 exec execpod-affinity8qp4w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80' Jan 9 15:06:16.085: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 affinity-clusterip 80\nConnection to affinity-clusterip 80 port [tcp/http] succeeded!\n" Jan 9 15:06:16.085: INFO: stdout: "" Jan 9 15:06:16.909: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-415 exec execpod-affinity8qp4w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80' Jan 9 15:06:19.073: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 affinity-clusterip 80\nConnection to affinity-clusterip 80 port [tcp/http] succeeded!\n" Jan 9 15:06:19.073: INFO: stdout: "" Jan 9 15:06:19.908: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-415 exec execpod-affinity8qp4w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80' Jan 9 15:06:22.099: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 affinity-clusterip 80\nConnection to affinity-clusterip 80 port [tcp/http] succeeded!\n" Jan 9 15:06:22.099: INFO: stdout: "" Jan 9 15:06:22.908: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-415 exec execpod-affinity8qp4w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80' Jan 9 15:06:25.069: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 affinity-clusterip 80\nConnection to affinity-clusterip 80 port [tcp/http] succeeded!\n" Jan 9 15:06:25.069: INFO: stdout: "" Jan 9 15:06:25.908: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-415 exec execpod-affinity8qp4w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80' Jan 9 15:06:28.070: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 affinity-clusterip 80\nConnection to affinity-clusterip 80 port [tcp/http] succeeded!\n" Jan 9 15:06:28.070: INFO: stdout: "" Jan 9 15:06:28.909: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-415 exec execpod-affinity8qp4w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80' Jan 9 15:06:31.075: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 affinity-clusterip 80\nConnection to affinity-clusterip 80 port [tcp/http] succeeded!\n" Jan 9 15:06:31.075: INFO: stdout: "" Jan 9 15:06:31.909: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-415 exec execpod-affinity8qp4w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80' Jan 9 15:06:34.092: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 affinity-clusterip 80\nConnection to affinity-clusterip 80 port [tcp/http] succeeded!\n" Jan 9 15:06:34.092: INFO: stdout: "" Jan 9 15:06:34.908: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-415 exec execpod-affinity8qp4w -- /bin/sh -x -c echo 
hostName | nc -v -t -w 2 affinity-clusterip 80' Jan 9 15:06:37.069: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 affinity-clusterip 80\nConnection to affinity-clusterip 80 port [tcp/http] succeeded!\n" Jan 9 15:06:37.069: INFO: stdout: "" Jan 9 15:06:37.909: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-415 exec execpod-affinity8qp4w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80' Jan 9 15:06:40.076: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 affinity-clusterip 80\nConnection to affinity-clusterip 80 port [tcp/http] succeeded!\n" Jan 9 15:06:40.076: INFO: stdout: "" Jan 9 15:06:40.908: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-415 exec execpod-affinity8qp4w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80' Jan 9 15:06:43.096: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 affinity-clusterip 80\nConnection to affinity-clusterip 80 port [tcp/http] succeeded!\n" Jan 9 15:06:43.096: INFO: stdout: "" Jan 9 15:06:43.908: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-415 exec execpod-affinity8qp4w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80' Jan 9 15:06:46.081: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 affinity-clusterip 80\nConnection to affinity-clusterip 80 port [tcp/http] succeeded!\n" Jan 9 15:06:46.081: INFO: stdout: "" Jan 9 15:06:46.908: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-415 exec execpod-affinity8qp4w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80' Jan 9 15:06:49.066: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 affinity-clusterip 80\nConnection to affinity-clusterip 80 port [tcp/http] succeeded!\n" Jan 9 15:06:49.066: INFO: stdout: "" Jan 9 15:06:49.909: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-415 exec execpod-affinity8qp4w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80' Jan 9 15:06:52.108: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 affinity-clusterip 80\nConnection to affinity-clusterip 80 port [tcp/http] succeeded!\n" Jan 9 15:06:52.108: INFO: stdout: "" Jan 9 15:06:52.908: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-415 exec execpod-affinity8qp4w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80' Jan 9 15:06:55.091: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 affinity-clusterip 80\nConnection to affinity-clusterip 80 port [tcp/http] succeeded!\n" Jan 9 15:06:55.091: INFO: stdout: "" Jan 9 15:06:55.908: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-415 exec execpod-affinity8qp4w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80' Jan 9 15:06:58.068: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 affinity-clusterip 80\nConnection to affinity-clusterip 80 port [tcp/http] succeeded!\n" Jan 9 15:06:58.068: INFO: stdout: "" Jan 9 15:06:58.909: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-415 exec execpod-affinity8qp4w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80' Jan 9 15:07:01.066: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 affinity-clusterip 80\nConnection to affinity-clusterip 80 port [tcp/http] succeeded!\n" Jan 9 15:07:01.066: INFO: stdout: "" Jan 9 15:07:01.909: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig 
--namespace=services-415 exec execpod-affinity8qp4w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80' Jan 9 15:07:04.067: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 affinity-clusterip 80\nConnection to affinity-clusterip 80 port [tcp/http] succeeded!\n" Jan 9 15:07:04.067: INFO: stdout: "" Jan 9 15:07:04.909: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-415 exec execpod-affinity8qp4w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80' Jan 9 15:07:07.078: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 affinity-clusterip 80\nConnection to affinity-clusterip 80 port [tcp/http] succeeded!\n" Jan 9 15:07:07.078: INFO: stdout: "" Jan 9 15:07:07.909: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-415 exec execpod-affinity8qp4w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80' Jan 9 15:07:10.083: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 affinity-clusterip 80\nConnection to affinity-clusterip 80 port [tcp/http] succeeded!\n" Jan 9 15:07:10.083: INFO: stdout: "" Jan 9 15:07:10.909: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-415 exec execpod-affinity8qp4w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80' Jan 9 15:07:13.121: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 affinity-clusterip 80\nConnection to affinity-clusterip 80 port [tcp/http] succeeded!\n" Jan 9 15:07:13.121: INFO: stdout: "" Jan 9 15:07:13.121: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-415 exec execpod-affinity8qp4w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80' Jan 9 15:07:15.289: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 affinity-clusterip 80\nConnection to affinity-clusterip 80 port [tcp/http] succeeded!\n" Jan 9 15:07:15.289: INFO: stdout: "" Jan 9 15:07:15.290: FAIL: Unexpected error: <*errors.errorString | 0xc005db4400>: { s: "service is not reachable within 2m0s timeout on endpoint affinity-clusterip:80 over TCP protocol", } service is not reachable within 2m0s timeout on endpoint affinity-clusterip:80 over TCP protocol occurred Full Stack Trace k8s.io/kubernetes/test/e2e/network.execAffinityTestForNonLBServiceWithOptionalTransition(0x7101987, {0x7b06bd0, 0xc004bc6d80}, 0xc006c68a00, 0x0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:3311 +0x669 k8s.io/kubernetes/test/e2e/network.execAffinityTestForNonLBService(...) 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:3266 k8s.io/kubernetes/test/e2e/network.glob..func24.24() /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:2067 +0x8d k8s.io/kubernetes/test/e2e.RunE2ETests(0x24c66d7) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:133 +0x697 k8s.io/kubernetes/test/e2e.TestE2E(0x243a8f9) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:136 +0x19 testing.tRunner(0xc000277d40, 0x735e880) /usr/local/go/src/testing/testing.go:1259 +0x102 created by testing.(*T).Run /usr/local/go/src/testing/testing.go:1306 +0x35a Jan 9 15:07:15.290: INFO: Cleaning up the exec pod STEP: deleting ReplicationController affinity-clusterip in namespace services-415, will wait for the garbage collector to delete the pods Jan 9 15:07:15.374: INFO: Deleting ReplicationController affinity-clusterip took: 7.34258ms Jan 9 15:07:15.475: INFO: Terminating ReplicationController affinity-clusterip pods took: 101.200848ms [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 9 15:07:17.399: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-415" for this suite. [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:756 • Failure [132.801 seconds] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23 should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance] [It] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 Jan 9 15:07:15.290: Unexpected error: <*errors.errorString | 0xc005db4400>: { s: "service is not reachable within 2m0s timeout on endpoint affinity-clusterip:80 over TCP protocol", } service is not reachable within 2m0s timeout on endpoint affinity-clusterip:80 over TCP protocol occurred /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:3311 ------------------------------ {"msg":"FAILED [sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","total":-1,"completed":3,"skipped":106,"failed":2,"failures":["[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]"]} [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 9 15:07:17.414: INFO: >>> kubeConfig: /tmp/kubeconfig STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:752 [It] should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance] 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: creating service in namespace services-7663 STEP: creating service affinity-clusterip in namespace services-7663 STEP: creating replication controller affinity-clusterip in namespace services-7663 I0109 15:07:17.474364 18 runners.go:193] Created replication controller with name: affinity-clusterip, namespace: services-7663, replica count: 3 I0109 15:07:20.526163 18 runners.go:193] affinity-clusterip Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Jan 9 15:07:20.534: INFO: Creating new exec pod Jan 9 15:07:23.561: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-7663 exec execpod-affinitypt9ml -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80' Jan 9 15:07:25.745: INFO: rc: 1 Jan 9 15:07:25.745: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-7663 exec execpod-affinitypt9ml -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80: Command stdout: stderr: + echo hostName + nc -v -t -w 2 affinity-clusterip 80 nc: connect to affinity-clusterip port 80 (tcp) timed out: Operation in progress command terminated with exit code 1 error: exit status 1 Retrying... Jan 9 15:07:26.746: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-7663 exec execpod-affinitypt9ml -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80' Jan 9 15:07:28.905: INFO: rc: 1 Jan 9 15:07:28.905: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-7663 exec execpod-affinitypt9ml -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80: Command stdout: stderr: + echo hostName + nc -v -t -w 2 affinity-clusterip 80 nc: connect to affinity-clusterip port 80 (tcp) timed out: Operation in progress command terminated with exit code 1 error: exit status 1 Retrying... Jan 9 15:07:29.745: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-7663 exec execpod-affinitypt9ml -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80' Jan 9 15:07:31.931: INFO: rc: 1 Jan 9 15:07:31.931: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-7663 exec execpod-affinitypt9ml -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80: Command stdout: stderr: + echo hostName + nc -v -t -w 2 affinity-clusterip 80 nc: connect to affinity-clusterip port 80 (tcp) timed out: Operation in progress command terminated with exit code 1 error: exit status 1 Retrying... 
Jan 9 15:07:32.746: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-7663 exec execpod-affinitypt9ml -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80' Jan 9 15:07:34.906: INFO: rc: 1 Jan 9 15:07:34.906: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-7663 exec execpod-affinitypt9ml -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80: Command stdout: stderr: + echo hostName + nc -v -t -w 2 affinity-clusterip 80 nc: connect to affinity-clusterip port 80 (tcp) timed out: Operation in progress command terminated with exit code 1 error: exit status 1 Retrying... Jan 9 15:07:35.746: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-7663 exec execpod-affinitypt9ml -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80' Jan 9 15:07:37.907: INFO: rc: 1 Jan 9 15:07:37.908: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-7663 exec execpod-affinitypt9ml -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80: Command stdout: stderr: + echo hostName + nc -v -t -w 2 affinity-clusterip 80 nc: connect to affinity-clusterip port 80 (tcp) timed out: Operation in progress command terminated with exit code 1 error: exit status 1 Retrying... Jan 9 15:07:38.745: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-7663 exec execpod-affinitypt9ml -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80' Jan 9 15:07:40.901: INFO: rc: 1 Jan 9 15:07:40.901: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-7663 exec execpod-affinitypt9ml -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80: Command stdout: stderr: + echo hostName + nc -v -t -w 2 affinity-clusterip 80 nc: connect to affinity-clusterip port 80 (tcp) timed out: Operation in progress command terminated with exit code 1 error: exit status 1 Retrying... Jan 9 15:07:41.746: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-7663 exec execpod-affinitypt9ml -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80' Jan 9 15:07:43.938: INFO: rc: 1 Jan 9 15:07:43.938: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-7663 exec execpod-affinitypt9ml -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80: Command stdout: stderr: + echo hostName + nc -v -t -w 2 affinity-clusterip 80 nc: connect to affinity-clusterip port 80 (tcp) timed out: Operation in progress command terminated with exit code 1 error: exit status 1 Retrying... 
Jan 9 15:07:44.745: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-7663 exec execpod-affinitypt9ml -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80' Jan 9 15:07:46.909: INFO: rc: 1 Jan 9 15:07:46.909: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-7663 exec execpod-affinitypt9ml -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80: Command stdout: stderr: + echo hostName + nc -v -t -w 2 affinity-clusterip 80 nc: connect to affinity-clusterip port 80 (tcp) timed out: Operation in progress command terminated with exit code 1 error: exit status 1 Retrying... Jan 9 15:07:47.746: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-7663 exec execpod-affinitypt9ml -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80' Jan 9 15:07:49.912: INFO: rc: 1 Jan 9 15:07:49.912: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-7663 exec execpod-affinitypt9ml -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80: Command stdout: stderr: + echo hostName + nc -v -t -w 2 affinity-clusterip 80 nc: connect to affinity-clusterip port 80 (tcp) timed out: Operation in progress command terminated with exit code 1 error: exit status 1 Retrying... Jan 9 15:07:50.746: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-7663 exec execpod-affinitypt9ml -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80' Jan 9 15:07:52.928: INFO: rc: 1 Jan 9 15:07:52.928: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-7663 exec execpod-affinitypt9ml -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80: Command stdout: stderr: + echo hostName + nc -v -t -w 2 affinity-clusterip 80 nc: connect to affinity-clusterip port 80 (tcp) timed out: Operation in progress command terminated with exit code 1 error: exit status 1 Retrying... Jan 9 15:07:53.745: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-7663 exec execpod-affinitypt9ml -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80' Jan 9 15:07:55.901: INFO: rc: 1 Jan 9 15:07:55.901: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-7663 exec execpod-affinitypt9ml -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80: Command stdout: stderr: + echo hostName + nc -v -t -w 2 affinity-clusterip 80 nc: connect to affinity-clusterip port 80 (tcp) timed out: Operation in progress command terminated with exit code 1 error: exit status 1 Retrying... 
Jan 9 15:07:56.746: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-7663 exec execpod-affinitypt9ml -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80' Jan 9 15:07:58.903: INFO: rc: 1 Jan 9 15:07:58.903: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-7663 exec execpod-affinitypt9ml -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80: Command stdout: stderr: + echo hostName + nc -v -t -w 2 affinity-clusterip 80 nc: connect to affinity-clusterip port 80 (tcp) timed out: Operation in progress command terminated with exit code 1 error: exit status 1 Retrying... Jan 9 15:07:59.746: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-7663 exec execpod-affinitypt9ml -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80' Jan 9 15:08:01.924: INFO: rc: 1 Jan 9 15:08:01.924: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-7663 exec execpod-affinitypt9ml -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80: Command stdout: stderr: + echo hostName + nc -v -t -w 2 affinity-clusterip 80 nc: connect to affinity-clusterip port 80 (tcp) timed out: Operation in progress command terminated with exit code 1 error: exit status 1 Retrying... Jan 9 15:08:02.745: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-7663 exec execpod-affinitypt9ml -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80' Jan 9 15:08:04.910: INFO: rc: 1 Jan 9 15:08:04.910: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-7663 exec execpod-affinitypt9ml -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80: Command stdout: stderr: + echo hostName + nc -v -t -w 2 affinity-clusterip 80 nc: connect to affinity-clusterip port 80 (tcp) timed out: Operation in progress command terminated with exit code 1 error: exit status 1 Retrying... Jan 9 15:08:05.746: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-7663 exec execpod-affinitypt9ml -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80' Jan 9 15:08:07.923: INFO: rc: 1 Jan 9 15:08:07.923: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-7663 exec execpod-affinitypt9ml -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80: Command stdout: stderr: + echo hostName + nc -v -t -w 2 affinity-clusterip 80 nc: connect to affinity-clusterip port 80 (tcp) timed out: Operation in progress command terminated with exit code 1 error: exit status 1 Retrying... 
Jan 9 15:08:08.746: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-7663 exec execpod-affinitypt9ml -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80' Jan 9 15:08:10.946: INFO: rc: 1 Jan 9 15:08:10.946: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-7663 exec execpod-affinitypt9ml -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80: Command stdout: stderr: + echo hostName + nc -v -t -w 2 affinity-clusterip 80 nc: connect to affinity-clusterip port 80 (tcp) timed out: Operation in progress command terminated with exit code 1 error: exit status 1 Retrying... Jan 9 15:08:11.746: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-7663 exec execpod-affinitypt9ml -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80' Jan 9 15:08:13.926: INFO: rc: 1 Jan 9 15:08:13.926: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-7663 exec execpod-affinitypt9ml -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80: Command stdout: stderr: + echo hostName + nc -v -t -w 2 affinity-clusterip 80 nc: connect to affinity-clusterip port 80 (tcp) timed out: Operation in progress command terminated with exit code 1 error: exit status 1 Retrying... Jan 9 15:08:14.746: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-7663 exec execpod-affinitypt9ml -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80' Jan 9 15:08:16.938: INFO: rc: 1 Jan 9 15:08:16.938: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-7663 exec execpod-affinitypt9ml -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80: Command stdout: stderr: + echo hostName + nc -v -t -w 2 affinity-clusterip 80 nc: connect to affinity-clusterip port 80 (tcp) timed out: Operation in progress command terminated with exit code 1 error: exit status 1 Retrying... Jan 9 15:08:17.746: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-7663 exec execpod-affinitypt9ml -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80' Jan 9 15:08:19.898: INFO: rc: 1 Jan 9 15:08:19.898: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-7663 exec execpod-affinitypt9ml -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80: Command stdout: stderr: + echo hostName + nc -v -t -w 2 affinity-clusterip 80 nc: connect to affinity-clusterip port 80 (tcp) timed out: Operation in progress command terminated with exit code 1 error: exit status 1 Retrying... 
Jan 9 15:08:20.746: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-7663 exec execpod-affinitypt9ml -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80' Jan 9 15:08:22.935: INFO: rc: 1 Jan 9 15:08:22.935: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-7663 exec execpod-affinitypt9ml -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80: Command stdout: stderr: + echo hostName + nc -v -t -w 2 affinity-clusterip 80 nc: connect to affinity-clusterip port 80 (tcp) timed out: Operation in progress command terminated with exit code 1 error: exit status 1 Retrying... Jan 9 15:08:23.745: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-7663 exec execpod-affinitypt9ml -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80' Jan 9 15:08:25.898: INFO: rc: 1 Jan 9 15:08:25.898: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-7663 exec execpod-affinitypt9ml -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80: Command stdout: stderr: + echo hostName + nc -v -t -w 2 affinity-clusterip 80 nc: connect to affinity-clusterip port 80 (tcp) timed out: Operation in progress command terminated with exit code 1 error: exit status 1 Retrying... Jan 9 15:08:26.746: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-7663 exec execpod-affinitypt9ml -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80' Jan 9 15:08:28.935: INFO: rc: 1 Jan 9 15:08:28.935: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-7663 exec execpod-affinitypt9ml -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80: Command stdout: stderr: + echo hostName + nc -v -t -w 2 affinity-clusterip 80 nc: connect to affinity-clusterip port 80 (tcp) timed out: Operation in progress command terminated with exit code 1 error: exit status 1 Retrying... Jan 9 15:08:29.746: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-7663 exec execpod-affinitypt9ml -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80' Jan 9 15:08:31.915: INFO: rc: 1 Jan 9 15:08:31.915: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-7663 exec execpod-affinitypt9ml -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80: Command stdout: stderr: + echo hostName + nc -v -t -w 2 affinity-clusterip 80 nc: connect to affinity-clusterip port 80 (tcp) timed out: Operation in progress command terminated with exit code 1 error: exit status 1 Retrying... 
Jan 9 15:08:32.745: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-7663 exec execpod-affinitypt9ml -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80' Jan 9 15:08:34.957: INFO: rc: 1 Jan 9 15:08:34.957: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-7663 exec execpod-affinitypt9ml -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80: Command stdout: stderr: + echo hostName + nc -v -t -w 2 affinity-clusterip 80 nc: connect to affinity-clusterip port 80 (tcp) timed out: Operation in progress command terminated with exit code 1 error: exit status 1 Retrying... Jan 9 15:08:35.745: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-7663 exec execpod-affinitypt9ml -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80' Jan 9 15:08:37.936: INFO: rc: 1 Jan 9 15:08:37.936: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-7663 exec execpod-affinitypt9ml -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80: Command stdout: stderr: + echo hostName + nc -v -t -w 2 affinity-clusterip 80 nc: connect to affinity-clusterip port 80 (tcp) timed out: Operation in progress command terminated with exit code 1 error: exit status 1 Retrying... Jan 9 15:08:38.746: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-7663 exec execpod-affinitypt9ml -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80' Jan 9 15:08:40.901: INFO: rc: 1 Jan 9 15:08:40.901: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-7663 exec execpod-affinitypt9ml -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80: Command stdout: stderr: + echo hostName + nc -v -t -w 2 affinity-clusterip 80 nc: connect to affinity-clusterip port 80 (tcp) timed out: Operation in progress command terminated with exit code 1 error: exit status 1 Retrying... Jan 9 15:08:41.746: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-7663 exec execpod-affinitypt9ml -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80' Jan 9 15:08:43.905: INFO: rc: 1 Jan 9 15:08:43.905: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-7663 exec execpod-affinitypt9ml -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80: Command stdout: stderr: + echo hostName + nc -v -t -w 2 affinity-clusterip 80 nc: connect to affinity-clusterip port 80 (tcp) timed out: Operation in progress command terminated with exit code 1 error: exit status 1 Retrying... 
Jan 9 15:08:44.746: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-7663 exec execpod-affinitypt9ml -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80' Jan 9 15:08:46.922: INFO: rc: 1 Jan 9 15:08:46.922: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-7663 exec execpod-affinitypt9ml -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80: Command stdout: stderr: + echo hostName + nc -v -t -w 2 affinity-clusterip 80 nc: connect to affinity-clusterip port 80 (tcp) timed out: Operation in progress command terminated with exit code 1 error: exit status 1 Retrying... Jan 9 15:08:47.746: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-7663 exec execpod-affinitypt9ml -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80' Jan 9 15:08:49.911: INFO: rc: 1 Jan 9 15:08:49.911: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-7663 exec execpod-affinitypt9ml -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80: Command stdout: stderr: + echo hostName + nc -v -t -w 2 affinity-clusterip 80 nc: connect to affinity-clusterip port 80 (tcp) timed out: Operation in progress command terminated with exit code 1 error: exit status 1 Retrying... Jan 9 15:08:50.746: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-7663 exec execpod-affinitypt9ml -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80' Jan 9 15:08:53.095: INFO: rc: 1 Jan 9 15:08:53.096: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-7663 exec execpod-affinitypt9ml -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80: Command stdout: stderr: + echo hostName + nc -v -t -w 2 affinity-clusterip 80 nc: connect to affinity-clusterip port 80 (tcp) timed out: Operation in progress command terminated with exit code 1 error: exit status 1 Retrying... Jan 9 15:08:53.747: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-7663 exec execpod-affinitypt9ml -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80' Jan 9 15:08:56.071: INFO: rc: 1 Jan 9 15:08:56.072: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-7663 exec execpod-affinitypt9ml -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80: Command stdout: stderr: + echo hostName + nc -v -t -w 2 affinity-clusterip 80 nc: connect to affinity-clusterip port 80 (tcp) timed out: Operation in progress command terminated with exit code 1 error: exit status 1 Retrying... 
Jan 9 15:08:56.746: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-7663 exec execpod-affinitypt9ml -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80' Jan 9 15:08:59.047: INFO: rc: 1 Jan 9 15:08:59.047: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-7663 exec execpod-affinitypt9ml -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80: Command stdout: stderr: + echo hostName + nc -v -t -w 2 affinity-clusterip 80 nc: connect to affinity-clusterip port 80 (tcp) timed out: Operation in progress command terminated with exit code 1 error: exit status 1 Retrying... Jan 9 15:08:59.746: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-7663 exec execpod-affinitypt9ml -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80' Jan 9 15:09:02.061: INFO: rc: 1 Jan 9 15:09:02.061: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-7663 exec execpod-affinitypt9ml -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80: Command stdout: stderr: + echo hostName + nc -v -t -w 2 affinity-clusterip 80 nc: connect to affinity-clusterip port 80 (tcp) timed out: Operation in progress command terminated with exit code 1 error: exit status 1 Retrying... Jan 9 15:09:02.746: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-7663 exec execpod-affinitypt9ml -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80' Jan 9 15:09:05.019: INFO: rc: 1 Jan 9 15:09:05.019: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-7663 exec execpod-affinitypt9ml -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80: Command stdout: stderr: + echo hostName + nc -v -t -w 2 affinity-clusterip 80 nc: connect to affinity-clusterip port 80 (tcp) timed out: Operation in progress command terminated with exit code 1 error: exit status 1 Retrying... Jan 9 15:09:05.746: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-7663 exec execpod-affinitypt9ml -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80' Jan 9 15:09:08.048: INFO: rc: 1 Jan 9 15:09:08.049: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-7663 exec execpod-affinitypt9ml -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80: Command stdout: stderr: + echo hostName + nc -v -t -w 2 affinity-clusterip 80 nc: connect to affinity-clusterip port 80 (tcp) timed out: Operation in progress command terminated with exit code 1 error: exit status 1 Retrying... 
Jan 9 15:09:08.746: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-7663 exec execpod-affinitypt9ml -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80' Jan 9 15:09:11.074: INFO: rc: 1 Jan 9 15:09:11.074: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-7663 exec execpod-affinitypt9ml -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80: Command stdout: stderr: + echo hostName + nc -v -t -w 2 affinity-clusterip 80 nc: connect to affinity-clusterip port 80 (tcp) timed out: Operation in progress command terminated with exit code 1 error: exit status 1 Retrying... Jan 9 15:09:11.747: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-7663 exec execpod-affinitypt9ml -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80' Jan 9 15:09:14.063: INFO: rc: 1 Jan 9 15:09:14.063: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-7663 exec execpod-affinitypt9ml -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80: Command stdout: stderr: + echo hostName + nc -v -t -w 2 affinity-clusterip 80 nc: connect to affinity-clusterip port 80 (tcp) timed out: Operation in progress command terminated with exit code 1 error: exit status 1 Retrying... Jan 9 15:09:14.745: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-7663 exec execpod-affinitypt9ml -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80' Jan 9 15:09:17.033: INFO: rc: 1 Jan 9 15:09:17.033: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-7663 exec execpod-affinitypt9ml -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80: Command stdout: stderr: + + echonc hostName -v -t -w 2 affinity-clusterip 80 nc: connect to affinity-clusterip port 80 (tcp) timed out: Operation in progress command terminated with exit code 1 error: exit status 1 Retrying... Jan 9 15:09:17.746: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-7663 exec execpod-affinitypt9ml -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80' Jan 9 15:09:20.047: INFO: rc: 1 Jan 9 15:09:20.047: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-7663 exec execpod-affinitypt9ml -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80: Command stdout: stderr: + + echonc hostName -v -t -w 2 affinity-clusterip 80 nc: connect to affinity-clusterip port 80 (tcp) timed out: Operation in progress command terminated with exit code 1 error: exit status 1 Retrying... 
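The identical probe keeps timing out until the 2m0s budget is exhausted; the remaining attempts and the resulting failure follow below. A connect timeout against a ClusterIP from inside the cluster usually implicates the service dataplane rather than the backend pods, so a natural next step is to look at kube-proxy on the workload nodes. This is a hedged sketch, assuming a kubeadm-style cluster where kube-proxy runs as a DaemonSet labelled k8s-app=kube-proxy in kube-system (an assumption for illustration, not something this log shows):

# Sketch only: dataplane checks to run while the probe above keeps timing out.
export KUBECONFIG=/tmp/kubeconfig

# kube-proxy pods and recent logs (the label is the kubeadm default, assumed here).
kubectl -n kube-system get pods -l k8s-app=kube-proxy -o wide
kubectl -n kube-system logs -l k8s-app=kube-proxy --tail=50

# The endpoints the unreachable Service should be forwarding to.
kubectl -n services-7663 get endpoints affinity-clusterip -o yaml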
Jan 9 15:09:20.746: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-7663 exec execpod-affinitypt9ml -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80' Jan 9 15:09:23.100: INFO: rc: 1 Jan 9 15:09:23.101: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-7663 exec execpod-affinitypt9ml -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80: Command stdout: stderr: + echo hostName + nc -v -t -w 2 affinity-clusterip 80 nc: connect to affinity-clusterip port 80 (tcp) timed out: Operation in progress command terminated with exit code 1 error: exit status 1 Retrying... Jan 9 15:09:23.747: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-7663 exec execpod-affinitypt9ml -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80' Jan 9 15:09:26.048: INFO: rc: 1 Jan 9 15:09:26.048: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-7663 exec execpod-affinitypt9ml -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80: Command stdout: stderr: + echo hostName+ nc -v -t -w 2 affinity-clusterip 80 nc: connect to affinity-clusterip port 80 (tcp) timed out: Operation in progress command terminated with exit code 1 error: exit status 1 Retrying... Jan 9 15:09:26.048: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-7663 exec execpod-affinitypt9ml -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80' Jan 9 15:09:28.381: INFO: rc: 1 Jan 9 15:09:28.381: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-7663 exec execpod-affinitypt9ml -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80: Command stdout: stderr: + echo hostName + nc -v -t -w 2 affinity-clusterip 80 nc: connect to affinity-clusterip port 80 (tcp) timed out: Operation in progress command terminated with exit code 1 error: exit status 1 Retrying... Jan 9 15:09:28.382: FAIL: Unexpected error: <*errors.errorString | 0xc005db43e0>: { s: "service is not reachable within 2m0s timeout on endpoint affinity-clusterip:80 over TCP protocol", } service is not reachable within 2m0s timeout on endpoint affinity-clusterip:80 over TCP protocol occurred Full Stack Trace k8s.io/kubernetes/test/e2e/network.execAffinityTestForNonLBServiceWithOptionalTransition(0x7101987, {0x7b06bd0, 0xc0012dfe00}, 0xc000d58280, 0x0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:3311 +0x669 k8s.io/kubernetes/test/e2e/network.execAffinityTestForNonLBService(...) 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:3266
k8s.io/kubernetes/test/e2e/network.glob..func24.24()
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:2067 +0x8d
k8s.io/kubernetes/test/e2e.RunE2ETests(0x24c66d7)
  _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:133 +0x697
k8s.io/kubernetes/test/e2e.TestE2E(0x243a8f9)
  _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:136 +0x19
testing.tRunner(0xc000277d40, 0x735e880)
  /usr/local/go/src/testing/testing.go:1259 +0x102
created by testing.(*T).Run
  /usr/local/go/src/testing/testing.go:1306 +0x35a
Jan 9 15:09:28.382: INFO: Cleaning up the exec pod
STEP: deleting ReplicationController affinity-clusterip in namespace services-7663, will wait for the garbage collector to delete the pods
Jan 9 15:09:28.482: INFO: Deleting ReplicationController affinity-clusterip took: 9.567297ms
Jan 9 15:09:28.583: INFO: Terminating ReplicationController affinity-clusterip pods took: 101.159461ms
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 9 15:09:30.977: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-7663" for this suite.
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:756

• Failure [133.613 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance] [It]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633

  Jan 9 15:09:28.382: Unexpected error:
      <*errors.errorString | 0xc005db43e0>: {
          s: "service is not reachable within 2m0s timeout on endpoint affinity-clusterip:80 over TCP protocol",
      }
      service is not reachable within 2m0s timeout on endpoint affinity-clusterip:80 over TCP protocol
  occurred

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:3311
------------------------------
{"msg":"FAILED [sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","total":-1,"completed":3,"skipped":106,"failed":3,"failures":["[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]"]}
SSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-instrumentation] Events API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 9 15:09:31.131: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename events
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-instrumentation] Events API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/instrumentation/events.go:81
[It] should delete a collection of events [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Create set of events
STEP: get a list of Events with a label in the current namespace
STEP: delete a list of events
Jan 9 15:09:31.222: INFO: requesting DeleteCollection of events
STEP: check that the list of events matches the requested quantity
[AfterEach] [sig-instrumentation] Events API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 9 15:09:31.280: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "events-2053" for this suite.
•
------------------------------
{"msg":"PASSED [sig-instrumentation] Events API should delete a collection of events [Conformance]","total":-1,"completed":4,"skipped":122,"failed":3,"failures":["[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]"]}
SSSSS
------------------------------
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 9 15:09:31.313: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41
[It] should update labels on modification [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating the pod
Jan 9 15:09:31.389: INFO: The status of Pod labelsupdate7881d848-62bb-437c-8d5a-b78140de11a4 is Pending, waiting for it to be Running (with Ready = true)
Jan 9 15:09:33.397: INFO: The status of Pod labelsupdate7881d848-62bb-437c-8d5a-b78140de11a4 is Running (Ready = true)
Jan 9 15:09:33.964: INFO: Successfully updated pod "labelsupdate7881d848-62bb-437c-8d5a-b78140de11a4"
[AfterEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 9 15:09:38.005: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-8676" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance]","total":-1,"completed":5,"skipped":127,"failed":3,"failures":["[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 9 15:03:41.470: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should function for intra-pod communication: http [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Performing setup for networking test in namespace pod-network-test-7729
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Jan 9 15:03:41.519: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Jan 9 15:03:41.606: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Jan 9 15:03:43.611: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jan 9 15:03:45.612: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jan 9 15:03:47.611: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jan 9 15:03:49.633: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jan 9 15:03:51.613: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jan 9 15:03:53.612: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jan 9 15:03:55.613: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jan 9 15:03:57.613: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jan 9 15:03:59.612: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jan 9 15:04:01.612: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jan 9 15:04:03.612: INFO: The status of Pod netserver-0 is Running (Ready = true)
Jan 9 15:04:03.618: INFO: The status of Pod netserver-1 is Running (Ready = true)
Jan 9 15:04:03.623: INFO: The status of Pod netserver-2 is Running (Ready = true)
Jan 9 15:04:03.629: INFO: The status of Pod netserver-3 is Running (Ready = true)
STEP: Creating test pods
Jan 9 15:04:05.648: INFO: Setting MaxTries for pod polling to 46 for networking test based on endpoint count 4
Jan 9 15:04:05.648: INFO: Breadth first check of 192.168.0.8 on host 172.18.0.4...
Jan 9 15:04:05.651: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://192.168.0.13:9080/dial?request=hostname&protocol=http&host=192.168.0.8&port=8083&tries=1'] Namespace:pod-network-test-7729 PodName:test-container-pod ContainerName:webserver Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jan 9 15:04:05.651: INFO: >>> kubeConfig: /tmp/kubeconfig Jan 9 15:04:05.652: INFO: ExecWithOptions: Clientset creation Jan 9 15:04:05.652: INFO: ExecWithOptions: execute(POST https://172.18.0.3:6443/api/v1/namespaces/pod-network-test-7729/pods/test-container-pod/exec?command=%2Fbin%2Fsh&command=-c&command=curl+-g+-q+-s+%27http%3A%2F%2F192.168.0.13%3A9080%2Fdial%3Frequest%3Dhostname%26protocol%3Dhttp%26host%3D192.168.0.8%26port%3D8083%26tries%3D1%27&container=webserver&container=webserver&stderr=true&stdout=true %!s(MISSING)) Jan 9 15:04:05.730: INFO: Waiting for responses: map[] Jan 9 15:04:05.730: INFO: reached 192.168.0.8 after 0/1 tries Jan 9 15:04:05.730: INFO: Breadth first check of 192.168.1.13 on host 172.18.0.6... Jan 9 15:04:05.733: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://192.168.0.13:9080/dial?request=hostname&protocol=http&host=192.168.1.13&port=8083&tries=1'] Namespace:pod-network-test-7729 PodName:test-container-pod ContainerName:webserver Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jan 9 15:04:05.733: INFO: >>> kubeConfig: /tmp/kubeconfig Jan 9 15:04:05.734: INFO: ExecWithOptions: Clientset creation Jan 9 15:04:05.734: INFO: ExecWithOptions: execute(POST https://172.18.0.3:6443/api/v1/namespaces/pod-network-test-7729/pods/test-container-pod/exec?command=%2Fbin%2Fsh&command=-c&command=curl+-g+-q+-s+%27http%3A%2F%2F192.168.0.13%3A9080%2Fdial%3Frequest%3Dhostname%26protocol%3Dhttp%26host%3D192.168.1.13%26port%3D8083%26tries%3D1%27&container=webserver&container=webserver&stderr=true&stdout=true %!s(MISSING)) Jan 9 15:04:05.827: INFO: Waiting for responses: map[] Jan 9 15:04:05.828: INFO: reached 192.168.1.13 after 0/1 tries Jan 9 15:04:05.828: INFO: Breadth first check of 192.168.6.9 on host 172.18.0.5... Jan 9 15:04:05.833: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://192.168.0.13:9080/dial?request=hostname&protocol=http&host=192.168.6.9&port=8083&tries=1'] Namespace:pod-network-test-7729 PodName:test-container-pod ContainerName:webserver Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jan 9 15:04:05.833: INFO: >>> kubeConfig: /tmp/kubeconfig Jan 9 15:04:05.834: INFO: ExecWithOptions: Clientset creation Jan 9 15:04:05.834: INFO: ExecWithOptions: execute(POST https://172.18.0.3:6443/api/v1/namespaces/pod-network-test-7729/pods/test-container-pod/exec?command=%2Fbin%2Fsh&command=-c&command=curl+-g+-q+-s+%27http%3A%2F%2F192.168.0.13%3A9080%2Fdial%3Frequest%3Dhostname%26protocol%3Dhttp%26host%3D192.168.6.9%26port%3D8083%26tries%3D1%27&container=webserver&container=webserver&stderr=true&stdout=true %!s(MISSING)) Jan 9 15:04:05.914: INFO: Waiting for responses: map[] Jan 9 15:04:05.914: INFO: reached 192.168.6.9 after 0/1 tries Jan 9 15:04:05.914: INFO: Breadth first check of 192.168.2.9 on host 172.18.0.7... 
Jan 9 15:04:05.918: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://192.168.0.13:9080/dial?request=hostname&protocol=http&host=192.168.2.9&port=8083&tries=1'] Namespace:pod-network-test-7729 PodName:test-container-pod ContainerName:webserver Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jan 9 15:04:05.918: INFO: >>> kubeConfig: /tmp/kubeconfig Jan 9 15:04:05.919: INFO: ExecWithOptions: Clientset creation Jan 9 15:04:05.919: INFO: ExecWithOptions: execute(POST https://172.18.0.3:6443/api/v1/namespaces/pod-network-test-7729/pods/test-container-pod/exec?command=%2Fbin%2Fsh&command=-c&command=curl+-g+-q+-s+%27http%3A%2F%2F192.168.0.13%3A9080%2Fdial%3Frequest%3Dhostname%26protocol%3Dhttp%26host%3D192.168.2.9%26port%3D8083%26tries%3D1%27&container=webserver&container=webserver&stderr=true&stdout=true %!s(MISSING)) Jan 9 15:04:10.997: INFO: Waiting for responses: map[netserver-3:{}] Jan 9 15:04:12.999: INFO: Output of kubectl describe pod pod-network-test-7729/netserver-0: Jan 9 15:04:12.999: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=pod-network-test-7729 describe pod netserver-0 --namespace=pod-network-test-7729' Jan 9 15:04:13.101: INFO: stderr: "" Jan 9 15:04:13.101: INFO: stdout: "Name: netserver-0\nNamespace: pod-network-test-7729\nPriority: 0\nNode: k8s-upgrade-and-conformance-viu2kk-md-0-c5nrs-77b68b5644-2bgsv/172.18.0.4\nStart Time: Mon, 09 Jan 2023 15:03:41 +0000\nLabels: selector-79e9e83b-9dff-4884-b628-c383aa098a80=true\nAnnotations: <none>\nStatus: Running\nIP: 192.168.0.8\nIPs:\n IP: 192.168.0.8\nContainers:\n webserver:\n Container ID: containerd://d482b9686c93dc24086e4623bc59b00ba3b821fe04fd5933f5e986e9eba4dc2a\n Image: k8s.gcr.io/e2e-test-images/agnhost:2.39\n Image ID: k8s.gcr.io/e2e-test-images/agnhost@sha256:7e8bdd271312fd25fc5ff5a8f04727be84044eb3d7d8d03611972a6752e2e11e\n Ports: 8083/TCP, 8081/UDP\n Host Ports: 0/TCP, 0/UDP\n Args:\n netexec\n --http-port=8083\n --udp-port=8081\n State: Running\n Started: Mon, 09 Jan 2023 15:03:42 +0000\n Ready: True\n Restart Count: 0\n Liveness: http-get http://:8083/healthz delay=10s timeout=30s period=10s #success=1 #failure=3\n Readiness: http-get http://:8083/healthz delay=10s timeout=30s period=10s #success=1 #failure=3\n Environment: <none>\n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-245h6 (ro)\nConditions:\n Type Status\n Initialized True \n Ready True \n ContainersReady True \n PodScheduled True \nVolumes:\n kube-api-access-245h6:\n Type: Projected (a volume that contains injected data from multiple sources)\n TokenExpirationSeconds: 3607\n ConfigMapName: kube-root-ca.crt\n ConfigMapOptional: <nil>\n DownwardAPI: true\nQoS Class: BestEffort\nNode-Selectors: kubernetes.io/hostname=k8s-upgrade-and-conformance-viu2kk-md-0-c5nrs-77b68b5644-2bgsv\nTolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s\n node.kubernetes.io/unreachable:NoExecute op=Exists for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled 31s default-scheduler Successfully assigned pod-network-test-7729/netserver-0 to k8s-upgrade-and-conformance-viu2kk-md-0-c5nrs-77b68b5644-2bgsv\n Normal Pulled 31s kubelet Container image \"k8s.gcr.io/e2e-test-images/agnhost:2.39\" already present on machine\n Normal Created 31s kubelet Created container webserver\n Normal Started 31s kubelet Started container webserver\n" Jan 9 15:04:13.101: INFO: Name: netserver-0 Namespace: 
pod-network-test-7729 Priority: 0 Node: k8s-upgrade-and-conformance-viu2kk-md-0-c5nrs-77b68b5644-2bgsv/172.18.0.4 Start Time: Mon, 09 Jan 2023 15:03:41 +0000 Labels: selector-79e9e83b-9dff-4884-b628-c383aa098a80=true Annotations: <none> Status: Running IP: 192.168.0.8 IPs: IP: 192.168.0.8 Containers: webserver: Container ID: containerd://d482b9686c93dc24086e4623bc59b00ba3b821fe04fd5933f5e986e9eba4dc2a Image: k8s.gcr.io/e2e-test-images/agnhost:2.39 Image ID: k8s.gcr.io/e2e-test-images/agnhost@sha256:7e8bdd271312fd25fc5ff5a8f04727be84044eb3d7d8d03611972a6752e2e11e Ports: 8083/TCP, 8081/UDP Host Ports: 0/TCP, 0/UDP Args: netexec --http-port=8083 --udp-port=8081 State: Running Started: Mon, 09 Jan 2023 15:03:42 +0000 Ready: True Restart Count: 0 Liveness: http-get http://:8083/healthz delay=10s timeout=30s period=10s #success=1 #failure=3 Readiness: http-get http://:8083/healthz delay=10s timeout=30s period=10s #success=1 #failure=3 Environment: <none> Mounts: /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-245h6 (ro) Conditions: Type Status Initialized True Ready True ContainersReady True PodScheduled True Volumes: kube-api-access-245h6: Type: Projected (a volume that contains injected data from multiple sources) TokenExpirationSeconds: 3607 ConfigMapName: kube-root-ca.crt ConfigMapOptional: <nil> DownwardAPI: true QoS Class: BestEffort Node-Selectors: kubernetes.io/hostname=k8s-upgrade-and-conformance-viu2kk-md-0-c5nrs-77b68b5644-2bgsv Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s node.kubernetes.io/unreachable:NoExecute op=Exists for 300s Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Scheduled 31s default-scheduler Successfully assigned pod-network-test-7729/netserver-0 to k8s-upgrade-and-conformance-viu2kk-md-0-c5nrs-77b68b5644-2bgsv Normal Pulled 31s kubelet Container image "k8s.gcr.io/e2e-test-images/agnhost:2.39" already present on machine Normal Created 31s kubelet Created container webserver Normal Started 31s kubelet Started container webserver Jan 9 15:04:13.101: INFO: Output of kubectl describe pod pod-network-test-7729/netserver-1: Jan 9 15:04:13.101: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=pod-network-test-7729 describe pod netserver-1 --namespace=pod-network-test-7729' Jan 9 15:04:13.208: INFO: stderr: "" Jan 9 15:04:13.208: INFO: stdout: "Name: netserver-1\nNamespace: pod-network-test-7729\nPriority: 0\nNode: k8s-upgrade-and-conformance-viu2kk-md-0-c5nrs-77b68b5644-t8jmv/172.18.0.6\nStart Time: Mon, 09 Jan 2023 15:03:41 +0000\nLabels: selector-79e9e83b-9dff-4884-b628-c383aa098a80=true\nAnnotations: <none>\nStatus: Running\nIP: 192.168.1.13\nIPs:\n IP: 192.168.1.13\nContainers:\n webserver:\n Container ID: containerd://c89b857c8d0a5e0ca35c933fd7aaee668c54f692964b85840c5c563be2ab15b4\n Image: k8s.gcr.io/e2e-test-images/agnhost:2.39\n Image ID: k8s.gcr.io/e2e-test-images/agnhost@sha256:7e8bdd271312fd25fc5ff5a8f04727be84044eb3d7d8d03611972a6752e2e11e\n Ports: 8083/TCP, 8081/UDP\n Host Ports: 0/TCP, 0/UDP\n Args:\n netexec\n --http-port=8083\n --udp-port=8081\n State: Running\n Started: Mon, 09 Jan 2023 15:03:42 +0000\n Ready: True\n Restart Count: 0\n Liveness: http-get http://:8083/healthz delay=10s timeout=30s period=10s #success=1 #failure=3\n Readiness: http-get http://:8083/healthz delay=10s timeout=30s period=10s #success=1 #failure=3\n Environment: <none>\n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-phhs4 (ro)\nConditions:\n Type 
Status\n Initialized True \n Ready True \n ContainersReady True \n PodScheduled True \nVolumes:\n kube-api-access-phhs4:\n Type: Projected (a volume that contains injected data from multiple sources)\n TokenExpirationSeconds: 3607\n ConfigMapName: kube-root-ca.crt\n ConfigMapOptional: <nil>\n DownwardAPI: true\nQoS Class: BestEffort\nNode-Selectors: kubernetes.io/hostname=k8s-upgrade-and-conformance-viu2kk-md-0-c5nrs-77b68b5644-t8jmv\nTolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s\n node.kubernetes.io/unreachable:NoExecute op=Exists for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled 31s default-scheduler Successfully assigned pod-network-test-7729/netserver-1 to k8s-upgrade-and-conformance-viu2kk-md-0-c5nrs-77b68b5644-t8jmv\n Normal Pulled 31s kubelet Container image \"k8s.gcr.io/e2e-test-images/agnhost:2.39\" already present on machine\n Normal Created 31s kubelet Created container webserver\n Normal Started 31s kubelet Started container webserver\n" Jan 9 15:04:13.208: INFO: Name: netserver-1 Namespace: pod-network-test-7729 Priority: 0 Node: k8s-upgrade-and-conformance-viu2kk-md-0-c5nrs-77b68b5644-t8jmv/172.18.0.6 Start Time: Mon, 09 Jan 2023 15:03:41 +0000 Labels: selector-79e9e83b-9dff-4884-b628-c383aa098a80=true Annotations: <none> Status: Running IP: 192.168.1.13 IPs: IP: 192.168.1.13 Containers: webserver: Container ID: containerd://c89b857c8d0a5e0ca35c933fd7aaee668c54f692964b85840c5c563be2ab15b4 Image: k8s.gcr.io/e2e-test-images/agnhost:2.39 Image ID: k8s.gcr.io/e2e-test-images/agnhost@sha256:7e8bdd271312fd25fc5ff5a8f04727be84044eb3d7d8d03611972a6752e2e11e Ports: 8083/TCP, 8081/UDP Host Ports: 0/TCP, 0/UDP Args: netexec --http-port=8083 --udp-port=8081 State: Running Started: Mon, 09 Jan 2023 15:03:42 +0000 Ready: True Restart Count: 0 Liveness: http-get http://:8083/healthz delay=10s timeout=30s period=10s #success=1 #failure=3 Readiness: http-get http://:8083/healthz delay=10s timeout=30s period=10s #success=1 #failure=3 Environment: <none> Mounts: /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-phhs4 (ro) Conditions: Type Status Initialized True Ready True ContainersReady True PodScheduled True Volumes: kube-api-access-phhs4: Type: Projected (a volume that contains injected data from multiple sources) TokenExpirationSeconds: 3607 ConfigMapName: kube-root-ca.crt ConfigMapOptional: <nil> DownwardAPI: true QoS Class: BestEffort Node-Selectors: kubernetes.io/hostname=k8s-upgrade-and-conformance-viu2kk-md-0-c5nrs-77b68b5644-t8jmv Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s node.kubernetes.io/unreachable:NoExecute op=Exists for 300s Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Scheduled 31s default-scheduler Successfully assigned pod-network-test-7729/netserver-1 to k8s-upgrade-and-conformance-viu2kk-md-0-c5nrs-77b68b5644-t8jmv Normal Pulled 31s kubelet Container image "k8s.gcr.io/e2e-test-images/agnhost:2.39" already present on machine Normal Created 31s kubelet Created container webserver Normal Started 31s kubelet Started container webserver Jan 9 15:04:13.208: INFO: Output of kubectl describe pod pod-network-test-7729/netserver-2: Jan 9 15:04:13.208: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=pod-network-test-7729 describe pod netserver-2 --namespace=pod-network-test-7729' Jan 9 15:04:13.309: INFO: stderr: "" Jan 9 15:04:13.309: INFO: stdout: "Name: netserver-2\nNamespace: 
pod-network-test-7729\nPriority: 0\nNode: k8s-upgrade-and-conformance-viu2kk-worker-1r6syi/172.18.0.5\nStart Time: Mon, 09 Jan 2023 15:03:41 +0000\nLabels: selector-79e9e83b-9dff-4884-b628-c383aa098a80=true\nAnnotations: <none>\nStatus: Running\nIP: 192.168.6.9\nIPs:\n IP: 192.168.6.9\nContainers:\n webserver:\n Container ID: containerd://146e1445d151e4d7c0ba844d6f99d0991d97398ebc5addc853beaa769ed5b0d0\n Image: k8s.gcr.io/e2e-test-images/agnhost:2.39\n Image ID: k8s.gcr.io/e2e-test-images/agnhost@sha256:7e8bdd271312fd25fc5ff5a8f04727be84044eb3d7d8d03611972a6752e2e11e\n Ports: 8083/TCP, 8081/UDP\n Host Ports: 0/TCP, 0/UDP\n Args:\n netexec\n --http-port=8083\n --udp-port=8081\n State: Running\n Started: Mon, 09 Jan 2023 15:03:42 +0000\n Ready: True\n Restart Count: 0\n Liveness: http-get http://:8083/healthz delay=10s timeout=30s period=10s #success=1 #failure=3\n Readiness: http-get http://:8083/healthz delay=10s timeout=30s period=10s #success=1 #failure=3\n Environment: <none>\n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-zkg6b (ro)\nConditions:\n Type Status\n Initialized True \n Ready True \n ContainersReady True \n PodScheduled True \nVolumes:\n kube-api-access-zkg6b:\n Type: Projected (a volume that contains injected data from multiple sources)\n TokenExpirationSeconds: 3607\n ConfigMapName: kube-root-ca.crt\n ConfigMapOptional: <nil>\n DownwardAPI: true\nQoS Class: BestEffort\nNode-Selectors: kubernetes.io/hostname=k8s-upgrade-and-conformance-viu2kk-worker-1r6syi\nTolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s\n node.kubernetes.io/unreachable:NoExecute op=Exists for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled 31s default-scheduler Successfully assigned pod-network-test-7729/netserver-2 to k8s-upgrade-and-conformance-viu2kk-worker-1r6syi\n Normal Pulled 31s kubelet Container image \"k8s.gcr.io/e2e-test-images/agnhost:2.39\" already present on machine\n Normal Created 31s kubelet Created container webserver\n Normal Started 31s kubelet Started container webserver\n" Jan 9 15:04:13.309: INFO: Name: netserver-2 Namespace: pod-network-test-7729 Priority: 0 Node: k8s-upgrade-and-conformance-viu2kk-worker-1r6syi/172.18.0.5 Start Time: Mon, 09 Jan 2023 15:03:41 +0000 Labels: selector-79e9e83b-9dff-4884-b628-c383aa098a80=true Annotations: <none> Status: Running IP: 192.168.6.9 IPs: IP: 192.168.6.9 Containers: webserver: Container ID: containerd://146e1445d151e4d7c0ba844d6f99d0991d97398ebc5addc853beaa769ed5b0d0 Image: k8s.gcr.io/e2e-test-images/agnhost:2.39 Image ID: k8s.gcr.io/e2e-test-images/agnhost@sha256:7e8bdd271312fd25fc5ff5a8f04727be84044eb3d7d8d03611972a6752e2e11e Ports: 8083/TCP, 8081/UDP Host Ports: 0/TCP, 0/UDP Args: netexec --http-port=8083 --udp-port=8081 State: Running Started: Mon, 09 Jan 2023 15:03:42 +0000 Ready: True Restart Count: 0 Liveness: http-get http://:8083/healthz delay=10s timeout=30s period=10s #success=1 #failure=3 Readiness: http-get http://:8083/healthz delay=10s timeout=30s period=10s #success=1 #failure=3 Environment: <none> Mounts: /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-zkg6b (ro) Conditions: Type Status Initialized True Ready True ContainersReady True PodScheduled True Volumes: kube-api-access-zkg6b: Type: Projected (a volume that contains injected data from multiple sources) TokenExpirationSeconds: 3607 ConfigMapName: kube-root-ca.crt ConfigMapOptional: <nil> DownwardAPI: true QoS Class: BestEffort Node-Selectors: 
kubernetes.io/hostname=k8s-upgrade-and-conformance-viu2kk-worker-1r6syi Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s node.kubernetes.io/unreachable:NoExecute op=Exists for 300s Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Scheduled 31s default-scheduler Successfully assigned pod-network-test-7729/netserver-2 to k8s-upgrade-and-conformance-viu2kk-worker-1r6syi Normal Pulled 31s kubelet Container image "k8s.gcr.io/e2e-test-images/agnhost:2.39" already present on machine Normal Created 31s kubelet Created container webserver Normal Started 31s kubelet Started container webserver Jan 9 15:04:13.309: INFO: Output of kubectl describe pod pod-network-test-7729/netserver-3: Jan 9 15:04:13.309: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=pod-network-test-7729 describe pod netserver-3 --namespace=pod-network-test-7729' Jan 9 15:04:13.406: INFO: stderr: "" Jan 9 15:04:13.406: INFO: stdout: "Name: netserver-3\nNamespace: pod-network-test-7729\nPriority: 0\nNode: k8s-upgrade-and-conformance-viu2kk-worker-qb9e9b/172.18.0.7\nStart Time: Mon, 09 Jan 2023 15:03:41 +0000\nLabels: selector-79e9e83b-9dff-4884-b628-c383aa098a80=true\nAnnotations: <none>\nStatus: Running\nIP: 192.168.2.9\nIPs:\n IP: 192.168.2.9\nContainers:\n webserver:\n Container ID: containerd://71412eecaec3df70dde2c0be6d6230a6e2f603d5953eca24fae570ed4ca53e23\n Image: k8s.gcr.io/e2e-test-images/agnhost:2.39\n Image ID: k8s.gcr.io/e2e-test-images/agnhost@sha256:7e8bdd271312fd25fc5ff5a8f04727be84044eb3d7d8d03611972a6752e2e11e\n Ports: 8083/TCP, 8081/UDP\n Host Ports: 0/TCP, 0/UDP\n Args:\n netexec\n --http-port=8083\n --udp-port=8081\n State: Running\n Started: Mon, 09 Jan 2023 15:03:43 +0000\n Ready: True\n Restart Count: 0\n Liveness: http-get http://:8083/healthz delay=10s timeout=30s period=10s #success=1 #failure=3\n Readiness: http-get http://:8083/healthz delay=10s timeout=30s period=10s #success=1 #failure=3\n Environment: <none>\n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-4rlqb (ro)\nConditions:\n Type Status\n Initialized True \n Ready True \n ContainersReady True \n PodScheduled True \nVolumes:\n kube-api-access-4rlqb:\n Type: Projected (a volume that contains injected data from multiple sources)\n TokenExpirationSeconds: 3607\n ConfigMapName: kube-root-ca.crt\n ConfigMapOptional: <nil>\n DownwardAPI: true\nQoS Class: BestEffort\nNode-Selectors: kubernetes.io/hostname=k8s-upgrade-and-conformance-viu2kk-worker-qb9e9b\nTolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s\n node.kubernetes.io/unreachable:NoExecute op=Exists for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled 31s default-scheduler Successfully assigned pod-network-test-7729/netserver-3 to k8s-upgrade-and-conformance-viu2kk-worker-qb9e9b\n Warning FailedMount 31s kubelet MountVolume.SetUp failed for volume \"kube-api-access-4rlqb\" : failed to sync configmap cache: timed out waiting for the condition\n Normal Pulled 30s kubelet Container image \"k8s.gcr.io/e2e-test-images/agnhost:2.39\" already present on machine\n Normal Created 30s kubelet Created container webserver\n Normal Started 30s kubelet Started container webserver\n" Jan 9 15:04:13.407: INFO: Name: netserver-3 Namespace: pod-network-test-7729 Priority: 0 Node: k8s-upgrade-and-conformance-viu2kk-worker-qb9e9b/172.18.0.7 Start Time: Mon, 09 Jan 2023 15:03:41 +0000 Labels: selector-79e9e83b-9dff-4884-b628-c383aa098a80=true 
Annotations: <none> Status: Running IP: 192.168.2.9 IPs: IP: 192.168.2.9 Containers: webserver: Container ID: containerd://71412eecaec3df70dde2c0be6d6230a6e2f603d5953eca24fae570ed4ca53e23 Image: k8s.gcr.io/e2e-test-images/agnhost:2.39 Image ID: k8s.gcr.io/e2e-test-images/agnhost@sha256:7e8bdd271312fd25fc5ff5a8f04727be84044eb3d7d8d03611972a6752e2e11e Ports: 8083/TCP, 8081/UDP Host Ports: 0/TCP, 0/UDP Args: netexec --http-port=8083 --udp-port=8081 State: Running Started: Mon, 09 Jan 2023 15:03:43 +0000 Ready: True Restart Count: 0 Liveness: http-get http://:8083/healthz delay=10s timeout=30s period=10s #success=1 #failure=3 Readiness: http-get http://:8083/healthz delay=10s timeout=30s period=10s #success=1 #failure=3 Environment: <none> Mounts: /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-4rlqb (ro) Conditions: Type Status Initialized True Ready True ContainersReady True PodScheduled True Volumes: kube-api-access-4rlqb: Type: Projected (a volume that contains injected data from multiple sources) TokenExpirationSeconds: 3607 ConfigMapName: kube-root-ca.crt ConfigMapOptional: <nil> DownwardAPI: true QoS Class: BestEffort Node-Selectors: kubernetes.io/hostname=k8s-upgrade-and-conformance-viu2kk-worker-qb9e9b Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s node.kubernetes.io/unreachable:NoExecute op=Exists for 300s Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Scheduled 31s default-scheduler Successfully assigned pod-network-test-7729/netserver-3 to k8s-upgrade-and-conformance-viu2kk-worker-qb9e9b Warning FailedMount 31s kubelet MountVolume.SetUp failed for volume "kube-api-access-4rlqb" : failed to sync configmap cache: timed out waiting for the condition Normal Pulled 30s kubelet Container image "k8s.gcr.io/e2e-test-images/agnhost:2.39" already present on machine Normal Created 30s kubelet Created container webserver Normal Started 30s kubelet Started container webserver Jan 9 15:04:13.407: INFO: encountered error during dial (did not find expected responses... Tries 1 Command curl -g -q -s 'http://192.168.0.13:9080/dial?request=hostname&protocol=http&host=192.168.2.9&port=8083&tries=1' retrieved map[] expected map[netserver-3:{}]) Jan 9 15:04:13.407: INFO: ...failed...will try again in next pass Jan 9 15:04:13.407: INFO: Going to retry 1 out of 4 pods.... Jan 9 15:04:13.407: INFO: Doublechecking 1 pods in host 172.18.0.7 which weren't seen the first time. 
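The retry loop that follows is the e2e framework's connectivity re-check: it repeatedly execs the same curl inside test-container-pod against the agnhost netexec /dial endpoint at 192.168.0.13:9080, which in turn tries to reach netserver-3 at 192.168.2.9:8083 and report back the hostname it sees. Every attempt ends with "Waiting for responses: map[netserver-3:{}]", i.e. an empty result, meaning netserver-3 never answers. Assuming the workload-cluster kubeconfig shown in the log (/tmp/kubeconfig) and that the test pods are still running, the same probe could be reproduced by hand roughly like this (a debugging sketch, not part of the recorded run):

    # Re-run the exact dial probe the test executes inside test-container-pod
    kubectl --kubeconfig=/tmp/kubeconfig -n pod-network-test-7729 exec test-container-pod -c webserver -- \
      /bin/sh -c "curl -g -q -s 'http://192.168.0.13:9080/dial?request=hostname&protocol=http&host=192.168.2.9&port=8083&tries=1'"

    # Bypass the dial wrapper and curl the backend pod directly from test-container-pod
    # (agnhost netexec should also answer on /hostname with the pod's hostname)
    kubectl --kubeconfig=/tmp/kubeconfig -n pod-network-test-7729 exec test-container-pod -c webserver -- \
      curl -s --max-time 5 http://192.168.2.9:8083/hostname

A healthy cross-node path would return netserver-3 in the dial responses (the test's expected map[netserver-3:{}]); as the log shows, traffic from the 192.168.0.x pod network toward 192.168.2.9 on worker-qb9e9b never gets a reply, which suggests a cross-node pod networking problem on that node rather than a problem with the pod itself (its kubelet readiness probe on :8083 passes and it reports Running/Ready).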
Jan 9 15:04:13.407: INFO: Now attempting to probe pod [[[ 192.168.2.9 ]]] Jan 9 15:04:13.410: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://192.168.0.13:9080/dial?request=hostname&protocol=http&host=192.168.2.9&port=8083&tries=1'] Namespace:pod-network-test-7729 PodName:test-container-pod ContainerName:webserver Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jan 9 15:04:13.410: INFO: >>> kubeConfig: /tmp/kubeconfig Jan 9 15:04:13.411: INFO: ExecWithOptions: Clientset creation Jan 9 15:04:13.411: INFO: ExecWithOptions: execute(POST https://172.18.0.3:6443/api/v1/namespaces/pod-network-test-7729/pods/test-container-pod/exec?command=%2Fbin%2Fsh&command=-c&command=curl+-g+-q+-s+%27http%3A%2F%2F192.168.0.13%3A9080%2Fdial%3Frequest%3Dhostname%26protocol%3Dhttp%26host%3D192.168.2.9%26port%3D8083%26tries%3D1%27&container=webserver&container=webserver&stderr=true&stdout=true %!s(MISSING)) Jan 9 15:04:18.509: INFO: Waiting for responses: map[netserver-3:{}] Jan 9 15:04:20.515: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://192.168.0.13:9080/dial?request=hostname&protocol=http&host=192.168.2.9&port=8083&tries=1'] Namespace:pod-network-test-7729 PodName:test-container-pod ContainerName:webserver Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jan 9 15:04:20.515: INFO: >>> kubeConfig: /tmp/kubeconfig Jan 9 15:04:20.518: INFO: ExecWithOptions: Clientset creation Jan 9 15:04:20.518: INFO: ExecWithOptions: execute(POST https://172.18.0.3:6443/api/v1/namespaces/pod-network-test-7729/pods/test-container-pod/exec?command=%2Fbin%2Fsh&command=-c&command=curl+-g+-q+-s+%27http%3A%2F%2F192.168.0.13%3A9080%2Fdial%3Frequest%3Dhostname%26protocol%3Dhttp%26host%3D192.168.2.9%26port%3D8083%26tries%3D1%27&container=webserver&container=webserver&stderr=true&stdout=true %!s(MISSING)) Jan 9 15:04:25.639: INFO: Waiting for responses: map[netserver-3:{}] Jan 9 15:04:27.644: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://192.168.0.13:9080/dial?request=hostname&protocol=http&host=192.168.2.9&port=8083&tries=1'] Namespace:pod-network-test-7729 PodName:test-container-pod ContainerName:webserver Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jan 9 15:04:27.644: INFO: >>> kubeConfig: /tmp/kubeconfig Jan 9 15:04:27.645: INFO: ExecWithOptions: Clientset creation Jan 9 15:04:27.645: INFO: ExecWithOptions: execute(POST https://172.18.0.3:6443/api/v1/namespaces/pod-network-test-7729/pods/test-container-pod/exec?command=%2Fbin%2Fsh&command=-c&command=curl+-g+-q+-s+%27http%3A%2F%2F192.168.0.13%3A9080%2Fdial%3Frequest%3Dhostname%26protocol%3Dhttp%26host%3D192.168.2.9%26port%3D8083%26tries%3D1%27&container=webserver&container=webserver&stderr=true&stdout=true %!s(MISSING)) Jan 9 15:04:32.734: INFO: Waiting for responses: map[netserver-3:{}] Jan 9 15:04:34.739: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://192.168.0.13:9080/dial?request=hostname&protocol=http&host=192.168.2.9&port=8083&tries=1'] Namespace:pod-network-test-7729 PodName:test-container-pod ContainerName:webserver Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jan 9 15:04:34.739: INFO: >>> kubeConfig: /tmp/kubeconfig Jan 9 15:04:34.740: INFO: ExecWithOptions: Clientset creation Jan 9 15:04:34.740: INFO: ExecWithOptions: execute(POST 
https://172.18.0.3:6443/api/v1/namespaces/pod-network-test-7729/pods/test-container-pod/exec?command=%2Fbin%2Fsh&command=-c&command=curl+-g+-q+-s+%27http%3A%2F%2F192.168.0.13%3A9080%2Fdial%3Frequest%3Dhostname%26protocol%3Dhttp%26host%3D192.168.2.9%26port%3D8083%26tries%3D1%27&container=webserver&container=webserver&stderr=true&stdout=true %!s(MISSING))
Jan 9 15:04:39.850: INFO: Waiting for responses: map[netserver-3:{}]
(identical ExecWithOptions/curl probes of http://192.168.0.13:9080/dial?request=hostname&protocol=http&host=192.168.2.9&port=8083&tries=1 repeat roughly every 7 seconds from 15:04:41 through 15:09:24; every attempt ends with "Waiting for responses: map[netserver-3:{}]" and no response from netserver-3 is ever received; the final attempt follows)
Jan 9 15:09:26.078: INFO:
ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://192.168.0.13:9080/dial?request=hostname&protocol=http&host=192.168.2.9&port=8083&tries=1'] Namespace:pod-network-test-7729 PodName:test-container-pod ContainerName:webserver Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jan 9 15:09:26.078: INFO: >>> kubeConfig: /tmp/kubeconfig Jan 9 15:09:26.080: INFO: ExecWithOptions: Clientset creation Jan 9 15:09:26.080: INFO: ExecWithOptions: execute(POST https://172.18.0.3:6443/api/v1/namespaces/pod-network-test-7729/pods/test-container-pod/exec?command=%2Fbin%2Fsh&command=-c&command=curl+-g+-q+-s+%27http%3A%2F%2F192.168.0.13%3A9080%2Fdial%3Frequest%3Dhostname%26protocol%3Dhttp%26host%3D192.168.2.9%26port%3D8083%26tries%3D1%27&container=webserver&container=webserver&stderr=true&stdout=true %!s(MISSING)) Jan 9 15:09:31.239: INFO: Waiting for responses: map[netserver-3:{}] Jan 9 15:09:33.246: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://192.168.0.13:9080/dial?request=hostname&protocol=http&host=192.168.2.9&port=8083&tries=1'] Namespace:pod-network-test-7729 PodName:test-container-pod ContainerName:webserver Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jan 9 15:09:33.246: INFO: >>> kubeConfig: /tmp/kubeconfig Jan 9 15:09:33.248: INFO: ExecWithOptions: Clientset creation Jan 9 15:09:33.249: INFO: ExecWithOptions: execute(POST https://172.18.0.3:6443/api/v1/namespaces/pod-network-test-7729/pods/test-container-pod/exec?command=%2Fbin%2Fsh&command=-c&command=curl+-g+-q+-s+%27http%3A%2F%2F192.168.0.13%3A9080%2Fdial%3Frequest%3Dhostname%26protocol%3Dhttp%26host%3D192.168.2.9%26port%3D8083%26tries%3D1%27&container=webserver&container=webserver&stderr=true&stdout=true %!s(MISSING)) Jan 9 15:09:38.405: INFO: Waiting for responses: map[netserver-3:{}] Jan 9 15:09:40.408: INFO: Output of kubectl describe pod pod-network-test-7729/netserver-0: Jan 9 15:09:40.408: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=pod-network-test-7729 describe pod netserver-0 --namespace=pod-network-test-7729' Jan 9 15:09:40.660: INFO: stderr: "" Jan 9 15:09:40.660: INFO: stdout: "Name: netserver-0\nNamespace: pod-network-test-7729\nPriority: 0\nNode: k8s-upgrade-and-conformance-viu2kk-md-0-c5nrs-77b68b5644-2bgsv/172.18.0.4\nStart Time: Mon, 09 Jan 2023 15:03:41 +0000\nLabels: selector-79e9e83b-9dff-4884-b628-c383aa098a80=true\nAnnotations: <none>\nStatus: Running\nIP: 192.168.0.8\nIPs:\n IP: 192.168.0.8\nContainers:\n webserver:\n Container ID: containerd://d482b9686c93dc24086e4623bc59b00ba3b821fe04fd5933f5e986e9eba4dc2a\n Image: k8s.gcr.io/e2e-test-images/agnhost:2.39\n Image ID: k8s.gcr.io/e2e-test-images/agnhost@sha256:7e8bdd271312fd25fc5ff5a8f04727be84044eb3d7d8d03611972a6752e2e11e\n Ports: 8083/TCP, 8081/UDP\n Host Ports: 0/TCP, 0/UDP\n Args:\n netexec\n --http-port=8083\n --udp-port=8081\n State: Running\n Started: Mon, 09 Jan 2023 15:03:42 +0000\n Ready: True\n Restart Count: 0\n Liveness: http-get http://:8083/healthz delay=10s timeout=30s period=10s #success=1 #failure=3\n Readiness: http-get http://:8083/healthz delay=10s timeout=30s period=10s #success=1 #failure=3\n Environment: <none>\n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-245h6 (ro)\nConditions:\n Type Status\n Initialized True \n Ready True \n ContainersReady True \n PodScheduled True \nVolumes:\n kube-api-access-245h6:\n Type: Projected (a volume that contains injected data from multiple 
sources)\n TokenExpirationSeconds: 3607\n ConfigMapName: kube-root-ca.crt\n ConfigMapOptional: <nil>\n DownwardAPI: true\nQoS Class: BestEffort\nNode-Selectors: kubernetes.io/hostname=k8s-upgrade-and-conformance-viu2kk-md-0-c5nrs-77b68b5644-2bgsv\nTolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s\n node.kubernetes.io/unreachable:NoExecute op=Exists for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled 5m59s default-scheduler Successfully assigned pod-network-test-7729/netserver-0 to k8s-upgrade-and-conformance-viu2kk-md-0-c5nrs-77b68b5644-2bgsv\n Normal Pulled 5m58s kubelet Container image \"k8s.gcr.io/e2e-test-images/agnhost:2.39\" already present on machine\n Normal Created 5m58s kubelet Created container webserver\n Normal Started 5m58s kubelet Started container webserver\n" Jan 9 15:09:40.660: INFO: Name: netserver-0 Namespace: pod-network-test-7729 Priority: 0 Node: k8s-upgrade-and-conformance-viu2kk-md-0-c5nrs-77b68b5644-2bgsv/172.18.0.4 Start Time: Mon, 09 Jan 2023 15:03:41 +0000 Labels: selector-79e9e83b-9dff-4884-b628-c383aa098a80=true Annotations: <none> Status: Running IP: 192.168.0.8 IPs: IP: 192.168.0.8 Containers: webserver: Container ID: containerd://d482b9686c93dc24086e4623bc59b00ba3b821fe04fd5933f5e986e9eba4dc2a Image: k8s.gcr.io/e2e-test-images/agnhost:2.39 Image ID: k8s.gcr.io/e2e-test-images/agnhost@sha256:7e8bdd271312fd25fc5ff5a8f04727be84044eb3d7d8d03611972a6752e2e11e Ports: 8083/TCP, 8081/UDP Host Ports: 0/TCP, 0/UDP Args: netexec --http-port=8083 --udp-port=8081 State: Running Started: Mon, 09 Jan 2023 15:03:42 +0000 Ready: True Restart Count: 0 Liveness: http-get http://:8083/healthz delay=10s timeout=30s period=10s #success=1 #failure=3 Readiness: http-get http://:8083/healthz delay=10s timeout=30s period=10s #success=1 #failure=3 Environment: <none> Mounts: /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-245h6 (ro) Conditions: Type Status Initialized True Ready True ContainersReady True PodScheduled True Volumes: kube-api-access-245h6: Type: Projected (a volume that contains injected data from multiple sources) TokenExpirationSeconds: 3607 ConfigMapName: kube-root-ca.crt ConfigMapOptional: <nil> DownwardAPI: true QoS Class: BestEffort Node-Selectors: kubernetes.io/hostname=k8s-upgrade-and-conformance-viu2kk-md-0-c5nrs-77b68b5644-2bgsv Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s node.kubernetes.io/unreachable:NoExecute op=Exists for 300s Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Scheduled 5m59s default-scheduler Successfully assigned pod-network-test-7729/netserver-0 to k8s-upgrade-and-conformance-viu2kk-md-0-c5nrs-77b68b5644-2bgsv Normal Pulled 5m58s kubelet Container image "k8s.gcr.io/e2e-test-images/agnhost:2.39" already present on machine Normal Created 5m58s kubelet Created container webserver Normal Started 5m58s kubelet Started container webserver Jan 9 15:09:40.660: INFO: Output of kubectl describe pod pod-network-test-7729/netserver-1: Jan 9 15:09:40.660: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=pod-network-test-7729 describe pod netserver-1 --namespace=pod-network-test-7729' Jan 9 15:09:41.013: INFO: stderr: "" Jan 9 15:09:41.013: INFO: stdout: "Name: netserver-1\nNamespace: pod-network-test-7729\nPriority: 0\nNode: k8s-upgrade-and-conformance-viu2kk-md-0-c5nrs-77b68b5644-t8jmv/172.18.0.6\nStart Time: Mon, 09 Jan 2023 15:03:41 +0000\nLabels: 
selector-79e9e83b-9dff-4884-b628-c383aa098a80=true\nAnnotations: <none>\nStatus: Running\nIP: 192.168.1.13\nIPs:\n IP: 192.168.1.13\nContainers:\n webserver:\n Container ID: containerd://c89b857c8d0a5e0ca35c933fd7aaee668c54f692964b85840c5c563be2ab15b4\n Image: k8s.gcr.io/e2e-test-images/agnhost:2.39\n Image ID: k8s.gcr.io/e2e-test-images/agnhost@sha256:7e8bdd271312fd25fc5ff5a8f04727be84044eb3d7d8d03611972a6752e2e11e\n Ports: 8083/TCP, 8081/UDP\n Host Ports: 0/TCP, 0/UDP\n Args:\n netexec\n --http-port=8083\n --udp-port=8081\n State: Running\n Started: Mon, 09 Jan 2023 15:03:42 +0000\n Ready: True\n Restart Count: 0\n Liveness: http-get http://:8083/healthz delay=10s timeout=30s period=10s #success=1 #failure=3\n Readiness: http-get http://:8083/healthz delay=10s timeout=30s period=10s #success=1 #failure=3\n Environment: <none>\n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-phhs4 (ro)\nConditions:\n Type Status\n Initialized True \n Ready True \n ContainersReady True \n PodScheduled True \nVolumes:\n kube-api-access-phhs4:\n Type: Projected (a volume that contains injected data from multiple sources)\n TokenExpirationSeconds: 3607\n ConfigMapName: kube-root-ca.crt\n ConfigMapOptional: <nil>\n DownwardAPI: true\nQoS Class: BestEffort\nNode-Selectors: kubernetes.io/hostname=k8s-upgrade-and-conformance-viu2kk-md-0-c5nrs-77b68b5644-t8jmv\nTolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s\n node.kubernetes.io/unreachable:NoExecute op=Exists for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled 5m59s default-scheduler Successfully assigned pod-network-test-7729/netserver-1 to k8s-upgrade-and-conformance-viu2kk-md-0-c5nrs-77b68b5644-t8jmv\n Normal Pulled 5m59s kubelet Container image \"k8s.gcr.io/e2e-test-images/agnhost:2.39\" already present on machine\n Normal Created 5m59s kubelet Created container webserver\n Normal Started 5m59s kubelet Started container webserver\n" Jan 9 15:09:41.013: INFO: Name: netserver-1 Namespace: pod-network-test-7729 Priority: 0 Node: k8s-upgrade-and-conformance-viu2kk-md-0-c5nrs-77b68b5644-t8jmv/172.18.0.6 Start Time: Mon, 09 Jan 2023 15:03:41 +0000 Labels: selector-79e9e83b-9dff-4884-b628-c383aa098a80=true Annotations: <none> Status: Running IP: 192.168.1.13 IPs: IP: 192.168.1.13 Containers: webserver: Container ID: containerd://c89b857c8d0a5e0ca35c933fd7aaee668c54f692964b85840c5c563be2ab15b4 Image: k8s.gcr.io/e2e-test-images/agnhost:2.39 Image ID: k8s.gcr.io/e2e-test-images/agnhost@sha256:7e8bdd271312fd25fc5ff5a8f04727be84044eb3d7d8d03611972a6752e2e11e Ports: 8083/TCP, 8081/UDP Host Ports: 0/TCP, 0/UDP Args: netexec --http-port=8083 --udp-port=8081 State: Running Started: Mon, 09 Jan 2023 15:03:42 +0000 Ready: True Restart Count: 0 Liveness: http-get http://:8083/healthz delay=10s timeout=30s period=10s #success=1 #failure=3 Readiness: http-get http://:8083/healthz delay=10s timeout=30s period=10s #success=1 #failure=3 Environment: <none> Mounts: /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-phhs4 (ro) Conditions: Type Status Initialized True Ready True ContainersReady True PodScheduled True Volumes: kube-api-access-phhs4: Type: Projected (a volume that contains injected data from multiple sources) TokenExpirationSeconds: 3607 ConfigMapName: kube-root-ca.crt ConfigMapOptional: <nil> DownwardAPI: true QoS Class: BestEffort Node-Selectors: kubernetes.io/hostname=k8s-upgrade-and-conformance-viu2kk-md-0-c5nrs-77b68b5644-t8jmv Tolerations: 
node.kubernetes.io/not-ready:NoExecute op=Exists for 300s node.kubernetes.io/unreachable:NoExecute op=Exists for 300s Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Scheduled 5m59s default-scheduler Successfully assigned pod-network-test-7729/netserver-1 to k8s-upgrade-and-conformance-viu2kk-md-0-c5nrs-77b68b5644-t8jmv Normal Pulled 5m59s kubelet Container image "k8s.gcr.io/e2e-test-images/agnhost:2.39" already present on machine Normal Created 5m59s kubelet Created container webserver Normal Started 5m59s kubelet Started container webserver Jan 9 15:09:41.013: INFO: Output of kubectl describe pod pod-network-test-7729/netserver-2: Jan 9 15:09:41.014: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=pod-network-test-7729 describe pod netserver-2 --namespace=pod-network-test-7729' Jan 9 15:09:41.319: INFO: stderr: "" Jan 9 15:09:41.319: INFO: stdout: "Name: netserver-2\nNamespace: pod-network-test-7729\nPriority: 0\nNode: k8s-upgrade-and-conformance-viu2kk-worker-1r6syi/172.18.0.5\nStart Time: Mon, 09 Jan 2023 15:03:41 +0000\nLabels: selector-79e9e83b-9dff-4884-b628-c383aa098a80=true\nAnnotations: <none>\nStatus: Running\nIP: 192.168.6.9\nIPs:\n IP: 192.168.6.9\nContainers:\n webserver:\n Container ID: containerd://146e1445d151e4d7c0ba844d6f99d0991d97398ebc5addc853beaa769ed5b0d0\n Image: k8s.gcr.io/e2e-test-images/agnhost:2.39\n Image ID: k8s.gcr.io/e2e-test-images/agnhost@sha256:7e8bdd271312fd25fc5ff5a8f04727be84044eb3d7d8d03611972a6752e2e11e\n Ports: 8083/TCP, 8081/UDP\n Host Ports: 0/TCP, 0/UDP\n Args:\n netexec\n --http-port=8083\n --udp-port=8081\n State: Running\n Started: Mon, 09 Jan 2023 15:03:42 +0000\n Ready: True\n Restart Count: 0\n Liveness: http-get http://:8083/healthz delay=10s timeout=30s period=10s #success=1 #failure=3\n Readiness: http-get http://:8083/healthz delay=10s timeout=30s period=10s #success=1 #failure=3\n Environment: <none>\n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-zkg6b (ro)\nConditions:\n Type Status\n Initialized True \n Ready True \n ContainersReady True \n PodScheduled True \nVolumes:\n kube-api-access-zkg6b:\n Type: Projected (a volume that contains injected data from multiple sources)\n TokenExpirationSeconds: 3607\n ConfigMapName: kube-root-ca.crt\n ConfigMapOptional: <nil>\n DownwardAPI: true\nQoS Class: BestEffort\nNode-Selectors: kubernetes.io/hostname=k8s-upgrade-and-conformance-viu2kk-worker-1r6syi\nTolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s\n node.kubernetes.io/unreachable:NoExecute op=Exists for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled 5m59s default-scheduler Successfully assigned pod-network-test-7729/netserver-2 to k8s-upgrade-and-conformance-viu2kk-worker-1r6syi\n Normal Pulled 5m59s kubelet Container image \"k8s.gcr.io/e2e-test-images/agnhost:2.39\" already present on machine\n Normal Created 5m59s kubelet Created container webserver\n Normal Started 5m59s kubelet Started container webserver\n" Jan 9 15:09:41.319: INFO: Name: netserver-2 Namespace: pod-network-test-7729 Priority: 0 Node: k8s-upgrade-and-conformance-viu2kk-worker-1r6syi/172.18.0.5 Start Time: Mon, 09 Jan 2023 15:03:41 +0000 Labels: selector-79e9e83b-9dff-4884-b628-c383aa098a80=true Annotations: <none> Status: Running IP: 192.168.6.9 IPs: IP: 192.168.6.9 Containers: webserver: Container ID: containerd://146e1445d151e4d7c0ba844d6f99d0991d97398ebc5addc853beaa769ed5b0d0 Image: 
k8s.gcr.io/e2e-test-images/agnhost:2.39 Image ID: k8s.gcr.io/e2e-test-images/agnhost@sha256:7e8bdd271312fd25fc5ff5a8f04727be84044eb3d7d8d03611972a6752e2e11e Ports: 8083/TCP, 8081/UDP Host Ports: 0/TCP, 0/UDP Args: netexec --http-port=8083 --udp-port=8081 State: Running Started: Mon, 09 Jan 2023 15:03:42 +0000 Ready: True Restart Count: 0 Liveness: http-get http://:8083/healthz delay=10s timeout=30s period=10s #success=1 #failure=3 Readiness: http-get http://:8083/healthz delay=10s timeout=30s period=10s #success=1 #failure=3 Environment: <none> Mounts: /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-zkg6b (ro) Conditions: Type Status Initialized True Ready True ContainersReady True PodScheduled True Volumes: kube-api-access-zkg6b: Type: Projected (a volume that contains injected data from multiple sources) TokenExpirationSeconds: 3607 ConfigMapName: kube-root-ca.crt ConfigMapOptional: <nil> DownwardAPI: true QoS Class: BestEffort Node-Selectors: kubernetes.io/hostname=k8s-upgrade-and-conformance-viu2kk-worker-1r6syi Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s node.kubernetes.io/unreachable:NoExecute op=Exists for 300s Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Scheduled 5m59s default-scheduler Successfully assigned pod-network-test-7729/netserver-2 to k8s-upgrade-and-conformance-viu2kk-worker-1r6syi Normal Pulled 5m59s kubelet Container image "k8s.gcr.io/e2e-test-images/agnhost:2.39" already present on machine Normal Created 5m59s kubelet Created container webserver Normal Started 5m59s kubelet Started container webserver Jan 9 15:09:41.319: INFO: Output of kubectl describe pod pod-network-test-7729/netserver-3: Jan 9 15:09:41.320: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=pod-network-test-7729 describe pod netserver-3 --namespace=pod-network-test-7729' Jan 9 15:09:41.606: INFO: stderr: "" Jan 9 15:09:41.606: INFO: stdout: "Name: netserver-3\nNamespace: pod-network-test-7729\nPriority: 0\nNode: k8s-upgrade-and-conformance-viu2kk-worker-qb9e9b/172.18.0.7\nStart Time: Mon, 09 Jan 2023 15:03:41 +0000\nLabels: selector-79e9e83b-9dff-4884-b628-c383aa098a80=true\nAnnotations: <none>\nStatus: Running\nIP: 192.168.2.9\nIPs:\n IP: 192.168.2.9\nContainers:\n webserver:\n Container ID: containerd://71412eecaec3df70dde2c0be6d6230a6e2f603d5953eca24fae570ed4ca53e23\n Image: k8s.gcr.io/e2e-test-images/agnhost:2.39\n Image ID: k8s.gcr.io/e2e-test-images/agnhost@sha256:7e8bdd271312fd25fc5ff5a8f04727be84044eb3d7d8d03611972a6752e2e11e\n Ports: 8083/TCP, 8081/UDP\n Host Ports: 0/TCP, 0/UDP\n Args:\n netexec\n --http-port=8083\n --udp-port=8081\n State: Running\n Started: Mon, 09 Jan 2023 15:03:43 +0000\n Ready: True\n Restart Count: 0\n Liveness: http-get http://:8083/healthz delay=10s timeout=30s period=10s #success=1 #failure=3\n Readiness: http-get http://:8083/healthz delay=10s timeout=30s period=10s #success=1 #failure=3\n Environment: <none>\n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-4rlqb (ro)\nConditions:\n Type Status\n Initialized True \n Ready True \n ContainersReady True \n PodScheduled True \nVolumes:\n kube-api-access-4rlqb:\n Type: Projected (a volume that contains injected data from multiple sources)\n TokenExpirationSeconds: 3607\n ConfigMapName: kube-root-ca.crt\n ConfigMapOptional: <nil>\n DownwardAPI: true\nQoS Class: BestEffort\nNode-Selectors: kubernetes.io/hostname=k8s-upgrade-and-conformance-viu2kk-worker-qb9e9b\nTolerations: 
node.kubernetes.io/not-ready:NoExecute op=Exists for 300s\n node.kubernetes.io/unreachable:NoExecute op=Exists for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled 5m59s default-scheduler Successfully assigned pod-network-test-7729/netserver-3 to k8s-upgrade-and-conformance-viu2kk-worker-qb9e9b\n Warning FailedMount 5m59s kubelet MountVolume.SetUp failed for volume \"kube-api-access-4rlqb\" : failed to sync configmap cache: timed out waiting for the condition\n Normal Pulled 5m58s kubelet Container image \"k8s.gcr.io/e2e-test-images/agnhost:2.39\" already present on machine\n Normal Created 5m58s kubelet Created container webserver\n Normal Started 5m58s kubelet Started container webserver\n" Jan 9 15:09:41.607: INFO: Name: netserver-3 Namespace: pod-network-test-7729 Priority: 0 Node: k8s-upgrade-and-conformance-viu2kk-worker-qb9e9b/172.18.0.7 Start Time: Mon, 09 Jan 2023 15:03:41 +0000 Labels: selector-79e9e83b-9dff-4884-b628-c383aa098a80=true Annotations: <none> Status: Running IP: 192.168.2.9 IPs: IP: 192.168.2.9 Containers: webserver: Container ID: containerd://71412eecaec3df70dde2c0be6d6230a6e2f603d5953eca24fae570ed4ca53e23 Image: k8s.gcr.io/e2e-test-images/agnhost:2.39 Image ID: k8s.gcr.io/e2e-test-images/agnhost@sha256:7e8bdd271312fd25fc5ff5a8f04727be84044eb3d7d8d03611972a6752e2e11e Ports: 8083/TCP, 8081/UDP Host Ports: 0/TCP, 0/UDP Args: netexec --http-port=8083 --udp-port=8081 State: Running Started: Mon, 09 Jan 2023 15:03:43 +0000 Ready: True Restart Count: 0 Liveness: http-get http://:8083/healthz delay=10s timeout=30s period=10s #success=1 #failure=3 Readiness: http-get http://:8083/healthz delay=10s timeout=30s period=10s #success=1 #failure=3 Environment: <none> Mounts: /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-4rlqb (ro) Conditions: Type Status Initialized True Ready True ContainersReady True PodScheduled True Volumes: kube-api-access-4rlqb: Type: Projected (a volume that contains injected data from multiple sources) TokenExpirationSeconds: 3607 ConfigMapName: kube-root-ca.crt ConfigMapOptional: <nil> DownwardAPI: true QoS Class: BestEffort Node-Selectors: kubernetes.io/hostname=k8s-upgrade-and-conformance-viu2kk-worker-qb9e9b Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s node.kubernetes.io/unreachable:NoExecute op=Exists for 300s Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Scheduled 5m59s default-scheduler Successfully assigned pod-network-test-7729/netserver-3 to k8s-upgrade-and-conformance-viu2kk-worker-qb9e9b Warning FailedMount 5m59s kubelet MountVolume.SetUp failed for volume "kube-api-access-4rlqb" : failed to sync configmap cache: timed out waiting for the condition Normal Pulled 5m58s kubelet Container image "k8s.gcr.io/e2e-test-images/agnhost:2.39" already present on machine Normal Created 5m58s kubelet Created container webserver Normal Started 5m58s kubelet Started container webserver Jan 9 15:09:41.607: INFO: encountered error during dial (did not find expected responses... Tries 46 Command curl -g -q -s 'http://192.168.0.13:9080/dial?request=hostname&protocol=http&host=192.168.2.9&port=8083&tries=1' retrieved map[] expected map[netserver-3:{}]) Jan 9 15:09:41.607: INFO: ... Done probing pod [[[ 192.168.2.9 ]]] Jan 9 15:09:41.607: INFO: succeeded at polling 3 out of 4 connections Jan 9 15:09:41.607: INFO: pod polling failure summary: Jan 9 15:09:41.607: INFO: Collected error: did not find expected responses... 
Tries 46 Command curl -g -q -s 'http://192.168.0.13:9080/dial?request=hostname&protocol=http&host=192.168.2.9&port=8083&tries=1' retrieved map[] expected map[netserver-3:{}] Jan 9 15:09:41.608: FAIL: failed, 1 out of 4 connections failed Full Stack Trace k8s.io/kubernetes/test/e2e/common/network.glob..func1.1.2() /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/network/networking.go:82 +0x46 k8s.io/kubernetes/test/e2e.RunE2ETests(0x24c66d7) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:133 +0x697 k8s.io/kubernetes/test/e2e.TestE2E(0x243a8f9) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:136 +0x19 testing.tRunner(0xc000940d00, 0x735e880) /usr/local/go/src/testing/testing.go:1259 +0x102 created by testing.(*T).Run /usr/local/go/src/testing/testing.go:1306 +0x35a [AfterEach] [sig-network] Networking /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 9 15:09:41.609: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "pod-network-test-7729" for this suite. �[91m�[1m• Failure [360.176 seconds]�[0m [sig-network] Networking �[90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/network/framework.go:23�[0m Granular Checks: Pods �[90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/network/networking.go:30�[0m �[91m�[1mshould function for intra-pod communication: http [NodeConformance] [Conformance] [It]�[0m �[90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633�[0m �[91mJan 9 15:09:41.608: failed, 1 out of 4 connections failed�[0m /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/network/networking.go:82 �[90m------------------------------�[0m [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Jan 9 15:09:38.137: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename services �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace �[1mSTEP�[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:752 [It] should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 �[1mSTEP�[0m: creating service in namespace services-2141 Jan 9 15:09:38.196: INFO: The status of Pod kube-proxy-mode-detector is Pending, waiting for it to be Running (with Ready = true) Jan 9 15:09:40.206: INFO: The status of Pod kube-proxy-mode-detector is Running (Ready = true) Jan 9 15:09:40.212: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-2141 exec kube-proxy-mode-detector -- /bin/sh -x -c curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode' Jan 9 15:09:40.586: INFO: stderr: "+ curl -q -s --connect-timeout 1 http://localhost:10249/proxyMode\n" Jan 9 15:09:40.586: INFO: stdout: "iptables" Jan 9 15:09:40.586: INFO: proxyMode: iptables Jan 9 15:09:40.601: INFO: Waiting for pod kube-proxy-mode-detector to disappear Jan 9 
15:09:40.609: INFO: Pod kube-proxy-mode-detector no longer exists �[1mSTEP�[0m: creating service affinity-nodeport-timeout in namespace services-2141 �[1mSTEP�[0m: creating replication controller affinity-nodeport-timeout in namespace services-2141 I0109 15:09:40.675821 18 runners.go:193] Created replication controller with name: affinity-nodeport-timeout, namespace: services-2141, replica count: 3 I0109 15:09:43.731464 18 runners.go:193] affinity-nodeport-timeout Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Jan 9 15:09:43.755: INFO: Creating new exec pod Jan 9 15:09:46.790: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-2141 exec execpod-affinitynvgtc -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport-timeout 80' Jan 9 15:09:47.331: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 affinity-nodeport-timeout 80\nConnection to affinity-nodeport-timeout 80 port [tcp/http] succeeded!\n" Jan 9 15:09:47.331: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" Jan 9 15:09:47.331: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-2141 exec execpod-affinitynvgtc -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.142.104.241 80' Jan 9 15:09:47.848: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 10.142.104.241 80\nConnection to 10.142.104.241 80 port [tcp/http] succeeded!\n" Jan 9 15:09:47.848: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" Jan 9 15:09:47.848: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-2141 exec execpod-affinitynvgtc -- /bin/sh -x -c echo hostName | nc -v -t -w 2 172.18.0.5 30236' Jan 9 15:09:48.221: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 172.18.0.5 30236\nConnection to 172.18.0.5 30236 port [tcp/*] succeeded!\n" Jan 9 15:09:48.221: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" Jan 9 15:09:48.221: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-2141 exec execpod-affinitynvgtc -- /bin/sh -x -c echo hostName | nc -v -t -w 2 172.18.0.7 30236' Jan 9 15:09:48.549: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 172.18.0.7 30236\nConnection to 172.18.0.7 30236 port [tcp/*] succeeded!\n" Jan 9 15:09:48.549: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" Jan 9 15:09:48.549: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-2141 exec execpod-affinitynvgtc -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://172.18.0.4:30236/ ; done' Jan 9 15:09:49.162: INFO: stderr: "+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.4:30236/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.4:30236/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.4:30236/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.4:30236/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.4:30236/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.4:30236/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.4:30236/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.4:30236/\n+ echo\n+ curl -q -s --connect-timeout 2 
http://172.18.0.4:30236/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.4:30236/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.4:30236/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.4:30236/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.4:30236/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.4:30236/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.4:30236/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.4:30236/\n" Jan 9 15:09:49.162: INFO: stdout: "\naffinity-nodeport-timeout-t2slr\naffinity-nodeport-timeout-t2slr\naffinity-nodeport-timeout-t2slr\naffinity-nodeport-timeout-t2slr\naffinity-nodeport-timeout-t2slr\naffinity-nodeport-timeout-t2slr\naffinity-nodeport-timeout-t2slr\naffinity-nodeport-timeout-t2slr\naffinity-nodeport-timeout-t2slr\naffinity-nodeport-timeout-t2slr\naffinity-nodeport-timeout-t2slr\naffinity-nodeport-timeout-t2slr\naffinity-nodeport-timeout-t2slr\naffinity-nodeport-timeout-t2slr\naffinity-nodeport-timeout-t2slr\naffinity-nodeport-timeout-t2slr" Jan 9 15:09:49.162: INFO: Received response from host: affinity-nodeport-timeout-t2slr Jan 9 15:09:49.162: INFO: Received response from host: affinity-nodeport-timeout-t2slr Jan 9 15:09:49.162: INFO: Received response from host: affinity-nodeport-timeout-t2slr Jan 9 15:09:49.162: INFO: Received response from host: affinity-nodeport-timeout-t2slr Jan 9 15:09:49.162: INFO: Received response from host: affinity-nodeport-timeout-t2slr Jan 9 15:09:49.162: INFO: Received response from host: affinity-nodeport-timeout-t2slr Jan 9 15:09:49.162: INFO: Received response from host: affinity-nodeport-timeout-t2slr Jan 9 15:09:49.162: INFO: Received response from host: affinity-nodeport-timeout-t2slr Jan 9 15:09:49.162: INFO: Received response from host: affinity-nodeport-timeout-t2slr Jan 9 15:09:49.162: INFO: Received response from host: affinity-nodeport-timeout-t2slr Jan 9 15:09:49.162: INFO: Received response from host: affinity-nodeport-timeout-t2slr Jan 9 15:09:49.162: INFO: Received response from host: affinity-nodeport-timeout-t2slr Jan 9 15:09:49.162: INFO: Received response from host: affinity-nodeport-timeout-t2slr Jan 9 15:09:49.162: INFO: Received response from host: affinity-nodeport-timeout-t2slr Jan 9 15:09:49.162: INFO: Received response from host: affinity-nodeport-timeout-t2slr Jan 9 15:09:49.162: INFO: Received response from host: affinity-nodeport-timeout-t2slr Jan 9 15:09:49.163: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-2141 exec execpod-affinitynvgtc -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://172.18.0.4:30236/' Jan 9 15:09:49.595: INFO: stderr: "+ curl -q -s --connect-timeout 2 http://172.18.0.4:30236/\n" Jan 9 15:09:49.595: INFO: stdout: "affinity-nodeport-timeout-t2slr" Jan 9 15:10:09.597: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-2141 exec execpod-affinitynvgtc -- /bin/sh -x -c curl -q -s --connect-timeout 2 http://172.18.0.4:30236/' Jan 9 15:10:09.910: INFO: stderr: "+ curl -q -s --connect-timeout 2 http://172.18.0.4:30236/\n" Jan 9 15:10:09.910: INFO: stdout: "affinity-nodeport-timeout-vr9n4" Jan 9 15:10:09.910: INFO: Cleaning up the exec pod �[1mSTEP�[0m: deleting ReplicationController affinity-nodeport-timeout in namespace services-2141, will wait for the garbage collector to delete the pods Jan 9 15:10:10.005: INFO: Deleting ReplicationController affinity-nodeport-timeout took: 12.553384ms Jan 9 15:10:10.111: INFO: Terminating 
ReplicationController affinity-nodeport-timeout pods took: 106.201383ms
[AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 9 15:10:12.291: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-2141" for this suite.
[AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:756
•
------------------------------
{"msg":"PASSED [sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","total":-1,"completed":6,"skipped":155,"failed":3,"failures":["[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]"]}
SSSSSSSSSSS
------------------------------
[BeforeEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 9 15:10:12.379: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating projection with configMap that has name projected-configmap-test-upd-3b07a28b-6a30-4405-a9fe-218ea95d0df9
STEP: Creating the pod
Jan 9 15:10:12.494: INFO: The status of Pod pod-projected-configmaps-cab7f35c-d001-45ab-91df-110af5b493d5 is Pending, waiting for it to be Running (with Ready = true)
Jan 9 15:10:14.508: INFO: The status of Pod pod-projected-configmaps-cab7f35c-d001-45ab-91df-110af5b493d5 is Running (Ready = true)
STEP: Updating configmap projected-configmap-test-upd-3b07a28b-6a30-4405-a9fe-218ea95d0df9
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 9 15:10:16.577: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-538" for this suite.
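The projected-configMap steps above amount to a pod that mounts a ConfigMap through a projected volume and watches the mounted file until the updated value appears. A minimal sketch of that shape follows; the object names and the agnhost mounttest arguments are illustrative, not the generated ones from this run.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// projectedConfigMapPod builds a pod whose container keeps printing a file
// backed by a projected ConfigMap volume; when the ConfigMap is updated, the
// kubelet eventually rewrites the file and the new content shows up.
func projectedConfigMapPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-projected-configmaps-example"},
		Spec: corev1.PodSpec{
			Volumes: []corev1.Volume{{
				Name: "projected-configmap-volume",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						Sources: []corev1.VolumeProjection{{
							ConfigMap: &corev1.ConfigMapProjection{
								LocalObjectReference: corev1.LocalObjectReference{Name: "projected-configmap-test-upd"},
							},
						}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:  "projected-configmap-volume-test",
				Image: "k8s.gcr.io/e2e-test-images/agnhost:2.39",
				// Roughly the agnhost invocation such tests use: keep printing the
				// mounted file so the updated value appears in the container log.
				Args:         []string{"mounttest", "--file_content_in_loop=/etc/projected-configmap-volume/data-1"},
				VolumeMounts: []corev1.VolumeMount{{Name: "projected-configmap-volume", MountPath: "/etc/projected-configmap-volume"}},
			}},
		},
	}
}

func main() { fmt.Println(projectedConfigMapPod().Name) }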
•
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance]","total":-1,"completed":7,"skipped":166,"failed":3,"failures":["[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]"]}
SSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-node] Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 9 15:10:16.642: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating a pod to test downward api env vars
Jan 9 15:10:16.699: INFO: Waiting up to 5m0s for pod "downward-api-9593e9e1-47e9-4479-960d-156053f95d28" in namespace "downward-api-6902" to be "Succeeded or Failed"
Jan 9 15:10:16.714: INFO: Pod "downward-api-9593e9e1-47e9-4479-960d-156053f95d28": Phase="Pending", Reason="", readiness=false. Elapsed: 13.940697ms
Jan 9 15:10:18.725: INFO: Pod "downward-api-9593e9e1-47e9-4479-960d-156053f95d28": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024972123s
Jan 9 15:10:20.733: INFO: Pod "downward-api-9593e9e1-47e9-4479-960d-156053f95d28": Phase="Pending", Reason="", readiness=false. Elapsed: 4.032929338s
Jan 9 15:10:22.741: INFO: Pod "downward-api-9593e9e1-47e9-4479-960d-156053f95d28": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.04122646s
STEP: Saw pod success
Jan 9 15:10:22.741: INFO: Pod "downward-api-9593e9e1-47e9-4479-960d-156053f95d28" satisfied condition "Succeeded or Failed"
Jan 9 15:10:22.747: INFO: Trying to get logs from node k8s-upgrade-and-conformance-viu2kk-md-0-c5nrs-77b68b5644-t8jmv pod downward-api-9593e9e1-47e9-4479-960d-156053f95d28 container dapi-container: <nil>
STEP: delete the pod
Jan 9 15:10:22.785: INFO: Waiting for pod downward-api-9593e9e1-47e9-4479-960d-156053f95d28 to disappear
Jan 9 15:10:22.791: INFO: Pod downward-api-9593e9e1-47e9-4479-960d-156053f95d28 no longer exists
[AfterEach] [sig-node] Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 9 15:10:22.791: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-6902" for this suite.
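The downward-api pod above verifies that, when a container declares no limits, limits.cpu and limits.memory exposed through the Downward API fall back to the node's allocatable resources. A sketch of the env-var wiring involved, with illustrative variable names and divisors:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
)

// downwardAPIEnv exposes the container's effective CPU and memory limits as
// environment variables via resourceFieldRef; without explicit limits on the
// container, these resolve to node-allocatable values.
func downwardAPIEnv() []corev1.EnvVar {
	return []corev1.EnvVar{
		{
			Name: "CPU_LIMIT",
			ValueFrom: &corev1.EnvVarSource{
				ResourceFieldRef: &corev1.ResourceFieldSelector{
					Resource: "limits.cpu",
					Divisor:  resource.MustParse("1"), // whole cores
				},
			},
		},
		{
			Name: "MEMORY_LIMIT",
			ValueFrom: &corev1.EnvVarSource{
				ResourceFieldRef: &corev1.ResourceFieldSelector{
					Resource: "limits.memory",
					Divisor:  resource.MustParse("1Mi"), // report in MiB
				},
			},
		},
	}
}

func main() { fmt.Println(len(downwardAPIEnv()), "downward API env vars") }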
•
------------------------------
{"msg":"PASSED [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]","total":-1,"completed":8,"skipped":181,"failed":3,"failures":["[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 9 15:10:22.928: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should provide DNS for ExternalName services [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating a test externalName service
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-5793.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-5793.svc.cluster.local; sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-5793.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-5793.svc.cluster.local; sleep 1; done
STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jan 9 15:10:35.037: INFO: DNS probes using dns-test-e46ecda3-c6ce-4eae-bfe4-added2647a40 succeeded
STEP: deleting the pod
STEP: changing the externalName to bar.example.com
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-5793.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-5793.svc.cluster.local; sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-5793.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-5793.svc.cluster.local; sleep 1; done
STEP: creating a second pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jan 9 15:10:39.120: INFO: File wheezy_udp@dns-test-service-3.dns-5793.svc.cluster.local from pod dns-5793/dns-test-247c9587-c591-42fe-a6bf-4aff1f5d23f1 contains 'foo.example.com. ' instead of 'bar.example.com.'
Jan 9 15:10:39.128: INFO: File jessie_udp@dns-test-service-3.dns-5793.svc.cluster.local from pod dns-5793/dns-test-247c9587-c591-42fe-a6bf-4aff1f5d23f1 contains 'foo.example.com. ' instead of 'bar.example.com.' Jan 9 15:10:39.128: INFO: Lookups using dns-5793/dns-test-247c9587-c591-42fe-a6bf-4aff1f5d23f1 failed for: [wheezy_udp@dns-test-service-3.dns-5793.svc.cluster.local jessie_udp@dns-test-service-3.dns-5793.svc.cluster.local] Jan 9 15:10:44.137: INFO: File wheezy_udp@dns-test-service-3.dns-5793.svc.cluster.local from pod dns-5793/dns-test-247c9587-c591-42fe-a6bf-4aff1f5d23f1 contains 'foo.example.com. ' instead of 'bar.example.com.' Jan 9 15:10:44.147: INFO: File jessie_udp@dns-test-service-3.dns-5793.svc.cluster.local from pod dns-5793/dns-test-247c9587-c591-42fe-a6bf-4aff1f5d23f1 contains 'foo.example.com. ' instead of 'bar.example.com.' Jan 9 15:10:44.147: INFO: Lookups using dns-5793/dns-test-247c9587-c591-42fe-a6bf-4aff1f5d23f1 failed for: [wheezy_udp@dns-test-service-3.dns-5793.svc.cluster.local jessie_udp@dns-test-service-3.dns-5793.svc.cluster.local] Jan 9 15:10:49.137: INFO: File wheezy_udp@dns-test-service-3.dns-5793.svc.cluster.local from pod dns-5793/dns-test-247c9587-c591-42fe-a6bf-4aff1f5d23f1 contains 'foo.example.com. ' instead of 'bar.example.com.' Jan 9 15:10:49.143: INFO: File jessie_udp@dns-test-service-3.dns-5793.svc.cluster.local from pod dns-5793/dns-test-247c9587-c591-42fe-a6bf-4aff1f5d23f1 contains 'foo.example.com. ' instead of 'bar.example.com.' Jan 9 15:10:49.143: INFO: Lookups using dns-5793/dns-test-247c9587-c591-42fe-a6bf-4aff1f5d23f1 failed for: [wheezy_udp@dns-test-service-3.dns-5793.svc.cluster.local jessie_udp@dns-test-service-3.dns-5793.svc.cluster.local] Jan 9 15:10:54.136: INFO: File wheezy_udp@dns-test-service-3.dns-5793.svc.cluster.local from pod dns-5793/dns-test-247c9587-c591-42fe-a6bf-4aff1f5d23f1 contains 'foo.example.com. ' instead of 'bar.example.com.' Jan 9 15:10:54.144: INFO: File jessie_udp@dns-test-service-3.dns-5793.svc.cluster.local from pod dns-5793/dns-test-247c9587-c591-42fe-a6bf-4aff1f5d23f1 contains 'foo.example.com. ' instead of 'bar.example.com.' 
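The dig loops above assert that the cluster DNS name of the ExternalName service returns a CNAME matching the configured external name (foo.example.com before the update, bar.example.com after), and the retries between 15:10:39 and 15:10:54 are simply waiting for the record to roll over. A plain-Go equivalent of that check; the helper name cnameMatches is illustrative and it would need to run inside the cluster to see cluster DNS:

package main

import (
	"fmt"
	"net"
	"strings"
)

// cnameMatches resolves the service's cluster DNS name and reports whether
// its CNAME points at the expected external name.
func cnameMatches(serviceDNS, want string) (bool, error) {
	cname, err := net.LookupCNAME(serviceDNS)
	if err != nil {
		return false, err
	}
	// DNS answers come back fully qualified; compare without trailing dots.
	return strings.TrimSuffix(cname, ".") == strings.TrimSuffix(want, "."), nil
}

func main() {
	ok, err := cnameMatches("dns-test-service-3.dns-5793.svc.cluster.local", "bar.example.com")
	fmt.Println(ok, err)
}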
Jan 9 15:10:54.144: INFO: Lookups using dns-5793/dns-test-247c9587-c591-42fe-a6bf-4aff1f5d23f1 failed for: [wheezy_udp@dns-test-service-3.dns-5793.svc.cluster.local jessie_udp@dns-test-service-3.dns-5793.svc.cluster.local] Jan 9 15:10:59.149: INFO: DNS probes using dns-test-247c9587-c591-42fe-a6bf-4aff1f5d23f1 succeeded �[1mSTEP�[0m: deleting the pod �[1mSTEP�[0m: changing the service to type=ClusterIP �[1mSTEP�[0m: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-5793.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-5793.svc.cluster.local; sleep 1; done �[1mSTEP�[0m: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-5793.svc.cluster.local A > /results/jessie_udp@dns-test-service-3.dns-5793.svc.cluster.local; sleep 1; done �[1mSTEP�[0m: creating a third pod to probe DNS �[1mSTEP�[0m: submitting the pod to kubernetes �[1mSTEP�[0m: retrieving the pod �[1mSTEP�[0m: looking for the results for each expected name from probers Jan 9 15:11:03.298: INFO: DNS probes using dns-test-3469e14c-35b6-4239-8044-7e8cb9f9bf31 succeeded �[1mSTEP�[0m: deleting the pod �[1mSTEP�[0m: deleting the test externalName service [AfterEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 9 15:11:03.363: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "dns-5793" for this suite. �[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-network] DNS should provide DNS for ExternalName services [Conformance]","total":-1,"completed":9,"skipped":225,"failed":3,"failures":["[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]"]} �[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-scheduling] LimitRange /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Jan 9 15:11:03.424: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename limitrange �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace �[1mSTEP�[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should create a LimitRange with defaults and ensure pod has those defaults applied. 
[Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 �[1mSTEP�[0m: Creating a LimitRange �[1mSTEP�[0m: Setting up watch �[1mSTEP�[0m: Submitting a LimitRange Jan 9 15:11:03.506: INFO: observed the limitRanges list �[1mSTEP�[0m: Verifying LimitRange creation was observed �[1mSTEP�[0m: Fetching the LimitRange to ensure it has proper values Jan 9 15:11:03.525: INFO: Verifying requests: expected map[cpu:{{100 -3} {<nil>} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {<nil>} BinarySI} memory:{{209715200 0} {<nil>} BinarySI}] with actual map[cpu:{{100 -3} {<nil>} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {<nil>} BinarySI} memory:{{209715200 0} {<nil>} BinarySI}] Jan 9 15:11:03.525: INFO: Verifying limits: expected map[cpu:{{500 -3} {<nil>} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {<nil>} 500Gi BinarySI} memory:{{524288000 0} {<nil>} 500Mi BinarySI}] with actual map[cpu:{{500 -3} {<nil>} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {<nil>} 500Gi BinarySI} memory:{{524288000 0} {<nil>} 500Mi BinarySI}] �[1mSTEP�[0m: Creating a Pod with no resource requirements �[1mSTEP�[0m: Ensuring Pod has resource requirements applied from LimitRange Jan 9 15:11:03.542: INFO: Verifying requests: expected map[cpu:{{100 -3} {<nil>} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {<nil>} BinarySI} memory:{{209715200 0} {<nil>} BinarySI}] with actual map[cpu:{{100 -3} {<nil>} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {<nil>} BinarySI} memory:{{209715200 0} {<nil>} BinarySI}] Jan 9 15:11:03.542: INFO: Verifying limits: expected map[cpu:{{500 -3} {<nil>} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {<nil>} 500Gi BinarySI} memory:{{524288000 0} {<nil>} 500Mi BinarySI}] with actual map[cpu:{{500 -3} {<nil>} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {<nil>} 500Gi BinarySI} memory:{{524288000 0} {<nil>} 500Mi BinarySI}] �[1mSTEP�[0m: Creating a Pod with partial resource requirements �[1mSTEP�[0m: Ensuring Pod has merged resource requirements applied from LimitRange Jan 9 15:11:03.563: INFO: Verifying requests: expected map[cpu:{{300 -3} {<nil>} 300m DecimalSI} ephemeral-storage:{{161061273600 0} {<nil>} 150Gi BinarySI} memory:{{157286400 0} {<nil>} 150Mi BinarySI}] with actual map[cpu:{{300 -3} {<nil>} 300m DecimalSI} ephemeral-storage:{{161061273600 0} {<nil>} 150Gi BinarySI} memory:{{157286400 0} {<nil>} 150Mi BinarySI}] Jan 9 15:11:03.564: INFO: Verifying limits: expected map[cpu:{{300 -3} {<nil>} 300m DecimalSI} ephemeral-storage:{{536870912000 0} {<nil>} 500Gi BinarySI} memory:{{524288000 0} {<nil>} 500Mi BinarySI}] with actual map[cpu:{{300 -3} {<nil>} 300m DecimalSI} ephemeral-storage:{{536870912000 0} {<nil>} 500Gi BinarySI} memory:{{524288000 0} {<nil>} 500Mi BinarySI}] �[1mSTEP�[0m: Failing to create a Pod with less than min resources �[1mSTEP�[0m: Failing to create a Pod with more than max resources �[1mSTEP�[0m: Updating a LimitRange �[1mSTEP�[0m: Verifying LimitRange updating is effective �[1mSTEP�[0m: Creating a Pod with less than former min resources �[1mSTEP�[0m: Failing to create a Pod with more than max resources �[1mSTEP�[0m: Deleting a LimitRange �[1mSTEP�[0m: Verifying the LimitRange was deleted Jan 9 15:11:10.683: INFO: limitRange is already deleted �[1mSTEP�[0m: Creating a Pod with more than former max resources [AfterEach] [sig-scheduling] LimitRange /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 
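The requests/limits maps being verified above correspond to a LimitRange whose container defaults are 100m CPU, 200Mi memory and 200Gi ephemeral-storage for requests (209715200 and 214748364800 bytes in the dump) and 500m / 500Mi / 500Gi for limits. A sketch of such an object; the name is illustrative and the min/max values the test exercises later are omitted:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// exampleLimitRange applies default requests and limits to every container
// created in the namespace that does not set its own.
func exampleLimitRange() *corev1.LimitRange {
	return &corev1.LimitRange{
		ObjectMeta: metav1.ObjectMeta{Name: "e2e-limitrange-example"},
		Spec: corev1.LimitRangeSpec{
			Limits: []corev1.LimitRangeItem{{
				Type: corev1.LimitTypeContainer,
				DefaultRequest: corev1.ResourceList{ // applied when requests are unset
					corev1.ResourceCPU:              resource.MustParse("100m"),
					corev1.ResourceMemory:           resource.MustParse("200Mi"),
					corev1.ResourceEphemeralStorage: resource.MustParse("200Gi"),
				},
				Default: corev1.ResourceList{ // applied when limits are unset
					corev1.ResourceCPU:              resource.MustParse("500m"),
					corev1.ResourceMemory:           resource.MustParse("500Mi"),
					corev1.ResourceEphemeralStorage: resource.MustParse("500Gi"),
				},
			}},
		},
	}
}

func main() { fmt.Println(exampleLimitRange().Name) }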
Jan 9 15:11:10.701: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "limitrange-3507" for this suite.
•
------------------------------
{"msg":"PASSED [sig-scheduling] LimitRange should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance]","total":-1,"completed":10,"skipped":226,"failed":3,"failures":["[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]"]}
SSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 9 15:11:10.781: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating projection with secret that has name projected-secret-test-0edf8a1d-74f2-417a-b240-8d889c7b6ea1
STEP: Creating a pod to test consume secrets
Jan 9 15:11:10.850: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-633098a7-e361-464d-b2d2-a12f6634d179" in namespace "projected-3546" to be "Succeeded or Failed"
Jan 9 15:11:10.864: INFO: Pod "pod-projected-secrets-633098a7-e361-464d-b2d2-a12f6634d179": Phase="Pending", Reason="", readiness=false. Elapsed: 13.380043ms
Jan 9 15:11:12.873: INFO: Pod "pod-projected-secrets-633098a7-e361-464d-b2d2-a12f6634d179": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022668656s
Jan 9 15:11:14.883: INFO: Pod "pod-projected-secrets-633098a7-e361-464d-b2d2-a12f6634d179": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.032468894s
STEP: Saw pod success
Jan 9 15:11:14.883: INFO: Pod "pod-projected-secrets-633098a7-e361-464d-b2d2-a12f6634d179" satisfied condition "Succeeded or Failed"
Jan 9 15:11:14.890: INFO: Trying to get logs from node k8s-upgrade-and-conformance-viu2kk-worker-qb9e9b pod pod-projected-secrets-633098a7-e361-464d-b2d2-a12f6634d179 container projected-secret-volume-test: <nil>
STEP: delete the pod
Jan 9 15:11:14.919: INFO: Waiting for pod pod-projected-secrets-633098a7-e361-464d-b2d2-a12f6634d179 to disappear
Jan 9 15:11:14.926: INFO: Pod pod-projected-secrets-633098a7-e361-464d-b2d2-a12f6634d179 no longer exists
[AfterEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 9 15:11:14.926: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3546" for this suite.
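That projected-secret spec is the same projection pattern with a Secret source, a non-default file mode, and a pod-level runAsUser/fsGroup so a non-root user can read the mounted key. A sketch of the relevant fields; the uid/gid, mode and names are illustrative, not the generated values from this run:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// projectedSecretPod mounts a Secret through a projected volume with an
// explicit defaultMode, running as a non-root user with an fsGroup so the
// group-readable file is still accessible.
func projectedSecretPod() *corev1.Pod {
	mode := int32(0440)
	uid, gid := int64(1000), int64(1001)
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-projected-secrets-example"},
		Spec: corev1.PodSpec{
			SecurityContext: &corev1.PodSecurityContext{RunAsUser: &uid, FSGroup: &gid},
			Volumes: []corev1.Volume{{
				Name: "projected-secret-volume",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						DefaultMode: &mode,
						Sources: []corev1.VolumeProjection{{
							Secret: &corev1.SecretProjection{
								LocalObjectReference: corev1.LocalObjectReference{Name: "projected-secret-test"},
							},
						}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:  "projected-secret-volume-test",
				Image: "k8s.gcr.io/e2e-test-images/agnhost:2.39",
				// Roughly the agnhost invocation such tests use: print the mode of
				// the mounted key so the test can assert on it.
				Args:         []string{"mounttest", "--file_mode=/etc/projected-secret-volume/data-1"},
				VolumeMounts: []corev1.VolumeMount{{Name: "projected-secret-volume", MountPath: "/etc/projected-secret-volume", ReadOnly: true}},
			}},
		},
	}
}

func main() { fmt.Println(projectedSecretPod().Name) }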
�[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":11,"skipped":239,"failed":3,"failures":["[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]"]} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-apps] ReplicaSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Jan 9 15:04:35.411: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename replicaset �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace �[1mSTEP�[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 Jan 9 15:04:35.440: INFO: Creating ReplicaSet my-hostname-basic-7bc53616-e78f-45f6-8ed4-95bd4ecb43a0 Jan 9 15:04:35.448: INFO: Pod name my-hostname-basic-7bc53616-e78f-45f6-8ed4-95bd4ecb43a0: Found 0 pods out of 1 Jan 9 15:04:40.452: INFO: Pod name my-hostname-basic-7bc53616-e78f-45f6-8ed4-95bd4ecb43a0: Found 1 pods out of 1 Jan 9 15:04:40.452: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-7bc53616-e78f-45f6-8ed4-95bd4ecb43a0" is running Jan 9 15:04:40.455: INFO: Pod "my-hostname-basic-7bc53616-e78f-45f6-8ed4-95bd4ecb43a0-8nchl" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-09 15:04:35 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-09 15:04:36 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-09 15:04:36 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-09 15:04:35 +0000 UTC Reason: Message:}]) Jan 9 15:04:40.455: INFO: Trying to dial the pod Jan 9 15:08:19.020: INFO: Controller my-hostname-basic-7bc53616-e78f-45f6-8ed4-95bd4ecb43a0: Failed to GET from replica 1 [my-hostname-basic-7bc53616-e78f-45f6-8ed4-95bd4ecb43a0-8nchl]: the server is currently unable to handle the request (get pods my-hostname-basic-7bc53616-e78f-45f6-8ed4-95bd4ecb43a0-8nchl) pod status: v1.PodStatus{Phase:"Running", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"True", LastProbeTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), LastTransitionTime:time.Date(2023, time.January, 9, 15, 4, 35, 0, 
time.Local), Reason:"", Message:""}, v1.PodCondition{Type:"Ready", Status:"True", LastProbeTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), LastTransitionTime:time.Date(2023, time.January, 9, 15, 4, 36, 0, time.Local), Reason:"", Message:""}, v1.PodCondition{Type:"ContainersReady", Status:"True", LastProbeTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), LastTransitionTime:time.Date(2023, time.January, 9, 15, 4, 36, 0, time.Local), Reason:"", Message:""}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), LastTransitionTime:time.Date(2023, time.January, 9, 15, 4, 35, 0, time.Local), Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.18.0.7", PodIP:"192.168.2.17", PodIPs:[]v1.PodIP{v1.PodIP{IP:"192.168.2.17"}}, StartTime:time.Date(2023, time.January, 9, 15, 4, 35, 0, time.Local), InitContainerStatuses:[]v1.ContainerStatus(nil), ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"my-hostname-basic-7bc53616-e78f-45f6-8ed4-95bd4ecb43a0", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(0xc004c403f0), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:true, RestartCount:0, Image:"k8s.gcr.io/e2e-test-images/agnhost:2.39", ImageID:"k8s.gcr.io/e2e-test-images/agnhost@sha256:7e8bdd271312fd25fc5ff5a8f04727be84044eb3d7d8d03611972a6752e2e11e", ContainerID:"containerd://843935ede1e08fe046ca727f42b0b257e151d904ad3952b82d029a84a6036cbb", Started:(*bool)(0xc000cbc9da)}}, QOSClass:"BestEffort", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)} Jan 9 15:11:52.018: INFO: Controller my-hostname-basic-7bc53616-e78f-45f6-8ed4-95bd4ecb43a0: Failed to GET from replica 1 [my-hostname-basic-7bc53616-e78f-45f6-8ed4-95bd4ecb43a0-8nchl]: the server is currently unable to handle the request (get pods my-hostname-basic-7bc53616-e78f-45f6-8ed4-95bd4ecb43a0-8nchl) pod status: v1.PodStatus{Phase:"Running", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"True", LastProbeTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), LastTransitionTime:time.Date(2023, time.January, 9, 15, 4, 35, 0, time.Local), Reason:"", Message:""}, v1.PodCondition{Type:"Ready", Status:"True", LastProbeTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), LastTransitionTime:time.Date(2023, time.January, 9, 15, 4, 36, 0, time.Local), Reason:"", Message:""}, v1.PodCondition{Type:"ContainersReady", Status:"True", LastProbeTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), LastTransitionTime:time.Date(2023, time.January, 9, 15, 4, 36, 0, time.Local), Reason:"", Message:""}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), LastTransitionTime:time.Date(2023, time.January, 9, 15, 4, 35, 0, time.Local), Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.18.0.7", PodIP:"192.168.2.17", PodIPs:[]v1.PodIP{v1.PodIP{IP:"192.168.2.17"}}, StartTime:time.Date(2023, time.January, 9, 15, 4, 35, 0, time.Local), InitContainerStatuses:[]v1.ContainerStatus(nil), ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"my-hostname-basic-7bc53616-e78f-45f6-8ed4-95bd4ecb43a0", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), 
Running:(*v1.ContainerStateRunning)(0xc004c403f0), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:true, RestartCount:0, Image:"k8s.gcr.io/e2e-test-images/agnhost:2.39", ImageID:"k8s.gcr.io/e2e-test-images/agnhost@sha256:7e8bdd271312fd25fc5ff5a8f04727be84044eb3d7d8d03611972a6752e2e11e", ContainerID:"containerd://843935ede1e08fe046ca727f42b0b257e151d904ad3952b82d029a84a6036cbb", Started:(*bool)(0xc000cbc9da)}}, QOSClass:"BestEffort", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)} Jan 9 15:11:52.018: FAIL: Did not get expected responses within the timeout period of 120.00 seconds. Full Stack Trace k8s.io/kubernetes/test/e2e/apps.glob..func8.1() /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/replica_set.go:110 +0x37 k8s.io/kubernetes/test/e2e.RunE2ETests(0x24c66d7) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:133 +0x697 k8s.io/kubernetes/test/e2e.TestE2E(0x243a8f9) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:136 +0x19 testing.tRunner(0xc0002321a0, 0x735e880) /usr/local/go/src/testing/testing.go:1259 +0x102 created by testing.(*T).Run /usr/local/go/src/testing/testing.go:1306 +0x35a [AfterEach] [sig-apps] ReplicaSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 9 15:11:52.019: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "replicaset-6525" for this suite. �[91m�[1m• Failure [436.630 seconds]�[0m [sig-apps] ReplicaSet �[90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23�[0m �[91m�[1mshould serve a basic image on each replica with a public image [Conformance] [It]�[0m �[90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633�[0m �[91mJan 9 15:11:52.018: Did not get expected responses within the timeout period of 120.00 seconds.�[0m /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/replica_set.go:110 �[90m------------------------------�[0m {"msg":"FAILED [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance]","total":-1,"completed":8,"skipped":169,"failed":1,"failures":["[sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance]"]} [BeforeEach] [sig-apps] ReplicaSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Jan 9 15:11:52.045: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename replicaset �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace �[1mSTEP�[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 Jan 9 15:11:52.083: INFO: Creating ReplicaSet my-hostname-basic-31087812-c036-4c40-a556-4be55f11bb52 Jan 9 15:11:52.098: INFO: Pod name my-hostname-basic-31087812-c036-4c40-a556-4be55f11bb52: Found 0 pods out of 1 Jan 9 15:11:57.109: INFO: Pod name 
my-hostname-basic-31087812-c036-4c40-a556-4be55f11bb52: Found 1 pods out of 1
Jan 9 15:11:57.109: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-31087812-c036-4c40-a556-4be55f11bb52" is running
Jan 9 15:11:57.118: INFO: Pod "my-hostname-basic-31087812-c036-4c40-a556-4be55f11bb52-qjkn6" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-09 15:11:52 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-09 15:11:53 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-09 15:11:53 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-09 15:11:52 +0000 UTC Reason: Message:}])
Jan 9 15:11:57.118: INFO: Trying to dial the pod
Jan 9 15:12:02.144: INFO: Controller my-hostname-basic-31087812-c036-4c40-a556-4be55f11bb52: Got expected result from replica 1 [my-hostname-basic-31087812-c036-4c40-a556-4be55f11bb52-qjkn6]: "my-hostname-basic-31087812-c036-4c40-a556-4be55f11bb52-qjkn6", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicaSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 9 15:12:02.144: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replicaset-8261" for this suite.
•
------------------------------
{"msg":"PASSED [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance]","total":-1,"completed":9,"skipped":169,"failed":1,"failures":["[sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-api-machinery] Aggregator /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 9 15:11:15.083: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename aggregator
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-api-machinery] Aggregator /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:77
Jan 9 15:11:15.131: INFO: >>> kubeConfig: /tmp/kubeconfig
[It] Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Registering the sample API server.
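Registering the sample API server means creating a Deployment and Service for the wardle.example.com backend and then an APIService object that tells the aggregation layer to proxy that group/version to it; the APIService dump further down shows exactly that object, stuck in FailedDiscoveryCheck until the backend answers. A sketch of the registration object using the apiregistration types, with the caBundle replaced by a placeholder and the remaining values taken from the dump:

package main

import (
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	apiregistrationv1 "k8s.io/kube-aggregator/pkg/apis/apiregistration/v1"
)

// sampleAPIService registers wardle.example.com/v1alpha1 with the aggregation
// layer, pointing it at the sample-api Service in the test namespace. The
// aggregator only marks it Available once a discovery request to
// /apis/wardle.example.com/v1alpha1 on the backend succeeds.
func sampleAPIService() *apiregistrationv1.APIService {
	port := int32(7443)
	return &apiregistrationv1.APIService{
		ObjectMeta: metav1.ObjectMeta{Name: "v1alpha1.wardle.example.com"},
		Spec: apiregistrationv1.APIServiceSpec{
			Group:   "wardle.example.com",
			Version: "v1alpha1",
			Service: &apiregistrationv1.ServiceReference{
				Namespace: "aggregator-2335",
				Name:      "sample-api",
				Port:      &port,
			},
			CABundle:             []byte("<CA bundle elided>"), // the real object carries the serving CA shown in the log
			GroupPriorityMinimum: 2000,
			VersionPriority:      200,
		},
	}
}

func main() { fmt.Println(sampleAPIService().Name) }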
Jan 9 15:11:15.405: INFO: deployment "sample-apiserver-deployment" doesn't have the required revision set Jan 9 15:11:17.533: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.January, 9, 15, 11, 15, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 9, 15, 11, 15, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.January, 9, 15, 11, 15, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 9, 15, 11, 15, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7cdc9f5bf7\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 9 15:12:19.954: INFO: Waited 1m0.390476521s for the sample-apiserver to be ready to handle requests. Jan 9 15:12:19.954: INFO: current APIService: {"metadata":{"name":"v1alpha1.wardle.example.com","uid":"000c5fe3-616d-4309-a746-a9ae81d54ee0","resourceVersion":"5877","creationTimestamp":"2023-01-09T15:11:19Z","managedFields":[{"manager":"e2e.test","operation":"Update","apiVersion":"apiregistration.k8s.io/v1","time":"2023-01-09T15:11:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:spec":{"f:caBundle":{},"f:group":{},"f:groupPriorityMinimum":{},"f:service":{".":{},"f:name":{},"f:namespace":{},"f:port":{}},"f:version":{},"f:versionPriority":{}}}},{"manager":"kube-apiserver","operation":"Update","apiVersion":"apiregistration.k8s.io/v1","time":"2023-01-09T15:11:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:status":{"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}},"subresource":"status"}]},"spec":{"service":{"namespace":"aggregator-2335","name":"sample-api","port":7443},"group":"wardle.example.com","version":"v1alpha1","caBundle":"LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURGakNDQWY2Z0F3SUJBZ0lCQURBTkJna3Foa2lHOXcwQkFRc0ZBREFkTVJzd0dRWURWUVFERXhKbE1tVXQKYzJWeWRtVnlMV05sY25RdFkyRXdIaGNOTWpNd01UQTVNVFV4TVRFMVdoY05Nek13TVRBMk1UVXhNVEUxV2pBZApNUnN3R1FZRFZRUURFeEpsTW1VdGMyVnlkbVZ5TFdObGNuUXRZMkV3Z2dFaU1BMEdDU3FHU0liM0RRRUJBUVVBCkE0SUJEd0F3Z2dFS0FvSUJBUURMYW1aSmdzQU9WTTlMYkdzY2xKUkE5eUk3VUFDUUpXaW1yQnoxYStONjZHb3kKYVBaM21GNGU5MmJHMGdIaG9KaFRvejYwb1VvVHZYUEhGbDEvT2dWOWZWeVUzek9zUW5EcjBLR1RWUi9IbmUxNAprYWZMdlhkcmJrYTNycEVlODk5ZktLeXkramp4eFhsSWphQndEbXJPVWd2by9NdlhudDRTdUh3Q0xVdm04djdPCm11Y0xUY20wTU54K2Y1Qmx0b3RXOVdVM0NVb2NNKzJjRy9yTjF4OERQYmFyVkhPa1lRYjB1Nkc5cjltUDgvclgKQTVwR05UMzFTVGFiWGp3bEFXbkZyWUdzditXL21UR0d1RlpJTklEdTc5TEU4ayt5ZFk4K3FBVXV4Z0dGNmNOQwozTzF5WUJmUHpma3BNSzlnRXl4VGJNV0ZUZHNNSTNPejZGUmhVMlgxQWdNQkFBR2pZVEJmTUE0R0ExVWREd0VCCi93UUVBd0lDcERBUEJnTlZIUk1CQWY4RUJUQURBUUgvTUIwR0ExVWREZ1FXQkJTM1lCZzMyNnhyUEhGMlhoenkKR3MzZW42REVIekFkQmdOVkhSRUVGakFVZ2hKbE1tVXRjMlZ5ZG1WeUxXTmxjblF0WTJFd0RRWUpLb1pJaHZjTgpBUUVMQlFBRGdnRUJBSHo1a0luU0hJUXA2QmtiQkFqaFdRUXpRVzA2ZmI4VldvTnhUL0RPc1dqTXRDV25xa0R4ClBXZFhJcC9lZ3l1SkpqMVo3RDBBQXpwdHR5REE4bjRoVmcxT0t5UUkyRVlkc3FvN20zTnljQ0JsMFFFVldYcXEKeDRpTlJYOVhIZ0E2NGMrWXFlSnRPNStLVlY5MGUrTkZiN29rOEdwZlMwcDF2dUJLUzhZTlpLK1p0QVhiUTFQRQpLb2l1cElnUVVUeWthb2hXR09rRm1JR28vZGxibUNSVkNSaHgvVnlpWkt6SFFGdFVvYzgwamJTODVSREpxQ3RJCkRxVFgxcVZGRWNxakdoWkhNYU85cE5aMnRyaGtBcGhBdVhJczNRSWZibkhhZFRpSEQxdUhXcjh4TDJHd3VVcWkKOUtUUzhiZkpa
bmIvQTg1WUlXcHZJU29WbVdoendUQU5RTzA9Ci0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0K","groupPriorityMinimum":2000,"versionPriority":200},"status":{"conditions":[{"type":"Available","status":"False","lastTransitionTime":"2023-01-09T15:11:19Z","reason":"FailedDiscoveryCheck","message":"failing or missing response from https://10.134.157.171:7443/apis/wardle.example.com/v1alpha1: Get \"https://10.134.157.171:7443/apis/wardle.example.com/v1alpha1\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"}]}} Jan 9 15:12:19.956: INFO: current pods: {"metadata":{"resourceVersion":"5877"},"items":[{"metadata":{"name":"sample-apiserver-deployment-7cdc9f5bf7-qmqg5","generateName":"sample-apiserver-deployment-7cdc9f5bf7-","namespace":"aggregator-2335","uid":"fe5f7eac-4a09-4484-a82c-7fca8e508d21","resourceVersion":"5695","creationTimestamp":"2023-01-09T15:11:15Z","labels":{"apiserver":"true","app":"sample-apiserver","pod-template-hash":"7cdc9f5bf7"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"sample-apiserver-deployment-7cdc9f5bf7","uid":"ea5ddb1e-c8e5-42b1-bcda-85f18ea3e511","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-09T15:11:15Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:apiserver":{},"f:app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ea5ddb1e-c8e5-42b1-bcda-85f18ea3e511\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"etcd\"}":{".":{},"f:command":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}},"k:{\"name\":\"sample-apiserver\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{},"f:volumeMounts":{".":{},"k:{\"mountPath\":\"/apiserver.local.config/certificates\"}":{".":{},"f:mountPath":{},"f:name":{},"f:readOnly":{}}}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{},"f:volumes":{".":{},"k:{\"name\":\"apiserver-certs\"}":{".":{},"f:name":{},"f:secret":{".":{},"f:defaultMode":{},"f:secretName":{}}}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-09T15:11:19Z","fieldsType":"FieldsV1","fieldsV1":{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"192.168.2.27\"}":{".":{},"f:ip":{}}},"f:startTime":{}}},"subresource":"status"}]},"spec":{"volumes":[{"name":"apiserver-certs","secret":{"secretName":"sample-apiserver-secret","defaultMode":420}},{"name":"kube-api-access-kqbdt","projected":{"sources":[{"serviceAccountToken":{"expirationSeconds":3607,"path":"token"}},{"configMap":{"name":"kube-root-ca.crt","items":[{"key":"ca.crt","path":"ca.crt"}]}},{"downwardAPI":{"items":[{"path":"namespace","fieldRef":{"apiVersion":"v1","fieldPath":"metadata.namespace"}}]}}],"defaultMode":420}}],"containers":[{"name":"sample-apiserver","image":"k8s.gcr.io/e2e-test-images/sample-apiserver:1.17.5",
"args":["--etcd-servers=http://127.0.0.1:2379","--tls-cert-file=/apiserver.local.config/certificates/tls.crt","--tls-private-key-file=/apiserver.local.config/certificates/tls.key","--audit-log-path=-","--audit-log-maxage=0","--audit-log-maxbackup=0"],"resources":{},"volumeMounts":[{"name":"apiserver-certs","readOnly":true,"mountPath":"/apiserver.local.config/certificates"},{"name":"kube-api-access-kqbdt","readOnly":true,"mountPath":"/var/run/secrets/kubernetes.io/serviceaccount"}],"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File","imagePullPolicy":"IfNotPresent"},{"name":"etcd","image":"k8s.gcr.io/etcd:3.5.6-0","command":["/usr/local/bin/etcd","--listen-client-urls","http://127.0.0.1:2379","--advertise-client-urls","http://127.0.0.1:2379"],"resources":{},"volumeMounts":[{"name":"kube-api-access-kqbdt","readOnly":true,"mountPath":"/var/run/secrets/kubernetes.io/serviceaccount"}],"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File","imagePullPolicy":"IfNotPresent"}],"restartPolicy":"Always","terminationGracePeriodSeconds":0,"dnsPolicy":"ClusterFirst","serviceAccountName":"default","serviceAccount":"default","nodeName":"k8s-upgrade-and-conformance-viu2kk-worker-qb9e9b","securityContext":{},"schedulerName":"default-scheduler","tolerations":[{"key":"node.kubernetes.io/not-ready","operator":"Exists","effect":"NoExecute","tolerationSeconds":300},{"key":"node.kubernetes.io/unreachable","operator":"Exists","effect":"NoExecute","tolerationSeconds":300}],"priority":0,"enableServiceLinks":true,"preemptionPolicy":"PreemptLowerPriority"},"status":{"phase":"Running","conditions":[{"type":"Initialized","status":"True","lastProbeTime":null,"lastTransitionTime":"2023-01-09T15:11:15Z"},{"type":"Ready","status":"True","lastProbeTime":null,"lastTransitionTime":"2023-01-09T15:11:19Z"},{"type":"ContainersReady","status":"True","lastProbeTime":null,"lastTransitionTime":"2023-01-09T15:11:19Z"},{"type":"PodScheduled","status":"True","lastProbeTime":null,"lastTransitionTime":"2023-01-09T15:11:15Z"}],"hostIP":"172.18.0.7","podIP":"192.168.2.27","podIPs":[{"ip":"192.168.2.27"}],"startTime":"2023-01-09T15:11:15Z","containerStatuses":[{"name":"etcd","state":{"running":{"startedAt":"2023-01-09T15:11:18Z"}},"lastState":{},"ready":true,"restartCount":0,"image":"k8s.gcr.io/etcd:3.5.6-0","imageID":"k8s.gcr.io/etcd@sha256:dd75ec974b0a2a6f6bb47001ba09207976e625db898d1b16735528c009cb171c","containerID":"containerd://2c225ae423da04bb539457427fa7fac0bc2a02e867da9c4cad69fd970c30bec1","started":true},{"name":"sample-apiserver","state":{"running":{"startedAt":"2023-01-09T15:11:18Z"}},"lastState":{},"ready":true,"restartCount":0,"image":"k8s.gcr.io/e2e-test-images/sample-apiserver:1.17.5","imageID":"k8s.gcr.io/e2e-test-images/sample-apiserver@sha256:f9c93b92b6ff750b41a93c4e4fe0bfe384597aeb841e2539d5444815c55b2d8f","containerID":"containerd://ceec7089148cb46593b8e665378dc55ad80661b9bbe5fe7e68ec7630b16ea75a","started":true}],"qosClass":"BestEffort"}}]} Jan 9 15:12:19.964: INFO: logs of sample-apiserver-deployment-7cdc9f5bf7-qmqg5/sample-apiserver (error: <nil>): W0109 15:11:19.036608 1 configmap_cafile_content.go:102] unable to load initial CA bundle for: "client-ca::kube-system::extension-apiserver-authentication::client-ca-file" due to: configmap "extension-apiserver-authentication" not found W0109 15:11:19.036780 1 configmap_cafile_content.go:102] unable to load initial CA bundle for: 
"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" due to: configmap "extension-apiserver-authentication" not found I0109 15:11:19.073112 1 plugins.go:158] Loaded 3 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,MutatingAdmissionWebhook,BanFlunder. I0109 15:11:19.073361 1 plugins.go:161] Loaded 1 validating admission controller(s) successfully in the following order: ValidatingAdmissionWebhook. I0109 15:11:19.075093 1 client.go:361] parsed scheme: "endpoint" I0109 15:11:19.075634 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0 <nil>}] I0109 15:11:19.135655 1 client.go:361] parsed scheme: "endpoint" I0109 15:11:19.135818 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0 <nil>}] I0109 15:11:19.138268 1 client.go:361] parsed scheme: "endpoint" I0109 15:11:19.138811 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0 <nil>}] I0109 15:11:19.141745 1 client.go:361] parsed scheme: "endpoint" I0109 15:11:19.141789 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0 <nil>}] I0109 15:11:19.226644 1 configmap_cafile_content.go:205] Starting client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file I0109 15:11:19.226674 1 configmap_cafile_content.go:205] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file I0109 15:11:19.226735 1 shared_informer.go:197] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file I0109 15:11:19.226778 1 shared_informer.go:197] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file I0109 15:11:19.227147 1 secure_serving.go:178] Serving securely on [::]:443 I0109 15:11:19.227169 1 dynamic_serving_content.go:129] Starting serving-cert::/apiserver.local.config/certificates/tls.crt::/apiserver.local.config/certificates/tls.key I0109 15:11:19.227448 1 tlsconfig.go:219] Starting DynamicServingCertificateController I0109 15:11:19.328131 1 shared_informer.go:204] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file I0109 15:11:19.328247 1 shared_informer.go:204] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file I0109 15:11:19.552592 1 client.go:361] parsed scheme: "endpoint" I0109 15:11:19.553052 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0 <nil>}] Jan 9 15:12:19.971: INFO: logs of sample-apiserver-deployment-7cdc9f5bf7-qmqg5/etcd (error: <nil>): {"level":"info","ts":"2023-01-09T15:11:19.000Z","caller":"etcdmain/etcd.go:73","msg":"Running: ","args":["/usr/local/bin/etcd","--listen-client-urls","http://127.0.0.1:2379","--advertise-client-urls","http://127.0.0.1:2379"]} {"level":"warn","ts":"2023-01-09T15:11:19.003Z","caller":"etcdmain/etcd.go:105","msg":"'data-dir' was empty; using default","data-dir":"default.etcd"} {"level":"info","ts":"2023-01-09T15:11:19.003Z","caller":"embed/etcd.go:124","msg":"configuring peer listeners","listen-peer-urls":["http://localhost:2380"]} {"level":"info","ts":"2023-01-09T15:11:19.005Z","caller":"embed/etcd.go:132","msg":"configuring client listeners","listen-client-urls":["http://127.0.0.1:2379"]} 
{"level":"info","ts":"2023-01-09T15:11:19.005Z","caller":"embed/etcd.go:306","msg":"starting an etcd server","etcd-version":"3.5.6","git-sha":"cecbe35ce","go-version":"go1.16.15","go-os":"linux","go-arch":"amd64","max-cpu-set":8,"max-cpu-available":8,"member-initialized":false,"name":"default","data-dir":"default.etcd","wal-dir":"","wal-dir-dedicated":"","member-dir":"default.etcd/member","force-new-cluster":false,"heartbeat-interval":"100ms","election-timeout":"1s","initial-election-tick-advance":true,"snapshot-count":100000,"max-wals":5,"max-snapshots":5,"snapshot-catchup-entries":5000,"initial-advertise-peer-urls":["http://localhost:2380"],"listen-peer-urls":["http://localhost:2380"],"advertise-client-urls":["http://127.0.0.1:2379"],"listen-client-urls":["http://127.0.0.1:2379"],"listen-metrics-urls":[],"cors":["*"],"host-whitelist":["*"],"initial-cluster":"default=http://localhost:2380","initial-cluster-state":"new","initial-cluster-token":"etcd-cluster","quota-backend-bytes":2147483648,"max-request-bytes":1572864,"max-concurrent-streams":4294967295,"pre-vote":true,"initial-corrupt-check":false,"corrupt-check-time-interval":"0s","compact-check-time-enabled":false,"compact-check-time-interval":"1m0s","auto-compaction-mode":"periodic","auto-compaction-retention":"0s","auto-compaction-interval":"0s","discovery-url":"","discovery-proxy":"","downgrade-check-interval":"5s"} {"level":"info","ts":"2023-01-09T15:11:19.011Z","caller":"etcdserver/backend.go:81","msg":"opened backend db","path":"default.etcd/member/snap/db","took":"5.452824ms"} {"level":"info","ts":"2023-01-09T15:11:19.023Z","caller":"etcdserver/raft.go:494","msg":"starting local member","local-member-id":"8e9e05c52164694d","cluster-id":"cdf818194e3a8c32"} {"level":"info","ts":"2023-01-09T15:11:19.023Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8e9e05c52164694d switched to configuration voters=()"} {"level":"info","ts":"2023-01-09T15:11:19.023Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8e9e05c52164694d became follower at term 0"} {"level":"info","ts":"2023-01-09T15:11:19.024Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"newRaft 8e9e05c52164694d [peers: [], term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0]"} {"level":"info","ts":"2023-01-09T15:11:19.024Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8e9e05c52164694d became follower at term 1"} {"level":"info","ts":"2023-01-09T15:11:19.024Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8e9e05c52164694d switched to configuration voters=(10276657743932975437)"} {"level":"warn","ts":"2023-01-09T15:11:19.030Z","caller":"auth/store.go:1234","msg":"simple token is not cryptographically signed"} {"level":"info","ts":"2023-01-09T15:11:19.035Z","caller":"mvcc/kvstore.go:393","msg":"kvstore restored","current-rev":1} {"level":"info","ts":"2023-01-09T15:11:19.039Z","caller":"etcdserver/quota.go:94","msg":"enabled backend quota with default value","quota-name":"v3-applier","quota-size-bytes":2147483648,"quota-size":"2.1 GB"} {"level":"info","ts":"2023-01-09T15:11:19.042Z","caller":"etcdserver/server.go:854","msg":"starting etcd server","local-member-id":"8e9e05c52164694d","local-server-version":"3.5.6","cluster-version":"to_be_decided"} {"level":"info","ts":"2023-01-09T15:11:19.043Z","caller":"etcdserver/server.go:738","msg":"started as single-node; fast-forwarding election 
ticks","local-member-id":"8e9e05c52164694d","forward-ticks":9,"forward-duration":"900ms","election-ticks":10,"election-timeout":"1s"} {"level":"info","ts":"2023-01-09T15:11:19.043Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"default.etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"} {"level":"info","ts":"2023-01-09T15:11:19.043Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"default.etcd/member/snap","suffix":"snap","max":5,"interval":"30s"} {"level":"info","ts":"2023-01-09T15:11:19.043Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"default.etcd/member/wal","suffix":"wal","max":5,"interval":"30s"} {"level":"info","ts":"2023-01-09T15:11:19.045Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8e9e05c52164694d switched to configuration voters=(10276657743932975437)"} {"level":"info","ts":"2023-01-09T15:11:19.046Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"cdf818194e3a8c32","local-member-id":"8e9e05c52164694d","added-peer-id":"8e9e05c52164694d","added-peer-peer-urls":["http://localhost:2380"]} {"level":"info","ts":"2023-01-09T15:11:19.045Z","caller":"embed/etcd.go:275","msg":"now serving peer/client/metrics","local-member-id":"8e9e05c52164694d","initial-advertise-peer-urls":["http://localhost:2380"],"listen-peer-urls":["http://localhost:2380"],"advertise-client-urls":["http://127.0.0.1:2379"],"listen-client-urls":["http://127.0.0.1:2379"],"listen-metrics-urls":[]} {"level":"info","ts":"2023-01-09T15:11:19.045Z","caller":"embed/etcd.go:586","msg":"serving peer traffic","address":"127.0.0.1:2380"} {"level":"info","ts":"2023-01-09T15:11:19.046Z","caller":"embed/etcd.go:558","msg":"cmux::serve","address":"127.0.0.1:2380"} {"level":"info","ts":"2023-01-09T15:11:19.126Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8e9e05c52164694d is starting a new election at term 1"} {"level":"info","ts":"2023-01-09T15:11:19.126Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8e9e05c52164694d became pre-candidate at term 1"} {"level":"info","ts":"2023-01-09T15:11:19.126Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8e9e05c52164694d received MsgPreVoteResp from 8e9e05c52164694d at term 1"} {"level":"info","ts":"2023-01-09T15:11:19.126Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8e9e05c52164694d became candidate at term 2"} {"level":"info","ts":"2023-01-09T15:11:19.126Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8e9e05c52164694d received MsgVoteResp from 8e9e05c52164694d at term 2"} {"level":"info","ts":"2023-01-09T15:11:19.126Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8e9e05c52164694d became leader at term 2"} {"level":"info","ts":"2023-01-09T15:11:19.126Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 8e9e05c52164694d elected leader 8e9e05c52164694d at term 2"} {"level":"info","ts":"2023-01-09T15:11:19.128Z","caller":"etcdserver/server.go:2563","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"} {"level":"info","ts":"2023-01-09T15:11:19.129Z","caller":"etcdserver/server.go:2054","msg":"published local member to cluster through raft","local-member-id":"8e9e05c52164694d","local-member-attributes":"{Name:default ClientURLs:[http://127.0.0.1:2379]}","request-path":"/0/members/8e9e05c52164694d/attributes","cluster-id":"cdf818194e3a8c32","publish-timeout":"7s"} 
{"level":"info","ts":"2023-01-09T15:11:19.129Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"} {"level":"info","ts":"2023-01-09T15:11:19.130Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"} {"level":"info","ts":"2023-01-09T15:11:19.130Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"} {"level":"info","ts":"2023-01-09T15:11:19.130Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"cdf818194e3a8c32","local-member-id":"8e9e05c52164694d","cluster-version":"3.5"} {"level":"info","ts":"2023-01-09T15:11:19.130Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"} {"level":"info","ts":"2023-01-09T15:11:19.131Z","caller":"etcdserver/server.go:2587","msg":"cluster version is updated","cluster-version":"3.5"} {"level":"info","ts":"2023-01-09T15:11:19.131Z","caller":"embed/serve.go:146","msg":"serving client traffic insecurely; this is strongly discouraged!","address":"127.0.0.1:2379"} Jan 9 15:12:19.971: FAIL: gave up waiting for apiservice wardle to come up successfully Unexpected error: <*errors.errorString | 0xc0002462c0>: { s: "timed out waiting for the condition", } timed out waiting for the condition occurred Full Stack Trace k8s.io/kubernetes/test/e2e/apimachinery.TestSampleAPIServer(0xc000751760, 0xc00277d398, {0xc00437dbc0, 0x3}) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:384 +0x2f9a k8s.io/kubernetes/test/e2e/apimachinery.glob..func1.3() /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:101 +0x128 k8s.io/kubernetes/test/e2e.RunE2ETests(0x24c66d7) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:133 +0x697 k8s.io/kubernetes/test/e2e.TestE2E(0x243a8f9) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:136 +0x19 testing.tRunner(0xc000277d40, 0x735e880) /usr/local/go/src/testing/testing.go:1259 +0x102 created by testing.(*T).Run /usr/local/go/src/testing/testing.go:1306 +0x35a [AfterEach] [sig-api-machinery] Aggregator /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:68 [AfterEach] [sig-api-machinery] Aggregator /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 9 15:12:20.346: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "aggregator-2335" for this suite. 
• Failure [65.281 seconds]
[sig-api-machinery] Aggregator
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance] [It]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633

  Jan 9 15:12:19.971: gave up waiting for apiservice wardle to come up successfully
  Unexpected error:
  <*errors.errorString | 0xc0002462c0>: { s: "timed out waiting for the condition", }
  timed out waiting for the condition
  occurred

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:384
------------------------------
[BeforeEach] [sig-storage] Subpath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 9 15:12:02.229: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with secret pod [Excluded:WindowsDocker] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating pod pod-subpath-test-secret-fj2g
STEP: Creating a pod to test atomic-volume-subpath
Jan 9 15:12:02.288: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-fj2g" in namespace "subpath-3284" to be "Succeeded or Failed"
Jan 9 15:12:02.294: INFO: Pod "pod-subpath-test-secret-fj2g": Phase="Pending", Reason="", readiness=false. Elapsed: 5.947786ms
Jan 9 15:12:04.300: INFO: Pod "pod-subpath-test-secret-fj2g": Phase="Running", Reason="", readiness=true. Elapsed: 2.012549971s
Jan 9 15:12:06.306: INFO: Pod "pod-subpath-test-secret-fj2g": Phase="Running", Reason="", readiness=true. Elapsed: 4.018639961s
Jan 9 15:12:08.314: INFO: Pod "pod-subpath-test-secret-fj2g": Phase="Running", Reason="", readiness=true. Elapsed: 6.026138427s
Jan 9 15:12:10.324: INFO: Pod "pod-subpath-test-secret-fj2g": Phase="Running", Reason="", readiness=true. Elapsed: 8.036147404s
Jan 9 15:12:12.333: INFO: Pod "pod-subpath-test-secret-fj2g": Phase="Running", Reason="", readiness=true. Elapsed: 10.045653079s
Jan 9 15:12:14.342: INFO: Pod "pod-subpath-test-secret-fj2g": Phase="Running", Reason="", readiness=true. Elapsed: 12.053901602s
Jan 9 15:12:16.349: INFO: Pod "pod-subpath-test-secret-fj2g": Phase="Running", Reason="", readiness=true. Elapsed: 14.060665439s
Jan 9 15:12:18.353: INFO: Pod "pod-subpath-test-secret-fj2g": Phase="Running", Reason="", readiness=true. Elapsed: 16.06555029s
Jan 9 15:12:20.363: INFO: Pod "pod-subpath-test-secret-fj2g": Phase="Running", Reason="", readiness=true. Elapsed: 18.075377286s
Jan 9 15:12:22.369: INFO: Pod "pod-subpath-test-secret-fj2g": Phase="Running", Reason="", readiness=true. Elapsed: 20.081212535s
Jan 9 15:12:24.374: INFO: Pod "pod-subpath-test-secret-fj2g": Phase="Running", Reason="", readiness=false. Elapsed: 22.085802608s
Jan 9 15:12:26.379: INFO: Pod "pod-subpath-test-secret-fj2g": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.090673195s
STEP: Saw pod success
Jan 9 15:12:26.379: INFO: Pod "pod-subpath-test-secret-fj2g" satisfied condition "Succeeded or Failed"
Jan 9 15:12:26.382: INFO: Trying to get logs from node k8s-upgrade-and-conformance-viu2kk-worker-1r6syi pod pod-subpath-test-secret-fj2g container test-container-subpath-secret-fj2g: <nil>
STEP: delete the pod
Jan 9 15:12:26.410: INFO: Waiting for pod pod-subpath-test-secret-fj2g to disappear
Jan 9 15:12:26.414: INFO: Pod pod-subpath-test-secret-fj2g no longer exists
STEP: Deleting pod pod-subpath-test-secret-fj2g
Jan 9 15:12:26.414: INFO: Deleting pod "pod-subpath-test-secret-fj2g" in namespace "subpath-3284"
[AfterEach] [sig-storage] Subpath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 9 15:12:26.417: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-3284" for this suite.
•
------------------------------
{"msg":"FAILED [sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","total":-1,"completed":11,"skipped":284,"failed":4,"failures":["[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
[BeforeEach] [sig-api-machinery] Aggregator
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 9 15:12:20.372: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename aggregator
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-api-machinery] Aggregator
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:77
[It] Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Registering the sample API server.
Jan 9 15:12:21.690: INFO: new replicaset for deployment "sample-apiserver-deployment" is yet to be created Jan 9 15:12:23.735: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.January, 9, 15, 12, 22, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 9, 15, 12, 21, 0, time.Local), Reason:"NewReplicaSetAvailable", Message:"ReplicaSet \"sample-apiserver-deployment-7cdc9f5bf7\" has successfully progressed."}, v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.January, 9, 15, 12, 23, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 9, 15, 12, 23, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}}, CollisionCount:(*int32)(nil)} Jan 9 15:13:25.948: INFO: Waited 1m0.20332679s for the sample-apiserver to be ready to handle requests. Jan 9 15:13:25.948: INFO: current APIService: {"metadata":{"name":"v1alpha1.wardle.example.com","uid":"9ae27917-0617-4243-bb81-c353291abe21","resourceVersion":"6118","creationTimestamp":"2023-01-09T15:12:25Z","managedFields":[{"manager":"e2e.test","operation":"Update","apiVersion":"apiregistration.k8s.io/v1","time":"2023-01-09T15:12:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:spec":{"f:caBundle":{},"f:group":{},"f:groupPriorityMinimum":{},"f:service":{".":{},"f:name":{},"f:namespace":{},"f:port":{}},"f:version":{},"f:versionPriority":{}}}},{"manager":"kube-apiserver","operation":"Update","apiVersion":"apiregistration.k8s.io/v1","time":"2023-01-09T15:12:30Z","fieldsType":"FieldsV1","fieldsV1":{"f:status":{"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}},"subresource":"status"}]},"spec":{"service":{"namespace":"aggregator-6371","name":"sample-api","port":7443},"group":"wardle.example.com","version":"v1alpha1","caBundle":"LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURGakNDQWY2Z0F3SUJBZ0lCQURBTkJna3Foa2lHOXcwQkFRc0ZBREFkTVJzd0dRWURWUVFERXhKbE1tVXQKYzJWeWRtVnlMV05sY25RdFkyRXdIaGNOTWpNd01UQTVNVFV4TWpJeFdoY05Nek13TVRBMk1UVXhNakl4V2pBZApNUnN3R1FZRFZRUURFeEpsTW1VdGMyVnlkbVZ5TFdObGNuUXRZMkV3Z2dFaU1BMEdDU3FHU0liM0RRRUJBUVVBCkE0SUJEd0F3Z2dFS0FvSUJBUURMTVZ3aTZlVmlCL3hyZ0cwcnlNS2tsTDJRNWJ1ZlhLYm9NZGRNY2JrSHFCbUQKaVhUT3dTRXNJcDhkMktoampZNS9LNjV4YlMxOU1xb0VLRmdGd1hxM0JEWDBoaG1kbVlKNjY1MkZEWnVUdjVIdgpYZ0N5enFIMFlISURZbmdMU044aTVUaXBRZE1NZW5ORWJ2S0daeW5ZbCtyV0VuSDBKc0g5UWhpbytrSVNZSGxzCkxKUFE2UVFnTkszOE9aTFd3THpaTEV0b0dHMjlvblpNUDFqN1JmVG0rUG1YMkxnWmZUTkVzZ2YwOEVHdzExRHYKUWxFY25vMXhIaEdmWlJKdEYwb0lyQXc1MTVOMGN6L3Q4bDMwUVhvQkR0VEFBRDRLSUQyV3R2U1pXTGtTWVVuTgp3VFdMaXQzNEM3NWd4emVKTzNGTURwQUxXUzFnVGorTG5XdnlnQzFkQWdNQkFBR2pZVEJmTUE0R0ExVWREd0VCCi93UUVBd0lDcERBUEJnTlZIUk1CQWY4RUJUQURBUUgvTUIwR0ExVWREZ1FXQkJSN1JIWVgvcDZPQW8rUUJRdC8KclZBdGJpbkgwekFkQmdOVkhSRUVGakFVZ2hKbE1tVXRjMlZ5ZG1WeUxXTmxjblF0WTJFd0RRWUpLb1pJaHZjTgpBUUVMQlFBRGdnRUJBS2RlRm9yREQ3SlNoZm0ybklSTmE5OVRKeDRCVmxjcmdFZ2NGK1EvZzFmOUlWSHNNSTlKCmJxdk5YakpJc284U01zNWZBdW9VR1JwMlVqVE5jM2dySG9KRVRTOFBvRU51SnZZZUt5anEyUmdjWkdkZDFUaVgKcUFUaWc2b0drbzNIK3ppSHhrdHV4THY1dkpWbkxtdVhkdG5kMWhxUXZ1MS9yK3ZBRkRvYklzS2J1b29ZZHFVaQptUVNhbDZOejRFb2QwL0hTNzA4bmIxb3VJbS94TXI1Z1pXSFdqallRdHgraEZaUnY5b3NoekxtV01kWkMrcFM5Ck1la3hKOFErMkFDb0t3U2tzYitrdG1jb1JSRG9XbmNPRzhPYkEwT0NYS2QxN1h1N0pST0NlU3FFay94Ml
VReHAKcVVlTG1VSFBzMnZFeG5WVGdmdEJJbVZiS29BK2NEckVES0k9Ci0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0K","groupPriorityMinimum":2000,"versionPriority":200},"status":{"conditions":[{"type":"Available","status":"False","lastTransitionTime":"2023-01-09T15:12:25Z","reason":"FailedDiscoveryCheck","message":"failing or missing response from https://10.137.228.112:7443/apis/wardle.example.com/v1alpha1: Get \"https://10.137.228.112:7443/apis/wardle.example.com/v1alpha1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"}]}} Jan 9 15:13:25.948: INFO: current pods: {"metadata":{"resourceVersion":"6118"},"items":[{"metadata":{"name":"sample-apiserver-deployment-7cdc9f5bf7-z4cwq","generateName":"sample-apiserver-deployment-7cdc9f5bf7-","namespace":"aggregator-6371","uid":"f8e2420d-749c-4774-b4fb-520de3f52985","resourceVersion":"5956","creationTimestamp":"2023-01-09T15:12:21Z","labels":{"apiserver":"true","app":"sample-apiserver","pod-template-hash":"7cdc9f5bf7"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"sample-apiserver-deployment-7cdc9f5bf7","uid":"4524b0a1-9de4-46c8-ae5f-557c69a1a442","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-09T15:12:21Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:apiserver":{},"f:app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"4524b0a1-9de4-46c8-ae5f-557c69a1a442\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"etcd\"}":{".":{},"f:command":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}},"k:{\"name\":\"sample-apiserver\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{},"f:volumeMounts":{".":{},"k:{\"mountPath\":\"/apiserver.local.config/certificates\"}":{".":{},"f:mountPath":{},"f:name":{},"f:readOnly":{}}}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{},"f:volumes":{".":{},"k:{\"name\":\"apiserver-certs\"}":{".":{},"f:name":{},"f:secret":{".":{},"f:defaultMode":{},"f:secretName":{}}}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-09T15:12:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"192.168.2.28\"}":{".":{},"f:ip":{}}},"f:startTime":{}}},"subresource":"status"}]},"spec":{"volumes":[{"name":"apiserver-certs","secret":{"secretName":"sample-apiserver-secret","defaultMode":420}},{"name":"kube-api-access-9447s","projected":{"sources":[{"serviceAccountToken":{"expirationSeconds":3607,"path":"token"}},{"configMap":{"name":"kube-root-ca.crt","items":[{"key":"ca.crt","path":"ca.crt"}]}},{"downwardAPI":{"items":[{"path":"namespace","fieldRef":{"apiVersion":"v1","fieldPath":"metadata.namespace"}}]}}],"defaultMode":420}}],"containers":[{"name":"sample-apiserver","image":"k8s.
gcr.io/e2e-test-images/sample-apiserver:1.17.5","args":["--etcd-servers=http://127.0.0.1:2379","--tls-cert-file=/apiserver.local.config/certificates/tls.crt","--tls-private-key-file=/apiserver.local.config/certificates/tls.key","--audit-log-path=-","--audit-log-maxage=0","--audit-log-maxbackup=0"],"resources":{},"volumeMounts":[{"name":"apiserver-certs","readOnly":true,"mountPath":"/apiserver.local.config/certificates"},{"name":"kube-api-access-9447s","readOnly":true,"mountPath":"/var/run/secrets/kubernetes.io/serviceaccount"}],"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File","imagePullPolicy":"IfNotPresent"},{"name":"etcd","image":"k8s.gcr.io/etcd:3.5.6-0","command":["/usr/local/bin/etcd","--listen-client-urls","http://127.0.0.1:2379","--advertise-client-urls","http://127.0.0.1:2379"],"resources":{},"volumeMounts":[{"name":"kube-api-access-9447s","readOnly":true,"mountPath":"/var/run/secrets/kubernetes.io/serviceaccount"}],"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File","imagePullPolicy":"IfNotPresent"}],"restartPolicy":"Always","terminationGracePeriodSeconds":0,"dnsPolicy":"ClusterFirst","serviceAccountName":"default","serviceAccount":"default","nodeName":"k8s-upgrade-and-conformance-viu2kk-worker-qb9e9b","securityContext":{},"schedulerName":"default-scheduler","tolerations":[{"key":"node.kubernetes.io/not-ready","operator":"Exists","effect":"NoExecute","tolerationSeconds":300},{"key":"node.kubernetes.io/unreachable","operator":"Exists","effect":"NoExecute","tolerationSeconds":300}],"priority":0,"enableServiceLinks":true,"preemptionPolicy":"PreemptLowerPriority"},"status":{"phase":"Running","conditions":[{"type":"Initialized","status":"True","lastProbeTime":null,"lastTransitionTime":"2023-01-09T15:12:21Z"},{"type":"Ready","status":"True","lastProbeTime":null,"lastTransitionTime":"2023-01-09T15:12:24Z"},{"type":"ContainersReady","status":"True","lastProbeTime":null,"lastTransitionTime":"2023-01-09T15:12:24Z"},{"type":"PodScheduled","status":"True","lastProbeTime":null,"lastTransitionTime":"2023-01-09T15:12:21Z"}],"hostIP":"172.18.0.7","podIP":"192.168.2.28","podIPs":[{"ip":"192.168.2.28"}],"startTime":"2023-01-09T15:12:21Z","containerStatuses":[{"name":"etcd","state":{"running":{"startedAt":"2023-01-09T15:12:22Z"}},"lastState":{},"ready":true,"restartCount":0,"image":"k8s.gcr.io/etcd:3.5.6-0","imageID":"k8s.gcr.io/etcd@sha256:dd75ec974b0a2a6f6bb47001ba09207976e625db898d1b16735528c009cb171c","containerID":"containerd://5825b965757ea466b1266ca520ea9e50bc0aa1d5c740d09781a75260e5a754af","started":true},{"name":"sample-apiserver","state":{"running":{"startedAt":"2023-01-09T15:12:23Z"}},"lastState":{"terminated":{"exitCode":255,"reason":"Error","startedAt":"2023-01-09T15:12:22Z","finishedAt":"2023-01-09T15:12:23Z","containerID":"containerd://1d86ab103d94fc784c3c477144fa0a6c5f2edbc1bdff8a9fbd548f2442ef2daf"}},"ready":true,"restartCount":1,"image":"k8s.gcr.io/e2e-test-images/sample-apiserver:1.17.5","imageID":"k8s.gcr.io/e2e-test-images/sample-apiserver@sha256:f9c93b92b6ff750b41a93c4e4fe0bfe384597aeb841e2539d5444815c55b2d8f","containerID":"containerd://7106e3df7a23246ce94ae172efe7c38816cd261d76ba900e89dc53b5e9be3264","started":true}],"qosClass":"BestEffort"}}]} Jan 9 15:13:25.955: INFO: logs of sample-apiserver-deployment-7cdc9f5bf7-z4cwq/sample-apiserver (error: <nil>): W0109 15:12:24.669408 1 configmap_cafile_content.go:102] unable to load initial CA bundle for: 
"client-ca::kube-system::extension-apiserver-authentication::client-ca-file" due to: configmap "extension-apiserver-authentication" not found W0109 15:12:24.669810 1 configmap_cafile_content.go:102] unable to load initial CA bundle for: "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" due to: configmap "extension-apiserver-authentication" not found I0109 15:12:24.693628 1 plugins.go:158] Loaded 3 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,MutatingAdmissionWebhook,BanFlunder. I0109 15:12:24.693880 1 plugins.go:161] Loaded 1 validating admission controller(s) successfully in the following order: ValidatingAdmissionWebhook. I0109 15:12:24.696211 1 client.go:361] parsed scheme: "endpoint" I0109 15:12:24.696425 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0 <nil>}] I0109 15:12:24.699629 1 client.go:361] parsed scheme: "endpoint" I0109 15:12:24.699725 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0 <nil>}] I0109 15:12:24.701790 1 client.go:361] parsed scheme: "endpoint" I0109 15:12:24.701826 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0 <nil>}] I0109 15:12:24.702901 1 client.go:361] parsed scheme: "endpoint" I0109 15:12:24.702940 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0 <nil>}] I0109 15:12:24.759235 1 configmap_cafile_content.go:205] Starting client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file I0109 15:12:24.759287 1 configmap_cafile_content.go:205] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file I0109 15:12:24.759944 1 secure_serving.go:178] Serving securely on [::]:443 I0109 15:12:24.760071 1 tlsconfig.go:219] Starting DynamicServingCertificateController I0109 15:12:24.760199 1 shared_informer.go:197] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file I0109 15:12:24.759453 1 dynamic_serving_content.go:129] Starting serving-cert::/apiserver.local.config/certificates/tls.crt::/apiserver.local.config/certificates/tls.key I0109 15:12:24.759812 1 shared_informer.go:197] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file I0109 15:12:24.860800 1 shared_informer.go:204] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file I0109 15:12:24.860878 1 shared_informer.go:204] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file I0109 15:12:24.972823 1 client.go:361] parsed scheme: "endpoint" I0109 15:12:24.972942 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0 <nil>}] Jan 9 15:13:25.961: INFO: logs of sample-apiserver-deployment-7cdc9f5bf7-z4cwq/etcd (error: <nil>): {"level":"info","ts":"2023-01-09T15:12:22.635Z","caller":"etcdmain/etcd.go:73","msg":"Running: ","args":["/usr/local/bin/etcd","--listen-client-urls","http://127.0.0.1:2379","--advertise-client-urls","http://127.0.0.1:2379"]} {"level":"warn","ts":"2023-01-09T15:12:22.637Z","caller":"etcdmain/etcd.go:105","msg":"'data-dir' was empty; using default","data-dir":"default.etcd"} {"level":"info","ts":"2023-01-09T15:12:22.637Z","caller":"embed/etcd.go:124","msg":"configuring peer listeners","listen-peer-urls":["http://localhost:2380"]} 
{"level":"info","ts":"2023-01-09T15:12:22.639Z","caller":"embed/etcd.go:132","msg":"configuring client listeners","listen-client-urls":["http://127.0.0.1:2379"]} {"level":"info","ts":"2023-01-09T15:12:22.639Z","caller":"embed/etcd.go:306","msg":"starting an etcd server","etcd-version":"3.5.6","git-sha":"cecbe35ce","go-version":"go1.16.15","go-os":"linux","go-arch":"amd64","max-cpu-set":8,"max-cpu-available":8,"member-initialized":false,"name":"default","data-dir":"default.etcd","wal-dir":"","wal-dir-dedicated":"","member-dir":"default.etcd/member","force-new-cluster":false,"heartbeat-interval":"100ms","election-timeout":"1s","initial-election-tick-advance":true,"snapshot-count":100000,"max-wals":5,"max-snapshots":5,"snapshot-catchup-entries":5000,"initial-advertise-peer-urls":["http://localhost:2380"],"listen-peer-urls":["http://localhost:2380"],"advertise-client-urls":["http://127.0.0.1:2379"],"listen-client-urls":["http://127.0.0.1:2379"],"listen-metrics-urls":[],"cors":["*"],"host-whitelist":["*"],"initial-cluster":"default=http://localhost:2380","initial-cluster-state":"new","initial-cluster-token":"etcd-cluster","quota-backend-bytes":2147483648,"max-request-bytes":1572864,"max-concurrent-streams":4294967295,"pre-vote":true,"initial-corrupt-check":false,"corrupt-check-time-interval":"0s","compact-check-time-enabled":false,"compact-check-time-interval":"1m0s","auto-compaction-mode":"periodic","auto-compaction-retention":"0s","auto-compaction-interval":"0s","discovery-url":"","discovery-proxy":"","downgrade-check-interval":"5s"} {"level":"info","ts":"2023-01-09T15:12:22.644Z","caller":"etcdserver/backend.go:81","msg":"opened backend db","path":"default.etcd/member/snap/db","took":"4.176131ms"} {"level":"info","ts":"2023-01-09T15:12:22.650Z","caller":"etcdserver/raft.go:494","msg":"starting local member","local-member-id":"8e9e05c52164694d","cluster-id":"cdf818194e3a8c32"} {"level":"info","ts":"2023-01-09T15:12:22.650Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8e9e05c52164694d switched to configuration voters=()"} {"level":"info","ts":"2023-01-09T15:12:22.650Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8e9e05c52164694d became follower at term 0"} {"level":"info","ts":"2023-01-09T15:12:22.650Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"newRaft 8e9e05c52164694d [peers: [], term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0]"} {"level":"info","ts":"2023-01-09T15:12:22.650Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8e9e05c52164694d became follower at term 1"} {"level":"info","ts":"2023-01-09T15:12:22.651Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8e9e05c52164694d switched to configuration voters=(10276657743932975437)"} {"level":"warn","ts":"2023-01-09T15:12:22.654Z","caller":"auth/store.go:1234","msg":"simple token is not cryptographically signed"} {"level":"info","ts":"2023-01-09T15:12:22.660Z","caller":"mvcc/kvstore.go:393","msg":"kvstore restored","current-rev":1} {"level":"info","ts":"2023-01-09T15:12:22.662Z","caller":"etcdserver/quota.go:94","msg":"enabled backend quota with default value","quota-name":"v3-applier","quota-size-bytes":2147483648,"quota-size":"2.1 GB"} {"level":"info","ts":"2023-01-09T15:12:22.664Z","caller":"etcdserver/server.go:854","msg":"starting etcd server","local-member-id":"8e9e05c52164694d","local-server-version":"3.5.6","cluster-version":"to_be_decided"} {"level":"info","ts":"2023-01-09T15:12:22.664Z","caller":"fileutil/purge.go:44","msg":"started to purge 
file","dir":"default.etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"} {"level":"info","ts":"2023-01-09T15:12:22.664Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"default.etcd/member/snap","suffix":"snap","max":5,"interval":"30s"} {"level":"info","ts":"2023-01-09T15:12:22.664Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"default.etcd/member/wal","suffix":"wal","max":5,"interval":"30s"} {"level":"info","ts":"2023-01-09T15:12:22.664Z","caller":"etcdserver/server.go:738","msg":"started as single-node; fast-forwarding election ticks","local-member-id":"8e9e05c52164694d","forward-ticks":9,"forward-duration":"900ms","election-ticks":10,"election-timeout":"1s"} {"level":"info","ts":"2023-01-09T15:12:22.667Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8e9e05c52164694d switched to configuration voters=(10276657743932975437)"} {"level":"info","ts":"2023-01-09T15:12:22.667Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"cdf818194e3a8c32","local-member-id":"8e9e05c52164694d","added-peer-id":"8e9e05c52164694d","added-peer-peer-urls":["http://localhost:2380"]} {"level":"info","ts":"2023-01-09T15:12:22.669Z","caller":"embed/etcd.go:586","msg":"serving peer traffic","address":"127.0.0.1:2380"} {"level":"info","ts":"2023-01-09T15:12:22.669Z","caller":"embed/etcd.go:275","msg":"now serving peer/client/metrics","local-member-id":"8e9e05c52164694d","initial-advertise-peer-urls":["http://localhost:2380"],"listen-peer-urls":["http://localhost:2380"],"advertise-client-urls":["http://127.0.0.1:2379"],"listen-client-urls":["http://127.0.0.1:2379"],"listen-metrics-urls":[]} {"level":"info","ts":"2023-01-09T15:12:22.669Z","caller":"embed/etcd.go:558","msg":"cmux::serve","address":"127.0.0.1:2380"} {"level":"info","ts":"2023-01-09T15:12:23.651Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8e9e05c52164694d is starting a new election at term 1"} {"level":"info","ts":"2023-01-09T15:12:23.652Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8e9e05c52164694d became pre-candidate at term 1"} {"level":"info","ts":"2023-01-09T15:12:23.652Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8e9e05c52164694d received MsgPreVoteResp from 8e9e05c52164694d at term 1"} {"level":"info","ts":"2023-01-09T15:12:23.652Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8e9e05c52164694d became candidate at term 2"} {"level":"info","ts":"2023-01-09T15:12:23.652Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8e9e05c52164694d received MsgVoteResp from 8e9e05c52164694d at term 2"} {"level":"info","ts":"2023-01-09T15:12:23.652Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8e9e05c52164694d became leader at term 2"} {"level":"info","ts":"2023-01-09T15:12:23.652Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 8e9e05c52164694d elected leader 8e9e05c52164694d at term 2"} {"level":"info","ts":"2023-01-09T15:12:23.653Z","caller":"etcdserver/server.go:2563","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"} {"level":"info","ts":"2023-01-09T15:12:23.654Z","caller":"etcdserver/server.go:2054","msg":"published local member to cluster through raft","local-member-id":"8e9e05c52164694d","local-member-attributes":"{Name:default ClientURLs:[http://127.0.0.1:2379]}","request-path":"/0/members/8e9e05c52164694d/attributes","cluster-id":"cdf818194e3a8c32","publish-timeout":"7s"} 
{"level":"info","ts":"2023-01-09T15:12:23.654Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"} {"level":"info","ts":"2023-01-09T15:12:23.655Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"cdf818194e3a8c32","local-member-id":"8e9e05c52164694d","cluster-version":"3.5"} {"level":"info","ts":"2023-01-09T15:12:23.655Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"} {"level":"info","ts":"2023-01-09T15:12:23.655Z","caller":"etcdserver/server.go:2587","msg":"cluster version is updated","cluster-version":"3.5"} {"level":"info","ts":"2023-01-09T15:12:23.655Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"} {"level":"info","ts":"2023-01-09T15:12:23.655Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"} {"level":"info","ts":"2023-01-09T15:12:23.656Z","caller":"embed/serve.go:146","msg":"serving client traffic insecurely; this is strongly discouraged!","address":"127.0.0.1:2379"} Jan 9 15:13:25.961: FAIL: gave up waiting for apiservice wardle to come up successfully Unexpected error: <*errors.errorString | 0xc0002462c0>: { s: "timed out waiting for the condition", } timed out waiting for the condition occurred Full Stack Trace k8s.io/kubernetes/test/e2e/apimachinery.TestSampleAPIServer(0xc000751760, 0xc00277d398, {0xc004dead80, 0x3}) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:384 +0x2f9a k8s.io/kubernetes/test/e2e/apimachinery.glob..func1.3() /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:101 +0x128 k8s.io/kubernetes/test/e2e.RunE2ETests(0x24c66d7) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:133 +0x697 k8s.io/kubernetes/test/e2e.TestE2E(0x243a8f9) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:136 +0x19 testing.tRunner(0xc000277d40, 0x735e880) /usr/local/go/src/testing/testing.go:1259 +0x102 created by testing.(*T).Run /usr/local/go/src/testing/testing.go:1306 +0x35a [AfterEach] [sig-api-machinery] Aggregator /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:68 [AfterEach] [sig-api-machinery] Aggregator /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 9 15:13:26.262: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "aggregator-6371" for this suite. 
• Failure [65.908 seconds]
[sig-api-machinery] Aggregator
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance] [It]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633

  Jan 9 15:13:25.961: gave up waiting for apiservice wardle to come up successfully
  Unexpected error:
  <*errors.errorString | 0xc0002462c0>: { s: "timed out waiting for the condition", }
  timed out waiting for the condition
  occurred

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:384
------------------------------
{"msg":"FAILED [sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","total":-1,"completed":11,"skipped":284,"failed":5,"failures":["[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
[BeforeEach] [sig-api-machinery] Aggregator
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 9 15:13:26.286: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename aggregator
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-api-machinery] Aggregator
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:77
[It] Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Registering the sample API server.
Jan 9 15:13:26.839: INFO: deployment "sample-apiserver-deployment" doesn't have the required revision set Jan 9 15:13:28.911: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.January, 9, 15, 13, 27, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 9, 15, 13, 26, 0, time.Local), Reason:"NewReplicaSetAvailable", Message:"ReplicaSet \"sample-apiserver-deployment-7cdc9f5bf7\" has successfully progressed."}, v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.January, 9, 15, 13, 28, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 9, 15, 13, 28, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}}, CollisionCount:(*int32)(nil)} Jan 9 15:14:31.126: INFO: Waited 1m0.202154295s for the sample-apiserver to be ready to handle requests. Jan 9 15:14:31.126: INFO: current APIService: {"metadata":{"name":"v1alpha1.wardle.example.com","uid":"500610ce-3f14-42a7-93f1-429d7834f1da","resourceVersion":"6301","creationTimestamp":"2023-01-09T15:13:30Z","managedFields":[{"manager":"e2e.test","operation":"Update","apiVersion":"apiregistration.k8s.io/v1","time":"2023-01-09T15:13:30Z","fieldsType":"FieldsV1","fieldsV1":{"f:spec":{"f:caBundle":{},"f:group":{},"f:groupPriorityMinimum":{},"f:service":{".":{},"f:name":{},"f:namespace":{},"f:port":{}},"f:version":{},"f:versionPriority":{}}}},{"manager":"kube-apiserver","operation":"Update","apiVersion":"apiregistration.k8s.io/v1","time":"2023-01-09T15:13:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:status":{"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}},"subresource":"status"}]},"spec":{"service":{"namespace":"aggregator-1562","name":"sample-api","port":7443},"group":"wardle.example.com","version":"v1alpha1","caBundle":"LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURGakNDQWY2Z0F3SUJBZ0lCQURBTkJna3Foa2lHOXcwQkFRc0ZBREFkTVJzd0dRWURWUVFERXhKbE1tVXQKYzJWeWRtVnlMV05sY25RdFkyRXdIaGNOTWpNd01UQTVNVFV4TXpJMldoY05Nek13TVRBMk1UVXhNekkyV2pBZApNUnN3R1FZRFZRUURFeEpsTW1VdGMyVnlkbVZ5TFdObGNuUXRZMkV3Z2dFaU1BMEdDU3FHU0liM0RRRUJBUVVBCkE0SUJEd0F3Z2dFS0FvSUJBUURVdFkxY0I2YU8zZHVXS2g5V0t5Z0RUTDFOT2s2SWtuRXpJKzczRnUvU1NXZHoKRXVLM2p6WGVlTUwyR0RZU1ViZkNrWWpBUlpwOUhVb2I3cmVwM3l6SlBiMnZkYmUrUGxKN0tFYWZmdnpvTkQ1dwo4cmlnWG5hMTBEaWRoV2RJeXlQczFCYmdOUjcrVEhhZTQxSlM0U3VXU2lCSXBrTUdRVFp3RmVmeFI2VHJxNTB0CjMranJoLy8xOVpxYXg5KzJjOStTWmV3MzNESVFlQS9XaUV1dE9kYkxIVW1JMUY4M0hQMHl4QVBzMUpxNklGMEwKNlgxdU5NOWJOdVluQ0phL3YxdURxeGJWTkFBd0U4Z2NVYVUzT0dFalRwNms3OS9iOE40MXg4Q1NYaWQ1VXNiawpGUFpIMkxCM1FkYWxUVllDUy9zM2NJODZuRzFVdmxTdklob054RXJUQWdNQkFBR2pZVEJmTUE0R0ExVWREd0VCCi93UUVBd0lDcERBUEJnTlZIUk1CQWY4RUJUQURBUUgvTUIwR0ExVWREZ1FXQkJUeG9SR2FzUjhWWm8vc3ZTWk0KMUwxTlJLRFRUVEFkQmdOVkhSRUVGakFVZ2hKbE1tVXRjMlZ5ZG1WeUxXTmxjblF0WTJFd0RRWUpLb1pJaHZjTgpBUUVMQlFBRGdnRUJBTFEzSUdBUytENVVpOHR1eEFRTEd5WU1PZFB6YXh5MWhZQzNoR0UybzQxVlB1SFZ1aWlTCnBhc3duaFI2aFpoTUNjYXBoQWl3eEYyejgyUW9JQkY3aHZENFF6YllGL0NOTFZESCtJWEFJVUJxRXV4bDRiKzMKUURtWUlaNmVBUWFTZmIwbkhRNDhDeStsM3VVdmY2cys1Zk9QLzB0M3o5ejcrbWh0YURLeE1uRmx0ekFuVmFzaApWaERNOTBFUWlndHhDN0dQYmlwSUNNLytObHBlQWJCVE50U2gzQ0Yzd0NrKzBES0hob0tvQ1k1TVkzb2hhNCtyCmRhRVgyVGdnUXI5b0x0b3pvTVVFY1Rsd1puZ2tFSi9ySHU4WWQ2RFVxL042Z2RTK0Uxdndvc0grb3ZmZG
RidmQKeDMzc1gxOVUyUDE2Qzc2WHRGNEV0MWJWMzNEcVVTZ1dBTE09Ci0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0K","groupPriorityMinimum":2000,"versionPriority":200},"status":{"conditions":[{"type":"Available","status":"False","lastTransitionTime":"2023-01-09T15:13:30Z","reason":"FailedDiscoveryCheck","message":"failing or missing response from https://10.129.112.213:7443/apis/wardle.example.com/v1alpha1: Get \"https://10.129.112.213:7443/apis/wardle.example.com/v1alpha1\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"}]}} Jan 9 15:14:31.126: INFO: current pods: {"metadata":{"resourceVersion":"6311"},"items":[{"metadata":{"name":"sample-apiserver-deployment-7cdc9f5bf7-l7mbb","generateName":"sample-apiserver-deployment-7cdc9f5bf7-","namespace":"aggregator-1562","uid":"b00e88bf-3987-4eb9-8019-d6d99acc772c","resourceVersion":"6187","creationTimestamp":"2023-01-09T15:13:26Z","labels":{"apiserver":"true","app":"sample-apiserver","pod-template-hash":"7cdc9f5bf7"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"sample-apiserver-deployment-7cdc9f5bf7","uid":"816b7c7b-ac6e-4dce-a336-eaa7c53346bc","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-01-09T15:13:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:apiserver":{},"f:app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"816b7c7b-ac6e-4dce-a336-eaa7c53346bc\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"etcd\"}":{".":{},"f:command":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}},"k:{\"name\":\"sample-apiserver\"}":{".":{},"f:args":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{},"f:volumeMounts":{".":{},"k:{\"mountPath\":\"/apiserver.local.config/certificates\"}":{".":{},"f:mountPath":{},"f:name":{},"f:readOnly":{}}}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{},"f:volumes":{".":{},"k:{\"name\":\"apiserver-certs\"}":{".":{},"f:name":{},"f:secret":{".":{},"f:defaultMode":{},"f:secretName":{}}}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-01-09T15:13:29Z","fieldsType":"FieldsV1","fieldsV1":{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"192.168.2.29\"}":{".":{},"f:ip":{}}},"f:startTime":{}}},"subresource":"status"}]},"spec":{"volumes":[{"name":"apiserver-certs","secret":{"secretName":"sample-apiserver-secret","defaultMode":420}},{"name":"kube-api-access-vr4gn","projected":{"sources":[{"serviceAccountToken":{"expirationSeconds":3607,"path":"token"}},{"configMap":{"name":"kube-root-ca.crt","items":[{"key":"ca.crt","path":"ca.crt"}]}},{"downwardAPI":{"items":[{"path":"namespace","fieldRef":{"apiVersion":"v1","fieldPath":"metadata.namespace"}}]}}],"defaultMode":420}}],"containers":[{"name":"sample-apiserver","image":"k8s.
gcr.io/e2e-test-images/sample-apiserver:1.17.5","args":["--etcd-servers=http://127.0.0.1:2379","--tls-cert-file=/apiserver.local.config/certificates/tls.crt","--tls-private-key-file=/apiserver.local.config/certificates/tls.key","--audit-log-path=-","--audit-log-maxage=0","--audit-log-maxbackup=0"],"resources":{},"volumeMounts":[{"name":"apiserver-certs","readOnly":true,"mountPath":"/apiserver.local.config/certificates"},{"name":"kube-api-access-vr4gn","readOnly":true,"mountPath":"/var/run/secrets/kubernetes.io/serviceaccount"}],"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File","imagePullPolicy":"IfNotPresent"},{"name":"etcd","image":"k8s.gcr.io/etcd:3.5.6-0","command":["/usr/local/bin/etcd","--listen-client-urls","http://127.0.0.1:2379","--advertise-client-urls","http://127.0.0.1:2379"],"resources":{},"volumeMounts":[{"name":"kube-api-access-vr4gn","readOnly":true,"mountPath":"/var/run/secrets/kubernetes.io/serviceaccount"}],"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File","imagePullPolicy":"IfNotPresent"}],"restartPolicy":"Always","terminationGracePeriodSeconds":0,"dnsPolicy":"ClusterFirst","serviceAccountName":"default","serviceAccount":"default","nodeName":"k8s-upgrade-and-conformance-viu2kk-worker-qb9e9b","securityContext":{},"schedulerName":"default-scheduler","tolerations":[{"key":"node.kubernetes.io/not-ready","operator":"Exists","effect":"NoExecute","tolerationSeconds":300},{"key":"node.kubernetes.io/unreachable","operator":"Exists","effect":"NoExecute","tolerationSeconds":300}],"priority":0,"enableServiceLinks":true,"preemptionPolicy":"PreemptLowerPriority"},"status":{"phase":"Running","conditions":[{"type":"Initialized","status":"True","lastProbeTime":null,"lastTransitionTime":"2023-01-09T15:13:26Z"},{"type":"Ready","status":"True","lastProbeTime":null,"lastTransitionTime":"2023-01-09T15:13:29Z"},{"type":"ContainersReady","status":"True","lastProbeTime":null,"lastTransitionTime":"2023-01-09T15:13:29Z"},{"type":"PodScheduled","status":"True","lastProbeTime":null,"lastTransitionTime":"2023-01-09T15:13:26Z"}],"hostIP":"172.18.0.7","podIP":"192.168.2.29","podIPs":[{"ip":"192.168.2.29"}],"startTime":"2023-01-09T15:13:26Z","containerStatuses":[{"name":"etcd","state":{"running":{"startedAt":"2023-01-09T15:13:27Z"}},"lastState":{},"ready":true,"restartCount":0,"image":"k8s.gcr.io/etcd:3.5.6-0","imageID":"k8s.gcr.io/etcd@sha256:dd75ec974b0a2a6f6bb47001ba09207976e625db898d1b16735528c009cb171c","containerID":"containerd://992cd8f7ec352b2ee28578959d51da6cb7cf1039b959e86a5bea3f177319d801","started":true},{"name":"sample-apiserver","state":{"running":{"startedAt":"2023-01-09T15:13:29Z"}},"lastState":{"terminated":{"exitCode":255,"reason":"Error","startedAt":"2023-01-09T15:13:27Z","finishedAt":"2023-01-09T15:13:27Z","containerID":"containerd://a922daccef0e8421cb248d62d6026d99d261c346d181889be636908a7efdce7d"}},"ready":true,"restartCount":1,"image":"k8s.gcr.io/e2e-test-images/sample-apiserver:1.17.5","imageID":"k8s.gcr.io/e2e-test-images/sample-apiserver@sha256:f9c93b92b6ff750b41a93c4e4fe0bfe384597aeb841e2539d5444815c55b2d8f","containerID":"containerd://c64c232d7c39488d8f836f8ee1176eedf7f716a0310ebf62dbce6e9c34d8dcd8","started":true}],"qosClass":"BestEffort"}}]} Jan 9 15:14:31.132: INFO: logs of sample-apiserver-deployment-7cdc9f5bf7-l7mbb/sample-apiserver (error: <nil>): W0109 15:13:29.658853 1 configmap_cafile_content.go:102] unable to load initial CA bundle for: 
"client-ca::kube-system::extension-apiserver-authentication::client-ca-file" due to: configmap "extension-apiserver-authentication" not found W0109 15:13:29.659016 1 configmap_cafile_content.go:102] unable to load initial CA bundle for: "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" due to: configmap "extension-apiserver-authentication" not found I0109 15:13:29.675500 1 plugins.go:158] Loaded 3 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,MutatingAdmissionWebhook,BanFlunder. I0109 15:13:29.675570 1 plugins.go:161] Loaded 1 validating admission controller(s) successfully in the following order: ValidatingAdmissionWebhook. I0109 15:13:29.679449 1 client.go:361] parsed scheme: "endpoint" I0109 15:13:29.679555 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0 <nil>}] I0109 15:13:29.680589 1 client.go:361] parsed scheme: "endpoint" I0109 15:13:29.680621 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0 <nil>}] I0109 15:13:29.682328 1 client.go:361] parsed scheme: "endpoint" I0109 15:13:29.682442 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0 <nil>}] I0109 15:13:29.683600 1 client.go:361] parsed scheme: "endpoint" I0109 15:13:29.683747 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0 <nil>}] I0109 15:13:29.733953 1 configmap_cafile_content.go:205] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file I0109 15:13:29.733959 1 configmap_cafile_content.go:205] Starting client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file I0109 15:13:29.734058 1 shared_informer.go:197] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file I0109 15:13:29.734078 1 shared_informer.go:197] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file I0109 15:13:29.734022 1 dynamic_serving_content.go:129] Starting serving-cert::/apiserver.local.config/certificates/tls.crt::/apiserver.local.config/certificates/tls.key I0109 15:13:29.734130 1 secure_serving.go:178] Serving securely on [::]:443 I0109 15:13:29.734537 1 tlsconfig.go:219] Starting DynamicServingCertificateController I0109 15:13:29.835082 1 shared_informer.go:204] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file I0109 15:13:29.835612 1 shared_informer.go:204] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file I0109 15:13:30.132894 1 client.go:361] parsed scheme: "endpoint" I0109 15:13:30.132988 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{http://127.0.0.1:2379 0 <nil>}] Jan 9 15:14:31.138: INFO: logs of sample-apiserver-deployment-7cdc9f5bf7-l7mbb/etcd (error: <nil>): {"level":"info","ts":"2023-01-09T15:13:27.696Z","caller":"etcdmain/etcd.go:73","msg":"Running: ","args":["/usr/local/bin/etcd","--listen-client-urls","http://127.0.0.1:2379","--advertise-client-urls","http://127.0.0.1:2379"]} {"level":"warn","ts":"2023-01-09T15:13:27.696Z","caller":"etcdmain/etcd.go:105","msg":"'data-dir' was empty; using default","data-dir":"default.etcd"} {"level":"info","ts":"2023-01-09T15:13:27.696Z","caller":"embed/etcd.go:124","msg":"configuring peer listeners","listen-peer-urls":["http://localhost:2380"]} 
{"level":"info","ts":"2023-01-09T15:13:27.700Z","caller":"embed/etcd.go:132","msg":"configuring client listeners","listen-client-urls":["http://127.0.0.1:2379"]} {"level":"info","ts":"2023-01-09T15:13:27.700Z","caller":"embed/etcd.go:306","msg":"starting an etcd server","etcd-version":"3.5.6","git-sha":"cecbe35ce","go-version":"go1.16.15","go-os":"linux","go-arch":"amd64","max-cpu-set":8,"max-cpu-available":8,"member-initialized":false,"name":"default","data-dir":"default.etcd","wal-dir":"","wal-dir-dedicated":"","member-dir":"default.etcd/member","force-new-cluster":false,"heartbeat-interval":"100ms","election-timeout":"1s","initial-election-tick-advance":true,"snapshot-count":100000,"max-wals":5,"max-snapshots":5,"snapshot-catchup-entries":5000,"initial-advertise-peer-urls":["http://localhost:2380"],"listen-peer-urls":["http://localhost:2380"],"advertise-client-urls":["http://127.0.0.1:2379"],"listen-client-urls":["http://127.0.0.1:2379"],"listen-metrics-urls":[],"cors":["*"],"host-whitelist":["*"],"initial-cluster":"default=http://localhost:2380","initial-cluster-state":"new","initial-cluster-token":"etcd-cluster","quota-backend-bytes":2147483648,"max-request-bytes":1572864,"max-concurrent-streams":4294967295,"pre-vote":true,"initial-corrupt-check":false,"corrupt-check-time-interval":"0s","compact-check-time-enabled":false,"compact-check-time-interval":"1m0s","auto-compaction-mode":"periodic","auto-compaction-retention":"0s","auto-compaction-interval":"0s","discovery-url":"","discovery-proxy":"","downgrade-check-interval":"5s"} {"level":"info","ts":"2023-01-09T15:13:27.705Z","caller":"etcdserver/backend.go:81","msg":"opened backend db","path":"default.etcd/member/snap/db","took":"3.711056ms"} {"level":"info","ts":"2023-01-09T15:13:27.712Z","caller":"etcdserver/raft.go:494","msg":"starting local member","local-member-id":"8e9e05c52164694d","cluster-id":"cdf818194e3a8c32"} {"level":"info","ts":"2023-01-09T15:13:27.712Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8e9e05c52164694d switched to configuration voters=()"} {"level":"info","ts":"2023-01-09T15:13:27.713Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8e9e05c52164694d became follower at term 0"} {"level":"info","ts":"2023-01-09T15:13:27.713Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"newRaft 8e9e05c52164694d [peers: [], term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0]"} {"level":"info","ts":"2023-01-09T15:13:27.713Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8e9e05c52164694d became follower at term 1"} {"level":"info","ts":"2023-01-09T15:13:27.713Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8e9e05c52164694d switched to configuration voters=(10276657743932975437)"} {"level":"warn","ts":"2023-01-09T15:13:27.717Z","caller":"auth/store.go:1234","msg":"simple token is not cryptographically signed"} {"level":"info","ts":"2023-01-09T15:13:27.722Z","caller":"mvcc/kvstore.go:393","msg":"kvstore restored","current-rev":1} {"level":"info","ts":"2023-01-09T15:13:27.724Z","caller":"etcdserver/quota.go:94","msg":"enabled backend quota with default value","quota-name":"v3-applier","quota-size-bytes":2147483648,"quota-size":"2.1 GB"} {"level":"info","ts":"2023-01-09T15:13:27.726Z","caller":"etcdserver/server.go:854","msg":"starting etcd server","local-member-id":"8e9e05c52164694d","local-server-version":"3.5.6","cluster-version":"to_be_decided"} {"level":"info","ts":"2023-01-09T15:13:27.727Z","caller":"etcdserver/server.go:738","msg":"started as 
single-node; fast-forwarding election ticks","local-member-id":"8e9e05c52164694d","forward-ticks":9,"forward-duration":"900ms","election-ticks":10,"election-timeout":"1s"} {"level":"info","ts":"2023-01-09T15:13:27.727Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"default.etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"} {"level":"info","ts":"2023-01-09T15:13:27.727Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"default.etcd/member/snap","suffix":"snap","max":5,"interval":"30s"} {"level":"info","ts":"2023-01-09T15:13:27.727Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"default.etcd/member/wal","suffix":"wal","max":5,"interval":"30s"} {"level":"info","ts":"2023-01-09T15:13:27.728Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8e9e05c52164694d switched to configuration voters=(10276657743932975437)"} {"level":"info","ts":"2023-01-09T15:13:27.728Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"cdf818194e3a8c32","local-member-id":"8e9e05c52164694d","added-peer-id":"8e9e05c52164694d","added-peer-peer-urls":["http://localhost:2380"]} {"level":"info","ts":"2023-01-09T15:13:27.729Z","caller":"embed/etcd.go:275","msg":"now serving peer/client/metrics","local-member-id":"8e9e05c52164694d","initial-advertise-peer-urls":["http://localhost:2380"],"listen-peer-urls":["http://localhost:2380"],"advertise-client-urls":["http://127.0.0.1:2379"],"listen-client-urls":["http://127.0.0.1:2379"],"listen-metrics-urls":[]} {"level":"info","ts":"2023-01-09T15:13:27.729Z","caller":"embed/etcd.go:586","msg":"serving peer traffic","address":"127.0.0.1:2380"} {"level":"info","ts":"2023-01-09T15:13:27.729Z","caller":"embed/etcd.go:558","msg":"cmux::serve","address":"127.0.0.1:2380"} {"level":"info","ts":"2023-01-09T15:13:28.414Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8e9e05c52164694d is starting a new election at term 1"} {"level":"info","ts":"2023-01-09T15:13:28.414Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8e9e05c52164694d became pre-candidate at term 1"} {"level":"info","ts":"2023-01-09T15:13:28.414Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8e9e05c52164694d received MsgPreVoteResp from 8e9e05c52164694d at term 1"} {"level":"info","ts":"2023-01-09T15:13:28.414Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8e9e05c52164694d became candidate at term 2"} {"level":"info","ts":"2023-01-09T15:13:28.414Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8e9e05c52164694d received MsgVoteResp from 8e9e05c52164694d at term 2"} {"level":"info","ts":"2023-01-09T15:13:28.414Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8e9e05c52164694d became leader at term 2"} {"level":"info","ts":"2023-01-09T15:13:28.414Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 8e9e05c52164694d elected leader 8e9e05c52164694d at term 2"} {"level":"info","ts":"2023-01-09T15:13:28.416Z","caller":"etcdserver/server.go:2563","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"} {"level":"info","ts":"2023-01-09T15:13:28.417Z","caller":"embed/serve.go:100","msg":"ready to serve client requests"} {"level":"info","ts":"2023-01-09T15:13:28.417Z","caller":"etcdserver/server.go:2054","msg":"published local member to cluster through raft","local-member-id":"8e9e05c52164694d","local-member-attributes":"{Name:default 
ClientURLs:[http://127.0.0.1:2379]}","request-path":"/0/members/8e9e05c52164694d/attributes","cluster-id":"cdf818194e3a8c32","publish-timeout":"7s"}
{"level":"info","ts":"2023-01-09T15:13:28.417Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
{"level":"info","ts":"2023-01-09T15:13:28.417Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
{"level":"info","ts":"2023-01-09T15:13:28.418Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"cdf818194e3a8c32","local-member-id":"8e9e05c52164694d","cluster-version":"3.5"}
{"level":"info","ts":"2023-01-09T15:13:28.418Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
{"level":"info","ts":"2023-01-09T15:13:28.418Z","caller":"etcdserver/server.go:2587","msg":"cluster version is updated","cluster-version":"3.5"}
{"level":"info","ts":"2023-01-09T15:13:28.418Z","caller":"embed/serve.go:146","msg":"serving client traffic insecurely; this is strongly discouraged!","address":"127.0.0.1:2379"}
Jan 9 15:14:31.138: FAIL: gave up waiting for apiservice wardle to come up successfully
Unexpected error:
    <*errors.errorString | 0xc0002462c0>: { s: "timed out waiting for the condition", }
    timed out waiting for the condition
occurred
Full Stack Trace
k8s.io/kubernetes/test/e2e/apimachinery.TestSampleAPIServer(0xc000751760, 0xc00277d398, {0xc000656640, 0x3})
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:384 +0x2f9a
k8s.io/kubernetes/test/e2e/apimachinery.glob..func1.3()
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:101 +0x128
k8s.io/kubernetes/test/e2e.RunE2ETests(0x24c66d7)
    _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:133 +0x697
k8s.io/kubernetes/test/e2e.TestE2E(0x243a8f9)
    _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:136 +0x19
testing.tRunner(0xc000277d40, 0x735e880)
    /usr/local/go/src/testing/testing.go:1259 +0x102
created by testing.(*T).Run
    /usr/local/go/src/testing/testing.go:1306 +0x35a
[AfterEach] [sig-api-machinery] Aggregator
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:68
[AfterEach] [sig-api-machinery] Aggregator
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 9 15:14:31.473: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "aggregator-1562" for this suite.
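The Available=False / FailedDiscoveryCheck condition recorded earlier means the kube-apiserver timed out reaching the aggregated sample API server through its service, even though the pod itself was Running and serving on :443. A minimal manual check against the workload cluster, while the test namespace still exists (assuming the same /tmp/kubeconfig, and assuming the APIService object is named v1alpha1.wardle.example.com after the group/version in the failure message), would be:

  kubectl --kubeconfig=/tmp/kubeconfig get apiservice v1alpha1.wardle.example.com -o jsonpath='{.status.conditions}'
  kubectl --kubeconfig=/tmp/kubeconfig -n aggregator-1562 get pods,svc -o wide
  kubectl --kubeconfig=/tmp/kubeconfig -n aggregator-1562 logs deployment/sample-apiserver-deployment -c sample-apiserver

Since the pod logs above show the sample apiserver up and caches synced, the discovery timeout points at reachability between the control plane and the service address in the FailedDiscoveryCheck message rather than at a crashlooping backend.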
• Failure [65.218 seconds]
[sig-api-machinery] Aggregator
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance] [It]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
Jan 9 15:14:31.138: gave up waiting for apiservice wardle to come up successfully
Unexpected error:
    <*errors.errorString | 0xc0002462c0>: { s: "timed out waiting for the condition", }
    timed out waiting for the condition
occurred
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:384
------------------------------
{"msg":"FAILED [sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","total":-1,"completed":11,"skipped":284,"failed":6,"failures":["[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
S
------------------------------
[BeforeEach] [sig-apps] StatefulSet
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 9 15:14:31.510: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:94
[BeforeEach] Basic StatefulSet functionality [StatefulSetBasic]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:109
STEP: Creating service test in namespace statefulset-8325
[It] should perform canary updates and phased rolling updates of template modifications [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating a new StatefulSet
Jan 9 15:14:31.571: INFO: Found 0 stateful pods, waiting for 3
Jan 9 15:14:41.581: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Jan 9 15:14:41.581: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Jan 9 15:14:41.581: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Updating stateful set template: update image from k8s.gcr.io/e2e-test-images/httpd:2.4.38-2 to k8s.gcr.io/e2e-test-images/httpd:2.4.39-2
Jan 9 15:14:41.611: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Not applying an update when the partition is greater than the number of replicas
STEP: Performing a canary update
Jan 9 15:14:51.650: INFO: Updating stateful set ss2
Jan 9 15:14:51.657: INFO: Waiting for Pod statefulset-8325/ss2-2 to have revision ss2-5f8764d585 update revision ss2-57bbdd95cb
STEP: Restoring Pods to the correct revision when they are deleted
Jan 9 15:15:01.695: INFO: Found 1 stateful pods, waiting for 3
Jan 9 15:15:11.701: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Jan 9 15:15:11.702: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Jan 9 15:15:11.702: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Performing a phased rolling update
Jan 9 15:15:11.733: INFO: Updating stateful set ss2
Jan 9 15:15:11.744: INFO: Waiting for Pod statefulset-8325/ss2-1 to have revision ss2-5f8764d585 update revision ss2-57bbdd95cb
Jan 9 15:15:21.780: INFO: Updating stateful set ss2
Jan 9 15:15:21.789: INFO: Waiting for StatefulSet statefulset-8325/ss2 to complete update
Jan 9 15:15:21.789: INFO: Waiting for Pod statefulset-8325/ss2-0 to have revision ss2-5f8764d585 update revision ss2-57bbdd95cb
[AfterEach] Basic StatefulSet functionality [StatefulSetBasic]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:120
Jan 9 15:15:31.798: INFO: Deleting all statefulset in ns statefulset-8325
Jan 9 15:15:31.802: INFO: Scaling statefulset ss2 to 0
Jan 9 15:15:41.822: INFO: Waiting for statefulset status.replicas updated to 0
Jan 9 15:15:41.827: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 9 15:15:41.845: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-8325" for this suite.
•
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]","total":-1,"completed":12,"skipped":285,"failed":6,"failures":["[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 9 15:15:41.924: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should include custom resource definition resources in discovery documents [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: fetching the /apis discovery document
STEP: finding the apiextensions.k8s.io API group in the /apis discovery document
STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis discovery document
STEP: fetching the /apis/apiextensions.k8s.io discovery document
STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis/apiextensions.k8s.io discovery document
STEP: fetching the /apis/apiextensions.k8s.io/v1 discovery document
STEP: finding customresourcedefinitions resources in the /apis/apiextensions.k8s.io/v1 discovery document
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 9 15:15:41.967: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-3723" for this suite.
•
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance]","total":-1,"completed":13,"skipped":315,"failed":6,"failures":["[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
{"msg":"FAILED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","total":-1,"completed":11,"skipped":276,"failed":1,"failures":["[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]"]}
[BeforeEach] [sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 9 15:09:41.655: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should function for intra-pod communication: http [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Performing setup for networking test in namespace pod-network-test-1438
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Jan 9 15:09:41.721: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Jan 9 15:09:41.929: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Jan 9 15:09:43.939: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jan 9 15:09:45.939: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jan 9 15:09:47.978: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jan 9 15:09:49.938: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jan 9 15:09:51.937: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jan 9 15:09:53.938: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jan 9 15:09:55.939: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jan 9 15:09:57.934: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jan 9 15:09:59.937: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jan 9 15:10:01.937: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jan 9 15:10:03.936: INFO: The
status of Pod netserver-0 is Running (Ready = true) Jan 9 15:10:03.948: INFO: The status of Pod netserver-1 is Running (Ready = true) Jan 9 15:10:03.961: INFO: The status of Pod netserver-2 is Running (Ready = true) Jan 9 15:10:03.975: INFO: The status of Pod netserver-3 is Running (Ready = true) �[1mSTEP�[0m: Creating test pods Jan 9 15:10:06.010: INFO: Setting MaxTries for pod polling to 46 for networking test based on endpoint count 4 Jan 9 15:10:06.010: INFO: Breadth first check of 192.168.0.16 on host 172.18.0.4... Jan 9 15:10:06.016: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://192.168.0.17:9080/dial?request=hostname&protocol=http&host=192.168.0.16&port=8083&tries=1'] Namespace:pod-network-test-1438 PodName:test-container-pod ContainerName:webserver Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jan 9 15:10:06.016: INFO: >>> kubeConfig: /tmp/kubeconfig Jan 9 15:10:06.017: INFO: ExecWithOptions: Clientset creation Jan 9 15:10:06.017: INFO: ExecWithOptions: execute(POST https://172.18.0.3:6443/api/v1/namespaces/pod-network-test-1438/pods/test-container-pod/exec?command=%2Fbin%2Fsh&command=-c&command=curl+-g+-q+-s+%27http%3A%2F%2F192.168.0.17%3A9080%2Fdial%3Frequest%3Dhostname%26protocol%3Dhttp%26host%3D192.168.0.16%26port%3D8083%26tries%3D1%27&container=webserver&container=webserver&stderr=true&stdout=true %!s(MISSING)) Jan 9 15:10:06.157: INFO: Waiting for responses: map[] Jan 9 15:10:06.157: INFO: reached 192.168.0.16 after 0/1 tries Jan 9 15:10:06.157: INFO: Breadth first check of 192.168.1.26 on host 172.18.0.6... Jan 9 15:10:06.162: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://192.168.0.17:9080/dial?request=hostname&protocol=http&host=192.168.1.26&port=8083&tries=1'] Namespace:pod-network-test-1438 PodName:test-container-pod ContainerName:webserver Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jan 9 15:10:06.162: INFO: >>> kubeConfig: /tmp/kubeconfig Jan 9 15:10:06.163: INFO: ExecWithOptions: Clientset creation Jan 9 15:10:06.163: INFO: ExecWithOptions: execute(POST https://172.18.0.3:6443/api/v1/namespaces/pod-network-test-1438/pods/test-container-pod/exec?command=%2Fbin%2Fsh&command=-c&command=curl+-g+-q+-s+%27http%3A%2F%2F192.168.0.17%3A9080%2Fdial%3Frequest%3Dhostname%26protocol%3Dhttp%26host%3D192.168.1.26%26port%3D8083%26tries%3D1%27&container=webserver&container=webserver&stderr=true&stdout=true %!s(MISSING)) Jan 9 15:10:06.309: INFO: Waiting for responses: map[] Jan 9 15:10:06.309: INFO: reached 192.168.1.26 after 0/1 tries Jan 9 15:10:06.309: INFO: Breadth first check of 192.168.6.15 on host 172.18.0.5... 
Jan 9 15:10:06.315: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://192.168.0.17:9080/dial?request=hostname&protocol=http&host=192.168.6.15&port=8083&tries=1'] Namespace:pod-network-test-1438 PodName:test-container-pod ContainerName:webserver Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jan 9 15:10:06.315: INFO: >>> kubeConfig: /tmp/kubeconfig Jan 9 15:10:06.317: INFO: ExecWithOptions: Clientset creation Jan 9 15:10:06.317: INFO: ExecWithOptions: execute(POST https://172.18.0.3:6443/api/v1/namespaces/pod-network-test-1438/pods/test-container-pod/exec?command=%2Fbin%2Fsh&command=-c&command=curl+-g+-q+-s+%27http%3A%2F%2F192.168.0.17%3A9080%2Fdial%3Frequest%3Dhostname%26protocol%3Dhttp%26host%3D192.168.6.15%26port%3D8083%26tries%3D1%27&container=webserver&container=webserver&stderr=true&stdout=true %!s(MISSING)) Jan 9 15:10:06.445: INFO: Waiting for responses: map[] Jan 9 15:10:06.445: INFO: reached 192.168.6.15 after 0/1 tries Jan 9 15:10:06.445: INFO: Breadth first check of 192.168.2.23 on host 172.18.0.7... Jan 9 15:10:06.451: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://192.168.0.17:9080/dial?request=hostname&protocol=http&host=192.168.2.23&port=8083&tries=1'] Namespace:pod-network-test-1438 PodName:test-container-pod ContainerName:webserver Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jan 9 15:10:06.451: INFO: >>> kubeConfig: /tmp/kubeconfig Jan 9 15:10:06.452: INFO: ExecWithOptions: Clientset creation Jan 9 15:10:06.452: INFO: ExecWithOptions: execute(POST https://172.18.0.3:6443/api/v1/namespaces/pod-network-test-1438/pods/test-container-pod/exec?command=%2Fbin%2Fsh&command=-c&command=curl+-g+-q+-s+%27http%3A%2F%2F192.168.0.17%3A9080%2Fdial%3Frequest%3Dhostname%26protocol%3Dhttp%26host%3D192.168.2.23%26port%3D8083%26tries%3D1%27&container=webserver&container=webserver&stderr=true&stdout=true %!s(MISSING)) Jan 9 15:10:11.593: INFO: Waiting for responses: map[netserver-3:{}] Jan 9 15:10:13.593: INFO: Output of kubectl describe pod pod-network-test-1438/netserver-0: Jan 9 15:10:13.594: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=pod-network-test-1438 describe pod netserver-0 --namespace=pod-network-test-1438' Jan 9 15:10:13.849: INFO: stderr: "" Jan 9 15:10:13.849: INFO: stdout: "Name: netserver-0\nNamespace: pod-network-test-1438\nPriority: 0\nNode: k8s-upgrade-and-conformance-viu2kk-md-0-c5nrs-77b68b5644-2bgsv/172.18.0.4\nStart Time: Mon, 09 Jan 2023 15:09:41 +0000\nLabels: selector-46d8c5bb-7df4-48f5-b086-014499b9e68f=true\nAnnotations: <none>\nStatus: Running\nIP: 192.168.0.16\nIPs:\n IP: 192.168.0.16\nContainers:\n webserver:\n Container ID: containerd://2d80836850723478af5ea5f0967597306a94141c9c36b864bf629267762d4f0a\n Image: k8s.gcr.io/e2e-test-images/agnhost:2.39\n Image ID: k8s.gcr.io/e2e-test-images/agnhost@sha256:7e8bdd271312fd25fc5ff5a8f04727be84044eb3d7d8d03611972a6752e2e11e\n Ports: 8083/TCP, 8081/UDP\n Host Ports: 0/TCP, 0/UDP\n Args:\n netexec\n --http-port=8083\n --udp-port=8081\n State: Running\n Started: Mon, 09 Jan 2023 15:09:43 +0000\n Ready: True\n Restart Count: 0\n Liveness: http-get http://:8083/healthz delay=10s timeout=30s period=10s #success=1 #failure=3\n Readiness: http-get http://:8083/healthz delay=10s timeout=30s period=10s #success=1 #failure=3\n Environment: <none>\n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-r7gmf (ro)\nConditions:\n Type Status\n Initialized True 
\n Ready True \n ContainersReady True \n PodScheduled True \nVolumes:\n kube-api-access-r7gmf:\n Type: Projected (a volume that contains injected data from multiple sources)\n TokenExpirationSeconds: 3607\n ConfigMapName: kube-root-ca.crt\n ConfigMapOptional: <nil>\n DownwardAPI: true\nQoS Class: BestEffort\nNode-Selectors: kubernetes.io/hostname=k8s-upgrade-and-conformance-viu2kk-md-0-c5nrs-77b68b5644-2bgsv\nTolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s\n node.kubernetes.io/unreachable:NoExecute op=Exists for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled 32s default-scheduler Successfully assigned pod-network-test-1438/netserver-0 to k8s-upgrade-and-conformance-viu2kk-md-0-c5nrs-77b68b5644-2bgsv\n Normal Pulled 31s kubelet Container image \"k8s.gcr.io/e2e-test-images/agnhost:2.39\" already present on machine\n Normal Created 31s kubelet Created container webserver\n Normal Started 30s kubelet Started container webserver\n" Jan 9 15:10:13.849: INFO: Name: netserver-0 Namespace: pod-network-test-1438 Priority: 0 Node: k8s-upgrade-and-conformance-viu2kk-md-0-c5nrs-77b68b5644-2bgsv/172.18.0.4 Start Time: Mon, 09 Jan 2023 15:09:41 +0000 Labels: selector-46d8c5bb-7df4-48f5-b086-014499b9e68f=true Annotations: <none> Status: Running IP: 192.168.0.16 IPs: IP: 192.168.0.16 Containers: webserver: Container ID: containerd://2d80836850723478af5ea5f0967597306a94141c9c36b864bf629267762d4f0a Image: k8s.gcr.io/e2e-test-images/agnhost:2.39 Image ID: k8s.gcr.io/e2e-test-images/agnhost@sha256:7e8bdd271312fd25fc5ff5a8f04727be84044eb3d7d8d03611972a6752e2e11e Ports: 8083/TCP, 8081/UDP Host Ports: 0/TCP, 0/UDP Args: netexec --http-port=8083 --udp-port=8081 State: Running Started: Mon, 09 Jan 2023 15:09:43 +0000 Ready: True Restart Count: 0 Liveness: http-get http://:8083/healthz delay=10s timeout=30s period=10s #success=1 #failure=3 Readiness: http-get http://:8083/healthz delay=10s timeout=30s period=10s #success=1 #failure=3 Environment: <none> Mounts: /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-r7gmf (ro) Conditions: Type Status Initialized True Ready True ContainersReady True PodScheduled True Volumes: kube-api-access-r7gmf: Type: Projected (a volume that contains injected data from multiple sources) TokenExpirationSeconds: 3607 ConfigMapName: kube-root-ca.crt ConfigMapOptional: <nil> DownwardAPI: true QoS Class: BestEffort Node-Selectors: kubernetes.io/hostname=k8s-upgrade-and-conformance-viu2kk-md-0-c5nrs-77b68b5644-2bgsv Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s node.kubernetes.io/unreachable:NoExecute op=Exists for 300s Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Scheduled 32s default-scheduler Successfully assigned pod-network-test-1438/netserver-0 to k8s-upgrade-and-conformance-viu2kk-md-0-c5nrs-77b68b5644-2bgsv Normal Pulled 31s kubelet Container image "k8s.gcr.io/e2e-test-images/agnhost:2.39" already present on machine Normal Created 31s kubelet Created container webserver Normal Started 30s kubelet Started container webserver Jan 9 15:10:13.849: INFO: Output of kubectl describe pod pod-network-test-1438/netserver-1: Jan 9 15:10:13.849: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=pod-network-test-1438 describe pod netserver-1 --namespace=pod-network-test-1438' Jan 9 15:10:14.120: INFO: stderr: "" Jan 9 15:10:14.120: INFO: stdout: "Name: netserver-1\nNamespace: pod-network-test-1438\nPriority: 0\nNode: 
k8s-upgrade-and-conformance-viu2kk-md-0-c5nrs-77b68b5644-t8jmv/172.18.0.6\nStart Time: Mon, 09 Jan 2023 15:09:41 +0000\nLabels: selector-46d8c5bb-7df4-48f5-b086-014499b9e68f=true\nAnnotations: <none>\nStatus: Running\nIP: 192.168.1.26\nIPs:\n IP: 192.168.1.26\nContainers:\n webserver:\n Container ID: containerd://5f8ce787ac5ac166716257885d465e731a659d3e5891734cc269f98d1da5741e\n Image: k8s.gcr.io/e2e-test-images/agnhost:2.39\n Image ID: k8s.gcr.io/e2e-test-images/agnhost@sha256:7e8bdd271312fd25fc5ff5a8f04727be84044eb3d7d8d03611972a6752e2e11e\n Ports: 8083/TCP, 8081/UDP\n Host Ports: 0/TCP, 0/UDP\n Args:\n netexec\n --http-port=8083\n --udp-port=8081\n State: Running\n Started: Mon, 09 Jan 2023 15:09:43 +0000\n Ready: True\n Restart Count: 0\n Liveness: http-get http://:8083/healthz delay=10s timeout=30s period=10s #success=1 #failure=3\n Readiness: http-get http://:8083/healthz delay=10s timeout=30s period=10s #success=1 #failure=3\n Environment: <none>\n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-qvrd5 (ro)\nConditions:\n Type Status\n Initialized True \n Ready True \n ContainersReady True \n PodScheduled True \nVolumes:\n kube-api-access-qvrd5:\n Type: Projected (a volume that contains injected data from multiple sources)\n TokenExpirationSeconds: 3607\n ConfigMapName: kube-root-ca.crt\n ConfigMapOptional: <nil>\n DownwardAPI: true\nQoS Class: BestEffort\nNode-Selectors: kubernetes.io/hostname=k8s-upgrade-and-conformance-viu2kk-md-0-c5nrs-77b68b5644-t8jmv\nTolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s\n node.kubernetes.io/unreachable:NoExecute op=Exists for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled 32s default-scheduler Successfully assigned pod-network-test-1438/netserver-1 to k8s-upgrade-and-conformance-viu2kk-md-0-c5nrs-77b68b5644-t8jmv\n Normal Pulled 32s kubelet Container image \"k8s.gcr.io/e2e-test-images/agnhost:2.39\" already present on machine\n Normal Created 32s kubelet Created container webserver\n Normal Started 31s kubelet Started container webserver\n" Jan 9 15:10:14.121: INFO: Name: netserver-1 Namespace: pod-network-test-1438 Priority: 0 Node: k8s-upgrade-and-conformance-viu2kk-md-0-c5nrs-77b68b5644-t8jmv/172.18.0.6 Start Time: Mon, 09 Jan 2023 15:09:41 +0000 Labels: selector-46d8c5bb-7df4-48f5-b086-014499b9e68f=true Annotations: <none> Status: Running IP: 192.168.1.26 IPs: IP: 192.168.1.26 Containers: webserver: Container ID: containerd://5f8ce787ac5ac166716257885d465e731a659d3e5891734cc269f98d1da5741e Image: k8s.gcr.io/e2e-test-images/agnhost:2.39 Image ID: k8s.gcr.io/e2e-test-images/agnhost@sha256:7e8bdd271312fd25fc5ff5a8f04727be84044eb3d7d8d03611972a6752e2e11e Ports: 8083/TCP, 8081/UDP Host Ports: 0/TCP, 0/UDP Args: netexec --http-port=8083 --udp-port=8081 State: Running Started: Mon, 09 Jan 2023 15:09:43 +0000 Ready: True Restart Count: 0 Liveness: http-get http://:8083/healthz delay=10s timeout=30s period=10s #success=1 #failure=3 Readiness: http-get http://:8083/healthz delay=10s timeout=30s period=10s #success=1 #failure=3 Environment: <none> Mounts: /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-qvrd5 (ro) Conditions: Type Status Initialized True Ready True ContainersReady True PodScheduled True Volumes: kube-api-access-qvrd5: Type: Projected (a volume that contains injected data from multiple sources) TokenExpirationSeconds: 3607 ConfigMapName: kube-root-ca.crt ConfigMapOptional: <nil> DownwardAPI: true QoS Class: BestEffort 
Node-Selectors: kubernetes.io/hostname=k8s-upgrade-and-conformance-viu2kk-md-0-c5nrs-77b68b5644-t8jmv Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s node.kubernetes.io/unreachable:NoExecute op=Exists for 300s Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Scheduled 32s default-scheduler Successfully assigned pod-network-test-1438/netserver-1 to k8s-upgrade-and-conformance-viu2kk-md-0-c5nrs-77b68b5644-t8jmv Normal Pulled 32s kubelet Container image "k8s.gcr.io/e2e-test-images/agnhost:2.39" already present on machine Normal Created 32s kubelet Created container webserver Normal Started 31s kubelet Started container webserver Jan 9 15:10:14.121: INFO: Output of kubectl describe pod pod-network-test-1438/netserver-2: Jan 9 15:10:14.121: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=pod-network-test-1438 describe pod netserver-2 --namespace=pod-network-test-1438' Jan 9 15:10:14.393: INFO: stderr: "" Jan 9 15:10:14.393: INFO: stdout: "Name: netserver-2\nNamespace: pod-network-test-1438\nPriority: 0\nNode: k8s-upgrade-and-conformance-viu2kk-worker-1r6syi/172.18.0.5\nStart Time: Mon, 09 Jan 2023 15:09:41 +0000\nLabels: selector-46d8c5bb-7df4-48f5-b086-014499b9e68f=true\nAnnotations: <none>\nStatus: Running\nIP: 192.168.6.15\nIPs:\n IP: 192.168.6.15\nContainers:\n webserver:\n Container ID: containerd://f31aa29025fd7440186d2dd5be2dd92e4ddf5167da64a08733a2d8d81046d60d\n Image: k8s.gcr.io/e2e-test-images/agnhost:2.39\n Image ID: k8s.gcr.io/e2e-test-images/agnhost@sha256:7e8bdd271312fd25fc5ff5a8f04727be84044eb3d7d8d03611972a6752e2e11e\n Ports: 8083/TCP, 8081/UDP\n Host Ports: 0/TCP, 0/UDP\n Args:\n netexec\n --http-port=8083\n --udp-port=8081\n State: Running\n Started: Mon, 09 Jan 2023 15:09:43 +0000\n Ready: True\n Restart Count: 0\n Liveness: http-get http://:8083/healthz delay=10s timeout=30s period=10s #success=1 #failure=3\n Readiness: http-get http://:8083/healthz delay=10s timeout=30s period=10s #success=1 #failure=3\n Environment: <none>\n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-wp2hw (ro)\nConditions:\n Type Status\n Initialized True \n Ready True \n ContainersReady True \n PodScheduled True \nVolumes:\n kube-api-access-wp2hw:\n Type: Projected (a volume that contains injected data from multiple sources)\n TokenExpirationSeconds: 3607\n ConfigMapName: kube-root-ca.crt\n ConfigMapOptional: <nil>\n DownwardAPI: true\nQoS Class: BestEffort\nNode-Selectors: kubernetes.io/hostname=k8s-upgrade-and-conformance-viu2kk-worker-1r6syi\nTolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s\n node.kubernetes.io/unreachable:NoExecute op=Exists for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled 32s default-scheduler Successfully assigned pod-network-test-1438/netserver-2 to k8s-upgrade-and-conformance-viu2kk-worker-1r6syi\n Normal Pulled 32s kubelet Container image \"k8s.gcr.io/e2e-test-images/agnhost:2.39\" already present on machine\n Normal Created 32s kubelet Created container webserver\n Normal Started 31s kubelet Started container webserver\n" Jan 9 15:10:14.393: INFO: Name: netserver-2 Namespace: pod-network-test-1438 Priority: 0 Node: k8s-upgrade-and-conformance-viu2kk-worker-1r6syi/172.18.0.5 Start Time: Mon, 09 Jan 2023 15:09:41 +0000 Labels: selector-46d8c5bb-7df4-48f5-b086-014499b9e68f=true Annotations: <none> Status: Running IP: 192.168.6.15 IPs: IP: 192.168.6.15 Containers: webserver: Container ID: 
containerd://f31aa29025fd7440186d2dd5be2dd92e4ddf5167da64a08733a2d8d81046d60d Image: k8s.gcr.io/e2e-test-images/agnhost:2.39 Image ID: k8s.gcr.io/e2e-test-images/agnhost@sha256:7e8bdd271312fd25fc5ff5a8f04727be84044eb3d7d8d03611972a6752e2e11e Ports: 8083/TCP, 8081/UDP Host Ports: 0/TCP, 0/UDP Args: netexec --http-port=8083 --udp-port=8081 State: Running Started: Mon, 09 Jan 2023 15:09:43 +0000 Ready: True Restart Count: 0 Liveness: http-get http://:8083/healthz delay=10s timeout=30s period=10s #success=1 #failure=3 Readiness: http-get http://:8083/healthz delay=10s timeout=30s period=10s #success=1 #failure=3 Environment: <none> Mounts: /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-wp2hw (ro) Conditions: Type Status Initialized True Ready True ContainersReady True PodScheduled True Volumes: kube-api-access-wp2hw: Type: Projected (a volume that contains injected data from multiple sources) TokenExpirationSeconds: 3607 ConfigMapName: kube-root-ca.crt ConfigMapOptional: <nil> DownwardAPI: true QoS Class: BestEffort Node-Selectors: kubernetes.io/hostname=k8s-upgrade-and-conformance-viu2kk-worker-1r6syi Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s node.kubernetes.io/unreachable:NoExecute op=Exists for 300s Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Scheduled 32s default-scheduler Successfully assigned pod-network-test-1438/netserver-2 to k8s-upgrade-and-conformance-viu2kk-worker-1r6syi Normal Pulled 32s kubelet Container image "k8s.gcr.io/e2e-test-images/agnhost:2.39" already present on machine Normal Created 32s kubelet Created container webserver Normal Started 31s kubelet Started container webserver Jan 9 15:10:14.393: INFO: Output of kubectl describe pod pod-network-test-1438/netserver-3: Jan 9 15:10:14.394: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=pod-network-test-1438 describe pod netserver-3 --namespace=pod-network-test-1438' Jan 9 15:10:14.627: INFO: stderr: "" Jan 9 15:10:14.628: INFO: stdout: "Name: netserver-3\nNamespace: pod-network-test-1438\nPriority: 0\nNode: k8s-upgrade-and-conformance-viu2kk-worker-qb9e9b/172.18.0.7\nStart Time: Mon, 09 Jan 2023 15:09:41 +0000\nLabels: selector-46d8c5bb-7df4-48f5-b086-014499b9e68f=true\nAnnotations: <none>\nStatus: Running\nIP: 192.168.2.23\nIPs:\n IP: 192.168.2.23\nContainers:\n webserver:\n Container ID: containerd://02e7a4545ef6a352576f956fa805ade71eba123b68947fea90b5feb9c27423ac\n Image: k8s.gcr.io/e2e-test-images/agnhost:2.39\n Image ID: k8s.gcr.io/e2e-test-images/agnhost@sha256:7e8bdd271312fd25fc5ff5a8f04727be84044eb3d7d8d03611972a6752e2e11e\n Ports: 8083/TCP, 8081/UDP\n Host Ports: 0/TCP, 0/UDP\n Args:\n netexec\n --http-port=8083\n --udp-port=8081\n State: Running\n Started: Mon, 09 Jan 2023 15:09:43 +0000\n Ready: True\n Restart Count: 0\n Liveness: http-get http://:8083/healthz delay=10s timeout=30s period=10s #success=1 #failure=3\n Readiness: http-get http://:8083/healthz delay=10s timeout=30s period=10s #success=1 #failure=3\n Environment: <none>\n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-np2px (ro)\nConditions:\n Type Status\n Initialized True \n Ready True \n ContainersReady True \n PodScheduled True \nVolumes:\n kube-api-access-np2px:\n Type: Projected (a volume that contains injected data from multiple sources)\n TokenExpirationSeconds: 3607\n ConfigMapName: kube-root-ca.crt\n ConfigMapOptional: <nil>\n DownwardAPI: true\nQoS Class: BestEffort\nNode-Selectors: 
kubernetes.io/hostname=k8s-upgrade-and-conformance-viu2kk-worker-qb9e9b\nTolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s\n node.kubernetes.io/unreachable:NoExecute op=Exists for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled 32s default-scheduler Successfully assigned pod-network-test-1438/netserver-3 to k8s-upgrade-and-conformance-viu2kk-worker-qb9e9b\n Normal Pulled 32s kubelet Container image \"k8s.gcr.io/e2e-test-images/agnhost:2.39\" already present on machine\n Normal Created 32s kubelet Created container webserver\n Normal Started 31s kubelet Started container webserver\n" Jan 9 15:10:14.628: INFO: Name: netserver-3 Namespace: pod-network-test-1438 Priority: 0 Node: k8s-upgrade-and-conformance-viu2kk-worker-qb9e9b/172.18.0.7 Start Time: Mon, 09 Jan 2023 15:09:41 +0000 Labels: selector-46d8c5bb-7df4-48f5-b086-014499b9e68f=true Annotations: <none> Status: Running IP: 192.168.2.23 IPs: IP: 192.168.2.23 Containers: webserver: Container ID: containerd://02e7a4545ef6a352576f956fa805ade71eba123b68947fea90b5feb9c27423ac Image: k8s.gcr.io/e2e-test-images/agnhost:2.39 Image ID: k8s.gcr.io/e2e-test-images/agnhost@sha256:7e8bdd271312fd25fc5ff5a8f04727be84044eb3d7d8d03611972a6752e2e11e Ports: 8083/TCP, 8081/UDP Host Ports: 0/TCP, 0/UDP Args: netexec --http-port=8083 --udp-port=8081 State: Running Started: Mon, 09 Jan 2023 15:09:43 +0000 Ready: True Restart Count: 0 Liveness: http-get http://:8083/healthz delay=10s timeout=30s period=10s #success=1 #failure=3 Readiness: http-get http://:8083/healthz delay=10s timeout=30s period=10s #success=1 #failure=3 Environment: <none> Mounts: /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-np2px (ro) Conditions: Type Status Initialized True Ready True ContainersReady True PodScheduled True Volumes: kube-api-access-np2px: Type: Projected (a volume that contains injected data from multiple sources) TokenExpirationSeconds: 3607 ConfigMapName: kube-root-ca.crt ConfigMapOptional: <nil> DownwardAPI: true QoS Class: BestEffort Node-Selectors: kubernetes.io/hostname=k8s-upgrade-and-conformance-viu2kk-worker-qb9e9b Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s node.kubernetes.io/unreachable:NoExecute op=Exists for 300s Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Scheduled 32s default-scheduler Successfully assigned pod-network-test-1438/netserver-3 to k8s-upgrade-and-conformance-viu2kk-worker-qb9e9b Normal Pulled 32s kubelet Container image "k8s.gcr.io/e2e-test-images/agnhost:2.39" already present on machine Normal Created 32s kubelet Created container webserver Normal Started 31s kubelet Started container webserver Jan 9 15:10:14.628: INFO: encountered error during dial (did not find expected responses... Tries 1 Command curl -g -q -s 'http://192.168.0.17:9080/dial?request=hostname&protocol=http&host=192.168.2.23&port=8083&tries=1' retrieved map[] expected map[netserver-3:{}]) Jan 9 15:10:14.628: INFO: ...failed...will try again in next pass Jan 9 15:10:14.628: INFO: Going to retry 1 out of 4 pods.... Jan 9 15:10:14.628: INFO: Doublechecking 1 pods in host 172.18.0.7 which weren't seen the first time. 
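Every probe to netserver-3 (192.168.2.23 on node 172.18.0.7) retrieved map[] while the other three netservers answered on the first try. A minimal way to replay the same probe by hand, assuming the test pods are still running, is the kubectl exec equivalent of the framework's ExecWithOptions call:

  kubectl --kubeconfig=/tmp/kubeconfig -n pod-network-test-1438 exec test-container-pod -c webserver -- \
    /bin/sh -c "curl -g -q -s 'http://192.168.0.17:9080/dial?request=hostname&protocol=http&host=192.168.2.23&port=8083&tries=1'"

An empty map[] in the response, as logged above, means the dial helper on 192.168.0.17 got no answer from 192.168.2.23:8083 within its timeout, so the retries that follow keep targeting that one pod.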
Jan 9 15:10:14.628: INFO: Now attempting to probe pod [[[ 192.168.2.23 ]]] Jan 9 15:10:14.638: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://192.168.0.17:9080/dial?request=hostname&protocol=http&host=192.168.2.23&port=8083&tries=1'] Namespace:pod-network-test-1438 PodName:test-container-pod ContainerName:webserver Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jan 9 15:10:14.638: INFO: >>> kubeConfig: /tmp/kubeconfig Jan 9 15:10:14.639: INFO: ExecWithOptions: Clientset creation Jan 9 15:10:14.639: INFO: ExecWithOptions: execute(POST https://172.18.0.3:6443/api/v1/namespaces/pod-network-test-1438/pods/test-container-pod/exec?command=%2Fbin%2Fsh&command=-c&command=curl+-g+-q+-s+%27http%3A%2F%2F192.168.0.17%3A9080%2Fdial%3Frequest%3Dhostname%26protocol%3Dhttp%26host%3D192.168.2.23%26port%3D8083%26tries%3D1%27&container=webserver&container=webserver&stderr=true&stdout=true %!s(MISSING)) Jan 9 15:10:19.825: INFO: Waiting for responses: map[netserver-3:{}] Jan 9 15:10:21.834: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://192.168.0.17:9080/dial?request=hostname&protocol=http&host=192.168.2.23&port=8083&tries=1'] Namespace:pod-network-test-1438 PodName:test-container-pod ContainerName:webserver Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jan 9 15:10:21.834: INFO: >>> kubeConfig: /tmp/kubeconfig Jan 9 15:10:21.836: INFO: ExecWithOptions: Clientset creation Jan 9 15:10:21.836: INFO: ExecWithOptions: execute(POST https://172.18.0.3:6443/api/v1/namespaces/pod-network-test-1438/pods/test-container-pod/exec?command=%2Fbin%2Fsh&command=-c&command=curl+-g+-q+-s+%27http%3A%2F%2F192.168.0.17%3A9080%2Fdial%3Frequest%3Dhostname%26protocol%3Dhttp%26host%3D192.168.2.23%26port%3D8083%26tries%3D1%27&container=webserver&container=webserver&stderr=true&stdout=true %!s(MISSING)) Jan 9 15:10:26.989: INFO: Waiting for responses: map[netserver-3:{}] Jan 9 15:10:28.997: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://192.168.0.17:9080/dial?request=hostname&protocol=http&host=192.168.2.23&port=8083&tries=1'] Namespace:pod-network-test-1438 PodName:test-container-pod ContainerName:webserver Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jan 9 15:10:28.998: INFO: >>> kubeConfig: /tmp/kubeconfig Jan 9 15:10:28.999: INFO: ExecWithOptions: Clientset creation Jan 9 15:10:28.999: INFO: ExecWithOptions: execute(POST https://172.18.0.3:6443/api/v1/namespaces/pod-network-test-1438/pods/test-container-pod/exec?command=%2Fbin%2Fsh&command=-c&command=curl+-g+-q+-s+%27http%3A%2F%2F192.168.0.17%3A9080%2Fdial%3Frequest%3Dhostname%26protocol%3Dhttp%26host%3D192.168.2.23%26port%3D8083%26tries%3D1%27&container=webserver&container=webserver&stderr=true&stdout=true %!s(MISSING)) Jan 9 15:10:34.158: INFO: Waiting for responses: map[netserver-3:{}] Jan 9 15:10:36.169: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://192.168.0.17:9080/dial?request=hostname&protocol=http&host=192.168.2.23&port=8083&tries=1'] Namespace:pod-network-test-1438 PodName:test-container-pod ContainerName:webserver Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jan 9 15:10:36.169: INFO: >>> kubeConfig: /tmp/kubeconfig Jan 9 15:10:36.172: INFO: ExecWithOptions: Clientset creation Jan 9 15:10:36.172: INFO: ExecWithOptions: execute(POST 
https://172.18.0.3:6443/api/v1/namespaces/pod-network-test-1438/pods/test-container-pod/exec?command=%2Fbin%2Fsh&command=-c&command=curl+-g+-q+-s+%27http%3A%2F%2F192.168.0.17%3A9080%2Fdial%3Frequest%3Dhostname%26protocol%3Dhttp%26host%3D192.168.2.23%26port%3D8083%26tries%3D1%27&container=webserver&container=webserver&stderr=true&stdout=true %!s(MISSING)) Jan 9 15:10:41.346: INFO: Waiting for responses: map[netserver-3:{}] Jan 9 15:10:43.353: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://192.168.0.17:9080/dial?request=hostname&protocol=http&host=192.168.2.23&port=8083&tries=1'] Namespace:pod-network-test-1438 PodName:test-container-pod ContainerName:webserver Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jan 9 15:10:43.353: INFO: >>> kubeConfig: /tmp/kubeconfig Jan 9 15:10:43.355: INFO: ExecWithOptions: Clientset creation Jan 9 15:10:43.355: INFO: ExecWithOptions: execute(POST https://172.18.0.3:6443/api/v1/namespaces/pod-network-test-1438/pods/test-container-pod/exec?command=%2Fbin%2Fsh&command=-c&command=curl+-g+-q+-s+%27http%3A%2F%2F192.168.0.17%3A9080%2Fdial%3Frequest%3Dhostname%26protocol%3Dhttp%26host%3D192.168.2.23%26port%3D8083%26tries%3D1%27&container=webserver&container=webserver&stderr=true&stdout=true %!s(MISSING)) Jan 9 15:10:48.536: INFO: Waiting for responses: map[netserver-3:{}] Jan 9 15:10:50.549: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://192.168.0.17:9080/dial?request=hostname&protocol=http&host=192.168.2.23&port=8083&tries=1'] Namespace:pod-network-test-1438 PodName:test-container-pod ContainerName:webserver Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jan 9 15:10:50.549: INFO: >>> kubeConfig: /tmp/kubeconfig Jan 9 15:10:50.551: INFO: ExecWithOptions: Clientset creation Jan 9 15:10:50.552: INFO: ExecWithOptions: execute(POST https://172.18.0.3:6443/api/v1/namespaces/pod-network-test-1438/pods/test-container-pod/exec?command=%2Fbin%2Fsh&command=-c&command=curl+-g+-q+-s+%27http%3A%2F%2F192.168.0.17%3A9080%2Fdial%3Frequest%3Dhostname%26protocol%3Dhttp%26host%3D192.168.2.23%26port%3D8083%26tries%3D1%27&container=webserver&container=webserver&stderr=true&stdout=true %!s(MISSING)) Jan 9 15:10:55.756: INFO: Waiting for responses: map[netserver-3:{}] Jan 9 15:10:57.763: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://192.168.0.17:9080/dial?request=hostname&protocol=http&host=192.168.2.23&port=8083&tries=1'] Namespace:pod-network-test-1438 PodName:test-container-pod ContainerName:webserver Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jan 9 15:10:57.763: INFO: >>> kubeConfig: /tmp/kubeconfig Jan 9 15:10:57.764: INFO: ExecWithOptions: Clientset creation Jan 9 15:10:57.764: INFO: ExecWithOptions: execute(POST https://172.18.0.3:6443/api/v1/namespaces/pod-network-test-1438/pods/test-container-pod/exec?command=%2Fbin%2Fsh&command=-c&command=curl+-g+-q+-s+%27http%3A%2F%2F192.168.0.17%3A9080%2Fdial%3Frequest%3Dhostname%26protocol%3Dhttp%26host%3D192.168.2.23%26port%3D8083%26tries%3D1%27&container=webserver&container=webserver&stderr=true&stdout=true %!s(MISSING)) Jan 9 15:11:02.933: INFO: Waiting for responses: map[netserver-3:{}] Jan 9 15:11:04.942: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://192.168.0.17:9080/dial?request=hostname&protocol=http&host=192.168.2.23&port=8083&tries=1'] Namespace:pod-network-test-1438 PodName:test-container-pod ContainerName:webserver Stdin:<nil> 
CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Jan 9 15:11:04.942: INFO: >>> kubeConfig: /tmp/kubeconfig
Jan 9 15:11:04.944: INFO: ExecWithOptions: Clientset creation
Jan 9 15:11:04.944: INFO: ExecWithOptions: execute(POST https://172.18.0.3:6443/api/v1/namespaces/pod-network-test-1438/pods/test-container-pod/exec?command=%2Fbin%2Fsh&command=-c&command=curl+-g+-q+-s+%27http%3A%2F%2F192.168.0.17%3A9080%2Fdial%3Frequest%3Dhostname%26protocol%3Dhttp%26host%3D192.168.2.23%26port%3D8083%26tries%3D1%27&container=webserver&container=webserver&stderr=true&stdout=true %!s(MISSING))
Jan 9 15:11:10.137: INFO: Waiting for responses: map[netserver-3:{}]
[... the same probe is then retried roughly every 7 seconds from 15:11:12 through 15:15:26: each attempt logs an identical ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://192.168.0.17:9080/dial?request=hostname&protocol=http&host=192.168.2.23&port=8083&tries=1'] Namespace:pod-network-test-1438 PodName:test-container-pod ContainerName:webserver} block, the same kubeConfig / Clientset creation / execute(POST ...) lines, and ends with "Waiting for responses: map[netserver-3:{}]" without ever receiving a response; the final two attempts, at 15:15:28 and 15:15:35, are shown below ...]
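For reference, the command the test keeps re-running is an exec into test-container-pod that asks the /dial helper listening on the test pod (192.168.0.17:9080) to open an HTTP connection to netserver-3 at 192.168.2.23:8083 and report which hostname answered. A minimal sketch of re-running the same check by hand against the workload cluster, assuming the kubeconfig at /tmp/kubeconfig is still valid, the pod-network-test-1438 namespace has not yet been torn down (it is destroyed at the end of the spec), and that netexec's /hostname endpoint is available for the second, direct probe (an assumption, since the test itself only uses /dial):

  # the exact probe the test runs, via the /dial helper on test-container-pod
  kubectl --kubeconfig=/tmp/kubeconfig -n pod-network-test-1438 exec test-container-pod -c webserver -- \
    /bin/sh -c "curl -g -q -s 'http://192.168.0.17:9080/dial?request=hostname&protocol=http&host=192.168.2.23&port=8083&tries=1'"

  # bypass the /dial helper and hit netserver-3's netexec HTTP port directly
  kubectl --kubeconfig=/tmp/kubeconfig -n pod-network-test-1438 exec test-container-pod -c webserver -- \
    /bin/sh -c "curl -g -q -s --max-time 10 'http://192.168.2.23:8083/hostname'"

On a working pod network the first command returns a response map naming netserver-3 and the second returns that pod's hostname; in this run the dial result stays empty (map[], see the failure summary further down), which is consistent with pod-to-pod traffic toward the node hosting netserver-3 being dropped.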
ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://192.168.0.17:9080/dial?request=hostname&protocol=http&host=192.168.2.23&port=8083&tries=1'] Namespace:pod-network-test-1438 PodName:test-container-pod ContainerName:webserver Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jan 9 15:15:28.333: INFO: >>> kubeConfig: /tmp/kubeconfig Jan 9 15:15:28.333: INFO: ExecWithOptions: Clientset creation Jan 9 15:15:28.333: INFO: ExecWithOptions: execute(POST https://172.18.0.3:6443/api/v1/namespaces/pod-network-test-1438/pods/test-container-pod/exec?command=%2Fbin%2Fsh&command=-c&command=curl+-g+-q+-s+%27http%3A%2F%2F192.168.0.17%3A9080%2Fdial%3Frequest%3Dhostname%26protocol%3Dhttp%26host%3D192.168.2.23%26port%3D8083%26tries%3D1%27&container=webserver&container=webserver&stderr=true&stdout=true %!s(MISSING)) Jan 9 15:15:33.416: INFO: Waiting for responses: map[netserver-3:{}] Jan 9 15:15:35.423: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://192.168.0.17:9080/dial?request=hostname&protocol=http&host=192.168.2.23&port=8083&tries=1'] Namespace:pod-network-test-1438 PodName:test-container-pod ContainerName:webserver Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jan 9 15:15:35.423: INFO: >>> kubeConfig: /tmp/kubeconfig Jan 9 15:15:35.424: INFO: ExecWithOptions: Clientset creation Jan 9 15:15:35.424: INFO: ExecWithOptions: execute(POST https://172.18.0.3:6443/api/v1/namespaces/pod-network-test-1438/pods/test-container-pod/exec?command=%2Fbin%2Fsh&command=-c&command=curl+-g+-q+-s+%27http%3A%2F%2F192.168.0.17%3A9080%2Fdial%3Frequest%3Dhostname%26protocol%3Dhttp%26host%3D192.168.2.23%26port%3D8083%26tries%3D1%27&container=webserver&container=webserver&stderr=true&stdout=true %!s(MISSING)) Jan 9 15:15:40.508: INFO: Waiting for responses: map[netserver-3:{}] Jan 9 15:15:42.508: INFO: Output of kubectl describe pod pod-network-test-1438/netserver-0: Jan 9 15:15:42.508: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=pod-network-test-1438 describe pod netserver-0 --namespace=pod-network-test-1438' Jan 9 15:15:42.824: INFO: stderr: "" Jan 9 15:15:42.824: INFO: stdout: "Name: netserver-0\nNamespace: pod-network-test-1438\nPriority: 0\nNode: k8s-upgrade-and-conformance-viu2kk-md-0-c5nrs-77b68b5644-2bgsv/172.18.0.4\nStart Time: Mon, 09 Jan 2023 15:09:41 +0000\nLabels: selector-46d8c5bb-7df4-48f5-b086-014499b9e68f=true\nAnnotations: <none>\nStatus: Running\nIP: 192.168.0.16\nIPs:\n IP: 192.168.0.16\nContainers:\n webserver:\n Container ID: containerd://2d80836850723478af5ea5f0967597306a94141c9c36b864bf629267762d4f0a\n Image: k8s.gcr.io/e2e-test-images/agnhost:2.39\n Image ID: k8s.gcr.io/e2e-test-images/agnhost@sha256:7e8bdd271312fd25fc5ff5a8f04727be84044eb3d7d8d03611972a6752e2e11e\n Ports: 8083/TCP, 8081/UDP\n Host Ports: 0/TCP, 0/UDP\n Args:\n netexec\n --http-port=8083\n --udp-port=8081\n State: Running\n Started: Mon, 09 Jan 2023 15:09:43 +0000\n Ready: True\n Restart Count: 0\n Liveness: http-get http://:8083/healthz delay=10s timeout=30s period=10s #success=1 #failure=3\n Readiness: http-get http://:8083/healthz delay=10s timeout=30s period=10s #success=1 #failure=3\n Environment: <none>\n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-r7gmf (ro)\nConditions:\n Type Status\n Initialized True \n Ready True \n ContainersReady True \n PodScheduled True \nVolumes:\n kube-api-access-r7gmf:\n Type: Projected (a volume that contains injected data from 
multiple sources)\n TokenExpirationSeconds: 3607\n ConfigMapName: kube-root-ca.crt\n ConfigMapOptional: <nil>\n DownwardAPI: true\nQoS Class: BestEffort\nNode-Selectors: kubernetes.io/hostname=k8s-upgrade-and-conformance-viu2kk-md-0-c5nrs-77b68b5644-2bgsv\nTolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s\n node.kubernetes.io/unreachable:NoExecute op=Exists for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled 6m1s default-scheduler Successfully assigned pod-network-test-1438/netserver-0 to k8s-upgrade-and-conformance-viu2kk-md-0-c5nrs-77b68b5644-2bgsv\n Normal Pulled 6m kubelet Container image \"k8s.gcr.io/e2e-test-images/agnhost:2.39\" already present on machine\n Normal Created 6m kubelet Created container webserver\n Normal Started 5m59s kubelet Started container webserver\n" Jan 9 15:15:42.824: INFO: Name: netserver-0 Namespace: pod-network-test-1438 Priority: 0 Node: k8s-upgrade-and-conformance-viu2kk-md-0-c5nrs-77b68b5644-2bgsv/172.18.0.4 Start Time: Mon, 09 Jan 2023 15:09:41 +0000 Labels: selector-46d8c5bb-7df4-48f5-b086-014499b9e68f=true Annotations: <none> Status: Running IP: 192.168.0.16 IPs: IP: 192.168.0.16 Containers: webserver: Container ID: containerd://2d80836850723478af5ea5f0967597306a94141c9c36b864bf629267762d4f0a Image: k8s.gcr.io/e2e-test-images/agnhost:2.39 Image ID: k8s.gcr.io/e2e-test-images/agnhost@sha256:7e8bdd271312fd25fc5ff5a8f04727be84044eb3d7d8d03611972a6752e2e11e Ports: 8083/TCP, 8081/UDP Host Ports: 0/TCP, 0/UDP Args: netexec --http-port=8083 --udp-port=8081 State: Running Started: Mon, 09 Jan 2023 15:09:43 +0000 Ready: True Restart Count: 0 Liveness: http-get http://:8083/healthz delay=10s timeout=30s period=10s #success=1 #failure=3 Readiness: http-get http://:8083/healthz delay=10s timeout=30s period=10s #success=1 #failure=3 Environment: <none> Mounts: /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-r7gmf (ro) Conditions: Type Status Initialized True Ready True ContainersReady True PodScheduled True Volumes: kube-api-access-r7gmf: Type: Projected (a volume that contains injected data from multiple sources) TokenExpirationSeconds: 3607 ConfigMapName: kube-root-ca.crt ConfigMapOptional: <nil> DownwardAPI: true QoS Class: BestEffort Node-Selectors: kubernetes.io/hostname=k8s-upgrade-and-conformance-viu2kk-md-0-c5nrs-77b68b5644-2bgsv Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s node.kubernetes.io/unreachable:NoExecute op=Exists for 300s Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Scheduled 6m1s default-scheduler Successfully assigned pod-network-test-1438/netserver-0 to k8s-upgrade-and-conformance-viu2kk-md-0-c5nrs-77b68b5644-2bgsv Normal Pulled 6m kubelet Container image "k8s.gcr.io/e2e-test-images/agnhost:2.39" already present on machine Normal Created 6m kubelet Created container webserver Normal Started 5m59s kubelet Started container webserver Jan 9 15:15:42.824: INFO: Output of kubectl describe pod pod-network-test-1438/netserver-1: Jan 9 15:15:42.824: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=pod-network-test-1438 describe pod netserver-1 --namespace=pod-network-test-1438' Jan 9 15:15:42.916: INFO: stderr: "" Jan 9 15:15:42.916: INFO: stdout: "Name: netserver-1\nNamespace: pod-network-test-1438\nPriority: 0\nNode: k8s-upgrade-and-conformance-viu2kk-md-0-c5nrs-77b68b5644-t8jmv/172.18.0.6\nStart Time: Mon, 09 Jan 2023 15:09:41 +0000\nLabels: 
selector-46d8c5bb-7df4-48f5-b086-014499b9e68f=true\nAnnotations: <none>\nStatus: Running\nIP: 192.168.1.26\nIPs:\n IP: 192.168.1.26\nContainers:\n webserver:\n Container ID: containerd://5f8ce787ac5ac166716257885d465e731a659d3e5891734cc269f98d1da5741e\n Image: k8s.gcr.io/e2e-test-images/agnhost:2.39\n Image ID: k8s.gcr.io/e2e-test-images/agnhost@sha256:7e8bdd271312fd25fc5ff5a8f04727be84044eb3d7d8d03611972a6752e2e11e\n Ports: 8083/TCP, 8081/UDP\n Host Ports: 0/TCP, 0/UDP\n Args:\n netexec\n --http-port=8083\n --udp-port=8081\n State: Running\n Started: Mon, 09 Jan 2023 15:09:43 +0000\n Ready: True\n Restart Count: 0\n Liveness: http-get http://:8083/healthz delay=10s timeout=30s period=10s #success=1 #failure=3\n Readiness: http-get http://:8083/healthz delay=10s timeout=30s period=10s #success=1 #failure=3\n Environment: <none>\n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-qvrd5 (ro)\nConditions:\n Type Status\n Initialized True \n Ready True \n ContainersReady True \n PodScheduled True \nVolumes:\n kube-api-access-qvrd5:\n Type: Projected (a volume that contains injected data from multiple sources)\n TokenExpirationSeconds: 3607\n ConfigMapName: kube-root-ca.crt\n ConfigMapOptional: <nil>\n DownwardAPI: true\nQoS Class: BestEffort\nNode-Selectors: kubernetes.io/hostname=k8s-upgrade-and-conformance-viu2kk-md-0-c5nrs-77b68b5644-t8jmv\nTolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s\n node.kubernetes.io/unreachable:NoExecute op=Exists for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled 6m1s default-scheduler Successfully assigned pod-network-test-1438/netserver-1 to k8s-upgrade-and-conformance-viu2kk-md-0-c5nrs-77b68b5644-t8jmv\n Normal Pulled 6m kubelet Container image \"k8s.gcr.io/e2e-test-images/agnhost:2.39\" already present on machine\n Normal Created 6m kubelet Created container webserver\n Normal Started 5m59s kubelet Started container webserver\n" Jan 9 15:15:42.917: INFO: Name: netserver-1 Namespace: pod-network-test-1438 Priority: 0 Node: k8s-upgrade-and-conformance-viu2kk-md-0-c5nrs-77b68b5644-t8jmv/172.18.0.6 Start Time: Mon, 09 Jan 2023 15:09:41 +0000 Labels: selector-46d8c5bb-7df4-48f5-b086-014499b9e68f=true Annotations: <none> Status: Running IP: 192.168.1.26 IPs: IP: 192.168.1.26 Containers: webserver: Container ID: containerd://5f8ce787ac5ac166716257885d465e731a659d3e5891734cc269f98d1da5741e Image: k8s.gcr.io/e2e-test-images/agnhost:2.39 Image ID: k8s.gcr.io/e2e-test-images/agnhost@sha256:7e8bdd271312fd25fc5ff5a8f04727be84044eb3d7d8d03611972a6752e2e11e Ports: 8083/TCP, 8081/UDP Host Ports: 0/TCP, 0/UDP Args: netexec --http-port=8083 --udp-port=8081 State: Running Started: Mon, 09 Jan 2023 15:09:43 +0000 Ready: True Restart Count: 0 Liveness: http-get http://:8083/healthz delay=10s timeout=30s period=10s #success=1 #failure=3 Readiness: http-get http://:8083/healthz delay=10s timeout=30s period=10s #success=1 #failure=3 Environment: <none> Mounts: /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-qvrd5 (ro) Conditions: Type Status Initialized True Ready True ContainersReady True PodScheduled True Volumes: kube-api-access-qvrd5: Type: Projected (a volume that contains injected data from multiple sources) TokenExpirationSeconds: 3607 ConfigMapName: kube-root-ca.crt ConfigMapOptional: <nil> DownwardAPI: true QoS Class: BestEffort Node-Selectors: kubernetes.io/hostname=k8s-upgrade-and-conformance-viu2kk-md-0-c5nrs-77b68b5644-t8jmv Tolerations: 
node.kubernetes.io/not-ready:NoExecute op=Exists for 300s node.kubernetes.io/unreachable:NoExecute op=Exists for 300s Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Scheduled 6m1s default-scheduler Successfully assigned pod-network-test-1438/netserver-1 to k8s-upgrade-and-conformance-viu2kk-md-0-c5nrs-77b68b5644-t8jmv Normal Pulled 6m kubelet Container image "k8s.gcr.io/e2e-test-images/agnhost:2.39" already present on machine Normal Created 6m kubelet Created container webserver Normal Started 5m59s kubelet Started container webserver Jan 9 15:15:42.917: INFO: Output of kubectl describe pod pod-network-test-1438/netserver-2: Jan 9 15:15:42.917: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=pod-network-test-1438 describe pod netserver-2 --namespace=pod-network-test-1438' Jan 9 15:15:43.015: INFO: stderr: "" Jan 9 15:15:43.015: INFO: stdout: "Name: netserver-2\nNamespace: pod-network-test-1438\nPriority: 0\nNode: k8s-upgrade-and-conformance-viu2kk-worker-1r6syi/172.18.0.5\nStart Time: Mon, 09 Jan 2023 15:09:41 +0000\nLabels: selector-46d8c5bb-7df4-48f5-b086-014499b9e68f=true\nAnnotations: <none>\nStatus: Running\nIP: 192.168.6.15\nIPs:\n IP: 192.168.6.15\nContainers:\n webserver:\n Container ID: containerd://f31aa29025fd7440186d2dd5be2dd92e4ddf5167da64a08733a2d8d81046d60d\n Image: k8s.gcr.io/e2e-test-images/agnhost:2.39\n Image ID: k8s.gcr.io/e2e-test-images/agnhost@sha256:7e8bdd271312fd25fc5ff5a8f04727be84044eb3d7d8d03611972a6752e2e11e\n Ports: 8083/TCP, 8081/UDP\n Host Ports: 0/TCP, 0/UDP\n Args:\n netexec\n --http-port=8083\n --udp-port=8081\n State: Running\n Started: Mon, 09 Jan 2023 15:09:43 +0000\n Ready: True\n Restart Count: 0\n Liveness: http-get http://:8083/healthz delay=10s timeout=30s period=10s #success=1 #failure=3\n Readiness: http-get http://:8083/healthz delay=10s timeout=30s period=10s #success=1 #failure=3\n Environment: <none>\n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-wp2hw (ro)\nConditions:\n Type Status\n Initialized True \n Ready True \n ContainersReady True \n PodScheduled True \nVolumes:\n kube-api-access-wp2hw:\n Type: Projected (a volume that contains injected data from multiple sources)\n TokenExpirationSeconds: 3607\n ConfigMapName: kube-root-ca.crt\n ConfigMapOptional: <nil>\n DownwardAPI: true\nQoS Class: BestEffort\nNode-Selectors: kubernetes.io/hostname=k8s-upgrade-and-conformance-viu2kk-worker-1r6syi\nTolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s\n node.kubernetes.io/unreachable:NoExecute op=Exists for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled 6m1s default-scheduler Successfully assigned pod-network-test-1438/netserver-2 to k8s-upgrade-and-conformance-viu2kk-worker-1r6syi\n Normal Pulled 6m1s kubelet Container image \"k8s.gcr.io/e2e-test-images/agnhost:2.39\" already present on machine\n Normal Created 6m1s kubelet Created container webserver\n Normal Started 6m kubelet Started container webserver\n" Jan 9 15:15:43.017: INFO: Name: netserver-2 Namespace: pod-network-test-1438 Priority: 0 Node: k8s-upgrade-and-conformance-viu2kk-worker-1r6syi/172.18.0.5 Start Time: Mon, 09 Jan 2023 15:09:41 +0000 Labels: selector-46d8c5bb-7df4-48f5-b086-014499b9e68f=true Annotations: <none> Status: Running IP: 192.168.6.15 IPs: IP: 192.168.6.15 Containers: webserver: Container ID: containerd://f31aa29025fd7440186d2dd5be2dd92e4ddf5167da64a08733a2d8d81046d60d Image: 
k8s.gcr.io/e2e-test-images/agnhost:2.39 Image ID: k8s.gcr.io/e2e-test-images/agnhost@sha256:7e8bdd271312fd25fc5ff5a8f04727be84044eb3d7d8d03611972a6752e2e11e Ports: 8083/TCP, 8081/UDP Host Ports: 0/TCP, 0/UDP Args: netexec --http-port=8083 --udp-port=8081 State: Running Started: Mon, 09 Jan 2023 15:09:43 +0000 Ready: True Restart Count: 0 Liveness: http-get http://:8083/healthz delay=10s timeout=30s period=10s #success=1 #failure=3 Readiness: http-get http://:8083/healthz delay=10s timeout=30s period=10s #success=1 #failure=3 Environment: <none> Mounts: /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-wp2hw (ro) Conditions: Type Status Initialized True Ready True ContainersReady True PodScheduled True Volumes: kube-api-access-wp2hw: Type: Projected (a volume that contains injected data from multiple sources) TokenExpirationSeconds: 3607 ConfigMapName: kube-root-ca.crt ConfigMapOptional: <nil> DownwardAPI: true QoS Class: BestEffort Node-Selectors: kubernetes.io/hostname=k8s-upgrade-and-conformance-viu2kk-worker-1r6syi Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s node.kubernetes.io/unreachable:NoExecute op=Exists for 300s Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Scheduled 6m1s default-scheduler Successfully assigned pod-network-test-1438/netserver-2 to k8s-upgrade-and-conformance-viu2kk-worker-1r6syi Normal Pulled 6m1s kubelet Container image "k8s.gcr.io/e2e-test-images/agnhost:2.39" already present on machine Normal Created 6m1s kubelet Created container webserver Normal Started 6m kubelet Started container webserver Jan 9 15:15:43.017: INFO: Output of kubectl describe pod pod-network-test-1438/netserver-3: Jan 9 15:15:43.017: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=pod-network-test-1438 describe pod netserver-3 --namespace=pod-network-test-1438' Jan 9 15:15:43.115: INFO: stderr: "" Jan 9 15:15:43.115: INFO: stdout: "Name: netserver-3\nNamespace: pod-network-test-1438\nPriority: 0\nNode: k8s-upgrade-and-conformance-viu2kk-worker-qb9e9b/172.18.0.7\nStart Time: Mon, 09 Jan 2023 15:09:41 +0000\nLabels: selector-46d8c5bb-7df4-48f5-b086-014499b9e68f=true\nAnnotations: <none>\nStatus: Running\nIP: 192.168.2.23\nIPs:\n IP: 192.168.2.23\nContainers:\n webserver:\n Container ID: containerd://02e7a4545ef6a352576f956fa805ade71eba123b68947fea90b5feb9c27423ac\n Image: k8s.gcr.io/e2e-test-images/agnhost:2.39\n Image ID: k8s.gcr.io/e2e-test-images/agnhost@sha256:7e8bdd271312fd25fc5ff5a8f04727be84044eb3d7d8d03611972a6752e2e11e\n Ports: 8083/TCP, 8081/UDP\n Host Ports: 0/TCP, 0/UDP\n Args:\n netexec\n --http-port=8083\n --udp-port=8081\n State: Running\n Started: Mon, 09 Jan 2023 15:09:43 +0000\n Ready: True\n Restart Count: 0\n Liveness: http-get http://:8083/healthz delay=10s timeout=30s period=10s #success=1 #failure=3\n Readiness: http-get http://:8083/healthz delay=10s timeout=30s period=10s #success=1 #failure=3\n Environment: <none>\n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-np2px (ro)\nConditions:\n Type Status\n Initialized True \n Ready True \n ContainersReady True \n PodScheduled True \nVolumes:\n kube-api-access-np2px:\n Type: Projected (a volume that contains injected data from multiple sources)\n TokenExpirationSeconds: 3607\n ConfigMapName: kube-root-ca.crt\n ConfigMapOptional: <nil>\n DownwardAPI: true\nQoS Class: BestEffort\nNode-Selectors: kubernetes.io/hostname=k8s-upgrade-and-conformance-viu2kk-worker-qb9e9b\nTolerations: 
node.kubernetes.io/not-ready:NoExecute op=Exists for 300s\n node.kubernetes.io/unreachable:NoExecute op=Exists for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled 6m1s default-scheduler Successfully assigned pod-network-test-1438/netserver-3 to k8s-upgrade-and-conformance-viu2kk-worker-qb9e9b\n Normal Pulled 6m1s kubelet Container image \"k8s.gcr.io/e2e-test-images/agnhost:2.39\" already present on machine\n Normal Created 6m1s kubelet Created container webserver\n Normal Started 6m kubelet Started container webserver\n" Jan 9 15:15:43.116: INFO: Name: netserver-3 Namespace: pod-network-test-1438 Priority: 0 Node: k8s-upgrade-and-conformance-viu2kk-worker-qb9e9b/172.18.0.7 Start Time: Mon, 09 Jan 2023 15:09:41 +0000 Labels: selector-46d8c5bb-7df4-48f5-b086-014499b9e68f=true Annotations: <none> Status: Running IP: 192.168.2.23 IPs: IP: 192.168.2.23 Containers: webserver: Container ID: containerd://02e7a4545ef6a352576f956fa805ade71eba123b68947fea90b5feb9c27423ac Image: k8s.gcr.io/e2e-test-images/agnhost:2.39 Image ID: k8s.gcr.io/e2e-test-images/agnhost@sha256:7e8bdd271312fd25fc5ff5a8f04727be84044eb3d7d8d03611972a6752e2e11e Ports: 8083/TCP, 8081/UDP Host Ports: 0/TCP, 0/UDP Args: netexec --http-port=8083 --udp-port=8081 State: Running Started: Mon, 09 Jan 2023 15:09:43 +0000 Ready: True Restart Count: 0 Liveness: http-get http://:8083/healthz delay=10s timeout=30s period=10s #success=1 #failure=3 Readiness: http-get http://:8083/healthz delay=10s timeout=30s period=10s #success=1 #failure=3 Environment: <none> Mounts: /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-np2px (ro) Conditions: Type Status Initialized True Ready True ContainersReady True PodScheduled True Volumes: kube-api-access-np2px: Type: Projected (a volume that contains injected data from multiple sources) TokenExpirationSeconds: 3607 ConfigMapName: kube-root-ca.crt ConfigMapOptional: <nil> DownwardAPI: true QoS Class: BestEffort Node-Selectors: kubernetes.io/hostname=k8s-upgrade-and-conformance-viu2kk-worker-qb9e9b Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s node.kubernetes.io/unreachable:NoExecute op=Exists for 300s Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Scheduled 6m1s default-scheduler Successfully assigned pod-network-test-1438/netserver-3 to k8s-upgrade-and-conformance-viu2kk-worker-qb9e9b Normal Pulled 6m1s kubelet Container image "k8s.gcr.io/e2e-test-images/agnhost:2.39" already present on machine Normal Created 6m1s kubelet Created container webserver Normal Started 6m kubelet Started container webserver Jan 9 15:15:43.116: INFO: encountered error during dial (did not find expected responses... Tries 46 Command curl -g -q -s 'http://192.168.0.17:9080/dial?request=hostname&protocol=http&host=192.168.2.23&port=8083&tries=1' retrieved map[] expected map[netserver-3:{}]) Jan 9 15:15:43.116: INFO: ... Done probing pod [[[ 192.168.2.23 ]]] Jan 9 15:15:43.116: INFO: succeeded at polling 3 out of 4 connections Jan 9 15:15:43.116: INFO: pod polling failure summary: Jan 9 15:15:43.116: INFO: Collected error: did not find expected responses... 
Tries 46 Command curl -g -q -s 'http://192.168.0.17:9080/dial?request=hostname&protocol=http&host=192.168.2.23&port=8083&tries=1' retrieved map[] expected map[netserver-3:{}]
Jan 9 15:15:43.116: FAIL: failed, 1 out of 4 connections failed

Full Stack Trace
k8s.io/kubernetes/test/e2e/common/network.glob..func1.1.2()
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/network/networking.go:82 +0x46
k8s.io/kubernetes/test/e2e.RunE2ETests(0x24c66d7)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:133 +0x697
k8s.io/kubernetes/test/e2e.TestE2E(0x243a8f9)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:136 +0x19
testing.tRunner(0xc000940d00, 0x735e880)
	/usr/local/go/src/testing/testing.go:1259 +0x102
created by testing.(*T).Run
	/usr/local/go/src/testing/testing.go:1306 +0x35a
[AfterEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 9 15:15:43.117: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-1438" for this suite.
• Failure [361.475 seconds]
[sig-network] Networking
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/network/framework.go:23
  Granular Checks: Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/network/networking.go:30
    should function for intra-pod communication: http [NodeConformance] [Conformance] [It]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633

    Jan 9 15:15:43.116: failed, 1 out of 4 connections failed

    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/network/networking.go:82
------------------------------
[BeforeEach] [sig-apps] DisruptionController
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 9 15:15:42.024: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename disruption
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-apps] DisruptionController
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/disruption.go:69
[It] should create a PodDisruptionBudget [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: creating the pdb
STEP: Waiting for the pdb to be processed
STEP: updating the pdb
STEP: Waiting for the pdb to be processed
STEP: patching the pdb
STEP: Waiting for the pdb to be processed
STEP: Waiting for the pdb to be deleted
[AfterEach] [sig-apps] DisruptionController
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 9 15:15:48.124: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "disruption-7684" for this suite.
�[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-apps] DisruptionController should create a PodDisruptionBudget [Conformance]","total":-1,"completed":14,"skipped":336,"failed":6,"failures":["[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Jan 9 15:15:48.173: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename gc �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace �[1mSTEP�[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 �[1mSTEP�[0m: create the rc �[1mSTEP�[0m: delete the rc �[1mSTEP�[0m: wait for the rc to be deleted Jan 9 15:15:54.420: INFO: 80 pods remaining Jan 9 15:15:54.420: INFO: 80 pods has nil DeletionTimestamp Jan 9 15:15:54.420: INFO: Jan 9 15:15:55.318: INFO: 72 pods remaining Jan 9 15:15:55.318: INFO: 70 pods has nil DeletionTimestamp Jan 9 15:15:55.318: INFO: Jan 9 15:15:56.322: INFO: 60 pods remaining Jan 9 15:15:56.323: INFO: 60 pods has nil DeletionTimestamp Jan 9 15:15:56.323: INFO: Jan 9 15:15:57.304: INFO: 40 pods remaining Jan 9 15:15:57.304: INFO: 40 pods has nil DeletionTimestamp Jan 9 15:15:57.304: INFO: Jan 9 15:15:58.362: INFO: 32 pods remaining Jan 9 15:15:58.362: INFO: 29 pods has nil DeletionTimestamp Jan 9 15:15:58.362: INFO: Jan 9 15:15:59.286: INFO: 20 pods remaining Jan 9 15:15:59.286: INFO: 20 pods has nil DeletionTimestamp Jan 9 15:15:59.286: INFO: Jan 9 15:16:00.334: INFO: 0 pods remaining Jan 9 15:16:00.334: INFO: 0 pods has nil DeletionTimestamp Jan 9 15:16:00.334: INFO: �[1mSTEP�[0m: Gathering metrics Jan 9 15:16:01.446: INFO: The status of Pod kube-controller-manager-k8s-upgrade-and-conformance-viu2kk-9mv29-nxqn7 is Running (Ready = true) Jan 9 15:16:01.713: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For 
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 9 15:16:01.713: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-237" for this suite.
•
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]","total":-1,"completed":15,"skipped":360,"failed":6,"failures":["[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SS
------------------------------
[BeforeEach] [sig-node] Container Runtime
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 9 15:16:01.748: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should report termination message from log output if TerminationMessagePolicy FallbackToLogsOnError is set [Excluded:WindowsDocker] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: create the container
STEP: wait for the container to reach Failed
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Jan 9 15:16:12.988: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
STEP: delete the container
[AfterEach] [sig-node] Container Runtime
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 9 15:16:13.005: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-449" for this suite.
•
------------------------------
{"msg":"PASSED [sig-node] Container Runtime blackbox test on terminated container should report termination message from log output if TerminationMessagePolicy FallbackToLogsOnError is set [Excluded:WindowsDocker] [NodeConformance] [Conformance]","total":-1,"completed":16,"skipped":362,"failed":6,"failures":["[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSSS
------------------------------
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 9 15:16:13.024: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] removes definition from spec when one version gets changed to not be served [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: set up a multi version CRD
Jan 9 15:16:13.063: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: mark a version not serverd
STEP: check the unserved version gets removed
STEP: check the other version is not changed
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 9 15:16:27.662: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-9684" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [Excluded:WindowsDocker] [Conformance]","total":-1,"completed":10,"skipped":194,"failed":1,"failures":["[sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance]"]}
[BeforeEach] [sig-node] Probing container
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 9 15:12:26.433: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-node] Probing container
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:59
[It] should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating pod liveness-07ea953f-f6cb-4b62-91d6-1e20525c6fe3 in namespace container-probe-4302
Jan 9 15:12:28.474: INFO: Started pod liveness-07ea953f-f6cb-4b62-91d6-1e20525c6fe3 in namespace container-probe-4302
STEP: checking the pod's current state and verifying that restartCount is present
Jan 9 15:12:28.477: INFO: Initial restart count of pod liveness-07ea953f-f6cb-4b62-91d6-1e20525c6fe3 is 0
STEP: deleting the pod
[AfterEach] [sig-node] Probing container
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 9 15:16:29.374: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-4302" for this suite.
• [SLOW TEST:243.008 seconds]
[sig-node] Probing container
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
------------------------------
{"msg":"PASSED [sig-node] Probing container should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance]","total":-1,"completed":11,"skipped":194,"failed":1,"failures":["[sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance]"]}
SSSSSSSSSSS
------------------------------
[BeforeEach] [sig-auth] ServiceAccounts
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 9 15:16:29.498: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should mount an API token into pods [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: getting the auto-created API token
STEP: reading a file in the container
Jan 9 15:16:32.126: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-1059 pod-service-account-c3fa8087-1ece-4a8f-873f-a5ca8783883f -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/token'
STEP: reading a file in the container
Jan 9 15:16:32.362: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-1059 pod-service-account-c3fa8087-1ece-4a8f-873f-a5ca8783883f -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt'
STEP: reading a file in the container
Jan 9 15:16:32.586: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-1059 pod-service-account-c3fa8087-1ece-4a8f-873f-a5ca8783883f -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace'
[AfterEach] [sig-auth] ServiceAccounts
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 9 15:16:33.033: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-1059" for this suite.
•
------------------------------
{"msg":"PASSED [sig-auth] ServiceAccounts should mount an API token into pods [Conformance]","total":-1,"completed":12,"skipped":205,"failed":1,"failures":["[sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance]","total":-1,"completed":17,"skipped":366,"failed":6,"failures":["[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 9 15:16:27.684: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] listing custom resource definition objects works [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
Jan 9 15:16:27.713: INFO: >>> kubeConfig: /tmp/kubeconfig
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 9 15:16:34.299: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-9880" for this suite.
•
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works [Conformance]","total":-1,"completed":18,"skipped":366,"failed":6,"failures":["[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-node] Pods Extended
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 9 15:16:34.605: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] Pods Set QOS Class
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:150
[It] should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying QOS class is set on the pod
[AfterEach] [sig-node] Pods Extended
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 9 15:16:34.668: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-5808" for this suite.
•
------------------------------
{"msg":"PASSED [sig-node] Pods Extended Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]","total":-1,"completed":19,"skipped":466,"failed":6,"failures":["[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-storage] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 9 15:16:33.332: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating configMap with name configmap-test-volume-109f243e-f05b-4ce5-bdf0-e83789957c8a
STEP: Creating a pod to test consume configMaps
Jan 9 15:16:33.425: INFO: Waiting up to 5m0s for pod "pod-configmaps-94d53128-2a32-41f7-8755-caa8f0bd3f58" in namespace "configmap-834" to be "Succeeded or Failed"
Jan 9 15:16:33.435: INFO: Pod "pod-configmaps-94d53128-2a32-41f7-8755-caa8f0bd3f58": Phase="Pending", Reason="", readiness=false. Elapsed: 9.336311ms
Jan 9 15:16:35.442: INFO: Pod "pod-configmaps-94d53128-2a32-41f7-8755-caa8f0bd3f58": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016135483s
Jan 9 15:16:37.449: INFO: Pod "pod-configmaps-94d53128-2a32-41f7-8755-caa8f0bd3f58": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.023926844s
STEP: Saw pod success
Jan 9 15:16:37.449: INFO: Pod "pod-configmaps-94d53128-2a32-41f7-8755-caa8f0bd3f58" satisfied condition "Succeeded or Failed"
Jan 9 15:16:37.455: INFO: Trying to get logs from node k8s-upgrade-and-conformance-viu2kk-md-0-c5nrs-77b68b5644-t8jmv pod pod-configmaps-94d53128-2a32-41f7-8755-caa8f0bd3f58 container agnhost-container: <nil>
STEP: delete the pod
Jan 9 15:16:37.499: INFO: Waiting for pod pod-configmaps-94d53128-2a32-41f7-8755-caa8f0bd3f58 to disappear
Jan 9 15:16:37.507: INFO: Pod pod-configmaps-94d53128-2a32-41f7-8755-caa8f0bd3f58 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 9 15:16:37.507: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-834" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":-1,"completed":13,"skipped":257,"failed":1,"failures":["[sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance]"]}
SSSS
------------------------------
[BeforeEach] [sig-node] Variable Expansion
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 9 15:16:34.725: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating a pod to test env composition
Jan 9 15:16:34.782: INFO: Waiting up to 5m0s for pod "var-expansion-6d039bc5-b710-482b-8d5a-b0ddc60a1b60" in namespace "var-expansion-7929" to be "Succeeded or Failed"
Jan 9 15:16:34.787: INFO: Pod "var-expansion-6d039bc5-b710-482b-8d5a-b0ddc60a1b60": Phase="Pending", Reason="", readiness=false. Elapsed: 4.90658ms
Jan 9 15:16:36.793: INFO: Pod "var-expansion-6d039bc5-b710-482b-8d5a-b0ddc60a1b60": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011105381s
Jan 9 15:16:38.801: INFO: Pod "var-expansion-6d039bc5-b710-482b-8d5a-b0ddc60a1b60": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.019218879s
STEP: Saw pod success
Jan 9 15:16:38.801: INFO: Pod "var-expansion-6d039bc5-b710-482b-8d5a-b0ddc60a1b60" satisfied condition "Succeeded or Failed"
Jan 9 15:16:38.809: INFO: Trying to get logs from node k8s-upgrade-and-conformance-viu2kk-worker-qb9e9b pod var-expansion-6d039bc5-b710-482b-8d5a-b0ddc60a1b60 container dapi-container: <nil>
STEP: delete the pod
Jan 9 15:16:38.848: INFO: Waiting for pod var-expansion-6d039bc5-b710-482b-8d5a-b0ddc60a1b60 to disappear
Jan 9 15:16:38.854: INFO: Pod var-expansion-6d039bc5-b710-482b-8d5a-b0ddc60a1b60 no longer exists
[AfterEach] [sig-node] Variable Expansion
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 9 15:16:38.855: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-7929" for this suite.
•
------------------------------
{"msg":"PASSED [sig-node] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance]","total":-1,"completed":20,"skipped":478,"failed":6,"failures":["[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 9 15:16:37.553: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating a pod to test emptydir 0777 on tmpfs
Jan 9 15:16:37.628: INFO: Waiting up to 5m0s for pod "pod-f502ecbe-0d59-4bee-9d06-e3c498d888e0" in namespace "emptydir-2298" to be "Succeeded or Failed"
Jan 9 15:16:37.635: INFO: Pod "pod-f502ecbe-0d59-4bee-9d06-e3c498d888e0": Phase="Pending", Reason="", readiness=false. Elapsed: 7.058874ms
Jan 9 15:16:39.642: INFO: Pod "pod-f502ecbe-0d59-4bee-9d06-e3c498d888e0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013212771s
Jan 9 15:16:41.648: INFO: Pod "pod-f502ecbe-0d59-4bee-9d06-e3c498d888e0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.019214371s
STEP: Saw pod success
Jan 9 15:16:41.648: INFO: Pod "pod-f502ecbe-0d59-4bee-9d06-e3c498d888e0" satisfied condition "Succeeded or Failed"
Jan 9 15:16:41.650: INFO: Trying to get logs from node k8s-upgrade-and-conformance-viu2kk-md-0-c5nrs-77b68b5644-t8jmv pod pod-f502ecbe-0d59-4bee-9d06-e3c498d888e0 container test-container: <nil>
STEP: delete the pod
Jan 9 15:16:41.664: INFO: Waiting for pod pod-f502ecbe-0d59-4bee-9d06-e3c498d888e0 to disappear
Jan 9 15:16:41.667: INFO: Pod pod-f502ecbe-0d59-4bee-9d06-e3c498d888e0 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 9 15:16:41.667: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-2298" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":14,"skipped":261,"failed":1,"failures":["[sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance]"]}
SSSSSSS
------------------------------
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 9 15:16:41.686: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Jan 9 15:16:42.145: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jan 9 15:16:45.177: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] listing mutating webhooks should work [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Listing all of the created validation webhooks
STEP: Creating a configMap that should be mutated
STEP: Deleting the collection of validation webhooks
STEP: Creating a configMap that should not be mutated
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 9 15:16:45.319: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-2088" for this suite.
STEP: Destroying namespace "webhook-2088-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
•
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","total":-1,"completed":15,"skipped":268,"failed":1,"failures":["[sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-apps] DisruptionController
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 9 15:16:45.544: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename disruption
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-apps] DisruptionController
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/disruption.go:69
[It] should block an eviction until the PDB is updated to allow it [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating a pdb that targets all three pods in a test replica set
STEP: Waiting for the pdb to be processed
STEP: First trying to evict a pod which shouldn't be evictable
STEP: Waiting for all pods to be running
Jan 9 15:16:47.586: INFO: pods: 0 < 3
STEP: locating a running pod
STEP: Updating the pdb to allow a pod to be evicted
STEP: Waiting for the pdb to be processed
STEP: Trying to evict the same pod we tried earlier which should now be evictable
STEP: Waiting for all pods to be running
STEP: Waiting for the pdb to observed all healthy pods
STEP: Patching the pdb to disallow a pod to be evicted
STEP: Waiting for the pdb to be processed
STEP: Waiting for all pods to be running
STEP: locating a running pod
STEP: Deleting the pdb to allow a pod to be evicted
STEP: Waiting for the pdb to be deleted
STEP: Trying to evict the same pod we tried earlier which should now be evictable
STEP: Waiting for all pods to be running
[AfterEach] [sig-apps] DisruptionController
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 9 15:16:53.713: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "disruption-2584" for this suite.
•
------------------------------
{"msg":"PASSED [sig-apps] DisruptionController should block an eviction until the PDB is updated to allow it [Conformance]","total":-1,"completed":16,"skipped":326,"failed":1,"failures":["[sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance]"]}
S
------------------------------
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 9 15:16:53.730: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] works for CRD preserving unknown fields in an embedded object [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
Jan 9 15:16:53.755: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: client-side validation (kubectl create and apply) allows request with any unknown properties
Jan 9 15:16:56.040: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=crd-publish-openapi-3054 --namespace=crd-publish-openapi-3054 create -f -'
Jan 9 15:16:56.924: INFO: stderr: ""
Jan 9 15:16:56.924: INFO: stdout: "e2e-test-crd-publish-openapi-4922-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n"
Jan 9 15:16:56.924: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=crd-publish-openapi-3054 --namespace=crd-publish-openapi-3054 delete e2e-test-crd-publish-openapi-4922-crds test-cr'
Jan 9 15:16:57.000: INFO: stderr: ""
Jan 9 15:16:57.000: INFO: stdout: "e2e-test-crd-publish-openapi-4922-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n"
Jan 9 15:16:57.001: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=crd-publish-openapi-3054 --namespace=crd-publish-openapi-3054 apply -f -'
Jan 9 15:16:57.206: INFO: stderr: ""
Jan 9 15:16:57.206: INFO: stdout: "e2e-test-crd-publish-openapi-4922-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n"
Jan 9 15:16:57.206: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=crd-publish-openapi-3054 --namespace=crd-publish-openapi-3054 delete e2e-test-crd-publish-openapi-4922-crds test-cr'
Jan 9 15:16:57.308: INFO: stderr: ""
Jan 9 15:16:57.308: INFO: stdout: "e2e-test-crd-publish-openapi-4922-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n"
STEP: kubectl explain works to explain CR
Jan 9 15:16:57.308: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=crd-publish-openapi-3054 explain e2e-test-crd-publish-openapi-4922-crds'
Jan 9 15:16:57.488: INFO: stderr: ""
Jan 9 15:16:57.488: INFO: stdout: "KIND: e2e-test-crd-publish-openapi-4922-crd\nVERSION: crd-publish-openapi-test-unknown-in-nested.example.com/v1\n\nDESCRIPTION:\n preserve-unknown-properties in nested field for Testing\n\nFIELDS:\n apiVersion\t<string>\n APIVersion defines the versioned schema of this representation of an\n object. Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values.
More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t<string>\n Kind is a string value representing the REST resource this object\n represents. Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t<Object>\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t<>\n Specification of Waldo\n\n status\t<Object>\n Status of Waldo\n\n"
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 9 15:17:00.969: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-3054" for this suite.
•
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance]","total":-1,"completed":17,"skipped":327,"failed":1,"failures":["[sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance]"]}
SSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-storage] Subpath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 9 15:16:39.020: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with projected pod [Excluded:WindowsDocker] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating pod pod-subpath-test-projected-7gxv
STEP: Creating a pod to test atomic-volume-subpath
Jan 9 15:16:39.080: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-7gxv" in namespace "subpath-7069" to be "Succeeded or Failed"
Jan 9 15:16:39.085: INFO: Pod "pod-subpath-test-projected-7gxv": Phase="Pending", Reason="", readiness=false. Elapsed: 5.492913ms
Jan 9 15:16:41.093: INFO: Pod "pod-subpath-test-projected-7gxv": Phase="Running", Reason="", readiness=true. Elapsed: 2.012982014s
Jan 9 15:16:43.098: INFO: Pod "pod-subpath-test-projected-7gxv": Phase="Running", Reason="", readiness=true. Elapsed: 4.018364417s
Jan 9 15:16:45.102: INFO: Pod "pod-subpath-test-projected-7gxv": Phase="Running", Reason="", readiness=true. Elapsed: 6.021764038s
Jan 9 15:16:47.107: INFO: Pod "pod-subpath-test-projected-7gxv": Phase="Running", Reason="", readiness=true. Elapsed: 8.026525803s
Jan 9 15:16:49.112: INFO: Pod "pod-subpath-test-projected-7gxv": Phase="Running", Reason="", readiness=true. Elapsed: 10.031967419s
Jan 9 15:16:51.117: INFO: Pod "pod-subpath-test-projected-7gxv": Phase="Running", Reason="", readiness=true. Elapsed: 12.037293349s
Jan 9 15:16:53.122: INFO: Pod "pod-subpath-test-projected-7gxv": Phase="Running", Reason="", readiness=true. Elapsed: 14.042294921s
Jan 9 15:16:55.127: INFO: Pod "pod-subpath-test-projected-7gxv": Phase="Running", Reason="", readiness=true. Elapsed: 16.047089986s
Jan 9 15:16:57.132: INFO: Pod "pod-subpath-test-projected-7gxv": Phase="Running", Reason="", readiness=true. Elapsed: 18.051922341s
Jan 9 15:16:59.137: INFO: Pod "pod-subpath-test-projected-7gxv": Phase="Running", Reason="", readiness=true. Elapsed: 20.057300775s
Jan 9 15:17:01.141: INFO: Pod "pod-subpath-test-projected-7gxv": Phase="Running", Reason="", readiness=false. Elapsed: 22.061290957s
Jan 9 15:17:03.146: INFO: Pod "pod-subpath-test-projected-7gxv": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.066293978s
STEP: Saw pod success
Jan 9 15:17:03.146: INFO: Pod "pod-subpath-test-projected-7gxv" satisfied condition "Succeeded or Failed"
Jan 9 15:17:03.149: INFO: Trying to get logs from node k8s-upgrade-and-conformance-viu2kk-worker-qb9e9b pod pod-subpath-test-projected-7gxv container test-container-subpath-projected-7gxv: <nil>
STEP: delete the pod
Jan 9 15:17:03.165: INFO: Waiting for pod pod-subpath-test-projected-7gxv to disappear
Jan 9 15:17:03.168: INFO: Pod pod-subpath-test-projected-7gxv no longer exists
STEP: Deleting pod pod-subpath-test-projected-7gxv
Jan 9 15:17:03.168: INFO: Deleting pod "pod-subpath-test-projected-7gxv" in namespace "subpath-7069"
[AfterEach] [sig-storage] Subpath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 9 15:17:03.171: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-7069" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [Excluded:WindowsDocker] [Conformance]","total":-1,"completed":21,"skipped":542,"failed":6,"failures":["[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSS
------------------------------
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 9 15:17:01.009: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41
[It] should provide podname only [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating a pod to test downward API volume plugin
Jan 9 15:17:01.048: INFO: Waiting up to 5m0s for pod "downwardapi-volume-34f6165f-2b3e-464e-ad7e-ff88a47ff8c0" in namespace "downward-api-8635" to be "Succeeded or Failed"
Jan 9 15:17:01.056: INFO: Pod "downwardapi-volume-34f6165f-2b3e-464e-ad7e-ff88a47ff8c0": Phase="Pending", Reason="", readiness=false. Elapsed: 7.484855ms
Jan 9 15:17:03.060: INFO: Pod "downwardapi-volume-34f6165f-2b3e-464e-ad7e-ff88a47ff8c0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01207632s
Jan 9 15:17:05.064: INFO: Pod "downwardapi-volume-34f6165f-2b3e-464e-ad7e-ff88a47ff8c0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.016216714s
STEP: Saw pod success
Jan 9 15:17:05.064: INFO: Pod "downwardapi-volume-34f6165f-2b3e-464e-ad7e-ff88a47ff8c0" satisfied condition "Succeeded or Failed"
Jan 9 15:17:05.068: INFO: Trying to get logs from node k8s-upgrade-and-conformance-viu2kk-md-0-c5nrs-77b68b5644-t8jmv pod downwardapi-volume-34f6165f-2b3e-464e-ad7e-ff88a47ff8c0 container client-container: <nil>
STEP: delete the pod
Jan 9 15:17:05.086: INFO: Waiting for pod downwardapi-volume-34f6165f-2b3e-464e-ad7e-ff88a47ff8c0 to disappear
Jan 9 15:17:05.089: INFO: Pod downwardapi-volume-34f6165f-2b3e-464e-ad7e-ff88a47ff8c0 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 9 15:17:05.089: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-8635" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]","total":-1,"completed":18,"skipped":340,"failed":1,"failures":["[sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance]"]}
SSSSSS
------------------------------
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 9 15:17:03.189: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41
[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating a pod to test downward API volume plugin
Jan 9 15:17:03.225: INFO: Waiting up to 5m0s for pod "downwardapi-volume-14baa9d9-4eaf-42fb-a3dc-6e671ce7f0d2" in namespace "projected-1661" to be "Succeeded or Failed"
Jan 9 15:17:03.227: INFO: Pod "downwardapi-volume-14baa9d9-4eaf-42fb-a3dc-6e671ce7f0d2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.620707ms
Jan 9 15:17:05.232: INFO: Pod "downwardapi-volume-14baa9d9-4eaf-42fb-a3dc-6e671ce7f0d2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006886985s
Jan 9 15:17:07.236: INFO: Pod "downwardapi-volume-14baa9d9-4eaf-42fb-a3dc-6e671ce7f0d2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011059124s
STEP: Saw pod success
Jan 9 15:17:07.236: INFO: Pod "downwardapi-volume-14baa9d9-4eaf-42fb-a3dc-6e671ce7f0d2" satisfied condition "Succeeded or Failed"
Jan 9 15:17:07.239: INFO: Trying to get logs from node k8s-upgrade-and-conformance-viu2kk-worker-qb9e9b pod downwardapi-volume-14baa9d9-4eaf-42fb-a3dc-6e671ce7f0d2 container client-container: <nil>
STEP: delete the pod
Jan 9 15:17:07.257: INFO: Waiting for pod downwardapi-volume-14baa9d9-4eaf-42fb-a3dc-6e671ce7f0d2 to disappear
Jan 9 15:17:07.260: INFO: Pod downwardapi-volume-14baa9d9-4eaf-42fb-a3dc-6e671ce7f0d2 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 9 15:17:07.260: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1661" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":22,"skipped":545,"failed":6,"failures":["[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-node] Downward API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 9 15:17:05.108: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating a pod to test downward api env vars
Jan 9 15:17:05.142: INFO: Waiting up to 5m0s for pod "downward-api-460a5df6-30ea-49e2-8522-017b0a9f69a3" in namespace "downward-api-51" to be "Succeeded or Failed"
Jan 9 15:17:05.145: INFO: Pod "downward-api-460a5df6-30ea-49e2-8522-017b0a9f69a3": Phase="Pending", Reason="", readiness=false. Elapsed: 3.505923ms
Jan 9 15:17:07.150: INFO: Pod "downward-api-460a5df6-30ea-49e2-8522-017b0a9f69a3": Phase="Running", Reason="", readiness=true. Elapsed: 2.008030148s
Jan 9 15:17:09.156: INFO: Pod "downward-api-460a5df6-30ea-49e2-8522-017b0a9f69a3": Phase="Running", Reason="", readiness=false. Elapsed: 4.014324261s
Jan 9 15:17:11.161: INFO: Pod "downward-api-460a5df6-30ea-49e2-8522-017b0a9f69a3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.019320189s
STEP: Saw pod success
Jan 9 15:17:11.161: INFO: Pod "downward-api-460a5df6-30ea-49e2-8522-017b0a9f69a3" satisfied condition "Succeeded or Failed"
Jan 9 15:17:11.164: INFO: Trying to get logs from node k8s-upgrade-and-conformance-viu2kk-worker-1r6syi pod downward-api-460a5df6-30ea-49e2-8522-017b0a9f69a3 container dapi-container: <nil>
STEP: delete the pod
Jan 9 15:17:11.188: INFO: Waiting for pod downward-api-460a5df6-30ea-49e2-8522-017b0a9f69a3 to disappear
Jan 9 15:17:11.193: INFO: Pod downward-api-460a5df6-30ea-49e2-8522-017b0a9f69a3 no longer exists
[AfterEach] [sig-node] Downward API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 9 15:17:11.193: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-51" for this suite.
•
------------------------------
{"msg":"PASSED [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]","total":-1,"completed":19,"skipped":346,"failed":1,"failures":["[sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance]"]}
SSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-auth] ServiceAccounts
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 9 15:17:07.292: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should mount projected service account token [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating a pod to test service account token:
Jan 9 15:17:07.329: INFO: Waiting up to 5m0s for pod "test-pod-32481931-bd01-4cfb-b6c9-16b3f065a695" in namespace "svcaccounts-9782" to be "Succeeded or Failed"
Jan 9 15:17:07.332: INFO: Pod "test-pod-32481931-bd01-4cfb-b6c9-16b3f065a695": Phase="Pending", Reason="", readiness=false. Elapsed: 2.993057ms
Jan 9 15:17:09.336: INFO: Pod "test-pod-32481931-bd01-4cfb-b6c9-16b3f065a695": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007200511s
Jan 9 15:17:11.342: INFO: Pod "test-pod-32481931-bd01-4cfb-b6c9-16b3f065a695": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013748737s
STEP: Saw pod success
Jan 9 15:17:11.343: INFO: Pod "test-pod-32481931-bd01-4cfb-b6c9-16b3f065a695" satisfied condition "Succeeded or Failed"
Jan 9 15:17:11.346: INFO: Trying to get logs from node k8s-upgrade-and-conformance-viu2kk-worker-qb9e9b pod test-pod-32481931-bd01-4cfb-b6c9-16b3f065a695 container agnhost-container: <nil>
STEP: delete the pod
Jan 9 15:17:11.366: INFO: Waiting for pod test-pod-32481931-bd01-4cfb-b6c9-16b3f065a695 to disappear
Jan 9 15:17:11.370: INFO: Pod test-pod-32481931-bd01-4cfb-b6c9-16b3f065a695 no longer exists
[AfterEach] [sig-auth] ServiceAccounts
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 9 15:17:11.370: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-9782" for this suite.
•
------------------------------
{"msg":"PASSED [sig-auth] ServiceAccounts should mount projected service account token [Conformance]","total":-1,"completed":23,"skipped":557,"failed":6,"failures":["[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]}
SSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-node] PodTemplates
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 9 15:17:11.399: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename podtemplate
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should delete a collection of pod templates [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Create set of pod templates
Jan 9 15:17:11.429: INFO: created test-podtemplate-1
Jan 9 15:17:11.433: INFO: created test-podtemplate-2
Jan 9 15:17:11.437: INFO: created test-podtemplate-3
STEP: get a list of pod templates with a label in the current namespace
STEP: delete collection of pod templates
Jan 9 15:17:11.440: INFO: requesting DeleteCollection of pod templates
STEP: check that the list of pod templates matches the requested quantity
Jan 9 15:17:11.451: INFO: requesting list of pod templates to confirm quantity
[AfterEach] [sig-node] PodTemplates
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 9 15:17:11.454: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "podtemplate-8659" for this suite.
�[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-node] PodTemplates should delete a collection of pod templates [Conformance]","total":-1,"completed":24,"skipped":571,"failed":6,"failures":["[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Jan 9 15:17:11.486: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename secrets �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace �[1mSTEP�[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 �[1mSTEP�[0m: Creating secret with name secret-test-e56a3a0a-bc74-4410-87cd-19838518a8fb �[1mSTEP�[0m: Creating a pod to test consume secrets Jan 9 15:17:11.523: INFO: Waiting up to 5m0s for pod "pod-secrets-46bf0e34-a784-44ce-bb87-0db054e11977" in namespace "secrets-8984" to be "Succeeded or Failed" Jan 9 15:17:11.526: INFO: Pod "pod-secrets-46bf0e34-a784-44ce-bb87-0db054e11977": Phase="Pending", Reason="", readiness=false. Elapsed: 2.791024ms Jan 9 15:17:13.529: INFO: Pod "pod-secrets-46bf0e34-a784-44ce-bb87-0db054e11977": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006637902s Jan 9 15:17:15.534: INFO: Pod "pod-secrets-46bf0e34-a784-44ce-bb87-0db054e11977": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011679429s �[1mSTEP�[0m: Saw pod success Jan 9 15:17:15.534: INFO: Pod "pod-secrets-46bf0e34-a784-44ce-bb87-0db054e11977" satisfied condition "Succeeded or Failed" Jan 9 15:17:15.538: INFO: Trying to get logs from node k8s-upgrade-and-conformance-viu2kk-worker-qb9e9b pod pod-secrets-46bf0e34-a784-44ce-bb87-0db054e11977 container secret-volume-test: <nil> �[1mSTEP�[0m: delete the pod Jan 9 15:17:15.556: INFO: Waiting for pod pod-secrets-46bf0e34-a784-44ce-bb87-0db054e11977 to disappear Jan 9 15:17:15.559: INFO: Pod pod-secrets-46bf0e34-a784-44ce-bb87-0db054e11977 no longer exists [AfterEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 9 15:17:15.559: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "secrets-8984" for this suite. 
�[32m•�[0m �[90m------------------------------�[0m [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Jan 9 15:17:11.223: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename crd-webhook �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace �[1mSTEP�[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:126 �[1mSTEP�[0m: Setting up server cert �[1mSTEP�[0m: Create role binding to let cr conversion webhook read extension-apiserver-authentication �[1mSTEP�[0m: Deploying the custom resource conversion webhook pod �[1mSTEP�[0m: Wait for the deployment to be ready Jan 9 15:17:12.176: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set �[1mSTEP�[0m: Deploying the webhook service �[1mSTEP�[0m: Verifying the service has paired with the endpoint Jan 9 15:17:15.211: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1 [It] should be able to convert from CR v1 to CR v2 [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 Jan 9 15:17:15.215: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Creating a v1 custom resource �[1mSTEP�[0m: v2 custom resource should be converted [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 9 15:17:18.405: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "crd-webhook-6372" for this suite. 
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:137 �[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","total":-1,"completed":20,"skipped":358,"failed":1,"failures":["[sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance]"]} �[36mS�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":25,"skipped":587,"failed":6,"failures":["[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]} [BeforeEach] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Jan 9 15:17:15.574: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename var-expansion �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace �[1mSTEP�[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should fail substituting values in a volume subpath with backticks [Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 Jan 9 15:17:17.642: INFO: Deleting pod "var-expansion-40ac58dd-27d7-402c-81f7-d54688b5a300" in namespace "var-expansion-1008" Jan 9 15:17:17.648: INFO: Wait up to 5m0s for pod "var-expansion-40ac58dd-27d7-402c-81f7-d54688b5a300" to be fully deleted [AfterEach] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 9 15:17:19.660: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "var-expansion-1008" for this suite. 
�[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-node] Variable Expansion should fail substituting values in a volume subpath with backticks [Slow] [Conformance]","total":-1,"completed":26,"skipped":587,"failed":6,"failures":["[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/sysctl.go:36 [BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Jan 9 15:17:19.690: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename sysctl �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace �[1mSTEP�[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/sysctl.go:65 [It] should reject invalid sysctls [MinimumKubeletVersion:1.21] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 �[1mSTEP�[0m: Creating a pod with one valid and two invalid sysctls [AfterEach] [sig-node] Sysctls [LinuxOnly] [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 9 15:17:19.719: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "sysctl-4994" for this suite. 
�[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-node] Sysctls [LinuxOnly] [NodeConformance] should reject invalid sysctls [MinimumKubeletVersion:1.21] [Conformance]","total":-1,"completed":27,"skipped":592,"failed":6,"failures":["[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Jan 9 15:17:19.735: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename projected �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace �[1mSTEP�[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41 [It] should provide container's memory request [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 �[1mSTEP�[0m: Creating a pod to test downward API volume plugin Jan 9 15:17:19.775: INFO: Waiting up to 5m0s for pod "downwardapi-volume-d6598eaf-a1ac-48ac-8a3d-063b45bfab72" in namespace "projected-6771" to be "Succeeded or Failed" Jan 9 15:17:19.778: INFO: Pod "downwardapi-volume-d6598eaf-a1ac-48ac-8a3d-063b45bfab72": Phase="Pending", Reason="", readiness=false. Elapsed: 3.428642ms Jan 9 15:17:21.783: INFO: Pod "downwardapi-volume-d6598eaf-a1ac-48ac-8a3d-063b45bfab72": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008337012s Jan 9 15:17:23.789: INFO: Pod "downwardapi-volume-d6598eaf-a1ac-48ac-8a3d-063b45bfab72": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013667581s �[1mSTEP�[0m: Saw pod success Jan 9 15:17:23.789: INFO: Pod "downwardapi-volume-d6598eaf-a1ac-48ac-8a3d-063b45bfab72" satisfied condition "Succeeded or Failed" Jan 9 15:17:23.793: INFO: Trying to get logs from node k8s-upgrade-and-conformance-viu2kk-worker-1r6syi pod downwardapi-volume-d6598eaf-a1ac-48ac-8a3d-063b45bfab72 container client-container: <nil> �[1mSTEP�[0m: delete the pod Jan 9 15:17:23.811: INFO: Waiting for pod downwardapi-volume-d6598eaf-a1ac-48ac-8a3d-063b45bfab72 to disappear Jan 9 15:17:23.816: INFO: Pod downwardapi-volume-d6598eaf-a1ac-48ac-8a3d-063b45bfab72 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 9 15:17:23.816: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "projected-6771" for this suite. 
�[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance]","total":-1,"completed":28,"skipped":596,"failed":6,"failures":["[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Jan 9 15:17:23.857: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename downward-api �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace �[1mSTEP�[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41 [It] should update annotations on modification [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 �[1mSTEP�[0m: Creating the pod Jan 9 15:17:23.893: INFO: The status of Pod annotationupdatef8d2f43c-727d-49a5-a73b-df696dccf67c is Pending, waiting for it to be Running (with Ready = true) Jan 9 15:17:25.898: INFO: The status of Pod annotationupdatef8d2f43c-727d-49a5-a73b-df696dccf67c is Running (Ready = true) Jan 9 15:17:26.419: INFO: Successfully updated pod "annotationupdatef8d2f43c-727d-49a5-a73b-df696dccf67c" [AfterEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 9 15:17:30.443: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "downward-api-9829" for this suite. 
�[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance]","total":-1,"completed":29,"skipped":618,"failed":6,"failures":["[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-node] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Jan 9 15:17:30.592: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename kubelet-test �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace �[1mSTEP�[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-node] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/kubelet.go:38 [BeforeEach] when scheduling a busybox command that always fails in a pod /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/kubelet.go:82 [It] should be possible to delete [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 [AfterEach] [sig-node] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 9 15:17:30.636: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "kubelet-test-452" for this suite. 
�[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-node] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance]","total":-1,"completed":30,"skipped":688,"failed":6,"failures":["[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]} [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Jan 9 15:17:30.654: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename container-probe �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace �[1mSTEP�[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:59 [It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 �[1mSTEP�[0m: Creating pod test-webserver-2ffcb8b9-4e15-4837-91e7-89faebb94020 in namespace container-probe-7507 Jan 9 15:17:32.704: INFO: Started pod test-webserver-2ffcb8b9-4e15-4837-91e7-89faebb94020 in namespace container-probe-7507 �[1mSTEP�[0m: checking the pod's current state and verifying that restartCount is present Jan 9 15:17:32.708: INFO: Initial restart count of pod test-webserver-2ffcb8b9-4e15-4837-91e7-89faebb94020 is 0 �[1mSTEP�[0m: deleting the pod [AfterEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 9 15:21:33.453: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "container-probe-7507" for this suite. 
�[32m• [SLOW TEST:242.814 seconds]�[0m [sig-node] Probing container �[90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23�[0m should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] �[90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-node] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":-1,"completed":31,"skipped":688,"failed":6,"failures":["[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Jan 9 15:21:33.609: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename security-context-test �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace �[1mSTEP�[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46 [It] should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 Jan 9 15:21:33.665: INFO: Waiting up to 5m0s for pod "busybox-privileged-false-68399d5b-32a9-41aa-b2f7-6e576dd3d0ce" in namespace "security-context-test-4316" to be "Succeeded or Failed" Jan 9 15:21:33.670: INFO: Pod "busybox-privileged-false-68399d5b-32a9-41aa-b2f7-6e576dd3d0ce": Phase="Pending", Reason="", readiness=false. Elapsed: 4.857854ms Jan 9 15:21:35.676: INFO: Pod "busybox-privileged-false-68399d5b-32a9-41aa-b2f7-6e576dd3d0ce": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.010961744s Jan 9 15:21:37.681: INFO: Pod "busybox-privileged-false-68399d5b-32a9-41aa-b2f7-6e576dd3d0ce": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.016092522s Jan 9 15:21:37.681: INFO: Pod "busybox-privileged-false-68399d5b-32a9-41aa-b2f7-6e576dd3d0ce" satisfied condition "Succeeded or Failed" Jan 9 15:21:37.700: INFO: Got logs for pod "busybox-privileged-false-68399d5b-32a9-41aa-b2f7-6e576dd3d0ce": "ip: RTNETLINK answers: Operation not permitted\n" [AfterEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 9 15:21:37.700: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "security-context-test-4316" for this suite. �[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-node] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":32,"skipped":756,"failed":6,"failures":["[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Jan 9 15:21:37.728: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename kubectl �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace �[1mSTEP�[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:244 [It] should support proxy with --port 0 [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 �[1mSTEP�[0m: starting the proxy server Jan 9 15:21:37.754: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-7924 proxy -p 0 --disable-filter' �[1mSTEP�[0m: curling proxy /api/ output [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 9 15:21:37.824: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "kubectl-7924" for this suite. 
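The proxy check just logged can be reproduced by hand. A minimal sketch, assuming the same kubeconfig at /tmp/kubeconfig is reachable and substituting whatever ephemeral port kubectl proxy reports when it starts (the test parses that port from the proxy's output):

  # start a local API proxy on a random free port (-p 0) without the default request filter,
  # mirroring the command the test runs asynchronously
  kubectl --kubeconfig=/tmp/kubeconfig proxy -p 0 --disable-filter &
  # kubectl prints the address it is serving on, e.g. 127.0.0.1:<port>; fetch the API root
  # through it, which is what the "curling proxy /api/" step does
  curl http://127.0.0.1:<port>/api/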
�[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-cli] Kubectl client Proxy server should support proxy with --port 0 [Conformance]","total":-1,"completed":33,"skipped":768,"failed":6,"failures":["[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Jan 9 15:21:37.849: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename statefulset �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace �[1mSTEP�[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:94 [BeforeEach] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:109 �[1mSTEP�[0m: Creating service test in namespace statefulset-7130 [It] should list, patch and delete a collection of StatefulSets [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 Jan 9 15:21:37.895: INFO: Found 0 stateful pods, waiting for 1 Jan 9 15:21:47.899: INFO: Waiting for pod test-ss-0 to enter Running - Ready=true, currently Running - Ready=true �[1mSTEP�[0m: patching the StatefulSet Jan 9 15:21:47.921: INFO: Found 1 stateful pods, waiting for 2 Jan 9 15:21:57.927: INFO: Waiting for pod test-ss-0 to enter Running - Ready=true, currently Running - Ready=true Jan 9 15:21:57.927: INFO: Waiting for pod test-ss-1 to enter Running - Ready=true, currently Running - Ready=true �[1mSTEP�[0m: Listing all StatefulSets �[1mSTEP�[0m: Delete all of the StatefulSets �[1mSTEP�[0m: Verify that StatefulSets have been deleted [AfterEach] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:120 Jan 9 15:21:57.955: INFO: Deleting all statefulset in ns statefulset-7130 [AfterEach] [sig-apps] StatefulSet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 9 15:21:57.964: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "statefulset-7130" for this suite. 
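The StatefulSet steps above (list, patch, delete the collection, verify deletion) are driven through the client-go API inside the test; a rough kubectl equivalent is sketched below, assuming the StatefulSet is named test-ss (inferred from the test-ss-0/test-ss-1 pod names) and using an illustrative label merge patch, since the exact patch body the test applies is not shown in the log:

  # list the StatefulSets in the test namespace
  kubectl --kubeconfig=/tmp/kubeconfig -n statefulset-7130 get statefulsets
  # apply a merge patch (illustrative; the test's real patch payload is not logged)
  kubectl --kubeconfig=/tmp/kubeconfig -n statefulset-7130 patch statefulset test-ss \
    --type merge -p '{"metadata":{"labels":{"e2e":"patched"}}}'
  # delete the whole collection, then confirm nothing is left
  kubectl --kubeconfig=/tmp/kubeconfig -n statefulset-7130 delete statefulsets --all
  kubectl --kubeconfig=/tmp/kubeconfig -n statefulset-7130 get statefulsets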
�[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should list, patch and delete a collection of StatefulSets [Conformance]","total":-1,"completed":34,"skipped":777,"failed":6,"failures":["[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Jan 9 15:21:58.081: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename projected �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace �[1mSTEP�[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 �[1mSTEP�[0m: Creating a pod to test downward API volume plugin Jan 9 15:21:58.126: INFO: Waiting up to 5m0s for pod "downwardapi-volume-ad243bf5-b7de-4d18-b527-956a7c4bb0f2" in namespace "projected-685" to be "Succeeded or Failed" Jan 9 15:21:58.148: INFO: Pod "downwardapi-volume-ad243bf5-b7de-4d18-b527-956a7c4bb0f2": Phase="Pending", Reason="", readiness=false. Elapsed: 22.407077ms Jan 9 15:22:00.154: INFO: Pod "downwardapi-volume-ad243bf5-b7de-4d18-b527-956a7c4bb0f2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028399385s Jan 9 15:22:02.159: INFO: Pod "downwardapi-volume-ad243bf5-b7de-4d18-b527-956a7c4bb0f2": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.033498566s �[1mSTEP�[0m: Saw pod success Jan 9 15:22:02.159: INFO: Pod "downwardapi-volume-ad243bf5-b7de-4d18-b527-956a7c4bb0f2" satisfied condition "Succeeded or Failed" Jan 9 15:22:02.162: INFO: Trying to get logs from node k8s-upgrade-and-conformance-viu2kk-worker-qb9e9b pod downwardapi-volume-ad243bf5-b7de-4d18-b527-956a7c4bb0f2 container client-container: <nil> �[1mSTEP�[0m: delete the pod Jan 9 15:22:02.190: INFO: Waiting for pod downwardapi-volume-ad243bf5-b7de-4d18-b527-956a7c4bb0f2 to disappear Jan 9 15:22:02.193: INFO: Pod downwardapi-volume-ad243bf5-b7de-4d18-b527-956a7c4bb0f2 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 9 15:22:02.193: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "projected-685" for this suite. �[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":-1,"completed":35,"skipped":813,"failed":6,"failures":["[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Jan 9 15:22:02.241: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename kubectl �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace �[1mSTEP�[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:244 [It] should add annotations for pods in rc [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 �[1mSTEP�[0m: creating Agnhost RC Jan 9 15:22:02.262: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-3883 create -f -' Jan 9 15:22:03.256: INFO: stderr: "" Jan 9 15:22:03.256: INFO: stdout: "replicationcontroller/agnhost-primary created\n" �[1mSTEP�[0m: Waiting for Agnhost primary to start. Jan 9 15:22:04.260: INFO: Selector matched 1 pods for map[app:agnhost] Jan 9 15:22:04.260: INFO: Found 1 / 1 Jan 9 15:22:04.260: INFO: WaitFor completed with timeout 5m0s. 
Pods found = 1 out of 1 �[1mSTEP�[0m: patching all pods Jan 9 15:22:04.264: INFO: Selector matched 1 pods for map[app:agnhost] Jan 9 15:22:04.264: INFO: ForEach: Found 1 pods from the filter. Now looping through them. Jan 9 15:22:04.264: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-3883 patch pod agnhost-primary-dwsl8 -p {"metadata":{"annotations":{"x":"y"}}}' Jan 9 15:22:04.345: INFO: stderr: "" Jan 9 15:22:04.345: INFO: stdout: "pod/agnhost-primary-dwsl8 patched\n" �[1mSTEP�[0m: checking annotations Jan 9 15:22:04.349: INFO: Selector matched 1 pods for map[app:agnhost] Jan 9 15:22:04.349: INFO: ForEach: Found 1 pods from the filter. Now looping through them. [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 9 15:22:04.350: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "kubectl-3883" for this suite. �[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc [Conformance]","total":-1,"completed":36,"skipped":836,"failed":6,"failures":["[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-node] Container Lifecycle Hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Jan 9 15:22:04.414: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename container-lifecycle-hook �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace �[1mSTEP�[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/lifecycle_hook.go:53 �[1mSTEP�[0m: create the container to handle the HTTPGet hook request. 
Jan 9 15:22:04.463: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true) Jan 9 15:22:06.469: INFO: The status of Pod pod-handle-http-request is Running (Ready = true) [It] should execute prestop http hook properly [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 �[1mSTEP�[0m: create the pod with lifecycle hook Jan 9 15:22:06.481: INFO: The status of Pod pod-with-prestop-http-hook is Pending, waiting for it to be Running (with Ready = true) Jan 9 15:22:08.487: INFO: The status of Pod pod-with-prestop-http-hook is Running (Ready = true) �[1mSTEP�[0m: delete the pod with lifecycle hook Jan 9 15:22:08.496: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Jan 9 15:22:08.500: INFO: Pod pod-with-prestop-http-hook still exists Jan 9 15:22:10.501: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Jan 9 15:22:10.506: INFO: Pod pod-with-prestop-http-hook no longer exists �[1mSTEP�[0m: check prestop hook [AfterEach] [sig-node] Container Lifecycle Hook /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 9 15:22:10.529: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "container-lifecycle-hook-9968" for this suite. �[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]","total":-1,"completed":37,"skipped":872,"failed":6,"failures":["[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]"]} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Jan 9 15:22:10.567: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename services �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace �[1mSTEP�[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:752 [It] should be able to create a functioning NodePort service [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 �[1mSTEP�[0m: creating service nodeport-test with type=NodePort in namespace services-7903 �[1mSTEP�[0m: creating replication controller nodeport-test in namespace 
services-7903 I0109 15:22:10.626357 18 runners.go:193] Created replication controller with name: nodeport-test, namespace: services-7903, replica count: 2 I0109 15:22:13.679078 18 runners.go:193] nodeport-test Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Jan 9 15:22:13.679: INFO: Creating new exec pod Jan 9 15:22:16.700: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-7903 exec execpodzf8f5 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 nodeport-test 80' Jan 9 15:22:18.851: INFO: rc: 1 Jan 9 15:22:18.852: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-7903 exec execpodzf8f5 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 nodeport-test 80: Command stdout: stderr: + echo hostName + nc -v -t -w 2 nodeport-test 80 nc: connect to nodeport-test port 80 (tcp) timed out: Operation in progress command terminated with exit code 1 error: exit status 1 Retrying... Jan 9 15:22:19.852: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-7903 exec execpodzf8f5 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 nodeport-test 80' Jan 9 15:22:22.175: INFO: rc: 1 Jan 9 15:22:22.175: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-7903 exec execpodzf8f5 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 nodeport-test 80: Command stdout: stderr: + echo hostName + nc -v -t -w 2 nodeport-test 80 nc: connect to nodeport-test port 80 (tcp) timed out: Operation in progress command terminated with exit code 1 error: exit status 1 Retrying... Jan 9 15:22:22.853: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-7903 exec execpodzf8f5 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 nodeport-test 80' Jan 9 15:22:25.211: INFO: rc: 1 Jan 9 15:22:25.211: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-7903 exec execpodzf8f5 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 nodeport-test 80: Command stdout: stderr: + echo hostName + nc -v -t -w 2 nodeport-test 80 nc: connect to nodeport-test port 80 (tcp) timed out: Operation in progress command terminated with exit code 1 error: exit status 1 Retrying... Jan 9 15:22:25.853: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-7903 exec execpodzf8f5 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 nodeport-test 80' Jan 9 15:22:28.161: INFO: rc: 1 Jan 9 15:22:28.162: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-7903 exec execpodzf8f5 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 nodeport-test 80: Command stdout: stderr: + echo hostName + nc -v -t -w 2 nodeport-test 80 nc: connect to nodeport-test port 80 (tcp) timed out: Operation in progress command terminated with exit code 1 error: exit status 1 Retrying... 
Jan 9 15:22:28.852: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-7903 exec execpodzf8f5 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 nodeport-test 80' Jan 9 15:22:31.156: INFO: rc: 1 Jan 9 15:22:31.156: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-7903 exec execpodzf8f5 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 nodeport-test 80: Command stdout: stderr: + echo hostName + nc -v -t -w 2 nodeport-test 80 nc: connect to nodeport-test port 80 (tcp) timed out: Operation in progress command terminated with exit code 1 error: exit status 1 Retrying... Jan 9 15:22:31.853: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-7903 exec execpodzf8f5 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 nodeport-test 80' Jan 9 15:22:34.167: INFO: rc: 1 Jan 9 15:22:34.167: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-7903 exec execpodzf8f5 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 nodeport-test 80: Command stdout: stderr: + echo hostName + nc -v -t -w 2 nodeport-test 80 nc: connect to nodeport-test port 80 (tcp) timed out: Operation in progress command terminated with exit code 1 error: exit status 1 Retrying... Jan 9 15:22:34.852: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-7903 exec execpodzf8f5 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 nodeport-test 80' Jan 9 15:22:37.158: INFO: rc: 1 Jan 9 15:22:37.159: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-7903 exec execpodzf8f5 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 nodeport-test 80: Command stdout: stderr: + echo hostName + nc -v -t -w 2 nodeport-test 80 nc: connect to nodeport-test port 80 (tcp) timed out: Operation in progress command terminated with exit code 1 error: exit status 1 Retrying... Jan 9 15:22:37.852: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-7903 exec execpodzf8f5 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 nodeport-test 80' Jan 9 15:22:40.155: INFO: rc: 1 Jan 9 15:22:40.155: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-7903 exec execpodzf8f5 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 nodeport-test 80: Command stdout: stderr: + echo hostName + nc -v -t -w 2 nodeport-test 80 nc: connect to nodeport-test port 80 (tcp) timed out: Operation in progress command terminated with exit code 1 error: exit status 1 Retrying... Jan 9 15:22:40.852: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-7903 exec execpodzf8f5 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 nodeport-test 80' Jan 9 15:22:43.141: INFO: rc: 1 Jan 9 15:22:43.141: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-7903 exec execpodzf8f5 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 nodeport-test 80: Command stdout: stderr: + echo hostName + nc -v -t -w 2 nodeport-test 80 nc: connect to nodeport-test port 80 (tcp) timed out: Operation in progress command terminated with exit code 1 error: exit status 1 Retrying... 
Jan 9 15:22:43.853: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-7903 exec execpodzf8f5 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 nodeport-test 80' Jan 9 15:22:46.165: INFO: rc: 1 Jan 9 15:22:46.165: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-7903 exec execpodzf8f5 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 nodeport-test 80: Command stdout: stderr: + echo hostName + nc -v -t -w 2 nodeport-test 80 nc: connect to nodeport-test port 80 (tcp) timed out: Operation in progress command terminated with exit code 1 error: exit status 1 Retrying... Jan 9 15:22:46.853: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-7903 exec execpodzf8f5 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 nodeport-test 80' Jan 9 15:22:49.128: INFO: rc: 1 Jan 9 15:22:49.129: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-7903 exec execpodzf8f5 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 nodeport-test 80: Command stdout: stderr: + echo hostName + nc -v -t -w 2 nodeport-test 80 nc: connect to nodeport-test port 80 (tcp) timed out: Operation in progress command terminated with exit code 1 error: exit status 1 Retrying... Jan 9 15:22:49.852: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-7903 exec execpodzf8f5 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 nodeport-test 80' Jan 9 15:22:52.203: INFO: rc: 1 Jan 9 15:22:52.204: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-7903 exec execpodzf8f5 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 nodeport-test 80: Command stdout: stderr: + nc -v -t -w 2 nodeport-test 80 + echo hostName nc: connect to nodeport-test port 80 (tcp) timed out: Operation in progress command terminated with exit code 1 error: exit status 1 Retrying... Jan 9 15:22:52.852: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-7903 exec execpodzf8f5 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 nodeport-test 80' Jan 9 15:22:55.205: INFO: rc: 1 Jan 9 15:22:55.206: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-7903 exec execpodzf8f5 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 nodeport-test 80: Command stdout: stderr: + echo hostName + nc -v -t -w 2 nodeport-test 80 nc: connect to nodeport-test port 80 (tcp) timed out: Operation in progress command terminated with exit code 1 error: exit status 1 Retrying... Jan 9 15:22:55.853: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-7903 exec execpodzf8f5 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 nodeport-test 80' Jan 9 15:22:58.146: INFO: rc: 1 Jan 9 15:22:58.146: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-7903 exec execpodzf8f5 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 nodeport-test 80: Command stdout: stderr: + echo hostName + nc -v -t -w 2 nodeport-test 80 nc: connect to nodeport-test port 80 (tcp) timed out: Operation in progress command terminated with exit code 1 error: exit status 1 Retrying... 
[... the same probe was retried roughly every three seconds from 15:22:58 through 15:24:21, every attempt returning rc: 1 with "nc: connect to nodeport-test port 80 (tcp) timed out: Operation in progress" ...]
Jan 9 15:24:21.461: FAIL: Unexpected error:
    <*errors.errorString | 0xc003b4c6e0>: {
        s: "service is not reachable within 2m0s timeout on endpoint nodeport-test:80 over TCP protocol",
    }
    service is not reachable within 2m0s timeout on endpoint nodeport-test:80 over TCP protocol
occurred

Full Stack Trace
k8s.io/kubernetes/test/e2e/network.glob..func24.11()
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1193 +0x145
k8s.io/kubernetes/test/e2e.RunE2ETests(0x24c66d7)
    _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:133 +0x697
k8s.io/kubernetes/test/e2e.TestE2E(0x243a8f9)
    _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:136 +0x19
testing.tRunner(0xc000277d40, 0x735e880)
    /usr/local/go/src/testing/testing.go:1259 +0x102
created by testing.(*T).Run
    /usr/local/go/src/testing/testing.go:1306 +0x35a
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 9 15:24:21.462: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-7903" for this suite.
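(Editor's note, a possible follow-up not captured in the original run: after a reachability failure like this, checking whether the Service ever had ready endpoints and what kube-proxy logged usually narrows things down. A sketch, assuming the workload-cluster kubeconfig at /tmp/kubeconfig is still valid and the kubeadm default label k8s-app=kube-proxy:

  # did the Service have ready backend addresses while the probe was timing out?
  kubectl --kubeconfig=/tmp/kubeconfig -n services-7903 get endpoints nodeport-test

  # kube-proxy programs the ClusterIP/NodePort rules; look for sync errors on the affected nodes
  kubectl --kubeconfig=/tmp/kubeconfig -n kube-system logs -l k8s-app=kube-proxy --tail=100

Note the spec destroys namespace services-7903 immediately above, so the endpoints check only applies while the failure is live.)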
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:756
• Failure [130.934 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should be able to create a functioning NodePort service [Conformance] [It]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633

  Jan 9 15:24:21.461: Unexpected error:
      <*errors.errorString | 0xc003b4c6e0>: {
          s: "service is not reachable within 2m0s timeout on endpoint nodeport-test:80 over TCP protocol",
      }
      service is not reachable within 2m0s timeout on endpoint nodeport-test:80 over TCP protocol
  occurred

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:1193
------------------------------
[BeforeEach] [sig-apps] ReplicationController
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 9 15:17:18.488: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-apps] ReplicationController
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:54
[It] should serve a basic image on each replica with a public image [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating replication controller my-hostname-basic-e07b523b-68e7-44b3-a3ff-3b1d7e7b6d93
Jan 9 15:17:18.604: INFO: Pod name my-hostname-basic-e07b523b-68e7-44b3-a3ff-3b1d7e7b6d93: Found 1 pods out of 1
Jan 9 15:17:18.604: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-e07b523b-68e7-44b3-a3ff-3b1d7e7b6d93" are running
Jan 9 15:17:20.617: INFO: Pod "my-hostname-basic-e07b523b-68e7-44b3-a3ff-3b1d7e7b6d93-zrkpr" is running (conditions: [])
Jan 9 15:17:20.617: INFO: Trying to dial the pod
Jan 9 15:20:58.829: INFO: Controller my-hostname-basic-e07b523b-68e7-44b3-a3ff-3b1d7e7b6d93: Failed to GET from replica 1 [my-hostname-basic-e07b523b-68e7-44b3-a3ff-3b1d7e7b6d93-zrkpr]: the server is currently unable to handle the request (get pods my-hostname-basic-e07b523b-68e7-44b3-a3ff-3b1d7e7b6d93-zrkpr) pod status: v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition(nil), Message:"", Reason:"", NominatedNodeName:"", HostIP:"", PodIP:"", PodIPs:[]v1.PodIP(nil), StartTime:<nil>, InitContainerStatuses:[]v1.ContainerStatus(nil), ContainerStatuses:[]v1.ContainerStatus(nil), QOSClass:"BestEffort", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}
Jan 9 15:24:31.826: INFO: Controller my-hostname-basic-e07b523b-68e7-44b3-a3ff-3b1d7e7b6d93: Failed to GET from replica 1 [my-hostname-basic-e07b523b-68e7-44b3-a3ff-3b1d7e7b6d93-zrkpr]: the server is currently unable to handle the request (get pods my-hostname-basic-e07b523b-68e7-44b3-a3ff-3b1d7e7b6d93-zrkpr) pod status: v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition(nil), Message:"", Reason:"", NominatedNodeName:"", HostIP:"", PodIP:"", PodIPs:[]v1.PodIP(nil), StartTime:<nil>, InitContainerStatuses:[]v1.ContainerStatus(nil), ContainerStatuses:[]v1.ContainerStatus(nil), QOSClass:"BestEffort", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}
Jan 9 15:24:31.827: FAIL: Did not get expected responses within the timeout period of 120.00 seconds.

Full Stack Trace
k8s.io/kubernetes/test/e2e/apps.glob..func7.2()
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:65 +0x37
k8s.io/kubernetes/test/e2e.RunE2ETests(0x24c66d7)
    _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:133 +0x697
k8s.io/kubernetes/test/e2e.TestE2E(0x243a8f9)
    _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:136 +0x19
testing.tRunner(0xc0002321a0, 0x735e880)
    /usr/local/go/src/testing/testing.go:1259 +0x102
created by testing.(*T).Run
    /usr/local/go/src/testing/testing.go:1306 +0x35a
[AfterEach] [sig-apps] ReplicationController
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 9 15:24:31.828: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-1856" for this suite.
• Failure [433.358 seconds]
[sig-apps] ReplicationController
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should serve a basic image on each replica with a public image [Conformance] [It]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633

  Jan 9 15:24:31.827: Did not get expected responses within the timeout period of 120.00 seconds.

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:65
------------------------------
{"msg":"FAILED [sig-network] Services should be able to create a functioning NodePort service [Conformance]","total":-1,"completed":37,"skipped":883,"failed":7,"failures":["[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-network] Services should be able to create a functioning NodePort service [Conformance]"]}
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 9 15:24:21.509: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:752
[It] should be able to create a functioning NodePort service [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 �[1mSTEP�[0m: creating service nodeport-test with type=NodePort in namespace services-3052 �[1mSTEP�[0m: creating replication controller nodeport-test in namespace services-3052 I0109 15:24:21.650745 18 runners.go:193] Created replication controller with name: nodeport-test, namespace: services-3052, replica count: 2 I0109 15:24:24.701933 18 runners.go:193] nodeport-test Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Jan 9 15:24:24.702: INFO: Creating new exec pod Jan 9 15:24:27.744: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-3052 exec execpodd6qtv -- /bin/sh -x -c echo hostName | nc -v -t -w 2 nodeport-test 80' Jan 9 15:24:28.233: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 nodeport-test 80\nConnection to nodeport-test 80 port [tcp/http] succeeded!\n" Jan 9 15:24:28.233: INFO: stdout: "nodeport-test-lsthn" Jan 9 15:24:28.233: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-3052 exec execpodd6qtv -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.140.41.89 80' Jan 9 15:24:28.540: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 10.140.41.89 80\nConnection to 10.140.41.89 80 port [tcp/http] succeeded!\n" Jan 9 15:24:28.540: INFO: stdout: "" Jan 9 15:24:29.542: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-3052 exec execpodd6qtv -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.140.41.89 80' Jan 9 15:24:29.826: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 10.140.41.89 80\nConnection to 10.140.41.89 80 port [tcp/http] succeeded!\n" Jan 9 15:24:29.826: INFO: stdout: "" Jan 9 15:24:30.541: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-3052 exec execpodd6qtv -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.140.41.89 80' Jan 9 15:24:30.921: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 10.140.41.89 80\nConnection to 10.140.41.89 80 port [tcp/http] succeeded!\n" Jan 9 15:24:30.921: INFO: stdout: "" Jan 9 15:24:31.541: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-3052 exec execpodd6qtv -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.140.41.89 80' Jan 9 15:24:31.858: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 10.140.41.89 80\nConnection to 10.140.41.89 80 port [tcp/http] succeeded!\n" Jan 9 15:24:31.859: INFO: stdout: "nodeport-test-sxw76" Jan 9 15:24:31.859: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-3052 exec execpodd6qtv -- /bin/sh -x -c echo hostName | nc -v -t -w 2 172.18.0.4 30583' Jan 9 15:24:32.238: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 172.18.0.4 30583\nConnection to 172.18.0.4 30583 port [tcp/*] succeeded!\n" Jan 9 15:24:32.238: INFO: stdout: "nodeport-test-lsthn" Jan 9 15:24:32.238: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-3052 exec execpodd6qtv -- /bin/sh -x -c echo hostName | nc -v -t -w 2 172.18.0.5 30583' Jan 9 15:24:32.546: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 172.18.0.5 30583\nConnection to 172.18.0.5 30583 port [tcp/*] succeeded!\n" Jan 9 15:24:32.546: INFO: stdout: "nodeport-test-sxw76" [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 9 15:24:32.546: INFO: Waiting 
up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "services-3052" for this suite. [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:756 �[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-network] Services should be able to create a functioning NodePort service [Conformance]","total":-1,"completed":38,"skipped":883,"failed":7,"failures":["[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-network] Services should be able to create a functioning NodePort service [Conformance]"]} �[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Jan 9 15:24:32.584: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename configmap �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace �[1mSTEP�[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 �[1mSTEP�[0m: Creating configMap with name configmap-test-volume-57b25303-33b7-4e59-b641-19412c7689c6 �[1mSTEP�[0m: Creating a pod to test consume configMaps Jan 9 15:24:32.670: INFO: Waiting up to 5m0s for pod "pod-configmaps-dfef0c76-a16c-4d9a-89a3-d26d07c5b39d" in namespace "configmap-7127" to be "Succeeded or Failed" Jan 9 15:24:32.688: INFO: Pod "pod-configmaps-dfef0c76-a16c-4d9a-89a3-d26d07c5b39d": Phase="Pending", Reason="", readiness=false. Elapsed: 18.595575ms Jan 9 15:24:34.697: INFO: Pod "pod-configmaps-dfef0c76-a16c-4d9a-89a3-d26d07c5b39d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02687803s Jan 9 15:24:36.703: INFO: Pod "pod-configmaps-dfef0c76-a16c-4d9a-89a3-d26d07c5b39d": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.033001431s �[1mSTEP�[0m: Saw pod success Jan 9 15:24:36.703: INFO: Pod "pod-configmaps-dfef0c76-a16c-4d9a-89a3-d26d07c5b39d" satisfied condition "Succeeded or Failed" Jan 9 15:24:36.709: INFO: Trying to get logs from node k8s-upgrade-and-conformance-viu2kk-md-0-c5nrs-77b68b5644-2bgsv pod pod-configmaps-dfef0c76-a16c-4d9a-89a3-d26d07c5b39d container agnhost-container: <nil> �[1mSTEP�[0m: delete the pod Jan 9 15:24:36.756: INFO: Waiting for pod pod-configmaps-dfef0c76-a16c-4d9a-89a3-d26d07c5b39d to disappear Jan 9 15:24:36.760: INFO: Pod pod-configmaps-dfef0c76-a16c-4d9a-89a3-d26d07c5b39d no longer exists [AfterEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 9 15:24:36.760: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "configmap-7127" for this suite. �[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":-1,"completed":39,"skipped":886,"failed":7,"failures":["[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-network] Services should be able to create a functioning NodePort service [Conformance]"]} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Jan 9 15:24:36.888: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename downward-api �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace �[1mSTEP�[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 �[1mSTEP�[0m: Creating a pod to test downward API volume plugin Jan 9 15:24:36.957: INFO: Waiting up to 5m0s for pod "downwardapi-volume-f94de308-0632-4790-a232-a3468b788648" in namespace "downward-api-3224" to be "Succeeded or 
Failed" Jan 9 15:24:36.964: INFO: Pod "downwardapi-volume-f94de308-0632-4790-a232-a3468b788648": Phase="Pending", Reason="", readiness=false. Elapsed: 6.077002ms Jan 9 15:24:38.968: INFO: Pod "downwardapi-volume-f94de308-0632-4790-a232-a3468b788648": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01066073s Jan 9 15:24:40.980: INFO: Pod "downwardapi-volume-f94de308-0632-4790-a232-a3468b788648": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.022423122s �[1mSTEP�[0m: Saw pod success Jan 9 15:24:40.980: INFO: Pod "downwardapi-volume-f94de308-0632-4790-a232-a3468b788648" satisfied condition "Succeeded or Failed" Jan 9 15:24:40.988: INFO: Trying to get logs from node k8s-upgrade-and-conformance-viu2kk-md-0-c5nrs-77b68b5644-2bgsv pod downwardapi-volume-f94de308-0632-4790-a232-a3468b788648 container client-container: <nil> �[1mSTEP�[0m: delete the pod Jan 9 15:24:41.020: INFO: Waiting for pod downwardapi-volume-f94de308-0632-4790-a232-a3468b788648 to disappear Jan 9 15:24:41.038: INFO: Pod downwardapi-volume-f94de308-0632-4790-a232-a3468b788648 no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 9 15:24:41.038: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "downward-api-3224" for this suite. �[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":-1,"completed":40,"skipped":926,"failed":7,"failures":["[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-network] Services should be able to create a functioning NodePort service [Conformance]"]} �[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Jan 9 15:24:41.071: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename projected �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace �[1mSTEP�[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 �[1mSTEP�[0m: Creating a pod to test downward API volume plugin Jan 9 15:24:41.114: INFO: 
Waiting up to 5m0s for pod "downwardapi-volume-055bd16a-a165-430e-90f6-a2647d179e4f" in namespace "projected-4879" to be "Succeeded or Failed" Jan 9 15:24:41.119: INFO: Pod "downwardapi-volume-055bd16a-a165-430e-90f6-a2647d179e4f": Phase="Pending", Reason="", readiness=false. Elapsed: 5.091722ms Jan 9 15:24:43.126: INFO: Pod "downwardapi-volume-055bd16a-a165-430e-90f6-a2647d179e4f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011910197s Jan 9 15:24:45.135: INFO: Pod "downwardapi-volume-055bd16a-a165-430e-90f6-a2647d179e4f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.020682485s �[1mSTEP�[0m: Saw pod success Jan 9 15:24:45.135: INFO: Pod "downwardapi-volume-055bd16a-a165-430e-90f6-a2647d179e4f" satisfied condition "Succeeded or Failed" Jan 9 15:24:45.146: INFO: Trying to get logs from node k8s-upgrade-and-conformance-viu2kk-md-0-c5nrs-77b68b5644-t8jmv pod downwardapi-volume-055bd16a-a165-430e-90f6-a2647d179e4f container client-container: <nil> �[1mSTEP�[0m: delete the pod Jan 9 15:24:45.185: INFO: Waiting for pod downwardapi-volume-055bd16a-a165-430e-90f6-a2647d179e4f to disappear Jan 9 15:24:45.192: INFO: Pod downwardapi-volume-055bd16a-a165-430e-90f6-a2647d179e4f no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 9 15:24:45.192: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "projected-4879" for this suite. �[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":-1,"completed":41,"skipped":927,"failed":7,"failures":["[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-network] Services should be able to create a functioning NodePort service [Conformance]"]} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Jan 9 15:24:45.227: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename kubectl �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace �[1mSTEP�[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:244 [It] should create and stop a working application [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 
�[1mSTEP�[0m: creating all guestbook components Jan 9 15:24:45.275: INFO: apiVersion: v1 kind: Service metadata: name: agnhost-replica labels: app: agnhost role: replica tier: backend spec: ports: - port: 6379 selector: app: agnhost role: replica tier: backend Jan 9 15:24:45.275: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-6605 create -f -' Jan 9 15:24:45.930: INFO: stderr: "" Jan 9 15:24:45.930: INFO: stdout: "service/agnhost-replica created\n" Jan 9 15:24:45.930: INFO: apiVersion: v1 kind: Service metadata: name: agnhost-primary labels: app: agnhost role: primary tier: backend spec: ports: - port: 6379 targetPort: 6379 selector: app: agnhost role: primary tier: backend Jan 9 15:24:45.930: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-6605 create -f -' Jan 9 15:24:46.599: INFO: stderr: "" Jan 9 15:24:46.599: INFO: stdout: "service/agnhost-primary created\n" Jan 9 15:24:46.599: INFO: apiVersion: v1 kind: Service metadata: name: frontend labels: app: guestbook tier: frontend spec: # if your cluster supports it, uncomment the following to automatically create # an external load-balanced IP for the frontend service. # type: LoadBalancer ports: - port: 80 selector: app: guestbook tier: frontend Jan 9 15:24:46.599: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-6605 create -f -' Jan 9 15:24:48.440: INFO: stderr: "" Jan 9 15:24:48.440: INFO: stdout: "service/frontend created\n" Jan 9 15:24:48.440: INFO: apiVersion: apps/v1 kind: Deployment metadata: name: frontend spec: replicas: 3 selector: matchLabels: app: guestbook tier: frontend template: metadata: labels: app: guestbook tier: frontend spec: containers: - name: guestbook-frontend image: k8s.gcr.io/e2e-test-images/agnhost:2.39 args: [ "guestbook", "--backend-port", "6379" ] resources: requests: cpu: 100m memory: 100Mi ports: - containerPort: 80 Jan 9 15:24:48.440: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-6605 create -f -' Jan 9 15:24:48.874: INFO: stderr: "" Jan 9 15:24:48.874: INFO: stdout: "deployment.apps/frontend created\n" Jan 9 15:24:48.874: INFO: apiVersion: apps/v1 kind: Deployment metadata: name: agnhost-primary spec: replicas: 1 selector: matchLabels: app: agnhost role: primary tier: backend template: metadata: labels: app: agnhost role: primary tier: backend spec: containers: - name: primary image: k8s.gcr.io/e2e-test-images/agnhost:2.39 args: [ "guestbook", "--http-port", "6379" ] resources: requests: cpu: 100m memory: 100Mi ports: - containerPort: 6379 Jan 9 15:24:48.874: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-6605 create -f -' Jan 9 15:24:49.300: INFO: stderr: "" Jan 9 15:24:49.300: INFO: stdout: "deployment.apps/agnhost-primary created\n" Jan 9 15:24:49.300: INFO: apiVersion: apps/v1 kind: Deployment metadata: name: agnhost-replica spec: replicas: 2 selector: matchLabels: app: agnhost role: replica tier: backend template: metadata: labels: app: agnhost role: replica tier: backend spec: containers: - name: replica image: k8s.gcr.io/e2e-test-images/agnhost:2.39 args: [ "guestbook", "--replicaof", "agnhost-primary", "--http-port", "6379" ] resources: requests: cpu: 100m memory: 100Mi ports: - containerPort: 6379 Jan 9 15:24:49.301: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-6605 create -f -' Jan 9 15:24:50.011: INFO: stderr: "" Jan 9 15:24:50.011: INFO: stdout: 
"deployment.apps/agnhost-replica created\n" �[1mSTEP�[0m: validating guestbook app Jan 9 15:24:50.011: INFO: Waiting for all frontend pods to be Running. Jan 9 15:24:55.063: INFO: Waiting for frontend to serve content. Jan 9 15:24:55.078: INFO: Trying to add a new entry to the guestbook. Jan 9 15:25:00.090: INFO: Failed to get response from guestbook. err: the server responded with the status code 417 but did not return more information (get services frontend), response: Jan 9 15:25:05.109: INFO: Verifying that added entry can be retrieved. Jan 9 15:25:05.121: INFO: Failed to get response from guestbook. err: <nil>, response: {"data":""} Jan 9 15:25:15.139: INFO: Failed to get response from guestbook. err: the server responded with the status code 417 but did not return more information (get services frontend), response: Jan 9 15:25:20.159: INFO: Failed to get response from guestbook. err: <nil>, response: {"data":""} �[1mSTEP�[0m: using delete to clean up resources Jan 9 15:25:25.183: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-6605 delete --grace-period=0 --force -f -' Jan 9 15:25:25.446: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Jan 9 15:25:25.446: INFO: stdout: "service \"agnhost-replica\" force deleted\n" �[1mSTEP�[0m: using delete to clean up resources Jan 9 15:25:25.446: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-6605 delete --grace-period=0 --force -f -' Jan 9 15:25:25.764: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Jan 9 15:25:25.764: INFO: stdout: "service \"agnhost-primary\" force deleted\n" �[1mSTEP�[0m: using delete to clean up resources Jan 9 15:25:25.765: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-6605 delete --grace-period=0 --force -f -' Jan 9 15:25:25.963: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Jan 9 15:25:25.963: INFO: stdout: "service \"frontend\" force deleted\n" �[1mSTEP�[0m: using delete to clean up resources Jan 9 15:25:25.963: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-6605 delete --grace-period=0 --force -f -' Jan 9 15:25:26.143: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Jan 9 15:25:26.143: INFO: stdout: "deployment.apps \"frontend\" force deleted\n" �[1mSTEP�[0m: using delete to clean up resources Jan 9 15:25:26.144: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-6605 delete --grace-period=0 --force -f -' Jan 9 15:25:26.392: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Jan 9 15:25:26.392: INFO: stdout: "deployment.apps \"agnhost-primary\" force deleted\n" �[1mSTEP�[0m: using delete to clean up resources Jan 9 15:25:26.402: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-6605 delete --grace-period=0 --force -f -' Jan 9 15:25:26.704: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Jan 9 15:25:26.705: INFO: stdout: "deployment.apps \"agnhost-replica\" force deleted\n" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 9 15:25:26.705: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "kubectl-6605" for this suite. �[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]","total":-1,"completed":42,"skipped":933,"failed":7,"failures":["[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-network] Services should be able to create a functioning NodePort service [Conformance]"]} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-network] HostPort /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Jan 9 15:25:26.932: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename hostport �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace �[1mSTEP�[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-network] HostPort /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/hostport.go:47 [It] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 �[1mSTEP�[0m: Trying to create a pod(pod1) with hostport 54323 and hostIP 127.0.0.1 and expect scheduled Jan 9 15:25:27.135: INFO: The status of Pod pod1 is Pending, waiting for it to be Running (with Ready = true) Jan 9 15:25:29.142: INFO: The status of Pod pod1 is Running (Ready = true) �[1mSTEP�[0m: Trying to create another pod(pod2) with hostport 54323 but hostIP 
172.18.0.7 on the node which pod1 resides and expect scheduled Jan 9 15:25:29.155: INFO: The status of Pod pod2 is Pending, waiting for it to be Running (with Ready = true) Jan 9 15:25:31.162: INFO: The status of Pod pod2 is Running (Ready = false) Jan 9 15:25:33.163: INFO: The status of Pod pod2 is Running (Ready = true) �[1mSTEP�[0m: Trying to create a third pod(pod3) with hostport 54323, hostIP 172.18.0.7 but use UDP protocol on the node which pod2 resides Jan 9 15:25:33.177: INFO: The status of Pod pod3 is Pending, waiting for it to be Running (with Ready = true) Jan 9 15:25:35.184: INFO: The status of Pod pod3 is Running (Ready = true) Jan 9 15:25:35.201: INFO: The status of Pod e2e-host-exec is Pending, waiting for it to be Running (with Ready = true) Jan 9 15:25:37.208: INFO: The status of Pod e2e-host-exec is Running (Ready = true) �[1mSTEP�[0m: checking connectivity from pod e2e-host-exec to serverIP: 127.0.0.1, port: 54323 Jan 9 15:25:37.212: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g --connect-timeout 5 --interface 172.18.0.7 http://127.0.0.1:54323/hostname] Namespace:hostport-6499 PodName:e2e-host-exec ContainerName:e2e-host-exec Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jan 9 15:25:37.212: INFO: >>> kubeConfig: /tmp/kubeconfig Jan 9 15:25:37.213: INFO: ExecWithOptions: Clientset creation Jan 9 15:25:37.213: INFO: ExecWithOptions: execute(POST https://172.18.0.3:6443/api/v1/namespaces/hostport-6499/pods/e2e-host-exec/exec?command=%2Fbin%2Fsh&command=-c&command=curl+-g+--connect-timeout+5+--interface+172.18.0.7+http%3A%2F%2F127.0.0.1%3A54323%2Fhostname&container=e2e-host-exec&container=e2e-host-exec&stderr=true&stdout=true %!s(MISSING)) �[1mSTEP�[0m: checking connectivity from pod e2e-host-exec to serverIP: 172.18.0.7, port: 54323 Jan 9 15:25:37.341: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g --connect-timeout 5 http://172.18.0.7:54323/hostname] Namespace:hostport-6499 PodName:e2e-host-exec ContainerName:e2e-host-exec Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jan 9 15:25:37.341: INFO: >>> kubeConfig: /tmp/kubeconfig Jan 9 15:25:37.342: INFO: ExecWithOptions: Clientset creation Jan 9 15:25:37.342: INFO: ExecWithOptions: execute(POST https://172.18.0.3:6443/api/v1/namespaces/hostport-6499/pods/e2e-host-exec/exec?command=%2Fbin%2Fsh&command=-c&command=curl+-g+--connect-timeout+5+http%3A%2F%2F172.18.0.7%3A54323%2Fhostname&container=e2e-host-exec&container=e2e-host-exec&stderr=true&stdout=true %!s(MISSING)) �[1mSTEP�[0m: checking connectivity from pod e2e-host-exec to serverIP: 172.18.0.7, port: 54323 UDP Jan 9 15:25:37.494: INFO: ExecWithOptions {Command:[/bin/sh -c nc -vuz -w 5 172.18.0.7 54323] Namespace:hostport-6499 PodName:e2e-host-exec ContainerName:e2e-host-exec Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false} Jan 9 15:25:37.494: INFO: >>> kubeConfig: /tmp/kubeconfig Jan 9 15:25:37.495: INFO: ExecWithOptions: Clientset creation Jan 9 15:25:37.496: INFO: ExecWithOptions: execute(POST https://172.18.0.3:6443/api/v1/namespaces/hostport-6499/pods/e2e-host-exec/exec?command=%2Fbin%2Fsh&command=-c&command=nc+-vuz+-w+5+172.18.0.7+54323&container=e2e-host-exec&container=e2e-host-exec&stderr=true&stdout=true %!s(MISSING)) [AfterEach] [sig-network] HostPort /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 9 15:25:42.639: INFO: Waiting up to 3m0s for all (but 0) nodes to be 
ready �[1mSTEP�[0m: Destroying namespace "hostport-6499" for this suite. �[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-network] HostPort validates that there is no conflict between pods with same hostPort but different hostIP and protocol [LinuxOnly] [Conformance]","total":-1,"completed":43,"skipped":959,"failed":7,"failures":["[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-network] Services should be able to create a functioning NodePort service [Conformance]"]} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Jan 9 15:25:42.667: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename dns �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace �[1mSTEP�[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 �[1mSTEP�[0m: Creating a test headless service �[1mSTEP�[0m: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-2083 A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-2083;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-2083 A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-2083;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-2083.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-2083.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-2083.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-2083.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-2083.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-2083.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-2083.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-2083.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-2083.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-2083.svc;check="$$(dig +tcp +noall +answer 
+search _http._tcp.test-service-2.dns-2083.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-2083.svc;check="$$(dig +notcp +noall +answer +search 242.46.136.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.136.46.242_udp@PTR;check="$$(dig +tcp +noall +answer +search 242.46.136.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.136.46.242_tcp@PTR;sleep 1; done �[1mSTEP�[0m: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-2083 A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-2083;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-2083 A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-2083;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-2083.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-2083.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-2083.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-2083.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-2083.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-2083.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-2083.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-2083.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-2083.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-2083.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-2083.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-2083.svc;check="$$(dig +notcp +noall +answer +search 242.46.136.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.136.46.242_udp@PTR;check="$$(dig +tcp +noall +answer +search 242.46.136.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.136.46.242_tcp@PTR;sleep 1; done �[1mSTEP�[0m: creating a pod to probe DNS �[1mSTEP�[0m: submitting the pod to kubernetes �[1mSTEP�[0m: retrieving the pod �[1mSTEP�[0m: looking for the results for each expected name from probers Jan 9 15:25:44.852: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-2083/dns-test-2c4131f8-a011-49bc-b5fa-5f236e6a56e5: the server could not find the requested resource (get pods dns-test-2c4131f8-a011-49bc-b5fa-5f236e6a56e5) Jan 9 15:25:44.859: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-2083/dns-test-2c4131f8-a011-49bc-b5fa-5f236e6a56e5: the server could not find the requested resource (get pods dns-test-2c4131f8-a011-49bc-b5fa-5f236e6a56e5) Jan 9 15:25:44.868: INFO: Unable to read wheezy_udp@dns-test-service.dns-2083 from pod dns-2083/dns-test-2c4131f8-a011-49bc-b5fa-5f236e6a56e5: the server could not find the requested resource (get pods dns-test-2c4131f8-a011-49bc-b5fa-5f236e6a56e5) Jan 9 15:25:44.874: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2083 from pod dns-2083/dns-test-2c4131f8-a011-49bc-b5fa-5f236e6a56e5: the server could not find the requested resource (get pods dns-test-2c4131f8-a011-49bc-b5fa-5f236e6a56e5) Jan 9 15:25:44.884: INFO: Unable to read wheezy_udp@dns-test-service.dns-2083.svc from pod dns-2083/dns-test-2c4131f8-a011-49bc-b5fa-5f236e6a56e5: the server could not find the requested resource (get pods dns-test-2c4131f8-a011-49bc-b5fa-5f236e6a56e5) Jan 9 15:25:44.891: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2083.svc from pod dns-2083/dns-test-2c4131f8-a011-49bc-b5fa-5f236e6a56e5: the server could not find the requested resource (get pods dns-test-2c4131f8-a011-49bc-b5fa-5f236e6a56e5) Jan 9 15:25:44.944: INFO: Unable to read jessie_udp@dns-test-service from pod dns-2083/dns-test-2c4131f8-a011-49bc-b5fa-5f236e6a56e5: the server could not find the requested resource (get pods dns-test-2c4131f8-a011-49bc-b5fa-5f236e6a56e5) Jan 9 15:25:44.951: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-2083/dns-test-2c4131f8-a011-49bc-b5fa-5f236e6a56e5: the server could not find the requested resource (get pods dns-test-2c4131f8-a011-49bc-b5fa-5f236e6a56e5) Jan 9 15:25:44.960: INFO: Unable to read jessie_udp@dns-test-service.dns-2083 from pod dns-2083/dns-test-2c4131f8-a011-49bc-b5fa-5f236e6a56e5: the server could not find the requested resource (get pods dns-test-2c4131f8-a011-49bc-b5fa-5f236e6a56e5) Jan 9 15:25:44.968: INFO: Unable to read jessie_tcp@dns-test-service.dns-2083 from pod dns-2083/dns-test-2c4131f8-a011-49bc-b5fa-5f236e6a56e5: the server could not find the requested resource (get pods dns-test-2c4131f8-a011-49bc-b5fa-5f236e6a56e5) Jan 9 15:25:44.975: INFO: Unable to read jessie_udp@dns-test-service.dns-2083.svc from pod dns-2083/dns-test-2c4131f8-a011-49bc-b5fa-5f236e6a56e5: the server could not find the requested resource (get pods dns-test-2c4131f8-a011-49bc-b5fa-5f236e6a56e5) Jan 9 15:25:44.983: INFO: Unable to read jessie_tcp@dns-test-service.dns-2083.svc from pod dns-2083/dns-test-2c4131f8-a011-49bc-b5fa-5f236e6a56e5: the server could not find the requested resource (get pods dns-test-2c4131f8-a011-49bc-b5fa-5f236e6a56e5) Jan 9 15:25:45.028: INFO: Lookups using dns-2083/dns-test-2c4131f8-a011-49bc-b5fa-5f236e6a56e5 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-2083 wheezy_tcp@dns-test-service.dns-2083 wheezy_udp@dns-test-service.dns-2083.svc 
wheezy_tcp@dns-test-service.dns-2083.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-2083 jessie_tcp@dns-test-service.dns-2083 jessie_udp@dns-test-service.dns-2083.svc jessie_tcp@dns-test-service.dns-2083.svc] Jan 9 15:25:50.042: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-2083/dns-test-2c4131f8-a011-49bc-b5fa-5f236e6a56e5: the server could not find the requested resource (get pods dns-test-2c4131f8-a011-49bc-b5fa-5f236e6a56e5) Jan 9 15:25:50.051: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-2083/dns-test-2c4131f8-a011-49bc-b5fa-5f236e6a56e5: the server could not find the requested resource (get pods dns-test-2c4131f8-a011-49bc-b5fa-5f236e6a56e5) Jan 9 15:25:50.062: INFO: Unable to read wheezy_udp@dns-test-service.dns-2083 from pod dns-2083/dns-test-2c4131f8-a011-49bc-b5fa-5f236e6a56e5: the server could not find the requested resource (get pods dns-test-2c4131f8-a011-49bc-b5fa-5f236e6a56e5) Jan 9 15:25:50.074: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2083 from pod dns-2083/dns-test-2c4131f8-a011-49bc-b5fa-5f236e6a56e5: the server could not find the requested resource (get pods dns-test-2c4131f8-a011-49bc-b5fa-5f236e6a56e5) Jan 9 15:25:50.083: INFO: Unable to read wheezy_udp@dns-test-service.dns-2083.svc from pod dns-2083/dns-test-2c4131f8-a011-49bc-b5fa-5f236e6a56e5: the server could not find the requested resource (get pods dns-test-2c4131f8-a011-49bc-b5fa-5f236e6a56e5) Jan 9 15:25:50.092: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2083.svc from pod dns-2083/dns-test-2c4131f8-a011-49bc-b5fa-5f236e6a56e5: the server could not find the requested resource (get pods dns-test-2c4131f8-a011-49bc-b5fa-5f236e6a56e5) Jan 9 15:25:50.148: INFO: Unable to read jessie_udp@dns-test-service from pod dns-2083/dns-test-2c4131f8-a011-49bc-b5fa-5f236e6a56e5: the server could not find the requested resource (get pods dns-test-2c4131f8-a011-49bc-b5fa-5f236e6a56e5) Jan 9 15:25:50.154: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-2083/dns-test-2c4131f8-a011-49bc-b5fa-5f236e6a56e5: the server could not find the requested resource (get pods dns-test-2c4131f8-a011-49bc-b5fa-5f236e6a56e5) Jan 9 15:25:50.163: INFO: Unable to read jessie_udp@dns-test-service.dns-2083 from pod dns-2083/dns-test-2c4131f8-a011-49bc-b5fa-5f236e6a56e5: the server could not find the requested resource (get pods dns-test-2c4131f8-a011-49bc-b5fa-5f236e6a56e5) Jan 9 15:25:50.170: INFO: Unable to read jessie_tcp@dns-test-service.dns-2083 from pod dns-2083/dns-test-2c4131f8-a011-49bc-b5fa-5f236e6a56e5: the server could not find the requested resource (get pods dns-test-2c4131f8-a011-49bc-b5fa-5f236e6a56e5) Jan 9 15:25:50.179: INFO: Unable to read jessie_udp@dns-test-service.dns-2083.svc from pod dns-2083/dns-test-2c4131f8-a011-49bc-b5fa-5f236e6a56e5: the server could not find the requested resource (get pods dns-test-2c4131f8-a011-49bc-b5fa-5f236e6a56e5) Jan 9 15:25:50.188: INFO: Unable to read jessie_tcp@dns-test-service.dns-2083.svc from pod dns-2083/dns-test-2c4131f8-a011-49bc-b5fa-5f236e6a56e5: the server could not find the requested resource (get pods dns-test-2c4131f8-a011-49bc-b5fa-5f236e6a56e5) Jan 9 15:25:50.239: INFO: Lookups using dns-2083/dns-test-2c4131f8-a011-49bc-b5fa-5f236e6a56e5 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-2083 wheezy_tcp@dns-test-service.dns-2083 wheezy_udp@dns-test-service.dns-2083.svc wheezy_tcp@dns-test-service.dns-2083.svc 
jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-2083 jessie_tcp@dns-test-service.dns-2083 jessie_udp@dns-test-service.dns-2083.svc jessie_tcp@dns-test-service.dns-2083.svc] Jan 9 15:25:55.036: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-2083/dns-test-2c4131f8-a011-49bc-b5fa-5f236e6a56e5: the server could not find the requested resource (get pods dns-test-2c4131f8-a011-49bc-b5fa-5f236e6a56e5) Jan 9 15:25:55.048: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-2083/dns-test-2c4131f8-a011-49bc-b5fa-5f236e6a56e5: the server could not find the requested resource (get pods dns-test-2c4131f8-a011-49bc-b5fa-5f236e6a56e5) Jan 9 15:25:55.054: INFO: Unable to read wheezy_udp@dns-test-service.dns-2083 from pod dns-2083/dns-test-2c4131f8-a011-49bc-b5fa-5f236e6a56e5: the server could not find the requested resource (get pods dns-test-2c4131f8-a011-49bc-b5fa-5f236e6a56e5) Jan 9 15:25:55.061: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2083 from pod dns-2083/dns-test-2c4131f8-a011-49bc-b5fa-5f236e6a56e5: the server could not find the requested resource (get pods dns-test-2c4131f8-a011-49bc-b5fa-5f236e6a56e5) Jan 9 15:25:55.067: INFO: Unable to read wheezy_udp@dns-test-service.dns-2083.svc from pod dns-2083/dns-test-2c4131f8-a011-49bc-b5fa-5f236e6a56e5: the server could not find the requested resource (get pods dns-test-2c4131f8-a011-49bc-b5fa-5f236e6a56e5) Jan 9 15:25:55.074: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2083.svc from pod dns-2083/dns-test-2c4131f8-a011-49bc-b5fa-5f236e6a56e5: the server could not find the requested resource (get pods dns-test-2c4131f8-a011-49bc-b5fa-5f236e6a56e5) Jan 9 15:25:55.115: INFO: Unable to read jessie_udp@dns-test-service from pod dns-2083/dns-test-2c4131f8-a011-49bc-b5fa-5f236e6a56e5: the server could not find the requested resource (get pods dns-test-2c4131f8-a011-49bc-b5fa-5f236e6a56e5) Jan 9 15:25:55.121: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-2083/dns-test-2c4131f8-a011-49bc-b5fa-5f236e6a56e5: the server could not find the requested resource (get pods dns-test-2c4131f8-a011-49bc-b5fa-5f236e6a56e5) Jan 9 15:25:55.126: INFO: Unable to read jessie_udp@dns-test-service.dns-2083 from pod dns-2083/dns-test-2c4131f8-a011-49bc-b5fa-5f236e6a56e5: the server could not find the requested resource (get pods dns-test-2c4131f8-a011-49bc-b5fa-5f236e6a56e5) Jan 9 15:25:55.134: INFO: Unable to read jessie_tcp@dns-test-service.dns-2083 from pod dns-2083/dns-test-2c4131f8-a011-49bc-b5fa-5f236e6a56e5: the server could not find the requested resource (get pods dns-test-2c4131f8-a011-49bc-b5fa-5f236e6a56e5) Jan 9 15:25:55.141: INFO: Unable to read jessie_udp@dns-test-service.dns-2083.svc from pod dns-2083/dns-test-2c4131f8-a011-49bc-b5fa-5f236e6a56e5: the server could not find the requested resource (get pods dns-test-2c4131f8-a011-49bc-b5fa-5f236e6a56e5) Jan 9 15:25:55.148: INFO: Unable to read jessie_tcp@dns-test-service.dns-2083.svc from pod dns-2083/dns-test-2c4131f8-a011-49bc-b5fa-5f236e6a56e5: the server could not find the requested resource (get pods dns-test-2c4131f8-a011-49bc-b5fa-5f236e6a56e5) Jan 9 15:25:55.188: INFO: Lookups using dns-2083/dns-test-2c4131f8-a011-49bc-b5fa-5f236e6a56e5 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-2083 wheezy_tcp@dns-test-service.dns-2083 wheezy_udp@dns-test-service.dns-2083.svc wheezy_tcp@dns-test-service.dns-2083.svc jessie_udp@dns-test-service 
jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-2083 jessie_tcp@dns-test-service.dns-2083 jessie_udp@dns-test-service.dns-2083.svc jessie_tcp@dns-test-service.dns-2083.svc] Jan 9 15:26:00.039: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-2083/dns-test-2c4131f8-a011-49bc-b5fa-5f236e6a56e5: the server could not find the requested resource (get pods dns-test-2c4131f8-a011-49bc-b5fa-5f236e6a56e5) Jan 9 15:26:00.048: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-2083/dns-test-2c4131f8-a011-49bc-b5fa-5f236e6a56e5: the server could not find the requested resource (get pods dns-test-2c4131f8-a011-49bc-b5fa-5f236e6a56e5) Jan 9 15:26:00.059: INFO: Unable to read wheezy_udp@dns-test-service.dns-2083 from pod dns-2083/dns-test-2c4131f8-a011-49bc-b5fa-5f236e6a56e5: the server could not find the requested resource (get pods dns-test-2c4131f8-a011-49bc-b5fa-5f236e6a56e5) Jan 9 15:26:00.072: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2083 from pod dns-2083/dns-test-2c4131f8-a011-49bc-b5fa-5f236e6a56e5: the server could not find the requested resource (get pods dns-test-2c4131f8-a011-49bc-b5fa-5f236e6a56e5) Jan 9 15:26:00.080: INFO: Unable to read wheezy_udp@dns-test-service.dns-2083.svc from pod dns-2083/dns-test-2c4131f8-a011-49bc-b5fa-5f236e6a56e5: the server could not find the requested resource (get pods dns-test-2c4131f8-a011-49bc-b5fa-5f236e6a56e5) Jan 9 15:26:00.086: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2083.svc from pod dns-2083/dns-test-2c4131f8-a011-49bc-b5fa-5f236e6a56e5: the server could not find the requested resource (get pods dns-test-2c4131f8-a011-49bc-b5fa-5f236e6a56e5) Jan 9 15:26:00.142: INFO: Unable to read jessie_udp@dns-test-service from pod dns-2083/dns-test-2c4131f8-a011-49bc-b5fa-5f236e6a56e5: the server could not find the requested resource (get pods dns-test-2c4131f8-a011-49bc-b5fa-5f236e6a56e5) Jan 9 15:26:00.148: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-2083/dns-test-2c4131f8-a011-49bc-b5fa-5f236e6a56e5: the server could not find the requested resource (get pods dns-test-2c4131f8-a011-49bc-b5fa-5f236e6a56e5) Jan 9 15:26:00.156: INFO: Unable to read jessie_udp@dns-test-service.dns-2083 from pod dns-2083/dns-test-2c4131f8-a011-49bc-b5fa-5f236e6a56e5: the server could not find the requested resource (get pods dns-test-2c4131f8-a011-49bc-b5fa-5f236e6a56e5) Jan 9 15:26:00.165: INFO: Unable to read jessie_tcp@dns-test-service.dns-2083 from pod dns-2083/dns-test-2c4131f8-a011-49bc-b5fa-5f236e6a56e5: the server could not find the requested resource (get pods dns-test-2c4131f8-a011-49bc-b5fa-5f236e6a56e5) Jan 9 15:26:00.172: INFO: Unable to read jessie_udp@dns-test-service.dns-2083.svc from pod dns-2083/dns-test-2c4131f8-a011-49bc-b5fa-5f236e6a56e5: the server could not find the requested resource (get pods dns-test-2c4131f8-a011-49bc-b5fa-5f236e6a56e5) Jan 9 15:26:00.178: INFO: Unable to read jessie_tcp@dns-test-service.dns-2083.svc from pod dns-2083/dns-test-2c4131f8-a011-49bc-b5fa-5f236e6a56e5: the server could not find the requested resource (get pods dns-test-2c4131f8-a011-49bc-b5fa-5f236e6a56e5) Jan 9 15:26:00.234: INFO: Lookups using dns-2083/dns-test-2c4131f8-a011-49bc-b5fa-5f236e6a56e5 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-2083 wheezy_tcp@dns-test-service.dns-2083 wheezy_udp@dns-test-service.dns-2083.svc wheezy_tcp@dns-test-service.dns-2083.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service 
jessie_udp@dns-test-service.dns-2083 jessie_tcp@dns-test-service.dns-2083 jessie_udp@dns-test-service.dns-2083.svc jessie_tcp@dns-test-service.dns-2083.svc] Jan 9 15:26:05.040: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-2083/dns-test-2c4131f8-a011-49bc-b5fa-5f236e6a56e5: the server could not find the requested resource (get pods dns-test-2c4131f8-a011-49bc-b5fa-5f236e6a56e5) Jan 9 15:26:05.048: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-2083/dns-test-2c4131f8-a011-49bc-b5fa-5f236e6a56e5: the server could not find the requested resource (get pods dns-test-2c4131f8-a011-49bc-b5fa-5f236e6a56e5) Jan 9 15:26:05.056: INFO: Unable to read wheezy_udp@dns-test-service.dns-2083 from pod dns-2083/dns-test-2c4131f8-a011-49bc-b5fa-5f236e6a56e5: the server could not find the requested resource (get pods dns-test-2c4131f8-a011-49bc-b5fa-5f236e6a56e5) Jan 9 15:26:05.062: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2083 from pod dns-2083/dns-test-2c4131f8-a011-49bc-b5fa-5f236e6a56e5: the server could not find the requested resource (get pods dns-test-2c4131f8-a011-49bc-b5fa-5f236e6a56e5) Jan 9 15:26:05.069: INFO: Unable to read wheezy_udp@dns-test-service.dns-2083.svc from pod dns-2083/dns-test-2c4131f8-a011-49bc-b5fa-5f236e6a56e5: the server could not find the requested resource (get pods dns-test-2c4131f8-a011-49bc-b5fa-5f236e6a56e5) Jan 9 15:26:05.077: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2083.svc from pod dns-2083/dns-test-2c4131f8-a011-49bc-b5fa-5f236e6a56e5: the server could not find the requested resource (get pods dns-test-2c4131f8-a011-49bc-b5fa-5f236e6a56e5) Jan 9 15:26:05.128: INFO: Unable to read jessie_udp@dns-test-service from pod dns-2083/dns-test-2c4131f8-a011-49bc-b5fa-5f236e6a56e5: the server could not find the requested resource (get pods dns-test-2c4131f8-a011-49bc-b5fa-5f236e6a56e5) Jan 9 15:26:05.134: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-2083/dns-test-2c4131f8-a011-49bc-b5fa-5f236e6a56e5: the server could not find the requested resource (get pods dns-test-2c4131f8-a011-49bc-b5fa-5f236e6a56e5) Jan 9 15:26:05.141: INFO: Unable to read jessie_udp@dns-test-service.dns-2083 from pod dns-2083/dns-test-2c4131f8-a011-49bc-b5fa-5f236e6a56e5: the server could not find the requested resource (get pods dns-test-2c4131f8-a011-49bc-b5fa-5f236e6a56e5) Jan 9 15:26:05.148: INFO: Unable to read jessie_tcp@dns-test-service.dns-2083 from pod dns-2083/dns-test-2c4131f8-a011-49bc-b5fa-5f236e6a56e5: the server could not find the requested resource (get pods dns-test-2c4131f8-a011-49bc-b5fa-5f236e6a56e5) Jan 9 15:26:05.164: INFO: Unable to read jessie_udp@dns-test-service.dns-2083.svc from pod dns-2083/dns-test-2c4131f8-a011-49bc-b5fa-5f236e6a56e5: the server could not find the requested resource (get pods dns-test-2c4131f8-a011-49bc-b5fa-5f236e6a56e5) Jan 9 15:26:05.173: INFO: Unable to read jessie_tcp@dns-test-service.dns-2083.svc from pod dns-2083/dns-test-2c4131f8-a011-49bc-b5fa-5f236e6a56e5: the server could not find the requested resource (get pods dns-test-2c4131f8-a011-49bc-b5fa-5f236e6a56e5) Jan 9 15:26:05.211: INFO: Lookups using dns-2083/dns-test-2c4131f8-a011-49bc-b5fa-5f236e6a56e5 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-2083 wheezy_tcp@dns-test-service.dns-2083 wheezy_udp@dns-test-service.dns-2083.svc wheezy_tcp@dns-test-service.dns-2083.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-2083 
jessie_tcp@dns-test-service.dns-2083 jessie_udp@dns-test-service.dns-2083.svc jessie_tcp@dns-test-service.dns-2083.svc] Jan 9 15:26:10.042: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-2083/dns-test-2c4131f8-a011-49bc-b5fa-5f236e6a56e5: the server could not find the requested resource (get pods dns-test-2c4131f8-a011-49bc-b5fa-5f236e6a56e5) Jan 9 15:26:10.049: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-2083/dns-test-2c4131f8-a011-49bc-b5fa-5f236e6a56e5: the server could not find the requested resource (get pods dns-test-2c4131f8-a011-49bc-b5fa-5f236e6a56e5) Jan 9 15:26:10.055: INFO: Unable to read wheezy_udp@dns-test-service.dns-2083 from pod dns-2083/dns-test-2c4131f8-a011-49bc-b5fa-5f236e6a56e5: the server could not find the requested resource (get pods dns-test-2c4131f8-a011-49bc-b5fa-5f236e6a56e5) Jan 9 15:26:10.063: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2083 from pod dns-2083/dns-test-2c4131f8-a011-49bc-b5fa-5f236e6a56e5: the server could not find the requested resource (get pods dns-test-2c4131f8-a011-49bc-b5fa-5f236e6a56e5) Jan 9 15:26:10.071: INFO: Unable to read wheezy_udp@dns-test-service.dns-2083.svc from pod dns-2083/dns-test-2c4131f8-a011-49bc-b5fa-5f236e6a56e5: the server could not find the requested resource (get pods dns-test-2c4131f8-a011-49bc-b5fa-5f236e6a56e5) Jan 9 15:26:10.076: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2083.svc from pod dns-2083/dns-test-2c4131f8-a011-49bc-b5fa-5f236e6a56e5: the server could not find the requested resource (get pods dns-test-2c4131f8-a011-49bc-b5fa-5f236e6a56e5) Jan 9 15:26:10.145: INFO: Unable to read jessie_udp@dns-test-service from pod dns-2083/dns-test-2c4131f8-a011-49bc-b5fa-5f236e6a56e5: the server could not find the requested resource (get pods dns-test-2c4131f8-a011-49bc-b5fa-5f236e6a56e5) Jan 9 15:26:10.153: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-2083/dns-test-2c4131f8-a011-49bc-b5fa-5f236e6a56e5: the server could not find the requested resource (get pods dns-test-2c4131f8-a011-49bc-b5fa-5f236e6a56e5) Jan 9 15:26:10.160: INFO: Unable to read jessie_udp@dns-test-service.dns-2083 from pod dns-2083/dns-test-2c4131f8-a011-49bc-b5fa-5f236e6a56e5: the server could not find the requested resource (get pods dns-test-2c4131f8-a011-49bc-b5fa-5f236e6a56e5) Jan 9 15:26:10.167: INFO: Unable to read jessie_tcp@dns-test-service.dns-2083 from pod dns-2083/dns-test-2c4131f8-a011-49bc-b5fa-5f236e6a56e5: the server could not find the requested resource (get pods dns-test-2c4131f8-a011-49bc-b5fa-5f236e6a56e5) Jan 9 15:26:10.172: INFO: Unable to read jessie_udp@dns-test-service.dns-2083.svc from pod dns-2083/dns-test-2c4131f8-a011-49bc-b5fa-5f236e6a56e5: the server could not find the requested resource (get pods dns-test-2c4131f8-a011-49bc-b5fa-5f236e6a56e5) Jan 9 15:26:10.183: INFO: Unable to read jessie_tcp@dns-test-service.dns-2083.svc from pod dns-2083/dns-test-2c4131f8-a011-49bc-b5fa-5f236e6a56e5: the server could not find the requested resource (get pods dns-test-2c4131f8-a011-49bc-b5fa-5f236e6a56e5) Jan 9 15:26:10.227: INFO: Lookups using dns-2083/dns-test-2c4131f8-a011-49bc-b5fa-5f236e6a56e5 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-2083 wheezy_tcp@dns-test-service.dns-2083 wheezy_udp@dns-test-service.dns-2083.svc wheezy_tcp@dns-test-service.dns-2083.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-2083 jessie_tcp@dns-test-service.dns-2083 
jessie_udp@dns-test-service.dns-2083.svc jessie_tcp@dns-test-service.dns-2083.svc] Jan 9 15:26:15.109: INFO: Unable to read jessie_udp@dns-test-service from pod dns-2083/dns-test-2c4131f8-a011-49bc-b5fa-5f236e6a56e5: the server could not find the requested resource (get pods dns-test-2c4131f8-a011-49bc-b5fa-5f236e6a56e5) Jan 9 15:26:15.115: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-2083/dns-test-2c4131f8-a011-49bc-b5fa-5f236e6a56e5: the server could not find the requested resource (get pods dns-test-2c4131f8-a011-49bc-b5fa-5f236e6a56e5) Jan 9 15:26:15.121: INFO: Unable to read jessie_udp@dns-test-service.dns-2083 from pod dns-2083/dns-test-2c4131f8-a011-49bc-b5fa-5f236e6a56e5: the server could not find the requested resource (get pods dns-test-2c4131f8-a011-49bc-b5fa-5f236e6a56e5) Jan 9 15:26:15.127: INFO: Unable to read jessie_tcp@dns-test-service.dns-2083 from pod dns-2083/dns-test-2c4131f8-a011-49bc-b5fa-5f236e6a56e5: the server could not find the requested resource (get pods dns-test-2c4131f8-a011-49bc-b5fa-5f236e6a56e5) Jan 9 15:26:15.134: INFO: Unable to read jessie_udp@dns-test-service.dns-2083.svc from pod dns-2083/dns-test-2c4131f8-a011-49bc-b5fa-5f236e6a56e5: the server could not find the requested resource (get pods dns-test-2c4131f8-a011-49bc-b5fa-5f236e6a56e5) Jan 9 15:26:15.141: INFO: Unable to read jessie_tcp@dns-test-service.dns-2083.svc from pod dns-2083/dns-test-2c4131f8-a011-49bc-b5fa-5f236e6a56e5: the server could not find the requested resource (get pods dns-test-2c4131f8-a011-49bc-b5fa-5f236e6a56e5) Jan 9 15:26:15.183: INFO: Lookups using dns-2083/dns-test-2c4131f8-a011-49bc-b5fa-5f236e6a56e5 failed for: [jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-2083 jessie_tcp@dns-test-service.dns-2083 jessie_udp@dns-test-service.dns-2083.svc jessie_tcp@dns-test-service.dns-2083.svc] Jan 9 15:26:20.206: INFO: DNS probes using dns-2083/dns-test-2c4131f8-a011-49bc-b5fa-5f236e6a56e5 succeeded �[1mSTEP�[0m: deleting the pod �[1mSTEP�[0m: deleting the test service �[1mSTEP�[0m: deleting the test headless service [AfterEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 9 15:26:20.464: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "dns-2083" for this suite. 
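Note on the DNS probe above: the dig loops only write an OK marker under /results once a lookup returns an answer, so the repeated "Unable to read ... the server could not find the requested resource" messages mean the marker files were not there yet, and the run ends with "DNS probes ... succeeded" once they are. A minimal sketch of the same partially qualified lookups run by hand, assuming cluster DNS is healthy and that the headless service dns-test-service (or a service of your own) still exists in namespace dns-2083; the probe pod name and the jessie-dnsutils image tag are illustrative, not taken from this run:

  # throwaway pod with dig on board (image and tag are an assumption)
  kubectl run dns-probe -n dns-2083 --image=k8s.gcr.io/e2e-test-images/jessie-dnsutils:1.5 --restart=Never --command -- sleep 3600
  kubectl wait -n dns-2083 --for=condition=Ready pod/dns-probe --timeout=120s
  # +search makes dig honor the pod's resolv.conf search path, which is what lets
  # the partial names below resolve without the full cluster suffix
  kubectl exec -n dns-2083 dns-probe -- dig +notcp +noall +answer +search dns-test-service A
  kubectl exec -n dns-2083 dns-probe -- dig +tcp +noall +answer +search dns-test-service.dns-2083.svc A
  kubectl exec -n dns-2083 dns-probe -- dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-2083.svc SRV
  kubectl delete pod -n dns-2083 dns-probe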
�[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]","total":-1,"completed":44,"skipped":964,"failed":7,"failures":["[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-network] Services should be able to create a functioning NodePort service [Conformance]"]} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Jan 9 15:26:20.708: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename crd-publish-openapi �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace �[1mSTEP�[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [It] works for CRD preserving unknown fields at the schema root [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 Jan 9 15:26:20.817: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: client-side validation (kubectl create and apply) allows request with any unknown properties Jan 9 15:26:24.120: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=crd-publish-openapi-3051 --namespace=crd-publish-openapi-3051 create -f -' Jan 9 15:26:25.686: INFO: stderr: "" Jan 9 15:26:25.686: INFO: stdout: "e2e-test-crd-publish-openapi-2016-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n" Jan 9 15:26:25.686: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=crd-publish-openapi-3051 --namespace=crd-publish-openapi-3051 delete e2e-test-crd-publish-openapi-2016-crds test-cr' Jan 9 15:26:25.879: INFO: stderr: "" Jan 9 15:26:25.879: INFO: stdout: "e2e-test-crd-publish-openapi-2016-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n" Jan 9 15:26:25.879: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=crd-publish-openapi-3051 --namespace=crd-publish-openapi-3051 apply -f -' Jan 9 15:26:26.405: INFO: stderr: "" Jan 9 15:26:26.405: INFO: stdout: "e2e-test-crd-publish-openapi-2016-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n" Jan 9 15:26:26.406: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=crd-publish-openapi-3051 --namespace=crd-publish-openapi-3051 delete 
e2e-test-crd-publish-openapi-2016-crds test-cr' Jan 9 15:26:26.555: INFO: stderr: "" Jan 9 15:26:26.555: INFO: stdout: "e2e-test-crd-publish-openapi-2016-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n" �[1mSTEP�[0m: kubectl explain works to explain CR Jan 9 15:26:26.555: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=crd-publish-openapi-3051 explain e2e-test-crd-publish-openapi-2016-crds' Jan 9 15:26:26.956: INFO: stderr: "" Jan 9 15:26:26.956: INFO: stdout: "KIND: e2e-test-crd-publish-openapi-2016-crd\nVERSION: crd-publish-openapi-test-unknown-at-root.example.com/v1\n\nDESCRIPTION:\n <empty>\n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 9 15:26:30.026: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "crd-publish-openapi-3051" for this suite. �[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance]","total":-1,"completed":45,"skipped":991,"failed":7,"failures":["[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-network] Services should be able to create a functioning NodePort service [Conformance]"]} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-network] EndpointSliceMirroring /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Jan 9 15:26:30.229: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename endpointslicemirroring �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace �[1mSTEP�[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-network] EndpointSliceMirroring /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/endpointslicemirroring.go:39 [It] should mirror a custom Endpoints resource through create update and delete [Conformance] 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 �[1mSTEP�[0m: mirroring a new custom Endpoint Jan 9 15:26:30.322: INFO: Waiting for at least 1 EndpointSlice to exist, got 0 �[1mSTEP�[0m: mirroring an update to a custom Endpoint Jan 9 15:26:32.339: INFO: Expected EndpointSlice to have 10.2.3.4 as address, got 10.1.2.3 �[1mSTEP�[0m: mirroring deletion of a custom Endpoint Jan 9 15:26:34.371: INFO: Waiting for 0 EndpointSlices to exist, got 1 [AfterEach] [sig-network] EndpointSliceMirroring /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 9 15:26:36.378: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "endpointslicemirroring-5713" for this suite. �[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-network] EndpointSliceMirroring should mirror a custom Endpoints resource through create update and delete [Conformance]","total":-1,"completed":46,"skipped":1045,"failed":7,"failures":["[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-network] Services should be able to create a functioning NodePort service [Conformance]"]} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Jan 9 15:26:36.451: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename replication-controller �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace �[1mSTEP�[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:54 [It] should surface a failure condition on a common issue like exceeded quota [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 Jan 9 15:26:36.501: INFO: Creating quota "condition-test" that allows only two pods to run in the current namespace �[1mSTEP�[0m: Creating rc "condition-test" that asks for more than the allowed pod quota �[1mSTEP�[0m: Checking rc "condition-test" has the desired failure condition set �[1mSTEP�[0m: Scaling down rc "condition-test" to satisfy pod quota Jan 9 15:26:38.564: INFO: Updating replication controller "condition-test" �[1mSTEP�[0m: Checking rc "condition-test" has no failure condition set [AfterEach] [sig-apps] ReplicationController 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 9 15:26:39.579: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "replication-controller-4965" for this suite. �[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance]","total":-1,"completed":47,"skipped":1064,"failed":7,"failures":["[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-network] Services should be able to create a functioning NodePort service [Conformance]"]} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Jan 9 15:26:39.738: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename projected �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace �[1mSTEP�[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 �[1mSTEP�[0m: Creating configMap with name projected-configmap-test-volume-map-bc856cbc-f8ef-48de-a19f-16cd82283543 �[1mSTEP�[0m: Creating a pod to test consume configMaps Jan 9 15:26:39.804: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-c5120ee8-26da-47d4-95a7-aba80333f880" in namespace "projected-3271" to be "Succeeded or Failed" Jan 9 15:26:39.812: INFO: Pod "pod-projected-configmaps-c5120ee8-26da-47d4-95a7-aba80333f880": Phase="Pending", Reason="", readiness=false. Elapsed: 8.313469ms Jan 9 15:26:41.824: INFO: Pod "pod-projected-configmaps-c5120ee8-26da-47d4-95a7-aba80333f880": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019662321s Jan 9 15:26:43.830: INFO: Pod "pod-projected-configmaps-c5120ee8-26da-47d4-95a7-aba80333f880": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.026434517s �[1mSTEP�[0m: Saw pod success Jan 9 15:26:43.830: INFO: Pod "pod-projected-configmaps-c5120ee8-26da-47d4-95a7-aba80333f880" satisfied condition "Succeeded or Failed" Jan 9 15:26:43.835: INFO: Trying to get logs from node k8s-upgrade-and-conformance-viu2kk-md-0-c5nrs-77b68b5644-2bgsv pod pod-projected-configmaps-c5120ee8-26da-47d4-95a7-aba80333f880 container agnhost-container: <nil> �[1mSTEP�[0m: delete the pod Jan 9 15:26:43.885: INFO: Waiting for pod pod-projected-configmaps-c5120ee8-26da-47d4-95a7-aba80333f880 to disappear Jan 9 15:26:43.892: INFO: Pod pod-projected-configmaps-c5120ee8-26da-47d4-95a7-aba80333f880 no longer exists [AfterEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 9 15:26:43.892: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "projected-3271" for this suite. �[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":-1,"completed":48,"skipped":1111,"failed":7,"failures":["[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-network] Services should be able to create a functioning NodePort service [Conformance]"]} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Jan 9 15:26:43.952: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename webhook �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace �[1mSTEP�[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 �[1mSTEP�[0m: Setting up server cert �[1mSTEP�[0m: Create role binding to let webhook read extension-apiserver-authentication �[1mSTEP�[0m: Deploying the webhook pod �[1mSTEP�[0m: Wait for the deployment to be ready Jan 9 15:26:44.927: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set �[1mSTEP�[0m: Deploying the webhook service �[1mSTEP�[0m: Verifying the service has paired with the endpoint Jan 9 15:26:47.967: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny attaching pod [Conformance] 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 �[1mSTEP�[0m: Registering the webhook via the AdmissionRegistration API �[1mSTEP�[0m: create a pod �[1mSTEP�[0m: 'kubectl attach' the pod, should be denied by the webhook Jan 9 15:26:50.028: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=webhook-5019 attach --namespace=webhook-5019 to-be-attached-pod -i -c=container1' Jan 9 15:26:50.224: INFO: rc: 1 [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 9 15:26:50.234: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "webhook-5019" for this suite. �[1mSTEP�[0m: Destroying namespace "webhook-5019-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 �[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","total":-1,"completed":49,"skipped":1127,"failed":7,"failures":["[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-network] Services should be able to create a functioning NodePort service [Conformance]"]} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Jan 9 15:26:50.492: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename services �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace �[1mSTEP�[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:752 [It] should find a service from listing all namespaces [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 �[1mSTEP�[0m: fetching services [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 9 15:26:50.601: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace 
"services-890" for this suite. [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:756 �[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-network] Services should find a service from listing all namespaces [Conformance]","total":-1,"completed":50,"skipped":1151,"failed":7,"failures":["[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","[sig-network] Services should be able to create a functioning NodePort service [Conformance]"]} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Jan 9 15:26:50.758: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename kubectl �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace �[1mSTEP�[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:244 [BeforeEach] Kubectl logs /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1411 �[1mSTEP�[0m: creating an pod Jan 9 15:26:50.817: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-5379 run logs-generator --image=k8s.gcr.io/e2e-test-images/agnhost:2.39 --restart=Never --pod-running-timeout=2m0s -- logs-generator --log-lines-total 100 --run-duration 20s' Jan 9 15:26:51.043: INFO: stderr: "" Jan 9 15:26:51.043: INFO: stdout: "pod/logs-generator created\n" [It] should be able to retrieve and filter logs [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 �[1mSTEP�[0m: Waiting for log generator to start. Jan 9 15:26:51.043: INFO: Waiting up to 5m0s for 1 pods to be running and ready, or succeeded: [logs-generator] Jan 9 15:26:51.044: INFO: Waiting up to 5m0s for pod "logs-generator" in namespace "kubectl-5379" to be "running and ready, or succeeded" Jan 9 15:26:51.054: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 10.809914ms Jan 9 15:26:53.064: INFO: Pod "logs-generator": Phase="Running", Reason="", readiness=true. Elapsed: 2.020589745s Jan 9 15:26:53.064: INFO: Pod "logs-generator" satisfied condition "running and ready, or succeeded" Jan 9 15:26:53.065: INFO: Wanted all 1 pods to be running and ready, or succeeded. Result: true. 
Pods: [logs-generator]
STEP: checking for a matching strings
Jan 9 15:26:53.065: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-5379 logs logs-generator logs-generator'
Jan 9 15:26:53.299: INFO: stderr: ""
Jan 9 15:26:53.299: INFO: stdout: "I0109 15:26:52.086556 1 logs_generator.go:76] 0 GET /api/v1/namespaces/kube-system/pods/hh8b 563\nI0109 15:26:52.287116 1 logs_generator.go:76] 1 GET /api/v1/namespaces/ns/pods/nv8 381\nI0109 15:26:52.486837 1 logs_generator.go:76] 2 PUT /api/v1/namespaces/default/pods/67c 215\nI0109 15:26:52.686687 1 logs_generator.go:76] 3 POST /api/v1/namespaces/default/pods/drw 567\nI0109 15:26:52.887746 1 logs_generator.go:76] 4 GET /api/v1/namespaces/kube-system/pods/2q7 299\nI0109 15:26:53.087603 1 logs_generator.go:76] 5 GET /api/v1/namespaces/ns/pods/lpt2 453\nI0109 15:26:53.287115 1 logs_generator.go:76] 6 POST /api/v1/namespaces/ns/pods/x8r 414\n"
STEP: limiting log lines
Jan 9 15:26:53.300: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-5379 logs logs-generator logs-generator --tail=1'
Jan 9 15:26:53.480: INFO: stderr: ""
Jan 9 15:26:53.480: INFO: stdout: "I0109 15:26:53.287115 1 logs_generator.go:76] 6 POST /api/v1/namespaces/ns/pods/x8r 414\n"
Jan 9 15:26:53.481: INFO: got output "I0109 15:26
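The final excerpt exercises kubectl's log retrieval and filtering. A minimal sketch of the same flow outside the suite, reusing the agnhost logs-generator image and arguments shown above; the namespace and any flags not seen in this run (--since, --timestamps) are illustrative additions, not part of the e2e test:

  kubectl create namespace logs-demo
  kubectl run logs-generator -n logs-demo --image=k8s.gcr.io/e2e-test-images/agnhost:2.39 --restart=Never -- logs-generator --log-lines-total 100 --run-duration 20s
  kubectl wait -n logs-demo --for=condition=Ready pod/logs-generator --timeout=120s
  kubectl logs -n logs-demo logs-generator                 # full output so far
  kubectl logs -n logs-demo logs-generator --tail=1        # last line only, as in the --tail=1 step above
  kubectl logs -n logs-demo logs-generator --since=5s      # time-based filter
  kubectl logs -n logs-demo logs-generator --timestamps    # prefix each line with an RFC3339 timestamp
  kubectl delete namespace logs-demo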