Recent runs | View in Spyglass

Result   | FAILURE
Tests    | 1 failed / 0 succeeded
Started  |
Elapsed  | 2h1m
Revision | release-1.1
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=capi\-e2e\sWhen\supgrading\sa\sworkload\scluster\susing\sClusterClass\sand\stesting\sK8S\sconformance\s\[Conformance\]\s\[K8s\-Upgrade\]\sShould\screate\sand\supgrade\sa\sworkload\scluster\sand\srun\skubetest$'
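Unescaped, the --ginkgo.focus pattern above pins this run to a single spec; decoded by hand from the \s and \- escapes, the spec name reads:

  capi-e2e When upgrading a workload cluster using ClusterClass and testing K8S conformance [Conformance] [K8s-Upgrade] Should create and upgrade a workload cluster and run kubetest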
/home/prow/go/src/sigs.k8s.io/cluster-api/test/e2e/cluster_upgrade.go:115
Failed to run Kubernetes conformance
Unexpected error:
    <*errors.withStack | 0xc001889bd8>: {
        error: <*errors.withMessage | 0xc0016f6620>{
            cause: <*errors.errorString | 0xc0023b1610>{
                s: "error container run failed with exit code 137",
            },
            msg: "Unable to run conformance tests",
        },
        stack: [0x1a97f78, 0x1adc389, 0x7b9691, 0x7b9085, 0x7b875b, 0x7be4c9, 0x7bdeb2, 0x7def91, 0x7decb6, 0x7de305, 0x7e0745, 0x7ec929, 0x7ec73e, 0x1af7c92, 0x523bab, 0x46e1e1],
    }
Unable to run conformance tests: error container run failed with exit code 137
occurred
/home/prow/go/src/sigs.k8s.io/cluster-api/test/e2e/cluster_upgrade.go:232
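Exit code 137 is 128 + 9, i.e. the conformance container was killed with SIGKILL before ginkgo could report a result; in Docker-based jobs like this one that usually points at the OOM killer or node resource pressure rather than a failed spec. A minimal triage sketch, assuming a Docker host on which the exited kubetest container still exists (the <container-id> placeholder is hypothetical):

  # Find recently exited containers and their status
  docker ps -a --filter status=exited --format '{{.ID}} {{.Image}} {{.Status}}'
  # Check whether the kernel OOM killer terminated the suspect container
  docker inspect --format 'exit={{.State.ExitCode}} oomKilled={{.State.OOMKilled}}' <container-id>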
STEP: Creating a namespace for hosting the "k8s-upgrade-and-conformance" test spec
INFO: Creating namespace k8s-upgrade-and-conformance-8q69yv
INFO: Creating event watcher for namespace "k8s-upgrade-and-conformance-8q69yv"
STEP: Creating a workload cluster
INFO: Creating the workload cluster with name "k8s-upgrade-and-conformance-lxkik8" using the "upgrades-cgroupfs" template (Kubernetes v1.22.17, 1 control-plane machines, 2 worker machines)
INFO: Getting the cluster template yaml
INFO: clusterctl config cluster k8s-upgrade-and-conformance-lxkik8 --infrastructure (default) --kubernetes-version v1.22.17 --control-plane-machine-count 1 --worker-machine-count 2 --flavor upgrades-cgroupfs
INFO: Applying the cluster template yaml to the cluster
clusterclass.cluster.x-k8s.io/quick-start created
dockerclustertemplate.infrastructure.cluster.x-k8s.io/quick-start-cluster created
kubeadmcontrolplanetemplate.controlplane.cluster.x-k8s.io/quick-start-control-plane created
dockermachinetemplate.infrastructure.cluster.x-k8s.io/quick-start-control-plane created
dockermachinetemplate.infrastructure.cluster.x-k8s.io/quick-start-default-worker-machinetemplate created
kubeadmconfigtemplate.bootstrap.cluster.x-k8s.io/quick-start-default-worker-bootstraptemplate created
configmap/cni-k8s-upgrade-and-conformance-lxkik8-crs-0 created
clusterresourceset.addons.cluster.x-k8s.io/k8s-upgrade-and-conformance-lxkik8-crs-0 created
kubeadmconfig.bootstrap.cluster.x-k8s.io/k8s-upgrade-and-conformance-lxkik8-mp-0-config created
kubeadmconfig.bootstrap.cluster.x-k8s.io/k8s-upgrade-and-conformance-lxkik8-mp-0-config-cgroupfs created
cluster.cluster.x-k8s.io/k8s-upgrade-and-conformance-lxkik8 created
machinepool.cluster.x-k8s.io/k8s-upgrade-and-conformance-lxkik8-mp-0 created
dockermachinepool.infrastructure.cluster.x-k8s.io/k8s-upgrade-and-conformance-lxkik8-dmp-0 created
INFO: Waiting for the cluster infrastructure to be provisioned
STEP: Waiting for cluster to enter the provisioned phase
INFO: Waiting for control plane to be initialized
INFO: Waiting for the first control plane machine managed by k8s-upgrade-and-conformance-8q69yv/k8s-upgrade-and-conformance-lxkik8-cqx82 to be provisioned
STEP: Waiting for one control plane node to exist
INFO: Waiting for control plane to be ready
INFO: Waiting for control plane k8s-upgrade-and-conformance-8q69yv/k8s-upgrade-and-conformance-lxkik8-cqx82 to be ready (implies underlying nodes to be ready as well)
STEP: Waiting for the control plane to be ready
INFO: Waiting for the machine deployments to be provisioned
STEP: Waiting for the workload nodes to exist
INFO: Waiting for the machine pools to be provisioned
STEP: Waiting for the machine pool workload nodes
STEP: Upgrading the Cluster topology
INFO: Patching the new Kubernetes version to Cluster topology
INFO: Waiting for control-plane machines to have the upgraded Kubernetes version
STEP: Ensuring all control-plane machines have upgraded kubernetes version v1.23.15
INFO: Waiting for kube-proxy to have the upgraded Kubernetes version
STEP: Ensuring kube-proxy has the correct image
INFO: Waiting for CoreDNS to have the upgraded image tag
STEP: Ensuring CoreDNS has the correct image
INFO: Waiting for etcd to have the upgraded image tag
INFO: Waiting for Kubernetes versions of machines in MachineDeployment k8s-upgrade-and-conformance-8q69yv/k8s-upgrade-and-conformance-lxkik8-md-0-g974z to be upgraded to v1.23.15
INFO: Ensuring all MachineDeployment Machines have upgraded kubernetes version v1.23.15
STEP: Upgrading the machinepool instances
INFO: Patching the new Kubernetes version to Machine Pool k8s-upgrade-and-conformance-8q69yv/k8s-upgrade-and-conformance-lxkik8-mp-0
INFO: Waiting for Kubernetes versions of machines in MachinePool k8s-upgrade-and-conformance-8q69yv/k8s-upgrade-and-conformance-lxkik8-mp-0 to be upgraded from v1.22.17 to v1.23.15
INFO: Ensuring all MachinePool Instances have upgraded kubernetes version v1.23.15
STEP: Waiting until nodes are ready
STEP: Running conformance tests
STEP: Running e2e test: dir=/home/prow/go/src/sigs.k8s.io/cluster-api/test/e2e, command=["-nodes=4" "-slowSpecThreshold=120" "/usr/local/bin/e2e.test" "--" "--dump-logs-on-failure=false" "--report-prefix=kubetest." "--num-nodes=4" "--kubeconfig=/tmp/kubeconfig" "--provider=skeleton" "--report-dir=/output" "--e2e-output-dir=/output/e2e-output" "-disable-log-dump=true" "-ginkgo.flakeAttempts=3" "-ginkgo.focus=\\[Conformance\\]" "-ginkgo.progress=true" "-ginkgo.skip=\\[Serial\\]" "-ginkgo.slowSpecThreshold=120" "-ginkgo.trace=true" "-ginkgo.v=true"]
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1673967766 - Will randomize all specs
Will run 7052 specs
Running in parallel across 4 nodes
Jan 17 15:02:50.451: INFO: >>> kubeConfig: /tmp/kubeconfig
Jan 17 15:02:50.453: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Jan 17 15:02:50.470: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Jan 17 15:02:50.513: INFO: 20 / 20 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Jan 17 15:02:50.513: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
Jan 17 15:02:50.513: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Jan 17 15:02:50.521: INFO: 5 / 5 pods ready in namespace 'kube-system' in daemonset 'kindnet' (0 seconds elapsed)
Jan 17 15:02:50.521: INFO: 5 / 5 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Jan 17 15:02:50.521: INFO: e2e test version: v1.23.15
Jan 17 15:02:50.523: INFO: kube-apiserver version: v1.23.15
Jan 17 15:02:50.523: INFO: >>> kubeConfig: /tmp/kubeconfig
Jan 17 15:02:50.529: INFO: Cluster IP family: ipv4
Jan 17 15:02:50.551: INFO: >>> kubeConfig: /tmp/kubeconfig
Jan 17 15:02:50.571: INFO: Cluster IP family: ipv4
Jan 17 15:02:50.570: INFO: >>> kubeConfig: /tmp/kubeconfig
Jan 17 15:02:50.590: INFO: Cluster IP family: ipv4
------------------------------
Jan 17 15:02:50.606: INFO: >>> kubeConfig: /tmp/kubeconfig
Jan 17 15:02:50.634: INFO: Cluster IP family: ipv4
------------------------------
[BeforeEach] [sig-storage] Secrets
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 17 15:02:50.535: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename secrets
W0117 15:02:50.578661 18 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Jan 17 15:02:50.582: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating secret with name secret-test-c778ad3a-4500-4e14-b5f3-79f191edff5d
STEP: Creating a pod to test consume secrets
Jan 17 15:02:50.620: INFO: Waiting up to 5m0s for pod "pod-secrets-f166c693-8700-4e6d-ae30-73a65f0bcd3b" in namespace "secrets-3640" to be "Succeeded or Failed"
Jan 17 15:02:50.631: INFO: Pod "pod-secrets-f166c693-8700-4e6d-ae30-73a65f0bcd3b": Phase="Pending", Reason="", readiness=false. Elapsed: 11.259304ms
Jan 17 15:02:52.648: INFO: Pod "pod-secrets-f166c693-8700-4e6d-ae30-73a65f0bcd3b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027944189s
Jan 17 15:02:54.660: INFO: Pod "pod-secrets-f166c693-8700-4e6d-ae30-73a65f0bcd3b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.039554267s
Jan 17 15:02:56.664: INFO: Pod "pod-secrets-f166c693-8700-4e6d-ae30-73a65f0bcd3b": Phase="Running", Reason="", readiness=false. Elapsed: 6.044051882s
Jan 17 15:02:58.670: INFO: Pod "pod-secrets-f166c693-8700-4e6d-ae30-73a65f0bcd3b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.0496562s
STEP: Saw pod success
Jan 17 15:02:58.670: INFO: Pod "pod-secrets-f166c693-8700-4e6d-ae30-73a65f0bcd3b" satisfied condition "Succeeded or Failed"
Jan 17 15:02:58.673: INFO: Trying to get logs from node k8s-upgrade-and-conformance-lxkik8-worker-dueiad pod pod-secrets-f166c693-8700-4e6d-ae30-73a65f0bcd3b container secret-volume-test: <nil>
STEP: delete the pod
Jan 17 15:02:58.699: INFO: Waiting for pod pod-secrets-f166c693-8700-4e6d-ae30-73a65f0bcd3b to disappear
Jan 17 15:02:58.702: INFO: Pod pod-secrets-f166c693-8700-4e6d-ae30-73a65f0bcd3b no longer exists
[AfterEach] [sig-storage] Secrets
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 17 15:02:58.702: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-3640" for this suite.
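The conformance run above drives the upstream e2e.test binary through ginkgo's parallel runner: 4 workers, only specs tagged [Conformance], [Serial] specs skipped (they are not safe with -nodes > 1), and up to 3 attempts for flaky specs. A rough hand-run equivalent under the same flags (binary and kubeconfig paths taken from the log; a ginkgo v1 CLI on PATH is an assumption):

  ginkgo -nodes=4 -flakeAttempts=3 \
    -focus='\[Conformance\]' -skip='\[Serial\]' \
    /usr/local/bin/e2e.test -- \
    --kubeconfig=/tmp/kubeconfig --provider=skeleton --report-dir=/output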
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":1,"failed":0}
------------------------------
[BeforeEach] [sig-storage] Projected secret
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 17 15:02:50.577: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename projected
W0117 15:02:50.602101 14 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Jan 17 15:02:50.602: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating projection with secret that has name projected-secret-test-map-3bca29a3-947e-4e69-914d-dc8e68de09fe
STEP: Creating a pod to test consume secrets
Jan 17 15:02:50.648: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-d879ebfb-e180-4260-9ea7-6c9fbf141fc7" in namespace "projected-1873" to be "Succeeded or Failed"
Jan 17 15:02:50.667: INFO: Pod "pod-projected-secrets-d879ebfb-e180-4260-9ea7-6c9fbf141fc7": Phase="Pending", Reason="", readiness=false. Elapsed: 19.738557ms
Jan 17 15:02:52.674: INFO: Pod "pod-projected-secrets-d879ebfb-e180-4260-9ea7-6c9fbf141fc7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026179989s
Jan 17 15:02:54.748: INFO: Pod "pod-projected-secrets-d879ebfb-e180-4260-9ea7-6c9fbf141fc7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.100768569s
Jan 17 15:02:56.753: INFO: Pod "pod-projected-secrets-d879ebfb-e180-4260-9ea7-6c9fbf141fc7": Phase="Running", Reason="", readiness=false. Elapsed: 6.10571381s
Jan 17 15:02:58.759: INFO: Pod "pod-projected-secrets-d879ebfb-e180-4260-9ea7-6c9fbf141fc7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.110968139s
STEP: Saw pod success
Jan 17 15:02:58.759: INFO: Pod "pod-projected-secrets-d879ebfb-e180-4260-9ea7-6c9fbf141fc7" satisfied condition "Succeeded or Failed"
Jan 17 15:02:58.762: INFO: Trying to get logs from node k8s-upgrade-and-conformance-lxkik8-worker-krt1ur pod pod-projected-secrets-d879ebfb-e180-4260-9ea7-6c9fbf141fc7 container projected-secret-volume-test: <nil>
STEP: delete the pod
Jan 17 15:02:58.785: INFO: Waiting for pod pod-projected-secrets-d879ebfb-e180-4260-9ea7-6c9fbf141fc7 to disappear
Jan 17 15:02:58.788: INFO: Pod pod-projected-secrets-d879ebfb-e180-4260-9ea7-6c9fbf141fc7 no longer exists
[AfterEach] [sig-storage] Projected secret
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 17 15:02:58.788: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1873" for this suite.
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":2,"failed":0}
------------------------------
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 17 15:02:50.668: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename webhook
W0117 15:02:50.695136 17 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Jan 17 15:02:50.695: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Jan 17 15:02:51.756: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Jan 17 15:02:53.771: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.January, 17, 15, 2, 51, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 17, 15, 2, 51, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.January, 17, 15, 2, 51, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 17, 15, 2, 51, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6c69dbd86b\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 17 15:02:55.775: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.January, 17, 15, 2, 51, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 17, 15, 2, 51, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.January, 17, 15, 2, 51, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 17, 15, 2, 51, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6c69dbd86b\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jan 17 15:02:58.787: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should unconditionally reject operations on fail closed webhook [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Registering a webhook that server cannot talk to, with fail closed policy, via the AdmissionRegistration API
STEP: create a namespace for the webhook
STEP: create a configmap should be unconditionally rejected by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 17 15:02:58.882: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-8645" for this suite.
STEP: Destroying namespace "webhook-8645-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","total":-1,"completed":1,"skipped":6,"failed":0}
------------------------------
[BeforeEach] [sig-storage] EmptyDir volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 17 15:02:58.803: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating a pod to test emptydir 0666 on node default medium
Jan 17 15:02:58.837: INFO: Waiting up to 5m0s for pod "pod-13cdbf89-31ca-430b-9af5-d96d860c9f79" in namespace "emptydir-9516" to be "Succeeded or Failed"
Jan 17 15:02:58.842: INFO: Pod "pod-13cdbf89-31ca-430b-9af5-d96d860c9f79": Phase="Pending", Reason="", readiness=false. Elapsed: 4.120415ms
Jan 17 15:03:00.847: INFO: Pod "pod-13cdbf89-31ca-430b-9af5-d96d860c9f79": Phase="Running", Reason="", readiness=false. Elapsed: 2.009104208s
Jan 17 15:03:02.853: INFO: Pod "pod-13cdbf89-31ca-430b-9af5-d96d860c9f79": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.014762132s
STEP: Saw pod success
Jan 17 15:03:02.853: INFO: Pod "pod-13cdbf89-31ca-430b-9af5-d96d860c9f79" satisfied condition "Succeeded or Failed"
Jan 17 15:03:02.857: INFO: Trying to get logs from node k8s-upgrade-and-conformance-lxkik8-worker-dueiad pod pod-13cdbf89-31ca-430b-9af5-d96d860c9f79 container test-container: <nil>
STEP: delete the pod
Jan 17 15:03:02.879: INFO: Waiting for pod pod-13cdbf89-31ca-430b-9af5-d96d860c9f79 to disappear
Jan 17 15:03:02.883: INFO: Pod pod-13cdbf89-31ca-430b-9af5-d96d860c9f79 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 17 15:03:02.883: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-9516" for this suite.
------------------------------
[BeforeEach] [sig-storage] EmptyDir volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 17 15:02:59.039: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating a pod to test emptydir volume type on node default medium
Jan 17 15:02:59.084: INFO: Waiting up to 5m0s for pod "pod-ae493134-db8a-4abd-b978-8b59b9995da9" in namespace "emptydir-2408" to be "Succeeded or Failed"
Jan 17 15:02:59.088: INFO: Pod "pod-ae493134-db8a-4abd-b978-8b59b9995da9": Phase="Pending", Reason="", readiness=false. Elapsed: 3.871484ms
Jan 17 15:03:01.095: INFO: Pod "pod-ae493134-db8a-4abd-b978-8b59b9995da9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010302131s
Jan 17 15:03:03.100: INFO: Pod "pod-ae493134-db8a-4abd-b978-8b59b9995da9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.015196115s
STEP: Saw pod success
Jan 17 15:03:03.100: INFO: Pod "pod-ae493134-db8a-4abd-b978-8b59b9995da9" satisfied condition "Succeeded or Failed"
Jan 17 15:03:03.103: INFO: Trying to get logs from node k8s-upgrade-and-conformance-lxkik8-worker-krt1ur pod pod-ae493134-db8a-4abd-b978-8b59b9995da9 container test-container: <nil>
STEP: delete the pod
Jan 17 15:03:03.120: INFO: Waiting for pod pod-ae493134-db8a-4abd-b978-8b59b9995da9 to disappear
Jan 17 15:03:03.124: INFO: Pod pod-ae493134-db8a-4abd-b978-8b59b9995da9 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 17 15:03:03.125: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-2408" for this suite.
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":25,"failed":0}
------------------------------
[BeforeEach] [sig-instrumentation] Events API
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 17 15:03:03.172: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename events
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-instrumentation] Events API
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/instrumentation/events.go:81
[It] should ensure that an event can be fetched, patched, deleted, and listed [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: creating a test event
STEP: listing events in all namespaces
STEP: listing events in test namespace
STEP: listing events with field selection filtering on source
STEP: listing events with field selection filtering on reportingController
STEP: getting the test event
STEP: patching the test event
STEP: getting the test event
STEP: updating the test event
STEP: getting the test event
STEP: deleting the test event
STEP: listing events in all namespaces
STEP: listing events in test namespace
[AfterEach] [sig-instrumentation] Events API
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 17 15:03:03.312: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "events-4990" for this suite.
------------------------------
{"msg":"PASSED [sig-instrumentation] Events API should ensure that an event can be fetched, patched, deleted, and listed [Conformance]","total":-1,"completed":3,"skipped":42,"failed":0}
------------------------------
[BeforeEach] [sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 17 15:03:03.410: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:752
[It] should delete a collection of services [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: creating a collection of services
Jan 17 15:03:03.441: INFO: Creating e2e-svc-a-n7b27
Jan 17 15:03:03.453: INFO: Creating e2e-svc-b-ftdtb
Jan 17 15:03:03.471: INFO: Creating e2e-svc-c-q4lq7
STEP: deleting service collection
Jan 17 15:03:03.587: INFO: Collection of services has been deleted
[AfterEach] [sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 17 15:03:03.587: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-8883" for this suite.
[AfterEach] [sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:756
------------------------------
{"msg":"PASSED [sig-network] Services should delete a collection of services [Conformance]","total":-1,"completed":4,"skipped":82,"failed":0}
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":61,"failed":0}
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 17 15:03:02.897: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Jan 17 15:03:03.238: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jan 17 15:03:06.265: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] patching/updating a validating webhook should work [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating a validating webhook configuration
STEP: Creating a configMap that does not comply to the validation webhook rules
STEP: Updating a validating webhook configuration's rules to not include the create operation
STEP: Creating a configMap that does not comply to the validation webhook rules
STEP: Patching a validating webhook configuration's rules to include the create operation
STEP: Creating a configMap that does not comply to the validation webhook rules
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 17 15:03:07.384: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-4428" for this suite.
STEP: Destroying namespace "webhook-4428-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","total":-1,"completed":3,"skipped":61,"failed":0}
------------------------------
[BeforeEach] [sig-auth] ServiceAccounts
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 17 15:03:07.533: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should run through the lifecycle of a ServiceAccount [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: creating a ServiceAccount
STEP: watching for the ServiceAccount to be added
STEP: patching the ServiceAccount
STEP: finding ServiceAccount in list of all ServiceAccounts (by LabelSelector)
STEP: deleting the ServiceAccount
[AfterEach] [sig-auth] ServiceAccounts
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 17 15:03:07.651: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-5863" for this suite.
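The ServiceAccount lifecycle spec above exercises create/patch/list/delete through the API; the same sequence is reproducible with plain kubectl (the namespace and label below are hypothetical, not from the log):

  kubectl create serviceaccount e2e-sa -n demo
  kubectl patch serviceaccount e2e-sa -n demo -p '{"metadata":{"labels":{"purpose":"e2e"}}}'
  kubectl get serviceaccounts -n demo -l purpose=e2e
  kubectl delete serviceaccount e2e-sa -n demo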
------------------------------
{"msg":"PASSED [sig-auth] ServiceAccounts should run through the lifecycle of a ServiceAccount [Conformance]","total":-1,"completed":4,"skipped":93,"failed":0}
------------------------------
[BeforeEach] [sig-cli] Kubectl client
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 17 15:03:07.740: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:244
[It] should create services for rc [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: creating Agnhost RC
Jan 17 15:03:07.958: INFO: namespace kubectl-1818
Jan 17 15:03:07.959: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-1818 create -f -'
Jan 17 15:03:08.238: INFO: stderr: ""
Jan 17 15:03:08.238: INFO: stdout: "replicationcontroller/agnhost-primary created\n"
STEP: Waiting for Agnhost primary to start.
Jan 17 15:03:09.243: INFO: Selector matched 1 pods for map[app:agnhost]
Jan 17 15:03:09.243: INFO: Found 0 / 1
Jan 17 15:03:10.243: INFO: Selector matched 1 pods for map[app:agnhost]
Jan 17 15:03:10.243: INFO: Found 1 / 1
Jan 17 15:03:10.243: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
Jan 17 15:03:10.246: INFO: Selector matched 1 pods for map[app:agnhost]
Jan 17 15:03:10.246: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Jan 17 15:03:10.246: INFO: wait on agnhost-primary startup in kubectl-1818
Jan 17 15:03:10.246: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-1818 logs agnhost-primary-hrdr2 agnhost-primary'
Jan 17 15:03:10.328: INFO: stderr: ""
Jan 17 15:03:10.328: INFO: stdout: "Paused\n"
STEP: exposing RC
Jan 17 15:03:10.328: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-1818 expose rc agnhost-primary --name=rm2 --port=1234 --target-port=6379'
Jan 17 15:03:10.425: INFO: stderr: ""
Jan 17 15:03:10.425: INFO: stdout: "service/rm2 exposed\n"
Jan 17 15:03:10.434: INFO: Service rm2 in namespace kubectl-1818 found.
STEP: exposing service
Jan 17 15:03:12.442: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-1818 expose service rm2 --name=rm3 --port=2345 --target-port=6379'
Jan 17 15:03:12.566: INFO: stderr: ""
Jan 17 15:03:12.566: INFO: stdout: "service/rm3 exposed\n"
Jan 17 15:03:12.576: INFO: Service rm3 in namespace kubectl-1818 found.
[AfterEach] [sig-cli] Kubectl client
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 17 15:03:14.587: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-1818" for this suite.
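The Kubectl expose spec drives the same CLI a user would; the two expose invocations, copied from the log, are runnable as-is against that workload cluster:

  kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-1818 expose rc agnhost-primary --name=rm2 --port=1234 --target-port=6379
  kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-1818 expose service rm2 --name=rm3 --port=2345 --target-port=6379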
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl expose should create services for rc [Conformance]","total":-1,"completed":5,"skipped":122,"failed":0}
------------------------------
[BeforeEach] [sig-node] Pods Extended
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 17 15:03:14.637: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] Pods Set QOS Class
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:150
[It] should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying QOS class is set on the pod
[AfterEach] [sig-node] Pods Extended
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 17 15:03:14.675: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-1451" for this suite.
------------------------------
{"msg":"PASSED [sig-node] Pods Extended Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]","total":-1,"completed":6,"skipped":147,"failed":0}
------------------------------
[BeforeEach] [sig-node] Security Context
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 17 15:03:14.724: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-node] Security Context
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46
[It] should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
Jan 17 15:03:14.758: INFO: Waiting up to 5m0s for pod "alpine-nnp-false-0ba73910-16e3-455b-8bd7-c49f9d440de4" in namespace "security-context-test-1382" to be "Succeeded or Failed"
Jan 17 15:03:14.765: INFO: Pod "alpine-nnp-false-0ba73910-16e3-455b-8bd7-c49f9d440de4": Phase="Pending", Reason="", readiness=false. Elapsed: 6.435536ms
Jan 17 15:03:16.770: INFO: Pod "alpine-nnp-false-0ba73910-16e3-455b-8bd7-c49f9d440de4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011828404s
Jan 17 15:03:18.782: INFO: Pod "alpine-nnp-false-0ba73910-16e3-455b-8bd7-c49f9d440de4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.023993748s
Jan 17 15:03:18.782: INFO: Pod "alpine-nnp-false-0ba73910-16e3-455b-8bd7-c49f9d440de4" satisfied condition "Succeeded or Failed"
[AfterEach] [sig-node] Security Context
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 17 15:03:18.794: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-1382" for this suite.
------------------------------
{"msg":"PASSED [sig-node] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":7,"skipped":162,"failed":0}
------------------------------
[BeforeEach] [sig-storage] EmptyDir volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 17 15:03:18.829: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating a pod to test emptydir 0777 on node default medium
Jan 17 15:03:18.892: INFO: Waiting up to 5m0s for pod "pod-a829ab03-b5c0-4e0e-93fb-36c90b1fbabc" in namespace "emptydir-9298" to be "Succeeded or Failed"
Jan 17 15:03:18.898: INFO: Pod "pod-a829ab03-b5c0-4e0e-93fb-36c90b1fbabc": Phase="Pending", Reason="", readiness=false. Elapsed: 6.49583ms
Jan 17 15:03:20.906: INFO: Pod "pod-a829ab03-b5c0-4e0e-93fb-36c90b1fbabc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014012437s
Jan 17 15:03:22.911: INFO: Pod "pod-a829ab03-b5c0-4e0e-93fb-36c90b1fbabc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.019270903s
STEP: Saw pod success
Jan 17 15:03:22.911: INFO: Pod "pod-a829ab03-b5c0-4e0e-93fb-36c90b1fbabc" satisfied condition "Succeeded or Failed"
Jan 17 15:03:22.915: INFO: Trying to get logs from node k8s-upgrade-and-conformance-lxkik8-md-0-g974z-f76d7886c-xqvh2 pod pod-a829ab03-b5c0-4e0e-93fb-36c90b1fbabc container test-container: <nil>
STEP: delete the pod
Jan 17 15:03:22.939: INFO: Waiting for pod pod-a829ab03-b5c0-4e0e-93fb-36c90b1fbabc to disappear
Jan 17 15:03:22.942: INFO: Pod pod-a829ab03-b5c0-4e0e-93fb-36c90b1fbabc no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 17 15:03:22.942: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-9298" for this suite.
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":8,"skipped":165,"failed":0}
------------------------------
[BeforeEach] [sig-storage] Subpath
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 17 15:02:58.841: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] Atomic writer volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with projected pod [Excluded:WindowsDocker] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating pod pod-subpath-test-projected-cblf
STEP: Creating a pod to test atomic-volume-subpath
Jan 17 15:02:58.893: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-cblf" in namespace "subpath-7169" to be "Succeeded or Failed"
Jan 17 15:02:58.899: INFO: Pod "pod-subpath-test-projected-cblf": Phase="Pending", Reason="", readiness=false. Elapsed: 5.334268ms
Jan 17 15:03:00.904: INFO: Pod "pod-subpath-test-projected-cblf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010358543s
Jan 17 15:03:02.909: INFO: Pod "pod-subpath-test-projected-cblf": Phase="Pending", Reason="", readiness=false. Elapsed: 4.015493345s
Jan 17 15:03:04.916: INFO: Pod "pod-subpath-test-projected-cblf": Phase="Running", Reason="", readiness=true. Elapsed: 6.023156817s
Jan 17 15:03:06.921: INFO: Pod "pod-subpath-test-projected-cblf": Phase="Running", Reason="", readiness=true. Elapsed: 8.028141745s
Jan 17 15:03:08.930: INFO: Pod "pod-subpath-test-projected-cblf": Phase="Running", Reason="", readiness=true. Elapsed: 10.036484388s
Jan 17 15:03:10.935: INFO: Pod "pod-subpath-test-projected-cblf": Phase="Running", Reason="", readiness=true. Elapsed: 12.041339466s
Jan 17 15:03:12.939: INFO: Pod "pod-subpath-test-projected-cblf": Phase="Running", Reason="", readiness=true. Elapsed: 14.045528299s
Jan 17 15:03:14.944: INFO: Pod "pod-subpath-test-projected-cblf": Phase="Running", Reason="", readiness=true. Elapsed: 16.05107099s
Jan 17 15:03:16.949: INFO: Pod "pod-subpath-test-projected-cblf": Phase="Running", Reason="", readiness=true. Elapsed: 18.055472341s
Jan 17 15:03:18.957: INFO: Pod "pod-subpath-test-projected-cblf": Phase="Running", Reason="", readiness=true. Elapsed: 20.063866016s
Jan 17 15:03:20.965: INFO: Pod "pod-subpath-test-projected-cblf": Phase="Running", Reason="", readiness=true. Elapsed: 22.072103584s
Jan 17 15:03:22.969: INFO: Pod "pod-subpath-test-projected-cblf": Phase="Running", Reason="", readiness=true. Elapsed: 24.076259731s
Jan 17 15:03:24.974: INFO: Pod "pod-subpath-test-projected-cblf": Phase="Running", Reason="", readiness=false. Elapsed: 26.08111869s
Jan 17 15:03:26.980: INFO: Pod "pod-subpath-test-projected-cblf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 28.0870299s
STEP: Saw pod success
Jan 17 15:03:26.980: INFO: Pod "pod-subpath-test-projected-cblf" satisfied condition "Succeeded or Failed"
Jan 17 15:03:26.984: INFO: Trying to get logs from node k8s-upgrade-and-conformance-lxkik8-md-0-g974z-f76d7886c-88xc9 pod pod-subpath-test-projected-cblf container test-container-subpath-projected-cblf: <nil>
STEP: delete the pod
Jan 17 15:03:27.020: INFO: Waiting for pod pod-subpath-test-projected-cblf to disappear
Jan 17 15:03:27.024: INFO: Pod pod-subpath-test-projected-cblf no longer exists
STEP: Deleting pod pod-subpath-test-projected-cblf
Jan 17 15:03:27.024: INFO: Deleting pod "pod-subpath-test-projected-cblf" in namespace "subpath-7169"
[AfterEach] [sig-storage] Subpath
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 17 15:03:27.027: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-7169" for this suite.
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [Excluded:WindowsDocker] [Conformance]","total":-1,"completed":2,"skipped":21,"failed":0}
------------------------------
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 17 15:03:27.095: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Jan 17 15:03:27.634: INFO: new replicaset for deployment "sample-webhook-deployment" is yet to be created
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jan 17 15:03:30.664: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should mutate custom resource [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
Jan 17 15:03:30.672: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Registering the mutating webhook for custom resource e2e-test-webhook-4266-crds.webhook.example.com via the AdmissionRegistration API
STEP: Creating a custom resource that should be mutated by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 17 15:03:33.776: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-3384" for this suite.
STEP: Destroying namespace "webhook-3384-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","total":-1,"completed":3,"skipped":52,"failed":0}
------------------------------
[BeforeEach] [sig-apps] Deployment
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 17 15:03:33.929: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:89
[It] Deployment should have a working scale subresource [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
Jan 17 15:03:33.989: INFO: Creating simple deployment test-new-deployment
Jan 17 15:03:34.050: INFO: deployment "test-new-deployment" doesn't have the required revision set
Jan 17 15:03:36.067: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.January, 17, 15, 3, 34, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 17, 15, 3, 34, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.January, 17, 15, 3, 34, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 17, 15, 3, 34, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-new-deployment-5d9fdcc779\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 17 15:03:38.076: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.January, 17, 15, 3, 34, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 17, 15, 3, 34, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.January, 17, 15, 3, 34, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 17, 15, 3, 34, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-new-deployment-5d9fdcc779\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: getting scale subresource
STEP: updating a scale subresource
STEP: verifying the deployment Spec.Replicas was modified
STEP: Patch a scale subresource
[AfterEach] [sig-apps] Deployment
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:83
Jan 17 15:03:40.100: INFO: Deployment "test-new-deployment": &Deployment{ObjectMeta:{test-new-deployment deployment-3541 0b6649ca-b63d-4ea4-9f30-d9e365671bec 2793 3 2023-01-17 15:03:33 +0000 UTC <nil> <nil> map[name:httpd] map[deployment.kubernetes.io/revision:1] [] [] [{e2e.test Update apps/v1 <nil> FieldsV1 {"f:spec":{"f:replicas":{}}} scale} {e2e.test Update apps/v1 2023-01-17 15:03:33 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:revisionHistoryLimit":{},"f:selector":{},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2023-01-17 15:03:38 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:updatedReplicas":{}}} status}]},Spec:DeploymentSpec{Replicas:*4,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[name:httpd] map[] [] [] []} {[] [] [{httpd k8s.gcr.io/e2e-test-images/httpd:2.4.38-2 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc001292e18 <nil> ClusterFirst map[] <nil> false false false <nil> &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] <nil> nil [] <nil> <nil> <nil> map[] [] <nil> nil}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2023-01-17 15:03:38 +0000 UTC,LastTransitionTime:2023-01-17 15:03:38 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-new-deployment-5d9fdcc779" has successfully progressed.,LastUpdateTime:2023-01-17 15:03:38 +0000 UTC,LastTransitionTime:2023-01-17 15:03:34 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},}
Jan 17 15:03:40.110: INFO: New ReplicaSet "test-new-deployment-5d9fdcc779" of Deployment "test-new-deployment": &ReplicaSet{ObjectMeta:{test-new-deployment-5d9fdcc779 deployment-3541 f010e102-b538-4e8a-9c9a-4c86853f33a6 2798 2 2023-01-17 15:03:34 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:5d9fdcc779] map[deployment.kubernetes.io/desired-replicas:2 deployment.kubernetes.io/max-replicas:3 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-new-deployment 0b6649ca-b63d-4ea4-9f30-d9e365671bec 0xc001293220 0xc001293221}] [] [{kube-controller-manager Update apps/v1 2023-01-17 15:03:34 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0b6649ca-b63d-4ea4-9f30-d9e365671bec\"}":{}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2023-01-17 15:03:38 +0000 UTC FieldsV1 {"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}} status}]},Spec:ReplicaSetSpec{Replicas:*2,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 5d9fdcc779,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:5d9fdcc779] map[] [] [] []} {[] [] [{httpd k8s.gcr.io/e2e-test-images/httpd:2.4.38-2 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc0012932a8 <nil> ClusterFirst map[] <nil> false false false <nil> &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] <nil> nil [] <nil> <nil> <nil> map[] [] <nil> nil}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},}
Jan 17 15:03:40.114: INFO: Pod "test-new-deployment-5d9fdcc779-dwp48" is not available: &Pod{ObjectMeta:{test-new-deployment-5d9fdcc779-dwp48 test-new-deployment-5d9fdcc779- deployment-3541 54001eaa-b718-4058-b8b5-1272dd670b43 2796 0 2023-01-17 15:03:40 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:5d9fdcc779] map[] [{apps/v1 ReplicaSet test-new-deployment-5d9fdcc779 f010e102-b538-4e8a-9c9a-4c86853f33a6 0xc001293670 0xc001293671}] [] [{kube-controller-manager Update v1 2023-01-17 15:03:40 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f010e102-b538-4e8a-9c9a-4c86853f33a6\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} }]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-qp4r8,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-qp4r8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k8s-upgrade-and-conformance-lxkik8-md-0-g974z-f76d7886c-xqvh2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},Automoun
tServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-17 15:03:40 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:<nil>,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 17 15:03:40.114: INFO: Pod "test-new-deployment-5d9fdcc779-stflf" is available: &Pod{ObjectMeta:{test-new-deployment-5d9fdcc779-stflf test-new-deployment-5d9fdcc779- deployment-3541 816c7d29-3ce5-46a6-9901-9a01041e4c67 2769 0 2023-01-17 15:03:34 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:5d9fdcc779] map[] [{apps/v1 ReplicaSet test-new-deployment-5d9fdcc779 f010e102-b538-4e8a-9c9a-4c86853f33a6 0xc001293f90 0xc001293f91}] [] [{kube-controller-manager Update v1 2023-01-17 15:03:34 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f010e102-b538-4e8a-9c9a-4c86853f33a6\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2023-01-17 15:03:38 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"192.168.0.4\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-v5srm,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-v5srm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k8s-upgrade-and-conformance-lxkik8-md-0-g974z-f76d7886c-88xc9,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Running,C
onditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-17 15:03:34 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-17 15:03:38 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-17 15:03:38 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-17 15:03:34 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.4,PodIP:192.168.0.4,StartTime:2023-01-17 15:03:34 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2023-01-17 15:03:37 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,ImageID:k8s.gcr.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3,ContainerID:containerd://06d71417d014525445c8dadb760a2f8af78fa43d5361e56c6149f2c1da35c49d,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:192.168.0.4,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 17 15:03:40.114: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-3541" for this suite.
•
------------------------------
{"msg":"PASSED [sig-apps] Deployment Deployment should have a working scale subresource [Conformance]","total":-1,"completed":4,"skipped":72,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-node] PreStop
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 17 15:02:50.697: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename prestop
W0117 15:02:50.718233 23 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Jan 17 15:02:50.718: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-node] PreStop
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:157
[It] should call prestop when killing a pod [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating server pod server in namespace prestop-6960
STEP: Waiting for pods to come up.
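------------------------------
Stepping back to the Deployment spec that just passed above: the get/update/patch scale-subresource flow it exercised can be reproduced with client-go roughly as follows. This is a minimal sketch, not the e2e framework's own code; the kubeconfig path, namespace, and replica counts are illustrative assumptions. (The PreStop spec resumes below.)

// scale_demo.go — sketch of the Deployment scale-subresource round trip.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/kubeconfig") // path is an assumption
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	ctx := context.Background()
	deployments := client.AppsV1().Deployments("default") // namespace is an assumption

	// "getting scale subresource": read /scale instead of the full Deployment.
	scale, err := deployments.GetScale(ctx, "test-new-deployment", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}

	// "updating a scale subresource": bump spec.replicas and write it back.
	scale.Spec.Replicas = 2
	if _, err := deployments.UpdateScale(ctx, "test-new-deployment", scale, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}

	// "Patch a scale subresource": a merge patch against the same subresource.
	patch := []byte(`{"spec":{"replicas":4}}`)
	if _, err := deployments.Patch(ctx, "test-new-deployment", types.MergePatchType, patch, metav1.PatchOptions{}, "scale"); err != nil {
		panic(err)
	}
	fmt.Println("scale read, updated, and patched")
}
------------------------------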
STEP: Creating tester pod tester in namespace prestop-6960
STEP: Deleting pre-stop pod
Jan 17 15:03:03.779: INFO: Saw: { "Hostname": "server", "Sent": null, "Received": null, "Errors": null, "Log": [ "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up." ], "StillContactingPeers": true }
Jan 17 15:03:08.782: INFO: Saw: { "Hostname": "server", "Sent": null, "Received": null, "Errors": null, "Log": [ "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up." ], "StillContactingPeers": true }
Jan 17 15:03:13.775: INFO: Saw: { "Hostname": "server", "Sent": null, "Received": null, "Errors": null, "Log": [ "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up." ], "StillContactingPeers": true }
Jan 17 15:03:18.783: INFO: Saw: { "Hostname": "server", "Sent": null, "Received": null, "Errors": null, "Log": [ "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up." ], "StillContactingPeers": true }
Jan 17 15:03:23.775: INFO: Saw: { "Hostname": "server", "Sent": null, "Received": null, "Errors": null, "Log": [ "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up." ], "StillContactingPeers": true }
Jan 17 15:03:28.774: INFO: Saw: { "Hostname": "server", "Sent": null, "Received": null, "Errors": null, "Log": [ "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected.
Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up." ], "StillContactingPeers": true } Jan 17 15:03:33.775: INFO: Saw: { "Hostname": "server", "Sent": null, "Received": null, "Errors": null, "Log": [ "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up." ], "StillContactingPeers": true } Jan 17 15:03:38.776: INFO: Saw: { "Hostname": "server", "Sent": null, "Received": null, "Errors": null, "Log": [ "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up." ], "StillContactingPeers": true } Jan 17 15:03:43.775: INFO: Saw: { "Hostname": "server", "Sent": null, "Received": null, "Errors": null, "Log": [ "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. 
Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up." ], "StillContactingPeers": true } Jan 17 15:03:48.774: INFO: Saw: { "Hostname": "server", "Sent": null, "Received": null, "Errors": null, "Log": [ "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up." ], "StillContactingPeers": true } Jan 17 15:03:53.773: INFO: Saw: { "Hostname": "server", "Sent": null, "Received": null, "Errors": null, "Log": [ "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up." 
], "StillContactingPeers": true } Jan 17 15:03:58.774: INFO: Saw: { "Hostname": "server", "Sent": null, "Received": null, "Errors": null, "Log": [ "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up." ], "StillContactingPeers": true } Jan 17 15:03:58.778: INFO: Saw: { "Hostname": "server", "Sent": null, "Received": null, "Errors": null, "Log": [ "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up." ], "StillContactingPeers": true } Jan 17 15:03:58.779: FAIL: validating pre-stop. 
Unexpected error:
    <*errors.errorString | 0xc0002482c0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred

Full Stack Trace
k8s.io/kubernetes/test/e2e/node.testPreStop({0x7b06bd0, 0xc0041bb380}, {0xc003d77420, 0x0})
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:151 +0x1125
k8s.io/kubernetes/test/e2e/node.glob..func11.2()
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:167 +0x31
k8s.io/kubernetes/test/e2e.RunE2ETests(0x24c66d7)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:133 +0x697
k8s.io/kubernetes/test/e2e.TestE2E(0x243a8f9)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:136 +0x19
testing.tRunner(0xc000c66b60, 0x735e880)
	/usr/local/go/src/testing/testing.go:1259 +0x102
created by testing.(*T).Run
	/usr/local/go/src/testing/testing.go:1306 +0x35a
STEP: Deleting the server pod
[AfterEach] [sig-node] PreStop
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 17 15:03:58.789: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "prestop-6960" for this suite.
• Failure [68.103 seconds]
[sig-node] PreStop
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23
  should call prestop when killing a pod [Conformance] [It]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633

  Jan 17 15:03:58.779: validating pre-stop.
  Unexpected error:
      <*errors.errorString | 0xc0002482c0>: {
          s: "timed out waiting for the condition",
      }
      timed out waiting for the condition
  occurred

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:151
------------------------------
[BeforeEach] [sig-storage] Subpath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 17 15:03:40.190: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with configmap pod [Excluded:WindowsDocker] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating pod pod-subpath-test-configmap-8hgp
STEP: Creating a pod to test atomic-volume-subpath
Jan 17 15:03:40.234: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-8hgp" in namespace "subpath-4066" to be "Succeeded or Failed"
Jan 17 15:03:40.238: INFO: Pod "pod-subpath-test-configmap-8hgp": Phase="Pending", Reason="", readiness=false. Elapsed: 3.773244ms
Jan 17 15:03:42.243: INFO: Pod "pod-subpath-test-configmap-8hgp": Phase="Running", Reason="", readiness=true. Elapsed: 2.00875864s
Jan 17 15:03:44.249: INFO: Pod "pod-subpath-test-configmap-8hgp": Phase="Running", Reason="", readiness=true. Elapsed: 4.014807107s
Jan 17 15:03:46.255: INFO: Pod "pod-subpath-test-configmap-8hgp": Phase="Running", Reason="", readiness=true. Elapsed: 6.020436833s
Jan 17 15:03:48.259: INFO: Pod "pod-subpath-test-configmap-8hgp": Phase="Running", Reason="", readiness=true. Elapsed: 8.025134286s
Jan 17 15:03:50.265: INFO: Pod "pod-subpath-test-configmap-8hgp": Phase="Running", Reason="", readiness=true. Elapsed: 10.030856131s
Jan 17 15:03:52.269: INFO: Pod "pod-subpath-test-configmap-8hgp": Phase="Running", Reason="", readiness=true. Elapsed: 12.035305588s
Jan 17 15:03:54.275: INFO: Pod "pod-subpath-test-configmap-8hgp": Phase="Running", Reason="", readiness=true. Elapsed: 14.040889633s
Jan 17 15:03:56.281: INFO: Pod "pod-subpath-test-configmap-8hgp": Phase="Running", Reason="", readiness=true. Elapsed: 16.046351417s
Jan 17 15:03:58.284: INFO: Pod "pod-subpath-test-configmap-8hgp": Phase="Running", Reason="", readiness=true. Elapsed: 18.050258675s
Jan 17 15:04:00.288: INFO: Pod "pod-subpath-test-configmap-8hgp": Phase="Running", Reason="", readiness=true. Elapsed: 20.053999973s
Jan 17 15:04:02.294: INFO: Pod "pod-subpath-test-configmap-8hgp": Phase="Running", Reason="", readiness=false. Elapsed: 22.060177595s
Jan 17 15:04:04.301: INFO: Pod "pod-subpath-test-configmap-8hgp": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.066450468s
STEP: Saw pod success
Jan 17 15:04:04.301: INFO: Pod "pod-subpath-test-configmap-8hgp" satisfied condition "Succeeded or Failed"
Jan 17 15:04:04.305: INFO: Trying to get logs from node k8s-upgrade-and-conformance-lxkik8-md-0-g974z-f76d7886c-xqvh2 pod pod-subpath-test-configmap-8hgp container test-container-subpath-configmap-8hgp: <nil>
STEP: delete the pod
Jan 17 15:04:04.325: INFO: Waiting for pod pod-subpath-test-configmap-8hgp to disappear
Jan 17 15:04:04.329: INFO: Pod pod-subpath-test-configmap-8hgp no longer exists
STEP: Deleting pod pod-subpath-test-configmap-8hgp
Jan 17 15:04:04.329: INFO: Deleting pod "pod-subpath-test-configmap-8hgp" in namespace "subpath-4066"
[AfterEach] [sig-storage] Subpath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 17 15:04:04.333: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-4066" for this suite.
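------------------------------
The subpath spec that just passed builds a pod much like the sketch below: a ConfigMap volume whose single key is mounted into the container via subPath, after which the pod is polled until "Succeeded or Failed". A minimal sketch under assumed names (the ConfigMap "my-config" and its key "data-1" are illustrative, not values from this run):

// subpath_pod.go — sketch of mounting one ConfigMap key via subPath.
package demo

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func subpathPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-subpath-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "config",
				VolumeSource: corev1.VolumeSource{
					ConfigMap: &corev1.ConfigMapVolumeSource{
						LocalObjectReference: corev1.LocalObjectReference{Name: "my-config"},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:    "tester",
				Image:   "k8s.gcr.io/e2e-test-images/httpd:2.4.38-2",
				Command: []string{"sh", "-c", "cat /etc/demo/data-1"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "config",
					MountPath: "/etc/demo/data-1",
					// subPath mounts a single key as a file; unlike a whole-volume
					// mount, later ConfigMap updates are not propagated through it.
					SubPath: "data-1",
				}},
			}},
		},
	}
}
------------------------------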
•
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [Excluded:WindowsDocker] [Conformance]","total":-1,"completed":5,"skipped":110,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
{"msg":"FAILED [sig-node] PreStop should call prestop when killing a pod [Conformance]","total":-1,"completed":0,"skipped":30,"failed":1,"failures":["[sig-node] PreStop should call prestop when killing a pod [Conformance]"]}
[BeforeEach] [sig-node] PreStop
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 17 15:03:58.804: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename prestop
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-node] PreStop
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:157
[It] should call prestop when killing a pod [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating server pod server in namespace prestop-4538
STEP: Waiting for pods to come up.
STEP: Creating tester pod tester in namespace prestop-4538
STEP: Deleting pre-stop pod
Jan 17 15:04:07.939: INFO: Saw: { "Hostname": "server", "Sent": null, "Received": { "prestop": 1 }, "Errors": null, "Log": [ "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up." ], "StillContactingPeers": true }
STEP: Deleting the server pod
[AfterEach] [sig-node] PreStop
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 17 15:04:07.950: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "prestop-4538" for this suite.
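------------------------------
Both PreStop runs above drive the same mechanism: a "tester" pod carries a preStop exec hook that calls back to the "server" pod while the tester is being deleted, and the test polls the server until "prestop" appears under "Received". A minimal sketch of such a tester pod follows; the server URL and image tag are assumptions, and the Lifecycle field names are those of k8s.io/api in the v1.23 era (where Handler became LifecycleHandler):

// prestop_pod.go — sketch of a pod that reports to a peer from its preStop hook.
package demo

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func preStopPod() *corev1.Pod {
	grace := int64(30) // leave the hook time to finish before SIGKILL
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "tester"},
		Spec: corev1.PodSpec{
			TerminationGracePeriodSeconds: &grace,
			Containers: []corev1.Container{{
				Name:  "tester",
				Image: "k8s.gcr.io/e2e-test-images/agnhost:2.33", // image tag is an assumption
				Args:  []string{"pause"},
				Lifecycle: &corev1.Lifecycle{
					PreStop: &corev1.LifecycleHandler{
						Exec: &corev1.ExecAction{
							// On deletion, tell the server pod the hook ran; hypothetical URL.
							Command: []string{"wget", "-qO-", "http://server:8080/prestop"},
						},
					},
				},
			}},
		},
	}
}

Worth noting: the failed first run timed out with "Received": null throughout, while the retry saw "prestop": 1 within seconds, so in the failed attempt the hook's callback apparently never reached the server rather than the hook being skipped outright.
------------------------------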
•
------------------------------
{"msg":"PASSED [sig-node] PreStop should call prestop when killing a pod [Conformance]","total":-1,"completed":1,"skipped":30,"failed":1,"failures":["[sig-node] PreStop should call prestop when killing a pod [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-auth] ServiceAccounts
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 17 15:04:04.408: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] ServiceAccountIssuerDiscovery should support OIDC discovery of service account issuer [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
Jan 17 15:04:04.443: INFO: created pod
Jan 17 15:04:04.443: INFO: Waiting up to 5m0s for pod "oidc-discovery-validator" in namespace "svcaccounts-4697" to be "Succeeded or Failed"
Jan 17 15:04:04.447: INFO: Pod "oidc-discovery-validator": Phase="Pending", Reason="", readiness=false. Elapsed: 4.49927ms
Jan 17 15:04:06.455: INFO: Pod "oidc-discovery-validator": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01226056s
Jan 17 15:04:08.460: INFO: Pod "oidc-discovery-validator": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.016970168s
STEP: Saw pod success
Jan 17 15:04:08.460: INFO: Pod "oidc-discovery-validator" satisfied condition "Succeeded or Failed"
Jan 17 15:04:38.461: INFO: polling logs
Jan 17 15:04:38.473: INFO: Pod logs:
I0117 15:04:05.137941 1 log.go:195] OK: Got token
I0117 15:04:05.138057 1 log.go:195] validating with in-cluster discovery
I0117 15:04:05.138546 1 log.go:195] OK: got issuer https://kubernetes.default.svc.cluster.local
I0117 15:04:05.138584 1 log.go:195] Full, not-validated claims: openidmetadata.claims{Claims:jwt.Claims{Issuer:"https://kubernetes.default.svc.cluster.local", Subject:"system:serviceaccount:svcaccounts-4697:default", Audience:jwt.Audience{"oidc-discovery-test"}, Expiry:1673968444, NotBefore:1673967844, IssuedAt:1673967844, ID:""}, Kubernetes:openidmetadata.kubeClaims{Namespace:"svcaccounts-4697", ServiceAccount:openidmetadata.kubeName{Name:"default", UID:"28f619a5-52de-4161-b327-546530c7019d"}}}
I0117 15:04:05.167373 1 log.go:195] OK: Constructed OIDC provider for issuer https://kubernetes.default.svc.cluster.local
I0117 15:04:05.174740 1 log.go:195] OK: Validated signature on JWT
I0117 15:04:05.174855 1 log.go:195] OK: Got valid claims from token!
I0117 15:04:05.174882 1 log.go:195] Full, validated claims: &openidmetadata.claims{Claims:jwt.Claims{Issuer:"https://kubernetes.default.svc.cluster.local", Subject:"system:serviceaccount:svcaccounts-4697:default", Audience:jwt.Audience{"oidc-discovery-test"}, Expiry:1673968444, NotBefore:1673967844, IssuedAt:1673967844, ID:""}, Kubernetes:openidmetadata.kubeClaims{Namespace:"svcaccounts-4697", ServiceAccount:openidmetadata.kubeName{Name:"default", UID:"28f619a5-52de-4161-b327-546530c7019d"}}}
Jan 17 15:04:38.473: INFO: completed pod
[AfterEach] [sig-auth] ServiceAccounts
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 17 15:04:38.485: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-4697" for this suite.
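------------------------------
The discovery flow the validator pod logs above (obtain a token bound to audience "oidc-discovery-test", resolve the issuer's OIDC metadata, then validate the JWT against the issuer's keys) begins with the TokenRequest subresource. A minimal client-go sketch of that first step, reusing the namespace and audience from the log; the kubeconfig path is an assumption:

// token_demo.go — sketch of requesting an audience-bound service account token.
package main

import (
	"context"
	"fmt"

	authenticationv1 "k8s.io/api/authentication/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/kubeconfig") // path is an assumption
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	tr := &authenticationv1.TokenRequest{
		Spec: authenticationv1.TokenRequestSpec{
			Audiences: []string{"oidc-discovery-test"},
		},
	}
	out, err := client.CoreV1().ServiceAccounts("svcaccounts-4697").
		CreateToken(context.Background(), "default", tr, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}
	// A relying party would now fetch <issuer>/.well-known/openid-configuration
	// and check out.Status.Token against the published JWKS, as the pod log shows.
	fmt.Println("token expires:", out.Status.ExpirationTimestamp)
}
------------------------------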
•
------------------------------
{"msg":"PASSED [sig-auth] ServiceAccounts ServiceAccountIssuerDiscovery should support OIDC discovery of service account issuer [Conformance]","total":-1,"completed":6,"skipped":145,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-node] Probing container
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 17 15:03:22.976: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-node] Probing container
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:59
[It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating pod test-webserver-42c4615c-83e2-40d3-a44d-6d0f3d970e04 in namespace container-probe-7778
Jan 17 15:03:25.018: INFO: Started pod test-webserver-42c4615c-83e2-40d3-a44d-6d0f3d970e04 in namespace container-probe-7778
STEP: checking the pod's current state and verifying that restartCount is present
Jan 17 15:03:25.022: INFO: Initial restart count of pod test-webserver-42c4615c-83e2-40d3-a44d-6d0f3d970e04 is 0
STEP: deleting the pod
[AfterEach] [sig-node] Probing container
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 17 15:07:25.725: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-7778" for this suite.
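------------------------------
The probe spec above is a negative check: a webserver pod with an HTTP liveness probe is created, left alone for roughly four minutes, and the spec passes only if restartCount stays at 0. A sketch of the pod shape involved follows; the image, port, and thresholds are illustrative, and ProbeHandler is the v1.23-era rename of the probe's Handler field:

// probe_pod.go — sketch of a pod with an HTTP GET liveness probe expected to keep passing.
package demo

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func livenessPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "test-webserver"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "test-webserver",
				Image: "k8s.gcr.io/e2e-test-images/agnhost:2.33", // illustrative image
				Args:  []string{"test-webserver"},
				LivenessProbe: &corev1.Probe{
					ProbeHandler: corev1.ProbeHandler{
						HTTPGet: &corev1.HTTPGetAction{
							Path: "/healthz",         // per the spec name
							Port: intstr.FromInt(80), // illustrative port
						},
					},
					InitialDelaySeconds: 15,
					FailureThreshold:    3,
				},
			}},
		},
	}
}
------------------------------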
• [SLOW TEST:242.769 seconds]
[sig-node] Probing container
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
------------------------------
{"msg":"PASSED [sig-node] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":-1,"completed":9,"skipped":183,"failed":0}
SSSSSSSS
------------------------------
[BeforeEach] [sig-node] Container Lifecycle Hook
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 17 15:04:08.137: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/lifecycle_hook.go:53
STEP: create the container to handle the HTTPGet hook request.
Jan 17 15:04:08.190: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true)
Jan 17 15:04:10.196: INFO: The status of Pod pod-handle-http-request is Running (Ready = true)
[It] should execute poststart exec hook properly [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: create the pod with lifecycle hook
Jan 17 15:04:10.208: INFO: The status of Pod pod-with-poststart-exec-hook is Pending, waiting for it to be Running (with Ready = true)
Jan 17 15:04:12.213: INFO: The status of Pod pod-with-poststart-exec-hook is Pending, waiting for it to be Running (with Ready = true)
Jan 17 15:04:14.213: INFO: The status of Pod pod-with-poststart-exec-hook is Pending, waiting for it to be Running (with Ready = true)
Jan 17 15:04:16.213: INFO: The status of Pod pod-with-poststart-exec-hook is Pending, waiting for it to be Running (with Ready = true)
Jan 17 15:04:18.214: INFO: The status of Pod pod-with-poststart-exec-hook is Pending, waiting for it to be Running (with Ready = true)
Jan 17 15:04:20.213: INFO: The status of Pod pod-with-poststart-exec-hook is Pending, waiting for it to be Running (with Ready = true)
Jan 17 15:04:22.213: INFO: The status of Pod pod-with-poststart-exec-hook is Pending, waiting for it to be Running (with Ready = true)
Jan 17 15:04:24.213: INFO: The status of Pod pod-with-poststart-exec-hook is Pending, waiting for it to be Running (with Ready = true)
Jan 17 15:04:26.213: INFO: The status of Pod pod-with-poststart-exec-hook is Pending, waiting for it to be Running (with Ready = true)
Jan 17 15:04:28.213: INFO: The status of Pod pod-with-poststart-exec-hook is Pending, waiting for it to be Running (with Ready = true)
Jan 17 15:04:30.214: INFO: The status of Pod pod-with-poststart-exec-hook is Pending, waiting for it to be Running (with Ready = true)
Jan 17 15:04:32.214: INFO: The status of Pod pod-with-poststart-exec-hook is Pending, waiting for it
to be Running (with Ready = true) Jan 17 15:04:34.215: INFO: The status of Pod pod-with-poststart-exec-hook is Pending, waiting for it to be Running (with Ready = true) Jan 17 15:04:36.214: INFO: The status of Pod pod-with-poststart-exec-hook is Pending, waiting for it to be Running (with Ready = true) Jan 17 15:04:38.216: INFO: The status of Pod pod-with-poststart-exec-hook is Pending, waiting for it to be Running (with Ready = true) Jan 17 15:04:40.215: INFO: The status of Pod pod-with-poststart-exec-hook is Pending, waiting for it to be Running (with Ready = true) Jan 17 15:04:42.213: INFO: The status of Pod pod-with-poststart-exec-hook is Pending, waiting for it to be Running (with Ready = true) Jan 17 15:04:44.213: INFO: The status of Pod pod-with-poststart-exec-hook is Pending, waiting for it to be Running (with Ready = true) Jan 17 15:04:46.213: INFO: The status of Pod pod-with-poststart-exec-hook is Pending, waiting for it to be Running (with Ready = true) Jan 17 15:04:48.215: INFO: The status of Pod pod-with-poststart-exec-hook is Pending, waiting for it to be Running (with Ready = true) Jan 17 15:04:50.214: INFO: The status of Pod pod-with-poststart-exec-hook is Pending, waiting for it to be Running (with Ready = true) Jan 17 15:04:52.212: INFO: The status of Pod pod-with-poststart-exec-hook is Pending, waiting for it to be Running (with Ready = true) Jan 17 15:04:54.214: INFO: The status of Pod pod-with-poststart-exec-hook is Pending, waiting for it to be Running (with Ready = true) Jan 17 15:04:56.215: INFO: The status of Pod pod-with-poststart-exec-hook is Pending, waiting for it to be Running (with Ready = true) Jan 17 15:04:58.214: INFO: The status of Pod pod-with-poststart-exec-hook is Pending, waiting for it to be Running (with Ready = true) Jan 17 15:05:00.212: INFO: The status of Pod pod-with-poststart-exec-hook is Pending, waiting for it to be Running (with Ready = true) Jan 17 15:05:02.214: INFO: The status of Pod pod-with-poststart-exec-hook is Pending, waiting for it to be Running (with Ready = true) Jan 17 15:05:04.214: INFO: The status of Pod pod-with-poststart-exec-hook is Pending, waiting for it to be Running (with Ready = true) Jan 17 15:05:06.214: INFO: The status of Pod pod-with-poststart-exec-hook is Pending, waiting for it to be Running (with Ready = true) Jan 17 15:05:08.214: INFO: The status of Pod pod-with-poststart-exec-hook is Pending, waiting for it to be Running (with Ready = true) Jan 17 15:05:10.213: INFO: The status of Pod pod-with-poststart-exec-hook is Pending, waiting for it to be Running (with Ready = true) Jan 17 15:05:12.215: INFO: The status of Pod pod-with-poststart-exec-hook is Pending, waiting for it to be Running (with Ready = true) Jan 17 15:05:14.214: INFO: The status of Pod pod-with-poststart-exec-hook is Pending, waiting for it to be Running (with Ready = true) Jan 17 15:05:16.214: INFO: The status of Pod pod-with-poststart-exec-hook is Pending, waiting for it to be Running (with Ready = true) Jan 17 15:05:18.213: INFO: The status of Pod pod-with-poststart-exec-hook is Pending, waiting for it to be Running (with Ready = true) Jan 17 15:05:20.213: INFO: The status of Pod pod-with-poststart-exec-hook is Pending, waiting for it to be Running (with Ready = true) Jan 17 15:05:22.213: INFO: The status of Pod pod-with-poststart-exec-hook is Pending, waiting for it to be Running (with Ready = true) Jan 17 15:05:24.219: INFO: The status of Pod pod-with-poststart-exec-hook is Pending, waiting for it to be Running (with Ready = true) Jan 17 
15:05:26.213: INFO: The status of Pod pod-with-poststart-exec-hook is Pending, waiting for it to be Running (with Ready = true)
(previous status line repeated on a ~2s poll: Pending through 15:06:20, then Running (Ready = false) from 15:06:22 through 15:09:10; the pod never reported Ready = true)
Jan 17 15:09:10.226: INFO: The status of Pod pod-with-poststart-exec-hook is Running (Ready = false)
Jan 17 15:09:10.227: FAIL: Unexpected error:
    <*errors.errorString | 0xc0002482c0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred

Full Stack Trace
k8s.io/kubernetes/test/e2e/framework.(*PodClient).CreateSync(0xc0049a2bd0, 0x7fec47734f18)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:107 +0x94
k8s.io/kubernetes/test/e2e/common/node.glob..func12.1.2(0xc003e4c800)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/lifecycle_hook.go:72 +0x73
k8s.io/kubernetes/test/e2e/common/node.glob..func12.1.3()
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/lifecycle_hook.go:105 +0x32b
k8s.io/kubernetes/test/e2e.RunE2ETests(0x24c66d7)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:133 +0x697
k8s.io/kubernetes/test/e2e.TestE2E(0x243a8f9)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:136 +0x19
testing.tRunner(0xc000c66b60, 0x735e880)
	/usr/local/go/src/testing/testing.go:1259 +0x102
created by testing.(*T).Run
	/usr/local/go/src/testing/testing.go:1306 +0x35a
[AfterEach] [sig-node] Container Lifecycle Hook
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 17 15:09:10.228: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-1890" for this suite.
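The frame at the top of the stack trace, (*PodClient).CreateSync at pods.go:107, is the helper that created the pod and then polled its status until it was Running with Ready = true; the "timed out waiting for the condition" error is that poll giving up. A minimal client-go sketch of such a wait loop — the function name, the 2s interval, and the timeout handling are illustrative assumptions, not the framework's actual code:

package podwait

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// pollPodReady waits until the named pod is Running with Ready == true,
// mirroring the "waiting for it to be Running (with Ready = true)" lines
// above. On timeout it returns the familiar wait-package error,
// "timed out waiting for the condition".
func pollPodReady(cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
	return wait.PollImmediate(2*time.Second, timeout, func() (bool, error) {
		pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		if pod.Status.Phase != corev1.PodRunning {
			fmt.Printf("The status of Pod %s is %s, waiting for it to be Running (with Ready = true)\n", name, pod.Status.Phase)
			return false, nil
		}
		// Running is not enough: the Ready condition must also be true.
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil
	})
}

The 2-second cadence of the status lines above matches a poll interval like this one.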
• Failure [302.112 seconds]
[sig-node] Container Lifecycle Hook
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  when create a pod with lifecycle hook
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/lifecycle_hook.go:44
    should execute poststart exec hook properly [NodeConformance] [Conformance] [It]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633

    Jan 17 15:09:10.227: Unexpected error:
        <*errors.errorString | 0xc0002482c0>: {
            s: "timed out waiting for the condition",
        }
        timed out waiting for the condition
    occurred

    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:107
------------------------------
{"msg":"FAILED [sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]","total":-1,"completed":1,"skipped":137,"failed":2,"failures":["[sig-node] PreStop should call prestop when killing a pod [Conformance]","[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]"]}
[BeforeEach] [sig-node] Container Lifecycle Hook
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 17 15:09:10.256: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/lifecycle_hook.go:53
STEP: create the container to handle the HTTPGet hook request.
Jan 17 15:09:10.329: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true)
Jan 17 15:09:12.338: INFO: The status of Pod pod-handle-http-request is Running (Ready = true)
[It] should execute poststart exec hook properly [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: create the pod with lifecycle hook
Jan 17 15:09:12.367: INFO: The status of Pod pod-with-poststart-exec-hook is Pending, waiting for it to be Running (with Ready = true)
Jan 17 15:09:14.376: INFO: The status of Pod pod-with-poststart-exec-hook is Pending, waiting for it to be Running (with Ready = true)
Jan 17 15:09:16.374: INFO: The status of Pod pod-with-poststart-exec-hook is Running (Ready = true)
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Jan 17 15:09:16.422: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 17 15:09:16.428: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 17 15:09:18.429: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 17 15:09:18.436: INFO: Pod pod-with-poststart-exec-hook no longer exists
[AfterEach] [sig-node] Container Lifecycle Hook
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 17 15:09:18.436: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-8615" for this suite.
•
------------------------------
{"msg":"PASSED [sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":137,"failed":2,"failures":["[sig-node] PreStop should call prestop when killing a pod [Conformance]","[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 17 15:03:03.665: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:244
[BeforeEach] Update Demo
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:296
[It] should create and stop a replication controller [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: creating a replication controller
Jan 17 15:03:03.706: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-5872 create -f -'
Jan 17 15:03:05.086: INFO: stderr: ""
Jan 17 15:03:05.086: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Jan 17 15:03:05.086: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-5872 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo'
Jan 17 15:03:05.188: INFO: stderr: ""
Jan 17 15:03:05.188: INFO: stdout: "update-demo-nautilus-md5jf update-demo-nautilus-tkvr7 "
Jan 17 15:03:05.189: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-5872 get pods update-demo-nautilus-md5jf -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}'
Jan 17 15:03:05.274: INFO: stderr: ""
Jan 17 15:03:05.274: INFO: stdout: ""
Jan 17 15:03:05.274: INFO: update-demo-nautilus-md5jf is created but not running
Jan 17 15:03:10.275: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-5872 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo'
Jan 17 15:03:10.357: INFO: stderr: ""
Jan 17 15:03:10.357: INFO: stdout: "update-demo-nautilus-md5jf update-demo-nautilus-tkvr7 "
Jan 17 15:03:10.357: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-5872 get pods update-demo-nautilus-md5jf -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}'
Jan 17 15:03:10.450: INFO: stderr: ""
Jan 17 15:03:10.450: INFO: stdout: "true"
Jan 17 15:03:10.450: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-5872 get pods update-demo-nautilus-md5jf -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}'
Jan 17 15:03:10.566: INFO: stderr: ""
Jan 17 15:03:10.567: INFO: stdout: "k8s.gcr.io/e2e-test-images/nautilus:1.5"
Jan 17 15:03:10.567: INFO: validating pod update-demo-nautilus-md5jf
Jan 17 15:06:44.439: INFO: update-demo-nautilus-md5jf is running right image but validator function failed: the server is currently unable to handle the request (get pods update-demo-nautilus-md5jf)
Jan 17 15:06:49.441: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-5872 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo'
Jan 17 15:06:49.528: INFO: stderr: ""
Jan 17 15:06:49.528: INFO: stdout: "update-demo-nautilus-md5jf update-demo-nautilus-tkvr7 "
Jan 17 15:06:49.528: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-5872 get pods update-demo-nautilus-md5jf -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}'
Jan 17 15:06:49.610: INFO: stderr: ""
Jan 17 15:06:49.610: INFO: stdout: "true"
Jan 17 15:06:49.610: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-5872 get pods update-demo-nautilus-md5jf -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}'
Jan 17 15:06:49.694: INFO: stderr: ""
Jan 17 15:06:49.694: INFO: stdout: "k8s.gcr.io/e2e-test-images/nautilus:1.5"
Jan 17 15:06:49.694: INFO: validating pod update-demo-nautilus-md5jf
Jan 17 15:10:23.572: INFO: update-demo-nautilus-md5jf is running right image but validator function failed: the server is currently unable to handle the request (get pods update-demo-nautilus-md5jf)
Jan 17 15:10:28.574: FAIL: Timed out after 300 seconds waiting for name=update-demo pods to reach valid state

Full Stack Trace
k8s.io/kubernetes/test/e2e/kubectl.glob..func1.6.2()
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:314 +0x225
k8s.io/kubernetes/test/e2e.RunE2ETests(0x24c66d7)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:133 +0x697
k8s.io/kubernetes/test/e2e.TestE2E(0x0)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:136 +0x19
testing.tRunner(0xc00070eea0, 0x735e880)
	/usr/local/go/src/testing/testing.go:1259 +0x102
created by testing.(*T).Run
	/usr/local/go/src/testing/testing.go:1306 +0x35a
STEP: using delete to clean up resources
Jan 17 15:10:28.575: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-5872 delete --grace-period=0 --force -f -'
Jan 17 15:10:28.786: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan 17 15:10:28.786: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
Jan 17 15:10:28.787: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-5872 get rc,svc -l name=update-demo --no-headers'
Jan 17 15:10:29.066: INFO: stderr: "No resources found in kubectl-5872 namespace.\n"
Jan 17 15:10:29.066: INFO: stdout: ""
Jan 17 15:10:29.066: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-5872 get pods -l name=update-demo -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Jan 17 15:10:29.278: INFO: stderr: ""
Jan 17 15:10:29.278: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 17 15:10:29.279: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-5872" for this suite.
• Failure [445.649 seconds]
[sig-cli] Kubectl client
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Update Demo
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:294
    should create and stop a replication controller [Conformance] [It]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633

    Jan 17 15:10:28.574: Timed out after 300 seconds waiting for name=update-demo pods to reach valid state

    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:314
------------------------------
{"msg":"FAILED [sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]","total":-1,"completed":4,"skipped":101,"failed":1,"failures":["[sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]"]}
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 17 15:10:29.320: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:244
[BeforeEach] Update Demo
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:296
[It] should create and stop a replication controller [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: creating a replication controller
Jan 17 15:10:29.395: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-5532 create -f -'
Jan 17 15:10:32.070: INFO: stderr: ""
Jan 17 15:10:32.071: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Jan 17 15:10:32.071: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-5532 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo'
Jan 17 15:10:32.327: INFO: stderr: ""
Jan 17 15:10:32.328: INFO: stdout: "update-demo-nautilus-24dp9 update-demo-nautilus-ljmnv "
Jan 17 15:10:32.328: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-5532 get pods update-demo-nautilus-24dp9 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}'
Jan 17 15:10:32.573: INFO: stderr: ""
Jan 17 15:10:32.573: INFO: stdout: ""
Jan 17 15:10:32.573: INFO: update-demo-nautilus-24dp9 is created but not running
Jan 17 15:10:37.573: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-5532 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo'
Jan 17 15:10:37.730: INFO: stderr: ""
Jan 17 15:10:37.730: INFO: stdout: "update-demo-nautilus-24dp9 update-demo-nautilus-ljmnv "
Jan 17 15:10:37.730: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-5532 get pods update-demo-nautilus-24dp9 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}'
Jan 17 15:10:37.879: INFO: stderr: ""
Jan 17 15:10:37.879: INFO: stdout: "true"
Jan 17 15:10:37.879: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-5532 get pods update-demo-nautilus-24dp9 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}'
Jan 17 15:10:38.054: INFO: stderr: ""
Jan 17 15:10:38.054: INFO: stdout: "k8s.gcr.io/e2e-test-images/nautilus:1.5"
Jan 17 15:10:38.054: INFO: validating pod update-demo-nautilus-24dp9
Jan 17 15:10:38.068: INFO: got data: { "image": "nautilus.jpg" }
Jan 17 15:10:38.068: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jan 17 15:10:38.068: INFO: update-demo-nautilus-24dp9 is verified up and running
Jan 17 15:10:38.068: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-5532 get pods update-demo-nautilus-ljmnv -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}'
Jan 17 15:10:38.292: INFO: stderr: ""
Jan 17 15:10:38.293: INFO: stdout: ""
Jan 17 15:10:38.293: INFO: update-demo-nautilus-ljmnv is created but not running
Jan 17 15:10:43.293: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-5532 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo'
Jan 17 15:10:43.477: INFO: stderr: ""
Jan 17 15:10:43.477: INFO: stdout: "update-demo-nautilus-24dp9 update-demo-nautilus-ljmnv "
Jan 17 15:10:43.477: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-5532 get pods update-demo-nautilus-24dp9 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}'
Jan 17 15:10:43.644: INFO: stderr: ""
Jan 17 15:10:43.644: INFO: stdout: "true"
Jan 17 15:10:43.644: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-5532 get pods update-demo-nautilus-24dp9 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}'
Jan 17 15:10:43.800: INFO: stderr: ""
Jan 17 15:10:43.800: INFO: stdout: "k8s.gcr.io/e2e-test-images/nautilus:1.5"
Jan 17 15:10:43.800: INFO: validating pod update-demo-nautilus-24dp9
Jan 17 15:10:43.807: INFO: got data: { "image": "nautilus.jpg" }
Jan 17 15:10:43.807: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jan 17 15:10:43.807: INFO: update-demo-nautilus-24dp9 is verified up and running
Jan 17 15:10:43.807: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-5532 get pods update-demo-nautilus-ljmnv -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}'
Jan 17 15:10:43.962: INFO: stderr: ""
Jan 17 15:10:43.962: INFO: stdout: "true"
Jan 17 15:10:43.962: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-5532 get pods update-demo-nautilus-ljmnv -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}'
Jan 17 15:10:44.145: INFO: stderr: ""
Jan 17 15:10:44.145: INFO: stdout: "k8s.gcr.io/e2e-test-images/nautilus:1.5"
Jan 17 15:10:44.145: INFO: validating pod update-demo-nautilus-ljmnv
Jan 17 15:10:44.156: INFO: got data: { "image": "nautilus.jpg" }
Jan 17 15:10:44.156: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jan 17 15:10:44.156: INFO: update-demo-nautilus-ljmnv is verified up and running
STEP: using delete to clean up resources
Jan 17 15:10:44.157: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-5532 delete --grace-period=0 --force -f -'
Jan 17 15:10:44.315: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan 17 15:10:44.315: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
Jan 17 15:10:44.315: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-5532 get rc,svc -l name=update-demo --no-headers'
Jan 17 15:10:44.516: INFO: stderr: "No resources found in kubectl-5532 namespace.\n"
Jan 17 15:10:44.516: INFO: stdout: ""
Jan 17 15:10:44.516: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-5532 get pods -l name=update-demo -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Jan 17 15:10:44.695: INFO: stderr: ""
Jan 17 15:10:44.695: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 17 15:10:44.695: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-5532" for this suite.
•
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]","total":-1,"completed":5,"skipped":101,"failed":1,"failures":["[sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]"]}
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 17 15:10:44.734: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename security-context
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should support container.SecurityContext.RunAsUser And container.SecurityContext.RunAsGroup [LinuxOnly] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating a pod to test pod.Spec.SecurityContext.RunAsUser
Jan 17 15:10:44.811: INFO: Waiting up to 5m0s for pod "security-context-09631f2f-40fb-4ac0-82cc-ec56d07ffa6c" in namespace "security-context-4761" to be "Succeeded or Failed"
Jan 17 15:10:44.823: INFO: Pod "security-context-09631f2f-40fb-4ac0-82cc-ec56d07ffa6c": Phase="Pending", Reason="", readiness=false. Elapsed: 12.205385ms
Jan 17 15:10:46.837: INFO: Pod "security-context-09631f2f-40fb-4ac0-82cc-ec56d07ffa6c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025770384s
Jan 17 15:10:48.844: INFO: Pod "security-context-09631f2f-40fb-4ac0-82cc-ec56d07ffa6c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.032307288s
STEP: Saw pod success
Jan 17 15:10:48.844: INFO: Pod "security-context-09631f2f-40fb-4ac0-82cc-ec56d07ffa6c" satisfied condition "Succeeded or Failed"
Jan 17 15:10:48.850: INFO: Trying to get logs from node k8s-upgrade-and-conformance-lxkik8-worker-krt1ur pod security-context-09631f2f-40fb-4ac0-82cc-ec56d07ffa6c container test-container: <nil>
STEP: delete the pod
Jan 17 15:10:48.900: INFO: Waiting for pod security-context-09631f2f-40fb-4ac0-82cc-ec56d07ffa6c to disappear
Jan 17 15:10:48.908: INFO: Pod security-context-09631f2f-40fb-4ac0-82cc-ec56d07ffa6c no longer exists
[AfterEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 17 15:10:48.908: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-4761" for this suite.
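What this Security Context test exercises: the container is pinned to a specific UID/GID via securityContext and exits after checking them, so the pod is expected to reach "Succeeded". A sketch of such a pod in Go — the image, command, and the 1001/2002 IDs are illustrative, not the values the suite uses:

package securitycontext

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func int64Ptr(v int64) *int64 { return &v }

// runAsPod builds a pod whose container is forced to run as a specific
// UID and GID; the pod runs once and is then checked for "Succeeded",
// matching the "Succeeded or Failed" wait in the log above.
func runAsPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "security-context-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "test-container",
				Image:   "busybox",
				Command: []string{"sh", "-c", "id"},
				SecurityContext: &corev1.SecurityContext{
					RunAsUser:  int64Ptr(1001), // illustrative UID
					RunAsGroup: int64Ptr(2002), // illustrative GID
				},
			}},
		},
	}
}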
•
------------------------------
{"msg":"PASSED [sig-node] Security Context should support container.SecurityContext.RunAsUser And container.SecurityContext.RunAsGroup [LinuxOnly] [Conformance]","total":-1,"completed":6,"skipped":101,"failed":1,"failures":["[sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 17 15:10:49.024: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:752
[It] should find a service from listing all namespaces [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: fetching services
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 17 15:10:49.077: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-9246" for this suite.
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:756
•
------------------------------
{"msg":"PASSED [sig-network] Services should find a service from listing all namespaces [Conformance]","total":-1,"completed":7,"skipped":130,"failed":1,"failures":["[sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]"]}
S
------------------------------
[BeforeEach] [sig-api-machinery] ResourceQuota
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 17 15:10:49.096: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should verify ResourceQuota with best effort scope. [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating a ResourceQuota with best effort scope
STEP: Ensuring ResourceQuota status is calculated
STEP: Creating a ResourceQuota with not best effort scope
STEP: Ensuring ResourceQuota status is calculated
STEP: Creating a best-effort pod
STEP: Ensuring resource quota with best effort scope captures the pod usage
STEP: Ensuring resource quota with not best effort ignored the pod usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
STEP: Creating a not best-effort pod
STEP: Ensuring resource quota with not best effort scope captures the pod usage
STEP: Ensuring resource quota with best effort scope ignored the pod usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 17 15:11:05.310: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-9673" for this suite.
•
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. [Conformance]","total":-1,"completed":8,"skipped":131,"failed":1,"failures":["[sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]"]}
SSSSSSSSSS
------------------------------
[BeforeEach] [sig-node] Variable Expansion
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 17 15:11:05.362: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should allow substituting values in a container's args [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating a pod to test substitution in container's args
Jan 17 15:11:05.417: INFO: Waiting up to 5m0s for pod "var-expansion-17a12c6c-66a5-49de-b034-f1a356ae1f83" in namespace "var-expansion-1608" to be "Succeeded or Failed"
Jan 17 15:11:05.422: INFO: Pod "var-expansion-17a12c6c-66a5-49de-b034-f1a356ae1f83": Phase="Pending", Reason="", readiness=false. Elapsed: 4.847446ms
Jan 17 15:11:07.430: INFO: Pod "var-expansion-17a12c6c-66a5-49de-b034-f1a356ae1f83": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012290769s
Jan 17 15:11:09.437: INFO: Pod "var-expansion-17a12c6c-66a5-49de-b034-f1a356ae1f83": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.019857426s
STEP: Saw pod success
Jan 17 15:11:09.437: INFO: Pod "var-expansion-17a12c6c-66a5-49de-b034-f1a356ae1f83" satisfied condition "Succeeded or Failed"
Jan 17 15:11:09.446: INFO: Trying to get logs from node k8s-upgrade-and-conformance-lxkik8-md-0-g974z-f76d7886c-xqvh2 pod var-expansion-17a12c6c-66a5-49de-b034-f1a356ae1f83 container dapi-container: <nil>
STEP: delete the pod
Jan 17 15:11:09.494: INFO: Waiting for pod var-expansion-17a12c6c-66a5-49de-b034-f1a356ae1f83 to disappear
Jan 17 15:11:09.498: INFO: Pod var-expansion-17a12c6c-66a5-49de-b034-f1a356ae1f83 no longer exists
[AfterEach] [sig-node] Variable Expansion
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 17 15:11:09.498: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-1608" for this suite.
•
------------------------------
{"msg":"PASSED [sig-node] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance]","total":-1,"completed":9,"skipped":141,"failed":1,"failures":["[sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]"]}
SSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-node] Kubelet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 17 15:11:09.563: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-node] Kubelet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/kubelet.go:38
[It] should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
Jan 17 15:11:09.627: INFO: The status of Pod busybox-host-aliases7615b0e0-a755-43e5-a7d6-023703108b7f is Pending, waiting for it to be Running (with Ready = true)
Jan 17 15:11:11.634: INFO: The status of Pod busybox-host-aliases7615b0e0-a755-43e5-a7d6-023703108b7f is Running (Ready = true)
[AfterEach] [sig-node] Kubelet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 17 15:11:11.660: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-5557" for this suite.
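The mechanism behind the passing Kubelet test: entries in spec.hostAliases are written by the kubelet into the container's /etc/hosts, and the test reads the file back. A sketch in Go — the IP and hostnames are made-up values, not the ones the suite writes:

package kubelettest

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// hostAliasesPod builds a pod whose hostAliases the kubelet appends to
// the container's /etc/hosts; the container prints the file so the test
// can assert the entries are present.
func hostAliasesPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "busybox-host-aliases"},
		Spec: corev1.PodSpec{
			HostAliases: []corev1.HostAlias{{
				IP:        "123.45.67.89", // illustrative
				Hostnames: []string{"foo.local", "bar.local"},
			}},
			Containers: []corev1.Container{{
				Name:    "busybox",
				Image:   "busybox",
				Command: []string{"sh", "-c", "cat /etc/hosts && sleep 600"},
			}},
		},
	}
}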
•
------------------------------
{"msg":"PASSED [sig-node] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":10,"skipped":157,"failed":1,"failures":["[sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]"]}
SS
------------------------------
[BeforeEach] [sig-node] Downward API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 17 15:11:11.697: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating a pod to test downward api env vars
Jan 17 15:11:11.755: INFO: Waiting up to 5m0s for pod "downward-api-854e3d03-436f-4e9a-a975-721b90faf9c7" in namespace "downward-api-4175" to be "Succeeded or Failed"
Jan 17 15:11:11.767: INFO: Pod "downward-api-854e3d03-436f-4e9a-a975-721b90faf9c7": Phase="Pending", Reason="", readiness=false. Elapsed: 11.664425ms
Jan 17 15:11:13.773: INFO: Pod "downward-api-854e3d03-436f-4e9a-a975-721b90faf9c7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018165738s
Jan 17 15:11:15.782: INFO: Pod "downward-api-854e3d03-436f-4e9a-a975-721b90faf9c7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.026434245s
STEP: Saw pod success
Jan 17 15:11:15.782: INFO: Pod "downward-api-854e3d03-436f-4e9a-a975-721b90faf9c7" satisfied condition "Succeeded or Failed"
Jan 17 15:11:15.790: INFO: Trying to get logs from node k8s-upgrade-and-conformance-lxkik8-md-0-g974z-f76d7886c-xqvh2 pod downward-api-854e3d03-436f-4e9a-a975-721b90faf9c7 container dapi-container: <nil>
STEP: delete the pod
Jan 17 15:11:15.825: INFO: Waiting for pod downward-api-854e3d03-436f-4e9a-a975-721b90faf9c7 to disappear
Jan 17 15:11:15.831: INFO: Pod downward-api-854e3d03-436f-4e9a-a975-721b90faf9c7 no longer exists
[AfterEach] [sig-node] Downward API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 17 15:11:15.831: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-4175" for this suite.
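The Downward API behavior under test: when a container declares no resource limits, env vars backed by resourceFieldRef limits.cpu / limits.memory are populated from the node's allocatable capacity instead of failing. A sketch of the env wiring in Go (variable names are illustrative):

package downwardapi

import corev1 "k8s.io/api/core/v1"

// defaultLimitEnv wires env vars to the container's effective limits via
// the downward API; with no limits declared on the container, the values
// default to the node's allocatable CPU and memory.
func defaultLimitEnv() []corev1.EnvVar {
	return []corev1.EnvVar{
		{
			Name: "CPU_LIMIT",
			ValueFrom: &corev1.EnvVarSource{
				ResourceFieldRef: &corev1.ResourceFieldSelector{Resource: "limits.cpu"},
			},
		},
		{
			Name: "MEMORY_LIMIT",
			ValueFrom: &corev1.EnvVarSource{
				ResourceFieldRef: &corev1.ResourceFieldSelector{Resource: "limits.memory"},
			},
		},
	}
}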
•
------------------------------
{"msg":"PASSED [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]","total":-1,"completed":11,"skipped":159,"failed":1,"failures":["[sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-scheduling] LimitRange
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 17 15:11:15.922: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename limitrange
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating a LimitRange
STEP: Setting up watch
STEP: Submitting a LimitRange
Jan 17 15:11:15.992: INFO: observed the limitRanges list
STEP: Verifying LimitRange creation was observed
STEP: Fetching the LimitRange to ensure it has proper values
Jan 17 15:11:16.001: INFO: Verifying requests: expected map[cpu:{{100 -3} {<nil>} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {<nil>} BinarySI} memory:{{209715200 0} {<nil>} BinarySI}] with actual map[cpu:{{100 -3} {<nil>} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {<nil>} BinarySI} memory:{{209715200 0} {<nil>} BinarySI}]
Jan 17 15:11:16.001: INFO: Verifying limits: expected map[cpu:{{500 -3} {<nil>} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {<nil>} 500Gi BinarySI} memory:{{524288000 0} {<nil>} 500Mi BinarySI}] with actual map[cpu:{{500 -3} {<nil>} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {<nil>} 500Gi BinarySI} memory:{{524288000 0} {<nil>} 500Mi BinarySI}]
STEP: Creating a Pod with no resource requirements
STEP: Ensuring Pod has resource requirements applied from LimitRange
Jan 17 15:11:16.015: INFO: Verifying requests: expected map[cpu:{{100 -3} {<nil>} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {<nil>} BinarySI} memory:{{209715200 0} {<nil>} BinarySI}] with actual map[cpu:{{100 -3} {<nil>} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {<nil>} BinarySI} memory:{{209715200 0} {<nil>} BinarySI}]
Jan 17 15:11:16.016: INFO: Verifying limits: expected map[cpu:{{500 -3} {<nil>} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {<nil>} 500Gi BinarySI} memory:{{524288000 0} {<nil>} 500Mi BinarySI}] with actual map[cpu:{{500 -3} {<nil>} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {<nil>} 500Gi BinarySI} memory:{{524288000 0} {<nil>} 500Mi BinarySI}]
STEP: Creating a Pod with partial resource requirements
STEP: Ensuring Pod has merged resource requirements applied from LimitRange
Jan 17 15:11:16.032: INFO: Verifying requests: expected map[cpu:{{300 -3} {<nil>} 300m DecimalSI} ephemeral-storage:{{161061273600 0} {<nil>} 150Gi BinarySI} memory:{{157286400 0} {<nil>} 150Mi BinarySI}] with actual map[cpu:{{300 -3} {<nil>} 300m DecimalSI} ephemeral-storage:{{161061273600 0} {<nil>} 150Gi BinarySI} memory:{{157286400 0} {<nil>} 150Mi BinarySI}]
Jan 17 15:11:16.032: INFO: Verifying limits: expected map[cpu:{{300 -3} {<nil>} 300m DecimalSI} ephemeral-storage:{{536870912000 0} {<nil>} 500Gi BinarySI} memory:{{524288000 0} {<nil>} 500Mi BinarySI}] with actual map[cpu:{{300 -3} {<nil>} 300m DecimalSI} ephemeral-storage:{{536870912000 0} {<nil>} 500Gi BinarySI} memory:{{524288000 0} {<nil>} 500Mi BinarySI}]
STEP: Failing to create a Pod with less than min resources
STEP: Failing to create a Pod with more than max resources
STEP: Updating a LimitRange
STEP: Verifying LimitRange updating is effective
STEP: Creating a Pod with less than former min resources
STEP: Failing to create a Pod with more than max resources
STEP: Deleting a LimitRange
STEP: Verifying the LimitRange was deleted
Jan 17 15:11:23.107: INFO: limitRange is already deleted
STEP: Creating a Pod with more than former max resources
[AfterEach] [sig-scheduling] LimitRange
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 17 15:11:23.135: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "limitrange-3910" for this suite.
•
------------------------------
{"msg":"PASSED [sig-scheduling] LimitRange should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance]","total":-1,"completed":12,"skipped":186,"failed":1,"failures":["[sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]"]}
S
------------------------------
[BeforeEach] [sig-node] Probing container
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 17 15:07:25.764: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-node] Probing container
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:59
[It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating pod busybox-ee8251af-4b6b-4f7f-9a1d-54c2e0c5baa3 in namespace container-probe-4390
Jan 17 15:07:27.834: INFO: Started pod busybox-ee8251af-4b6b-4f7f-9a1d-54c2e0c5baa3 in namespace container-probe-4390
STEP: checking the pod's current state and verifying that restartCount is present
Jan 17 15:07:27.838: INFO: Initial restart count of pod busybox-ee8251af-4b6b-4f7f-9a1d-54c2e0c5baa3 is 0
STEP: deleting the pod
[AfterEach] [sig-node] Probing container
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 17 15:11:28.904: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-4390" for this suite.
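The probe test that just passed creates a container whose exec liveness probe ("cat /tmp/health") always succeeds, then watches for roughly four minutes to assert restartCount stays 0. A sketch of such a container spec — the timings are illustrative, and note the embedded handler field is spelled ProbeHandler in current k8s.io/api (older releases embed it as Handler):

package containerprobe

import corev1 "k8s.io/api/core/v1"

// healthyLivenessContainer: the probed file is created at startup and
// never removed, so every "cat /tmp/health" succeeds and the kubelet has
// no reason to restart the container.
func healthyLivenessContainer() corev1.Container {
	return corev1.Container{
		Name:    "busybox",
		Image:   "busybox",
		Command: []string{"sh", "-c", "touch /tmp/health; sleep 600"},
		LivenessProbe: &corev1.Probe{
			ProbeHandler: corev1.ProbeHandler{
				Exec: &corev1.ExecAction{Command: []string{"cat", "/tmp/health"}},
			},
			InitialDelaySeconds: 5,  // illustrative
			PeriodSeconds:       10, // illustrative
		},
	}
}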
• [SLOW TEST:243.179 seconds]
[sig-node] Probing container
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
------------------------------
{"msg":"PASSED [sig-node] Probing container should *not* be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":-1,"completed":10,"skipped":191,"failed":0}
SSSSSSSS
------------------------------
[BeforeEach] [sig-node] Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 17 15:11:28.976: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-node] Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:189
[It] should support remote command execution over websockets [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
Jan 17 15:11:29.035: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: creating the pod
STEP: submitting the pod to kubernetes
Jan 17 15:11:29.073: INFO: The status of Pod pod-exec-websocket-8761318d-1423-412e-8cbc-5a52f3dea909 is Pending, waiting for it to be Running (with Ready = true)
Jan 17 15:11:31.082: INFO: The status of Pod pod-exec-websocket-8761318d-1423-412e-8cbc-5a52f3dea909 is Running (Ready = true)
[AfterEach] [sig-node] Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 17 15:11:31.237: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-2792" for this suite.
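The websockets test drives the pods/exec subresource directly over a raw websocket; the more common client-go route to the same apiserver endpoint is the SPDY executor, sketched here (the command and stream wiring are illustrative, not the test's code):

package podexec

import (
	"os"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/kubernetes/scheme"
	restclient "k8s.io/client-go/rest"
	"k8s.io/client-go/tools/remotecommand"
)

// execInPod runs a command in a running container via the apiserver's
// exec subresource, streaming its output back to the caller.
func execInPod(cs kubernetes.Interface, config *restclient.Config, ns, pod, container string) error {
	req := cs.CoreV1().RESTClient().Post().
		Resource("pods").Namespace(ns).Name(pod).SubResource("exec").
		VersionedParams(&corev1.PodExecOptions{
			Container: container,
			Command:   []string{"echo", "remote execution over exec subresource"},
			Stdout:    true,
			Stderr:    true,
		}, scheme.ParameterCodec)

	exec, err := remotecommand.NewSPDYExecutor(config, "POST", req.URL())
	if err != nil {
		return err
	}
	return exec.Stream(remotecommand.StreamOptions{Stdout: os.Stdout, Stderr: os.Stderr})
}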
•
------------------------------
{"msg":"PASSED [sig-node] Pods should support remote command execution over websockets [NodeConformance] [Conformance]","total":-1,"completed":11,"skipped":199,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 17 15:11:31.416: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:244
[It] should check if kubectl diff finds a difference for Deployments [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: create deployment with httpd image
Jan 17 15:11:31.485: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-2992 create -f -'
Jan 17 15:11:33.040: INFO: stderr: ""
Jan 17 15:11:33.040: INFO: stdout: "deployment.apps/httpd-deployment created\n"
STEP: verify diff finds difference between live and declared image
Jan 17 15:11:33.040: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-2992 diff -f -'
Jan 17 15:11:33.571: INFO: rc: 1
Jan 17 15:11:33.572: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-2992 delete -f -'
Jan 17 15:11:33.764: INFO: stderr: ""
Jan 17 15:11:33.764: INFO: stdout: "deployment.apps \"httpd-deployment\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 17 15:11:33.764: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-2992" for this suite.
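The bare "rc: 1" after kubectl diff above is the expected outcome, not an error: kubectl diff exits 0 when live and declared objects match, 1 when a difference is found, and greater than 1 on failure. A small Go sketch of driving it and reading the exit code that way (paths and namespace are placeholders):

package kubectldiff

import (
	"errors"
	"fmt"
	"log"
	"os/exec"
)

// runDiff interprets kubectl diff's exit codes: 0 means no drift,
// 1 means a difference was found, anything else is a real failure.
func runDiff(kubeconfig, namespace, manifest string) {
	cmd := exec.Command("kubectl", "--kubeconfig", kubeconfig, "--namespace", namespace, "diff", "-f", manifest)
	out, err := cmd.CombinedOutput()
	var exitErr *exec.ExitError
	switch {
	case err == nil:
		fmt.Println("no differences between live and declared objects")
	case errors.As(err, &exitErr) && exitErr.ExitCode() == 1:
		fmt.Printf("differences found:\n%s", out)
	default:
		log.Fatalf("kubectl diff failed: %v", err)
	}
}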
•
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl diff should check if kubectl diff finds a difference for Deployments [Conformance]","total":-1,"completed":12,"skipped":235,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-apps] Deployment
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 17 15:11:23.176: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:89
[It] should run the lifecycle of a Deployment [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: creating a Deployment
STEP: waiting for Deployment to be created
STEP: waiting for all Replicas to be Ready
Jan 17 15:11:23.267: INFO: observed Deployment test-deployment in namespace deployment-9984 with ReadyReplicas 0 and labels map[test-deployment-static:true]
Jan 17 15:11:23.268: INFO: observed Deployment test-deployment in namespace deployment-9984 with ReadyReplicas 0 and labels map[test-deployment-static:true]
Jan 17 15:11:23.288: INFO: observed Deployment test-deployment in namespace deployment-9984 with ReadyReplicas 0 and labels map[test-deployment-static:true]
Jan 17 15:11:23.288: INFO: observed Deployment test-deployment in namespace deployment-9984 with ReadyReplicas 0 and labels map[test-deployment-static:true]
Jan 17 15:11:23.327: INFO: observed Deployment test-deployment in namespace deployment-9984 with ReadyReplicas 0 and labels map[test-deployment-static:true]
Jan 17 15:11:23.328: INFO: observed Deployment test-deployment in namespace deployment-9984 with ReadyReplicas 0 and labels map[test-deployment-static:true]
Jan 17 15:11:23.420: INFO: observed Deployment test-deployment in namespace deployment-9984 with ReadyReplicas 0 and labels map[test-deployment-static:true]
Jan 17 15:11:23.420: INFO: observed Deployment test-deployment in namespace deployment-9984 with ReadyReplicas 0 and labels map[test-deployment-static:true]
Jan 17 15:11:24.602: INFO: observed Deployment test-deployment in namespace deployment-9984 with ReadyReplicas 1 and labels map[test-deployment-static:true]
Jan 17 15:11:24.602: INFO: observed Deployment test-deployment in namespace deployment-9984 with ReadyReplicas 1 and labels map[test-deployment-static:true]
Jan 17 15:11:25.273: INFO: observed Deployment test-deployment in namespace deployment-9984 with ReadyReplicas 2 and labels map[test-deployment-static:true]
STEP: patching the Deployment
Jan 17 15:11:25.306: INFO: observed event type ADDED
STEP: waiting for Replicas to scale
Jan 17 15:11:25.311: INFO: observed Deployment test-deployment in namespace deployment-9984 with ReadyReplicas 0
Jan 17 15:11:25.311: INFO: observed Deployment test-deployment in namespace deployment-9984 with ReadyReplicas 0
Jan 17 15:11:25.311: INFO: observed Deployment test-deployment in namespace deployment-9984 with ReadyReplicas 0
Jan 17 15:11:25.311: INFO: observed Deployment test-deployment in namespace deployment-9984 with ReadyReplicas 0
Jan 17 15:11:25.311: INFO: observed Deployment test-deployment in namespace deployment-9984 with ReadyReplicas 0
Jan 17 15:11:25.311: INFO: observed Deployment test-deployment in namespace deployment-9984 with ReadyReplicas 0
Jan 17 15:11:25.311: INFO: observed Deployment test-deployment in namespace deployment-9984 with ReadyReplicas 0
Jan 17 15:11:25.312: INFO: observed Deployment test-deployment in namespace deployment-9984 with ReadyReplicas 0
Jan 17 15:11:25.312: INFO: observed Deployment test-deployment in namespace deployment-9984 with ReadyReplicas 1
Jan 17 15:11:25.312: INFO: observed Deployment test-deployment in namespace deployment-9984 with ReadyReplicas 1
Jan 17 15:11:25.312: INFO: observed Deployment test-deployment in namespace deployment-9984 with ReadyReplicas 2
Jan 17 15:11:25.312: INFO: observed Deployment test-deployment in namespace deployment-9984 with ReadyReplicas 2
Jan 17 15:11:25.312: INFO: observed Deployment test-deployment in namespace deployment-9984 with ReadyReplicas 2
Jan 17 15:11:25.312: INFO: observed Deployment test-deployment in namespace deployment-9984 with ReadyReplicas 2
Jan 17 15:11:25.331: INFO: observed Deployment test-deployment in namespace deployment-9984 with ReadyReplicas 2
Jan 17 15:11:25.331: INFO: observed Deployment test-deployment in namespace deployment-9984 with ReadyReplicas 2
Jan 17 15:11:25.358: INFO: observed Deployment test-deployment in namespace deployment-9984 with ReadyReplicas 2
Jan 17 15:11:25.359: INFO: observed Deployment test-deployment in namespace deployment-9984 with ReadyReplicas 2
Jan 17 15:11:25.395: INFO: observed Deployment test-deployment in namespace deployment-9984 with ReadyReplicas 1
Jan 17 15:11:25.395: INFO: observed Deployment test-deployment in namespace deployment-9984 with ReadyReplicas 1
Jan 17 15:11:25.411: INFO: observed Deployment test-deployment in namespace deployment-9984 with ReadyReplicas 1
Jan 17 15:11:25.411: INFO: observed Deployment test-deployment in namespace deployment-9984 with ReadyReplicas 1
Jan 17 15:11:27.316: INFO: observed Deployment test-deployment in namespace deployment-9984 with ReadyReplicas 2
Jan 17 15:11:27.317: INFO: observed Deployment test-deployment in namespace deployment-9984 with ReadyReplicas 2
Jan 17 15:11:27.358: INFO: observed Deployment test-deployment in namespace deployment-9984 with ReadyReplicas 1
STEP: listing Deployments
Jan 17 15:11:27.368: INFO: Found test-deployment with labels: map[test-deployment:patched test-deployment-static:true]
STEP: updating the Deployment
Jan 17 15:11:27.400: INFO: observed Deployment test-deployment in namespace deployment-9984 with ReadyReplicas 1
STEP: fetching the DeploymentStatus
Jan 17 15:11:27.423: INFO: observed Deployment test-deployment in namespace deployment-9984 with ReadyReplicas 1 and labels map[test-deployment:updated test-deployment-static:true]
Jan 17 15:11:27.423: INFO: observed Deployment test-deployment in namespace deployment-9984 with ReadyReplicas 1 and labels map[test-deployment:updated test-deployment-static:true]
Jan 17 15:11:27.442: INFO: observed Deployment test-deployment in namespace deployment-9984 with ReadyReplicas 1 and labels map[test-deployment:updated test-deployment-static:true]
Jan 17 15:11:27.504: INFO: observed Deployment test-deployment in namespace deployment-9984 with ReadyReplicas 1 and labels map[test-deployment:updated test-deployment-static:true]
Jan 17 15:11:27.541: INFO: observed Deployment test-deployment in namespace deployment-9984 with ReadyReplicas 1 and labels map[test-deployment:updated test-deployment-static:true]
Jan 17 15:11:28.662: INFO: observed Deployment test-deployment in namespace deployment-9984 with ReadyReplicas 2 and labels map[test-deployment:updated test-deployment-static:true]
Jan 17 15:11:35.376: INFO: observed Deployment test-deployment in namespace deployment-9984 with ReadyReplicas 3 and labels map[test-deployment:updated test-deployment-static:true]
Jan 17 15:11:35.501: INFO: observed Deployment test-deployment in namespace deployment-9984 with ReadyReplicas 2 and labels map[test-deployment:updated test-deployment-static:true]
Jan 17 15:11:35.617: INFO: observed Deployment test-deployment in namespace deployment-9984 with ReadyReplicas 2 and labels map[test-deployment:updated test-deployment-static:true]
Jan 17 15:11:41.204: INFO: observed Deployment test-deployment in namespace deployment-9984 with ReadyReplicas 3 and labels map[test-deployment:updated test-deployment-static:true]
STEP: patching the DeploymentStatus
STEP: fetching the DeploymentStatus
Jan 17 15:11:41.340: INFO: observed Deployment test-deployment in namespace deployment-9984 with ReadyReplicas 1
Jan 17 15:11:41.340: INFO: observed Deployment test-deployment in namespace deployment-9984 with ReadyReplicas 1
Jan 17 15:11:41.340: INFO: observed Deployment test-deployment in namespace deployment-9984 with ReadyReplicas 1
Jan 17 15:11:41.342: INFO: observed Deployment test-deployment in namespace deployment-9984 with ReadyReplicas 1
Jan 17 15:11:41.342: INFO: observed Deployment test-deployment in namespace deployment-9984 with ReadyReplicas 1
Jan 17 15:11:41.342: INFO: observed Deployment test-deployment in namespace deployment-9984 with ReadyReplicas 2
Jan 17 15:11:41.343: INFO: observed Deployment test-deployment in namespace deployment-9984 with ReadyReplicas 3
Jan 17 15:11:41.343: INFO: observed Deployment test-deployment in namespace deployment-9984 with ReadyReplicas 2
Jan 17 15:11:41.343: INFO: observed Deployment test-deployment in namespace deployment-9984 with ReadyReplicas 2
Jan 17 15:11:41.343: INFO: observed Deployment test-deployment in namespace deployment-9984 with ReadyReplicas 3
STEP: deleting the Deployment
Jan 17 15:11:41.374: INFO: observed event type MODIFIED
Jan 17 15:11:41.388: INFO: observed event type MODIFIED
Jan 17 15:11:41.388: INFO: observed event type MODIFIED
Jan 17 15:11:41.390: INFO: observed event type MODIFIED
Jan 17 15:11:41.390: INFO: observed event type MODIFIED
Jan 17 15:11:41.391: INFO: observed event type MODIFIED
Jan 17 15:11:41.391: INFO: observed event type MODIFIED
Jan 17 15:11:41.391: INFO: observed event type MODIFIED
Jan 17 15:11:41.391: INFO: observed event type MODIFIED
Jan 17 15:11:41.392: INFO: observed event type MODIFIED
Jan 17 15:11:41.392: INFO: observed event type MODIFIED
[AfterEach] [sig-apps] Deployment
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:83
Jan 17 15:11:41.401: INFO: Log out all the ReplicaSets if there is no deployment created
Jan 17 15:11:41.417: INFO: ReplicaSet "test-deployment-5ddd8b47d8": &ReplicaSet{ObjectMeta:{test-deployment-5ddd8b47d8 deployment-9984
2f8d6892-fc75-49b1-852c-9a100290e16a 4493 4 2023-01-17 15:11:25 +0000 UTC <nil> <nil> map[pod-template-hash:5ddd8b47d8 test-deployment-static:true] map[deployment.kubernetes.io/desired-replicas:2 deployment.kubernetes.io/max-replicas:3 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-deployment f46aebbb-f97d-413c-b71d-e7849aad92ce 0xc0026bc997 0xc0026bc998}] [] [{kube-controller-manager Update apps/v1 2023-01-17 15:11:25 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:pod-template-hash":{},"f:test-deployment-static":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f46aebbb-f97d-413c-b71d-e7849aad92ce\"}":{}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:pod-template-hash":{},"f:test-deployment-static":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"test-deployment\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2023-01-17 15:11:41 +0000 UTC FieldsV1 {"f:status":{"f:observedGeneration":{},"f:replicas":{}}} status}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{pod-template-hash: 5ddd8b47d8,test-deployment-static: true,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[pod-template-hash:5ddd8b47d8 test-deployment-static:true] map[] [] [] []} {[] [] [{test-deployment k8s.gcr.io/pause:3.6 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc0026bca20 <nil> ClusterFirst map[] <nil> false false false <nil> &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] <nil> nil [] <nil> <nil> <nil> map[] [] <nil> nil}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:4,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Jan 17 15:11:41.432: INFO: pod: "test-deployment-5ddd8b47d8-x4445": &Pod{ObjectMeta:{test-deployment-5ddd8b47d8-x4445 test-deployment-5ddd8b47d8- deployment-9984 7835b061-32bc-4c20-9ea8-88923f9ef50c 4489 0 2023-01-17 15:11:27 +0000 UTC 2023-01-17 15:11:42 +0000 UTC 0xc0025fe100 map[pod-template-hash:5ddd8b47d8 test-deployment-static:true] map[] [{apps/v1 ReplicaSet test-deployment-5ddd8b47d8 2f8d6892-fc75-49b1-852c-9a100290e16a 0xc0025fe137 0xc0025fe138}] [] [{kube-controller-manager Update v1 2023-01-17 15:11:27 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:pod-template-hash":{},"f:test-deployment-static":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"2f8d6892-fc75-49b1-852c-9a100290e16a\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"test-deployment\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2023-01-17 15:11:28 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"192.168.0.10\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-qp5rr,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:test-deployment,Image:k8s.gcr.io/pause:3.6,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-qp5rr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*1,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k8s-upgrade
-and-conformance-lxkik8-md-0-g974z-f76d7886c-88xc9,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-17 15:11:27 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-17 15:11:28 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-17 15:11:28 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-17 15:11:27 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.4,PodIP:192.168.0.10,StartTime:2023-01-17 15:11:27 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:test-deployment,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2023-01-17 15:11:28 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/pause:3.6,ImageID:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,ContainerID:containerd://1dfc9dda46efa07efb9782074a815d833a5b7b24d430e2897a85d57444470d11,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:192.168.0.10,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 17 15:11:41.432: INFO: ReplicaSet "test-deployment-6d7ffcf7fb": &ReplicaSet{ObjectMeta:{test-deployment-6d7ffcf7fb deployment-9984 f98b5b5b-02ef-4622-9468-97693e84fa55 4244 3 2023-01-17 15:11:23 +0000 UTC <nil> <nil> map[pod-template-hash:6d7ffcf7fb test-deployment-static:true] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-deployment f46aebbb-f97d-413c-b71d-e7849aad92ce 0xc0026bca87 0xc0026bca88}] [] [{kube-controller-manager Update apps/v1 2023-01-17 15:11:23 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:pod-template-hash":{},"f:test-deployment-static":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f46aebbb-f97d-413c-b71d-e7849aad92ce\"}":{}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:pod-template-hash":{},"f:test-deployment-static":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"test-deployment\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2023-01-17 15:11:27 +0000 UTC FieldsV1 {"f:status":{"f:observedGeneration":{},"f:replicas":{}}} status}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{pod-template-hash: 6d7ffcf7fb,test-deployment-static: true,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[pod-template-hash:6d7ffcf7fb test-deployment-static:true] map[] [] [] []} {[] [] [{test-deployment k8s.gcr.io/e2e-test-images/agnhost:2.39 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc0026bcb10 <nil> ClusterFirst map[] <nil> false false false <nil> &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] <nil> nil [] <nil> <nil> <nil> map[] [] <nil> nil}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Jan 17 15:11:41.446: INFO: ReplicaSet "test-deployment-854fdc678": &ReplicaSet{ObjectMeta:{test-deployment-854fdc678 deployment-9984 bf6a3ae9-a2b6-4443-9adf-480b2e0fc90b 4485 2 2023-01-17 15:11:27 +0000 UTC <nil> <nil> map[pod-template-hash:854fdc678 test-deployment-static:true] map[deployment.kubernetes.io/desired-replicas:2 deployment.kubernetes.io/max-replicas:3 deployment.kubernetes.io/revision:3] [{apps/v1 Deployment test-deployment f46aebbb-f97d-413c-b71d-e7849aad92ce 0xc0026bcb77 0xc0026bcb78}] [] [{kube-controller-manager Update apps/v1 2023-01-17 15:11:27 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:pod-template-hash":{},"f:test-deployment-static":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f46aebbb-f97d-413c-b71d-e7849aad92ce\"}":{}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:pod-template-hash":{},"f:test-deployment-static":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"test-deployment\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2023-01-17 15:11:35 +0000 UTC FieldsV1 {"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}} status}]},Spec:ReplicaSetSpec{Replicas:*2,Selector:&v1.LabelSelector{MatchLabels:map[string]string{pod-template-hash: 854fdc678,test-deployment-static: true,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[pod-template-hash:854fdc678 test-deployment-static:true] map[] [] [] []} {[] [] [{test-deployment k8s.gcr.io/e2e-test-images/httpd:2.4.38-2 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc0026bcc00 <nil> ClusterFirst map[] <nil> false false false <nil> &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] <nil> nil [] <nil> <nil> <nil> map[] [] <nil> nil}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:2,FullyLabeledReplicas:2,ObservedGeneration:2,ReadyReplicas:2,AvailableReplicas:2,Conditions:[]ReplicaSetCondition{},},} Jan 17 15:11:41.458: INFO: pod: "test-deployment-854fdc678-58tm5": &Pod{ObjectMeta:{test-deployment-854fdc678-58tm5 test-deployment-854fdc678- deployment-9984 6a4d2e16-0d83-4120-92fb-facfeba398c7 4408 0 2023-01-17 15:11:27 +0000 UTC <nil> <nil> map[pod-template-hash:854fdc678 test-deployment-static:true] map[] [{apps/v1 ReplicaSet test-deployment-854fdc678 bf6a3ae9-a2b6-4443-9adf-480b2e0fc90b 0xc0025fecf7 0xc0025fecf8}] [] [{kube-controller-manager Update v1 2023-01-17 15:11:27 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:pod-template-hash":{},"f:test-deployment-static":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bf6a3ae9-a2b6-4443-9adf-480b2e0fc90b\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"test-deployment\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2023-01-17 15:11:35 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"192.168.2.15\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-nnj8q,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:test-deployment,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-nnj8q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*1,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k8s-upgrade-and-conformance-lxkik8-worker-krt1ur,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,}
,Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-17 15:11:27 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-17 15:11:35 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-17 15:11:35 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-17 15:11:27 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.7,PodIP:192.168.2.15,StartTime:2023-01-17 15:11:27 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:test-deployment,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2023-01-17 15:11:34 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,ImageID:k8s.gcr.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3,ContainerID:containerd://e407024e565f843b713a011f52b057a1aa0d6d8998d1954e79d96a22fc3b0472,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:192.168.2.15,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 17 15:11:41.459: INFO: pod: "test-deployment-854fdc678-trqn2": &Pod{ObjectMeta:{test-deployment-854fdc678-trqn2 test-deployment-854fdc678- deployment-9984 0f65d176-8e8c-4c45-99bd-ae52bce42a01 4484 0 2023-01-17 15:11:35 +0000 UTC <nil> <nil> map[pod-template-hash:854fdc678 test-deployment-static:true] map[] [{apps/v1 ReplicaSet test-deployment-854fdc678 bf6a3ae9-a2b6-4443-9adf-480b2e0fc90b 0xc0025feed7 0xc0025feed8}] [] [{kube-controller-manager Update v1 2023-01-17 15:11:35 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:pod-template-hash":{},"f:test-deployment-static":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bf6a3ae9-a2b6-4443-9adf-480b2e0fc90b\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"test-deployment\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2023-01-17 15:11:41 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"192.168.1.18\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-tgr6m,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:test-deployment,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-tgr6m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*1,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k8s-upgrade-and-conformance-lxkik8-md-0-g974z-f76d7886c-xqvh2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationS
econds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-17 15:11:35 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-17 15:11:41 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-17 15:11:41 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-17 15:11:35 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.5,PodIP:192.168.1.18,StartTime:2023-01-17 15:11:35 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:test-deployment,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2023-01-17 15:11:40 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,ImageID:k8s.gcr.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3,ContainerID:containerd://caf01e4ae1331eccf4d41729d934c15750f44ff3d97c35214bbac87fca8f3bee,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:192.168.1.18,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 17 15:11:41.460: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-9984" for this suite.
•
------------------------------
{"msg":"PASSED [sig-apps] Deployment should run the lifecycle of a Deployment [Conformance]","total":-1,"completed":13,"skipped":187,"failed":1,"failures":["[sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]"]}
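The "observed Deployment ... with ReadyReplicas N" and "observed event type ADDED/MODIFIED" lines in the spec above come from a watch held open on the Deployment while it is created, patched, scaled, updated, and deleted; ReadyReplicas briefly dips and climbs as the new ReplicaSet scales up while the old one scales down. A minimal client-go sketch of such an observation loop (an illustration only, not the conformance suite's actual code; the kubeconfig path, namespace, and label are taken from the log above):

package main

import (
	"context"
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Same kubeconfig the e2e run points at (">>> kubeConfig: /tmp/kubeconfig").
	cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/kubeconfig")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// Watch Deployments in the test namespace; every change the apiserver
	// reports becomes one "observed ..." line like those above.
	w, err := client.AppsV1().Deployments("deployment-9984").Watch(context.TODO(),
		metav1.ListOptions{LabelSelector: "test-deployment-static=true"})
	if err != nil {
		panic(err)
	}
	defer w.Stop()
	for ev := range w.ResultChan() {
		if d, ok := ev.Object.(*appsv1.Deployment); ok {
			fmt.Printf("observed event type %s: Deployment %s with ReadyReplicas %d and labels %v\n",
				ev.Type, d.Name, d.Status.ReadyReplicas, d.Labels)
		}
	}
}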
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-node] Lease /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 17 15:11:41.694: INFO: >>> kubeConfig: /tmp/kubeconfig STEP: Building a namespace api object, basename lease-test STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [It] lease API should be available [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 [AfterEach] [sig-node] Lease /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 17 15:11:42.015: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "lease-test-4247" for this suite.
•
------------------------------
{"msg":"PASSED [sig-node] Lease lease API should be available [Conformance]","total":-1,"completed":14,"skipped":220,"failed":1,"failures":["[sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]"]}
SSS
------------------------------
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 17 15:11:33.918: INFO: >>> kubeConfig: /tmp/kubeconfig STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jan 17 15:11:35.003: INFO: new replicaset for deployment "sample-webhook-deployment" is yet to be created Jan 17 15:11:37.020: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.January, 17, 15, 11, 35, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 17, 15, 11, 35, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.January, 17, 15, 11, 35, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 17, 15, 11, 35, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6c69dbd86b\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jan 17 15:11:40.055: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource with pruning [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 Jan 17 15:11:40.065: INFO: >>> kubeConfig: /tmp/kubeconfig STEP: Registering the mutating webhook for custom resource e2e-test-webhook-6467-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource that should be mutated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 17 15:11:43.398: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-4350" for this suite. STEP: Destroying namespace "webhook-4350-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
•
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","total":-1,"completed":13,"skipped":269,"failed":0}
SSSSSSS
------------------------------
[BeforeEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 17 15:11:43.647: INFO: >>> kubeConfig: /tmp/kubeconfig STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a replication controller. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ReplicationController STEP: Ensuring resource quota status captures replication controller creation STEP: Deleting a ReplicationController STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 17 15:11:54.856: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-3754" for this suite.
•
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. [Conformance]","total":-1,"completed":14,"skipped":276,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 17 15:04:38.660: INFO: >>> kubeConfig: /tmp/kubeconfig STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:244 [BeforeEach] Update Demo /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:296 [It] should scale a replication controller [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: creating a replication controller Jan 17 15:04:38.714: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-8787 create -f -' Jan 17 15:04:39.775: INFO: stderr: "" Jan 17 15:04:39.776: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. Jan 17 15:04:39.776: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-8787 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Jan 17 15:04:39.881: INFO: stderr: "" Jan 17 15:04:39.881: INFO: stdout: "update-demo-nautilus-lnpmz update-demo-nautilus-xhfwz " Jan 17 15:04:39.881: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-8787 get pods update-demo-nautilus-lnpmz -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Jan 17 15:04:39.958: INFO: stderr: "" Jan 17 15:04:39.958: INFO: stdout: "" Jan 17 15:04:39.958: INFO: update-demo-nautilus-lnpmz is created but not running Jan 17 15:04:44.961: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-8787 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Jan 17 15:04:45.047: INFO: stderr: "" Jan 17 15:04:45.047: INFO: stdout: "update-demo-nautilus-lnpmz update-demo-nautilus-xhfwz " Jan 17 15:04:45.047: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-8787 get pods update-demo-nautilus-lnpmz -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Jan 17 15:04:45.131: INFO: stderr: "" Jan 17 15:04:45.131: INFO: stdout: "true" Jan 17 15:04:45.131: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-8787 get pods update-demo-nautilus-lnpmz -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' Jan 17 15:04:45.218: INFO: stderr: "" Jan 17 15:04:45.218: INFO: stdout: "k8s.gcr.io/e2e-test-images/nautilus:1.5" Jan 17 15:04:45.218: INFO: validating pod update-demo-nautilus-lnpmz Jan 17 15:04:45.224: INFO: got data: { "image": "nautilus.jpg" } Jan 17 15:04:45.225: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jan 17 15:04:45.225: INFO: update-demo-nautilus-lnpmz is verified up and running Jan 17 15:04:45.225: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-8787 get pods update-demo-nautilus-xhfwz -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Jan 17 15:04:45.325: INFO: stderr: "" Jan 17 15:04:45.325: INFO: stdout: "true" Jan 17 15:04:45.325: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-8787 get pods update-demo-nautilus-xhfwz -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' Jan 17 15:04:45.415: INFO: stderr: "" Jan 17 15:04:45.415: INFO: stdout: "k8s.gcr.io/e2e-test-images/nautilus:1.5" Jan 17 15:04:45.415: INFO: validating pod update-demo-nautilus-xhfwz Jan 17 15:08:18.643: INFO: update-demo-nautilus-xhfwz is running right image but validator function failed: the server is currently unable to handle the request (get pods update-demo-nautilus-xhfwz) Jan 17 15:08:23.644: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-8787 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Jan 17 15:08:23.733: INFO: stderr: "" Jan 17 15:08:23.734: INFO: stdout: "update-demo-nautilus-lnpmz update-demo-nautilus-xhfwz " Jan 17 15:08:23.734: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-8787 get pods update-demo-nautilus-lnpmz -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Jan 17 15:08:23.815: INFO: stderr: "" Jan 17 15:08:23.815: INFO: stdout: "true" Jan 17 15:08:23.815: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-8787 get pods update-demo-nautilus-lnpmz -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' Jan 17 15:08:23.896: INFO: stderr: "" Jan 17 15:08:23.896: INFO: stdout: "k8s.gcr.io/e2e-test-images/nautilus:1.5" Jan 17 15:08:23.896: INFO: validating pod update-demo-nautilus-lnpmz Jan 17 15:08:23.902: INFO: got data: { "image": "nautilus.jpg" } Jan 17 15:08:23.902: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jan 17 15:08:23.902: INFO: update-demo-nautilus-lnpmz is verified up and running Jan 17 15:08:23.903: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-8787 get pods update-demo-nautilus-xhfwz -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Jan 17 15:08:23.987: INFO: stderr: "" Jan 17 15:08:23.987: INFO: stdout: "true" Jan 17 15:08:23.987: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-8787 get pods update-demo-nautilus-xhfwz -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' Jan 17 15:08:24.086: INFO: stderr: "" Jan 17 15:08:24.086: INFO: stdout: "k8s.gcr.io/e2e-test-images/nautilus:1.5" Jan 17 15:08:24.086: INFO: validating pod update-demo-nautilus-xhfwz Jan 17 15:11:57.779: INFO: update-demo-nautilus-xhfwz is running right image but validator function failed: the server is currently unable to handle the request (get pods update-demo-nautilus-xhfwz) Jan 17 15:12:02.780: FAIL: Timed out after 300 seconds waiting for name=update-demo pods to reach valid state Full Stack Trace k8s.io/kubernetes/test/e2e/kubectl.glob..func1.6.3() /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:327 +0x22f k8s.io/kubernetes/test/e2e.RunE2ETests(0x24c66d7) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:133 +0x697 k8s.io/kubernetes/test/e2e.TestE2E(0x0) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:136 +0x19 testing.tRunner(0xc0007f6680, 0x735e880) /usr/local/go/src/testing/testing.go:1259 +0x102 created by testing.(*T).Run /usr/local/go/src/testing/testing.go:1306 +0x35a STEP: using delete to clean up resources Jan 17 15:12:02.782: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-8787 delete --grace-period=0 --force -f -' Jan 17 15:12:03.125: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Jan 17 15:12:03.126: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" Jan 17 15:12:03.126: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-8787 get rc,svc -l name=update-demo --no-headers' Jan 17 15:12:03.486: INFO: stderr: "No resources found in kubectl-8787 namespace.\n" Jan 17 15:12:03.486: INFO: stdout: "" Jan 17 15:12:03.487: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-8787 get pods -l name=update-demo -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Jan 17 15:12:03.790: INFO: stderr: "" Jan 17 15:12:03.790: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 17 15:12:03.791: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-8787" for this suite.
• Failure [445.167 seconds]
[sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Update Demo /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:294 should scale a replication controller [Conformance] [It] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 Jan 17 15:12:02.780: Timed out after 300 seconds waiting for name=update-demo pods to reach valid state /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:327
------------------------------
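Note what actually failed above: both pods were "running right image", but the validator repeatedly hit "the server is currently unable to handle the request" until the 300-second budget expired, so the spec timed out rather than observing a bad pod. The running-state probe the test keeps retrying is the kubectl template query quoted in the log; a rough standalone sketch of that probe (illustrative only; podRunning is a hypothetical helper, not the e2e framework's code, and the template string is copied verbatim from the log):

package main

import (
	"fmt"
	"os/exec"
)

// podRunning shells out to kubectl the way the test does: the template prints
// "true" only when the update-demo container reports a running state.
func podRunning(namespace, pod string) (bool, error) {
	tmpl := `{{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}`
	out, err := exec.Command("kubectl", "--kubeconfig=/tmp/kubeconfig",
		"--namespace="+namespace, "get", "pods", pod,
		"-o", "template", "--template="+tmpl).Output()
	if err != nil {
		// This is where errors such as "the server is currently unable to
		// handle the request" surface when the apiserver is unreachable.
		return false, err
	}
	return string(out) == "true", nil
}

func main() {
	ok, err := podRunning("kubectl-8787", "update-demo-nautilus-xhfwz")
	fmt.Println(ok, err)
}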
[BeforeEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 17 15:11:54.941: INFO: >>> kubeConfig: /tmp/kubeconfig STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a replica set. [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ReplicaSet STEP: Ensuring resource quota status captures replicaset creation STEP: Deleting a ReplicaSet STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 17 15:12:06.078: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-3148" for this suite.
•
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. [Conformance]","total":-1,"completed":15,"skipped":304,"failed":0}
SS
------------------------------
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Jan 17 15:12:06.122: INFO: >>> kubeConfig: /tmp/kubeconfig STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [It] works for CRD preserving unknown fields in an embedded object [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 Jan 17 15:12:06.183: INFO: >>> kubeConfig: /tmp/kubeconfig STEP: client-side validation (kubectl create and apply) allows request with any unknown properties Jan 17 15:12:13.276: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=crd-publish-openapi-7337 --namespace=crd-publish-openapi-7337 create -f -' Jan 17 15:12:15.376: INFO: stderr: "" Jan 17 15:12:15.376: INFO: stdout: "e2e-test-crd-publish-openapi-1529-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n" Jan 17 15:12:15.376: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=crd-publish-openapi-7337 --namespace=crd-publish-openapi-7337 delete e2e-test-crd-publish-openapi-1529-crds test-cr' Jan 17 15:12:15.616: INFO: stderr: "" Jan 17 15:12:15.616: INFO: stdout: "e2e-test-crd-publish-openapi-1529-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n" Jan 17 15:12:15.617: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=crd-publish-openapi-7337 --namespace=crd-publish-openapi-7337 apply -f -' Jan 17 15:12:16.315: INFO: stderr: "" Jan 17 15:12:16.315: INFO: stdout: "e2e-test-crd-publish-openapi-1529-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n" Jan 17 15:12:16.315: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=crd-publish-openapi-7337 --namespace=crd-publish-openapi-7337 delete e2e-test-crd-publish-openapi-1529-crds test-cr' Jan 17 15:12:16.522: INFO: stderr: "" Jan 17 15:12:16.522: INFO: stdout: "e2e-test-crd-publish-openapi-1529-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR Jan 17 15:12:16.522: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=crd-publish-openapi-7337 explain e2e-test-crd-publish-openapi-1529-crds' Jan 17 15:12:17.011: INFO: stderr: "" Jan 17 15:12:17.011: INFO: stdout: "KIND: e2e-test-crd-publish-openapi-1529-crd\nVERSION: crd-publish-openapi-test-unknown-in-nested.example.com/v1\n\nDESCRIPTION:\n preserve-unknown-properties in nested field for Testing\n\nFIELDS:\n apiVersion\t<string>\n APIVersion defines the versioned schema of this representation of an\n object. Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t<string>\n Kind is a string value representing the REST resource this object\n represents. Servers may infer this from the endpoint the client submits\n requests to.
SSSSSSSSSS
------------------------------
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 17 15:12:20.656: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:244
[It] should check if kubectl can dry-run update Pods [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: running the image k8s.gcr.io/e2e-test-images/httpd:2.4.38-2
Jan 17 15:12:20.698: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-8502 run e2e-test-httpd-pod --image=k8s.gcr.io/e2e-test-images/httpd:2.4.38-2 --pod-running-timeout=2m0s --labels=run=e2e-test-httpd-pod'
Jan 17 15:12:20.881: INFO: stderr: ""
Jan 17 15:12:20.881: INFO: stdout: "pod/e2e-test-httpd-pod created\n"
STEP: replace the image in the pod with server-side dry-run
Jan 17 15:12:20.881: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-8502 patch pod e2e-test-httpd-pod -p {"spec":{"containers":[{"name": "e2e-test-httpd-pod","image": "k8s.gcr.io/e2e-test-images/busybox:1.29-2"}]}} --dry-run=server'
Jan 17 15:12:22.297: INFO: stderr: ""
Jan 17 15:12:22.297: INFO: stdout: "pod/e2e-test-httpd-pod patched\n"
STEP: verifying the pod e2e-test-httpd-pod has the right image k8s.gcr.io/e2e-test-images/httpd:2.4.38-2
Jan 17 15:12:22.309: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-8502 delete pods e2e-test-httpd-pod'
Jan 17 15:12:24.929: INFO: stderr: ""
Jan 17 15:12:24.929: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 17 15:12:24.929: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-8502" for this suite.
•
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl server-side dry-run should check if kubectl can dry-run update Pods [Conformance]","total":-1,"completed":17,"skipped":316,"failed":0}
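The dry-run spec above hinges on --dry-run=server semantics: the patch passes admission and validation on the API server but is never persisted. A hand-run equivalent, with hypothetical names:

$ kubectl run web --image=httpd:2.4
# Server-side dry-run: returns "patched" without persisting the change...
$ kubectl patch pod web --dry-run=server \
  -p '{"spec":{"containers":[{"name":"web","image":"busybox:1.29"}]}}'
# ...so the live object still carries the original image.
$ kubectl get pod web -o jsonpath='{.spec.containers[0].image}'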
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-apps] CronJob
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 17 15:11:42.096: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename cronjob
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should schedule multiple jobs concurrently [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating a cronjob
STEP: Ensuring more than one job is running at a time
STEP: Ensuring at least two running jobs exists by listing jobs explicitly
STEP: Removing cronjob
[AfterEach] [sig-apps] CronJob
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 17 15:13:00.207: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "cronjob-2786" for this suite.
•
------------------------------
{"msg":"PASSED [sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","total":-1,"completed":15,"skipped":223,"failed":1,"failures":["[sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]"]}
SS
------------------------------
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 17 15:13:00.248: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating a pod to test emptydir 0666 on node default medium
Jan 17 15:13:00.297: INFO: Waiting up to 5m0s for pod "pod-9b73caba-91b4-4429-be84-183b053b2d1f" in namespace "emptydir-9972" to be "Succeeded or Failed"
Jan 17 15:13:00.302: INFO: Pod "pod-9b73caba-91b4-4429-be84-183b053b2d1f": Phase="Pending", Reason="", readiness=false. Elapsed: 5.22222ms
Jan 17 15:13:02.308: INFO: Pod "pod-9b73caba-91b4-4429-be84-183b053b2d1f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011008493s
Jan 17 15:13:04.312: INFO: Pod "pod-9b73caba-91b4-4429-be84-183b053b2d1f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.015562979s
STEP: Saw pod success
Jan 17 15:13:04.313: INFO: Pod "pod-9b73caba-91b4-4429-be84-183b053b2d1f" satisfied condition "Succeeded or Failed"
Jan 17 15:13:04.317: INFO: Trying to get logs from node k8s-upgrade-and-conformance-lxkik8-md-0-g974z-f76d7886c-88xc9 pod pod-9b73caba-91b4-4429-be84-183b053b2d1f container test-container: <nil>
STEP: delete the pod
Jan 17 15:13:04.351: INFO: Waiting for pod pod-9b73caba-91b4-4429-be84-183b053b2d1f to disappear
Jan 17 15:13:04.355: INFO: Pod pod-9b73caba-91b4-4429-be84-183b053b2d1f no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 17 15:13:04.355: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-9972" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":16,"skipped":225,"failed":1,"failures":["[sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]"]}
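For reference, the CronJob spec above gets overlapping jobs by combining concurrencyPolicy: Allow with a job that outlives its schedule interval; a minimal reproduction, with schedule, names, and image all hypothetical:

$ cat <<EOF | kubectl apply -f -
apiVersion: batch/v1
kind: CronJob
metadata:
  name: concurrent-demo
spec:
  schedule: "*/1 * * * *"
  concurrencyPolicy: Allow      # the default: new jobs start even if the last is still running
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: Never
          containers:
          - name: sleep
            image: busybox:1.29
            command: ["sleep", "300"]   # outlives the one-minute schedule
EOF
$ kubectl get jobs --watch    # two running jobs should coexist after ~2 minutes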
SSSSSSS
------------------------------
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 17 15:13:04.381: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating a pod to test emptydir 0777 on tmpfs
Jan 17 15:13:04.426: INFO: Waiting up to 5m0s for pod "pod-c5c31b9d-abe3-478d-b3bc-5e161874ea20" in namespace "emptydir-6269" to be "Succeeded or Failed"
Jan 17 15:13:04.431: INFO: Pod "pod-c5c31b9d-abe3-478d-b3bc-5e161874ea20": Phase="Pending", Reason="", readiness=false. Elapsed: 4.292747ms
Jan 17 15:13:06.438: INFO: Pod "pod-c5c31b9d-abe3-478d-b3bc-5e161874ea20": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011636942s
Jan 17 15:13:08.444: INFO: Pod "pod-c5c31b9d-abe3-478d-b3bc-5e161874ea20": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.017646751s
STEP: Saw pod success
Jan 17 15:13:08.444: INFO: Pod "pod-c5c31b9d-abe3-478d-b3bc-5e161874ea20" satisfied condition "Succeeded or Failed"
Jan 17 15:13:08.448: INFO: Trying to get logs from node k8s-upgrade-and-conformance-lxkik8-worker-dueiad pod pod-c5c31b9d-abe3-478d-b3bc-5e161874ea20 container test-container: <nil>
STEP: delete the pod
Jan 17 15:13:08.473: INFO: Waiting for pod pod-c5c31b9d-abe3-478d-b3bc-5e161874ea20 to disappear
Jan 17 15:13:08.476: INFO: Pod pod-c5c31b9d-abe3-478d-b3bc-5e161874ea20 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 17 15:13:08.476: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-6269" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":17,"skipped":232,"failed":1,"failures":["[sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]"]}
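The two emptyDir specs above differ mainly in file mode and medium: medium: Memory backs the volume with tmpfs, while omitting it uses node-default storage. A sketch of the tmpfs variant (pod name hypothetical):

$ cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-demo
spec:
  restartPolicy: Never
  containers:
  - name: test
    image: busybox:1.29
    command: ["sh", "-c", "mount | grep /data"]   # should report tmpfs
    volumeMounts:
    - name: scratch
      mountPath: /data
  volumes:
  - name: scratch
    emptyDir:
      medium: Memory     # tmpfs; drop this line for the node-default medium
EOF
$ kubectl logs emptydir-demo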
SSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-storage] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 17 15:13:08.512: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating configMap with name cm-test-opt-del-4bd04e8d-297f-479f-931d-caf03a5df73a
STEP: Creating configMap with name cm-test-opt-upd-c9358702-41f1-4fb5-b46e-7ac571ce3f43
STEP: Creating the pod
Jan 17 15:13:08.558: INFO: The status of Pod pod-configmaps-a156b550-581c-41ff-ad84-5a7bd42b6a06 is Pending, waiting for it to be Running (with Ready = true)
Jan 17 15:13:10.564: INFO: The status of Pod pod-configmaps-a156b550-581c-41ff-ad84-5a7bd42b6a06 is Running (Ready = true)
STEP: Deleting configmap cm-test-opt-del-4bd04e8d-297f-479f-931d-caf03a5df73a
STEP: Updating configmap cm-test-opt-upd-c9358702-41f1-4fb5-b46e-7ac571ce3f43
STEP: Creating configMap with name cm-test-opt-create-6f2692b0-2442-4b58-84f8-bc4d47c527cf
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 17 15:13:12.643: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-1346" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":-1,"completed":18,"skipped":249,"failed":1,"failures":["[sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]"]}
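For reference, the "optional updates" behaviour above comes from optional: true on the volume source: the pod starts even though one referenced ConfigMap is absent, and the kubelet later projects creates, updates, and deletes into the mounted directory. Sketch, with all names hypothetical:

$ cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: optional-cm-demo
spec:
  containers:
  - name: test
    image: busybox:1.29
    command: ["sleep", "3600"]
    volumeMounts:
    - name: cfg
      mountPath: /etc/cm
  volumes:
  - name: cfg
    configMap:
      name: cm-created-later
      optional: true      # pod starts despite the missing ConfigMap
EOF
$ kubectl create configmap cm-created-later --from-literal=key=value
# After the kubelet's next sync the key appears as a file in the volume:
$ kubectl exec optional-cm-demo -- cat /etc/cm/key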
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 17 15:13:12.716: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41
[It] should update annotations on modification [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating the pod
Jan 17 15:13:12.753: INFO: The status of Pod annotationupdate7a146072-647f-451f-9927-b6640db811c2 is Pending, waiting for it to be Running (with Ready = true)
Jan 17 15:13:14.760: INFO: The status of Pod annotationupdate7a146072-647f-451f-9927-b6640db811c2 is Running (Ready = true)
Jan 17 15:13:15.288: INFO: Successfully updated pod "annotationupdate7a146072-647f-451f-9927-b6640db811c2"
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 17 15:13:19.316: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7586" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance]","total":-1,"completed":19,"skipped":295,"failed":1,"failures":["[sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]"]}
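The annotation-update spec relies on the kubelet re-rendering downwardAPI volume files when pod metadata changes. A hand-run version, with names hypothetical:

$ cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: annotation-demo
  annotations:
    build: "1"
spec:
  containers:
  - name: test
    image: busybox:1.29
    command: ["sleep", "3600"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: annotations
        fieldRef:
          fieldPath: metadata.annotations
EOF
$ kubectl annotate pod annotation-demo build=2 --overwrite
# The mounted file is refreshed on the kubelet's sync loop, not instantly:
$ kubectl exec annotation-demo -- cat /etc/podinfo/annotations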
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-storage] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 17 15:13:19.397: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating secret with name secret-test-e4a30463-0fa2-4cbd-8bbf-82ee1ef5f8b8
STEP: Creating a pod to test consume secrets
Jan 17 15:13:19.452: INFO: Waiting up to 5m0s for pod "pod-secrets-f3754a65-4d51-46af-a599-abcc3f68a16a" in namespace "secrets-9059" to be "Succeeded or Failed"
Jan 17 15:13:19.459: INFO: Pod "pod-secrets-f3754a65-4d51-46af-a599-abcc3f68a16a": Phase="Pending", Reason="", readiness=false. Elapsed: 6.820527ms
Jan 17 15:13:21.464: INFO: Pod "pod-secrets-f3754a65-4d51-46af-a599-abcc3f68a16a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011474813s
Jan 17 15:13:23.468: INFO: Pod "pod-secrets-f3754a65-4d51-46af-a599-abcc3f68a16a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01610808s
STEP: Saw pod success
Jan 17 15:13:23.468: INFO: Pod "pod-secrets-f3754a65-4d51-46af-a599-abcc3f68a16a" satisfied condition "Succeeded or Failed"
Jan 17 15:13:23.474: INFO: Trying to get logs from node k8s-upgrade-and-conformance-lxkik8-worker-dueiad pod pod-secrets-f3754a65-4d51-46af-a599-abcc3f68a16a container secret-volume-test: <nil>
STEP: delete the pod
Jan 17 15:13:23.496: INFO: Waiting for pod pod-secrets-f3754a65-4d51-46af-a599-abcc3f68a16a to disappear
Jan 17 15:13:23.500: INFO: Pod pod-secrets-f3754a65-4d51-46af-a599-abcc3f68a16a no longer exists
[AfterEach] [sig-storage] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 17 15:13:23.500: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-9059" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance]","total":-1,"completed":20,"skipped":325,"failed":1,"failures":["[sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]"]}
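Secret volumes mount the same way as the configMap volumes above; each key becomes a file containing the decoded value. Minimal sketch (names hypothetical):

$ kubectl create secret generic creds --from-literal=token=abc123
$ cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: secret-demo
spec:
  restartPolicy: Never
  containers:
  - name: test
    image: busybox:1.29
    command: ["cat", "/etc/creds/token"]
    volumeMounts:
    - name: creds
      mountPath: /etc/creds
      readOnly: true
  volumes:
  - name: creds
    secret:
      secretName: creds
EOF
$ kubectl logs secret-demo    # prints abc123 once the pod has run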
SSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-node] PodTemplates
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 17 15:13:23.531: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename podtemplate
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should run the lifecycle of PodTemplates [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[AfterEach] [sig-node] PodTemplates
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 17 15:13:23.602: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "podtemplate-4663" for this suite.
•
------------------------------
{"msg":"PASSED [sig-node] PodTemplates should run the lifecycle of PodTemplates [Conformance]","total":-1,"completed":21,"skipped":337,"failed":1,"failures":["[sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]"]}
SSS
------------------------------
[BeforeEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 17 15:12:25.125: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename crd-watch
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] watch on custom resource definition objects [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
Jan 17 15:12:25.177: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Creating first CR
Jan 17 15:12:27.813: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2023-01-17T15:12:27Z generation:1 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2023-01-17T15:12:27Z]] name:name1 resourceVersion:4887 uid:364b4daf-76fa-4658-bb75-8b9b7a5d072f] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Creating second CR
Jan 17 15:12:37.826: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2023-01-17T15:12:37Z generation:1 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2023-01-17T15:12:37Z]] name:name2 resourceVersion:4915 uid:2b1fc2be-9d76-457a-bcf7-99317a4fdfac] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Modifying first CR
Jan 17 15:12:47.839: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2023-01-17T15:12:27Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2023-01-17T15:12:47Z]] name:name1 resourceVersion:4932 uid:364b4daf-76fa-4658-bb75-8b9b7a5d072f] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Modifying second CR
Jan 17 15:12:57.849: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2023-01-17T15:12:37Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2023-01-17T15:12:57Z]] name:name2 resourceVersion:4949 uid:2b1fc2be-9d76-457a-bcf7-99317a4fdfac] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Deleting first CR
Jan 17 15:13:07.856: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2023-01-17T15:12:27Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2023-01-17T15:12:47Z]] name:name1 resourceVersion:5026 uid:364b4daf-76fa-4658-bb75-8b9b7a5d072f] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Deleting second CR
Jan 17 15:13:17.870: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2023-01-17T15:12:37Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2023-01-17T15:12:57Z]] name:name2 resourceVersion:5111 uid:2b1fc2be-9d76-457a-bcf7-99317a4fdfac] num:map[num1:9223372036854775807 num2:1000000]]}
[AfterEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 17 15:13:28.381: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-watch-3831" for this suite.
•
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance]","total":-1,"completed":18,"skipped":365,"failed":0}
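The watch spec above consumes the same event stream kubectl can display; given any custom resource (a hypothetical "widgets" CRD here), the ADDED/MODIFIED/DELETED sequence is reproducible from the CLI:

# Terminal 1: stream watch events for the custom resource.
$ kubectl get widgets --watch --output-watch-events
# Terminal 2: drive the same transitions the spec performs.
$ kubectl patch widget name1 --type=merge -p '{"dummy":"test"}'   # -> MODIFIED
$ kubectl delete widget name1                                     # -> DELETED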
SSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 17 15:13:28.419: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating a pod to test emptydir 0666 on tmpfs
Jan 17 15:13:28.472: INFO: Waiting up to 5m0s for pod "pod-5bf4eed7-440a-40af-bc1c-2ef23cefc726" in namespace "emptydir-1692" to be "Succeeded or Failed"
Jan 17 15:13:28.480: INFO: Pod "pod-5bf4eed7-440a-40af-bc1c-2ef23cefc726": Phase="Pending", Reason="", readiness=false. Elapsed: 7.200701ms
Jan 17 15:13:30.486: INFO: Pod "pod-5bf4eed7-440a-40af-bc1c-2ef23cefc726": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013382722s
Jan 17 15:13:32.490: INFO: Pod "pod-5bf4eed7-440a-40af-bc1c-2ef23cefc726": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.017918892s
STEP: Saw pod success
Jan 17 15:13:32.491: INFO: Pod "pod-5bf4eed7-440a-40af-bc1c-2ef23cefc726" satisfied condition "Succeeded or Failed"
Jan 17 15:13:32.494: INFO: Trying to get logs from node k8s-upgrade-and-conformance-lxkik8-md-0-g974z-f76d7886c-88xc9 pod pod-5bf4eed7-440a-40af-bc1c-2ef23cefc726 container test-container: <nil>
STEP: delete the pod
Jan 17 15:13:32.508: INFO: Waiting for pod pod-5bf4eed7-440a-40af-bc1c-2ef23cefc726 to disappear
Jan 17 15:13:32.512: INFO: Pod pod-5bf4eed7-440a-40af-bc1c-2ef23cefc726 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 17 15:13:32.512: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-1692" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":19,"skipped":377,"failed":0}
SSSSSSSSSS
------------------------------
[BeforeEach] [sig-apps] Deployment
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 17 15:13:23.619: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:89
[It] RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
Jan 17 15:13:23.644: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted)
Jan 17 15:13:23.653: INFO: Pod name sample-pod: Found 0 pods out of 1
Jan 17 15:13:28.658: INFO: Pod name sample-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Jan 17 15:13:30.668: INFO: Creating deployment "test-rolling-update-deployment"
Jan 17 15:13:30.673: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has
Jan 17 15:13:30.680: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created
Jan 17 15:13:32.688: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected
Jan 17 15:13:32.690: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted)
[AfterEach] [sig-apps] Deployment
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:83
Jan 17 15:13:32.699: INFO: Deployment "test-rolling-update-deployment": &Deployment{ObjectMeta:{test-rolling-update-deployment deployment-8551 b5954fba-cb94-4116-bf24-0302f198266e 5278 1 2023-01-17 15:13:30 +0000 UTC <nil> <nil> map[name:sample-pod] map[deployment.kubernetes.io/revision:3546343826724305833] [] [] [{e2e.test Update apps/v1 2023-01-17 15:13:30 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2023-01-17 15:13:31 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:updatedReplicas":{}}} status}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[name:sample-pod] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.39 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc002a56c58 <nil> ClusterFirst map[] <nil> false false false <nil> &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] <nil> nil [] <nil> <nil> <nil> map[] [] <nil> nil}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2023-01-17 15:13:30 +0000 UTC,LastTransitionTime:2023-01-17 15:13:30 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rolling-update-deployment-8656fc4b57" has successfully progressed.,LastUpdateTime:2023-01-17 15:13:31 +0000 UTC,LastTransitionTime:2023-01-17 15:13:30 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},}
Jan 17 15:13:32.703: INFO: New ReplicaSet "test-rolling-update-deployment-8656fc4b57" of Deployment "test-rolling-update-deployment": &ReplicaSet{ObjectMeta:{test-rolling-update-deployment-8656fc4b57 deployment-8551 949319aa-594d-455c-98db-7866d96f3ae3 5268 1 2023-01-17 15:13:30 +0000 UTC <nil> <nil> map[name:sample-pod pod-template-hash:8656fc4b57] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305833] [{apps/v1 Deployment test-rolling-update-deployment b5954fba-cb94-4116-bf24-0302f198266e 0xc0026bcc57 0xc0026bcc58}] [] [{kube-controller-manager Update apps/v1 2023-01-17 15:13:30 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b5954fba-cb94-4116-bf24-0302f198266e\"}":{}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2023-01-17 15:13:31 +0000 UTC FieldsV1 {"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}} status}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 8656fc4b57,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[name:sample-pod pod-template-hash:8656fc4b57] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.39 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc0026bcd08 <nil> ClusterFirst map[] <nil> false false false <nil> &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] <nil> nil [] <nil> <nil> <nil> map[] [] <nil> nil}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},}
Jan 17 15:13:32.703: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment":
Jan 17 15:13:32.703: INFO: &ReplicaSet{ObjectMeta:{test-rolling-update-controller deployment-8551 c5ab8b34-fdff-4496-a16c-1e39bcbd6ca9 5277 2 2023-01-17 15:13:23 +0000 UTC <nil> <nil> map[name:sample-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305832] [{apps/v1 Deployment test-rolling-update-deployment b5954fba-cb94-4116-bf24-0302f198266e 0xc0026bcb2f 0xc0026bcb40}] [] [{e2e.test Update apps/v1 2023-01-17 15:13:23 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2023-01-17 15:13:31 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b5954fba-cb94-4116-bf24-0302f198266e\"}":{}}},"f:spec":{"f:replicas":{}}} } {kube-controller-manager Update apps/v1 2023-01-17 15:13:31 +0000 UTC FieldsV1 {"f:status":{"f:observedGeneration":{},"f:replicas":{}}} status}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[name:sample-pod pod:httpd] map[] [] [] []} {[] [] [{httpd k8s.gcr.io/e2e-test-images/httpd:2.4.38-2 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc0026bcbf8 <nil> ClusterFirst map[] <nil> false false false <nil> PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] <nil> nil [] <nil> <nil> <nil> map[] [] <nil> nil}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
Jan 17 15:13:32.707: INFO: Pod "test-rolling-update-deployment-8656fc4b57-vw8vx" is available: &Pod{ObjectMeta:{test-rolling-update-deployment-8656fc4b57-vw8vx test-rolling-update-deployment-8656fc4b57- deployment-8551 08147543-9341-4f5e-86d4-8537e853a1e5 5267 0 2023-01-17 15:13:30 +0000 UTC <nil> <nil> map[name:sample-pod pod-template-hash:8656fc4b57] map[] [{apps/v1 ReplicaSet test-rolling-update-deployment-8656fc4b57 949319aa-594d-455c-98db-7866d96f3ae3 0xc0026bd147 0xc0026bd148}] [] [{kube-controller-manager Update v1 2023-01-17 15:13:30 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"949319aa-594d-455c-98db-7866d96f3ae3\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2023-01-17 15:13:31 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"192.168.2.18\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-hdb8f,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:k8s.gcr.io/e2e-test-images/agnhost:2.39,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-hdb8f,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k8s-upgrade-and-conformance-lxkik8-worker-krt1ur,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-17 15:13:30 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-17 15:13:31 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-17 15:13:31 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-17 15:13:30 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.7,PodIP:192.168.2.18,StartTime:2023-01-17 15:13:30 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2023-01-17 15:13:31 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/agnhost:2.39,ImageID:k8s.gcr.io/e2e-test-images/agnhost@sha256:7e8bdd271312fd25fc5ff5a8f04727be84044eb3d7d8d03611972a6752e2e11e,ContainerID:containerd://d1dfb71a5b8c6d705fcf51a65f448def097bbe1060878d4b2d42295db5999f83,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:192.168.2.18,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 17 15:13:32.708: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-8551" for this suite.
•
------------------------------
{"msg":"PASSED [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance]","total":-1,"completed":22,"skipped":340,"failed":1,"failures":["[sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]"]}
SSSSSSSSSSS
------------------------------
[BeforeEach] [sig-node] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 17 15:13:32.735: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should fail to create ConfigMap with empty key [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating configMap that has name configmap-test-emptyKey-9da0d662-6484-41ac-995d-6447545741e7
[AfterEach] [sig-node] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 17 15:13:32.756: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-8979" for this suite.
•
------------------------------
{"msg":"PASSED [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance]","total":-1,"completed":23,"skipped":351,"failed":1,"failures":["[sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]"]}
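For reference, the rolling-update mechanics dumped in the Deployment spec above (old ReplicaSet scaled to 0, new hash-suffixed ReplicaSet at full count, deployment.kubernetes.io/revision bumped) are observable with plain kubectl; a sketch with hypothetical names:

$ kubectl create deployment demo --image=httpd:2.4
# Changing the template triggers a rolling update onto a new ReplicaSet.
$ kubectl set image deployment/demo httpd=busybox:1.29
$ kubectl rollout status deployment/demo
# The old RS stays at 0 replicas for rollback; the new RS carries the pods.
$ kubectl get rs -l app=demo
$ kubectl get deployment demo -o jsonpath='{.metadata.annotations.deployment\.kubernetes\.io/revision}'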
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-node] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 17 15:13:32.807: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should run through a ConfigMap lifecycle [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: creating a ConfigMap
STEP: fetching the ConfigMap
STEP: patching the ConfigMap
STEP: listing all ConfigMaps in all namespaces with a label selector
STEP: deleting the ConfigMap by collection with a label selector
STEP: listing all ConfigMaps in test namespace
[AfterEach] [sig-node] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 17 15:13:32.858: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-327" for this suite.
•
------------------------------
{"msg":"PASSED [sig-node] ConfigMap should run through a ConfigMap lifecycle [Conformance]","total":-1,"completed":24,"skipped":383,"failed":1,"failures":["[sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]"]}
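The lifecycle steps above map one-to-one onto kubectl verbs; a hand-run equivalent with hypothetical names:

$ kubectl create configmap lifecycle-demo --from-literal=data=1
$ kubectl get configmap lifecycle-demo -o yaml                                    # fetch
$ kubectl patch configmap lifecycle-demo --type=merge -p '{"data":{"data":"2"}}'  # patch
$ kubectl label configmap lifecycle-demo test=updates
$ kubectl get configmaps -A -l test=updates      # list across namespaces by label selector
$ kubectl delete configmaps -l test=updates      # delete by collection with the same selector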
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 17 15:13:32.534: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] creating/deleting custom resource definition objects works [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
Jan 17 15:13:32.554: INFO: >>> kubeConfig: /tmp/kubeconfig
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 17 15:13:33.575: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-9549" for this suite.
•
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance]","total":-1,"completed":20,"skipped":387,"failed":0}
SSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 17 15:13:32.941: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating a pod to test downward API volume plugin
Jan 17 15:13:32.978: INFO: Waiting up to 5m0s for pod "downwardapi-volume-b3b380f6-7ee3-4970-9990-dd71427d2f14" in namespace "projected-3556" to be "Succeeded or Failed"
Jan 17 15:13:32.982: INFO: Pod "downwardapi-volume-b3b380f6-7ee3-4970-9990-dd71427d2f14": Phase="Pending", Reason="", readiness=false. Elapsed: 4.152868ms
Jan 17 15:13:34.988: INFO: Pod "downwardapi-volume-b3b380f6-7ee3-4970-9990-dd71427d2f14": Phase="Running", Reason="", readiness=false. Elapsed: 2.010765214s
Jan 17 15:13:36.995: INFO: Pod "downwardapi-volume-b3b380f6-7ee3-4970-9990-dd71427d2f14": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.017208429s
STEP: Saw pod success
Jan 17 15:13:36.995: INFO: Pod "downwardapi-volume-b3b380f6-7ee3-4970-9990-dd71427d2f14" satisfied condition "Succeeded or Failed"
Jan 17 15:13:36.999: INFO: Trying to get logs from node k8s-upgrade-and-conformance-lxkik8-worker-dueiad pod downwardapi-volume-b3b380f6-7ee3-4970-9990-dd71427d2f14 container client-container: <nil>
STEP: delete the pod
Jan 17 15:13:37.019: INFO: Waiting for pod downwardapi-volume-b3b380f6-7ee3-4970-9990-dd71427d2f14 to disappear
Jan 17 15:13:37.022: INFO: Pod downwardapi-volume-b3b380f6-7ee3-4970-9990-dd71427d2f14 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 17 15:13:37.022: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3556" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance]","total":-1,"completed":25,"skipped":427,"failed":1,"failures":["[sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]"]}
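The memory-limit spec uses a projected downwardAPI source with resourceFieldRef, which exposes a container's limits or requests as a file scaled by divisor. Sketch, with names hypothetical:

$ cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: limit-demo
spec:
  restartPolicy: Never
  containers:
  - name: test
    image: busybox:1.29
    command: ["cat", "/etc/podinfo/mem_limit"]
    resources:
      limits:
        memory: 64Mi
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: mem_limit
            resourceFieldRef:
              containerName: test
              resource: limits.memory
              divisor: 1Mi
EOF
$ kubectl logs limit-demo    # prints 64 (the limit expressed in units of 1Mi)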
SS
------------------------------
[BeforeEach] [sig-storage] Projected configMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 17 15:13:33.614: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating configMap with name projected-configmap-test-volume-3a9b16fa-6b06-4310-a1a8-1b9448d5958e
STEP: Creating a pod to test consume configMaps
Jan 17 15:13:33.653: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-00d51d6b-9102-4cb0-a811-73b4b216c269" in namespace "projected-1186" to be "Succeeded or Failed"
Jan 17 15:13:33.656: INFO: Pod "pod-projected-configmaps-00d51d6b-9102-4cb0-a811-73b4b216c269": Phase="Pending", Reason="", readiness=false. Elapsed: 3.726784ms
Jan 17 15:13:35.661: INFO: Pod "pod-projected-configmaps-00d51d6b-9102-4cb0-a811-73b4b216c269": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008096731s
Jan 17 15:13:37.668: INFO: Pod "pod-projected-configmaps-00d51d6b-9102-4cb0-a811-73b4b216c269": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.015109564s
STEP: Saw pod success
Jan 17 15:13:37.668: INFO: Pod "pod-projected-configmaps-00d51d6b-9102-4cb0-a811-73b4b216c269" satisfied condition "Succeeded or Failed"
Jan 17 15:13:37.671: INFO: Trying to get logs from node k8s-upgrade-and-conformance-lxkik8-md-0-g974z-f76d7886c-88xc9 pod pod-projected-configmaps-00d51d6b-9102-4cb0-a811-73b4b216c269 container agnhost-container: <nil>
STEP: delete the pod
Jan 17 15:13:37.695: INFO: Waiting for pod pod-projected-configmaps-00d51d6b-9102-4cb0-a811-73b4b216c269 to disappear
Jan 17 15:13:37.697: INFO: Pod pod-projected-configmaps-00d51d6b-9102-4cb0-a811-73b4b216c269 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 17 15:13:37.698: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1186" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":21,"skipped":406,"failed":0}
SSSSSSS
------------------------------
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 17 15:13:37.721: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:244
[It] should check if kubectl describe prints relevant information for rc and pods [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
Jan 17 15:13:37.756: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-2788 create -f -'
Jan 17 15:13:39.042: INFO: stderr: ""
Jan 17 15:13:39.042: INFO: stdout: "replicationcontroller/agnhost-primary created\n"
Jan 17 15:13:39.042: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-2788 create -f -'
Jan 17 15:13:39.383: INFO: stderr: ""
Jan 17 15:13:39.383: INFO: stdout: "service/agnhost-primary created\n"
STEP: Waiting for Agnhost primary to start.
Jan 17 15:13:40.389: INFO: Selector matched 1 pods for map[app:agnhost]
Jan 17 15:13:40.389: INFO: Found 0 / 1
Jan 17 15:13:41.390: INFO: Selector matched 1 pods for map[app:agnhost]
Jan 17 15:13:41.390: INFO: Found 1 / 1
Jan 17 15:13:41.390: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1
Jan 17 15:13:41.396: INFO: Selector matched 1 pods for map[app:agnhost]
Jan 17 15:13:41.396: INFO: ForEach: Found 1 pods from the filter. Now looping through them.
Jan 17 15:13:41.396: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-2788 describe pod agnhost-primary-r4t6x' Jan 17 15:13:41.566: INFO: stderr: "" Jan 17 15:13:41.566: INFO: stdout: "Name: agnhost-primary-r4t6x\nNamespace: kubectl-2788\nPriority: 0\nNode: k8s-upgrade-and-conformance-lxkik8-worker-dueiad/172.18.0.6\nStart Time: Tue, 17 Jan 2023 15:13:39 +0000\nLabels: app=agnhost\n role=primary\nAnnotations: <none>\nStatus: Running\nIP: 192.168.6.15\nIPs:\n IP: 192.168.6.15\nControlled By: ReplicationController/agnhost-primary\nContainers:\n agnhost-primary:\n Container ID: containerd://1f099b2cbd69d52e77b9ac54776e9768cffaf09ab6ab8a5a9f8b9cc109c59c44\n Image: k8s.gcr.io/e2e-test-images/agnhost:2.39\n Image ID: k8s.gcr.io/e2e-test-images/agnhost@sha256:7e8bdd271312fd25fc5ff5a8f04727be84044eb3d7d8d03611972a6752e2e11e\n Port: 6379/TCP\n Host Port: 0/TCP\n State: Running\n Started: Tue, 17 Jan 2023 15:13:39 +0000\n Ready: True\n Restart Count: 0\n Environment: <none>\n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-85rhz (ro)\nConditions:\n Type Status\n Initialized True \n Ready True \n ContainersReady True \n PodScheduled True \nVolumes:\n kube-api-access-85rhz:\n Type: Projected (a volume that contains injected data from multiple sources)\n TokenExpirationSeconds: 3607\n ConfigMapName: kube-root-ca.crt\n ConfigMapOptional: <nil>\n DownwardAPI: true\nQoS Class: BestEffort\nNode-Selectors: <none>\nTolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s\n node.kubernetes.io/unreachable:NoExecute op=Exists for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled 2s default-scheduler Successfully assigned kubectl-2788/agnhost-primary-r4t6x to k8s-upgrade-and-conformance-lxkik8-worker-dueiad\n Normal Pulled 2s kubelet Container image \"k8s.gcr.io/e2e-test-images/agnhost:2.39\" already present on machine\n Normal Created 2s kubelet Created container agnhost-primary\n Normal Started 2s kubelet Started container agnhost-primary\n" Jan 17 15:13:41.567: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-2788 describe rc agnhost-primary' Jan 17 15:13:41.715: INFO: stderr: "" Jan 17 15:13:41.716: INFO: stdout: "Name: agnhost-primary\nNamespace: kubectl-2788\nSelector: app=agnhost,role=primary\nLabels: app=agnhost\n role=primary\nAnnotations: <none>\nReplicas: 1 current / 1 desired\nPods Status: 1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n Labels: app=agnhost\n role=primary\n Containers:\n agnhost-primary:\n Image: k8s.gcr.io/e2e-test-images/agnhost:2.39\n Port: 6379/TCP\n Host Port: 0/TCP\n Environment: <none>\n Mounts: <none>\n Volumes: <none>\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal SuccessfulCreate 2s replication-controller Created pod: agnhost-primary-r4t6x\n" Jan 17 15:13:41.716: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-2788 describe service agnhost-primary' Jan 17 15:13:41.846: INFO: stderr: "" Jan 17 15:13:41.846: INFO: stdout: "Name: agnhost-primary\nNamespace: kubectl-2788\nLabels: app=agnhost\n role=primary\nAnnotations: <none>\nSelector: app=agnhost,role=primary\nType: ClusterIP\nIP Family Policy: SingleStack\nIP Families: IPv4\nIP: 10.128.3.217\nIPs: 10.128.3.217\nPort: <unset> 6379/TCP\nTargetPort: agnhost-server/TCP\nEndpoints: 192.168.6.15:6379\nSession Affinity: None\nEvents: <none>\n" Jan 17 15:13:41.851: INFO: Running 
'/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-2788 describe node k8s-upgrade-and-conformance-lxkik8-cqx82-895kk' Jan 17 15:13:42.022: INFO: stderr: "" Jan 17 15:13:42.022: INFO: stdout: "Name: k8s-upgrade-and-conformance-lxkik8-cqx82-895kk\nRoles: control-plane,master\nLabels: beta.kubernetes.io/arch=amd64\n beta.kubernetes.io/os=linux\n kubernetes.io/arch=amd64\n kubernetes.io/hostname=k8s-upgrade-and-conformance-lxkik8-cqx82-895kk\n kubernetes.io/os=linux\n node-role.kubernetes.io/control-plane=\n node-role.kubernetes.io/master=\n node.kubernetes.io/exclude-from-external-load-balancers=\nAnnotations: cluster.x-k8s.io/cluster-name: k8s-upgrade-and-conformance-lxkik8\n cluster.x-k8s.io/cluster-namespace: k8s-upgrade-and-conformance-8q69yv\n cluster.x-k8s.io/machine: k8s-upgrade-and-conformance-lxkik8-cqx82-895kk\n cluster.x-k8s.io/owner-kind: KubeadmControlPlane\n cluster.x-k8s.io/owner-name: k8s-upgrade-and-conformance-lxkik8-cqx82\n kubeadm.alpha.kubernetes.io/cri-socket: /var/run/containerd/containerd.sock\n node.alpha.kubernetes.io/ttl: 0\n volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp: Tue, 17 Jan 2023 14:58:46 +0000\nTaints: node-role.kubernetes.io/master:NoSchedule\nUnschedulable: false\nLease:\n HolderIdentity: k8s-upgrade-and-conformance-lxkik8-cqx82-895kk\n AcquireTime: <unset>\n RenewTime: Tue, 17 Jan 2023 15:13:32 +0000\nConditions:\n Type Status LastHeartbeatTime LastTransitionTime Reason Message\n ---- ------ ----------------- ------------------ ------ -------\n MemoryPressure False Tue, 17 Jan 2023 15:10:39 +0000 Tue, 17 Jan 2023 14:58:46 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available\n DiskPressure False Tue, 17 Jan 2023 15:10:39 +0000 Tue, 17 Jan 2023 14:58:46 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure\n PIDPressure False Tue, 17 Jan 2023 15:10:39 +0000 Tue, 17 Jan 2023 14:58:46 +0000 KubeletHasSufficientPID kubelet has sufficient PID available\n Ready True Tue, 17 Jan 2023 15:10:39 +0000 Tue, 17 Jan 2023 15:00:26 +0000 KubeletReady kubelet is posting ready status\nAddresses:\n InternalIP: 172.18.0.9\n Hostname: k8s-upgrade-and-conformance-lxkik8-cqx82-895kk\nCapacity:\n cpu: 8\n ephemeral-storage: 253869360Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 65860692Ki\n pods: 110\nAllocatable:\n cpu: 8\n ephemeral-storage: 253869360Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 65860692Ki\n pods: 110\nSystem Info:\n Machine ID: b0c2650054e348359487fbe67937322c\n System UUID: 88df9ee9-5a72-4f5d-9d72-e729d139e2bb\n Boot ID: a8202b86-a467-4bde-936b-8367d028b2ee\n Kernel Version: 5.4.0-1081-gke\n OS Image: Ubuntu 21.10\n Operating System: linux\n Architecture: amd64\n Container Runtime Version: containerd://1.6.4\n Kubelet Version: v1.23.15\n Kube-Proxy Version: v1.23.15\nPodCIDR: 192.168.5.0/24\nPodCIDRs: 192.168.5.0/24\nProviderID: docker:////k8s-upgrade-and-conformance-lxkik8-cqx82-895kk\nNon-terminated Pods: (6 in total)\n Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits Age\n --------- ---- ------------ ---------- --------------- ------------- ---\n kube-system etcd-k8s-upgrade-and-conformance-lxkik8-cqx82-895kk 100m (1%) 0 (0%) 100Mi (0%) 0 (0%) 14m\n kube-system kindnet-6bgd4 100m (1%) 100m (1%) 50Mi (0%) 50Mi (0%) 14m\n kube-system kube-apiserver-k8s-upgrade-and-conformance-lxkik8-cqx82-895kk 250m (3%) 0 (0%) 0 (0%) 0 (0%) 14m\n kube-system kube-controller-manager-k8s-upgrade-and-conformance-lxkik8-cqx82-895kk 200m (2%) 0 (0%) 0 (0%) 0 (0%) 
14m\n kube-system kube-proxy-g6lr5 0 (0%) 0 (0%) 0 (0%) 0 (0%) 13m\n kube-system kube-scheduler-k8s-upgrade-and-conformance-lxkik8-cqx82-895kk 100m (1%) 0 (0%) 0 (0%) 0 (0%) 14m\nAllocated resources:\n (Total limits may be over 100 percent, i.e., overcommitted.)\n Resource Requests Limits\n -------- -------- ------\n cpu 750m (9%) 100m (1%)\n memory 150Mi (0%) 50Mi (0%)\n ephemeral-storage 0 (0%) 0 (0%)\n hugepages-1Gi 0 (0%) 0 (0%)\n hugepages-2Mi 0 (0%) 0 (0%)\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Starting 14m kube-proxy \n Normal Starting 13m kube-proxy \n Normal Starting 14m kubelet Starting kubelet.\n Normal NodeHasNoDiskPressure 14m kubelet Node k8s-upgrade-and-conformance-lxkik8-cqx82-895kk status is now: NodeHasNoDiskPressure\n Normal NodeHasSufficientPID 14m kubelet Node k8s-upgrade-and-conformance-lxkik8-cqx82-895kk status is now: NodeHasSufficientPID\n Normal NodeAllocatableEnforced 14m kubelet Updated Node Allocatable limit across pods\n Warning CheckLimitsForResolvConf 14m kubelet Resolv.conf file '/etc/resolv.conf' contains search line consisting of more than 3 domains!\n Warning InvalidDiskCapacity 14m kubelet invalid capacity 0 on image filesystem\n Normal NodeHasSufficientMemory 14m kubelet Node k8s-upgrade-and-conformance-lxkik8-cqx82-895kk status is now: NodeHasSufficientMemory\n Normal NodeReady 13m kubelet Node k8s-upgrade-and-conformance-lxkik8-cqx82-895kk status is now: NodeReady\n"
Jan 17 15:13:42.023: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-2788 describe namespace kubectl-2788'
Jan 17 15:13:42.162: INFO: stderr: ""
Jan 17 15:13:42.162: INFO: stdout: "Name: kubectl-2788\nLabels: e2e-framework=kubectl\n e2e-run=bace5da3-da20-4e87-b6ee-ab44c72e3de5\n kubernetes.io/metadata.name=kubectl-2788\nAnnotations: <none>\nStatus: Active\n\nNo resource quota.\n\nNo LimitRange resource.\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 17 15:13:42.162: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-2788" for this suite.
•
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance]","total":-1,"completed":22,"skipped":413,"failed":0}
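For these kubectl specs the suite shells out to the real binary rather than going through client-go; each "Running '/usr/local/bin/kubectl ...'" line above is a subprocess invocation whose stderr and stdout are captured and asserted on. A small Go sketch of the same pattern (the kubeconfig path, namespace, and object names are taken from the log; error handling is simplified):

    package main

    import (
        "fmt"
        "os/exec"
    )

    // describe runs the same kind of command the framework logs above.
    // Assumes kubectl is on PATH and /tmp/kubeconfig points at the workload cluster.
    func describe(kind, name, namespace string) (string, error) {
        out, err := exec.Command("kubectl",
            "--kubeconfig=/tmp/kubeconfig",
            "--namespace="+namespace,
            "describe", kind, name).CombinedOutput()
        return string(out), err
    }

    func main() {
        out, err := describe("rc", "agnhost-primary", "kubectl-2788")
        if err != nil {
            fmt.Println("describe failed:", err)
            return
        }
        fmt.Print(out)
    }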
SSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 17 15:13:42.200: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41
[It] should update annotations on modification [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating the pod
Jan 17 15:13:42.238: INFO: The status of Pod annotationupdate755f371f-18cb-4f24-bf28-d5b95d2b2afa is Pending, waiting for it to be Running (with Ready = true)
Jan 17 15:13:44.244: INFO: The status of Pod annotationupdate755f371f-18cb-4f24-bf28-d5b95d2b2afa is Running (Ready = true)
Jan 17 15:13:44.780: INFO: Successfully updated pod "annotationupdate755f371f-18cb-4f24-bf28-d5b95d2b2afa"
[AfterEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 17 15:13:48.804: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-1595" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance]","total":-1,"completed":23,"skipped":427,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 17 15:13:48.969: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41
[It] should update labels on modification [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating the pod
Jan 17 15:13:49.022: INFO: The status of Pod labelsupdate26c4f515-bc77-4aa9-9d3c-e523e2e79e78 is Pending, waiting for it to be Running (with Ready = true)
Jan 17 15:13:51.028: INFO: The status of Pod labelsupdate26c4f515-bc77-4aa9-9d3c-e523e2e79e78 is Running (Ready = true)
Jan 17 15:13:51.558: INFO: Successfully updated pod "labelsupdate26c4f515-bc77-4aa9-9d3c-e523e2e79e78"
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 17 15:13:55.594: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-517" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance]","total":-1,"completed":24,"skipped":501,"failed":0}
SSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] DNS
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 17 15:13:55.642: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-7799.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.dns-7799.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-7799.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.dns-7799.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;sleep 1; done
STEP: creating a pod to probe /etc/hosts
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jan 17 15:14:05.723: INFO: DNS probes using dns-7799/dns-test-a0d62b3a-a638-4b17-8a12-c6b6fee2568b succeeded
STEP: deleting the pod
[AfterEach] [sig-network] DNS
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 17 15:14:05.740: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-7799" for this suite.
•
------------------------------
{"msg":"PASSED [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]","total":-1,"completed":25,"skipped":519,"failed":0}
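The probe loops above use getent inside wheezy/jessie containers, retrying once per second for up to 600 seconds until the pod's own hostname and headless-service name resolve via /etc/hosts. A rough Go equivalent of that loop, meant to run inside the probe pod (not on the CI host); the name is copied from the log, and Go's resolver also consults /etc/hosts on Linux:

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        name := "dns-querier-1.dns-test-service.dns-7799.svc.cluster.local"
        // Retry for up to 600s, mirroring the `seq 1 600` shell loop.
        for i := 0; i < 600; i++ {
            if addrs, err := net.LookupHost(name); err == nil && len(addrs) > 0 {
                fmt.Println("OK", addrs)
                return
            }
            time.Sleep(1 * time.Second)
        }
        fmt.Println("never resolved:", name)
    }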
SSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 17 15:14:05.786: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Jan 17 15:14:06.331: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jan 17 15:14:09.359: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] listing mutating webhooks should work [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Listing all of the created validation webhooks
STEP: Creating a configMap that should be mutated
STEP: Deleting the collection of validation webhooks
STEP: Creating a configMap that should not be mutated
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 17 15:14:09.540: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-2185" for this suite.
STEP: Destroying namespace "webhook-2185-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
•
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","total":-1,"completed":26,"skipped":531,"failed":0}
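The "listing mutating webhooks" flow is a plain client-go exercise against the admissionregistration.k8s.io/v1 API: list the configurations, mutate a ConfigMap through them, delete the collection, then confirm a fresh ConfigMap is no longer mutated. A minimal sketch of the list step, assuming the same /tmp/kubeconfig the suite uses:

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        // List all mutating webhook configurations in the cluster; the spec
        // filters by a test label, which a real caller could pass via
        // metav1.ListOptions{LabelSelector: ...}.
        list, err := cs.AdmissionregistrationV1().MutatingWebhookConfigurations().
            List(context.TODO(), metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        for _, w := range list.Items {
            fmt.Println(w.Name, "-", len(w.Webhooks), "webhook(s)")
        }
    }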
SS
------------------------------
[BeforeEach] [sig-apps] Deployment
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 17 15:14:09.640: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:89
[It] should validate Deployment Status endpoints [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: creating a Deployment
Jan 17 15:14:09.694: INFO: Creating simple deployment test-deployment-6gb44
Jan 17 15:14:09.711: INFO: deployment "test-deployment-6gb44" doesn't have the required revision set
STEP: Getting /status
Jan 17 15:14:11.735: INFO: Deployment test-deployment-6gb44 has Conditions: [{Available True 2023-01-17 15:14:10 +0000 UTC 2023-01-17 15:14:10 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2023-01-17 15:14:10 +0000 UTC 2023-01-17 15:14:09 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-deployment-6gb44-764bc7c4b7" has successfully progressed.}]
STEP: updating Deployment Status
Jan 17 15:14:11.746: INFO: updatedStatus.Conditions: []v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:time.Date(2023, time.January, 17, 15, 14, 10, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 17, 15, 14, 10, 0, time.Local), Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.January, 17, 15, 14, 10, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 17, 15, 14, 9, 0, time.Local), Reason:"NewReplicaSetAvailable", Message:"ReplicaSet \"test-deployment-6gb44-764bc7c4b7\" has successfully progressed."}, v1.DeploymentCondition{Type:"StatusUpdate", Status:"True", LastUpdateTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), LastTransitionTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Reason:"E2E", Message:"Set from e2e test"}}
STEP: watching for the Deployment status to be updated
Jan 17 15:14:11.752: INFO: Observed &Deployment event: ADDED
Jan 17 15:14:11.752: INFO: Observed Deployment test-deployment-6gb44 in namespace deployment-2432 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Progressing True 2023-01-17 15:14:09 +0000 UTC 2023-01-17 15:14:09 +0000 UTC NewReplicaSetCreated Created new replica set "test-deployment-6gb44-764bc7c4b7"}
Jan 17 15:14:11.753: INFO: Observed &Deployment event: MODIFIED
Jan 17 15:14:11.753: INFO: Observed Deployment test-deployment-6gb44 in namespace deployment-2432 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Progressing True 2023-01-17 15:14:09 +0000 UTC 2023-01-17 15:14:09 +0000 UTC NewReplicaSetCreated Created new replica set "test-deployment-6gb44-764bc7c4b7"}
Jan 17 15:14:11.753: INFO: Observed Deployment test-deployment-6gb44 in namespace deployment-2432 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Available False 2023-01-17 15:14:09 +0000 UTC 2023-01-17 15:14:09 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.}
Jan 17 15:14:11.753: INFO: Observed &Deployment event: MODIFIED
Jan 17 15:14:11.753: INFO: Observed Deployment test-deployment-6gb44 in namespace deployment-2432 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Available False 2023-01-17 15:14:09 +0000 UTC 2023-01-17 15:14:09 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.}
Jan 17 15:14:11.753: INFO: Observed Deployment test-deployment-6gb44 in namespace deployment-2432 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Progressing True 2023-01-17 15:14:09 +0000 UTC 2023-01-17 15:14:09 +0000 UTC ReplicaSetUpdated ReplicaSet "test-deployment-6gb44-764bc7c4b7" is progressing.}
Jan 17 15:14:11.753: INFO: Observed &Deployment event: MODIFIED
Jan 17 15:14:11.753: INFO: Observed Deployment test-deployment-6gb44 in namespace deployment-2432 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Available True 2023-01-17 15:14:10 +0000 UTC 2023-01-17 15:14:10 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.}
Jan 17 15:14:11.753: INFO: Observed Deployment test-deployment-6gb44 in namespace deployment-2432 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Progressing True 2023-01-17 15:14:10 +0000 UTC 2023-01-17 15:14:09 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-deployment-6gb44-764bc7c4b7" has successfully progressed.}
Jan 17 15:14:11.754: INFO: Observed &Deployment event: MODIFIED
Jan 17 15:14:11.754: INFO: Observed Deployment test-deployment-6gb44 in namespace deployment-2432 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Available True 2023-01-17 15:14:10 +0000 UTC 2023-01-17 15:14:10 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.}
Jan 17 15:14:11.754: INFO: Observed Deployment test-deployment-6gb44 in namespace deployment-2432 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Progressing True 2023-01-17 15:14:10 +0000 UTC 2023-01-17 15:14:09 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-deployment-6gb44-764bc7c4b7" has successfully progressed.}
Jan 17 15:14:11.754: INFO: Found Deployment test-deployment-6gb44 in namespace deployment-2432 with labels: map[e2e:testing name:httpd] annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {StatusUpdate True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC E2E Set from e2e test}
Jan 17 15:14:11.754: INFO: Deployment test-deployment-6gb44 has an updated status
STEP: patching the Statefulset Status
Jan 17 15:14:11.754: INFO: Patch payload: {"status":{"conditions":[{"type":"StatusPatched","status":"True"}]}}
Jan 17 15:14:11.763: INFO: Patched status conditions: []v1.DeploymentCondition{v1.DeploymentCondition{Type:"StatusPatched", Status:"True", LastUpdateTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), LastTransitionTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Reason:"", Message:""}}
STEP: watching for the Deployment status to be patched
Jan 17 15:14:11.767: INFO: Observed &Deployment event: ADDED
Jan 17 15:14:11.767: INFO: Observed deployment test-deployment-6gb44 in namespace deployment-2432 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Progressing True 2023-01-17 15:14:09 +0000 UTC 2023-01-17 15:14:09 +0000 UTC NewReplicaSetCreated Created new replica set "test-deployment-6gb44-764bc7c4b7"}
Jan 17 15:14:11.767: INFO: Observed &Deployment event: MODIFIED
Jan 17 15:14:11.767: INFO: Observed deployment test-deployment-6gb44 in namespace deployment-2432 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Progressing True 2023-01-17 15:14:09 +0000 UTC 2023-01-17 15:14:09 +0000 UTC NewReplicaSetCreated Created new replica set "test-deployment-6gb44-764bc7c4b7"}
Jan 17 15:14:11.767: INFO: Observed deployment test-deployment-6gb44 in namespace deployment-2432 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Available False 2023-01-17 15:14:09 +0000 UTC 2023-01-17 15:14:09 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.}
Jan 17 15:14:11.767: INFO: Observed &Deployment event: MODIFIED
Jan 17 15:14:11.767: INFO: Observed deployment test-deployment-6gb44 in namespace deployment-2432 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Available False 2023-01-17 15:14:09 +0000 UTC 2023-01-17 15:14:09 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.}
Jan 17 15:14:11.767: INFO: Observed deployment test-deployment-6gb44 in namespace deployment-2432 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Progressing True 2023-01-17 15:14:09 +0000 UTC 2023-01-17 15:14:09 +0000 UTC ReplicaSetUpdated ReplicaSet "test-deployment-6gb44-764bc7c4b7" is progressing.}
Jan 17 15:14:11.767: INFO: Observed &Deployment event: MODIFIED
Jan 17 15:14:11.767: INFO: Observed deployment test-deployment-6gb44 in namespace deployment-2432 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Available True 2023-01-17 15:14:10 +0000 UTC 2023-01-17 15:14:10 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.}
Jan 17 15:14:11.767: INFO: Observed deployment test-deployment-6gb44 in namespace deployment-2432 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Progressing True 2023-01-17 15:14:10 +0000 UTC 2023-01-17 15:14:09 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-deployment-6gb44-764bc7c4b7" has successfully progressed.}
Jan 17 15:14:11.767: INFO: Observed &Deployment event: MODIFIED
Jan 17 15:14:11.767: INFO: Observed deployment test-deployment-6gb44 in namespace deployment-2432 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Available True 2023-01-17 15:14:10 +0000 UTC 2023-01-17 15:14:10 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.}
Jan 17 15:14:11.767: INFO: Observed deployment test-deployment-6gb44 in namespace deployment-2432 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Progressing True 2023-01-17 15:14:10 +0000 UTC 2023-01-17 15:14:09 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-deployment-6gb44-764bc7c4b7" has successfully progressed.}
Jan 17 15:14:11.767: INFO: Observed deployment test-deployment-6gb44 in namespace deployment-2432 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {StatusUpdate True 0001-01-01 00:00:00 +0000 UTC
0001-01-01 00:00:00 +0000 UTC E2E Set from e2e test} Jan 17 15:14:11.768: INFO: Observed &Deployment event: MODIFIED Jan 17 15:14:11.768: INFO: Found deployment test-deployment-6gb44 in namespace deployment-2432 with labels: map[e2e:testing name:httpd] annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {StatusPatched True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } Jan 17 15:14:11.768: INFO: Deployment test-deployment-6gb44 has a patched status [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:83 Jan 17 15:14:11.775: INFO: Deployment "test-deployment-6gb44": &Deployment{ObjectMeta:{test-deployment-6gb44 deployment-2432 07e01365-111d-4b76-911e-2eb7ec38f1bb 5760 1 2023-01-17 15:14:09 +0000 UTC <nil> <nil> map[e2e:testing name:httpd] map[deployment.kubernetes.io/revision:1] [] [] [{e2e.test Update apps/v1 2023-01-17 15:14:09 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:e2e":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:e2e":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2023-01-17 15:14:10 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:updatedReplicas":{}}} status} {e2e.test Update apps/v1 2023-01-17 15:14:11 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"StatusPatched\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:status":{},"f:type":{}}}}} status}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{e2e: testing,name: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[e2e:testing name:httpd] map[] [] [] []} {[] [] [{httpd k8s.gcr.io/e2e-test-images/httpd:2.4.38-2 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc002a4af88 <nil> ClusterFirst map[] <nil> false false false <nil> &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] <nil> nil [] <nil> <nil> <nil> map[] [] <nil> 
nil}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:StatusPatched,Status:True,Reason:,Message:,LastUpdateTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:0001-01-01 00:00:00 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} Jan 17 15:14:11.779: INFO: New ReplicaSet "test-deployment-6gb44-764bc7c4b7" of Deployment "test-deployment-6gb44": &ReplicaSet{ObjectMeta:{test-deployment-6gb44-764bc7c4b7 deployment-2432 71f6ad06-88c3-4ce7-89c5-40bf3bd78702 5745 1 2023-01-17 15:14:09 +0000 UTC <nil> <nil> map[e2e:testing name:httpd pod-template-hash:764bc7c4b7] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-deployment-6gb44 07e01365-111d-4b76-911e-2eb7ec38f1bb 0xc002a4b3f7 0xc002a4b3f8}] [] [{kube-controller-manager Update apps/v1 2023-01-17 15:14:09 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:e2e":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"07e01365-111d-4b76-911e-2eb7ec38f1bb\"}":{}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:e2e":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2023-01-17 15:14:10 +0000 UTC FieldsV1 {"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}} status}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{e2e: testing,name: httpd,pod-template-hash: 764bc7c4b7,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[e2e:testing name:httpd pod-template-hash:764bc7c4b7] map[] [] [] []} {[] [] [{httpd k8s.gcr.io/e2e-test-images/httpd:2.4.38-2 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc002a4b4d8 <nil> ClusterFirst map[] <nil> false false false <nil> &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] <nil> nil [] <nil> <nil> <nil> map[] [] <nil> nil}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Jan 17 15:14:11.784: INFO: Pod 
"test-deployment-6gb44-764bc7c4b7-z7ftp" is available: &Pod{ObjectMeta:{test-deployment-6gb44-764bc7c4b7-z7ftp test-deployment-6gb44-764bc7c4b7- deployment-2432 df56d51f-1640-4757-b526-b1b41b82e481 5743 0 2023-01-17 15:14:09 +0000 UTC <nil> <nil> map[e2e:testing name:httpd pod-template-hash:764bc7c4b7] map[] [{apps/v1 ReplicaSet test-deployment-6gb44-764bc7c4b7 71f6ad06-88c3-4ce7-89c5-40bf3bd78702 0xc002a4b967 0xc002a4b968}] [] [{kube-controller-manager Update v1 2023-01-17 15:14:09 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:e2e":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"71f6ad06-88c3-4ce7-89c5-40bf3bd78702\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2023-01-17 15:14:10 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"192.168.6.18\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-2nm2x,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-2nm2x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,Run
AsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k8s-upgrade-and-conformance-lxkik8-worker-dueiad,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-17 15:14:09 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-17 15:14:10 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-17 15:14:10 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-17 15:14:09 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.6,PodIP:192.168.6.18,StartTime:2023-01-17 15:14:09 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2023-01-17 15:14:10 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,ImageID:k8s.gcr.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3,ContainerID:containerd://5813591f1e5054fd93d0f19a0aaed4d3dd70a53736be118333d6a092c8d6bcfd,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:192.168.6.18,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 17 15:14:11.785: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-2432" for this suite.
•
------------------------------
{"msg":"PASSED [sig-apps] Deployment should validate Deployment Status endpoints [Conformance]","total":-1,"completed":27,"skipped":533,"failed":0}
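The "Patch payload" logged earlier in this spec targets the Deployment's /status subresource; note the spec's "patching the Statefulset Status" step label is an upstream copy-paste quirk, since the object being patched is a Deployment. A minimal client-go sketch of that patch, with the payload, namespace, and name copied from the log:

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/types"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        // Same payload the spec logs; the trailing "status" argument makes the
        // typed client hit the /status subresource instead of the main resource.
        payload := []byte(`{"status":{"conditions":[{"type":"StatusPatched","status":"True"}]}}`)
        d, err := cs.AppsV1().Deployments("deployment-2432").Patch(
            context.TODO(), "test-deployment-6gb44",
            types.MergePatchType, payload, metav1.PatchOptions{}, "status")
        if err != nil {
            panic(err)
        }
        fmt.Println("patched conditions:", d.Status.Conditions)
    }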
SSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-apps] CronJob
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 17 15:09:18.520: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename cronjob
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should not schedule jobs when suspended [Slow] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating a suspended cronjob
STEP: Ensuring no jobs are scheduled
STEP: Ensuring no job exists by listing jobs explicitly
STEP: Removing cronjob
[AfterEach] [sig-apps] CronJob
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 17 15:14:18.601: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "cronjob-7020" for this suite.
• [SLOW TEST:300.097 seconds]
[sig-apps] CronJob
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should not schedule jobs when suspended [Slow] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
------------------------------
{"msg":"PASSED [sig-apps] CronJob should not schedule jobs when suspended [Slow] [Conformance]","total":-1,"completed":3,"skipped":163,"failed":2,"failures":["[sig-node] PreStop should call prestop when killing a pod [Conformance]","[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]"]}
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 17 15:14:18.620: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41
[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating a pod to test downward API volume plugin
Jan 17 15:14:18.659: INFO: Waiting up to 5m0s for pod "downwardapi-volume-2e0b2858-2064-4104-84f5-e59a0d8fb3f8" in namespace "downward-api-8374" to be "Succeeded or Failed"
Jan 17 15:14:18.664: INFO: Pod "downwardapi-volume-2e0b2858-2064-4104-84f5-e59a0d8fb3f8": Phase="Pending", Reason="", readiness=false. Elapsed: 5.127686ms
Jan 17 15:14:20.671: INFO: Pod "downwardapi-volume-2e0b2858-2064-4104-84f5-e59a0d8fb3f8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011796528s
Jan 17 15:14:22.676: INFO: Pod "downwardapi-volume-2e0b2858-2064-4104-84f5-e59a0d8fb3f8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.017392652s
STEP: Saw pod success
Jan 17 15:14:22.676: INFO: Pod "downwardapi-volume-2e0b2858-2064-4104-84f5-e59a0d8fb3f8" satisfied condition "Succeeded or Failed"
Jan 17 15:14:22.681: INFO: Trying to get logs from node k8s-upgrade-and-conformance-lxkik8-worker-dueiad pod downwardapi-volume-2e0b2858-2064-4104-84f5-e59a0d8fb3f8 container client-container: <nil>
STEP: delete the pod
Jan 17 15:14:22.706: INFO: Waiting for pod downwardapi-volume-2e0b2858-2064-4104-84f5-e59a0d8fb3f8 to disappear
Jan 17 15:14:22.711: INFO: Pod downwardapi-volume-2e0b2858-2064-4104-84f5-e59a0d8fb3f8 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 17 15:14:22.711: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-8374" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":4,"skipped":163,"failed":2,"failures":["[sig-node] PreStop should call prestop when killing a pod [Conformance]","[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]"]}
SSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 17 15:14:22.776: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Jan 17 15:14:23.218: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Jan 17 15:14:25.232: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.January, 17, 15, 14, 23, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 17, 15, 14, 23, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.January, 17, 15, 14, 23, 0, time.Local), LastTransitionTime:time.Date(2023, time.January, 17, 15, 14, 23, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-6c69dbd86b\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jan 17 15:14:28.255: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should honor timeout [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Setting timeout (1s) shorter than webhook latency (5s)
STEP: Registering slow webhook via the AdmissionRegistration API
STEP: Request fails when timeout (1s) is shorter than slow webhook latency (5s)
STEP: Having no error when timeout is shorter than webhook latency and failure policy is ignore
STEP: Registering slow webhook via the AdmissionRegistration API
STEP: Having no error when timeout is longer than webhook latency
STEP: Registering slow webhook via the AdmissionRegistration API
STEP: Having no error when timeout is empty (defaulted to 10s in v1)
STEP: Registering slow webhook via the AdmissionRegistration API
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 17 15:14:40.406: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-4920" for this suite.
STEP: Destroying namespace "webhook-4920-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
•
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","total":-1,"completed":5,"skipped":182,"failed":2,"failures":["[sig-node] PreStop should call prestop when killing a pod [Conformance]","[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]"]}
SSSSS
------------------------------
[BeforeEach] [sig-apps] ReplicationController
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 17 15:14:40.527: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-apps] ReplicationController
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:54
[It] should serve a basic image on each replica with a public image [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating replication controller my-hostname-basic-56d5312d-aa38-412f-9168-40f077eaad76
Jan 17 15:14:40.600: INFO: Pod name my-hostname-basic-56d5312d-aa38-412f-9168-40f077eaad76: Found 0 pods out of 1
Jan 17 15:14:45.610: INFO: Pod name my-hostname-basic-56d5312d-aa38-412f-9168-40f077eaad76: Found 1 pods out of 1
Jan 17 15:14:45.610: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-56d5312d-aa38-412f-9168-40f077eaad76" are running
Jan 17 15:14:45.614: INFO: Pod "my-hostname-basic-56d5312d-aa38-412f-9168-40f077eaad76-xc9kd" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-17 15:14:40 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-17 15:14:41 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-17 15:14:41 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2023-01-17 15:14:40 +0000 UTC Reason: Message:}])
Jan 17 15:14:45.614: INFO: Trying to dial the pod
Jan 17 15:14:50.629: INFO: Controller my-hostname-basic-56d5312d-aa38-412f-9168-40f077eaad76: Got expected result from replica 1 [my-hostname-basic-56d5312d-aa38-412f-9168-40f077eaad76-xc9kd]: "my-hostname-basic-56d5312d-aa38-412f-9168-40f077eaad76-xc9kd", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicationController
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 17 15:14:50.629: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-27" for this suite.
•
------------------------------
{"msg":"PASSED [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance]","total":-1,"completed":6,"skipped":187,"failed":2,"failures":["[sig-node] PreStop should call prestop when killing a pod [Conformance]","[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-storage] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 17 15:14:50.698: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] binary data should be reflected in volume [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating configMap with name configmap-test-upd-c69688bc-9a30-4d96-89b3-015e59c5ceee
STEP: Creating the pod
STEP: Waiting for pod with text data
STEP: Waiting for pod with binary data
[AfterEach] [sig-storage] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 17 15:14:52.795: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-9777" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance]","total":-1,"completed":7,"skipped":213,"failed":2,"failures":["[sig-node] PreStop should call prestop when killing a pod [Conformance]","[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]"]}
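The binary-data spec relies on the ConfigMap API's separate BinaryData field, which carries arbitrary bytes that Data (a string map) cannot; mounted as a volume, the file must come back byte-for-byte. A minimal sketch of such an object using the core/v1 types; the name and contents are illustrative:

    package main

    import (
        "encoding/json"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
        cm := corev1.ConfigMap{
            ObjectMeta: metav1.ObjectMeta{Name: "configmap-binary-example"}, // illustrative name
            Data:       map[string]string{"data": "value"},
            // BinaryData holds raw bytes (serialized as base64 in JSON/YAML);
            // a pod mounting this ConfigMap sees the exact bytes in the file.
            BinaryData: map[string][]byte{"dump": {0xde, 0xad, 0xbe, 0xef}},
        }
        out, _ := json.MarshalIndent(cm, "", "  ")
        fmt.Println(string(out))
    }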
•
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance]","total":-1,"completed":7,"skipped":213,"failed":2,"failures":["[sig-node] PreStop should call prestop when killing a pod [Conformance]","[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]"]}
------------------------------
[BeforeEach] [sig-api-machinery] Watchers
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 17 15:14:52.853: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should be able to restart watching from the last resource version observed by the previous watch [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: creating a watch on configmaps
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: closing the watch once it receives two notifications
Jan 17 15:14:52.905: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-6851 e8ef06a3-8d0f-4497-a82a-f192b2b71197 6024 0 2023-01-17 15:14:52 +0000 UTC <nil> <nil> map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2023-01-17 15:14:52 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,}
Jan 17 15:14:52.905: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-6851 e8ef06a3-8d0f-4497-a82a-f192b2b71197 6026 0 2023-01-17 15:14:52 +0000 UTC <nil> <nil> map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2023-01-17 15:14:52 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,}
STEP: modifying the configmap a second time, while the watch is closed
STEP: creating a new watch on configmaps from the last resource version observed by the first watch
STEP: deleting the configmap
STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed
Jan 17 15:14:52.924: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-6851 e8ef06a3-8d0f-4497-a82a-f192b2b71197 6027 0 2023-01-17 15:14:52 +0000 UTC <nil> <nil> map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2023-01-17 15:14:52 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
Jan 17 15:14:52.924: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-6851 e8ef06a3-8d0f-4497-a82a-f192b2b71197 6028 0 2023-01-17 15:14:52 +0000 UTC <nil> <nil> map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2023-01-17 15:14:52 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
[AfterEach] [sig-api-machinery] Watchers
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 17 15:14:52.924: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-6851" for this suite.
•
------------------------------
{"msg":"PASSED [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance]","total":-1,"completed":8,"skipped":236,"failed":2,"failures":["[sig-node] PreStop should call prestop when killing a pod [Conformance]","[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]"]}
------------------------------
[BeforeEach] [sig-node] Secrets
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 17 15:14:52.949: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: creating secret secrets-7593/secret-test-b4983d73-fceb-4a83-8537-20717ba33982
STEP: Creating a pod to test consume secrets
Jan 17 15:14:52.991: INFO: Waiting up to 5m0s for pod "pod-configmaps-31e96abc-1555-4d66-a6ad-5421ed8af6bc" in namespace "secrets-7593" to be "Succeeded or Failed"
Jan 17 15:14:52.994: INFO: Pod "pod-configmaps-31e96abc-1555-4d66-a6ad-5421ed8af6bc": Phase="Pending", Reason="", readiness=false. Elapsed: 3.902064ms
Jan 17 15:14:55.002: INFO: Pod "pod-configmaps-31e96abc-1555-4d66-a6ad-5421ed8af6bc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011591647s
Jan 17 15:14:57.007: INFO: Pod "pod-configmaps-31e96abc-1555-4d66-a6ad-5421ed8af6bc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.016693873s
STEP: Saw pod success
Jan 17 15:14:57.007: INFO: Pod "pod-configmaps-31e96abc-1555-4d66-a6ad-5421ed8af6bc" satisfied condition "Succeeded or Failed"
Jan 17 15:14:57.011: INFO: Trying to get logs from node k8s-upgrade-and-conformance-lxkik8-worker-dueiad pod pod-configmaps-31e96abc-1555-4d66-a6ad-5421ed8af6bc container env-test: <nil>
STEP: delete the pod
Jan 17 15:14:57.029: INFO: Waiting for pod pod-configmaps-31e96abc-1555-4d66-a6ad-5421ed8af6bc to disappear
Jan 17 15:14:57.032: INFO: Pod pod-configmaps-31e96abc-1555-4d66-a6ad-5421ed8af6bc no longer exists
[AfterEach] [sig-node] Secrets
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 17 15:14:57.032: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-7593" for this suite.
•
------------------------------
{"msg":"PASSED [sig-node] Secrets should be consumable via the environment [NodeConformance] [Conformance]","total":-1,"completed":9,"skipped":242,"failed":2,"failures":["[sig-node] PreStop should call prestop when killing a pod [Conformance]","[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]"]}
------------------------------
[BeforeEach] [sig-node] Security Context
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 17 15:14:57.074: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-node] Security Context
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46
[It] should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
Jan 17 15:14:57.110: INFO: Waiting up to 5m0s for pod "busybox-user-65534-b5c4068d-1281-4df5-88c7-c5743e58bc4e" in namespace "security-context-test-7400" to be "Succeeded or Failed"
Jan 17 15:14:57.117: INFO: Pod "busybox-user-65534-b5c4068d-1281-4df5-88c7-c5743e58bc4e": Phase="Pending", Reason="", readiness=false. Elapsed: 7.015961ms
Jan 17 15:14:59.124: INFO: Pod "busybox-user-65534-b5c4068d-1281-4df5-88c7-c5743e58bc4e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013606053s
Jan 17 15:15:01.129: INFO: Pod "busybox-user-65534-b5c4068d-1281-4df5-88c7-c5743e58bc4e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.019453642s
Jan 17 15:15:03.135: INFO: Pod "busybox-user-65534-b5c4068d-1281-4df5-88c7-c5743e58bc4e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.024736239s
Jan 17 15:15:03.135: INFO: Pod "busybox-user-65534-b5c4068d-1281-4df5-88c7-c5743e58bc4e" satisfied condition "Succeeded or Failed"
[AfterEach] [sig-node] Security Context
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 17 15:15:03.135: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-7400" for this suite.
•
------------------------------
{"msg":"PASSED [sig-node] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":10,"skipped":258,"failed":2,"failures":["[sig-node] PreStop should call prestop when killing a pod [Conformance]","[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]"]}
------------------------------
[BeforeEach] [sig-node] Pods
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 17 15:15:03.159: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-node] Pods
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:189
[It] should contain environment variables for services [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
Jan 17 15:15:03.208: INFO: The status of Pod server-envvars-c182aa1c-a8cb-4699-8388-4d1837733b63 is Pending, waiting for it to be Running (with Ready = true)
Jan 17 15:15:05.215: INFO: The status of Pod server-envvars-c182aa1c-a8cb-4699-8388-4d1837733b63 is Running (Ready = true)
Jan 17 15:15:05.242: INFO: Waiting up to 5m0s for pod "client-envvars-f85a39eb-4c17-49d2-90f8-917b79d37098" in namespace "pods-4500" to be "Succeeded or Failed"
Jan 17 15:15:05.251: INFO: Pod "client-envvars-f85a39eb-4c17-49d2-90f8-917b79d37098": Phase="Pending", Reason="", readiness=false. Elapsed: 8.681663ms
Jan 17 15:15:07.259: INFO: Pod "client-envvars-f85a39eb-4c17-49d2-90f8-917b79d37098": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016419971s
Jan 17 15:15:09.264: INFO: Pod "client-envvars-f85a39eb-4c17-49d2-90f8-917b79d37098": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.022220859s
STEP: Saw pod success
Jan 17 15:15:09.264: INFO: Pod "client-envvars-f85a39eb-4c17-49d2-90f8-917b79d37098" satisfied condition "Succeeded or Failed"
Jan 17 15:15:09.268: INFO: Trying to get logs from node k8s-upgrade-and-conformance-lxkik8-md-0-g974z-f76d7886c-xqvh2 pod client-envvars-f85a39eb-4c17-49d2-90f8-917b79d37098 container env3cont: <nil>
STEP: delete the pod
Jan 17 15:15:09.285: INFO: Waiting for pod client-envvars-f85a39eb-4c17-49d2-90f8-917b79d37098 to disappear
Jan 17 15:15:09.289: INFO: Pod client-envvars-f85a39eb-4c17-49d2-90f8-917b79d37098 no longer exists
[AfterEach] [sig-node] Pods
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 17 15:15:09.289: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-4500" for this suite.
•
------------------------------
{"msg":"PASSED [sig-node] Pods should contain environment variables for services [NodeConformance] [Conformance]","total":-1,"completed":11,"skipped":263,"failed":2,"failures":["[sig-node] PreStop should call prestop when killing a pod [Conformance]","[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]"]}
[BeforeEach] [sig-storage] Projected configMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 17 15:15:09.301: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating configMap with name projected-configmap-test-volume-map-643e525f-8e10-4d45-bffa-48897eda1cc5
STEP: Creating a pod to test consume configMaps
Jan 17 15:15:09.342: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-2f72b770-6abe-4ed1-926c-80946c4fa8b2" in namespace "projected-7928" to be "Succeeded or Failed"
Jan 17 15:15:09.344: INFO: Pod "pod-projected-configmaps-2f72b770-6abe-4ed1-926c-80946c4fa8b2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.709057ms
Jan 17 15:15:11.349: INFO: Pod "pod-projected-configmaps-2f72b770-6abe-4ed1-926c-80946c4fa8b2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007715065s
Jan 17 15:15:13.355: INFO: Pod "pod-projected-configmaps-2f72b770-6abe-4ed1-926c-80946c4fa8b2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012985407s
STEP: Saw pod success
Jan 17 15:15:13.355: INFO: Pod "pod-projected-configmaps-2f72b770-6abe-4ed1-926c-80946c4fa8b2" satisfied condition "Succeeded or Failed"
Jan 17 15:15:13.357: INFO: Trying to get logs from node k8s-upgrade-and-conformance-lxkik8-md-0-g974z-f76d7886c-xqvh2 pod pod-projected-configmaps-2f72b770-6abe-4ed1-926c-80946c4fa8b2 container agnhost-container: <nil>
STEP: delete the pod
Jan 17 15:15:13.373: INFO: Waiting for pod pod-projected-configmaps-2f72b770-6abe-4ed1-926c-80946c4fa8b2 to disappear
Jan 17 15:15:13.376: INFO: Pod pod-projected-configmaps-2f72b770-6abe-4ed1-926c-80946c4fa8b2 no longer exists
[AfterEach] [sig-storage] Projected configMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 17 15:15:13.376: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7928" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":-1,"completed":12,"skipped":263,"failed":2,"failures":["[sig-node] PreStop should call prestop when killing a pod [Conformance]","[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]"]}
------------------------------
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 17 15:15:13.429: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Jan 17 15:15:13.805: INFO: new replicaset for deployment "sample-webhook-deployment" is yet to be created
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jan 17 15:15:16.826: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Registering a validating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API
STEP: Registering a mutating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API
STEP: Creating a dummy validating-webhook-configuration object
STEP: Deleting the validating-webhook-configuration, which should be possible to remove
STEP: Creating a dummy mutating-webhook-configuration object
STEP: Deleting the mutating-webhook-configuration, which should be possible to remove
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 17 15:15:17.914: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-1173" for this suite.
STEP: Destroying namespace "webhook-1173-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
•
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","total":-1,"completed":13,"skipped":295,"failed":2,"failures":["[sig-node] PreStop should call prestop when killing a pod [Conformance]","[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]"]}
------------------------------
[BeforeEach] [sig-network] DNS
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 17 15:15:18.040: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should provide DNS for ExternalName services [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating a test externalName service
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-4994.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-4994.svc.cluster.local; sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-4994.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-4994.svc.cluster.local; sleep 1; done
STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jan 17 15:15:20.161: INFO: DNS probes using dns-test-985cd025-c38b-4041-b3da-1a15df8f9344 succeeded
STEP: deleting the pod
STEP: changing the externalName to bar.example.com
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-4994.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-4994.svc.cluster.local; sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-4994.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-4994.svc.cluster.local; sleep 1; done
STEP: creating a second pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jan 17 15:15:22.214: INFO: File wheezy_udp@dns-test-service-3.dns-4994.svc.cluster.local from pod dns-4994/dns-test-ab45199d-b448-4214-b10c-91858fcf6b5a contains 'foo.example.com. ' instead of 'bar.example.com.'
Jan 17 15:15:22.218: INFO: File jessie_udp@dns-test-service-3.dns-4994.svc.cluster.local from pod dns-4994/dns-test-ab45199d-b448-4214-b10c-91858fcf6b5a contains 'foo.example.com. ' instead of 'bar.example.com.'
Jan 17 15:15:22.218: INFO: Lookups using dns-4994/dns-test-ab45199d-b448-4214-b10c-91858fcf6b5a failed for: [wheezy_udp@dns-test-service-3.dns-4994.svc.cluster.local jessie_udp@dns-test-service-3.dns-4994.svc.cluster.local]
Jan 17 15:15:27.224: INFO: File wheezy_udp@dns-test-service-3.dns-4994.svc.cluster.local from pod dns-4994/dns-test-ab45199d-b448-4214-b10c-91858fcf6b5a contains 'foo.example.com. ' instead of 'bar.example.com.'
Jan 17 15:15:27.229: INFO: File jessie_udp@dns-test-service-3.dns-4994.svc.cluster.local from pod dns-4994/dns-test-ab45199d-b448-4214-b10c-91858fcf6b5a contains 'foo.example.com. ' instead of 'bar.example.com.'
Jan 17 15:15:27.229: INFO: Lookups using dns-4994/dns-test-ab45199d-b448-4214-b10c-91858fcf6b5a failed for: [wheezy_udp@dns-test-service-3.dns-4994.svc.cluster.local jessie_udp@dns-test-service-3.dns-4994.svc.cluster.local]
Jan 17 15:15:32.224: INFO: File wheezy_udp@dns-test-service-3.dns-4994.svc.cluster.local from pod dns-4994/dns-test-ab45199d-b448-4214-b10c-91858fcf6b5a contains 'foo.example.com. ' instead of 'bar.example.com.'
Jan 17 15:15:32.229: INFO: File jessie_udp@dns-test-service-3.dns-4994.svc.cluster.local from pod dns-4994/dns-test-ab45199d-b448-4214-b10c-91858fcf6b5a contains 'foo.example.com. ' instead of 'bar.example.com.'
Jan 17 15:15:32.229: INFO: Lookups using dns-4994/dns-test-ab45199d-b448-4214-b10c-91858fcf6b5a failed for: [wheezy_udp@dns-test-service-3.dns-4994.svc.cluster.local jessie_udp@dns-test-service-3.dns-4994.svc.cluster.local]
Jan 17 15:15:37.225: INFO: File wheezy_udp@dns-test-service-3.dns-4994.svc.cluster.local from pod dns-4994/dns-test-ab45199d-b448-4214-b10c-91858fcf6b5a contains 'foo.example.com. ' instead of 'bar.example.com.'
Jan 17 15:15:37.231: INFO: File jessie_udp@dns-test-service-3.dns-4994.svc.cluster.local from pod dns-4994/dns-test-ab45199d-b448-4214-b10c-91858fcf6b5a contains 'foo.example.com. ' instead of 'bar.example.com.'
Jan 17 15:15:37.231: INFO: Lookups using dns-4994/dns-test-ab45199d-b448-4214-b10c-91858fcf6b5a failed for: [wheezy_udp@dns-test-service-3.dns-4994.svc.cluster.local jessie_udp@dns-test-service-3.dns-4994.svc.cluster.local]
Jan 17 15:15:42.224: INFO: File wheezy_udp@dns-test-service-3.dns-4994.svc.cluster.local from pod dns-4994/dns-test-ab45199d-b448-4214-b10c-91858fcf6b5a contains 'foo.example.com. ' instead of 'bar.example.com.'
Jan 17 15:15:42.228: INFO: File jessie_udp@dns-test-service-3.dns-4994.svc.cluster.local from pod dns-4994/dns-test-ab45199d-b448-4214-b10c-91858fcf6b5a contains 'foo.example.com. ' instead of 'bar.example.com.'
Jan 17 15:15:42.228: INFO: Lookups using dns-4994/dns-test-ab45199d-b448-4214-b10c-91858fcf6b5a failed for: [wheezy_udp@dns-test-service-3.dns-4994.svc.cluster.local jessie_udp@dns-test-service-3.dns-4994.svc.cluster.local]
Jan 17 15:15:47.224: INFO: File wheezy_udp@dns-test-service-3.dns-4994.svc.cluster.local from pod dns-4994/dns-test-ab45199d-b448-4214-b10c-91858fcf6b5a contains 'foo.example.com. ' instead of 'bar.example.com.'
Jan 17 15:15:47.230: INFO: File jessie_udp@dns-test-service-3.dns-4994.svc.cluster.local from pod dns-4994/dns-test-ab45199d-b448-4214-b10c-91858fcf6b5a contains 'foo.example.com. ' instead of 'bar.example.com.'
Jan 17 15:15:47.230: INFO: Lookups using dns-4994/dns-test-ab45199d-b448-4214-b10c-91858fcf6b5a failed for: [wheezy_udp@dns-test-service-3.dns-4994.svc.cluster.local jessie_udp@dns-test-service-3.dns-4994.svc.cluster.local]
Jan 17 15:15:52.238: INFO: DNS probes using dns-test-ab45199d-b448-4214-b10c-91858fcf6b5a succeeded
STEP: deleting the pod
STEP: changing the service to type=ClusterIP
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-4994.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-4994.svc.cluster.local; sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-4994.svc.cluster.local A > /results/jessie_udp@dns-test-service-3.dns-4994.svc.cluster.local; sleep 1; done
STEP: creating a third pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jan 17 15:15:54.372: INFO: File wheezy_udp@dns-test-service-3.dns-4994.svc.cluster.local from pod dns-4994/dns-test-06dcc102-9224-4b2e-8c8e-59647da8551f contains '' instead of '10.130.184.142'
Jan 17 15:15:54.379: INFO: Lookups using dns-4994/dns-test-06dcc102-9224-4b2e-8c8e-59647da8551f failed for: [wheezy_udp@dns-test-service-3.dns-4994.svc.cluster.local]
Jan 17 15:15:59.389: INFO: DNS probes using dns-test-06dcc102-9224-4b2e-8c8e-59647da8551f succeeded
STEP: deleting the pod
STEP: deleting the test externalName service
[AfterEach] [sig-network] DNS
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 17 15:15:59.429: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-4994" for this suite.
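Note: the dns-4994 spec drives one DNS name through three service states (ExternalName foo.example.com, ExternalName bar.example.com, then ClusterIP), and the dig pollers above only converge once CoreDNS picks up each change; that is the foo-vs-bar churn in the log. A sketch of the three mutations, assuming the framework's /tmp/kubeconfig and the namespace from the log:

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/kubeconfig") // path from the log
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	ctx := context.Background()
	svcs := cs.CoreV1().Services("dns-4994") // namespace from the log

	// State 1: the cluster name is a CNAME to foo.example.com.
	svc, err := svcs.Create(ctx, &corev1.Service{
		ObjectMeta: metav1.ObjectMeta{Name: "dns-test-service-3"},
		Spec: corev1.ServiceSpec{
			Type:         corev1.ServiceTypeExternalName,
			ExternalName: "foo.example.com",
		},
	}, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}

	// State 2: repoint the CNAME; resolvers see it only after CoreDNS re-syncs.
	svc.Spec.ExternalName = "bar.example.com"
	if svc, err = svcs.Update(ctx, svc, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}

	// State 3: convert to ClusterIP; the name now resolves to an A record.
	svc.Spec.ExternalName = ""
	svc.Spec.Type = corev1.ServiceTypeClusterIP
	svc.Spec.Ports = []corev1.ServicePort{{Port: 80}}
	if _, err = svcs.Update(ctx, svc, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}
}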
•
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for ExternalName services [Conformance]","total":-1,"completed":14,"skipped":304,"failed":2,"failures":["[sig-node] PreStop should call prestop when killing a pod [Conformance]","[sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]"]}
------------------------------
[BeforeEach] [sig-apps] CronJob
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 17 15:14:11.831: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename cronjob
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should replace jobs when ReplaceConcurrent [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating a ReplaceConcurrent cronjob
STEP: Ensuring a job is scheduled
STEP: Ensuring exactly one is scheduled
STEP: Ensuring exactly one running job exists by listing jobs explicitly
STEP: Ensuring the job is replaced with a new one
STEP: Removing cronjob
[AfterEach] [sig-apps] CronJob
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 17 15:16:01.885: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "cronjob-4109" for this suite.
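Note: concurrencyPolicy Replace is what the cronjob-4109 spec asserts: if the previous run is still active at the next tick, the controller deletes it and starts a fresh Job (unlike Allow or Forbid). A sketch of such a CronJob under batch/v1, with a hypothetical name and a deliberately long-running pod so each tick finds a live job:

package main

import (
	"fmt"

	batchv1 "k8s.io/api/batch/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"sigs.k8s.io/yaml"
)

func main() {
	cj := batchv1.CronJob{
		TypeMeta:   metav1.TypeMeta{APIVersion: "batch/v1", Kind: "CronJob"},
		ObjectMeta: metav1.ObjectMeta{Name: "replace-concurrent"}, // hypothetical name
		Spec: batchv1.CronJobSpec{
			Schedule: "*/1 * * * *",
			// Replace: a still-running job is killed and replaced at the next tick.
			ConcurrencyPolicy: batchv1.ReplaceConcurrent,
			JobTemplate: batchv1.JobTemplateSpec{
				Spec: batchv1.JobSpec{
					Template: corev1.PodTemplateSpec{
						Spec: corev1.PodSpec{
							RestartPolicy: corev1.RestartPolicyOnFailure,
							Containers: []corev1.Container{{
								Name:  "c",
								Image: "busybox",
								// Outlive the one-minute schedule interval on purpose.
								Command: []string{"sleep", "300"},
							}},
						},
					},
				},
			},
		},
	}
	out, _ := yaml.Marshal(cj)
	fmt.Print(string(out))
}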
•
------------------------------
{"msg":"PASSED [sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","total":-1,"completed":28,"skipped":550,"failed":0}
------------------------------
[BeforeEach] [sig-apps] StatefulSet
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 17 15:16:01.969: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:94
[BeforeEach] Basic StatefulSet functionality [StatefulSetBasic]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:109
STEP: Creating service test in namespace statefulset-6292
[It] should list, patch and delete a collection of StatefulSets [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
Jan 17 15:16:02.019: INFO: Found 0 stateful pods, waiting for 1
Jan 17 15:16:12.025: INFO: Waiting for pod test-ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: patching the StatefulSet
Jan 17 15:16:12.061: INFO: Found 1 stateful pods, waiting for 2
Jan 17 15:16:22.067: INFO: Waiting for pod test-ss-0 to enter Running - Ready=true, currently Running - Ready=true
Jan 17 15:16:22.067: INFO: Waiting for pod test-ss-1 to enter Running - Ready=true, currently Running - Ready=true
STEP: Listing all StatefulSets
STEP: Delete all of the StatefulSets
STEP: Verify that StatefulSets have been deleted
[AfterEach] Basic StatefulSet functionality [StatefulSetBasic]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:120
Jan 17 15:16:22.092: INFO: Deleting all statefulset in ns statefulset-6292
[AfterEach] [sig-apps] StatefulSet
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 17 15:16:22.109: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-6292" for this suite.
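Note: the statefulset-6292 spec exercises the collection verbs on apps/v1 StatefulSets: a LIST by label followed by a single DELETE of the whole collection. A sketch of those two calls; the label selector is an assumption:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/kubeconfig") // path from the log
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	ctx := context.Background()
	ns := "statefulset-6292" // namespace from the log

	// List by label, then delete every match with one collection call.
	sel := metav1.ListOptions{LabelSelector: "e2e=collection-test"} // hypothetical selector
	list, err := cs.AppsV1().StatefulSets(ns).List(ctx, sel)
	if err != nil {
		panic(err)
	}
	fmt.Printf("found %d StatefulSets\n", len(list.Items))

	if err := cs.AppsV1().StatefulSets(ns).DeleteCollection(ctx, metav1.DeleteOptions{}, sel); err != nil {
		panic(err)
	}
}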
•
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should list, patch and delete a collection of StatefulSets [Conformance]","total":-1,"completed":29,"skipped":584,"failed":0}
[BeforeEach] [sig-node] Pods
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 17 15:16:22.133: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-node] Pods
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:189
[It] should be updated [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: creating the pod
STEP: submitting the pod to kubernetes
Jan 17 15:16:22.192: INFO: The status of Pod pod-update-e2976de3-759e-4e7d-85cf-13eda88cb07f is Pending, waiting for it to be Running (with Ready = true)
Jan 17 15:16:24.199: INFO: The status of Pod pod-update-e2976de3-759e-4e7d-85cf-13eda88cb07f is Running (Ready = true)
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Jan 17 15:16:24.718: INFO: Successfully updated pod "pod-update-e2976de3-759e-4e7d-85cf-13eda88cb07f"
STEP: verifying the updated pod is in kubernetes
Jan 17 15:16:24.728: INFO: Pod update OK
[AfterEach] [sig-node] Pods
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 17 15:16:24.728: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-1855" for this suite.
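Note: an in-place pod update like the one above races other writers (notably the kubelet's status updates), so client-go code conventionally wraps the GET-modify-PUT in a conflict retry. A sketch reusing the pod name and namespace from the log; the label value is hypothetical:

package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/util/retry"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/kubeconfig") // path from the log
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	ctx := context.Background()
	pods := cs.CoreV1().Pods("pods-1855") // namespace from the log
	name := "pod-update-e2976de3-759e-4e7d-85cf-13eda88cb07f"

	// Re-read and re-apply on 409 Conflict: anything else writing the pod
	// bumps resourceVersion between our GET and PUT.
	err = retry.RetryOnConflict(retry.DefaultRetry, func() error {
		pod, err := pods.Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return err
		}
		if pod.Labels == nil {
			pod.Labels = map[string]string{}
		}
		pod.Labels["time"] = "updated" // hypothetical label value
		_, err = pods.Update(ctx, pod, metav1.UpdateOptions{})
		return err
	})
	if err != nil {
		panic(err)
	}
}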
•
------------------------------
{"msg":"PASSED [sig-node] Pods should be updated [NodeConformance] [Conformance]","total":-1,"completed":30,"skipped":584,"failed":0}
------------------------------
[BeforeEach] version v1
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 17 15:16:24.871: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] A set of valid responses are returned for both pod and service ProxyWithPath [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
Jan 17 15:16:24.910: INFO: Creating pod...
Jan 17 15:16:24.928: INFO: Pod Quantity: 1 Status: Pending
Jan 17 15:16:25.934: INFO: Pod Quantity: 1 Status: Pending
Jan 17 15:16:26.933: INFO: Pod Status: Running
Jan 17 15:16:26.933: INFO: Creating service...
Jan 17 15:16:26.943: INFO: Starting http.Client for https://172.18.0.3:6443/api/v1/namespaces/proxy-6355/pods/agnhost/proxy/some/path/with/DELETE
Jan 17 15:16:26.956: INFO: http.Client request:DELETE | StatusCode:200 | Response:foo | Method:DELETE
Jan 17 15:16:26.956: INFO: Starting http.Client for https://172.18.0.3:6443/api/v1/namespaces/proxy-6355/pods/agnhost/proxy/some/path/with/GET
Jan 17 15:16:26.969: INFO: http.Client request:GET | StatusCode:200 | Response:foo | Method:GET
Jan 17 15:16:26.969: INFO: Starting http.Client for https://172.18.0.3:6443/api/v1/namespaces/proxy-6355/pods/agnhost/proxy/some/path/with/HEAD
Jan 17 15:16:26.976: INFO: http.Client request:HEAD | StatusCode:200
Jan 17 15:16:26.976: INFO: Starting http.Client for https://172.18.0.3:6443/api/v1/namespaces/proxy-6355/pods/agnhost/proxy/some/path/with/OPTIONS
Jan 17 15:16:26.984: INFO: http.Client request:OPTIONS | StatusCode:200 | Response:foo | Method:OPTIONS
Jan 17 15:16:26.984: INFO: Starting http.Client for https://172.18.0.3:6443/api/v1/namespaces/proxy-6355/pods/agnhost/proxy/some/path/with/PATCH
Jan 17 15:16:26.989: INFO: http.Client request:PATCH | StatusCode:200 | Response:foo | Method:PATCH
Jan 17 15:16:26.989: INFO: Starting http.Client for https://172.18.0.3:6443/api/v1/namespaces/proxy-6355/pods/agnhost/proxy/some/path/with/POST
Jan 17 15:16:26.998: INFO: http.Client request:POST | StatusCode:200 | Response:foo | Method:POST
Jan 17 15:16:26.998: INFO: Starting http.Client for https://172.18.0.3:6443/api/v1/namespaces/proxy-6355/pods/agnhost/proxy/some/path/with/PUT
Jan 17 15:16:27.004: INFO: http.Client request:PUT | StatusCode:200 | Response:foo | Method:PUT
Jan 17 15:16:27.004: INFO: Starting http.Client for https://172.18.0.3:6443/api/v1/namespaces/proxy-6355/services/test-service/proxy/some/path/with/DELETE
Jan 17 15:16:27.015: INFO: http.Client request:DELETE | StatusCode:200 | Response:foo | Method:DELETE
Jan 17 15:16:27.015: INFO: Starting http.Client for https://172.18.0.3:6443/api/v1/namespaces/proxy-6355/services/test-service/proxy/some/path/with/GET
Jan 17 15:16:27.026: INFO: http.Client request:GET | StatusCode:200 | Response:foo | Method:GET
Jan 17 15:16:27.026: INFO: Starting http.Client for https://172.18.0.3:6443/api/v1/namespaces/proxy-6355/services/test-service/proxy/some/path/with/HEAD
Jan 17 15:16:27.036: INFO: http.Client request:HEAD | StatusCode:200
Jan 17 15:16:27.036: INFO: Starting http.Client for https://172.18.0.3:6443/api/v1/namespaces/proxy-6355/services/test-service/proxy/some/path/with/OPTIONS
Jan 17 15:16:27.048: INFO: http.Client request:OPTIONS | StatusCode:200 | Response:foo | Method:OPTIONS
Jan 17 15:16:27.049: INFO: Starting http.Client for https://172.18.0.3:6443/api/v1/namespaces/proxy-6355/services/test-service/proxy/some/path/with/PATCH
Jan 17 15:16:27.056: INFO: http.Client request:PATCH | StatusCode:200 | Response:foo | Method:PATCH
Jan 17 15:16:27.056: INFO: Starting http.Client for https://172.18.0.3:6443/api/v1/namespaces/proxy-6355/services/test-service/proxy/some/path/with/POST
Jan 17 15:16:27.064: INFO: http.Client request:POST | StatusCode:200 | Response:foo | Method:POST
Jan 17 15:16:27.064: INFO: Starting http.Client for https://172.18.0.3:6443/api/v1/namespaces/proxy-6355/services/test-service/proxy/some/path/with/PUT
Jan 17 15:16:27.074: INFO: http.Client request:PUT | StatusCode:200 | Response:foo | Method:PUT
[AfterEach] version v1
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 17 15:16:27.074: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "proxy-6355" for this suite.
•
------------------------------
{"msg":"PASSED [sig-network] Proxy version v1 A set of valid responses are returned for both pod and service ProxyWithPath [Conformance]","total":-1,"completed":31,"skipped":656,"failed":0}
------------------------------
[BeforeEach] [sig-auth] ServiceAccounts
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 17 15:16:27.145: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should allow opting out of API token automount [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: getting the auto-created API token
Jan 17 15:16:27.711: INFO: created pod pod-service-account-defaultsa
Jan 17 15:16:27.712: INFO: pod pod-service-account-defaultsa service account token volume mount: true
Jan 17 15:16:27.716: INFO: created pod pod-service-account-mountsa
Jan 17 15:16:27.716: INFO: pod pod-service-account-mountsa service account token volume mount: true
Jan 17 15:16:27.728: INFO: created pod pod-service-account-nomountsa
Jan 17 15:16:27.728: INFO: pod pod-service-account-nomountsa service account token volume mount: false
Jan 17 15:16:27.734: INFO: created pod pod-service-account-defaultsa-mountspec
Jan 17 15:16:27.736: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true
Jan 17 15:16:27.752: INFO: created pod pod-service-account-mountsa-mountspec
Jan 17 15:16:27.752: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true
Jan 17 15:16:27.760: INFO: created pod pod-service-account-nomountsa-mountspec
Jan 17 15:16:27.760: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true
Jan 17 15:16:27.776: INFO: created pod pod-service-account-defaultsa-nomountspec
Jan 17 15:16:27.776: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false
Jan 17 15:16:27.794: INFO: created pod pod-service-account-mountsa-nomountspec
Jan 17 15:16:27.794: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false
Jan 17 15:16:27.801: INFO: created pod pod-service-account-nomountsa-nomountspec
Jan 17 15:16:27.801: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false
[AfterEach] [sig-auth] ServiceAccounts
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 17 15:16:27.801: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-4710" for this suite.
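Note: the svcaccounts-4710 matrix above covers both knobs: automountServiceAccountToken on the ServiceAccount and on the pod spec, where a non-nil pod-level value overrides the ServiceAccount's choice (that is why nomountsa-mountspec mounts true). A sketch of the plain opt-out pair, with hypothetical names:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"sigs.k8s.io/yaml"
)

func main() {
	optOut := false

	// ServiceAccount that opts out of token automount by default.
	sa := corev1.ServiceAccount{
		TypeMeta:                     metav1.TypeMeta{APIVersion: "v1", Kind: "ServiceAccount"},
		ObjectMeta:                   metav1.ObjectMeta{Name: "nomount-sa"}, // hypothetical name
		AutomountServiceAccountToken: &optOut,
	}

	// Pod using that ServiceAccount. Leaving spec.automountServiceAccountToken
	// nil (as here) defers to the SA, so this pod gets no token volume; setting
	// it non-nil on the pod would win over the SA.
	pod := corev1.Pod{
		TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
		ObjectMeta: metav1.ObjectMeta{Name: "pod-service-account-nomountsa"},
		Spec: corev1.PodSpec{
			ServiceAccountName: sa.Name,
			Containers:         []corev1.Container{{Name: "token-test", Image: "busybox", Command: []string{"sleep", "3600"}}},
		},
	}

	for _, obj := range []interface{}{sa, pod} {
		out, _ := yaml.Marshal(obj)
		fmt.Printf("---\n%s", out)
	}
}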
•
------------------------------
{"msg":"PASSED [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance]","total":-1,"completed":32,"skipped":666,"failed":0}
------------------------------
[BeforeEach] [sig-node] ConfigMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 17 15:16:27.923: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should be consumable via environment variable [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating configMap configmap-6377/configmap-test-107a3246-b991-467e-8cfa-9b7c7ca91c27
STEP: Creating a pod to test consume configMaps
Jan 17 15:16:28.011: INFO: Waiting up to 5m0s for pod "pod-configmaps-d4cb8ab4-1db9-4ba9-a0f7-f57283cecd9e" in namespace "configmap-6377" to be "Succeeded or Failed"
Jan 17 15:16:28.022: INFO: Pod "pod-configmaps-d4cb8ab4-1db9-4ba9-a0f7-f57283cecd9e": Phase="Pending", Reason="", readiness=false. Elapsed: 11.571056ms
Jan 17 15:16:30.027: INFO: Pod "pod-configmaps-d4cb8ab4-1db9-4ba9-a0f7-f57283cecd9e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016545723s
Jan 17 15:16:32.033: INFO: Pod "pod-configmaps-d4cb8ab4-1db9-4ba9-a0f7-f57283cecd9e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.022413523s
Jan 17 15:16:34.041: INFO: Pod "pod-configmaps-d4cb8ab4-1db9-4ba9-a0f7-f57283cecd9e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.029782737s
STEP: Saw pod success
Jan 17 15:16:34.041: INFO: Pod "pod-configmaps-d4cb8ab4-1db9-4ba9-a0f7-f57283cecd9e" satisfied condition "Succeeded or Failed"
Jan 17 15:16:34.049: INFO: Trying to get logs from node k8s-upgrade-and-conformance-lxkik8-worker-dueiad pod pod-configmaps-d4cb8ab4-1db9-4ba9-a0f7-f57283cecd9e container env-test: <nil>
STEP: delete the pod
Jan 17 15:16:34.100: INFO: Waiting for pod pod-configmaps-d4cb8ab4-1db9-4ba9-a0f7-f57283cecd9e to disappear
Jan 17 15:16:34.106: INFO: Pod pod-configmaps-d4cb8ab4-1db9-4ba9-a0f7-f57283cecd9e no longer exists
[AfterEach] [sig-node] ConfigMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 17 15:16:34.106: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-6377" for this suite.
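Note: the configmap-6377 spec wires a single ConfigMap key into a container environment variable and then inspects the output of env. A sketch of the consuming pod; the names and key are hypothetical:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"sigs.k8s.io/yaml"
)

func main() {
	pod := corev1.Pod{
		TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
		ObjectMeta: metav1.ObjectMeta{Name: "pod-configmaps-env"}, // hypothetical name
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "env-test",
				Image:   "busybox",
				Command: []string{"sh", "-c", "env"}, // dump the environment; a test would grep this
				Env: []corev1.EnvVar{{
					Name: "CONFIG_DATA_1",
					ValueFrom: &corev1.EnvVarSource{
						ConfigMapKeyRef: &corev1.ConfigMapKeySelector{
							LocalObjectReference: corev1.LocalObjectReference{Name: "configmap-test"}, // hypothetical
							Key:                  "data-1",
						},
					},
				}},
			}},
		},
	}
	out, _ := yaml.Marshal(pod)
	fmt.Print(string(out))
}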
•
------------------------------
{"msg":"PASSED [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance]","total":-1,"completed":33,"skipped":695,"failed":0}
------------------------------
[BeforeEach] [sig-storage] Projected downwardAPI
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 17 15:16:34.139: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41
[It] should provide container's cpu limit [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating a pod to test downward API volume plugin
Jan 17 15:16:34.185: INFO: Waiting up to 5m0s for pod "downwardapi-volume-8be60854-e767-4aa3-847d-05ff5f81a375" in namespace "projected-9945" to be "Succeeded or Failed"
Jan 17 15:16:34.190: INFO: Pod "downwardapi-volume-8be60854-e767-4aa3-847d-05ff5f81a375": Phase="Pending", Reason="", readiness=false. Elapsed: 5.577848ms
Jan 17 15:16:36.198: INFO: Pod "downwardapi-volume-8be60854-e767-4aa3-847d-05ff5f81a375": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013416119s
Jan 17 15:16:38.205: INFO: Pod "downwardapi-volume-8be60854-e767-4aa3-847d-05ff5f81a375": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.020209188s
STEP: Saw pod success
Jan 17 15:16:38.205: INFO: Pod "downwardapi-volume-8be60854-e767-4aa3-847d-05ff5f81a375" satisfied condition "Succeeded or Failed"
Jan 17 15:16:38.211: INFO: Trying to get logs from node k8s-upgrade-and-conformance-lxkik8-md-0-g974z-f76d7886c-88xc9 pod downwardapi-volume-8be60854-e767-4aa3-847d-05ff5f81a375 container client-container: <nil>
STEP: delete the pod
Jan 17 15:16:38.249: INFO: Waiting for pod downwardapi-volume-8be60854-e767-4aa3-847d-05ff5f81a375 to disappear
Jan 17 15:16:38.256: INFO: Pod downwardapi-volume-8be60854-e767-4aa3-847d-05ff5f81a375 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 17 15:16:38.256: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9945" for this suite.
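Note: the projected-9945 spec surfaces the container's own CPU limit through a projected downwardAPI volume via resourceFieldRef. With the default divisor of 1, limits.cpu should read back rounded up to whole cores. A sketch of such a pod, with hypothetical name and limit:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"sigs.k8s.io/yaml"
)

func main() {
	pod := corev1.Pod{
		TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
		ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-volume"}, // hypothetical name
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "client-container",
				Image:   "busybox",
				Command: []string{"sh", "-c", "cat /etc/podinfo/cpu_limit"},
				Resources: corev1.ResourceRequirements{
					Limits: corev1.ResourceList{corev1.ResourceCPU: resource.MustParse("1250m")}, // hypothetical limit
				},
				VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						Sources: []corev1.VolumeProjection{{
							DownwardAPI: &corev1.DownwardAPIProjection{
								Items: []corev1.DownwardAPIVolumeFile{{
									Path: "cpu_limit",
									// The container reads its own limit back from this file.
									ResourceFieldRef: &corev1.ResourceFieldSelector{
										ContainerName: "client-container",
										Resource:      "limits.cpu",
									},
								}},
							},
						}},
					},
				},
			}},
		},
	}
	out, _ := yaml.Marshal(pod)
	fmt.Print(string(out))
}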
•
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance]","total":-1,"completed":34,"skipped":703,"failed":0}
------------------------------
[BeforeEach] [sig-cli] Kubectl client
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 17 15:16:38.286: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:244
[It] should add annotations for pods in rc [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: creating Agnhost RC
Jan 17 15:16:38.323: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-7203 create -f -'
Jan 17 15:16:38.612: INFO: stderr: ""
Jan 17 15:16:38.612: INFO: stdout: "replicationcontroller/agnhost-primary created\n"
STEP: Waiting for Agnhost primary to start.
Jan 17 15:16:39.621: INFO: Selector matched 1 pods for map[app:agnhost]
Jan 17 15:16:39.622: INFO: Found 0 / 1
Jan 17 15:16:40.619: INFO: Selector matched 1 pods for map[app:agnhost]
Jan 17 15:16:40.619: INFO: Found 1 / 1
Jan 17 15:16:40.619: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1
STEP: patching all pods
Jan 17 15:16:40.625: INFO: Selector matched 1 pods for map[app:agnhost]
Jan 17 15:16:40.625: INFO: ForEach: Found 1 pods from the filter. Now looping through them.
Jan 17 15:16:40.625: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-7203 patch pod agnhost-primary-fhtgw -p {"metadata":{"annotations":{"x":"y"}}}'
Jan 17 15:16:40.751: INFO: stderr: ""
Jan 17 15:16:40.751: INFO: stdout: "pod/agnhost-primary-fhtgw patched\n"
STEP: checking annotations
Jan 17 15:16:40.758: INFO: Selector matched 1 pods for map[app:agnhost]
Jan 17 15:16:40.759: INFO: ForEach: Found 1 pods from the filter. Now looping through them.
[AfterEach] [sig-cli] Kubectl client
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 17 15:16:40.759: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-7203" for this suite.
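Note: the kubectl patch invocation above sends a strategic merge patch. The equivalent client-go call, with the exact payload, pod name, and namespace from the log:

package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/kubeconfig") // path from the log
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// Same payload kubectl sent: a strategic merge patch adding annotation x=y.
	patch := []byte(`{"metadata":{"annotations":{"x":"y"}}}`)
	_, err = cs.CoreV1().Pods("kubectl-7203").Patch(
		context.Background(),
		"agnhost-primary-fhtgw",
		types.StrategicMergePatchType,
		patch,
		metav1.PatchOptions{},
	)
	if err != nil {
		panic(err)
	}
}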
•
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc [Conformance]","total":-1,"completed":35,"skipped":708,"failed":0}
------------------------------
[BeforeEach] [sig-node] Docker Containers
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 17 15:16:40.974: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should use the image defaults if command and args are blank [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[AfterEach] [sig-node] Docker Containers
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 17 15:16:43.028: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-9726" for this suite.
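Note: the containers-9726 spec relies on the mapping between pod fields and image metadata: spec.containers[].command overrides the image ENTRYPOINT and spec.containers[].args overrides CMD, so with both left blank the container runs whatever the image defines. A sketch; the image tag is an assumption:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"sigs.k8s.io/yaml"
)

func main() {
	// No Command and no Args: the runtime falls back to the image's own
	// ENTRYPOINT and CMD.
	pod := corev1.Pod{
		TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
		ObjectMeta: metav1.ObjectMeta{Name: "client-containers"}, // hypothetical name
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:  "agnhost",
				Image: "k8s.gcr.io/e2e-test-images/agnhost:2.39", // tag is an assumption
			}},
		},
	}
	out, _ := yaml.Marshal(pod)
	fmt.Print(string(out))
}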
•
------------------------------
{"msg":"PASSED [sig-node] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance]","total":-1,"completed":36,"skipped":805,"failed":0}
------------------------------
[BeforeEach] [sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 17 15:16:43.045: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:752
[It] should be able to change the type from ExternalName to NodePort [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: creating a service externalname-service with the type=ExternalName in namespace services-4493
STEP: changing the ExternalName service to type=NodePort
STEP: creating replication controller externalname-service in namespace services-4493
I0117 15:16:43.102665 18 runners.go:193] Created replication controller with name: externalname-service, namespace: services-4493, replica count: 2
I0117 15:16:46.154771 18 runners.go:193] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
Jan 17 15:16:46.154: INFO: Creating new exec pod
Jan 17 15:16:49.185: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-4493 exec execpodklq4s -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalname-service 80'
Jan 17 15:16:51.375: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 externalname-service 80\nConnection to externalname-service 80 port [tcp/http] succeeded!\n"
Jan 17 15:16:51.376: INFO: stdout: ""
Jan 17 15:16:52.376: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-4493 exec execpodklq4s -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalname-service 80'
Jan 17 15:16:54.583: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 externalname-service 80\nConnection to externalname-service 80 port [tcp/http] succeeded!\n"
Jan 17 15:16:54.583: INFO: stdout: ""
Jan 17 15:16:55.376: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-4493 exec execpodklq4s -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalname-service 80'
Jan 17 15:16:55.535: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 externalname-service 80\nConnection to externalname-service 80 port [tcp/http] succeeded!\n"
Jan 17 15:16:55.535: INFO: stdout: ""
Jan 17 15:16:56.376: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-4493 exec execpodklq4s -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalname-service 80'
Jan 17 15:16:56.573: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 externalname-service 80\nConnection to externalname-service 80 port [tcp/http] succeeded!\n"
Jan 17 15:16:56.573: INFO: stdout: "externalname-service-g6bdp"
execpodklq4s -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.130.53.26 80' Jan 17 15:16:58.750: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 10.130.53.26 80\nConnection to 10.130.53.26 80 port [tcp/http] succeeded!\n" Jan 17 15:16:58.750: INFO: stdout: "" Jan 17 15:16:59.751: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-4493 exec execpodklq4s -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.130.53.26 80' Jan 17 15:16:59.934: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 10.130.53.26 80\nConnection to 10.130.53.26 80 port [tcp/http] succeeded!\n" Jan 17 15:16:59.934: INFO: stdout: "externalname-service-g6bdp" Jan 17 15:16:59.934: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-4493 exec execpodklq4s -- /bin/sh -x -c echo hostName | nc -v -t -w 2 172.18.0.5 31482' Jan 17 15:17:02.111: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 172.18.0.5 31482\nConnection to 172.18.0.5 31482 port [tcp/*] succeeded!\n" Jan 17 15:17:02.111: INFO: stdout: "" Jan 17 15:17:03.111: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-4493 exec execpodklq4s -- /bin/sh -x -c echo hostName | nc -v -t -w 2 172.18.0.5 31482' Jan 17 15:17:05.312: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 172.18.0.5 31482\nConnection to 172.18.0.5 31482 port [tcp/*] succeeded!\n" Jan 17 15:17:05.312: INFO: stdout: "" Jan 17 15:17:06.112: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-4493 exec execpodklq4s -- /bin/sh -x -c echo hostName | nc -v -t -w 2 172.18.0.5 31482' Jan 17 15:17:06.298: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 172.18.0.5 31482\nConnection to 172.18.0.5 31482 port [tcp/*] succeeded!\n" Jan 17 15:17:06.298: INFO: stdout: "externalname-service-g6bdp" Jan 17 15:17:06.298: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-4493 exec execpodklq4s -- /bin/sh -x -c echo hostName | nc -v -t -w 2 172.18.0.7 31482' Jan 17 15:17:06.459: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 172.18.0.7 31482\nConnection to 172.18.0.7 31482 port [tcp/*] succeeded!\n" Jan 17 15:17:06.459: INFO: stdout: "externalname-service-g6bdp" Jan 17 15:17:06.459: INFO: Cleaning up the ExternalName to NodePort test service [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 17 15:17:06.481: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "services-4493" for this suite. 
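For reference, the reachability probe logged above is nothing more than kubectl exec plus netcat in a retry loop: the empty stdout lines are attempts where the TCP connect succeeded but no backend echoed a hostname back yet. A minimal, self-contained sketch of the same check (pod, namespace and target are the ones from this run; this is an illustration, not the e2e framework's own helper):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
        "time"
    )

    // probe runs the same command the log shows: exec into the helper pod and
    // pipe a payload through nc to the target host/port. Only stdout is
    // returned; nc's "Connection ... succeeded!" chatter goes to stderr.
    func probe(ns, pod, target string) (string, error) {
        cmd := exec.Command("kubectl", "--kubeconfig=/tmp/kubeconfig",
            "--namespace="+ns, "exec", pod, "--",
            "/bin/sh", "-x", "-c", "echo hostName | nc -v -t -w 2 "+target)
        out, err := cmd.Output()
        return strings.TrimSpace(string(out)), err
    }

    func main() {
        deadline := time.Now().Add(2 * time.Minute)
        for time.Now().Before(deadline) {
            stdout, err := probe("services-4493", "execpodklq4s", "externalname-service 80")
            if err == nil && stdout != "" {
                fmt.Println("reached a backend:", stdout) // e.g. externalname-service-g6bdp
                return
            }
            time.Sleep(time.Second) // empty stdout: connect OK, but no endpoint answered yet
        }
        fmt.Println("service not reachable within 2m0s")
    }

The test passes as soon as any probe variant (service name, ClusterIP, and each node's NodePort) returns a backend pod name on stdout.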
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:756
• ------------------------------
{"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","total":-1,"completed":37,"skipped":810,"failed":0}
------------------------------
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 17 15:17:06.510: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] pod should support shared volumes between containers [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating Pod
STEP: Reading file content from the nginx-container
Jan 17 15:17:08.575: INFO: ExecWithOptions {Command:[/bin/sh -c cat /usr/share/volumeshare/shareddata.txt] Namespace:emptydir-5121 PodName:pod-sharedvolume-4e6379cf-8b39-41f1-b198-849529d97aa6 ContainerName:busybox-main-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Jan 17 15:17:08.575: INFO: >>> kubeConfig: /tmp/kubeconfig
Jan 17 15:17:08.575: INFO: ExecWithOptions: Clientset creation
Jan 17 15:17:08.575: INFO: ExecWithOptions: execute(POST https://172.18.0.3:6443/api/v1/namespaces/emptydir-5121/pods/pod-sharedvolume-4e6379cf-8b39-41f1-b198-849529d97aa6/exec?command=%2Fbin%2Fsh&command=-c&command=cat+%2Fusr%2Fshare%2Fvolumeshare%2Fshareddata.txt&container=busybox-main-container&container=busybox-main-container&stderr=true&stdout=true %!s(MISSING))
Jan 17 15:17:08.663: INFO: Exec stderr: ""
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 17 15:17:08.663: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-5121" for this suite.
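The EmptyDir pass above hinges on two containers mounting the same emptyDir volume, so a file written by the nginx side is readable from the busybox side. A sketch of that pod shape (container and volume names mirror the log; the images and keep-alive command are assumptions, not the test's exact spec):

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
    )

    // sharedVolumePod builds a two-container pod sharing one emptyDir volume;
    // whatever one container writes under the mount path, the other can read
    // back (here via /usr/share/volumeshare/shareddata.txt).
    func sharedVolumePod() *corev1.Pod {
        mount := corev1.VolumeMount{Name: "volumeshare", MountPath: "/usr/share/volumeshare"}
        return &corev1.Pod{
            Spec: corev1.PodSpec{
                Volumes: []corev1.Volume{{
                    Name:         "volumeshare",
                    VolumeSource: corev1.VolumeSource{EmptyDir: &corev1.EmptyDirVolumeSource{}},
                }},
                Containers: []corev1.Container{
                    {Name: "nginx-container", Image: "nginx", VolumeMounts: []corev1.VolumeMount{mount}},
                    {
                        Name:         "busybox-main-container",
                        Image:        "busybox",
                        Command:      []string{"sleep", "3600"}, // kept alive so the test can exec into it
                        VolumeMounts: []corev1.VolumeMount{mount},
                    },
                },
            },
        }
    }

    func main() { fmt.Println(len(sharedVolumePod().Spec.Containers), "containers share one emptyDir") }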
• ------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance]","total":-1,"completed":38,"skipped":816,"failed":0}
------------------------------
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 17 15:17:08.679: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Jan 17 15:17:09.196: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jan 17 15:17:12.218: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should be able to deny attaching pod [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Registering the webhook via the AdmissionRegistration API
STEP: create a pod
STEP: 'kubectl attach' the pod, should be denied by the webhook
Jan 17 15:17:14.259: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=webhook-8591 attach --namespace=webhook-8591 to-be-attached-pod -i -c=container1'
Jan 17 15:17:14.352: INFO: rc: 1
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 17 15:17:14.358: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-8591" for this suite.
STEP: Destroying namespace "webhook-8591-markers" for this suite.
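The deny above works because the registered webhook matches CONNECT requests on the pods/attach subresource, which is why kubectl attach exits with rc: 1. A sketch of what such a registration can look like (the object name, service wiring and CA bundle are placeholders; the real test generates its own):

    package main

    import (
        "fmt"

        admissionv1 "k8s.io/api/admissionregistration/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    // denyAttachWebhook matches CONNECT on pods/attach and routes the review
    // to the webhook service; with FailurePolicy=Fail, a "deny" verdict (or an
    // unreachable webhook) blocks 'kubectl attach'.
    func denyAttachWebhook(caBundle []byte) *admissionv1.ValidatingWebhookConfiguration {
        fail := admissionv1.Fail
        none := admissionv1.SideEffectClassNone
        path := "/pods/attach" // assumed handler path on the webhook server
        return &admissionv1.ValidatingWebhookConfiguration{
            ObjectMeta: metav1.ObjectMeta{Name: "deny-attaching-pod.example.com"},
            Webhooks: []admissionv1.ValidatingWebhook{{
                Name: "deny-attaching-pod.example.com",
                Rules: []admissionv1.RuleWithOperations{{
                    Operations: []admissionv1.OperationType{admissionv1.Connect},
                    Rule: admissionv1.Rule{
                        APIGroups:   []string{""},
                        APIVersions: []string{"v1"},
                        Resources:   []string{"pods/attach"},
                    },
                }},
                ClientConfig: admissionv1.WebhookClientConfig{
                    Service:  &admissionv1.ServiceReference{Namespace: "webhook-8591", Name: "e2e-test-webhook", Path: &path},
                    CABundle: caBundle,
                },
                FailurePolicy:           &fail,
                SideEffects:             &none,
                AdmissionReviewVersions: []string{"v1"},
            }},
        }
    }

    func main() { fmt.Println(denyAttachWebhook(nil).Name) }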
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
• ------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","total":-1,"completed":39,"skipped":818,"failed":0}
------------------------------
[BeforeEach] [sig-apps] Job
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 17 15:17:14.493: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating a job
STEP: Ensuring job reaches completions
[AfterEach] [sig-apps] Job
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 17 15:17:24.539: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-8017" for this suite.
• ------------------------------
{"msg":"PASSED [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]","total":-1,"completed":40,"skipped":845,"failed":0}
------------------------------
[BeforeEach] [sig-storage] Projected configMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 17 15:17:24.573: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating configMap with name projected-configmap-test-volume-map-ff3dfec2-24f7-4464-b29f-e0a2626a20dc
STEP: Creating a pod to test consume configMaps
Jan 17 15:17:24.616: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-50009f0f-b942-4739-8e83-ebd2225757e0" in namespace "projected-8421" to be "Succeeded or Failed"
Jan 17 15:17:24.621: INFO: Pod "pod-projected-configmaps-50009f0f-b942-4739-8e83-ebd2225757e0": Phase="Pending", Reason="", readiness=false. Elapsed: 4.638785ms
Jan 17 15:17:26.629: INFO: Pod "pod-projected-configmaps-50009f0f-b942-4739-8e83-ebd2225757e0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.0124549s
Jan 17 15:17:28.633: INFO: Pod "pod-projected-configmaps-50009f0f-b942-4739-8e83-ebd2225757e0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01710561s
STEP: Saw pod success
Jan 17 15:17:28.634: INFO: Pod "pod-projected-configmaps-50009f0f-b942-4739-8e83-ebd2225757e0" satisfied condition "Succeeded or Failed"
Jan 17 15:17:28.637: INFO: Trying to get logs from node k8s-upgrade-and-conformance-lxkik8-md-0-g974z-f76d7886c-88xc9 pod pod-projected-configmaps-50009f0f-b942-4739-8e83-ebd2225757e0 container agnhost-container: <nil>
STEP: delete the pod
Jan 17 15:17:28.658: INFO: Waiting for pod pod-projected-configmaps-50009f0f-b942-4739-8e83-ebd2225757e0 to disappear
Jan 17 15:17:28.661: INFO: Pod pod-projected-configmaps-50009f0f-b942-4739-8e83-ebd2225757e0 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 17 15:17:28.661: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8421" for this suite.
• ------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":41,"skipped":856,"failed":0}
------------------------------
[BeforeEach] [sig-node] Kubelet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 17 15:17:28.749: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-node] Kubelet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/kubelet.go:38
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/kubelet.go:82
[It] should be possible to delete [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[AfterEach] [sig-node] Kubelet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 17 15:17:28.798: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-9067" for this suite.
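The projected-configMap pass above ("mappings and Item mode set") reduces to a volume in which a configMap key is remapped to a different path and the projected file gets an explicit mode. A sketch of that volume (the key, path and mode are illustrative stand-ins for the test's generated values):

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
    )

    // projectedConfigMapVolume remaps one configMap key to a new path and
    // pins the projected file's mode; the test then reads the file back and
    // verifies both the content and the mode bits.
    func projectedConfigMapVolume() corev1.Volume {
        mode := int32(0400) // the "Item mode" under test
        return corev1.Volume{
            Name: "projected-configmap-volume",
            VolumeSource: corev1.VolumeSource{
                Projected: &corev1.ProjectedVolumeSource{
                    Sources: []corev1.VolumeProjection{{
                        ConfigMap: &corev1.ConfigMapProjection{
                            LocalObjectReference: corev1.LocalObjectReference{Name: "projected-configmap-test-volume-map"},
                            Items: []corev1.KeyToPath{{
                                Key:  "data-1",         // key in the ConfigMap (assumed name)
                                Path: "path/to/data-2", // remapped path inside the mount
                                Mode: &mode,
                            }},
                        },
                    }},
                },
            },
        }
    }

    func main() { fmt.Println(projectedConfigMapVolume().Name) }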
• ------------------------------
{"msg":"PASSED [sig-node] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance]","total":-1,"completed":42,"skipped":900,"failed":0}
------------------------------
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 17 15:17:28.816: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:752
[It] should serve a basic endpoint from pods [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: creating service endpoint-test2 in namespace services-7638
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-7638 to expose endpoints map[]
Jan 17 15:17:28.892: INFO: successfully validated that service endpoint-test2 in namespace services-7638 exposes endpoints map[]
STEP: Creating pod pod1 in namespace services-7638
Jan 17 15:17:28.937: INFO: The status of Pod pod1 is Pending, waiting for it to be Running (with Ready = true)
Jan 17 15:17:30.941: INFO: The status of Pod pod1 is Running (Ready = true)
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-7638 to expose endpoints map[pod1:[80]]
Jan 17 15:17:30.960: INFO: successfully validated that service endpoint-test2 in namespace services-7638 exposes endpoints map[pod1:[80]]
STEP: Checking if the Service forwards traffic to pod1
Jan 17 15:17:30.960: INFO: Creating new exec pod
Jan 17 15:17:33.977: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-7638 exec execpod27pv9 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 endpoint-test2 80'
Jan 17 15:17:34.184: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 endpoint-test2 80\nConnection to endpoint-test2 80 port [tcp/http] succeeded!\n"
Jan 17 15:17:34.184: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request"
Jan 17 15:17:34.184: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-7638 exec execpod27pv9 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.130.161.140 80'
Jan 17 15:17:34.350: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 10.130.161.140 80\nConnection to 10.130.161.140 80 port [tcp/http] succeeded!\n"
Jan 17 15:17:34.350: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request"
STEP: Creating pod pod2 in namespace services-7638
Jan 17 15:17:34.360: INFO: The status of Pod pod2 is Pending, waiting for it to be Running (with Ready = true)
Jan 17 15:17:36.365: INFO: The status of Pod pod2 is Running (Ready = true)
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-7638 to expose endpoints map[pod1:[80] pod2:[80]]
Jan 17 15:17:36.382: INFO: successfully validated that service endpoint-test2 in namespace services-7638 exposes endpoints map[pod1:[80] pod2:[80]]
STEP: Checking if the Service forwards traffic to pod1 and pod2
Jan 17 15:17:37.384: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-7638 exec execpod27pv9 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 endpoint-test2 80'
Jan 17 15:17:37.587: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 endpoint-test2 80\nConnection to endpoint-test2 80 port [tcp/http] succeeded!\n"
Jan 17 15:17:37.587: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request"
Jan 17 15:17:37.587: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-7638 exec execpod27pv9 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.130.161.140 80'
Jan 17 15:17:37.768: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 10.130.161.140 80\nConnection to 10.130.161.140 80 port [tcp/http] succeeded!\n"
Jan 17 15:17:37.769: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request"
STEP: Deleting pod pod1 in namespace services-7638
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-7638 to expose endpoints map[pod2:[80]]
Jan 17 15:17:38.816: INFO: successfully validated that service endpoint-test2 in namespace services-7638 exposes endpoints map[pod2:[80]]
STEP: Checking if the Service forwards traffic to pod2
Jan 17 15:17:39.817: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-7638 exec execpod27pv9 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 endpoint-test2 80'
Jan 17 15:17:39.986: INFO: stderr: "+ + ncecho -v hostName -t\n -w 2 endpoint-test2 80\nConnection to endpoint-test2 80 port [tcp/http] succeeded!\n"
Jan 17 15:17:39.986: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request"
Jan 17 15:17:39.987: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-7638 exec execpod27pv9 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.130.161.140 80'
Jan 17 15:17:40.147: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 10.130.161.140 80\nConnection to 10.130.161.140 80 port [tcp/http] succeeded!\n"
Jan 17 15:17:40.147: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request"
STEP: Deleting pod pod2 in namespace services-7638
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-7638 to expose endpoints map[]
Jan 17 15:17:40.188: INFO: successfully validated that service endpoint-test2 in namespace services-7638 exposes endpoints map[]
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 17 15:17:40.222: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-7638" for this suite.
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:756
• ------------------------------
{"msg":"PASSED [sig-network] Services should serve a basic endpoint from pods [Conformance]","total":-1,"completed":43,"skipped":905,"failed":0}
------------------------------
[BeforeEach] [sig-storage] Subpath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 17 15:17:40.324: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with downward pod [Excluded:WindowsDocker] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating pod pod-subpath-test-downwardapi-zmjg
STEP: Creating a pod to test atomic-volume-subpath
Jan 17 15:17:40.412: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-zmjg" in namespace "subpath-8247" to be "Succeeded or Failed"
Jan 17 15:17:40.416: INFO: Pod "pod-subpath-test-downwardapi-zmjg": Phase="Pending", Reason="", readiness=false. Elapsed: 3.977214ms
Jan 17 15:17:42.423: INFO: Pod "pod-subpath-test-downwardapi-zmjg": Phase="Running", Reason="", readiness=true. Elapsed: 2.011355648s
Jan 17 15:17:44.429: INFO: Pod "pod-subpath-test-downwardapi-zmjg": Phase="Running", Reason="", readiness=true. Elapsed: 4.017665267s
Jan 17 15:17:46.435: INFO: Pod "pod-subpath-test-downwardapi-zmjg": Phase="Running", Reason="", readiness=true. Elapsed: 6.023572329s
Jan 17 15:17:48.442: INFO: Pod "pod-subpath-test-downwardapi-zmjg": Phase="Running", Reason="", readiness=true. Elapsed: 8.029993435s
Jan 17 15:17:50.447: INFO: Pod "pod-subpath-test-downwardapi-zmjg": Phase="Running", Reason="", readiness=true. Elapsed: 10.035114783s
Jan 17 15:17:52.451: INFO: Pod "pod-subpath-test-downwardapi-zmjg": Phase="Running", Reason="", readiness=true. Elapsed: 12.039691521s
Jan 17 15:17:54.457: INFO: Pod "pod-subpath-test-downwardapi-zmjg": Phase="Running", Reason="", readiness=true. Elapsed: 14.045191244s
Jan 17 15:17:56.461: INFO: Pod "pod-subpath-test-downwardapi-zmjg": Phase="Running", Reason="", readiness=true. Elapsed: 16.049134935s
Jan 17 15:17:58.465: INFO: Pod "pod-subpath-test-downwardapi-zmjg": Phase="Running", Reason="", readiness=true. Elapsed: 18.053277586s
Jan 17 15:18:00.472: INFO: Pod "pod-subpath-test-downwardapi-zmjg": Phase="Running", Reason="", readiness=true. Elapsed: 20.060014581s
Jan 17 15:18:02.477: INFO: Pod "pod-subpath-test-downwardapi-zmjg": Phase="Running", Reason="", readiness=false. Elapsed: 22.064805989s
Jan 17 15:18:04.481: INFO: Pod "pod-subpath-test-downwardapi-zmjg": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.069686232s
STEP: Saw pod success
Jan 17 15:18:04.482: INFO: Pod "pod-subpath-test-downwardapi-zmjg" satisfied condition "Succeeded or Failed"
Jan 17 15:18:04.485: INFO: Trying to get logs from node k8s-upgrade-and-conformance-lxkik8-md-0-g974z-f76d7886c-88xc9 pod pod-subpath-test-downwardapi-zmjg container test-container-subpath-downwardapi-zmjg: <nil>
STEP: delete the pod
Jan 17 15:18:04.504: INFO: Waiting for pod pod-subpath-test-downwardapi-zmjg to disappear
Jan 17 15:18:04.507: INFO: Pod pod-subpath-test-downwardapi-zmjg no longer exists
STEP: Deleting pod pod-subpath-test-downwardapi-zmjg
Jan 17 15:18:04.507: INFO: Deleting pod "pod-subpath-test-downwardapi-zmjg" in namespace "subpath-8247"
[AfterEach] [sig-storage] Subpath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 17 15:18:04.510: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-8247" for this suite.
• ------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [Excluded:WindowsDocker] [Conformance]","total":-1,"completed":44,"skipped":932,"failed":0}
------------------------------
[BeforeEach] [sig-apps] ReplicaSet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 17 15:18:04.528: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should adopt matching pods on creation and release no longer matching pods [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Given a Pod with a 'name' label pod-adoption-release is created
Jan 17 15:18:04.569: INFO: The status of Pod pod-adoption-release is Pending, waiting for it to be Running (with Ready = true)
Jan 17 15:18:06.575: INFO: The status of Pod pod-adoption-release is Running (Ready = true)
STEP: When a replicaset with a matching selector is created
STEP: Then the orphan pod is adopted
STEP: When the matched label of one of its pods change
Jan 17 15:18:07.598: INFO: Pod name pod-adoption-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicaSet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 17 15:18:08.621: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replicaset-9401" for this suite.
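The Subpath pass earlier in this chunk mounts a single file out of a downwardAPI volume via subPath rather than mounting the whole volume. A sketch of that shape (the API fields are real; the concrete names, image and paths are illustrative, not the test's generated spec):

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
    )

    // downwardAPISubpathPod projects the pod's own name into a volume file,
    // then mounts just that one file into the container via SubPath.
    func downwardAPISubpathPod() corev1.PodSpec {
        return corev1.PodSpec{
            Volumes: []corev1.Volume{{
                Name: "downward",
                VolumeSource: corev1.VolumeSource{
                    DownwardAPI: &corev1.DownwardAPIVolumeSource{
                        Items: []corev1.DownwardAPIVolumeFile{{
                            Path:     "podname",
                            FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.name"},
                        }},
                    },
                },
            }},
            Containers: []corev1.Container{{
                Name:    "test-container-subpath",
                Image:   "busybox",
                Command: []string{"sh", "-c", "cat /mnt/file"},
                VolumeMounts: []corev1.VolumeMount{{
                    Name:      "downward",
                    MountPath: "/mnt/file",
                    SubPath:   "podname", // mount only the projected file, not the whole volume
                }},
            }},
        }
    }

    func main() { fmt.Println(downwardAPISubpathPod().Containers[0].Name) }

The "Atomic writer volumes" label refers to how the kubelet updates such projected files atomically while the pod keeps reading them through the subPath mount.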
• ------------------------------
{"msg":"PASSED [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance]","total":-1,"completed":45,"skipped":940,"failed":0}
------------------------------
[BeforeEach] [sig-node] Docker Containers
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 17 15:18:08.636: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating a pod to test override arguments
Jan 17 15:18:08.671: INFO: Waiting up to 5m0s for pod "client-containers-a035639a-b6bd-4c20-ba10-9052547d0fe3" in namespace "containers-9598" to be "Succeeded or Failed"
Jan 17 15:18:08.676: INFO: Pod "client-containers-a035639a-b6bd-4c20-ba10-9052547d0fe3": Phase="Pending", Reason="", readiness=false. Elapsed: 4.764988ms
Jan 17 15:18:10.688: INFO: Pod "client-containers-a035639a-b6bd-4c20-ba10-9052547d0fe3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016565267s
Jan 17 15:18:12.694: INFO: Pod "client-containers-a035639a-b6bd-4c20-ba10-9052547d0fe3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.022235308s
STEP: Saw pod success
Jan 17 15:18:12.694: INFO: Pod "client-containers-a035639a-b6bd-4c20-ba10-9052547d0fe3" satisfied condition "Succeeded or Failed"
Jan 17 15:18:12.698: INFO: Trying to get logs from node k8s-upgrade-and-conformance-lxkik8-worker-dueiad pod client-containers-a035639a-b6bd-4c20-ba10-9052547d0fe3 container agnhost-container: <nil>
STEP: delete the pod
Jan 17 15:18:12.782: INFO: Waiting for pod client-containers-a035639a-b6bd-4c20-ba10-9052547d0fe3 to disappear
Jan 17 15:18:12.788: INFO: Pod client-containers-a035639a-b6bd-4c20-ba10-9052547d0fe3 no longer exists
[AfterEach] [sig-node] Docker Containers
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 17 15:18:12.788: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-9598" for this suite.
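The two Docker Containers passes in this chunk (image defaults near the top, argument override here) map onto the container's Command and Args fields: leaving both nil runs the image ENTRYPOINT with its built-in CMD, while setting only Args keeps the ENTRYPOINT and replaces the CMD. A sketch (the agnhost image tag and arguments are assumptions for illustration):

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
    )

    // overrideArgsContainer leaves Command nil so the image ENTRYPOINT is
    // kept, and sets Args so the image CMD ("docker cmd") is replaced.
    func overrideArgsContainer() corev1.Container {
        return corev1.Container{
            Name:  "agnhost-container",
            Image: "k8s.gcr.io/e2e-test-images/agnhost:2.32", // tag assumed
            Args:  []string{"entrypoint-tester", "override", "arguments"},
        }
    }

    func main() { fmt.Println(overrideArgsContainer().Args) }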
• ------------------------------
{"msg":"PASSED [sig-node] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]","total":-1,"completed":46,"skipped":941,"failed":0}
------------------------------
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 17 15:15:59.681: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:752
[It] should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: creating service in namespace services-3565
STEP: creating service affinity-clusterip-transition in namespace services-3565
STEP: creating replication controller affinity-clusterip-transition in namespace services-3565
I0117 15:15:59.739886 23 runners.go:193] Created replication controller with name: affinity-clusterip-transition, namespace: services-3565, replica count: 3
I0117 15:16:02.791232 23 runners.go:193] affinity-clusterip-transition Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
Jan 17 15:16:02.800: INFO: Creating new exec pod
Jan 17 15:16:05.816: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-3565 exec execpod-affinityz6vvk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip-transition 80'
Jan 17 15:16:08.029: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 affinity-clusterip-transition 80\nConnection to affinity-clusterip-transition 80 port [tcp/http] succeeded!\n"
Jan 17 15:16:08.029: INFO: stdout: ""
[... the same probe is retried roughly once per second for the next two minutes; every attempt through 15:18:08 logs the "Connection to affinity-clusterip-transition 80 port [tcp/http] succeeded!" trace on stderr with an empty stdout, i.e. the TCP connect works but no backend ever echoes the payload back ...]
Jan 17 15:18:08.206: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-3565 exec execpod-affinityz6vvk -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip-transition 80'
Jan 17 15:18:10.415: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 affinity-clusterip-transition 80\nConnection to affinity-clusterip-transition 80 port [tcp/http] succeeded!\n"
Jan 17 15:18:10.415: INFO: stdout: ""
Jan 17 15:18:10.415: FAIL: Unexpected error:
    <*errors.errorString | 0xc003a8e420>: {
        s: "service is not reachable within 2m0s timeout on endpoint affinity-clusterip-transition:80 over TCP protocol",
    }
    service is not reachable within 2m0s timeout on endpoint affinity-clusterip-transition:80 over TCP protocol
occurred
Full Stack Trace
k8s.io/kubernetes/test/e2e/network.execAffinityTestForNonLBServiceWithOptionalTransition(0x714959a, {0x7b06bd0, 0xc0036e4600}, 0xc0036fc500, 0x1)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:3311 +0x669
k8s.io/kubernetes/test/e2e/network.execAffinityTestForNonLBServiceWithTransition(...)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:3262
k8s.io/kubernetes/test/e2e/network.glob..func24.26()
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:2099 +0x90
k8s.io/kubernetes/test/e2e.RunE2ETests(0x24c66d7)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:133 +0x697
k8s.io/kubernetes/test/e2e.TestE2E(0x243a8f9)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:136 +0x19
testing.tRunner(0xc000c66b60, 0x735e880)
	/usr/local/go/src/testing/testing.go:1259 +0x102
created by testing.(*T).Run
	/usr/local/go/src/testing/testing.go:1306 +0x35a
Jan 17 15:18:10.416: INFO: Cleaning up the exec pod
STEP: deleting ReplicationController affinity-clusterip-transition in namespace services-3565, will wait for the garbage collector to delete the pods
Jan 17 15:18:10.498: INFO: Deleting ReplicationController affinity-clusterip-transition took: 10.774267ms
Jan 17 15:18:10.598: INFO: Terminating ReplicationController affinity-clusterip-transition pods took: 100.28904ms
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 17 15:18:12.817: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-3565" for this suite.
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:756
• Failure [133.149 seconds]
[sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance] [It]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633

  Jan 17 15:18:10.415: Unexpected error:
      <*errors.errorString | 0xc003a8e420>: {
          s: "service is not reachable within 2m0s timeout on endpoint affinity-clusterip-transition:80 over TCP protocol",
      }
      service is not reachable within 2m0s timeout on endpoint affinity-clusterip-transition:80 over TCP protocol
  occurred

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:3311
------------------------------
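The timeout in the failure above is the generic poll-until-reachable pattern visible in the stack trace (wait.PollImmediate wrapping the nc probe). A minimal sketch of that shape, using the ~1s retry interval and 2m0s deadline from the log; checkService stands in for the kubectl-exec probe sketched earlier:

    package main

    import (
        "errors"
        "fmt"
        "time"

        "k8s.io/apimachinery/pkg/util/wait"
    )

    // checkService is a placeholder for one nc probe: done=true once stdout
    // names a backend pod, done=false (keep polling) while stdout is empty.
    func checkService() (bool, error) {
        return false, nil
    }

    func main() {
        err := wait.PollImmediate(time.Second, 2*time.Minute, checkService)
        if errors.Is(err, wait.ErrWaitTimeout) {
            // this is what surfaces as "service is not reachable within 2m0s timeout ..."
            fmt.Println("service is not reachable within 2m0s timeout")
        }
    }

Because every probe's TCP connect succeeded while stdout stayed empty, the symptom points at the service's backends (or the kube-proxy data path on the upgraded nodes) not answering, rather than at the ClusterIP being unroutable.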
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:3262 k8s.io/kubernetes/test/e2e/network.glob..func24.26() /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:2099 +0x90 k8s.io/kubernetes/test/e2e.RunE2ETests(0x24c66d7) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:133 +0x697 k8s.io/kubernetes/test/e2e.TestE2E(0x243a8f9) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:136 +0x19 testing.tRunner(0xc000c66b60, 0x735e880) /usr/local/go/src/testing/testing.go:1259 +0x102 created by testing.(*T).Run /usr/local/go/src/testing/testing.go:1306 +0x35a Jan 17 15:18:10.416: INFO: Cleaning up the exec pod �[1mSTEP�[0m: deleting ReplicationController affinity-clusterip-transition in namespace services-3565, will wait for the garbage collector to delete the pods Jan 17 15:18:10.498: INFO: Deleting ReplicationController affinity-clusterip-transition took: 10.774267ms Jan 17 15:18:10.598: INFO: Terminating ReplicationController affinity-clusterip-transition pods took: 100.28904ms [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 17 15:18:12.817: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "services-3565" for this suite. [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:756 �[91m�[1m• Failure [133.149 seconds]�[0m [sig-network] Services �[90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23�[0m �[91m�[1mshould be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance] [It]�[0m �[90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633�[0m �[91mJan 17 15:18:10.415: Unexpected error: <*errors.errorString | 0xc003a8e420>: { s: "service is not reachable within 2m0s timeout on endpoint affinity-clusterip-transition:80 over TCP protocol", } service is not reachable within 2m0s timeout on endpoint affinity-clusterip-transition:80 over TCP protocol occurred�[0m /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:3311 �[90m------------------------------�[0m �[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Jan 17 15:13:37.041: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename dns �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace �[1mSTEP�[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should provide DNS for the cluster [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 �[1mSTEP�[0m: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > 
/results/wheezy_tcp@kubernetes.default.svc.cluster.local;sleep 1; done �[1mSTEP�[0m: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;sleep 1; done �[1mSTEP�[0m: creating a pod to probe DNS �[1mSTEP�[0m: submitting the pod to kubernetes �[1mSTEP�[0m: retrieving the pod �[1mSTEP�[0m: looking for the results for each expected name from probers Jan 17 15:17:21.363: INFO: Unable to read wheezy_udp@kubernetes.default.svc.cluster.local from pod dns-9879/dns-test-eb7a0556-c3a1-48f8-b1c8-cbd45bc17d14: the server is currently unable to handle the request (get pods dns-test-eb7a0556-c3a1-48f8-b1c8-cbd45bc17d14) Jan 17 15:18:47.091: FAIL: Unable to read wheezy_tcp@kubernetes.default.svc.cluster.local from pod dns-9879/dns-test-eb7a0556-c3a1-48f8-b1c8-cbd45bc17d14: Get "https://172.18.0.3:6443/api/v1/namespaces/dns-9879/pods/dns-test-eb7a0556-c3a1-48f8-b1c8-cbd45bc17d14/proxy/results/wheezy_tcp@kubernetes.default.svc.cluster.local": context deadline exceeded Full Stack Trace k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.ConditionFunc.WithContext.func1({0x7f3325e4a6a8, 0x0}) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:220 +0x1b k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtectionWithContext({0x79d5088, 0xc000056080}, 0xc003057b58) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:233 +0x7c k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x79d5088, 0xc000056080}, 0x98, 0x2cc8045, 0x68) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:580 +0x38 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x79d5088, 0xc000056080}, 0x4a, 0xc003057be8, 0x2441ec7) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:526 +0x4a k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x78b5a40, 0xc00005c880, 0xc003057c30) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:512 +0x50 k8s.io/kubernetes/test/e2e/network.assertFilesContain({0xc00382ca40, 0x4, 0x4}, {0x70d4db2, 0x7}, 0xc00338a800, {0x7b06bd0, 0xc00379c300}, 0x0, {0x0, ...}) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns_common.go:447 +0x1c5 k8s.io/kubernetes/test/e2e/network.assertFilesExist(...) 
k8s.io/kubernetes/test/e2e/network.validateDNSResults(0xc00111ba20, 0xc00338a800, {0xc00382ca40, 0x4, 0x4})
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns_common.go:504 +0x470
k8s.io/kubernetes/test/e2e/network.glob..func2.1()
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns.go:64 +0x43e
k8s.io/kubernetes/test/e2e.RunE2ETests(0x24c66d7)
  _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:133 +0x697
k8s.io/kubernetes/test/e2e.TestE2E(0x0)
  _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:136 +0x19
testing.tRunner(0xc00070eea0, 0x735e880)
  /usr/local/go/src/testing/testing.go:1259 +0x102
created by testing.(*T).Run
  /usr/local/go/src/testing/testing.go:1306 +0x35a
E0117 15:18:47.092019 17 runtime.go:78] Observed a panic: ginkgowrapper.FailurePanic{Message:"Jan 17 15:18:47.091: Unable to read wheezy_tcp@kubernetes.default.svc.cluster.local from pod dns-9879/dns-test-eb7a0556-c3a1-48f8-b1c8-cbd45bc17d14: Get \"https://172.18.0.3:6443/api/v1/namespaces/dns-9879/pods/dns-test-eb7a0556-c3a1-48f8-b1c8-cbd45bc17d14/proxy/results/wheezy_tcp@kubernetes.default.svc.cluster.local\": context deadline exceeded", Filename:"/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go", Line:220, FullStackTrace:"k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.ConditionFunc.WithContext.func1({0x7f3325e4a6a8, 0x0})\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:220 +0x1b\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtectionWithContext({0x79d5088, 0xc000056080}, 0xc003057b58)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:233 +0x7c\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x79d5088, 0xc000056080}, 0x98, 0x2cc8045, 0x68)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:580 +0x38\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x79d5088, 0xc000056080}, 0x4a, 0xc003057be8, 0x2441ec7)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:526 +0x4a\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x78b5a40, 0xc00005c880, 0xc003057c30)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:512 +0x50\nk8s.io/kubernetes/test/e2e/network.assertFilesContain({0xc00382ca40, 0x4, 0x4}, {0x70d4db2, 0x7}, 0xc00338a800, {0x7b06bd0, 0xc00379c300}, 0x0, {0x0, ...})\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns_common.go:447 +0x1c5\nk8s.io/kubernetes/test/e2e/network.assertFilesExist(...)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns_common.go:441\nk8s.io/kubernetes/test/e2e/network.validateDNSResults(0xc00111ba20, 0xc00338a800, {0xc00382ca40, 0x4, 0x4})\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns_common.go:504 +0x470\nk8s.io/kubernetes/test/e2e/network.glob..func2.1()\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns.go:64 +0x43e\nk8s.io/kubernetes/test/e2e.RunE2ETests(0x24c66d7)\n\t_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:133 +0x697\nk8s.io/kubernetes/test/e2e.TestE2E(0x0)\n\t_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:136 +0x19\ntesting.tRunner(0xc00070eea0, 0x735e880)\n\t/usr/local/go/src/testing/testing.go:1259 +0x102\ncreated by testing.(*T).Run\n\t/usr/local/go/src/testing/testing.go:1306 +0x35a"}
(
Your test failed.
Ginkgo panics to prevent subsequent assertions from running.
Normally Ginkgo rescues this panic so you shouldn't see it.
But, if you make an assertion in a goroutine, Ginkgo can't capture the panic.
To circumvent this, you should call
  defer GinkgoRecover()
at the top of the goroutine that caused this panic.
)

goroutine 136 [running]:
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime.logPanic({0x6c3d000, 0xc0037a6180})
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:74 +0x7d
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime.HandleCrash({0x0, 0x0, 0xc00007c290})
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:48 +0x75
panic({0x6c3d000, 0xc0037a6180})
  /usr/local/go/src/runtime/panic.go:1038 +0x215
k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail.func1()
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/ginkgowrapper/wrapper.go:63 +0x73
panic({0x62d5960, 0x78abec0})
  /usr/local/go/src/runtime/panic.go:1038 +0x215
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo.Fail({0xc0009e46e0, 0x159}, {0xc0030575f0, 0x0, 0x40})
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/ginkgo_dsl.go:260 +0xdd
k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail({0xc0009e46e0, 0x159}, {0xc0030576d0, 0x70cbf6a, 0xc0030576f8})
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/ginkgowrapper/wrapper.go:67 +0x1a7
k8s.io/kubernetes/test/e2e/framework.Failf({0x717cde2, 0x2d}, {0xc003057940, 0x0, 0x0})
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/log.go:51 +0x131
k8s.io/kubernetes/test/e2e/network.assertFilesContain.func1()
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns_common.go:464 +0x889
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.ConditionFunc.WithContext.func1({0x7f3325e4a6a8, 0x0})
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:220 +0x1b
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtectionWithContext({0x79d5088, 0xc000056080}, 0xc003057b58)
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:233 +0x7c
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x79d5088, 0xc000056080}, 0x98, 0x2cc8045, 0x68)
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:580 +0x38
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x79d5088, 0xc000056080}, 0x4a, 0xc003057be8, 0x2441ec7)
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:526 +0x4a
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x78b5a40, 0xc00005c880, 0xc003057c30)
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:512 +0x50
k8s.io/kubernetes/test/e2e/network.assertFilesContain({0xc00382ca40, 0x4, 0x4}, {0x70d4db2, 0x7}, 0xc00338a800, {0x7b06bd0, 0xc00379c300}, 0x0, {0x0, ...})
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns_common.go:447 +0x1c5
k8s.io/kubernetes/test/e2e/network.assertFilesExist(...)
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns_common.go:441
k8s.io/kubernetes/test/e2e/network.validateDNSResults(0xc00111ba20, 0xc00338a800, {0xc00382ca40, 0x4, 0x4})
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns_common.go:504 +0x470
k8s.io/kubernetes/test/e2e/network.glob..func2.1()
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns.go:64 +0x43e
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes.(*runner).runSync(0xc00070f040)
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/runner.go:113 +0xba
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes.(*runner).run(0xc0030595c8)
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/runner.go:64 +0x125
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes.(*ItNode).Run(0x0)
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/it_node.go:26 +0x7b
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec.(*Spec).runSample(0xc00063ca50, 0xc003059990, {0x78b5a40, 0xc00005c880})
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec/spec.go:215 +0x2a9
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec.(*Spec).Run(0xc00063ca50, {0x78b5a40, 0xc00005c880})
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec/spec.go:138 +0xe7
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner.(*SpecRunner).runSpec(0xc00036e000, 0xc00063ca50)
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner/spec_runner.go:200 +0xe5
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner.(*SpecRunner).runSpecs(0xc00036e000)
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner/spec_runner.go:170 +0x1a5
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner.(*SpecRunner).Run(0xc00036e000)
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner/spec_runner.go:66 +0xc5
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/suite.(*Suite).Run(0xc00019a070, {0x7f332552c510, 0xc00070eea0}, {0x710baa9, 0x40}, {0xc0009ea420, 0x3, 0x3}, {0x7a2d318, 0xc00005c880}, ...)
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/suite/suite.go:79 +0x4d2
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo.RunSpecsWithCustomReporters({0x78bc3a0, 0xc00070eea0}, {0x710baa9, 0x14}, {0xc00005d640, 0x3, 0x6})
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/ginkgo_dsl.go:219 +0x185
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo.RunSpecsWithDefaultAndCustomReporters({0x78bc3a0, 0xc00070eea0}, {0x710baa9, 0x14}, {0xc00007f780, 0x2, 0x2})
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/ginkgo_dsl.go:207 +0xf9
k8s.io/kubernetes/test/e2e.RunE2ETests(0x24c66d7)
  _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:133 +0x697
k8s.io/kubernetes/test/e2e.TestE2E(0x0)
  _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:136 +0x19
testing.tRunner(0xc00070eea0, 0x735e880)
  /usr/local/go/src/testing/testing.go:1259 +0x102
created by testing.(*T).Run
  /usr/local/go/src/testing/testing.go:1306 +0x35a
STEP: deleting the pod
[AfterEach] [sig-network] DNS
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 17 15:18:47.103: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-9879" for this suite.

• Failure [310.075 seconds]
[sig-network] DNS
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should provide DNS for the cluster [Conformance] [It]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633

  Jan 17 15:18:47.091: Unable to read wheezy_tcp@kubernetes.default.svc.cluster.local from pod dns-9879/dns-test-eb7a0556-c3a1-48f8-b1c8-cbd45bc17d14: Get "https://172.18.0.3:6443/api/v1/namespaces/dns-9879/pods/dns-test-eb7a0556-c3a1-48f8-b1c8-cbd45bc17d14/proxy/results/wheezy_tcp@kubernetes.default.svc.cluster.local": context deadline exceeded

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:220
------------------------------
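The panic text above spells out the fix Ginkgo recommends: any assertion made inside a goroutine must be preceded by defer GinkgoRecover() so the failure panic is routed back to Ginkgo instead of crashing the test binary. A minimal sketch of that pattern, using the v1 Ginkgo API this suite vendors (illustrative only, not code from the suite):

package e2e_test

import (
	. "github.com/onsi/ginkgo"
	. "github.com/onsi/gomega"
)

var _ = Describe("asserting from a goroutine", func() {
	It("recovers a failed assertion instead of crashing the process", func() {
		done := make(chan struct{})
		go func() {
			// Without GinkgoRecover, a failing Expect panics straight through
			// this goroutine and takes down the whole process.
			defer GinkgoRecover()
			defer close(done)
			Expect(1 + 1).To(Equal(2))
		}()
		<-done
	})
})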
{"msg":"FAILED [sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance]","total":-1,"completed":6,"skipped":196,"failed":1,"failures":["[sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance]"]}
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 17 15:12:03.833: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:244
[BeforeEach] Update Demo
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:296
[It] should scale a replication controller [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: creating a replication controller
Jan 17 15:12:03.906: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-9607 create -f -'
Jan 17 15:12:06.508: INFO: stderr: ""
Jan 17 15:12:06.508: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Jan 17 15:12:06.508: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-9607 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo'
Jan 17 15:12:06.702: INFO: stderr: ""
Jan 17 15:12:06.702: INFO: stdout: "update-demo-nautilus-bhzft update-demo-nautilus-dnd4n "
Jan 17 15:12:06.702: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-9607 get pods update-demo-nautilus-bhzft -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}'
Jan 17 15:12:06.941: INFO: stderr: ""
Jan 17 15:12:06.941: INFO: stdout: ""
Jan 17 15:12:06.941: INFO: update-demo-nautilus-bhzft is created but not running
Jan 17 15:12:11.942: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-9607 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo'
Jan 17 15:12:12.195: INFO: stderr: ""
Jan 17 15:12:12.195: INFO: stdout: "update-demo-nautilus-bhzft update-demo-nautilus-dnd4n "
Jan 17 15:12:12.195: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-9607 get pods update-demo-nautilus-bhzft -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}'
Jan 17 15:12:12.425: INFO: stderr: ""
Jan 17 15:12:12.425: INFO: stdout: "true"
Jan 17 15:12:12.425: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-9607 get pods update-demo-nautilus-bhzft -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}'
Jan 17 15:12:12.686: INFO: stderr: ""
Jan 17 15:12:12.687: INFO: stdout: "k8s.gcr.io/e2e-test-images/nautilus:1.5"
Jan 17 15:12:12.687: INFO: validating pod update-demo-nautilus-bhzft
Jan 17 15:12:12.700: INFO: got data: { "image": "nautilus.jpg" }
Jan 17 15:12:12.701: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jan 17 15:12:12.701: INFO: update-demo-nautilus-bhzft is verified up and running
Jan 17 15:12:12.701: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-9607 get pods update-demo-nautilus-dnd4n -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}'
Jan 17 15:12:12.922: INFO: stderr: ""
Jan 17 15:12:12.922: INFO: stdout: "true"
Jan 17 15:12:12.922: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-9607 get pods update-demo-nautilus-dnd4n -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}'
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' Jan 17 15:12:13.232: INFO: stderr: "" Jan 17 15:12:13.233: INFO: stdout: "k8s.gcr.io/e2e-test-images/nautilus:1.5" Jan 17 15:12:13.233: INFO: validating pod update-demo-nautilus-dnd4n Jan 17 15:15:47.155: INFO: update-demo-nautilus-dnd4n is running right image but validator function failed: the server is currently unable to handle the request (get pods update-demo-nautilus-dnd4n) Jan 17 15:15:52.156: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-9607 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Jan 17 15:15:52.288: INFO: stderr: "" Jan 17 15:15:52.288: INFO: stdout: "update-demo-nautilus-bhzft update-demo-nautilus-dnd4n " Jan 17 15:15:52.288: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-9607 get pods update-demo-nautilus-bhzft -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Jan 17 15:15:52.463: INFO: stderr: "" Jan 17 15:15:52.463: INFO: stdout: "true" Jan 17 15:15:52.463: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-9607 get pods update-demo-nautilus-bhzft -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' Jan 17 15:15:52.566: INFO: stderr: "" Jan 17 15:15:52.566: INFO: stdout: "k8s.gcr.io/e2e-test-images/nautilus:1.5" Jan 17 15:15:52.566: INFO: validating pod update-demo-nautilus-bhzft Jan 17 15:15:52.574: INFO: got data: { "image": "nautilus.jpg" } Jan 17 15:15:52.574: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jan 17 15:15:52.574: INFO: update-demo-nautilus-bhzft is verified up and running Jan 17 15:15:52.574: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-9607 get pods update-demo-nautilus-dnd4n -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Jan 17 15:15:52.682: INFO: stderr: "" Jan 17 15:15:52.682: INFO: stdout: "true" Jan 17 15:15:52.682: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-9607 get pods update-demo-nautilus-dnd4n -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' Jan 17 15:15:52.785: INFO: stderr: "" Jan 17 15:15:52.785: INFO: stdout: "k8s.gcr.io/e2e-test-images/nautilus:1.5" Jan 17 15:15:52.785: INFO: validating pod update-demo-nautilus-dnd4n Jan 17 15:19:26.291: INFO: update-demo-nautilus-dnd4n is running right image but validator function failed: the server is currently unable to handle the request (get pods update-demo-nautilus-dnd4n) Jan 17 15:19:31.293: FAIL: Timed out after 300 seconds waiting for name=update-demo pods to reach valid state Full Stack Trace k8s.io/kubernetes/test/e2e/kubectl.glob..func1.6.3() /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:327 +0x22f k8s.io/kubernetes/test/e2e.RunE2ETests(0x24c66d7) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:133 +0x697 k8s.io/kubernetes/test/e2e.TestE2E(0x0) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:136 +0x19 testing.tRunner(0xc0007f6680, 0x735e880) /usr/local/go/src/testing/testing.go:1259 +0x102 created by testing.(*T).Run /usr/local/go/src/testing/testing.go:1306 +0x35a �[1mSTEP�[0m: using delete to clean up resources Jan 17 15:19:31.294: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-9607 delete --grace-period=0 --force -f -' Jan 17 15:19:31.402: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Jan 17 15:19:31.402: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" Jan 17 15:19:31.403: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-9607 get rc,svc -l name=update-demo --no-headers' Jan 17 15:19:31.689: INFO: stderr: "No resources found in kubectl-9607 namespace.\n" Jan 17 15:19:31.689: INFO: stdout: "" Jan 17 15:19:31.690: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-9607 get pods -l name=update-demo -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Jan 17 15:19:31.917: INFO: stderr: "" Jan 17 15:19:31.918: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Jan 17 15:19:31.918: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "kubectl-9607" for this suite. 
[BeforeEach] [sig-storage] Projected configMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 17 15:18:12.834: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating projection with configMap that has name projected-configmap-test-upd-0b0de9a6-16d1-4fb6-8cd8-f2665c49ac17
STEP: Creating the pod
Jan 17 15:18:12.886: INFO: The status of Pod pod-projected-configmaps-d84e2105-fe4c-4f06-bdaf-5e7d159c2093 is Pending, waiting for it to be Running (with Ready = true)
Jan 17 15:18:14.890: INFO: The status of Pod pod-projected-configmaps-d84e2105-fe4c-4f06-bdaf-5e7d159c2093 is Running (Ready = true)
STEP: Updating configmap projected-configmap-test-upd-0b0de9a6-16d1-4fb6-8cd8-f2665c49ac17
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 17 15:19:35.309: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-561" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance]","total":-1,"completed":47,"skipped":960,"failed":0}
S
------------------------------
[BeforeEach] [sig-storage] Projected secret
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 17 15:19:35.329: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating projection with secret that has name projected-secret-test-ed5a8bdc-a0d7-4ab6-aeaa-fa4a18304e0a
STEP: Creating a pod to test consume secrets
Jan 17 15:19:35.386: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-d5868a88-c1fc-4dd0-80da-bc884c6cd2f0" in namespace "projected-8149" to be "Succeeded or Failed"
Jan 17 15:19:35.395: INFO: Pod "pod-projected-secrets-d5868a88-c1fc-4dd0-80da-bc884c6cd2f0": Phase="Pending", Reason="", readiness=false. Elapsed: 8.563257ms
Jan 17 15:19:37.401: INFO: Pod "pod-projected-secrets-d5868a88-c1fc-4dd0-80da-bc884c6cd2f0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014980308s
Jan 17 15:19:39.410: INFO: Pod "pod-projected-secrets-d5868a88-c1fc-4dd0-80da-bc884c6cd2f0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.023703847s
STEP: Saw pod success
Jan 17 15:19:39.410: INFO: Pod "pod-projected-secrets-d5868a88-c1fc-4dd0-80da-bc884c6cd2f0" satisfied condition "Succeeded or Failed"
Jan 17 15:19:39.414: INFO: Trying to get logs from node k8s-upgrade-and-conformance-lxkik8-md-0-g974z-f76d7886c-xqvh2 pod pod-projected-secrets-d5868a88-c1fc-4dd0-80da-bc884c6cd2f0 container projected-secret-volume-test: <nil>
STEP: delete the pod
Jan 17 15:19:39.433: INFO: Waiting for pod pod-projected-secrets-d5868a88-c1fc-4dd0-80da-bc884c6cd2f0 to disappear
Jan 17 15:19:39.436: INFO: Pod pod-projected-secrets-d5868a88-c1fc-4dd0-80da-bc884c6cd2f0 no longer exists
[AfterEach] [sig-storage] Projected secret
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 17 15:19:39.436: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8149" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":48,"skipped":961,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
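Several specs in this stretch block on the same condition: "Waiting up to 5m0s for pod ... to be 'Succeeded or Failed'". In client-go terms that wait is roughly the following; a sketch rather than the framework's implementation, with the namespace and pod name borrowed from the passing spec above and a 2s interval assumed:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/kubeconfig")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	// Poll until the pod reaches a terminal phase, with the 5m budget quoted in the log.
	err = wait.PollImmediate(2*time.Second, 5*time.Minute, func() (bool, error) {
		pod, err := client.CoreV1().Pods("projected-8149").Get(context.TODO(),
			"pod-projected-secrets-d5868a88-c1fc-4dd0-80da-bc884c6cd2f0", metav1.GetOptions{})
		if err != nil {
			return false, err // a hard API error ends the wait early
		}
		return pod.Status.Phase == corev1.PodSucceeded || pod.Status.Phase == corev1.PodFailed, nil
	})
	if err != nil {
		panic(err)
	}
	fmt.Println("pod satisfied condition \"Succeeded or Failed\"")
}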
{"msg":"FAILED [sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance]","total":-1,"completed":6,"skipped":196,"failed":2,"failures":["[sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance]","[sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance]"]}
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 17 15:19:31.952: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:244
[BeforeEach] Update Demo
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:296
[It] should scale a replication controller [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: creating a replication controller
Jan 17 15:19:31.996: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-6918 create -f -'
Jan 17 15:19:32.408: INFO: stderr: ""
Jan 17 15:19:32.408: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Jan 17 15:19:32.408: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-6918 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo'
Jan 17 15:19:32.588: INFO: stderr: ""
Jan 17 15:19:32.588: INFO: stdout: "update-demo-nautilus-9f5dj update-demo-nautilus-jjk6t "
Jan 17 15:19:32.588: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-6918 get pods update-demo-nautilus-9f5dj -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}'
"state" "running"))}}true{{end}}{{end}}{{end}}' Jan 17 15:19:32.738: INFO: stderr: "" Jan 17 15:19:32.738: INFO: stdout: "" Jan 17 15:19:32.739: INFO: update-demo-nautilus-9f5dj is created but not running Jan 17 15:19:37.739: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-6918 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Jan 17 15:19:37.817: INFO: stderr: "" Jan 17 15:19:37.817: INFO: stdout: "update-demo-nautilus-9f5dj update-demo-nautilus-jjk6t " Jan 17 15:19:37.817: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-6918 get pods update-demo-nautilus-9f5dj -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Jan 17 15:19:37.885: INFO: stderr: "" Jan 17 15:19:37.885: INFO: stdout: "true" Jan 17 15:19:37.885: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-6918 get pods update-demo-nautilus-9f5dj -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' Jan 17 15:19:37.960: INFO: stderr: "" Jan 17 15:19:37.960: INFO: stdout: "k8s.gcr.io/e2e-test-images/nautilus:1.5" Jan 17 15:19:37.960: INFO: validating pod update-demo-nautilus-9f5dj Jan 17 15:19:37.965: INFO: got data: { "image": "nautilus.jpg" } Jan 17 15:19:37.965: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jan 17 15:19:37.965: INFO: update-demo-nautilus-9f5dj is verified up and running Jan 17 15:19:37.965: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-6918 get pods update-demo-nautilus-jjk6t -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Jan 17 15:19:38.051: INFO: stderr: "" Jan 17 15:19:38.051: INFO: stdout: "true" Jan 17 15:19:38.051: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-6918 get pods update-demo-nautilus-jjk6t -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' Jan 17 15:19:38.127: INFO: stderr: "" Jan 17 15:19:38.127: INFO: stdout: "k8s.gcr.io/e2e-test-images/nautilus:1.5" Jan 17 15:19:38.127: INFO: validating pod update-demo-nautilus-jjk6t Jan 17 15:19:38.133: INFO: got data: { "image": "nautilus.jpg" } Jan 17 15:19:38.133: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jan 17 15:19:38.133: INFO: update-demo-nautilus-jjk6t is verified up and running �[1mSTEP�[0m: scaling down the replication controller Jan 17 15:19:38.135: INFO: scanned /root for discovery docs: <nil> Jan 17 15:19:38.135: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-6918 scale rc update-demo-nautilus --replicas=1 --timeout=5m' Jan 17 15:19:39.235: INFO: stderr: "" Jan 17 15:19:39.235: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" �[1mSTEP�[0m: waiting for all containers in name=update-demo pods to come up. 
Jan 17 15:19:39.235: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-6918 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo'
Jan 17 15:19:39.319: INFO: stderr: ""
Jan 17 15:19:39.319: INFO: stdout: "update-demo-nautilus-9f5dj update-demo-nautilus-jjk6t "
STEP: Replicas for name=update-demo: expected=1 actual=2
Jan 17 15:19:44.320: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-6918 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo'
Jan 17 15:19:44.392: INFO: stderr: ""
Jan 17 15:19:44.392: INFO: stdout: "update-demo-nautilus-jjk6t "
Jan 17 15:19:44.392: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-6918 get pods update-demo-nautilus-jjk6t -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}'
Jan 17 15:19:44.463: INFO: stderr: ""
Jan 17 15:19:44.463: INFO: stdout: "true"
Jan 17 15:19:44.463: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-6918 get pods update-demo-nautilus-jjk6t -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}'
Jan 17 15:19:44.548: INFO: stderr: ""
Jan 17 15:19:44.548: INFO: stdout: "k8s.gcr.io/e2e-test-images/nautilus:1.5"
Jan 17 15:19:44.548: INFO: validating pod update-demo-nautilus-jjk6t
Jan 17 15:19:44.552: INFO: got data: { "image": "nautilus.jpg" }
Jan 17 15:19:44.552: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jan 17 15:19:44.552: INFO: update-demo-nautilus-jjk6t is verified up and running
STEP: scaling up the replication controller
Jan 17 15:19:44.554: INFO: scanned /root for discovery docs: <nil>
Jan 17 15:19:44.554: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-6918 scale rc update-demo-nautilus --replicas=2 --timeout=5m'
Jan 17 15:19:45.656: INFO: stderr: ""
Jan 17 15:19:45.656: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Jan 17 15:19:45.656: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-6918 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo'
Jan 17 15:19:45.745: INFO: stderr: ""
Jan 17 15:19:45.745: INFO: stdout: "update-demo-nautilus-hhjd4 update-demo-nautilus-jjk6t "
Jan 17 15:19:45.745: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-6918 get pods update-demo-nautilus-hhjd4 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}'
"state" "running"))}}true{{end}}{{end}}{{end}}' Jan 17 15:19:45.834: INFO: stderr: "" Jan 17 15:19:45.834: INFO: stdout: "" Jan 17 15:19:45.834: INFO: update-demo-nautilus-hhjd4 is created but not running Jan 17 15:19:50.837: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-6918 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Jan 17 15:19:50.916: INFO: stderr: "" Jan 17 15:19:50.917: INFO: stdout: "update-demo-nautilus-hhjd4 update-demo-nautilus-jjk6t " Jan 17 15:19:50.917: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-6918 get pods update-demo-nautilus-hhjd4 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Jan 17 15:19:50.992: INFO: stderr: "" Jan 17 15:19:50.992: INFO: stdout: "true" Jan 17 15:19:50.992: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-6918 get pods update-demo-nautilus-hhjd4 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' Jan 17 15:19:51.063: INFO: stderr: "" Jan 17 15:19:51.063: INFO: stdout: "k8s.gcr.io/e2e-test-images/nautilus:1.5" Jan 17 15:19:51.063: INFO: validating pod update-demo-nautilus-hhjd4 Jan 17 15:19:51.069: INFO: got data: { "image": "nautilus.jpg" } Jan 17 15:19:51.069: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jan 17 15:19:51.069: INFO: update-demo-nautilus-hhjd4 is verified up and running Jan 17 15:19:51.069: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-6918 get pods update-demo-nautilus-jjk6t -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Jan 17 15:19:51.144: INFO: stderr: "" Jan 17 15:19:51.144: INFO: stdout: "true" Jan 17 15:19:51.144: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-6918 get pods update-demo-nautilus-jjk6t -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' Jan 17 15:19:51.220: INFO: stderr: "" Jan 17 15:19:51.220: INFO: stdout: "k8s.gcr.io/e2e-test-images/nautilus:1.5" Jan 17 15:19:51.220: INFO: validating pod update-demo-nautilus-jjk6t Jan 17 15:19:51.224: INFO: got data: { "image": "nautilus.jpg" } Jan 17 15:19:51.224: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jan 17 15:19:51.224: INFO: update-demo-nautilus-jjk6t is verified up and running �[1mSTEP�[0m: using delete to clean up resources Jan 17 15:19:51.224: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-6918 delete --grace-period=0 --force -f -' Jan 17 15:19:51.306: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
Jan 17 15:19:51.306: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
Jan 17 15:19:51.306: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-6918 get rc,svc -l name=update-demo --no-headers'
Jan 17 15:19:51.431: INFO: stderr: "No resources found in kubectl-6918 namespace.\n"
Jan 17 15:19:51.432: INFO: stdout: ""
Jan 17 15:19:51.432: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-6918 get pods -l name=update-demo -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Jan 17 15:19:51.555: INFO: stderr: ""
Jan 17 15:19:51.555: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 17 15:19:51.555: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-6918" for this suite.

•
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance]","total":-1,"completed":7,"skipped":196,"failed":2,"failures":["[sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance]","[sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 17 15:19:51.625: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating a pod to test downward API volume plugin
Jan 17 15:19:51.672: INFO: Waiting up to 5m0s for pod "downwardapi-volume-d4ee3dc7-27a3-49e8-9687-4142e1943173" in namespace "downward-api-65" to be "Succeeded or Failed"
Jan 17 15:19:51.678: INFO: Pod "downwardapi-volume-d4ee3dc7-27a3-49e8-9687-4142e1943173": Phase="Pending", Reason="", readiness=false. Elapsed: 5.505414ms
Jan 17 15:19:53.683: INFO: Pod "downwardapi-volume-d4ee3dc7-27a3-49e8-9687-4142e1943173": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011291594s
Jan 17 15:19:55.689: INFO: Pod "downwardapi-volume-d4ee3dc7-27a3-49e8-9687-4142e1943173": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.017247539s
STEP: Saw pod success
Jan 17 15:19:55.689: INFO: Pod "downwardapi-volume-d4ee3dc7-27a3-49e8-9687-4142e1943173" satisfied condition "Succeeded or Failed"
Jan 17 15:19:55.694: INFO: Trying to get logs from node k8s-upgrade-and-conformance-lxkik8-worker-dueiad pod downwardapi-volume-d4ee3dc7-27a3-49e8-9687-4142e1943173 container client-container: <nil>
STEP: delete the pod
Jan 17 15:19:55.726: INFO: Waiting for pod downwardapi-volume-d4ee3dc7-27a3-49e8-9687-4142e1943173 to disappear
Jan 17 15:19:55.730: INFO: Pod downwardapi-volume-d4ee3dc7-27a3-49e8-9687-4142e1943173 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 17 15:19:55.730: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-65" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance]","total":-1,"completed":8,"skipped":228,"failed":2,"failures":["[sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance]","[sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance]"]}
SSSSSSS
------------------------------
[BeforeEach] [sig-node] Probing container
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 17 15:19:39.507: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-node] Probing container
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:59
[It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating pod liveness-c948fea4-115e-414f-b01e-fdbd1f8663ef in namespace container-probe-6514
Jan 17 15:19:41.554: INFO: Started pod liveness-c948fea4-115e-414f-b01e-fdbd1f8663ef in namespace container-probe-6514
STEP: checking the pod's current state and verifying that restartCount is present
Jan 17 15:19:41.561: INFO: Initial restart count of pod liveness-c948fea4-115e-414f-b01e-fdbd1f8663ef is 0
Jan 17 15:20:01.633: INFO: Restart count of pod container-probe-6514/liveness-c948fea4-115e-414f-b01e-fdbd1f8663ef is now 1 (20.071763433s elapsed)
STEP: deleting the pod
[AfterEach] [sig-node] Probing container
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 17 15:20:01.650: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-6514" for this suite.

•
------------------------------
{"msg":"PASSED [sig-node] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":-1,"completed":49,"skipped":998,"failed":0}
SSSSSSSSSSSSSS
------------------------------
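The probe behind the restart above is an HTTP GET against /healthz; once the endpoint starts failing, the kubelet kills and restarts the container, which is what moves restartCount from 0 to 1 in roughly 20 seconds. A sketch of such a probe in the v1.23-era core/v1 API; the port and thresholds are illustrative guesses, not the suite's exact values:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	probe := &corev1.Probe{
		// In v1.23 this embedded field is named Handler; it was renamed
		// ProbeHandler in v1.24, so adjust if compiling against newer APIs.
		Handler: corev1.Handler{
			HTTPGet: &corev1.HTTPGetAction{
				Path: "/healthz",
				Port: intstr.FromInt(8080),
			},
		},
		InitialDelaySeconds: 5, // give the container time to start serving
		PeriodSeconds:       3, // probe cadence; failures trigger a kubelet restart
		FailureThreshold:    1,
	}
	fmt.Printf("liveness: GET %s every %ds\n", probe.HTTPGet.Path, probe.PeriodSeconds)
}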
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 17 15:20:01.711: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating a pod to test downward API volume plugin
Jan 17 15:20:01.776: INFO: Waiting up to 5m0s for pod "downwardapi-volume-8c099cdd-018a-4a25-acbe-3b06660b30b6" in namespace "downward-api-5837" to be "Succeeded or Failed"
Jan 17 15:20:01.783: INFO: Pod "downwardapi-volume-8c099cdd-018a-4a25-acbe-3b06660b30b6": Phase="Pending", Reason="", readiness=false. Elapsed: 6.803336ms
Jan 17 15:20:03.789: INFO: Pod "downwardapi-volume-8c099cdd-018a-4a25-acbe-3b06660b30b6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011991947s
Jan 17 15:20:05.794: INFO: Pod "downwardapi-volume-8c099cdd-018a-4a25-acbe-3b06660b30b6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.017259534s
STEP: Saw pod success
Jan 17 15:20:05.794: INFO: Pod "downwardapi-volume-8c099cdd-018a-4a25-acbe-3b06660b30b6" satisfied condition "Succeeded or Failed"
Jan 17 15:20:05.799: INFO: Trying to get logs from node k8s-upgrade-and-conformance-lxkik8-worker-dueiad pod downwardapi-volume-8c099cdd-018a-4a25-acbe-3b06660b30b6 container client-container: <nil>
STEP: delete the pod
Jan 17 15:20:05.819: INFO: Waiting for pod downwardapi-volume-8c099cdd-018a-4a25-acbe-3b06660b30b6 to disappear
Jan 17 15:20:05.822: INFO: Pod downwardapi-volume-8c099cdd-018a-4a25-acbe-3b06660b30b6 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 17 15:20:05.822: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-5837" for this suite.

•
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance]","total":-1,"completed":50,"skipped":1012,"failed":0}
SSSSSS
------------------------------
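The two downward API specs above both mount a volume whose files are populated from the container's own resource fields; the "cpu request" variant projects requests.cpu into a file the test reads back. Roughly the following volume definition, as a sketch; the container name is taken from the log, everything else is illustrative:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	vol := corev1.Volume{
		Name: "podinfo",
		VolumeSource: corev1.VolumeSource{
			DownwardAPI: &corev1.DownwardAPIVolumeSource{
				Items: []corev1.DownwardAPIVolumeFile{{
					Path: "cpu_request", // the file the test reads out of the volume
					ResourceFieldRef: &corev1.ResourceFieldSelector{
						ContainerName: "client-container", // container name as seen in the log
						Resource:      "requests.cpu",
					},
				}},
			},
		},
	}
	fmt.Println("downward API volume:", vol.Name)
}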
[BeforeEach] [sig-apps] ReplicationController
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 17 15:20:05.842: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-apps] ReplicationController
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:54
[It] should release no longer matching pods [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Given a ReplicationController is created
STEP: When the matched label of one of its pods change
Jan 17 15:20:05.883: INFO: Pod name pod-release: Found 0 pods out of 1
Jan 17 15:20:10.889: INFO: Pod name pod-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicationController
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 17 15:20:11.911: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-1717" for this suite.

•
------------------------------
{"msg":"PASSED [sig-apps] ReplicationController should release no longer matching pods [Conformance]","total":-1,"completed":51,"skipped":1018,"failed":0}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 17 15:19:55.754: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] updates the published spec when one version gets renamed [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: set up a multi version CRD
Jan 17 15:19:55.784: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: rename a version
STEP: check the new version name is served
STEP: check the old version name is removed
STEP: check the other version is not changed
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 17 15:20:13.735: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-9468" for this suite.
"crd-publish-openapi-9468" for this suite. �[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance]","total":-1,"completed":9,"skipped":235,"failed":2,"failures":["[sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance]","[sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance]"]} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-api-machinery] Watchers /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Jan 17 15:20:13.784: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename watch �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace �[1mSTEP�[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 �[1mSTEP�[0m: creating a watch on configmaps with a certain label �[1mSTEP�[0m: creating a new configmap �[1mSTEP�[0m: modifying the configmap once �[1mSTEP�[0m: changing the label value of the configmap �[1mSTEP�[0m: Expecting to observe a delete notification for the watched object Jan 17 15:20:13.836: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-7049 c7858af0-ea40-4140-955d-f5e8d8222b71 8522 0 2023-01-17 15:20:13 +0000 UTC <nil> <nil> map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2023-01-17 15:20:13 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} Jan 17 15:20:13.837: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-7049 c7858af0-ea40-4140-955d-f5e8d8222b71 8524 0 2023-01-17 15:20:13 +0000 UTC <nil> <nil> map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2023-01-17 15:20:13 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} Jan 17 15:20:13.837: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-7049 c7858af0-ea40-4140-955d-f5e8d8222b71 8525 0 2023-01-17 15:20:13 +0000 UTC <nil> <nil> map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2023-01-17 15:20:13 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} �[1mSTEP�[0m: modifying the configmap a second time �[1mSTEP�[0m: Expecting not to observe a notification because the object no longer meets the selector's requirements �[1mSTEP�[0m: changing the label value of the configmap back �[1mSTEP�[0m: modifying the configmap a third time �[1mSTEP�[0m: deleting the configmap �[1mSTEP�[0m: Expecting to observe an add notification for the watched object when the label 
Jan 17 15:20:23.879: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-7049 c7858af0-ea40-4140-955d-f5e8d8222b71 8582 0 2023-01-17 15:20:13 +0000 UTC <nil> <nil> map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2023-01-17 15:20:13 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
Jan 17 15:20:23.879: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-7049 c7858af0-ea40-4140-955d-f5e8d8222b71 8583 0 2023-01-17 15:20:13 +0000 UTC <nil> <nil> map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2023-01-17 15:20:13 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},Immutable:nil,}
Jan 17 15:20:23.879: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-7049 c7858af0-ea40-4140-955d-f5e8d8222b71 8584 0 2023-01-17 15:20:13 +0000 UTC <nil> <nil> map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2023-01-17 15:20:13 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},Immutable:nil,}
[AfterEach] [sig-api-machinery] Watchers
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 17 15:20:23.879: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-7049" for this suite.

•
------------------------------
{"msg":"PASSED [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance]","total":-1,"completed":10,"skipped":251,"failed":2,"failures":["[sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance]","[sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
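The watch spec above is label-selector semantics in action: relabeling the configmap off the selector surfaces as DELETED, and relabeling it back surfaces as ADDED with its data intact (mutation: 2 above). A compact client-go equivalent, as a sketch; the namespace and label are copied from the log:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/kubeconfig")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	w, err := client.CoreV1().ConfigMaps("watch-7049").Watch(context.TODO(), metav1.ListOptions{
		// Only configmaps carrying this label are visible to the watch.
		LabelSelector: "watch-this-configmap=label-changed-and-restored",
	})
	if err != nil {
		panic(err)
	}
	defer w.Stop()
	for ev := range w.ResultChan() {
		// An object leaving the selector is reported as DELETED, and one
		// re-entering it as ADDED, mirroring the "Got :" lines above.
		fmt.Println("Got :", ev.Type)
	}
}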
•
------------------------------
{"msg":"PASSED [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance]","total":-1,"completed":10,"skipped":251,"failed":2,"failures":["[sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance]","[sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance]"]}
------------------------------
[BeforeEach] [sig-node] Variable Expansion
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 17 15:20:23.942: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should allow composing env vars into new env vars [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating a pod to test env composition
Jan 17 15:20:23.992: INFO: Waiting up to 5m0s for pod "var-expansion-fa995e8b-3218-43aa-8c63-158b4f19c290" in namespace "var-expansion-9596" to be "Succeeded or Failed"
Jan 17 15:20:23.998: INFO: Pod "var-expansion-fa995e8b-3218-43aa-8c63-158b4f19c290": Phase="Pending", Reason="", readiness=false. Elapsed: 6.097826ms
Jan 17 15:20:26.002: INFO: Pod "var-expansion-fa995e8b-3218-43aa-8c63-158b4f19c290": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010256713s
Jan 17 15:20:28.007: INFO: Pod "var-expansion-fa995e8b-3218-43aa-8c63-158b4f19c290": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.014901553s
STEP: Saw pod success
Jan 17 15:20:28.007: INFO: Pod "var-expansion-fa995e8b-3218-43aa-8c63-158b4f19c290" satisfied condition "Succeeded or Failed"
Jan 17 15:20:28.011: INFO: Trying to get logs from node k8s-upgrade-and-conformance-lxkik8-md-0-g974z-f76d7886c-xqvh2 pod var-expansion-fa995e8b-3218-43aa-8c63-158b4f19c290 container dapi-container: <nil>
STEP: delete the pod
Jan 17 15:20:28.041: INFO: Waiting for pod var-expansion-fa995e8b-3218-43aa-8c63-158b4f19c290 to disappear
Jan 17 15:20:28.046: INFO: Pod var-expansion-fa995e8b-3218-43aa-8c63-158b4f19c290 no longer exists
[AfterEach] [sig-node] Variable Expansion
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 17 15:20:28.046: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-9596" for this suite.
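The "env composition" pod this spec creates relies on kubelet-side $(VAR) expansion: one env var's value references another. A minimal sketch of that pod shape, with illustrative names and image (not the suite's exact strings):

package main

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// composedEnvPod sets FOO directly and composes BAR from it; the kubelet
// resolves $(FOO) before the container starts, so the container sees
// BAR=composed-foo-value. Names here are illustrative.
func composedEnvPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "var-expansion-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "dapi-container",
				Image:   "busybox",
				Command: []string{"sh", "-c", "echo $BAR"},
				Env: []corev1.EnvVar{
					{Name: "FOO", Value: "foo-value"},
					{Name: "BAR", Value: "composed-$(FOO)"}, // expands to "composed-foo-value"
				},
			}},
		},
	}
}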
•
------------------------------
{"msg":"PASSED [sig-node] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance]","total":-1,"completed":11,"skipped":273,"failed":2,"failures":["[sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance]","[sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance]"]}
------------------------------
[BeforeEach] [sig-network] HostPort
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 17 15:20:28.070: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename hostport
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-network] HostPort
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/hostport.go:47
[It] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [LinuxOnly] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Trying to create a pod(pod1) with hostport 54323 and hostIP 127.0.0.1 and expect scheduled
Jan 17 15:20:28.128: INFO: The status of Pod pod1 is Pending, waiting for it to be Running (with Ready = true)
Jan 17 15:20:30.135: INFO: The status of Pod pod1 is Running (Ready = true)
STEP: Trying to create another pod(pod2) with hostport 54323 but hostIP 172.18.0.4 on the node which pod1 resides and expect scheduled
Jan 17 15:20:30.147: INFO: The status of Pod pod2 is Pending, waiting for it to be Running (with Ready = true)
Jan 17 15:20:32.152: INFO: The status of Pod pod2 is Running (Ready = false)
Jan 17 15:20:34.154: INFO: The status of Pod pod2 is Running (Ready = true)
STEP: Trying to create a third pod(pod3) with hostport 54323, hostIP 172.18.0.4 but use UDP protocol on the node which pod2 resides
Jan 17 15:20:34.168: INFO: The status of Pod pod3 is Pending, waiting for it to be Running (with Ready = true)
Jan 17 15:20:36.173: INFO: The status of Pod pod3 is Running (Ready = true)
Jan 17 15:20:36.184: INFO: The status of Pod e2e-host-exec is Pending, waiting for it to be Running (with Ready = true)
Jan 17 15:20:38.189: INFO: The status of Pod e2e-host-exec is Running (Ready = true)
STEP: checking connectivity from pod e2e-host-exec to serverIP: 127.0.0.1, port: 54323
Jan 17 15:20:38.193: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g --connect-timeout 5 --interface 172.18.0.4 http://127.0.0.1:54323/hostname] Namespace:hostport-9992 PodName:e2e-host-exec ContainerName:e2e-host-exec Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Jan 17 15:20:38.193: INFO: >>> kubeConfig: /tmp/kubeconfig
Jan 17 15:20:38.194: INFO: ExecWithOptions: Clientset creation
Jan 17 15:20:38.194: INFO: ExecWithOptions: execute(POST https://172.18.0.3:6443/api/v1/namespaces/hostport-9992/pods/e2e-host-exec/exec?command=%2Fbin%2Fsh&command=-c&command=curl+-g+--connect-timeout+5+--interface+172.18.0.4+http%3A%2F%2F127.0.0.1%3A54323%2Fhostname&container=e2e-host-exec&container=e2e-host-exec&stderr=true&stdout=true %!s(MISSING))
STEP: checking connectivity from pod e2e-host-exec to serverIP: 172.18.0.4, port: 54323
Jan 17 15:20:38.277: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g --connect-timeout 5 http://172.18.0.4:54323/hostname] Namespace:hostport-9992 PodName:e2e-host-exec ContainerName:e2e-host-exec Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Jan 17 15:20:38.277: INFO: >>> kubeConfig: /tmp/kubeconfig
Jan 17 15:20:38.278: INFO: ExecWithOptions: Clientset creation
Jan 17 15:20:38.278: INFO: ExecWithOptions: execute(POST https://172.18.0.3:6443/api/v1/namespaces/hostport-9992/pods/e2e-host-exec/exec?command=%2Fbin%2Fsh&command=-c&command=curl+-g+--connect-timeout+5+http%3A%2F%2F172.18.0.4%3A54323%2Fhostname&container=e2e-host-exec&container=e2e-host-exec&stderr=true&stdout=true %!s(MISSING))
STEP: checking connectivity from pod e2e-host-exec to serverIP: 172.18.0.4, port: 54323 UDP
Jan 17 15:20:38.348: INFO: ExecWithOptions {Command:[/bin/sh -c nc -vuz -w 5 172.18.0.4 54323] Namespace:hostport-9992 PodName:e2e-host-exec ContainerName:e2e-host-exec Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Jan 17 15:20:38.348: INFO: >>> kubeConfig: /tmp/kubeconfig
Jan 17 15:20:38.348: INFO: ExecWithOptions: Clientset creation
Jan 17 15:20:38.348: INFO: ExecWithOptions: execute(POST https://172.18.0.3:6443/api/v1/namespaces/hostport-9992/pods/e2e-host-exec/exec?command=%2Fbin%2Fsh&command=-c&command=nc+-vuz+-w+5+172.18.0.4+54323&container=e2e-host-exec&container=e2e-host-exec&stderr=true&stdout=true %!s(MISSING))
[AfterEach] [sig-network] HostPort
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 17 15:20:43.404: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "hostport-9992" for this suite.
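The reason pod1, pod2 and pod3 can all be scheduled onto one node is that a hostPort binding is keyed by the (hostIP, hostPort, protocol) triple, so port 54323 can be claimed for 127.0.0.1/TCP, 172.18.0.4/TCP and 172.18.0.4/UDP without conflict. A sketch of the container shape involved; the helper, container name and image are illustrative, not the suite's:

package main

import (
	corev1 "k8s.io/api/core/v1"
)

// hostPortContainer binds the same host port under a distinct
// (hostIP, protocol) pair per pod; values mirror the log above.
func hostPortContainer(hostIP string, proto corev1.Protocol) corev1.Container {
	return corev1.Container{
		Name:  "agnhost",
		Image: "busybox", // illustrative; the suite uses its own test image
		Ports: []corev1.ContainerPort{{
			ContainerPort: 8080,
			HostPort:      54323,
			HostIP:        hostIP,
			Protocol:      proto,
		}},
	}
}

// Usage: pod1 gets hostPortContainer("127.0.0.1", corev1.ProtocolTCP),
// pod2 hostPortContainer("172.18.0.4", corev1.ProtocolTCP),
// pod3 hostPortContainer("172.18.0.4", corev1.ProtocolUDP).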
•
------------------------------
{"msg":"PASSED [sig-network] HostPort validates that there is no conflict between pods with same hostPort but different hostIP and protocol [LinuxOnly] [Conformance]","total":-1,"completed":12,"skipped":278,"failed":2,"failures":["[sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance]","[sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance]"]}
------------------------------
[BeforeEach] [sig-storage] Projected downwardAPI
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 17 15:20:43.497: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating a pod to test downward API volume plugin
Jan 17 15:20:43.536: INFO: Waiting up to 5m0s for pod "downwardapi-volume-3334dfea-8794-4d7f-90dd-5f1570d644e2" in namespace "projected-9311" to be "Succeeded or Failed"
Jan 17 15:20:43.540: INFO: Pod "downwardapi-volume-3334dfea-8794-4d7f-90dd-5f1570d644e2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.002403ms
Jan 17 15:20:45.546: INFO: Pod "downwardapi-volume-3334dfea-8794-4d7f-90dd-5f1570d644e2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010116724s
Jan 17 15:20:47.551: INFO: Pod "downwardapi-volume-3334dfea-8794-4d7f-90dd-5f1570d644e2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.014448799s
STEP: Saw pod success
Jan 17 15:20:47.551: INFO: Pod "downwardapi-volume-3334dfea-8794-4d7f-90dd-5f1570d644e2" satisfied condition "Succeeded or Failed"
Jan 17 15:20:47.554: INFO: Trying to get logs from node k8s-upgrade-and-conformance-lxkik8-md-0-g974z-f76d7886c-xqvh2 pod downwardapi-volume-3334dfea-8794-4d7f-90dd-5f1570d644e2 container client-container: <nil>
STEP: delete the pod
Jan 17 15:20:47.571: INFO: Waiting for pod downwardapi-volume-3334dfea-8794-4d7f-90dd-5f1570d644e2 to disappear
Jan 17 15:20:47.575: INFO: Pod downwardapi-volume-3334dfea-8794-4d7f-90dd-5f1570d644e2 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 17 15:20:47.575: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9311" for this suite.
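What this spec relies on: a projected downward API volume exposing limits.memory for a container that sets no memory limit, in which case the kubelet substitutes the node's allocatable memory. A sketch of that volume source, with illustrative volume and file names:

package main

import (
	corev1 "k8s.io/api/core/v1"
)

// downwardAPIMemoryVolume exposes limits.memory for the named container as
// the file "memory_limit"; with no limit set on the pod, the value written
// is the node's allocatable memory, which is what the test asserts.
func downwardAPIMemoryVolume(containerName string) corev1.Volume {
	return corev1.Volume{
		Name: "podinfo",
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				Sources: []corev1.VolumeProjection{{
					DownwardAPI: &corev1.DownwardAPIProjection{
						Items: []corev1.DownwardAPIVolumeFile{{
							Path: "memory_limit",
							ResourceFieldRef: &corev1.ResourceFieldSelector{
								ContainerName: containerName,
								Resource:      "limits.memory",
							},
						}},
					},
				}},
			},
		},
	}
}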
•
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":-1,"completed":13,"skipped":324,"failed":2,"failures":["[sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance]","[sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance]"]}
------------------------------
[BeforeEach] [sig-storage] Projected secret
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 17 15:20:47.640: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating secret with name projected-secret-test-4a0c08a5-dd0e-47bd-bc08-fe8c33903954
STEP: Creating a pod to test consume secrets
Jan 17 15:20:47.681: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-d43f2eed-f73d-4826-aed0-2c2efc3f3281" in namespace "projected-2616" to be "Succeeded or Failed"
Jan 17 15:20:47.685: INFO: Pod "pod-projected-secrets-d43f2eed-f73d-4826-aed0-2c2efc3f3281": Phase="Pending", Reason="", readiness=false. Elapsed: 3.234135ms
Jan 17 15:20:49.689: INFO: Pod "pod-projected-secrets-d43f2eed-f73d-4826-aed0-2c2efc3f3281": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008121919s
Jan 17 15:20:51.693: INFO: Pod "pod-projected-secrets-d43f2eed-f73d-4826-aed0-2c2efc3f3281": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012050365s
STEP: Saw pod success
Jan 17 15:20:51.693: INFO: Pod "pod-projected-secrets-d43f2eed-f73d-4826-aed0-2c2efc3f3281" satisfied condition "Succeeded or Failed"
Jan 17 15:20:51.696: INFO: Trying to get logs from node k8s-upgrade-and-conformance-lxkik8-md-0-g974z-f76d7886c-xqvh2 pod pod-projected-secrets-d43f2eed-f73d-4826-aed0-2c2efc3f3281 container secret-volume-test: <nil>
STEP: delete the pod
Jan 17 15:20:51.714: INFO: Waiting for pod pod-projected-secrets-d43f2eed-f73d-4826-aed0-2c2efc3f3281 to disappear
Jan 17 15:20:51.716: INFO: Pod pod-projected-secrets-d43f2eed-f73d-4826-aed0-2c2efc3f3281 no longer exists
[AfterEach] [sig-storage] Projected secret
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 17 15:20:51.716: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2616" for this suite.
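"Consumable in multiple volumes" means one secret projected at two mount paths, with both mounts expected to expose the same keys. A sketch of that pod spec under assumed names (volume names, mount paths and image are illustrative):

package main

import (
	corev1 "k8s.io/api/core/v1"
)

// multiVolumeSecretPodSpec projects the same secret into two volumes; the
// test's container then reads the key back from both mount paths.
func multiVolumeSecretPodSpec(secretName string) corev1.PodSpec {
	vol := func(name string) corev1.Volume {
		return corev1.Volume{
			Name: name,
			VolumeSource: corev1.VolumeSource{
				Projected: &corev1.ProjectedVolumeSource{
					Sources: []corev1.VolumeProjection{{
						Secret: &corev1.SecretProjection{
							LocalObjectReference: corev1.LocalObjectReference{Name: secretName},
						},
					}},
				},
			},
		}
	}
	return corev1.PodSpec{
		RestartPolicy: corev1.RestartPolicyNever,
		Volumes:       []corev1.Volume{vol("secret-volume-1"), vol("secret-volume-2")},
		Containers: []corev1.Container{{
			Name:  "secret-volume-test",
			Image: "busybox",
			VolumeMounts: []corev1.VolumeMount{
				{Name: "secret-volume-1", MountPath: "/etc/secret-volume-1", ReadOnly: true},
				{Name: "secret-volume-2", MountPath: "/etc/secret-volume-2", ReadOnly: true},
			},
		}},
	}
}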
•
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":-1,"completed":14,"skipped":356,"failed":2,"failures":["[sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance]","[sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance]"]}
------------------------------
[BeforeEach] [sig-apps] CronJob
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 17 15:20:51.744: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename cronjob
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should support CronJob API operations [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating a cronjob
STEP: creating
STEP: getting
STEP: listing
STEP: watching
Jan 17 15:20:51.783: INFO: starting watch
STEP: cluster-wide listing
STEP: cluster-wide watching
Jan 17 15:20:51.788: INFO: starting watch
STEP: patching
STEP: updating
Jan 17 15:20:51.808: INFO: waiting for watch events with expected annotations
Jan 17 15:20:51.809: INFO: saw patched and updated annotations
STEP: patching /status
STEP: updating /status
STEP: get /status
STEP: deleting
STEP: deleting a collection
[AfterEach] [sig-apps] CronJob
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 17 15:20:51.842: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "cronjob-2793" for this suite.
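The steps above walk the batch/v1 CronJob API surface (create/get/list/watch/patch/update/delete/deleteCollection). A condensed sketch of the same flow, assuming an injected clientset; names, schedule and error handling are illustrative, not the suite's implementation:

package main

import (
	"context"

	batchv1 "k8s.io/api/batch/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// exerciseCronJobAPI covers the create/get/list/delete portion of the
// surface this spec tests; the suite additionally watches, patches and
// updates the object and its /status subresource.
func exerciseCronJobAPI(ctx context.Context, cs kubernetes.Interface, ns string) error {
	cj := &batchv1.CronJob{
		ObjectMeta: metav1.ObjectMeta{Name: "demo"},
		Spec: batchv1.CronJobSpec{
			Schedule: "*/1 * * * *",
			JobTemplate: batchv1.JobTemplateSpec{
				Spec: batchv1.JobSpec{
					Template: corev1.PodTemplateSpec{
						Spec: corev1.PodSpec{
							RestartPolicy: corev1.RestartPolicyOnFailure,
							Containers: []corev1.Container{{
								Name: "c", Image: "busybox", Command: []string{"true"},
							}},
						},
					},
				},
			},
		},
	}
	created, err := cs.BatchV1().CronJobs(ns).Create(ctx, cj, metav1.CreateOptions{})
	if err != nil {
		return err
	}
	if _, err := cs.BatchV1().CronJobs(ns).Get(ctx, created.Name, metav1.GetOptions{}); err != nil {
		return err
	}
	if _, err := cs.BatchV1().CronJobs(ns).List(ctx, metav1.ListOptions{}); err != nil {
		return err
	}
	return cs.BatchV1().CronJobs(ns).Delete(ctx, created.Name, metav1.DeleteOptions{})
}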
•
------------------------------
{"msg":"PASSED [sig-apps] CronJob should support CronJob API operations [Conformance]","total":-1,"completed":15,"skipped":370,"failed":2,"failures":["[sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance]","[sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance]"]}
------------------------------
[BeforeEach] [sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 17 15:20:51.874: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:752
[It] should test the lifecycle of an Endpoint [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: creating an Endpoint
STEP: waiting for available Endpoint
STEP: listing all Endpoints
STEP: updating the Endpoint
STEP: fetching the Endpoint
STEP: patching the Endpoint
STEP: fetching the Endpoint
STEP: deleting the Endpoint by Collection
STEP: waiting for Endpoint deletion
STEP: fetching the Endpoint
[AfterEach] [sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 17 15:20:51.944: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-5545" for this suite.
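The Endpoint lifecycle above is plain CRUD on a v1 Endpoints object. A compressed sketch of the create/update/delete-by-collection flow; the object name, IPs, ports and the label selector used for DeleteCollection are assumptions for illustration:

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// endpointLifecycle mirrors the create/update/delete portion of the spec;
// the suite also patches the object and waits for deletion to be observed.
func endpointLifecycle(ctx context.Context, cs kubernetes.Interface, ns string) error {
	ep := &corev1.Endpoints{
		ObjectMeta: metav1.ObjectMeta{
			Name:   "testservice",
			Labels: map[string]string{"test-endpoint-static": "true"}, // illustrative label
		},
		Subsets: []corev1.EndpointSubset{{
			Addresses: []corev1.EndpointAddress{{IP: "10.0.0.24"}},
			Ports:     []corev1.EndpointPort{{Name: "http", Port: 80, Protocol: corev1.ProtocolTCP}},
		}},
	}
	created, err := cs.CoreV1().Endpoints(ns).Create(ctx, ep, metav1.CreateOptions{})
	if err != nil {
		return err
	}
	// Update swaps in a new port; "deleting by Collection" then removes
	// everything matching the selector in one call.
	created.Subsets[0].Ports[0].Port = 8080
	if _, err := cs.CoreV1().Endpoints(ns).Update(ctx, created, metav1.UpdateOptions{}); err != nil {
		return err
	}
	return cs.CoreV1().Endpoints(ns).DeleteCollection(ctx, metav1.DeleteOptions{},
		metav1.ListOptions{LabelSelector: "test-endpoint-static=true"})
}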
[AfterEach] [sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:756
•
------------------------------
{"msg":"PASSED [sig-network] Services should test the lifecycle of an Endpoint [Conformance]","total":-1,"completed":16,"skipped":384,"failed":2,"failures":["[sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance]","[sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance]"]}
------------------------------
[BeforeEach] [sig-storage] ConfigMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 17 15:20:51.957: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating configMap with name configmap-test-upd-02841a56-7966-4382-b4d6-cdf93039d528
STEP: Creating the pod
Jan 17 15:20:52.001: INFO: The status of Pod pod-configmaps-e913a66d-7507-4250-af6e-96c135ef8734 is Pending, waiting for it to be Running (with Ready = true)
Jan 17 15:20:54.007: INFO: The status of Pod pod-configmaps-e913a66d-7507-4250-af6e-96c135ef8734 is Running (Ready = true)
STEP: Updating configmap configmap-test-upd-02841a56-7966-4382-b4d6-cdf93039d528
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 17 15:20:56.036: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-1356" for this suite.
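The "waiting to observe update in volume" step works because the kubelet's periodic sync rewrites a mounted ConfigMap volume in place after the API object changes, so the running pod sees the new file contents without a restart. A sketch of the mutation side, with an illustrative key and value:

package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// updateMountedConfigMap performs the kind of update the spec waits on;
// the test then polls the mounted file until it reads the new value.
func updateMountedConfigMap(ctx context.Context, cs kubernetes.Interface, ns, name string) error {
	cm, err := cs.CoreV1().ConfigMaps(ns).Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return err
	}
	if cm.Data == nil {
		cm.Data = map[string]string{}
	}
	cm.Data["data-1"] = "value-2" // illustrative key/value
	_, err = cs.CoreV1().ConfigMaps(ns).Update(ctx, cm, metav1.UpdateOptions{})
	return err
}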
•
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance]","total":-1,"completed":17,"skipped":385,"failed":2,"failures":["[sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance]","[sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance]"]}
------------------------------
[BeforeEach] [sig-storage] Projected configMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 17 15:20:56.066: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating configMap with name projected-configmap-test-volume-map-79201607-b3d5-4291-8c4e-838cb276e5ca
STEP: Creating a pod to test consume configMaps
Jan 17 15:20:56.105: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-29ccedba-539b-4d6b-8936-4a50e73f2cb0" in namespace "projected-9897" to be "Succeeded or Failed"
Jan 17 15:20:56.109: INFO: Pod "pod-projected-configmaps-29ccedba-539b-4d6b-8936-4a50e73f2cb0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.769324ms
Jan 17 15:20:58.114: INFO: Pod "pod-projected-configmaps-29ccedba-539b-4d6b-8936-4a50e73f2cb0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008103407s
Jan 17 15:21:00.120: INFO: Pod "pod-projected-configmaps-29ccedba-539b-4d6b-8936-4a50e73f2cb0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013809428s
STEP: Saw pod success
Jan 17 15:21:00.120: INFO: Pod "pod-projected-configmaps-29ccedba-539b-4d6b-8936-4a50e73f2cb0" satisfied condition "Succeeded or Failed"
Jan 17 15:21:00.124: INFO: Trying to get logs from node k8s-upgrade-and-conformance-lxkik8-md-0-g974z-f76d7886c-xqvh2 pod pod-projected-configmaps-29ccedba-539b-4d6b-8936-4a50e73f2cb0 container agnhost-container: <nil>
STEP: delete the pod
Jan 17 15:21:00.143: INFO: Waiting for pod pod-projected-configmaps-29ccedba-539b-4d6b-8936-4a50e73f2cb0 to disappear
Jan 17 15:21:00.147: INFO: Pod pod-projected-configmaps-29ccedba-539b-4d6b-8936-4a50e73f2cb0 no longer exists
[AfterEach] [sig-storage] Projected configMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 17 15:21:00.147: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9897" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":-1,"completed":18,"skipped":398,"failed":2,"failures":["[sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance]","[sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance]"]}
------------------------------
[BeforeEach] [sig-cli] Kubectl client
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 17 15:21:00.181: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:244
[It] should check is all data is printed [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
Jan 17 15:21:00.207: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-7167 version'
Jan 17 15:21:00.289: INFO: stderr: ""
Jan 17 15:21:00.289: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"23\", GitVersion:\"v1.23.15\", GitCommit:\"b84cb8ab29366daa1bba65bc67f54de2f6c34848\", GitTreeState:\"clean\", BuildDate:\"2022-12-08T10:49:13Z\", GoVersion:\"go1.17.13\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"23\", GitVersion:\"v1.23.15\", GitCommit:\"d34db33f\", GitTreeState:\"clean\", BuildDate:\"2022-12-20T18:31:45Z\", GoVersion:\"go1.17.13\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n"
[AfterEach] [sig-cli] Kubectl client
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 17 15:21:00.289: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-7167" for this suite.
•
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl version should check is all data is printed [Conformance]","total":-1,"completed":19,"skipped":413,"failed":2,"failures":["[sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance]","[sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance]"]}
------------------------------
[BeforeEach] [sig-node] Kubelet
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 17 15:21:00.371: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-node] Kubelet
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/kubelet.go:38
[BeforeEach] when scheduling a busybox command that always fails in a pod
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/kubelet.go:82
[It] should have an terminated reason [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[AfterEach] [sig-node] Kubelet
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 17 15:21:04.418: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-5175" for this suite.
•
------------------------------
{"msg":"PASSED [sig-node] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance]","total":-1,"completed":20,"skipped":448,"failed":2,"failures":["[sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance]","[sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance]"]}
------------------------------
[BeforeEach] [sig-node] Kubelet
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 17 15:21:04.437: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-node] Kubelet
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/kubelet.go:38
[It] should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
Jan 17 15:21:04.487: INFO: The status of Pod busybox-readonly-fs0a3a4e48-f1ac-454b-8e57-6719f0bf35b9 is Pending, waiting for it to be Running (with Ready = true)
Jan 17 15:21:06.492: INFO: The status of Pod busybox-readonly-fs0a3a4e48-f1ac-454b-8e57-6719f0bf35b9 is Running (Ready = true)
[AfterEach] [sig-node] Kubelet
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 17 15:21:06.503: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-2449" for this suite.
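The read-only-filesystem behavior checked here comes from the container SecurityContext: with ReadOnlyRootFilesystem set, any write to the root filesystem fails. A sketch of the pod shape, with illustrative pod/image names and command:

package main

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// readOnlyRootPod runs busybox with a read-only root filesystem; a write
// to / fails, which is what the spec asserts.
func readOnlyRootPod() *corev1.Pod {
	readOnly := true
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "busybox-readonly-fs"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:    "busybox",
				Image:   "busybox",
				Command: []string{"sh", "-c", "touch /should-fail; sleep 3600"},
				SecurityContext: &corev1.SecurityContext{
					ReadOnlyRootFilesystem: &readOnly,
				},
			}},
		},
	}
}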
•
------------------------------
{"msg":"PASSED [sig-node] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":21,"skipped":451,"failed":2,"failures":["[sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance]","[sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance]"]}
------------------------------
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 17 15:21:06.566: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Jan 17 15:21:07.275: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jan 17 15:21:10.295: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] patching/updating a mutating webhook should work [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating a mutating webhook configuration
STEP: Updating a mutating webhook configuration's rules to not include the create operation
STEP: Creating a configMap that should not be mutated
STEP: Patching a mutating webhook configuration's rules to include the create operation
STEP: Creating a configMap that should be mutated
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 17 15:21:10.374: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-3355" for this suite.
STEP: Destroying namespace "webhook-3355-markers" for this suite.
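The update/patch steps above flip the webhook's rules between UPDATE-only and CREATE+UPDATE, so a freshly created ConfigMap is mutated only in the second phase. A sketch of the kind of JSON patch involved; the configuration name, helper and exact patch path are assumptions for illustration:

package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
)

// toggleWebhookCreateRule patches the first rule of the first webhook in a
// MutatingWebhookConfiguration to include or exclude the CREATE operation.
func toggleWebhookCreateRule(ctx context.Context, cs kubernetes.Interface, name string, includeCreate bool) error {
	ops := `["UPDATE"]`
	if includeCreate {
		ops = `["CREATE","UPDATE"]`
	}
	patch := []byte(`[{"op":"replace","path":"/webhooks/0/rules/0/operations","value":` + ops + `}]`)
	_, err := cs.AdmissionregistrationV1().MutatingWebhookConfigurations().
		Patch(ctx, name, types.JSONPatchType, patch, metav1.PatchOptions{})
	return err
}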
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
•
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","total":-1,"completed":22,"skipped":486,"failed":2,"failures":["[sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance]","[sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance]"]}
------------------------------
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 17 15:21:10.512: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] works for multiple CRDs of same group and version but different kinds [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: CRs in the same group and version but different kinds (two CRDs) show up in OpenAPI documentation
Jan 17 15:21:10.557: INFO: >>> kubeConfig: /tmp/kubeconfig
Jan 17 15:21:13.037: INFO: >>> kubeConfig: /tmp/kubeconfig
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 17 15:21:24.114: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-4428" for this suite.
•
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance]","total":-1,"completed":23,"skipped":506,"failed":2,"failures":["[sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance]","[sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance]"]}
------------------------------
[BeforeEach] [sig-node] Downward API
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 17 15:21:24.155: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should provide host IP as an env var [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating a pod to test downward api env vars
Jan 17 15:21:24.196: INFO: Waiting up to 5m0s for pod "downward-api-06cacb8f-d2ea-41cf-a990-37a6380cf220" in namespace "downward-api-342" to be "Succeeded or Failed"
Jan 17 15:21:24.201: INFO: Pod "downward-api-06cacb8f-d2ea-41cf-a990-37a6380cf220": Phase="Pending", Reason="", readiness=false. Elapsed: 4.472862ms
Jan 17 15:21:26.209: INFO: Pod "downward-api-06cacb8f-d2ea-41cf-a990-37a6380cf220": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013196832s
Jan 17 15:21:28.215: INFO: Pod "downward-api-06cacb8f-d2ea-41cf-a990-37a6380cf220": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.018654932s
STEP: Saw pod success
Jan 17 15:21:28.215: INFO: Pod "downward-api-06cacb8f-d2ea-41cf-a990-37a6380cf220" satisfied condition "Succeeded or Failed"
Jan 17 15:21:28.219: INFO: Trying to get logs from node k8s-upgrade-and-conformance-lxkik8-worker-dueiad pod downward-api-06cacb8f-d2ea-41cf-a990-37a6380cf220 container dapi-container: <nil>
STEP: delete the pod
Jan 17 15:21:28.237: INFO: Waiting for pod downward-api-06cacb8f-d2ea-41cf-a990-37a6380cf220 to disappear
Jan 17 15:21:28.242: INFO: Pod downward-api-06cacb8f-d2ea-41cf-a990-37a6380cf220 no longer exists
[AfterEach] [sig-node] Downward API
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 17 15:21:28.242: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-342" for this suite.
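Unlike the earlier downward API volume test, this spec wires the host IP through an env var fieldRef, which the kubelet fills from the pod's status.hostIP at container start. A minimal sketch; the env var name mirrors the convention, not necessarily the suite's exact string:

package main

import (
	corev1 "k8s.io/api/core/v1"
)

// hostIPEnvVars exposes the node's IP to the container via the downward API;
// the test asserts the container observed a well-formed address.
func hostIPEnvVars() []corev1.EnvVar {
	return []corev1.EnvVar{{
		Name: "HOST_IP",
		ValueFrom: &corev1.EnvVarSource{
			FieldRef: &corev1.ObjectFieldSelector{FieldPath: "status.hostIP"},
		},
	}}
}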
•
------------------------------
{"msg":"PASSED [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance]","total":-1,"completed":24,"skipped":516,"failed":2,"failures":["[sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance]","[sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance]"]}
------------------------------
[BeforeEach] [sig-api-machinery] ResourceQuota
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 17 15:21:28.393: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a service. [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a Service
STEP: Creating a NodePort Service
STEP: Not allowing a LoadBalancer Service with NodePort to be created that exceeds remaining quota
STEP: Ensuring resource quota status captures service creation
STEP: Deleting Services
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 17 15:21:39.648: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-8177" for this suite.
•
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. [Conformance]","total":-1,"completed":25,"skipped":617,"failed":2,"failures":["[sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance]","[sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance]"]}
------------------------------
[BeforeEach] [sig-storage] Secrets
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 17 15:21:39.696: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating secret with name secret-test-e1bf20fa-05ce-4fbb-bce0-f673a7b54391
STEP: Creating a pod to test consume secrets
Jan 17 15:21:39.764: INFO: Waiting up to 5m0s for pod "pod-secrets-bd7f171c-64d5-41c8-ac39-1a42b95e0abc" in namespace "secrets-8043" to be "Succeeded or Failed"
Jan 17 15:21:39.769: INFO: Pod "pod-secrets-bd7f171c-64d5-41c8-ac39-1a42b95e0abc": Phase="Pending", Reason="", readiness=false. Elapsed: 4.602433ms
Jan 17 15:21:41.774: INFO: Pod "pod-secrets-bd7f171c-64d5-41c8-ac39-1a42b95e0abc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009753862s
Jan 17 15:21:43.782: INFO: Pod "pod-secrets-bd7f171c-64d5-41c8-ac39-1a42b95e0abc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.018101842s
STEP: Saw pod success
Jan 17 15:21:43.782: INFO: Pod "pod-secrets-bd7f171c-64d5-41c8-ac39-1a42b95e0abc" satisfied condition "Succeeded or Failed"
Jan 17 15:21:43.788: INFO: Trying to get logs from node k8s-upgrade-and-conformance-lxkik8-md-0-g974z-f76d7886c-88xc9 pod pod-secrets-bd7f171c-64d5-41c8-ac39-1a42b95e0abc container secret-volume-test: <nil>
STEP: delete the pod
Jan 17 15:21:43.830: INFO: Waiting for pod pod-secrets-bd7f171c-64d5-41c8-ac39-1a42b95e0abc to disappear
Jan 17 15:21:43.835: INFO: Pod pod-secrets-bd7f171c-64d5-41c8-ac39-1a42b95e0abc no longer exists
[AfterEach] [sig-storage] Secrets
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 17 15:21:43.835: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-8043" for this suite.
STEP: Destroying namespace "secret-namespace-5328" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]","total":-1,"completed":26,"skipped":641,"failed":2,"failures":["[sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance]","[sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance]"]}
------------------------------
[BeforeEach] [sig-network] EndpointSlice
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 17 15:21:43.873: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename endpointslice
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-network] EndpointSlice
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/endpointslice.go:49
[It] should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[AfterEach] [sig-network] EndpointSlice
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 17 15:21:46.060: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "endpointslice-7902" for this suite.
•
------------------------------
{"msg":"PASSED [sig-network] EndpointSlice should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance]","total":-1,"completed":27,"skipped":644,"failed":2,"failures":["[sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance]","[sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance]"]}
------------------------------
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 17 15:21:46.097: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] works for CRD preserving unknown fields at the schema root [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
Jan 17 15:21:46.132: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: client-side validation (kubectl create and apply) allows request with any unknown properties
Jan 17 15:21:48.424: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=crd-publish-openapi-3400 --namespace=crd-publish-openapi-3400 create -f -'
Jan 17 15:21:49.417: INFO: stderr: ""
Jan 17 15:21:49.417: INFO: stdout: "e2e-test-crd-publish-openapi-8711-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n"
Jan 17 15:21:49.418: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=crd-publish-openapi-3400 --namespace=crd-publish-openapi-3400 delete e2e-test-crd-publish-openapi-8711-crds test-cr'
Jan 17 15:21:49.509: INFO: stderr: ""
Jan 17 15:21:49.509: INFO: stdout: "e2e-test-crd-publish-openapi-8711-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n"
Jan 17 15:21:49.509: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=crd-publish-openapi-3400 --namespace=crd-publish-openapi-3400 apply -f -'
Jan 17 15:21:49.736: INFO: stderr: ""
Jan 17 15:21:49.736: INFO: stdout: "e2e-test-crd-publish-openapi-8711-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n"
Jan 17 15:21:49.736: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=crd-publish-openapi-3400 --namespace=crd-publish-openapi-3400 delete e2e-test-crd-publish-openapi-8711-crds test-cr'
Jan 17 15:21:49.827: INFO: stderr: ""
Jan 17 15:21:49.827: INFO: stdout: "e2e-test-crd-publish-openapi-8711-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n"
STEP: kubectl explain works to explain CR
Jan 17 15:21:49.827: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=crd-publish-openapi-3400 explain e2e-test-crd-publish-openapi-8711-crds'
Jan 17 15:21:50.033: INFO: stderr: ""
Jan 17 15:21:50.033: INFO: stdout: "KIND: e2e-test-crd-publish-openapi-8711-crd\nVERSION: crd-publish-openapi-test-unknown-at-root.example.com/v1\n\nDESCRIPTION:\n <empty>\n"
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 17 15:21:52.425: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-3400" for this suite.
•
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance]","total":-1,"completed":28,"skipped":658,"failed":2,"failures":["[sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance]","[sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance]"]}
------------------------------
[BeforeEach] [sig-apps] Deployment
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 17 15:21:52.527: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:89
[It] deployment should support proportional scaling [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
Jan 17 15:21:52.561: INFO: Creating deployment "webserver-deployment"
Jan 17 15:21:52.571: INFO: Waiting for observed generation 1
Jan 17 15:21:54.581: INFO: Waiting for all required pods to come up
Jan 17 15:21:54.586: INFO: Pod name httpd: Found 10 pods out of 10
STEP: ensuring each pod is running
Jan 17 15:21:54.587: INFO: Waiting for deployment "webserver-deployment" to complete
Jan 17 15:21:54.595: INFO: Updating deployment "webserver-deployment" with a non-existent image
Jan 17 15:21:54.604: INFO: Updating deployment webserver-deployment
Jan 17 15:21:54.604: INFO: Waiting for observed generation 2
Jan 17 15:21:56.611: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8
Jan 17 15:21:56.615: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8
Jan 17 15:21:56.619: INFO: Waiting for the first rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas
Jan 17 15:21:56.631: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0
Jan 17 15:21:56.631: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5
Jan 17 15:21:56.640: INFO: Waiting for the second rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas
Jan 17 15:21:56.648: INFO: Verifying that deployment "webserver-deployment" has minimum required number of available replicas
Jan 17 15:21:56.648: INFO: Scaling up the deployment "webserver-deployment" from 10 to 30
Jan 17 15:21:56.662: INFO: Updating deployment webserver-deployment
Jan 17 15:21:56.663: INFO: Waiting for the replicasets of deployment "webserver-deployment" to have desired number of replicas
Jan 17 15:21:56.670: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20
Jan 17 15:21:56.676: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13
[AfterEach] [sig-apps] Deployment
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:83
Jan 17 15:21:58.690: INFO: Deployment "webserver-deployment": &Deployment{ObjectMeta:{webserver-deployment deployment-1196 6c76b07e-3ec2-4d67-b2d1-6067e6a19453 9656 3 2023-01-17 15:21:52 +0000 UTC <nil> <nil> map[name:httpd] map[deployment.kubernetes.io/revision:2] [] [] [{e2e.test Update apps/v1 2023-01-17 15:21:52 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2023-01-17 15:21:54 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:unavailableReplicas":{},"f:updatedReplicas":{}}} status}]},Spec:DeploymentSpec{Replicas:*30,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[name:httpd] map[] [] [] []} {[] [] [{httpd webserver:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc004ebd488 <nil> ClusterFirst map[] <nil> false false false <nil> &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] <nil> nil [] <nil> <nil> <nil> map[] [] <nil> nil}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:33,UpdatedReplicas:13,AvailableReplicas:9,UnavailableReplicas:24,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2023-01-17 15:21:56 +0000 UTC,LastTransitionTime:2023-01-17 15:21:56 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "webserver-deployment-566f96c878" is progressing.,LastUpdateTime:2023-01-17 15:21:58 +0000 UTC,LastTransitionTime:2023-01-17 15:21:52 +0000 UTC,},},ReadyReplicas:9,CollisionCount:nil,},}
Jan 17 15:21:58.694: INFO: New ReplicaSet "webserver-deployment-566f96c878" of Deployment "webserver-deployment": &ReplicaSet{ObjectMeta:{webserver-deployment-566f96c878 deployment-1196 48ec0db1-f578-4783-b51a-2308b2f362aa 9569 3 2023-01-17 15:21:54 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:566f96c878] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment webserver-deployment 6c76b07e-3ec2-4d67-b2d1-6067e6a19453 0xc004f1efd7 0xc004f1efd8}] [] [{kube-controller-manager Update apps/v1 2023-01-17 15:21:54 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6c76b07e-3ec2-4d67-b2d1-6067e6a19453\"}":{}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2023-01-17 15:21:54 +0000 UTC FieldsV1 {"f:status":{"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:replicas":{}}} status}]},Spec:ReplicaSetSpec{Replicas:*13,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 566f96c878,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:566f96c878] map[] [] [] []} {[] [] [{httpd webserver:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc004f1f088 <nil> ClusterFirst map[] <nil> false false false <nil> &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] <nil> nil [] <nil> <nil> <nil> map[] [] <nil> nil}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:13,FullyLabeledReplicas:13,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
Jan 17 15:21:58.694: INFO: All old ReplicaSets of Deployment "webserver-deployment":
Jan 17 15:21:58.694: INFO: &ReplicaSet{ObjectMeta:{webserver-deployment-5d9fdcc779 deployment-1196 f378b5b8-b0ba-4249-b439-a77b949f05f2 9655 3 2023-01-17 15:21:52 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:5d9fdcc779] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment webserver-deployment 6c76b07e-3ec2-4d67-b2d1-6067e6a19453 0xc004f1f0e7 0xc004f1f0e8}] [] [{kube-controller-manager Update apps/v1 2023-01-17 15:21:52
+0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"6c76b07e-3ec2-4d67-b2d1-6067e6a19453\"}":{}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2023-01-17 15:21:53 +0000 UTC FieldsV1 {"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}} status}]},Spec:ReplicaSetSpec{Replicas:*20,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 5d9fdcc779,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:5d9fdcc779] map[] [] [] []} {[] [] [{httpd k8s.gcr.io/e2e-test-images/httpd:2.4.38-2 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc004f1f178 <nil> ClusterFirst map[] <nil> false false false <nil> &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] <nil> nil [] <nil> <nil> <nil> map[] [] <nil> nil}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:20,FullyLabeledReplicas:20,ObservedGeneration:3,ReadyReplicas:9,AvailableReplicas:9,Conditions:[]ReplicaSetCondition{},},} Jan 17 15:21:58.708: INFO: Pod "webserver-deployment-566f96c878-25xdt" is not available: &Pod{ObjectMeta:{webserver-deployment-566f96c878-25xdt webserver-deployment-566f96c878- deployment-1196 7f2cb344-907e-48c5-a988-cb379ae6a1ec 9474 0 2023-01-17 15:21:54 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:566f96c878] map[] [{apps/v1 ReplicaSet webserver-deployment-566f96c878 48ec0db1-f578-4783-b51a-2308b2f362aa 0xc004ebd900 0xc004ebd901}] [] [{kube-controller-manager Update v1 2023-01-17 15:21:54 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"48ec0db1-f578-4783-b51a-2308b2f362aa\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2023-01-17 15:21:56 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"192.168.6.38\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-wmcbw,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-wmcbw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k8s-upgrade-and-conformance-lxkik8-worker-dueiad,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds
:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-17 15:21:54 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-17 15:21:54 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-17 15:21:54 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-17 15:21:54 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.6,PodIP:192.168.6.38,StartTime:2023-01-17 15:21:54 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ErrImagePull,Message:rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/library/webserver:404": failed to resolve reference "docker.io/library/webserver:404": pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:192.168.6.38,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 17 15:21:58.708: INFO: Pod "webserver-deployment-566f96c878-2jtmv" is not available: &Pod{ObjectMeta:{webserver-deployment-566f96c878-2jtmv webserver-deployment-566f96c878- deployment-1196 8f99188b-b659-4aae-bad1-2ee25288d82c 9586 0 2023-01-17 15:21:56 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:566f96c878] map[] [{apps/v1 ReplicaSet webserver-deployment-566f96c878 48ec0db1-f578-4783-b51a-2308b2f362aa 0xc004ebdb70 0xc004ebdb71}] [] [{kube-controller-manager Update v1 2023-01-17 15:21:56 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"48ec0db1-f578-4783-b51a-2308b2f362aa\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2023-01-17 15:21:57 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-bk7nf,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-bk7nf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k8s-upgrade-and-conformance-lxkik8-md-0-g974z-f76d7886c-xqvh2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists
,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-17 15:21:56 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-17 15:21:56 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-17 15:21:56 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-17 15:21:56 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.5,PodIP:,StartTime:2023-01-17 15:21:56 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 17 15:21:58.709: INFO: Pod "webserver-deployment-566f96c878-8w6nd" is not available: &Pod{ObjectMeta:{webserver-deployment-566f96c878-8w6nd webserver-deployment-566f96c878- deployment-1196 0bb8a285-f0b0-4f20-b073-651e2d8062a8 9617 0 2023-01-17 15:21:56 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:566f96c878] map[] [{apps/v1 ReplicaSet webserver-deployment-566f96c878 48ec0db1-f578-4783-b51a-2308b2f362aa 0xc004ebdd90 0xc004ebdd91}] [] [{kube-controller-manager Update v1 2023-01-17 15:21:56 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"48ec0db1-f578-4783-b51a-2308b2f362aa\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2023-01-17 15:21:57 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-kdx6j,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-kdx6j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k8s-upgrade-and-conformance-lxkik8-worker-dueiad,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Typ
e:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-17 15:21:56 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-17 15:21:56 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-17 15:21:56 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-17 15:21:56 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.6,PodIP:,StartTime:2023-01-17 15:21:56 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 17 15:21:58.709: INFO: Pod "webserver-deployment-566f96c878-bh466" is not available: &Pod{ObjectMeta:{webserver-deployment-566f96c878-bh466 webserver-deployment-566f96c878- deployment-1196 42082cd9-43c4-4771-907e-00e1d6e60700 9468 0 2023-01-17 15:21:54 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:566f96c878] map[] [{apps/v1 ReplicaSet webserver-deployment-566f96c878 48ec0db1-f578-4783-b51a-2308b2f362aa 0xc004ebdfb0 0xc004ebdfb1}] [] [{kube-controller-manager Update v1 2023-01-17 15:21:54 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"48ec0db1-f578-4783-b51a-2308b2f362aa\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2023-01-17 15:21:55 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"192.168.0.44\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-4c488,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-4c488,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k8s-upgrade-and-conformance-lxkik8-md-0-g974z-f76d7886c-88xc9,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{Pod
Condition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-17 15:21:54 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-17 15:21:54 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-17 15:21:54 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-17 15:21:54 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.4,PodIP:192.168.0.44,StartTime:2023-01-17 15:21:54 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ErrImagePull,Message:rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/library/webserver:404": failed to resolve reference "docker.io/library/webserver:404": pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:192.168.0.44,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 17 15:21:58.709: INFO: Pod "webserver-deployment-566f96c878-bxd2x" is not available: &Pod{ObjectMeta:{webserver-deployment-566f96c878-bxd2x webserver-deployment-566f96c878- deployment-1196 fbdc5d2e-4654-434a-bf93-7bad6702a9e7 9568 0 2023-01-17 15:21:56 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:566f96c878] map[] [{apps/v1 ReplicaSet webserver-deployment-566f96c878 48ec0db1-f578-4783-b51a-2308b2f362aa 0xc0051d61e0 0xc0051d61e1}] [] [{kube-controller-manager Update v1 2023-01-17 15:21:56 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"48ec0db1-f578-4783-b51a-2308b2f362aa\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2023-01-17 15:21:56 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-hnbmw,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-hnbmw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k8s-upgrade-and-conformance-lxkik8-md-0-g974z-f76d7886c-88xc9,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{Pod
Condition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-17 15:21:56 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-17 15:21:56 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-17 15:21:56 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-17 15:21:56 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.4,PodIP:,StartTime:2023-01-17 15:21:56 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 17 15:21:58.710: INFO: Pod "webserver-deployment-566f96c878-fklb9" is not available: &Pod{ObjectMeta:{webserver-deployment-566f96c878-fklb9 webserver-deployment-566f96c878- deployment-1196 f41da88b-faec-4086-9a6b-a24c5053f94a 9544 0 2023-01-17 15:21:56 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:566f96c878] map[] [{apps/v1 ReplicaSet webserver-deployment-566f96c878 48ec0db1-f578-4783-b51a-2308b2f362aa 0xc0051d64a0 0xc0051d64a1}] [] [{kube-controller-manager Update v1 2023-01-17 15:21:56 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"48ec0db1-f578-4783-b51a-2308b2f362aa\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2023-01-17 15:21:56 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-fnlz2,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-fnlz2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k8s-upgrade-and-conformance-lxkik8-md-0-g974z-f76d7886c-88xc9,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{Pod
Condition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-17 15:21:56 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-17 15:21:56 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-17 15:21:56 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-17 15:21:56 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.4,PodIP:,StartTime:2023-01-17 15:21:56 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 17 15:21:58.710: INFO: Pod "webserver-deployment-566f96c878-hqgfs" is not available: &Pod{ObjectMeta:{webserver-deployment-566f96c878-hqgfs webserver-deployment-566f96c878- deployment-1196 6dee80d7-e212-462b-87f6-49c19ab484e1 9480 0 2023-01-17 15:21:54 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:566f96c878] map[] [{apps/v1 ReplicaSet webserver-deployment-566f96c878 48ec0db1-f578-4783-b51a-2308b2f362aa 0xc0051d66c0 0xc0051d66c1}] [] [{kube-controller-manager Update v1 2023-01-17 15:21:54 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"48ec0db1-f578-4783-b51a-2308b2f362aa\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2023-01-17 15:21:56 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"192.168.1.60\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-d6jtx,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-d6jtx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k8s-upgrade-and-conformance-lxkik8-md-0-g974z-f76d7886c-xqvh2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{Pod
Condition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-17 15:21:54 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-17 15:21:54 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-17 15:21:54 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-17 15:21:54 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.5,PodIP:192.168.1.60,StartTime:2023-01-17 15:21:54 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ErrImagePull,Message:rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/library/webserver:404": failed to resolve reference "docker.io/library/webserver:404": pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:192.168.1.60,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 17 15:21:58.710: INFO: Pod "webserver-deployment-566f96c878-lcm55" is not available: &Pod{ObjectMeta:{webserver-deployment-566f96c878-lcm55 webserver-deployment-566f96c878- deployment-1196 52d4faf6-aba1-4435-8887-1d62950276cc 9646 0 2023-01-17 15:21:56 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:566f96c878] map[] [{apps/v1 ReplicaSet webserver-deployment-566f96c878 48ec0db1-f578-4783-b51a-2308b2f362aa 0xc0051d6900 0xc0051d6901}] [] [{kube-controller-manager Update v1 2023-01-17 15:21:56 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"48ec0db1-f578-4783-b51a-2308b2f362aa\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2023-01-17 15:21:58 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"192.168.2.30\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-rf8bn,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-rf8bn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k8s-upgrade-and-conformance-lxkik8-worker-krt1ur,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Typ
e:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-17 15:21:56 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-17 15:21:56 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-17 15:21:56 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-17 15:21:56 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.7,PodIP:192.168.2.30,StartTime:2023-01-17 15:21:56 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ErrImagePull,Message:rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/library/webserver:404": failed to resolve reference "docker.io/library/webserver:404": pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:192.168.2.30,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jan 17 15:21:58.711: INFO: Pod "webserver-deployment-566f96c878-mxbsb" is not available: Pending on node k8s-upgrade-and-conformance-lxkik8-worker-krt1ur (HostIP 172.18.0.7, PodIP 192.168.2.31); container "httpd" (image webserver:404) waiting with reason ErrImagePull: failed to pull and unpack image "docker.io/library/webserver:404": pull access denied, repository does not exist or may require authorization.
Jan 17 15:21:58.711: INFO: Pod "webserver-deployment-566f96c878-q2dlj" is not available: Pending on node k8s-upgrade-and-conformance-lxkik8-worker-krt1ur (HostIP 172.18.0.7, PodIP 192.168.2.29); container "httpd" (image webserver:404) waiting with reason ErrImagePull: failed to pull and unpack image "docker.io/library/webserver:404": pull access denied, repository does not exist or may require authorization.
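The ErrImagePull records above share one root cause: the Deployment was rolled to the image "webserver:404", which resolves to docker.io/library/webserver:404 and does not exist, so every pod of the new ReplicaSet (566f96c878) stays Pending indefinitely. That is consistent with the upstream Deployment e2e tests, which intentionally roll to a nonexistent image tag to produce permanently-unavailable pods; the signal to watch is therefore not the pull error itself but how many pods the controller counts as available. For triage, a minimal client-go sketch (the kubeconfig path is hypothetical; the namespace is taken from the log) that prints every container stuck in a Waiting state:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Hypothetical kubeconfig path for the workload cluster under test.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/workload.kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Namespace taken from the log records above.
	pods, err := cs.CoreV1().Pods("deployment-1196").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		for _, st := range p.Status.ContainerStatuses {
			// A non-nil Waiting state carries reasons such as
			// ContainerCreating, ErrImagePull, or ImagePullBackOff.
			if w := st.State.Waiting; w != nil {
				fmt.Printf("%s/%s: %s (%s)\n", p.Name, st.Name, w.Reason, w.Message)
			}
		}
	}
}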
Jan 17 15:21:58.711: INFO: Pod "webserver-deployment-566f96c878-s8qnb" is not available: Pending on node k8s-upgrade-and-conformance-lxkik8-worker-dueiad (HostIP 172.18.0.6, no PodIP yet); container "httpd" (image webserver:404) waiting with reason ContainerCreating.
Jan 17 15:21:58.711: INFO: Pod "webserver-deployment-566f96c878-sdm7h" is not available: Pending on node k8s-upgrade-and-conformance-lxkik8-md-0-g974z-f76d7886c-xqvh2 (HostIP 172.18.0.5, PodIP 192.168.1.59); container "httpd" (image webserver:404) waiting with reason ErrImagePull: failed to pull and unpack image "docker.io/library/webserver:404": pull access denied, repository does not exist or may require authorization.
Jan 17 15:21:58.712: INFO: Pod "webserver-deployment-566f96c878-x2sxh" is not available: Pending; scheduled to node k8s-upgrade-and-conformance-lxkik8-md-0-g974z-f76d7886c-xqvh2 at 15:21:56, no container status reported by the kubelet yet.
Jan 17 15:21:58.712: INFO: Pod "webserver-deployment-5d9fdcc779-2m28k" is not available: Pending on node k8s-upgrade-and-conformance-lxkik8-md-0-g974z-f76d7886c-xqvh2 (HostIP 172.18.0.5, no PodIP yet); container "httpd" (image k8s.gcr.io/e2e-test-images/httpd:2.4.38-2) waiting with reason ContainerCreating.
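For context on the "is available" / "is not available" labels in these records: a pod counts as available only once its Ready condition is True and has stayed True for the Deployment's minReadySeconds, so pods parked in ContainerCreating or ErrImagePull never become available no matter how long they sit. A standalone sketch of that rule (illustrative only; the e2e framework uses its own equivalent helper):

package triage

import (
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// isPodAvailable reports whether pod has been Ready for at least
// minReadySeconds as of now.
func isPodAvailable(pod *corev1.Pod, minReadySeconds int32, now metav1.Time) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type != corev1.PodReady || c.Status != corev1.ConditionTrue {
			continue
		}
		if minReadySeconds == 0 {
			return true
		}
		// Ready must have held for the full minReadySeconds window.
		readySince := c.LastTransitionTime.Add(time.Duration(minReadySeconds) * time.Second)
		return !now.Time.Before(readySince)
	}
	return false
}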
Jan 17 15:21:58.712: INFO: Pod "webserver-deployment-5d9fdcc779-4r4bs" is not available: Pending on node k8s-upgrade-and-conformance-lxkik8-worker-krt1ur (HostIP 172.18.0.7, no PodIP yet); container "httpd" (image k8s.gcr.io/e2e-test-images/httpd:2.4.38-2) waiting with reason ContainerCreating.
Jan 17 15:21:58.712: INFO: Pod "webserver-deployment-5d9fdcc779-4ssvc" is not available: Pending on node k8s-upgrade-and-conformance-lxkik8-worker-krt1ur (HostIP 172.18.0.7, no PodIP yet); container "httpd" (image k8s.gcr.io/e2e-test-images/httpd:2.4.38-2) waiting with reason ContainerCreating.
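Note the split between the two ReplicaSets: every 566f96c878 pod (image webserver:404) is wedged on a pull that can never succeed, while the 5d9fdcc779 pods (the valid k8s.gcr.io/e2e-test-images/httpd:2.4.38-2 image) are merely slow to start and can still become available, as the next record shows. Pod events distinguish a transient ContainerCreating from a repeating pull failure; a sketch reusing the clientset from the earlier example (pod and namespace names are taken from the log):

// printPullEvents lists the events recorded for one stuck pod, making
// repeated Failed/BackOff image pulls visible with timestamps.
// Imports as in the earlier sketch, plus "time".
func printPullEvents(ctx context.Context, cs kubernetes.Interface) error {
	events, err := cs.CoreV1().Events("deployment-1196").List(ctx, metav1.ListOptions{
		FieldSelector: "involvedObject.name=webserver-deployment-566f96c878-q2dlj",
	})
	if err != nil {
		return err
	}
	for _, e := range events.Items {
		fmt.Printf("%s  %s  %s\n", e.LastTimestamp.Format(time.RFC3339), e.Reason, e.Message)
	}
	return nil
}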
Jan 17 15:21:58.713: INFO: Pod "webserver-deployment-5d9fdcc779-7xbtt" is available: Running on node k8s-upgrade-and-conformance-lxkik8-worker-dueiad (HostIP 172.18.0.6, PodIP 192.168.6.36); container "httpd" (image k8s.gcr.io/e2e-test-images/httpd:2.4.38-2) running since 2023-01-17 15:21:53 and Ready.
Jan 17 15:21:58.713: INFO: Pod "webserver-deployment-5d9fdcc779-8g9ts" is not available: &Pod{ObjectMeta:{webserver-deployment-5d9fdcc779-8g9ts webserver-deployment-5d9fdcc779- deployment-1196 9367052a-27c3-4168-a720-bcdfcfb957ff 9653 0 2023-01-17 15:21:56 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:5d9fdcc779] map[] [{apps/v1 ReplicaSet webserver-deployment-5d9fdcc779 f378b5b8-b0ba-4249-b439-a77b949f05f2 0xc0051d7fd0 0xc0051d7fd1}] [] [{kube-controller-manager Update v1 2023-01-17 15:21:56 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f378b5b8-b0ba-4249-b439-a77b949f05f2\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2023-01-17 15:21:58 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-zggmn,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-zggmn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k8s-upgrade-and-conformance-lxkik8-md-0-g974z-f76d7886c-xqvh2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Pending,C
onditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-17 15:21:56 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-17 15:21:56 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-17 15:21:56 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-17 15:21:56 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.5,PodIP:,StartTime:2023-01-17 15:21:56 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 17 15:21:58.713: INFO: Pod "webserver-deployment-5d9fdcc779-9rcjs" is not available: &Pod{ObjectMeta:{webserver-deployment-5d9fdcc779-9rcjs webserver-deployment-5d9fdcc779- deployment-1196 9c477c95-9ea6-4a42-9a22-5e75981423f0 9539 0 2023-01-17 15:21:56 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:5d9fdcc779] map[] [{apps/v1 ReplicaSet webserver-deployment-5d9fdcc779 f378b5b8-b0ba-4249-b439-a77b949f05f2 0xc005206190 0xc005206191}] [] [{kube-controller-manager Update v1 2023-01-17 15:21:56 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f378b5b8-b0ba-4249-b439-a77b949f05f2\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2023-01-17 15:21:56 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-hv2tc,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-hv2tc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k8s-upgrade-and-conformance-lxkik8-md-0-g974z-f76d7886c-xqvh2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Pending,C
onditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-17 15:21:56 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-17 15:21:56 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-17 15:21:56 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-17 15:21:56 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.5,PodIP:,StartTime:2023-01-17 15:21:56 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 17 15:21:58.713: INFO: Pod "webserver-deployment-5d9fdcc779-bdtfw" is available: &Pod{ObjectMeta:{webserver-deployment-5d9fdcc779-bdtfw webserver-deployment-5d9fdcc779- deployment-1196 a37e1c6e-d1f9-4008-9b22-4e76e429ecd3 9654 0 2023-01-17 15:21:56 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:5d9fdcc779] map[] [{apps/v1 ReplicaSet webserver-deployment-5d9fdcc779 f378b5b8-b0ba-4249-b439-a77b949f05f2 0xc005206370 0xc005206371}] [] [{kube-controller-manager Update v1 2023-01-17 15:21:56 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f378b5b8-b0ba-4249-b439-a77b949f05f2\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2023-01-17 15:21:58 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"192.168.6.39\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-gffjq,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-gffjq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k8s-upgrade-and-conformance-lxkik8-worker-dueiad,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Running,Conditions:[]P
odCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-17 15:21:56 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-17 15:21:58 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-17 15:21:58 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-17 15:21:56 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.6,PodIP:192.168.6.39,StartTime:2023-01-17 15:21:56 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2023-01-17 15:21:58 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,ImageID:k8s.gcr.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3,ContainerID:containerd://adb63efa0a3e4b91ca9e5c23b7e36a3c449fdc4dec701078c3bdd2a4e15a3ce9,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:192.168.6.39,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 17 15:21:58.713: INFO: Pod "webserver-deployment-5d9fdcc779-blm82" is available: &Pod{ObjectMeta:{webserver-deployment-5d9fdcc779-blm82 webserver-deployment-5d9fdcc779- deployment-1196 480d6869-3196-46c4-8349-a680cb8a9c78 9369 0 2023-01-17 15:21:52 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:5d9fdcc779] map[] [{apps/v1 ReplicaSet webserver-deployment-5d9fdcc779 f378b5b8-b0ba-4249-b439-a77b949f05f2 0xc005206580 0xc005206581}] [] [{kube-controller-manager Update v1 2023-01-17 15:21:52 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f378b5b8-b0ba-4249-b439-a77b949f05f2\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2023-01-17 15:21:53 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"192.168.0.42\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-jp49g,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-jp49g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k8s-upgrade-and-conformance-lxkik8-md-0-g974z-f76d7886c-88xc9,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Running,C
onditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-17 15:21:52 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-17 15:21:53 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-17 15:21:53 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-17 15:21:52 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.4,PodIP:192.168.0.42,StartTime:2023-01-17 15:21:52 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2023-01-17 15:21:53 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,ImageID:k8s.gcr.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3,ContainerID:containerd://43cbe72d53000f8da5b091fda612f5a67872e3d46dcd1a336f457a428ab5a742,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:192.168.0.42,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 17 15:21:58.713: INFO: Pod "webserver-deployment-5d9fdcc779-chxcb" is not available: &Pod{ObjectMeta:{webserver-deployment-5d9fdcc779-chxcb webserver-deployment-5d9fdcc779- deployment-1196 afafc123-b3c8-436e-add5-cd99912fb669 9573 0 2023-01-17 15:21:56 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:5d9fdcc779] map[] [{apps/v1 ReplicaSet webserver-deployment-5d9fdcc779 f378b5b8-b0ba-4249-b439-a77b949f05f2 0xc005206750 0xc005206751}] [] [{kube-controller-manager Update v1 2023-01-17 15:21:56 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f378b5b8-b0ba-4249-b439-a77b949f05f2\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2023-01-17 15:21:56 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-srftv,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-srftv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k8s-upgrade-and-conformance-lxkik8-md-0-g974z-f76d7886c-88xc9,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Pending,C
onditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-17 15:21:56 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-17 15:21:56 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-17 15:21:56 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-17 15:21:56 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.4,PodIP:,StartTime:2023-01-17 15:21:56 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 17 15:21:58.714: INFO: Pod "webserver-deployment-5d9fdcc779-fbj57" is available: &Pod{ObjectMeta:{webserver-deployment-5d9fdcc779-fbj57 webserver-deployment-5d9fdcc779- deployment-1196 624fa4c9-f1ae-4d0f-b13e-15a606d7997e 9399 0 2023-01-17 15:21:52 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:5d9fdcc779] map[] [{apps/v1 ReplicaSet webserver-deployment-5d9fdcc779 f378b5b8-b0ba-4249-b439-a77b949f05f2 0xc005206900 0xc005206901}] [] [{kube-controller-manager Update v1 2023-01-17 15:21:52 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f378b5b8-b0ba-4249-b439-a77b949f05f2\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2023-01-17 15:21:54 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"192.168.1.57\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-92zjj,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-92zjj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k8s-upgrade-and-conformance-lxkik8-md-0-g974z-f76d7886c-xqvh2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Running,C
onditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-17 15:21:52 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-17 15:21:54 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-17 15:21:54 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-17 15:21:52 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.5,PodIP:192.168.1.57,StartTime:2023-01-17 15:21:52 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2023-01-17 15:21:53 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,ImageID:k8s.gcr.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3,ContainerID:containerd://4f31f1897a3261d368566b043f7aac567b427b2ae7fec9d988efc57e66b6134f,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:192.168.1.57,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 17 15:21:58.714: INFO: Pod "webserver-deployment-5d9fdcc779-gcj22" is not available: &Pod{ObjectMeta:{webserver-deployment-5d9fdcc779-gcj22 webserver-deployment-5d9fdcc779- deployment-1196 d4616129-8adf-4672-a1ed-6e68928ccb25 9545 0 2023-01-17 15:21:56 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:5d9fdcc779] map[] [{apps/v1 ReplicaSet webserver-deployment-5d9fdcc779 f378b5b8-b0ba-4249-b439-a77b949f05f2 0xc005206ad0 0xc005206ad1}] [] [{kube-controller-manager Update v1 2023-01-17 15:21:56 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f378b5b8-b0ba-4249-b439-a77b949f05f2\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2023-01-17 15:21:56 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-lptmw,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-lptmw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k8s-upgrade-and-conformance-lxkik8-worker-dueiad,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Pending,Conditions:[]P
odCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-17 15:21:56 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-17 15:21:56 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-17 15:21:56 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-17 15:21:56 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.6,PodIP:,StartTime:2023-01-17 15:21:56 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 17 15:21:58.714: INFO: Pod "webserver-deployment-5d9fdcc779-gfzms" is available: &Pod{ObjectMeta:{webserver-deployment-5d9fdcc779-gfzms webserver-deployment-5d9fdcc779- deployment-1196 089dbfc9-9a1d-4f58-a6db-62ad74337575 9383 0 2023-01-17 15:21:52 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:5d9fdcc779] map[] [{apps/v1 ReplicaSet webserver-deployment-5d9fdcc779 f378b5b8-b0ba-4249-b439-a77b949f05f2 0xc005206c80 0xc005206c81}] [] [{kube-controller-manager Update v1 2023-01-17 15:21:52 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f378b5b8-b0ba-4249-b439-a77b949f05f2\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2023-01-17 15:21:54 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"192.168.6.35\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-xpz84,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-xpz84,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k8s-upgrade-and-conformance-lxkik8-worker-dueiad,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Running,Conditions:[]P
odCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-17 15:21:52 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-17 15:21:54 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-17 15:21:54 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-17 15:21:52 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.6,PodIP:192.168.6.35,StartTime:2023-01-17 15:21:52 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2023-01-17 15:21:53 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,ImageID:k8s.gcr.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3,ContainerID:containerd://c089a92aabaa898135a592412a737c0202022c73cec8490112b6d4e64b861321,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:192.168.6.35,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 17 15:21:58.715: INFO: Pod "webserver-deployment-5d9fdcc779-jhv4q" is not available: &Pod{ObjectMeta:{webserver-deployment-5d9fdcc779-jhv4q webserver-deployment-5d9fdcc779- deployment-1196 285e6111-7a5b-4627-97d4-fdbf463bb29a 9520 0 2023-01-17 15:21:56 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:5d9fdcc779] map[] [{apps/v1 ReplicaSet webserver-deployment-5d9fdcc779 f378b5b8-b0ba-4249-b439-a77b949f05f2 0xc005206e50 0xc005206e51}] [] [{kube-controller-manager Update v1 2023-01-17 15:21:56 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f378b5b8-b0ba-4249-b439-a77b949f05f2\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2023-01-17 15:21:56 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-djq2n,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-djq2n,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k8s-upgrade-and-conformance-lxkik8-md-0-g974z-f76d7886c-88xc9,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Pending,C
onditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-17 15:21:56 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-17 15:21:56 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-17 15:21:56 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-17 15:21:56 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.4,PodIP:,StartTime:2023-01-17 15:21:56 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 17 15:21:58.715: INFO: Pod "webserver-deployment-5d9fdcc779-l7vgj" is available: &Pod{ObjectMeta:{webserver-deployment-5d9fdcc779-l7vgj webserver-deployment-5d9fdcc779- deployment-1196 d0ade873-8fac-437c-b413-cd04d0886665 9390 0 2023-01-17 15:21:52 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:5d9fdcc779] map[] [{apps/v1 ReplicaSet webserver-deployment-5d9fdcc779 f378b5b8-b0ba-4249-b439-a77b949f05f2 0xc005207000 0xc005207001}] [] [{kube-controller-manager Update v1 2023-01-17 15:21:52 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f378b5b8-b0ba-4249-b439-a77b949f05f2\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2023-01-17 15:21:54 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"192.168.2.28\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-fbmbd,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-fbmbd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k8s-upgrade-and-conformance-lxkik8-worker-krt1ur,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Running,Conditions:[]P
odCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-17 15:21:52 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-17 15:21:54 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-17 15:21:54 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-17 15:21:52 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.7,PodIP:192.168.2.28,StartTime:2023-01-17 15:21:52 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2023-01-17 15:21:53 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,ImageID:k8s.gcr.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3,ContainerID:containerd://61485472951e8a9b259582054efc54730ee58ac3a638c27098bed91c6deb07bd,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:192.168.2.28,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 17 15:21:58.715: INFO: Pod "webserver-deployment-5d9fdcc779-pqfvs" is available: &Pod{ObjectMeta:{webserver-deployment-5d9fdcc779-pqfvs webserver-deployment-5d9fdcc779- deployment-1196 479152d9-a577-4970-b315-60dd145b4bd7 9372 0 2023-01-17 15:21:52 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:5d9fdcc779] map[] [{apps/v1 ReplicaSet webserver-deployment-5d9fdcc779 f378b5b8-b0ba-4249-b439-a77b949f05f2 0xc005207220 0xc005207221}] [] [{kube-controller-manager Update v1 2023-01-17 15:21:52 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f378b5b8-b0ba-4249-b439-a77b949f05f2\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2023-01-17 15:21:53 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"192.168.0.43\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-b4sb9,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-b4sb9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k8s-upgrade-and-conformance-lxkik8-md-0-g974z-f76d7886c-88xc9,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Running,C
onditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-17 15:21:52 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-17 15:21:53 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-17 15:21:53 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-17 15:21:52 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.4,PodIP:192.168.0.43,StartTime:2023-01-17 15:21:52 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2023-01-17 15:21:53 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,ImageID:k8s.gcr.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3,ContainerID:containerd://9e8e84e6e14ceb7b4f388e84091ee0a66a2ae0db0d90c929a5c06ee9535727ae,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:192.168.0.43,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 17 15:21:58.717: INFO: Pod "webserver-deployment-5d9fdcc779-prds5" is not available: &Pod{ObjectMeta:{webserver-deployment-5d9fdcc779-prds5 webserver-deployment-5d9fdcc779- deployment-1196 ce5eeaf8-fd5c-422a-ad0e-1be968777955 9561 0 2023-01-17 15:21:56 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:5d9fdcc779] map[] [{apps/v1 ReplicaSet webserver-deployment-5d9fdcc779 f378b5b8-b0ba-4249-b439-a77b949f05f2 0xc0052073f0 0xc0052073f1}] [] [{kube-controller-manager Update v1 2023-01-17 15:21:56 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f378b5b8-b0ba-4249-b439-a77b949f05f2\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2023-01-17 15:21:56 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-dj4n5,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-dj4n5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k8s-upgrade-and-conformance-lxkik8-md-0-g974z-f76d7886c-88xc9,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Pending,C
onditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-17 15:21:56 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-17 15:21:56 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-17 15:21:56 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-17 15:21:56 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.4,PodIP:,StartTime:2023-01-17 15:21:56 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 17 15:21:58.717: INFO: Pod "webserver-deployment-5d9fdcc779-tcpn7" is not available: &Pod{ObjectMeta:{webserver-deployment-5d9fdcc779-tcpn7 webserver-deployment-5d9fdcc779- deployment-1196 cf500145-56db-47c0-8638-c9a828e7020c 9587 0 2023-01-17 15:21:56 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:5d9fdcc779] map[] [{apps/v1 ReplicaSet webserver-deployment-5d9fdcc779 f378b5b8-b0ba-4249-b439-a77b949f05f2 0xc005207620 0xc005207621}] [] [{kube-controller-manager Update v1 2023-01-17 15:21:56 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f378b5b8-b0ba-4249-b439-a77b949f05f2\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2023-01-17 15:21:57 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-bfrq6,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-bfrq6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k8s-upgrade-and-conformance-lxkik8-worker-dueiad,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Pending,Conditions:[]P
odCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-17 15:21:56 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-17 15:21:56 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-17 15:21:56 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-17 15:21:56 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.6,PodIP:,StartTime:2023-01-17 15:21:56 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 17 15:21:58.718: INFO: Pod "webserver-deployment-5d9fdcc779-v7s5m" is available: &Pod{ObjectMeta:{webserver-deployment-5d9fdcc779-v7s5m webserver-deployment-5d9fdcc779- deployment-1196 62286518-3bd2-4da2-9929-cd955716b372 9387 0 2023-01-17 15:21:52 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:5d9fdcc779] map[] [{apps/v1 ReplicaSet webserver-deployment-5d9fdcc779 f378b5b8-b0ba-4249-b439-a77b949f05f2 0xc005207800 0xc005207801}] [] [{kube-controller-manager Update v1 2023-01-17 15:21:52 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f378b5b8-b0ba-4249-b439-a77b949f05f2\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2023-01-17 15:21:54 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"192.168.2.27\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-ql8n5,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-ql8n5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k8s-upgrade-and-conformance-lxkik8-worker-krt1ur,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Running,Conditions:[]P
odCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-17 15:21:52 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-17 15:21:54 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-17 15:21:54 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-17 15:21:52 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.7,PodIP:192.168.2.27,StartTime:2023-01-17 15:21:52 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2023-01-17 15:21:53 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,ImageID:k8s.gcr.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3,ContainerID:containerd://0e59587df892a643c6f905fb975268b206759156fd5e838c85b47e2842b8f91c,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:192.168.2.27,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 17 15:21:58.718: INFO: Pod "webserver-deployment-5d9fdcc779-x6n26" is available: &Pod{ObjectMeta:{webserver-deployment-5d9fdcc779-x6n26 webserver-deployment-5d9fdcc779- deployment-1196 2d6dd9fa-5f37-4329-9395-52eb4d64c577 9393 0 2023-01-17 15:21:52 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:5d9fdcc779] map[] [{apps/v1 ReplicaSet webserver-deployment-5d9fdcc779 f378b5b8-b0ba-4249-b439-a77b949f05f2 0xc005207a10 0xc005207a11}] [] [{kube-controller-manager Update v1 2023-01-17 15:21:52 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f378b5b8-b0ba-4249-b439-a77b949f05f2\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2023-01-17 15:21:54 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"192.168.1.58\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-9vcdr,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-9vcdr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k8s-upgrade-and-conformance-lxkik8-md-0-g974z-f76d7886c-xqvh2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Running,C
onditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-17 15:21:52 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-17 15:21:54 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-17 15:21:54 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-17 15:21:52 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.5,PodIP:192.168.1.58,StartTime:2023-01-17 15:21:52 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2023-01-17 15:21:53 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,ImageID:k8s.gcr.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3,ContainerID:containerd://f476331a1e1425903cfb6ae69bd8e04ac0217f719c67f6863870f16814038604,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:192.168.1.58,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Jan 17 15:21:58.719: INFO: Pod "webserver-deployment-5d9fdcc779-zb76k" is not available: &Pod{ObjectMeta:{webserver-deployment-5d9fdcc779-zb76k webserver-deployment-5d9fdcc779- deployment-1196 725fe581-8cad-44ce-8cbf-47fc0bdc9452 9571 0 2023-01-17 15:21:56 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:5d9fdcc779] map[] [{apps/v1 ReplicaSet webserver-deployment-5d9fdcc779 f378b5b8-b0ba-4249-b439-a77b949f05f2 0xc005207c60 0xc005207c61}] [] [{kube-controller-manager Update v1 2023-01-17 15:21:56 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f378b5b8-b0ba-4249-b439-a77b949f05f2\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2023-01-17 15:21:56 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-wtf4v,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-wtf4v,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k8s-upgrade-and-conformance-lxkik8-worker-krt1ur,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Pending,Conditions:[]P
odCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-17 15:21:56 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-17 15:21:56 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-17 15:21:56 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2023-01-17 15:21:56 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.7,PodIP:,StartTime:2023-01-17 15:21:56 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 17 15:21:58.719: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-1196" for this suite.
•
------------------------------
{"msg":"PASSED [sig-apps] Deployment deployment should support proportional scaling [Conformance]","total":-1,"completed":29,"skipped":700,"failed":2,"failures":["[sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance]","[sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 17 15:20:11.982: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:752
[It] should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: creating service in namespace services-2367
STEP: creating service affinity-clusterip in namespace services-2367
STEP: creating replication
controller affinity-clusterip in namespace services-2367 I0117 15:20:12.096637 18 runners.go:193] Created replication controller with name: affinity-clusterip, namespace: services-2367, replica count: 3 I0117 15:20:15.148418 18 runners.go:193] affinity-clusterip Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Jan 17 15:20:15.155: INFO: Creating new exec pod Jan 17 15:20:18.174: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-2367 exec execpod-affinityjch5w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80' Jan 17 15:20:20.358: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 affinity-clusterip 80\nConnection to affinity-clusterip 80 port [tcp/http] succeeded!\n" Jan 17 15:20:20.358: INFO: stdout: "" Jan 17 15:20:21.359: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-2367 exec execpod-affinityjch5w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80' Jan 17 15:20:23.563: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 affinity-clusterip 80\nConnection to affinity-clusterip 80 port [tcp/http] succeeded!\n" Jan 17 15:20:23.563: INFO: stdout: "" Jan 17 15:20:24.360: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-2367 exec execpod-affinityjch5w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80' Jan 17 15:20:26.578: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 affinity-clusterip 80\nConnection to affinity-clusterip 80 port [tcp/http] succeeded!\n" Jan 17 15:20:26.578: INFO: stdout: "" Jan 17 15:20:27.359: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-2367 exec execpod-affinityjch5w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80' Jan 17 15:20:29.541: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 affinity-clusterip 80\nConnection to affinity-clusterip 80 port [tcp/http] succeeded!\n" Jan 17 15:20:29.541: INFO: stdout: "" Jan 17 15:20:30.359: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-2367 exec execpod-affinityjch5w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80' Jan 17 15:20:32.561: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 affinity-clusterip 80\nConnection to affinity-clusterip 80 port [tcp/http] succeeded!\n" Jan 17 15:20:32.561: INFO: stdout: "" Jan 17 15:20:33.359: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-2367 exec execpod-affinityjch5w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80' Jan 17 15:20:35.552: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 affinity-clusterip 80\nConnection to affinity-clusterip 80 port [tcp/http] succeeded!\n" Jan 17 15:20:35.552: INFO: stdout: "" Jan 17 15:20:36.359: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-2367 exec execpod-affinityjch5w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80' Jan 17 15:20:38.544: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 affinity-clusterip 80\nConnection to affinity-clusterip 80 port [tcp/http] succeeded!\n" Jan 17 15:20:38.545: INFO: stdout: "" Jan 17 15:20:39.359: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-2367 exec execpod-affinityjch5w -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80' Jan 17 15:20:41.540: INFO: stderr: "+ echo hostName\n+ nc -v -t 
-w 2 affinity-clusterip 80\nConnection to affinity-clusterip 80 port [tcp/http] succeeded!\n" Jan 17 15:20:41.540: INFO: stdout: ""
[the same probe, kubectl exec execpod-affinityjch5w -- /bin/sh -x -c 'echo hostName | nc -v -t -w 2 affinity-clusterip 80', was retried roughly every 3 seconds until 15:22:22.754, each time with identical results: nc reported "Connection to affinity-clusterip 80 port [tcp/http] succeeded!" on stderr and stdout stayed ""]
Jan 17 15:22:22.754: FAIL: Unexpected error: <*errors.errorString | 0xc0045f6030>: { s: "service is not reachable within 2m0s timeout on endpoint affinity-clusterip:80 over TCP protocol", } service is not reachable within 2m0s timeout on endpoint affinity-clusterip:80 over TCP protocol occurred
Full Stack Trace
k8s.io/kubernetes/test/e2e/network.execAffinityTestForNonLBServiceWithOptionalTransition(0x7101987, {0x7b06bd0, 0xc004854780}, 0xc00073ef00, 0x0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:3311 +0x669
k8s.io/kubernetes/test/e2e/network.execAffinityTestForNonLBService(...) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:3266
k8s.io/kubernetes/test/e2e/network.glob..func24.24() /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:2067 +0x8d
k8s.io/kubernetes/test/e2e.RunE2ETests(0x24c66d7) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:133 +0x697
k8s.io/kubernetes/test/e2e.TestE2E(0x243a8f9) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:136 +0x19
testing.tRunner(0xc000337d40, 0x735e880) /usr/local/go/src/testing/testing.go:1259 +0x102
created by testing.(*T).Run /usr/local/go/src/testing/testing.go:1306 +0x35a
Jan 17 15:22:22.755: INFO: Cleaning up the exec pod
STEP: deleting ReplicationController affinity-clusterip in namespace services-2367, will wait for the garbage collector to delete the pods
Jan 17 15:22:22.829: INFO: Deleting ReplicationController affinity-clusterip took: 5.767033ms
Jan 17 15:22:22.930: INFO: Terminating ReplicationController affinity-clusterip pods took: 100.538056ms
[AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 17 15:22:24.863: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-2367" for this suite.
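
For reference, the probe this spec loops on can be rerun by hand. A minimal sketch, reusing the generated names from the log above (namespace services-2367, exec pod execpod-affinityjch5w); any other run would generate different names:

    # Probe the service by name from inside the exec pod. A successful nc
    # connect alone is not enough: the spec only treats the service as
    # reachable when the backend answers, i.e. stdout is non-empty. In the
    # run above stdout stayed "" for the full 2m0s, hence the FAIL.
    kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-2367 \
      exec execpod-affinityjch5w -- /bin/sh -x -c \
        'echo hostName | nc -v -t -w 2 affinity-clusterip 80'
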
[AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:756
• Failure [132.907 seconds]
[sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance] [It] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
Jan 17 15:22:22.755: Unexpected error: <*errors.errorString | 0xc0045f6030>: { s: "service is not reachable within 2m0s timeout on endpoint affinity-clusterip:80 over TCP protocol", } service is not reachable within 2m0s timeout on endpoint affinity-clusterip:80 over TCP protocol occurred /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:3311
------------------------------
[BeforeEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client Jan 17 15:21:58.878: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should provide DNS for services [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-5275.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-5275.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-5275.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-5275.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-5275.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-5275.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-5275.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-5275.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-5275.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-5275.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-5275.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-5275.svc.cluster.local;check="$$(dig +notcp +noall +answer +search 66.185.142.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.142.185.66_udp@PTR;check="$$(dig +tcp +noall +answer +search 66.185.142.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.142.185.66_tcp@PTR;sleep 1; done
STEP: Running these commands on jessie: [the same eight dig checks as on wheezy, with the six name-lookup results written under the jessie_ prefix and the two PTR results to the same 10.142.185.66_udp@PTR and 10.142.185.66_tcp@PTR files]
STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jan 17 15:22:02.976: INFO: Unable to read wheezy_udp@dns-test-service.dns-5275.svc.cluster.local from pod dns-5275/dns-test-c782a127-7d4d-406e-9bc9-e0d21b5ea532: the server could not find the requested resource (get pods dns-test-c782a127-7d4d-406e-9bc9-e0d21b5ea532)
[the wheezy_tcp, wheezy_udp@_http._tcp, wheezy_tcp@_http._tcp, jessie_udp, jessie_tcp, jessie_udp@_http._tcp, and jessie_tcp@_http._tcp lookups failed the same way between 15:22:02.980 and 15:22:03.021]
Jan 17 15:22:03.036: INFO: Lookups using dns-5275/dns-test-c782a127-7d4d-406e-9bc9-e0d21b5ea532 failed for: [wheezy_udp@dns-test-service.dns-5275.svc.cluster.local wheezy_tcp@dns-test-service.dns-5275.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-5275.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-5275.svc.cluster.local jessie_udp@dns-test-service.dns-5275.svc.cluster.local jessie_tcp@dns-test-service.dns-5275.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-5275.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-5275.svc.cluster.local]
[the same eight lookups failed identically in the polling rounds at 15:22:08, 15:22:13, 15:22:18, 15:22:23, and 15:22:28]
Jan 17 15:22:33.154: INFO: DNS probes using dns-5275/dns-test-c782a127-7d4d-406e-9bc9-e0d21b5ea532 succeeded
STEP: deleting the pod
STEP: deleting the test service
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 17 15:22:33.346: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-5275" for this suite.
•
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for services [Conformance]","total":-1,"completed":30,"skipped":747,"failed":2,"failures":["[sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance]","[sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance]"]}
------------------------------
{"msg":"FAILED [sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","total":-1,"completed":51,"skipped":1038,"failed":1,"failures":["[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]"]}
[BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client Jan 17 15:22:24.893: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:752
[It] should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: creating service in namespace services-5768
STEP: creating service affinity-clusterip in namespace services-5768
STEP: creating replication controller affinity-clusterip in namespace services-5768
I0117 15:22:24.958578 18 runners.go:193] Created replication controller with name: affinity-clusterip, namespace: services-5768, replica count: 3
I0117 15:22:28.010235 18 runners.go:193] affinity-clusterip Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
Jan 17 15:22:28.019: INFO: Creating new exec pod
Jan 17 15:22:31.048: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-5768 exec execpod-affinityzfqn7 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80'
Jan 17 15:22:31.276: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 affinity-clusterip 80\nConnection to affinity-clusterip 80 port [tcp/http] succeeded!\n"
Jan 17 15:22:31.276: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request"
Jan 17 15:22:31.276: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-5768 exec execpod-affinityzfqn7 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.142.113.27 80'
Jan 17 15:22:31.516: INFO: stderr: "+ + nc -v -t -w 2 10.142.113.27 80\necho hostName\nConnection to 10.142.113.27 80 port [tcp/http] succeeded!\n"
Jan 17 15:22:31.516: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request"
Jan 17 15:22:31.516: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-5768 exec execpod-affinityzfqn7 -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.142.113.27:80/ ; done'
Jan 17 15:22:31.860: INFO: stderr: [the "+ seq 0 15" trace followed by sixteen "+ echo" / "+ curl -q -s --connect-timeout 2 http://10.142.113.27:80/" pairs]
Jan 17 15:22:31.860: INFO: stdout: [sixteen lines, each reading "affinity-clusterip-j4h96"]
Jan 17 15:22:31.860: INFO: Received response from host: affinity-clusterip-j4h96 [logged once per request, sixteen times in total]
Jan 17 15:22:31.860: INFO: Cleaning up the exec pod
STEP: deleting ReplicationController affinity-clusterip in namespace services-5768, will wait for the garbage collector to delete the pods
Jan 17 15:22:31.946: INFO: Deleting ReplicationController affinity-clusterip took: 13.065818ms
Jan 17 15:22:32.048: INFO: Terminating ReplicationController affinity-clusterip pods took: 101.293626ms
[AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 17 15:22:34.200: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-5768" for this suite.
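
The affinity assertion itself reduces to hitting the ClusterIP repeatedly and confirming a single backend answers every time. A minimal sketch of the equivalent manual check, reusing the exec pod and the ClusterIP 10.142.113.27 reported in this run:

    # Sixteen requests against the ClusterIP; with session affinity set to
    # ClientIP the output should contain exactly one distinct hostname
    # (affinity-clusterip-j4h96 above). More than one distinct name would
    # mean affinity did not hold.
    kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-5768 \
      exec execpod-affinityzfqn7 -- /bin/sh -c \
        'for i in $(seq 0 15); do curl -q -s --connect-timeout 2 http://10.142.113.27:80/; echo; done' \
      | grep . | sort -u
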
[AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:756
•
------------------------------
{"msg":"PASSED [sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","total":-1,"completed":52,"skipped":1038,"failed":1,"failures":["[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]"]}
------------------------------
[BeforeEach] [sig-auth] ServiceAccounts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client Jan 17 15:22:33.526: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should mount an API token into pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: getting the auto-created API token
STEP: reading a file in the container Jan 17 15:22:36.108: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-3774 pod-service-account-9fd2a39d-90f5-4aad-9f0c-a66bd6dfc6d3 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/token'
STEP: reading a file in the container Jan 17 15:22:36.268: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-3774 pod-service-account-9fd2a39d-90f5-4aad-9f0c-a66bd6dfc6d3 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt'
STEP: reading a file in the container Jan 17 15:22:36.464: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-3774 pod-service-account-9fd2a39d-90f5-4aad-9f0c-a66bd6dfc6d3 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace'
[AfterEach] [sig-auth] ServiceAccounts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 17 15:22:36.662: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-3774" for this suite.
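
The three kubectl exec calls above read the files that the service-account volume projects into every pod. The same check works against any running pod; a minimal sketch with placeholder names:

    # Each auto-mounted file lives under the standard service-account path.
    for f in token ca.crt namespace; do
      kubectl exec --namespace=<namespace> <pod> -c=<container> -- \
        cat /var/run/secrets/kubernetes.io/serviceaccount/$f
    done
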
•
------------------------------
{"msg":"PASSED [sig-auth] ServiceAccounts should mount an API token into pods [Conformance]","total":-1,"completed":31,"skipped":803,"failed":2,"failures":["[sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance]","[sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance]"]}
------------------------------
[BeforeEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client Jan 17 15:22:34.244: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating configMap with name projected-configmap-test-volume-608a56fa-9728-424f-83ce-6293c93250a9
STEP: Creating a pod to test consume configMaps
Jan 17 15:22:34.294: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-8fa04bfc-1f74-4562-a813-6f2e2b86eabc" in namespace "projected-9355" to be "Succeeded or Failed"
Jan 17 15:22:34.301: INFO: Pod "pod-projected-configmaps-8fa04bfc-1f74-4562-a813-6f2e2b86eabc": Phase="Pending", Reason="", readiness=false. Elapsed: 6.497798ms
Jan 17 15:22:36.308: INFO: Pod "pod-projected-configmaps-8fa04bfc-1f74-4562-a813-6f2e2b86eabc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01341801s
Jan 17 15:22:38.313: INFO: Pod "pod-projected-configmaps-8fa04bfc-1f74-4562-a813-6f2e2b86eabc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.019017332s
STEP: Saw pod success
Jan 17 15:22:38.314: INFO: Pod "pod-projected-configmaps-8fa04bfc-1f74-4562-a813-6f2e2b86eabc" satisfied condition "Succeeded or Failed"
Jan 17 15:22:38.318: INFO: Trying to get logs from node k8s-upgrade-and-conformance-lxkik8-md-0-g974z-f76d7886c-xqvh2 pod pod-projected-configmaps-8fa04bfc-1f74-4562-a813-6f2e2b86eabc container agnhost-container: <nil>
STEP: delete the pod
Jan 17 15:22:38.347: INFO: Waiting for pod pod-projected-configmaps-8fa04bfc-1f74-4562-a813-6f2e2b86eabc to disappear
Jan 17 15:22:38.351: INFO: Pod pod-projected-configmaps-8fa04bfc-1f74-4562-a813-6f2e2b86eabc no longer exists
[AfterEach] [sig-storage] Projected configMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 17 15:22:38.351: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9355" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":-1,"completed":53,"skipped":1047,"failed":1,"failures":["[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]"]}
------------------------------
[BeforeEach] [sig-apps] DisruptionController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client Jan 17 15:22:38.375: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename disruption
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-apps] DisruptionController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/disruption.go:69
[It] should create a PodDisruptionBudget [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: creating the pdb
STEP: Waiting for the pdb to be processed
STEP: updating the pdb
STEP: Waiting for the pdb to be processed
STEP: patching the pdb
STEP: Waiting for the pdb to be processed
STEP: Waiting for the pdb to be deleted
[AfterEach] [sig-apps] DisruptionController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 17 15:22:44.496: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "disruption-7041" for this suite.
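
The object this spec creates, updates, patches, and deletes is a plain PodDisruptionBudget. A minimal sketch of an equivalent round trip; the selector and minAvailable values are illustrative, since the log does not show the spec the test used:

    # Create a PDB, change it with a merge patch (the test's update/patch
    # steps), then delete it and wait for it to disappear.
    kubectl apply -f - <<'EOF'
    apiVersion: policy/v1
    kind: PodDisruptionBudget
    metadata:
      name: example-pdb
    spec:
      minAvailable: 1
      selector:
        matchLabels:
          app: example
    EOF
    kubectl patch pdb example-pdb --type=merge -p '{"spec":{"minAvailable":2}}'
    kubectl delete pdb example-pdb
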
•
------------------------------
{"msg":"PASSED [sig-apps] DisruptionController should create a PodDisruptionBudget [Conformance]","total":-1,"completed":54,"skipped":1050,"failed":1,"failures":["[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]"]}
------------------------------
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client Jan 17 15:22:44.577: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] works for multiple CRDs of same group but different versions [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: CRs in the same group but different versions (one multiversion CRD) show up in OpenAPI documentation
Jan 17 15:22:44.612: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: CRs in the same group but different versions (two CRDs) show up in OpenAPI documentation
Jan 17 15:22:55.748: INFO: >>> kubeConfig: /tmp/kubeconfig
Jan 17 15:22:58.101: INFO: >>> kubeConfig: /tmp/kubeconfig
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 17 15:23:09.657: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-7760" for this suite.
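
What this spec asserts is that custom resources from one API group, served at several versions, all show up in the API server's published OpenAPI document. A rough command-line equivalent, with <group>, <version>, and <plural> as placeholders for a multi-version CRD:

    # The aggregated OpenAPI v2 document should contain definitions for every
    # served version of the CRD.
    kubectl get --raw /openapi/v2 | grep -c '<group>'
    # kubectl explain resolves the schema of one specific published version.
    kubectl explain <plural> --api-version=<group>/<version>
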
•
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance]","total":-1,"completed":55,"skipped":1065,"failed":1,"failures":["[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]"]}
------------------------------
[BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client Jan 17 15:23:09.704: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:189
[It] should delete a collection of pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Create set of pods
Jan 17 15:23:09.764: INFO: created test-pod-1
Jan 17 15:23:09.771: INFO: created test-pod-2
Jan 17 15:23:09.787: INFO: created test-pod-3
STEP: waiting for all 3 pods to be running
Jan 17 15:23:09.787: INFO: Waiting up to 5m0s for all pods (need at least 3) in namespace 'pods-7611' to be running and ready
Jan 17 15:23:09.827: INFO: The status of Pod test-pod-1 is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
Jan 17 15:23:09.828: INFO: The status of Pod test-pod-2 is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
Jan 17 15:23:09.828: INFO: The status of Pod test-pod-3 is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
Jan 17 15:23:09.828: INFO: 0 / 3 pods in namespace 'pods-7611' are running and ready (0 seconds elapsed)
Jan 17 15:23:09.828: INFO: expected 0 pod replicas in namespace 'pods-7611', 0 are Running and Ready.
Jan 17 15:23:09.828: INFO: POD NODE PHASE GRACE CONDITIONS
Jan 17 15:23:09.828: INFO: test-pod-1 k8s-upgrade-and-conformance-lxkik8-worker-dueiad Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-17 15:23:09 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-17 15:23:09 +0000 UTC ContainersNotReady containers with unready status: [token-test]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-17 15:23:09 +0000 UTC ContainersNotReady containers with unready status: [token-test]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-17 15:23:09 +0000 UTC }]
Jan 17 15:23:09.828: INFO: test-pod-2 k8s-upgrade-and-conformance-lxkik8-md-0-g974z-f76d7886c-xqvh2 Pending [{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-17 15:23:09 +0000 UTC }]
Jan 17 15:23:09.828: INFO: test-pod-3 k8s-upgrade-and-conformance-lxkik8-md-0-g974z-f76d7886c-88xc9 Pending [{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-17 15:23:09 +0000 UTC }]
Jan 17 15:23:09.828: INFO:
Jan 17 15:23:11.856: INFO: The status of Pod test-pod-1 is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
Jan 17 15:23:11.856: INFO: 2 / 3 pods in namespace 'pods-7611' are running and ready (2 seconds elapsed)
Jan 17 15:23:11.856: INFO: expected 0 pod replicas in namespace 'pods-7611', 0 are Running and Ready.
Jan 17 15:23:11.856: INFO: POD NODE PHASE GRACE CONDITIONS
Jan 17 15:23:11.856: INFO: test-pod-1 k8s-upgrade-and-conformance-lxkik8-worker-dueiad Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2023-01-17 15:23:09 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2023-01-17 15:23:09 +0000 UTC ContainersNotReady containers with unready status: [token-test]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2023-01-17 15:23:09 +0000 UTC ContainersNotReady containers with unready status: [token-test]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2023-01-17 15:23:09 +0000 UTC }]
Jan 17 15:23:11.856: INFO:
Jan 17 15:23:13.847: INFO: 3 / 3 pods in namespace 'pods-7611' are running and ready (4 seconds elapsed)
Jan 17 15:23:13.847: INFO: expected 0 pod replicas in namespace 'pods-7611', 0 are Running and Ready.
STEP: waiting for all pods to be deleted
Jan 17 15:23:13.894: INFO: Pod quantity 3 is different from expected quantity 0
Jan 17 15:23:14.899: INFO: Pod quantity 2 is different from expected quantity 0
[AfterEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 17 15:23:15.900: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-7611" for this suite.
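
"Delete a collection of pods" exercises the API's DeleteCollection verb rather than three individual deletes. A rough equivalent through the raw API; the label selector is a placeholder, since the log does not show the labels on test-pod-1 through test-pod-3:

    # A single DELETE on the pods collection removes every pod matching the
    # selector; the spec then polls until the pod count reaches zero.
    kubectl proxy --port=8001 &
    curl -X DELETE \
      'http://127.0.0.1:8001/api/v1/namespaces/pods-7611/pods?labelSelector=<key>%3D<value>'
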
•
------------------------------
{"msg":"PASSED [sig-node] Pods should delete a collection of pods [Conformance]","total":-1,"completed":56,"skipped":1067,"failed":1,"failures":["[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]"]}
------------------------------
{"msg":"FAILED [sig-network] DNS should provide DNS for the cluster [Conformance]","total":-1,"completed":25,"skipped":429,"failed":2,"failures":["[sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]","[sig-network] DNS should provide DNS for the cluster [Conformance]"]}
[BeforeEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client Jan 17 15:18:47.119: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should provide DNS for the cluster [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;sleep 1; done
STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jan 17 15:22:22.419: INFO: Unable to read wheezy_udp@kubernetes.default.svc.cluster.local from pod dns-550/dns-test-4f0554a6-86a3-49d0-a44c-2720a4978b5d: the server is currently unable to handle the request (get pods dns-test-4f0554a6-86a3-49d0-a44c-2720a4978b5d)
Jan 17 15:23:49.178: FAIL: Unable to read wheezy_tcp@kubernetes.default.svc.cluster.local from pod dns-550/dns-test-4f0554a6-86a3-49d0-a44c-2720a4978b5d: Get
"https://172.18.0.3:6443/api/v1/namespaces/dns-550/pods/dns-test-4f0554a6-86a3-49d0-a44c-2720a4978b5d/proxy/results/wheezy_tcp@kubernetes.default.svc.cluster.local": context deadline exceeded Full Stack Trace k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.ConditionFunc.WithContext.func1({0x7f3325df52b8, 0x0}) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:220 +0x1b k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtectionWithContext({0x79d5088, 0xc000056080}, 0xc003057b58) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:233 +0x7c k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x79d5088, 0xc000056080}, 0x98, 0x2cc8045, 0x68) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:580 +0x38 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x79d5088, 0xc000056080}, 0x4a, 0xc003057be8, 0x2441ec7) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:526 +0x4a k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x78b5a40, 0xc00005c880, 0xc003057c30) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:512 +0x50 k8s.io/kubernetes/test/e2e/network.assertFilesContain({0xc00193f600, 0x4, 0x4}, {0x70d4db2, 0x7}, 0xc003769000, {0x7b06bd0, 0xc00362ea80}, 0x0, {0x0, ...}) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns_common.go:447 +0x1c5 k8s.io/kubernetes/test/e2e/network.assertFilesExist(...) 
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns_common.go:441
k8s.io/kubernetes/test/e2e/network.validateDNSResults(0xc00111ba20, 0xc003769000, {0xc00193f600, 0x4, 0x4})
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns_common.go:504 +0x470
k8s.io/kubernetes/test/e2e/network.glob..func2.1()
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns.go:64 +0x43e
k8s.io/kubernetes/test/e2e.RunE2ETests(0x24c66d7)
  _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:133 +0x697
k8s.io/kubernetes/test/e2e.TestE2E(0x0)
  _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:136 +0x19
testing.tRunner(0xc00070eea0, 0x735e880)
  /usr/local/go/src/testing/testing.go:1259 +0x102
created by testing.(*T).Run
  /usr/local/go/src/testing/testing.go:1306 +0x35a
E0117 15:23:49.179003 17 runtime.go:78] Observed a panic: ginkgowrapper.FailurePanic{Message:"Jan 17 15:23:49.178: Unable to read wheezy_tcp@kubernetes.default.svc.cluster.local from pod dns-550/dns-test-4f0554a6-86a3-49d0-a44c-2720a4978b5d: Get \"https://172.18.0.3:6443/api/v1/namespaces/dns-550/pods/dns-test-4f0554a6-86a3-49d0-a44c-2720a4978b5d/proxy/results/wheezy_tcp@kubernetes.default.svc.cluster.local\": context deadline exceeded", Filename:"/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go", Line:220, FullStackTrace:"k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.ConditionFunc.WithContext.func1({0x7f3325df52b8, 0x0})\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:220 +0x1b\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtectionWithContext({0x79d5088, 0xc000056080}, 0xc003057b58)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:233 +0x7c\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x79d5088, 0xc000056080}, 0x98, 0x2cc8045, 0x68)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:580 +0x38\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x79d5088, 0xc000056080}, 0x4a, 0xc003057be8, 0x2441ec7)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:526 +0x4a\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x78b5a40, 0xc00005c880, 0xc003057c30)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:512 +0x50\nk8s.io/kubernetes/test/e2e/network.assertFilesContain({0xc00193f600, 0x4, 0x4}, {0x70d4db2, 0x7}, 0xc003769000, {0x7b06bd0, 0xc00362ea80}, 0x0, {0x0, ...})\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns_common.go:447 +0x1c5\nk8s.io/kubernetes/test/e2e/network.assertFilesExist(...)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns_common.go:441\nk8s.io/kubernetes/test/e2e/network.validateDNSResults(0xc00111ba20, 0xc003769000, {0xc00193f600, 0x4, 0x4})\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns_common.go:504 +0x470\nk8s.io/kubernetes/test/e2e/network.glob..func2.1()\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns.go:64 +0x43e\nk8s.io/kubernetes/test/e2e.RunE2ETests(0x24c66d7)\n\t_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:133 +0x697\nk8s.io/kubernetes/test/e2e.TestE2E(0x0)\n\t_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:136 +0x19\ntesting.tRunner(0xc00070eea0, 0x735e880)\n\t/usr/local/go/src/testing/testing.go:1259 +0x102\ncreated by testing.(*T).Run\n\t/usr/local/go/src/testing/testing.go:1306 +0x35a"}
( Your test failed. Ginkgo panics to prevent subsequent assertions from running. Normally Ginkgo rescues this panic so you shouldn't see it. But, if you make an assertion in a goroutine, Ginkgo can't capture the panic. To circumvent this, you should call defer GinkgoRecover() at the top of the goroutine that caused this panic. )
goroutine 136 [running]:
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime.logPanic({0x6c3d000, 0xc002cf6100})
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:74 +0x7d
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime.HandleCrash({0x0, 0x0, 0xc00007c290})
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:48 +0x75
panic({0x6c3d000, 0xc002cf6100})
  /usr/local/go/src/runtime/panic.go:1038 +0x215
k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail.func1()
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/ginkgowrapper/wrapper.go:63 +0x73
panic({0x62d5960, 0x78abec0})
  /usr/local/go/src/runtime/panic.go:1038 +0x215
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo.Fail({0xc0009e4840, 0x157}, {0xc0030575f0, 0x0, 0x40})
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/ginkgo_dsl.go:260 +0xdd
k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail({0xc0009e4840, 0x157}, {0xc0030576d0, 0x70cbf6a, 0xc0030576f8})
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/ginkgowrapper/wrapper.go:67 +0x1a7
k8s.io/kubernetes/test/e2e/framework.Failf({0x717cde2, 0x2d}, {0xc003057940, 0x0, 0x0})
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/log.go:51 +0x131
k8s.io/kubernetes/test/e2e/network.assertFilesContain.func1()
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns_common.go:464 +0x889
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.ConditionFunc.WithContext.func1({0x7f3325df52b8, 0x0})
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:220 +0x1b
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtectionWithContext({0x79d5088, 0xc000056080}, 0xc003057b58)
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:233 +0x7c
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x79d5088, 0xc000056080}, 0x98, 0x2cc8045, 0x68)
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:580 +0x38
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x79d5088, 0xc000056080}, 0x4a, 0xc003057be8, 0x2441ec7)
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:526 +0x4a
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x78b5a40, 0xc00005c880, 0xc003057c30)
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:512 +0x50
k8s.io/kubernetes/test/e2e/network.assertFilesContain({0xc00193f600, 0x4, 0x4}, {0x70d4db2, 0x7}, 0xc003769000, {0x7b06bd0, 0xc00362ea80}, 0x0, {0x0, ...})
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns_common.go:447 +0x1c5
k8s.io/kubernetes/test/e2e/network.assertFilesExist(...)
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns_common.go:441
k8s.io/kubernetes/test/e2e/network.validateDNSResults(0xc00111ba20, 0xc003769000, {0xc00193f600, 0x4, 0x4})
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns_common.go:504 +0x470
k8s.io/kubernetes/test/e2e/network.glob..func2.1()
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns.go:64 +0x43e
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes.(*runner).runSync(0xc00070f040)
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/runner.go:113 +0xba
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes.(*runner).run(0xc0030595c8)
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/runner.go:64 +0x125
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes.(*ItNode).Run(0x0)
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/it_node.go:26 +0x7b
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec.(*Spec).runSample(0xc00063ca50, 0xc003059990, {0x78b5a40, 0xc00005c880})
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec/spec.go:215 +0x2a9
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec.(*Spec).Run(0xc00063ca50, {0x78b5a40, 0xc00005c880})
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec/spec.go:138 +0xe7
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner.(*SpecRunner).runSpec(0xc00036e000, 0xc00063ca50)
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner/spec_runner.go:200 +0xe5
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner.(*SpecRunner).runSpecs(0xc00036e000)
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner/spec_runner.go:170 +0x1a5
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner.(*SpecRunner).Run(0xc00036e000)
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner/spec_runner.go:66 +0xc5
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/suite.(*Suite).Run(0xc00019a070, {0x7f332552c510, 0xc00070eea0}, {0x710baa9, 0x40}, {0xc0009ea420, 0x3, 0x3}, {0x7a2d318, 0xc00005c880}, ...)
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/suite/suite.go:79 +0x4d2
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo.RunSpecsWithCustomReporters({0x78bc3a0, 0xc00070eea0}, {0x710baa9, 0x14}, {0xc00005d640, 0x3, 0x6})
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/ginkgo_dsl.go:219 +0x185
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo.RunSpecsWithDefaultAndCustomReporters({0x78bc3a0, 0xc00070eea0}, {0x710baa9, 0x14}, {0xc00007f780, 0x2, 0x2})
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/ginkgo_dsl.go:207 +0xf9
k8s.io/kubernetes/test/e2e.RunE2ETests(0x24c66d7)
  _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:133 +0x697
k8s.io/kubernetes/test/e2e.TestE2E(0x0)
  _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:136 +0x19
testing.tRunner(0xc00070eea0, 0x735e880)
  /usr/local/go/src/testing/testing.go:1259 +0x102
created by testing.(*T).Run
  /usr/local/go/src/testing/testing.go:1306 +0x35a
STEP: deleting the pod
[AfterEach] [sig-network] DNS
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 17 15:23:49.200: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-550" for this suite.

• Failure [302.150 seconds]
[sig-network] DNS
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should provide DNS for the cluster [Conformance] [It]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633

  Jan 17 15:23:49.178: Unable to read wheezy_tcp@kubernetes.default.svc.cluster.local from pod dns-550/dns-test-4f0554a6-86a3-49d0-a44c-2720a4978b5d: Get "https://172.18.0.3:6443/api/v1/namespaces/dns-550/pods/dns-test-4f0554a6-86a3-49d0-a44c-2720a4978b5d/proxy/results/wheezy_tcp@kubernetes.default.svc.cluster.local": context deadline exceeded

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:220
------------------------------
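The failing spec polls for files written by dig loops checking the cluster DNS name over UDP and TCP. A minimal Go sketch of that same pair of lookups (assumes it runs inside a pod with cluster DNS configured; it is not the e2e probe itself, which shells out to dig):

package main

import (
	"context"
	"fmt"
	"net"
	"time"
)

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()

	// UDP path, the equivalent of `dig +notcp`.
	udp := &net.Resolver{PreferGo: true}
	addrs, err := udp.LookupHost(ctx, "kubernetes.default.svc.cluster.local")
	fmt.Println("udp:", addrs, err)

	// TCP path, the equivalent of `dig +tcp`: force the resolver's transport.
	tcp := &net.Resolver{
		PreferGo: true,
		Dial: func(ctx context.Context, network, address string) (net.Conn, error) {
			var d net.Dialer
			return d.DialContext(ctx, "tcp", address)
		},
	}
	addrs, err = tcp.LookupHost(ctx, "kubernetes.default.svc.cluster.local")
	fmt.Println("tcp:", addrs, err)
}

Note that the FAIL above is not a DNS resolution error: the "context deadline exceeded" comes from reading the probe results through the apiserver pod proxy, which stopped responding during the upgrade window.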
[BeforeEach] [sig-apps] StatefulSet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 17 15:23:16.108: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:94
[BeforeEach] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:109
STEP: Creating service test in namespace statefulset-2940
[It] should perform canary updates and phased rolling updates of template modifications [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating a new StatefulSet
Jan 17 15:23:16.180: INFO: Found 0 stateful pods, waiting for 3
Jan 17 15:23:26.191: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Jan 17 15:23:26.191: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Jan 17 15:23:26.191: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Updating stateful set template: update image from k8s.gcr.io/e2e-test-images/httpd:2.4.38-2 to k8s.gcr.io/e2e-test-images/httpd:2.4.39-2
Jan 17 15:23:26.248: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Not applying an update when the partition is greater than the number of replicas
STEP: Performing a canary update
Jan 17 15:23:36.336: INFO: Updating stateful set ss2
Jan 17 15:23:36.357: INFO: Waiting for Pod statefulset-2940/ss2-2 to have revision ss2-5f8764d585 update revision ss2-57bbdd95cb
STEP: Restoring Pods to the correct revision when they are deleted
Jan 17 15:23:46.437: INFO: Found 1 stateful pods, waiting for 3
Jan 17 15:23:56.444: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Jan 17 15:23:56.444: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Jan 17 15:23:56.444: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Performing a phased rolling update
Jan 17 15:23:56.485: INFO: Updating stateful set ss2
Jan 17 15:23:56.498: INFO: Waiting for Pod statefulset-2940/ss2-1 to have revision ss2-5f8764d585 update revision ss2-57bbdd95cb
Jan 17 15:24:06.540: INFO: Updating stateful set ss2
Jan 17 15:24:06.556: INFO: Waiting for StatefulSet statefulset-2940/ss2 to complete update
Jan 17 15:24:06.556: INFO: Waiting for Pod statefulset-2940/ss2-0 to have revision ss2-5f8764d585 update revision ss2-57bbdd95cb
[AfterEach] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:120
Jan 17 15:24:16.569: INFO: Deleting all statefulset in ns statefulset-2940
Jan 17 15:24:16.575: INFO: Scaling statefulset ss2 to 0
Jan 17 15:24:26.604: INFO: Waiting for statefulset status.replicas updated to 0
Jan 17 15:24:26.612: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 17 15:24:26.641: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-2940" for this suite.
•
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]","total":-1,"completed":57,"skipped":1141,"failed":1,"failures":["[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]"]}
------------------------------
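The canary and phased steps above work by moving the partition field of the StatefulSet's rolling update strategy. A sketch of that patch under assumed names (the clientset and namespace are placeholders; the statefulset name "ss2" is from the log):

package sketch

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
)

// setPartition pins pods with ordinal < partition to the old revision; only
// higher ordinals (the canary) roll to the new template. Lowering the
// partition step by step gives the phased rolling update seen in the log.
func setPartition(clientset *kubernetes.Clientset, ns string, partition int) error {
	patch := []byte(fmt.Sprintf(
		`{"spec":{"updateStrategy":{"rollingUpdate":{"partition":%d}}}}`, partition))
	_, err := clientset.AppsV1().StatefulSets(ns).Patch(
		context.TODO(), "ss2", types.StrategicMergePatchType, patch, metav1.PatchOptions{})
	return err
}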
[BeforeEach] [sig-node] Variable Expansion
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 17 15:22:36.821: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should verify that a failing subpath expansion can be modified during the lifecycle of a container [Slow] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: creating the pod with failed condition
STEP: updating the pod
Jan 17 15:24:37.424: INFO: Successfully updated pod "var-expansion-b4dd6e92-46d2-4c26-b068-6b2ad16f3c6f"
STEP: waiting for pod running
STEP: deleting the pod gracefully
Jan 17 15:24:39.440: INFO: Deleting pod "var-expansion-b4dd6e92-46d2-4c26-b068-6b2ad16f3c6f" in namespace "var-expansion-3282"
Jan 17 15:24:39.450: INFO: Wait up to 5m0s for pod "var-expansion-b4dd6e92-46d2-4c26-b068-6b2ad16f3c6f" to be fully deleted
[AfterEach] [sig-node] Variable Expansion
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 17 15:25:11.473: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-3282" for this suite.
• [SLOW TEST:154.676 seconds]
[sig-node] Variable Expansion
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should verify that a failing subpath expansion can be modified during the lifecycle of a container [Slow] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
------------------------------
{"msg":"PASSED [sig-node] Variable Expansion should verify that a failing subpath expansion can be modified during the lifecycle of a container [Slow] [Conformance]","total":-1,"completed":32,"skipped":876,"failed":2,"failures":["[sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance]","[sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance]"]}
------------------------------
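The subpath expansion being exercised is the subPathExpr field on a volume mount, which the kubelet expands from the container's env vars at mount time. A sketch of the shape under test (container name, image, and volume names are illustrative, not from this log):

package sketch

import corev1 "k8s.io/api/core/v1"

var container = corev1.Container{
	Name:  "subpath-container",
	Image: "k8s.gcr.io/e2e-test-images/busybox:1.29-2",
	Env: []corev1.EnvVar{{
		Name: "POD_NAME",
		ValueFrom: &corev1.EnvVarSource{
			FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.name"},
		},
	}},
	VolumeMounts: []corev1.VolumeMount{{
		Name:      "workdir1",
		MountPath: "/subpath_mount",
		// Expanded by the kubelet when the volume is mounted; an expansion
		// that fails keeps the container Pending until the pod is updated,
		// which is the lifecycle transition the test verifies.
		SubPathExpr: "$(POD_NAME)",
	}},
}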
[BeforeEach] [sig-node] Container Lifecycle Hook
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 17 15:25:11.516: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/lifecycle_hook.go:53
STEP: create the container to handle the HTTPGet hook request.
Jan 17 15:25:11.609: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true)
Jan 17 15:25:13.618: INFO: The status of Pod pod-handle-http-request is Running (Ready = true)
[It] should execute prestop http hook properly [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: create the pod with lifecycle hook
Jan 17 15:25:13.651: INFO: The status of Pod pod-with-prestop-http-hook is Pending, waiting for it to be Running (with Ready = true)
Jan 17 15:25:15.665: INFO: The status of Pod pod-with-prestop-http-hook is Running (Ready = true)
STEP: delete the pod with lifecycle hook
Jan 17 15:25:15.688: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jan 17 15:25:15.700: INFO: Pod pod-with-prestop-http-hook still exists
Jan 17 15:25:17.701: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jan 17 15:25:17.707: INFO: Pod pod-with-prestop-http-hook still exists
Jan 17 15:25:19.701: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jan 17 15:25:19.709: INFO: Pod pod-with-prestop-http-hook no longer exists
STEP: check prestop hook
[AfterEach] [sig-node] Container Lifecycle Hook
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 17 15:25:19.747: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-404" for this suite.
•
------------------------------
{"msg":"PASSED [sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]","total":-1,"completed":33,"skipped":881,"failed":2,"failures":["[sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance]","[sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance]"]}
------------------------------
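A sketch of the preStop HTTP hook shape the spec verifies; the kubelet fires the GET against the handler pod before stopping the container, and the test checks the handler saw the request. LifecycleHandler is the k8s.io/api type name in recent releases, and the host/port/path values here are placeholders:

package sketch

import (
	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

var withPrestop = corev1.Container{
	Name:  "pod-with-prestop-http-hook",
	Image: "k8s.gcr.io/pause:3.6",
	Lifecycle: &corev1.Lifecycle{
		PreStop: &corev1.LifecycleHandler{
			HTTPGet: &corev1.HTTPGetAction{
				Path: "/echo?msg=prestop",
				Host: "10.244.1.5", // IP of the handler pod in the test; placeholder here
				Port: intstr.FromInt(8080),
			},
		},
	},
}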
[BeforeEach] [sig-storage] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 17 15:25:19.835: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating secret with name secret-test-map-361446f9-d09b-4cf4-b177-2ff7be47af37
STEP: Creating a pod to test consume secrets
Jan 17 15:25:19.927: INFO: Waiting up to 5m0s for pod "pod-secrets-99d5b60d-af2f-4323-b22f-63e484fc9b88" in namespace "secrets-4144" to be "Succeeded or Failed"
Jan 17 15:25:19.936: INFO: Pod "pod-secrets-99d5b60d-af2f-4323-b22f-63e484fc9b88": Phase="Pending", Reason="", readiness=false. Elapsed: 8.929879ms
Jan 17 15:25:21.951: INFO: Pod "pod-secrets-99d5b60d-af2f-4323-b22f-63e484fc9b88": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023329084s
Jan 17 15:25:23.958: INFO: Pod "pod-secrets-99d5b60d-af2f-4323-b22f-63e484fc9b88": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.030423573s
STEP: Saw pod success
Jan 17 15:25:23.958: INFO: Pod "pod-secrets-99d5b60d-af2f-4323-b22f-63e484fc9b88" satisfied condition "Succeeded or Failed"
Jan 17 15:25:23.968: INFO: Trying to get logs from node k8s-upgrade-and-conformance-lxkik8-md-0-g974z-f76d7886c-xqvh2 pod pod-secrets-99d5b60d-af2f-4323-b22f-63e484fc9b88 container secret-volume-test: <nil>
STEP: delete the pod
Jan 17 15:25:24.012: INFO: Waiting for pod pod-secrets-99d5b60d-af2f-4323-b22f-63e484fc9b88 to disappear
Jan 17 15:25:24.020: INFO: Pod pod-secrets-99d5b60d-af2f-4323-b22f-63e484fc9b88 no longer exists
[AfterEach] [sig-storage] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 17 15:25:24.020: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-4144" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":34,"skipped":894,"failed":2,"failures":["[sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance]","[sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance]"]}
------------------------------
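"Mappings and Item Mode" refers to remapping a secret key to a different file path with an explicit per-file mode. A sketch of that volume shape (secret name, key, and path are illustrative):

package sketch

import corev1 "k8s.io/api/core/v1"

var mode int32 = 0400

var secretVolume = corev1.Volume{
	Name: "secret-volume",
	VolumeSource: corev1.VolumeSource{
		Secret: &corev1.SecretVolumeSource{
			SecretName: "secret-test-map",
			Items: []corev1.KeyToPath{{
				Key:  "data-1",
				Path: "new-path-data-1", // key is surfaced under this path instead of its own name
				Mode: &mode,             // per-item mode overrides the volume's defaultMode
			}},
		},
	},
}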
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 17 15:25:24.084: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41
[It] should provide container's memory request [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating a pod to test downward API volume plugin
Jan 17 15:25:24.164: INFO: Waiting up to 5m0s for pod "downwardapi-volume-d2f9990e-6792-49c0-b565-d0ee79e61ce0" in namespace "projected-4996" to be "Succeeded or Failed"
Jan 17 15:25:24.175: INFO: Pod "downwardapi-volume-d2f9990e-6792-49c0-b565-d0ee79e61ce0": Phase="Pending", Reason="", readiness=false. Elapsed: 10.914225ms
Jan 17 15:25:26.182: INFO: Pod "downwardapi-volume-d2f9990e-6792-49c0-b565-d0ee79e61ce0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017291891s
Jan 17 15:25:28.190: INFO: Pod "downwardapi-volume-d2f9990e-6792-49c0-b565-d0ee79e61ce0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.025373366s
STEP: Saw pod success
Jan 17 15:25:28.190: INFO: Pod "downwardapi-volume-d2f9990e-6792-49c0-b565-d0ee79e61ce0" satisfied condition "Succeeded or Failed"
Jan 17 15:25:28.196: INFO: Trying to get logs from node k8s-upgrade-and-conformance-lxkik8-worker-dueiad pod downwardapi-volume-d2f9990e-6792-49c0-b565-d0ee79e61ce0 container client-container: <nil>
STEP: delete the pod
Jan 17 15:25:28.246: INFO: Waiting for pod downwardapi-volume-d2f9990e-6792-49c0-b565-d0ee79e61ce0 to disappear
Jan 17 15:25:28.257: INFO: Pod downwardapi-volume-d2f9990e-6792-49c0-b565-d0ee79e61ce0 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 17 15:25:28.257: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4996" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance]","total":-1,"completed":35,"skipped":905,"failed":2,"failures":["[sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance]","[sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance]"]}
------------------------------
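A sketch of the projected downward API source such a spec mounts: a file populated from a container's requests.memory via resourceFieldRef (the file path and container name here mirror the log's "client-container" but the volume name is illustrative):

package sketch

import corev1 "k8s.io/api/core/v1"

var projected = corev1.Volume{
	Name: "podinfo",
	VolumeSource: corev1.VolumeSource{
		Projected: &corev1.ProjectedVolumeSource{
			Sources: []corev1.VolumeProjection{{
				DownwardAPI: &corev1.DownwardAPIProjection{
					Items: []corev1.DownwardAPIVolumeFile{{
						Path: "memory_request",
						ResourceFieldRef: &corev1.ResourceFieldSelector{
							ContainerName: "client-container",
							Resource:      "requests.memory",
						},
					}},
				},
			}},
		},
	},
}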
[BeforeEach] [sig-node] Downward API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 17 15:25:28.353: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating a pod to test downward api env vars
Jan 17 15:25:28.437: INFO: Waiting up to 5m0s for pod "downward-api-3d8402b3-6cdf-48c4-b0f6-4954ada1b7b5" in namespace "downward-api-3578" to be "Succeeded or Failed"
Jan 17 15:25:28.447: INFO: Pod "downward-api-3d8402b3-6cdf-48c4-b0f6-4954ada1b7b5": Phase="Pending", Reason="", readiness=false. Elapsed: 8.293904ms
Jan 17 15:25:30.459: INFO: Pod "downward-api-3d8402b3-6cdf-48c4-b0f6-4954ada1b7b5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020946698s
Jan 17 15:25:32.469: INFO: Pod "downward-api-3d8402b3-6cdf-48c4-b0f6-4954ada1b7b5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.030948726s
STEP: Saw pod success
Jan 17 15:25:32.469: INFO: Pod "downward-api-3d8402b3-6cdf-48c4-b0f6-4954ada1b7b5" satisfied condition "Succeeded or Failed"
Jan 17 15:25:32.479: INFO: Trying to get logs from node k8s-upgrade-and-conformance-lxkik8-md-0-g974z-f76d7886c-88xc9 pod downward-api-3d8402b3-6cdf-48c4-b0f6-4954ada1b7b5 container dapi-container: <nil>
STEP: delete the pod
Jan 17 15:25:32.527: INFO: Waiting for pod downward-api-3d8402b3-6cdf-48c4-b0f6-4954ada1b7b5 to disappear
Jan 17 15:25:32.535: INFO: Pod downward-api-3d8402b3-6cdf-48c4-b0f6-4954ada1b7b5 no longer exists
[AfterEach] [sig-node] Downward API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 17 15:25:32.535: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-3578" for this suite.
•
------------------------------
{"msg":"PASSED [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]","total":-1,"completed":36,"skipped":930,"failed":2,"failures":["[sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance]","[sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance]"]}
------------------------------
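A sketch of the env wiring that spec checks: pod name, namespace, and IP surfaced as env vars via fieldRef (env var names are illustrative):

package sketch

import corev1 "k8s.io/api/core/v1"

var downwardEnv = []corev1.EnvVar{
	{Name: "POD_NAME", ValueFrom: &corev1.EnvVarSource{
		FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.name"}}},
	{Name: "POD_NAMESPACE", ValueFrom: &corev1.EnvVarSource{
		FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.namespace"}}},
	{Name: "POD_IP", ValueFrom: &corev1.EnvVarSource{
		FieldRef: &corev1.ObjectFieldSelector{FieldPath: "status.podIP"}}},
}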
[BeforeEach] [sig-node] PodTemplates
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 17 15:25:32.577: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename podtemplate
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should delete a collection of pod templates [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Create set of pod templates
Jan 17 15:25:32.632: INFO: created test-podtemplate-1
Jan 17 15:25:32.640: INFO: created test-podtemplate-2
Jan 17 15:25:32.650: INFO: created test-podtemplate-3
STEP: get a list of pod templates with a label in the current namespace
STEP: delete collection of pod templates
Jan 17 15:25:32.660: INFO: requesting DeleteCollection of pod templates
STEP: check that the list of pod templates matches the requested quantity
Jan 17 15:25:32.714: INFO: requesting list of pod templates to confirm quantity
[AfterEach] [sig-node] PodTemplates
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 17 15:25:32.722: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "podtemplate-7554" for this suite.
•
------------------------------
{"msg":"PASSED [sig-node] PodTemplates should delete a collection of pod templates [Conformance]","total":-1,"completed":37,"skipped":934,"failed":2,"failures":["[sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance]","[sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance]"]}
------------------------------
[BeforeEach] [sig-storage] Projected secret
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 17 15:25:32.761: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating projection with secret that has name projected-secret-test-f8036596-876f-42d7-94c9-d4bcd540a3bf
STEP: Creating a pod to test consume secrets
Jan 17 15:25:32.840: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-2502dd7b-f1b9-4def-b57e-80929cf6fb12" in namespace "projected-2029" to be "Succeeded or Failed"
Jan 17 15:25:32.845: INFO: Pod "pod-projected-secrets-2502dd7b-f1b9-4def-b57e-80929cf6fb12": Phase="Pending", Reason="", readiness=false. Elapsed: 4.808043ms
Jan 17 15:25:34.856: INFO: Pod "pod-projected-secrets-2502dd7b-f1b9-4def-b57e-80929cf6fb12": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016583779s
Jan 17 15:25:36.864: INFO: Pod "pod-projected-secrets-2502dd7b-f1b9-4def-b57e-80929cf6fb12": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.024164187s
STEP: Saw pod success
Jan 17 15:25:36.864: INFO: Pod "pod-projected-secrets-2502dd7b-f1b9-4def-b57e-80929cf6fb12" satisfied condition "Succeeded or Failed"
Jan 17 15:25:36.869: INFO: Trying to get logs from node k8s-upgrade-and-conformance-lxkik8-md-0-g974z-f76d7886c-xqvh2 pod pod-projected-secrets-2502dd7b-f1b9-4def-b57e-80929cf6fb12 container projected-secret-volume-test: <nil>
STEP: delete the pod
Jan 17 15:25:36.907: INFO: Waiting for pod pod-projected-secrets-2502dd7b-f1b9-4def-b57e-80929cf6fb12 to disappear
Jan 17 15:25:36.915: INFO: Pod pod-projected-secrets-2502dd7b-f1b9-4def-b57e-80929cf6fb12 no longer exists
[AfterEach] [sig-storage] Projected secret
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 17 15:25:36.915: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2029" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance]","total":-1,"completed":38,"skipped":936,"failed":2,"failures":["[sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance]","[sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance]"]}
------------------------------
[BeforeEach] [sig-api-machinery] server version
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 17 15:25:37.155: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename server-version
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should find the server version [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Request ServerVersion
STEP: Confirm major version
Jan 17 15:25:37.206: INFO: Major version: 1
STEP: Confirm minor version
Jan 17 15:25:37.206: INFO: cleanMinorVersion: 23
Jan 17 15:25:37.206: INFO: Minor version: 23
[AfterEach] [sig-api-machinery] server version
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 17 15:25:37.206: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "server-version-3327" for this suite.
•
------------------------------
{"msg":"PASSED [sig-api-machinery] server version should find the server version [Conformance]","total":-1,"completed":39,"skipped":1001,"failed":2,"failures":["[sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance]","[sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance]"]}
------------------------------
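"Request ServerVersion" is a single discovery call; the Minor version of 23 confirms the control plane finished its upgrade to v1.23.x. A minimal sketch of the same request (kubeconfig path taken from the log):

package main

import (
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/kubeconfig")
	if err != nil {
		panic(err)
	}
	clientset := kubernetes.NewForConfigOrDie(cfg)

	// Hits /version on the apiserver and returns a version.Info.
	v, err := clientset.Discovery().ServerVersion()
	if err != nil {
		panic(err)
	}
	fmt.Printf("Major: %s Minor: %s GitVersion: %s\n", v.Major, v.Minor, v.GitVersion)
}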
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 17 15:25:37.240: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:244
[BeforeEach] Kubectl run pod
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1539
[It] should create a pod from an image when restart is Never [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: running the image k8s.gcr.io/e2e-test-images/httpd:2.4.38-2
Jan 17 15:25:37.288: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-1040 run e2e-test-httpd-pod --restart=Never --pod-running-timeout=2m0s --image=k8s.gcr.io/e2e-test-images/httpd:2.4.38-2'
Jan 17 15:25:37.488: INFO: stderr: ""
Jan 17 15:25:37.489: INFO: stdout: "pod/e2e-test-httpd-pod created\n"
STEP: verifying the pod e2e-test-httpd-pod was created
[AfterEach] Kubectl run pod
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1543
Jan 17 15:25:37.498: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-1040 delete pods e2e-test-httpd-pod'
Jan 17 15:25:39.767: INFO: stderr: ""
Jan 17 15:25:39.767: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 17 15:25:39.767: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-1040" for this suite.
•
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never [Conformance]","total":-1,"completed":40,"skipped":1004,"failed":2,"failures":["[sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance]","[sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance]"]}
------------------------------
[BeforeEach] [sig-apps] StatefulSet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 17 15:24:26.923: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:94
[BeforeEach] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:109
STEP: Creating service test in namespace statefulset-2625
[It] should perform rolling updates and roll backs of template modifications [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating a new StatefulSet
Jan 17 15:24:26.990: INFO: Found 0 stateful pods, waiting for 3
Jan 17 15:24:36.997: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Jan 17 15:24:36.997: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Jan 17 15:24:36.997: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
Jan 17 15:24:37.016: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-2625 exec ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Jan 17 15:24:37.357: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n"
Jan 17 15:24:37.357: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Jan 17 15:24:37.357: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'
STEP: Updating StatefulSet template: update image from k8s.gcr.io/e2e-test-images/httpd:2.4.38-2 to k8s.gcr.io/e2e-test-images/httpd:2.4.39-2
Jan 17 15:24:47.424: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Updating Pods in reverse ordinal order
Jan 17 15:24:57.466: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-2625 exec ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jan 17 15:24:57.828: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n"
Jan 17 15:24:57.828: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Jan 17 15:24:57.828: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'
STEP: Rolling back to a previous revision
Jan 17 15:25:07.889: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-2625 exec ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Jan 17 15:25:08.277: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n"
Jan 17 15:25:08.277: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Jan 17 15:25:08.277: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'
Jan 17 15:25:18.341: INFO: Updating stateful set ss2
STEP: Rolling back update in reverse ordinal order
Jan 17 15:25:28.404: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-2625 exec ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jan 17 15:25:28.758: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n"
Jan 17 15:25:28.758: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Jan 17 15:25:28.758: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'
[AfterEach] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:120
Jan 17 15:25:38.808: INFO: Deleting all statefulset in ns statefulset-2625
Jan 17 15:25:38.816: INFO: Scaling statefulset ss2 to 0
Jan 17 15:25:48.862: INFO: Waiting for statefulset status.replicas updated to 0
Jan 17 15:25:48.868: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 17 15:25:48.905: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-2625" for this suite.
•
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]","total":-1,"completed":58,"skipped":1219,"failed":1,"failures":["[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]"]}
------------------------------
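Both the rolling update and the rollback above are driven by the same kind of write: a change to the pod template that the StatefulSet controller then rolls out in reverse ordinal order. A sketch of that template image bump under assumed names (clientset and namespace are placeholders; "ss2" is from the log), using client-go's conflict retry helper since the controller also writes the object:

package sketch

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/util/retry"
)

// setImage changes the first container's image in the pod template; a
// rollback is the same write with the previous image value.
func setImage(clientset *kubernetes.Clientset, ns, image string) error {
	return retry.RetryOnConflict(retry.DefaultRetry, func() error {
		ss, err := clientset.AppsV1().StatefulSets(ns).Get(context.TODO(), "ss2", metav1.GetOptions{})
		if err != nil {
			return err
		}
		ss.Spec.Template.Spec.Containers[0].Image = image
		_, err = clientset.AppsV1().StatefulSets(ns).Update(context.TODO(), ss, metav1.UpdateOptions{})
		return err // re-fetched and retried automatically on a 409 Conflict
	})
}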
[BeforeEach] [sig-apps] Job
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 17 15:25:49.091: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should delete a job [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating a job
STEP: Ensuring active pods == parallelism
STEP: delete a job
STEP: deleting Job.batch foo in namespace job-691, will wait for the garbage collector to delete the pods
Jan 17 15:25:51.240: INFO: Deleting Job.batch foo took: 10.067238ms
Jan 17 15:25:51.340: INFO: Terminating Job.batch foo pods took: 100.74592ms
STEP: Ensuring job was deleted
[AfterEach] [sig-apps] Job
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 17 15:26:24.054: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-691" for this suite.
•
------------------------------
{"msg":"PASSED [sig-apps] Job should delete a job [Conformance]","total":-1,"completed":59,"skipped":1258,"failed":1,"failures":["[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]"]}
------------------------------
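"Will wait for the garbage collector to delete the pods" corresponds to deleting the Job with a propagation policy so its dependents are collected rather than orphaned. A sketch under assumed names (clientset and namespace are placeholders; the Job name "foo" is from the log):

package sketch

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// deleteJob removes the Job and lets the garbage collector clean up its pods
// in the background; DeletePropagationForeground would instead block the Job's
// final deletion until the pods are gone.
func deleteJob(clientset *kubernetes.Clientset, ns string) error {
	policy := metav1.DeletePropagationBackground
	return clientset.BatchV1().Jobs(ns).Delete(context.TODO(), "foo",
		metav1.DeleteOptions{PropagationPolicy: &policy})
}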
[BeforeEach] [sig-node] Probing container
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 17 15:25:39.836: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-node] Probing container
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:59
[It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[AfterEach] [sig-node] Probing container
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 17 15:26:39.926: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-5020" for this suite.
•
------------------------------
{"msg":"PASSED [sig-node] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]","total":-1,"completed":41,"skipped":1015,"failed":2,"failures":["[sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance]","[sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance]"]}
------------------------------
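A sketch of the spec shape behind this test: a readiness probe that always fails, so the pod never reports Ready but is also never restarted (readiness failures, unlike liveness failures, do not kill the container). ProbeHandler is the k8s.io/api field name in recent releases, and image/timings here are illustrative:

package sketch

import corev1 "k8s.io/api/core/v1"

var neverReady = corev1.Container{
	Name:  "busybox",
	Image: "k8s.gcr.io/e2e-test-images/busybox:1.29-2",
	ReadinessProbe: &corev1.Probe{
		ProbeHandler: corev1.ProbeHandler{
			// /bin/false always exits non-zero, so the probe never succeeds.
			Exec: &corev1.ExecAction{Command: []string{"/bin/false"}},
		},
		InitialDelaySeconds: 30,
		PeriodSeconds:       10,
	},
}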
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 17 15:26:40.027: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Jan 17 15:26:40.562: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jan 17 15:26:43.604: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should mutate pod and apply defaults after mutation [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Registering the mutating pod webhook via the AdmissionRegistration API
STEP: create a pod that should be updated by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 17 15:26:43.677: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-3434" for this suite.
STEP: Destroying namespace "webhook-3434-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
•
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","total":-1,"completed":42,"skipped":1045,"failed":2,"failures":["[sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance]","[sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance]"]}
------------------------------
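A sketch of the registration object behind "Registering the mutating pod webhook via the AdmissionRegistration API": a MutatingWebhookConfiguration pointing at the in-cluster webhook service. The webhook name, path, and CA bundle below are placeholders; the namespace and service name mirror the log:

package sketch

import (
	admissionregistrationv1 "k8s.io/api/admissionregistration/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

var (
	sideEffects = admissionregistrationv1.SideEffectClassNone
	path        = "/mutating-pods"

	config = admissionregistrationv1.MutatingWebhookConfiguration{
		ObjectMeta: metav1.ObjectMeta{Name: "e2e-test-mutating-webhook"},
		Webhooks: []admissionregistrationv1.MutatingWebhook{{
			Name: "mutate-pods.example.com",
			ClientConfig: admissionregistrationv1.WebhookClientConfig{
				Service: &admissionregistrationv1.ServiceReference{
					Namespace: "webhook-3434",
					Name:      "e2e-test-webhook",
					Path:      &path,
				},
				CABundle: []byte("...server cert CA, PEM..."), // placeholder
			},
			// Intercept pod creation; the apiserver applies the returned
			// patch, then runs defaulting on the mutated object.
			Rules: []admissionregistrationv1.RuleWithOperations{{
				Operations: []admissionregistrationv1.OperationType{admissionregistrationv1.Create},
				Rule: admissionregistrationv1.Rule{
					APIGroups:   []string{""},
					APIVersions: []string{"v1"},
					Resources:   []string{"pods"},
				},
			}},
			SideEffects:             &sideEffects,
			AdmissionReviewVersions: []string{"v1"},
		}},
	}
)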
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 17 15:26:24.233: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] removes definition from spec when one version gets changed to not be served [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: set up a multi version CRD
Jan 17 15:26:24.283: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: mark a version not served
STEP: check the unserved version gets removed
STEP: check the other version is not changed
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 17 15:26:46.729: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-8855" for this suite.
•
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance]","total":-1,"completed":60,"skipped":1292,"failed":1,"failures":["[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]"]}
------------------------------
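"Mark a version not served" means flipping served to false on one entry of the CRD's versions list; the apiserver then drops that version's definitions from the published OpenAPI spec, which is what the spec asserts. A sketch under assumed names (client, CRD name, and version are placeholders), using the apiextensions clientset:

package sketch

import (
	"context"

	apiextensionsclientset "k8s.io/apiextensions-apiserver/pkg/client/clientset/clientset"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// unserveVersion stops serving one version of a multi-version CRD while
// leaving its schema in place, so existing stored objects stay intact.
func unserveVersion(client apiextensionsclientset.Interface, crdName, version string) error {
	crd, err := client.ApiextensionsV1().CustomResourceDefinitions().Get(
		context.TODO(), crdName, metav1.GetOptions{})
	if err != nil {
		return err
	}
	for i := range crd.Spec.Versions {
		if crd.Spec.Versions[i].Name == version {
			crd.Spec.Versions[i].Served = false
		}
	}
	_, err = client.ApiextensionsV1().CustomResourceDefinitions().Update(
		context.TODO(), crd, metav1.UpdateOptions{})
	return err
}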
•
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance]","total":-1,"completed":60,"skipped":1292,"failed":1,"failures":["[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]"]}
SSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-storage] EmptyDir volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 17 15:26:43.919: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating a pod to test emptydir 0644 on node default medium
Jan 17 15:26:43.997: INFO: Waiting up to 5m0s for pod "pod-5767f87e-5768-4c5e-b848-cf0f4d8f6946" in namespace "emptydir-3079" to be "Succeeded or Failed"
Jan 17 15:26:44.007: INFO: Pod "pod-5767f87e-5768-4c5e-b848-cf0f4d8f6946": Phase="Pending", Reason="", readiness=false. Elapsed: 10.345637ms
Jan 17 15:26:46.016: INFO: Pod "pod-5767f87e-5768-4c5e-b848-cf0f4d8f6946": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019046259s
Jan 17 15:26:48.023: INFO: Pod "pod-5767f87e-5768-4c5e-b848-cf0f4d8f6946": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.025617512s
STEP: Saw pod success
Jan 17 15:26:48.023: INFO: Pod "pod-5767f87e-5768-4c5e-b848-cf0f4d8f6946" satisfied condition "Succeeded or Failed"
Jan 17 15:26:48.028: INFO: Trying to get logs from node k8s-upgrade-and-conformance-lxkik8-worker-dueiad pod pod-5767f87e-5768-4c5e-b848-cf0f4d8f6946 container test-container: <nil>
STEP: delete the pod
Jan 17 15:26:48.055: INFO: Waiting for pod pod-5767f87e-5768-4c5e-b848-cf0f4d8f6946 to disappear
Jan 17 15:26:48.060: INFO: Pod pod-5767f87e-5768-4c5e-b848-cf0f4d8f6946 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 17 15:26:48.060: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-3079" for this suite.
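For reference, the EmptyDir spec above creates a pod that writes a file with mode 0644 into an emptyDir volume on the default medium while running as a non-root user, then checks the file's mode and content from inside the container. A rough equivalent follows, using a generic busybox image and shell commands instead of the suite's own test image and binary (all names and values are illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: emptydir-0644-demo            # illustrative
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1001                   # non-root, as the spec name requires
  containers:
    - name: test-container
      image: busybox:1.36             # placeholder for the suite's test image
      command: ["sh", "-c", "echo data > /test-volume/f && chmod 0644 /test-volume/f && ls -l /test-volume/f"]
      volumeMounts:
        - name: test-volume
          mountPath: /test-volume
  volumes:
    - name: test-volume
      emptyDir: {}                    # default medium (node filesystem)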
•
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":43,"skipped":1060,"failed":2,"failures":["[sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance]","[sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance]"]}
SSSS
------------------------------
[BeforeEach] [sig-node] Downward API
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 17 15:26:46.798: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating a pod to test downward api env vars
Jan 17 15:26:46.853: INFO: Waiting up to 5m0s for pod "downward-api-761f6f95-91b4-41a7-80dc-c3606fd9aa2e" in namespace "downward-api-842" to be "Succeeded or Failed"
Jan 17 15:26:46.859: INFO: Pod "downward-api-761f6f95-91b4-41a7-80dc-c3606fd9aa2e": Phase="Pending", Reason="", readiness=false. Elapsed: 5.683604ms
Jan 17 15:26:48.866: INFO: Pod "downward-api-761f6f95-91b4-41a7-80dc-c3606fd9aa2e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012268654s
Jan 17 15:26:50.872: INFO: Pod "downward-api-761f6f95-91b4-41a7-80dc-c3606fd9aa2e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.018230071s
STEP: Saw pod success
Jan 17 15:26:50.872: INFO: Pod "downward-api-761f6f95-91b4-41a7-80dc-c3606fd9aa2e" satisfied condition "Succeeded or Failed"
Jan 17 15:26:50.877: INFO: Trying to get logs from node k8s-upgrade-and-conformance-lxkik8-worker-dueiad pod downward-api-761f6f95-91b4-41a7-80dc-c3606fd9aa2e container dapi-container: <nil>
STEP: delete the pod
Jan 17 15:26:50.909: INFO: Waiting for pod downward-api-761f6f95-91b4-41a7-80dc-c3606fd9aa2e to disappear
Jan 17 15:26:50.914: INFO: Pod downward-api-761f6f95-91b4-41a7-80dc-c3606fd9aa2e no longer exists
[AfterEach] [sig-node] Downward API
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 17 15:26:50.914: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-842" for this suite.
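For reference, the Downward API spec above injects the container's own resource limits and requests into environment variables via resourceFieldRef and checks the resulting values. A minimal sketch of the pattern; the pod name, image, resource amounts, and variable names are illustrative, and the conformance fixture also covers the remaining limit/request fields:

apiVersion: v1
kind: Pod
metadata:
  name: downward-api-env-demo         # illustrative
spec:
  restartPolicy: Never
  containers:
    - name: dapi-container
      image: busybox:1.36             # placeholder image
      command: ["sh", "-c", "echo CPU_LIMIT=$CPU_LIMIT MEMORY_REQUEST=$MEMORY_REQUEST"]
      resources:
        requests:
          cpu: 250m
          memory: 32Mi
        limits:
          cpu: 500m
          memory: 64Mi
      env:
        - name: CPU_LIMIT
          valueFrom:
            resourceFieldRef:
              containerName: dapi-container
              resource: limits.cpu
        - name: MEMORY_REQUEST
          valueFrom:
            resourceFieldRef:
              containerName: dapi-container
              resource: requests.memory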
•
------------------------------
{"msg":"PASSED [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]","total":-1,"completed":61,"skipped":1309,"failed":1,"failures":["[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]"]}
SSSSS
------------------------------
[BeforeEach] [sig-node] RuntimeClass
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 17 15:26:50.945: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename runtimeclass
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should support RuntimeClasses API operations [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: getting /apis
STEP: getting /apis/node.k8s.io
STEP: getting /apis/node.k8s.io/v1
STEP: creating
STEP: watching
Jan 17 15:26:51.024: INFO: starting watch
STEP: getting
STEP: listing
STEP: patching
STEP: updating
Jan 17 15:26:51.063: INFO: waiting for watch events with expected annotations
STEP: deleting
STEP: deleting a collection
[AfterEach] [sig-node] RuntimeClass
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 17 15:26:51.100: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "runtimeclass-1271" for this suite.
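For reference, the RuntimeClass spec above is a pure API-machinery check: it walks create, get, list, watch, patch, update, delete, and delete-collection against /apis/node.k8s.io/v1. The object itself is tiny; in the sketch below the name is illustrative and the handler is a placeholder that would have to match a handler configured in the node's container runtime:

apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: example-runtimeclass   # illustrative
handler: example-handler       # placeholder; must exist in the CRI runtime config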
•
------------------------------
{"msg":"PASSED [sig-node] RuntimeClass should support RuntimeClasses API operations [Conformance]","total":-1,"completed":62,"skipped":1314,"failed":1,"failures":["[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]"]}
SSSSS
------------------------------
[BeforeEach] [sig-storage] Downward API volume
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 17 15:26:48.089: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41
[It] should update labels on modification [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating the pod
Jan 17 15:26:48.145: INFO: The status of Pod labelsupdate488bcb96-86bb-43c2-bb71-6a7e3d2dcc4d is Pending, waiting for it to be Running (with Ready = true)
Jan 17 15:26:50.152: INFO: The status of Pod labelsupdate488bcb96-86bb-43c2-bb71-6a7e3d2dcc4d is Running (Ready = true)
Jan 17 15:26:50.687: INFO: Successfully updated pod "labelsupdate488bcb96-86bb-43c2-bb71-6a7e3d2dcc4d"
[AfterEach] [sig-storage] Downward API volume
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 17 15:26:52.706: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-531" for this suite.
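For reference, the Downward API volume spec above mounts the pod's labels as a file and then verifies the kubelet rewrites that file after the labels are patched (the "Successfully updated pod" line, followed by a short wait for the refresh). A minimal sketch of the mounting pattern, with illustrative names, image, and labels:

apiVersion: v1
kind: Pod
metadata:
  name: labels-update-demo            # illustrative
  labels:
    key1: value1                      # patching this later changes the mounted file
spec:
  containers:
    - name: client-container
      image: busybox:1.36             # placeholder image
      command: ["sh", "-c", "while true; do cat /etc/podinfo/labels; sleep 5; done"]
      volumeMounts:
        - name: podinfo
          mountPath: /etc/podinfo
  volumes:
    - name: podinfo
      downwardAPI:
        items:
          - path: labels
            fieldRef:
              fieldPath: metadata.labels   # kubelet refreshes this on label changes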
•
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance]","total":-1,"completed":44,"skipped":1064,"failed":2,"failures":["[sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance]","[sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance]"]}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-storage] Projected downwardAPI
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 17 15:26:51.125: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41
[It] should provide container's cpu request [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating a pod to test downward API volume plugin
Jan 17 15:26:51.161: INFO: Waiting up to 5m0s for pod "downwardapi-volume-6d607c41-afc1-423d-88c1-42f0927608ef" in namespace "projected-829" to be "Succeeded or Failed"
Jan 17 15:26:51.166: INFO: Pod "downwardapi-volume-6d607c41-afc1-423d-88c1-42f0927608ef": Phase="Pending", Reason="", readiness=false. Elapsed: 5.420772ms
Jan 17 15:26:53.174: INFO: Pod "downwardapi-volume-6d607c41-afc1-423d-88c1-42f0927608ef": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013332313s
Jan 17 15:26:55.182: INFO: Pod "downwardapi-volume-6d607c41-afc1-423d-88c1-42f0927608ef": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.02103214s
STEP: Saw pod success
Jan 17 15:26:55.182: INFO: Pod "downwardapi-volume-6d607c41-afc1-423d-88c1-42f0927608ef" satisfied condition "Succeeded or Failed"
Jan 17 15:26:55.187: INFO: Trying to get logs from node k8s-upgrade-and-conformance-lxkik8-worker-dueiad pod downwardapi-volume-6d607c41-afc1-423d-88c1-42f0927608ef container client-container: <nil>
STEP: delete the pod
Jan 17 15:26:55.215: INFO: Waiting for pod downwardapi-volume-6d607c41-afc1-423d-88c1-42f0927608ef to disappear
Jan 17 15:26:55.218: INFO: Pod downwardapi-volume-6d607c41-afc1-423d-88c1-42f0927608ef no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 17 15:26:55.218: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-829" for this suite.
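For reference, the Projected downwardAPI spec above exposes the container's CPU request through a projected volume and reads the file back. A minimal sketch of the pattern; names, image, and the 250m request are illustrative, and the 1m divisor makes the mounted file carry the request in millicores:

apiVersion: v1
kind: Pod
metadata:
  name: projected-cpu-request-demo    # illustrative
spec:
  restartPolicy: Never
  containers:
    - name: client-container
      image: busybox:1.36             # placeholder image
      command: ["sh", "-c", "cat /etc/podinfo/cpu_request"]
      resources:
        requests:
          cpu: 250m
      volumeMounts:
        - name: podinfo
          mountPath: /etc/podinfo
  volumes:
    - name: podinfo
      projected:
        sources:
          - downwardAPI:
              items:
                - path: cpu_request
                  resourceFieldRef:
                    containerName: client-container
                    resource: requests.cpu
                    divisor: 1m       # file contains "250" for a 250m request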
•
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance]","total":-1,"completed":63,"skipped":1319,"failed":1,"failures":["[sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]"]}
SSSSSSSSSSS
------------------------------
[BeforeEach] [sig-storage] Projected downwardAPI
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Jan 17 15:26:52.804: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41
[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating a pod to test downward API volume plugin
Jan 17 15:26:52.872: INFO: Waiting up to 5m0s for pod "downwardapi-volume-9913752f-1844-48e5-b772-f9e8d09b74b1" in namespace "projected-9292" to be "Succeeded or Failed"
Jan 17 15:26:52.882: INFO: Pod "downwardapi-volume-9913752f-1844-48e5-b772-f9e8d09b74b1": Phase="Pending", Reason="", readiness=false. Elapsed: 9.641013ms
Jan 17 15:26:54.889: INFO: Pod "downwardapi-volume-9913752f-1844-48e5-b772-f9e8d09b74b1": Phase="Running", Reason="", readiness=true. Elapsed: 2.016714341s
Jan 17 15:26:56.900: INFO: Pod "downwardapi-volume-9913752f-1844-48e5-b772-f9e8d09b74b1": Phase="Running", Reason="", readiness=false. Elapsed: 4.02827035s
Jan 17 15:26:58.911: INFO: Pod "downwardapi-volume-9913752f-1844-48e5-b772-f9e8d09b74b1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.038873301s
STEP: Saw pod success
Jan 17 15:26:58.911: INFO: Pod "downwardapi-volume-9913752f-1844-48e5-b772-f9e8d09b74b1" satisfied condition "Succeeded or Failed"
Jan 17 15:26:58.917: INFO: Trying to get logs from node k8s-upgrade-and-conformance-lxkik8-md-0-g974z-f76d7886c-xqvh2 pod downwardapi-volume-9913752f-1844-48e5-b772-f9e8d09b74b1 container client-container: <nil>
STEP: delete the pod
Jan 17 15:26:58.944: INFO: Waiting for pod downwardapi-volume-9913752f-1844-48e5-b772-f9e8d09b74b1 to disappear
Jan 17 15:26:58.951: INFO: Pod downwardapi-volume-9913752f-1844-48e5-b772-f9e8d09b74b1 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Jan 17 15:26:58.951: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9292" for this suite.
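For reference, the spec above sets an explicit per-item file mode in a projected downwardAPI volume and verifies the mounted file carries it. A minimal sketch; the pod name, image, projected field, and the 0400 mode are illustrative:

apiVersion: v1
kind: Pod
metadata:
  name: projected-item-mode-demo      # illustrative
spec:
  restartPolicy: Never
  containers:
    - name: client-container
      image: busybox:1.36             # placeholder image
      command: ["sh", "-c", "ls -l /etc/podinfo/podname && cat /etc/podinfo/podname"]
      volumeMounts:
        - name: podinfo
          mountPath: /etc/podinfo
  volumes:
    - name: podinfo
      projected:
        sources:
          - downwardAPI:
              items:
                - path: podname
                  fieldRef:
                    fieldPath: metadata.name
                  mode: 0400          # octal; applied to this item's file only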
•
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":45,"skipped":1084,"failed":2,"failures":["[sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance]","[sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance]"]}
SSS