Result   | FAILURE
Tests    | 1 failed / 0 succeeded
Started  |
Elapsed  | 1h59m
Revision | release-1.1
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=capi\-e2e\sWhen\supgrading\sa\sworkload\scluster\sand\stesting\sK8S\sconformance\s\[Conformance\]\s\[K8s\-Upgrade\]\sShould\screate\sand\supgrade\sa\sworkload\scluster\sand\srun\skubetest$'
/home/prow/go/src/sigs.k8s.io/cluster-api/test/e2e/cluster_upgrade.go:115
Failed to run Kubernetes conformance
Unexpected error:
    <*errors.withStack | 0xc001c88c60>: {
        error: <*errors.withMessage | 0xc0006a04e0>{
            cause: <*errors.errorString | 0xc001092c90>{
                s: "error container run failed with exit code 1",
            },
            msg: "Unable to run conformance tests",
        },
        stack: [0x1ad2fea, 0x1b134a8, 0x73c2fa, 0x73bcc5, 0x73b3bb, 0x741149, 0x740b27, 0x761fe5, 0x761d05, 0x761545, 0x7637f2, 0x76f9a5, 0x76f7be, 0x1b2de51, 0x5156c2, 0x46b2c1],
    }
Unable to run conformance tests: error container run failed with exit code 1
occurred
/home/prow/go/src/sigs.k8s.io/cluster-api/test/e2e/cluster_upgrade.go:232
STEP: Creating a namespace for hosting the "k8s-upgrade-and-conformance" test spec
INFO: Creating namespace k8s-upgrade-and-conformance-ee37ij
INFO: Creating event watcher for namespace "k8s-upgrade-and-conformance-ee37ij"
STEP: Creating a workload cluster
INFO: Creating the workload cluster with name "k8s-upgrade-and-conformance-7gf7we" using the "upgrades" template (Kubernetes v1.22.9, 1 control-plane machines, 2 worker machines)
INFO: Getting the cluster template yaml
INFO: clusterctl config cluster k8s-upgrade-and-conformance-7gf7we --infrastructure (default) --kubernetes-version v1.22.9 --control-plane-machine-count 1 --worker-machine-count 2 --flavor upgrades
INFO: Applying the cluster template yaml to the cluster
configmap/cni-k8s-upgrade-and-conformance-7gf7we-crs-0 created
clusterresourceset.addons.cluster.x-k8s.io/k8s-upgrade-and-conformance-7gf7we-crs-0 created
kubeadmconfig.bootstrap.cluster.x-k8s.io/k8s-upgrade-and-conformance-7gf7we-mp-0-config created
kubeadmconfigtemplate.bootstrap.cluster.x-k8s.io/k8s-upgrade-and-conformance-7gf7we-md-0 created
cluster.cluster.x-k8s.io/k8s-upgrade-and-conformance-7gf7we created
machinedeployment.cluster.x-k8s.io/k8s-upgrade-and-conformance-7gf7we-md-0 created
machinepool.cluster.x-k8s.io/k8s-upgrade-and-conformance-7gf7we-mp-0 created
kubeadmcontrolplane.controlplane.cluster.x-k8s.io/k8s-upgrade-and-conformance-7gf7we-control-plane created
dockercluster.infrastructure.cluster.x-k8s.io/k8s-upgrade-and-conformance-7gf7we created
dockermachinepool.infrastructure.cluster.x-k8s.io/k8s-upgrade-and-conformance-7gf7we-dmp-0 created
dockermachinetemplate.infrastructure.cluster.x-k8s.io/k8s-upgrade-and-conformance-7gf7we-control-plane created
dockermachinetemplate.infrastructure.cluster.x-k8s.io/k8s-upgrade-and-conformance-7gf7we-md-0 created
INFO: Waiting for the cluster infrastructure to be provisioned
STEP: Waiting for cluster to enter the provisioned phase
INFO: Waiting for control plane to be initialized
INFO: Waiting for the first control plane machine managed by k8s-upgrade-and-conformance-ee37ij/k8s-upgrade-and-conformance-7gf7we-control-plane to be provisioned
STEP: Waiting for one control plane node to exist
INFO: Waiting for control plane to be ready
INFO: Waiting for control plane k8s-upgrade-and-conformance-ee37ij/k8s-upgrade-and-conformance-7gf7we-control-plane to be ready (implies underlying nodes to be ready as well)
STEP: Waiting for the control plane to be ready
INFO: Waiting for the machine deployments to be provisioned
STEP: Waiting for the workload nodes to exist
INFO: Waiting for the machine pools to be provisioned
STEP: Waiting for the machine pool workload nodes
STEP: Upgrading the Kubernetes control-plane
INFO: Patching the new kubernetes version to KCP
INFO: Waiting for control-plane machines to have the upgraded kubernetes version
STEP: Ensuring all control-plane machines have upgraded kubernetes version v1.23.6
INFO: Waiting for kube-proxy to have the upgraded kubernetes version
STEP: Ensuring kube-proxy has the correct image
INFO: Waiting for CoreDNS to have the upgraded image tag
STEP: Ensuring CoreDNS has the correct image
INFO: Waiting for etcd to have the upgraded image tag
STEP: Upgrading the machine deployment
INFO: Patching the new kubernetes version to Machine Deployment k8s-upgrade-and-conformance-ee37ij/k8s-upgrade-and-conformance-7gf7we-md-0
INFO: Waiting for Kubernetes versions of machines in MachineDeployment k8s-upgrade-and-conformance-ee37ij/k8s-upgrade-and-conformance-7gf7we-md-0 to be upgraded from v1.22.9 to v1.23.6
INFO: Ensuring all MachineDeployment Machines have upgraded kubernetes version v1.23.6
STEP: Upgrading the machinepool instances
INFO: Patching the new Kubernetes version to Machine Pool k8s-upgrade-and-conformance-ee37ij/k8s-upgrade-and-conformance-7gf7we-mp-0
INFO: Waiting for Kubernetes versions of machines in MachinePool k8s-upgrade-and-conformance-ee37ij/k8s-upgrade-and-conformance-7gf7we-mp-0 to be upgraded from v1.22.9 to v1.23.6
INFO: Ensuring all MachinePool Instances have upgraded kubernetes version v1.23.6
STEP: Waiting until nodes are ready
STEP: Running conformance tests
STEP: Running e2e test: dir=/home/prow/go/src/sigs.k8s.io/cluster-api/test/e2e, command=["-nodes=4" "-slowSpecThreshold=120" "/usr/local/bin/e2e.test" "--" "--report-dir=/output" "--e2e-output-dir=/output/e2e-output" "--dump-logs-on-failure=false" "--report-prefix=kubetest." "--num-nodes=4" "--kubeconfig=/tmp/kubeconfig" "--provider=skeleton" "-ginkgo.skip=\\[Serial\\]" "-ginkgo.slowSpecThreshold=120" "-ginkgo.trace=true" "-ginkgo.v=true" "-disable-log-dump=true" "-ginkgo.flakeAttempts=3" "-ginkgo.focus=\\[Conformance\\]" "-ginkgo.progress=true"]
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1650635145 - Will randomize all specs
Will run 7044 specs
Running in parallel across 4 nodes
Apr 22 13:45:49.225: INFO: >>> kubeConfig: /tmp/kubeconfig
Apr 22 13:45:49.226: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Apr 22 13:45:49.238: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Apr 22 13:45:49.263: INFO: 20 / 20 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Apr 22 13:45:49.263: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
Apr 22 13:45:49.263: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Apr 22 13:45:49.267: INFO: 5 / 5 pods ready in namespace 'kube-system' in daemonset 'kindnet' (0 seconds elapsed)
Apr 22 13:45:49.267: INFO: 5 / 5 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Apr 22 13:45:49.267: INFO: e2e test version: v1.23.6
Apr 22 13:45:49.269: INFO: kube-apiserver version: v1.23.6
Apr 22 13:45:49.269: INFO: >>> kubeConfig: /tmp/kubeconfig
Apr 22 13:45:49.273: INFO: Cluster IP family: ipv4
------------------------------
Apr 22 13:45:49.276: INFO: >>> kubeConfig: /tmp/kubeconfig
Apr 22 13:45:49.291: INFO: Cluster IP family: ipv4
------------------------------
Apr 22 13:45:49.299: INFO: >>> kubeConfig: /tmp/kubeconfig
Apr 22 13:45:49.315: INFO: Cluster IP family: ipv4
------------------------------
Apr 22 13:45:49.305: INFO: >>> kubeConfig: /tmp/kubeconfig
Apr 22 13:45:49.321: INFO: Cluster IP family: ipv4
------------------------------
[BeforeEach] [sig-auth] ServiceAccounts
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 22 13:45:49.343: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename svcaccounts
W0422 13:45:49.370957 19 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Apr 22 13:45:49.371: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should run through the lifecycle of a ServiceAccount [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: creating a ServiceAccount
STEP: watching for the ServiceAccount to be added
STEP: patching the ServiceAccount
STEP: finding ServiceAccount in list of all ServiceAccounts (by LabelSelector)
STEP: deleting the ServiceAccount
[AfterEach] [sig-auth] ServiceAccounts
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 22 13:45:49.480: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-8176" for this suite.
•
------------------------------
{"msg":"PASSED [sig-auth] ServiceAccounts should run through the lifecycle of a ServiceAccount [Conformance]","total":-1,"completed":1,"skipped":10,"failed":0}
------------------------------
[BeforeEach] [sig-api-machinery] Garbage collector
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 22 13:45:49.296: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename gc
W0422 13:45:49.333221 17 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Apr 22 13:45:49.333: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should delete RS created by deployment when not orphaning [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for all rs to be garbage collected
STEP: expected 0 rs, got 1 rs
STEP: expected 0 pods, got 2 pods
STEP: Gathering metrics
Apr 22 13:45:50.409: INFO: The status of Pod kube-controller-manager-k8s-upgrade-and-conformance-7gf7we-control-plane-jsdzr is Running (Ready = true)
Apr 22 13:45:50.533: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 22 13:45:50.533: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-8656" for this suite.
•
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance]","total":-1,"completed":1,"skipped":2,"failed":0}
------------------------------
[BeforeEach] [sig-apps] ReplicationController
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 22 13:45:50.594: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-apps] ReplicationController
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:54
[It] should surface a failure condition on a common issue like exceeded quota [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
Apr 22 13:45:50.628: INFO: Creating quota "condition-test" that allows only two pods to run in the current namespace
STEP: Creating rc "condition-test" that asks for more than the allowed pod quota
STEP: Checking rc "condition-test" has the desired failure condition set
STEP: Scaling down rc "condition-test" to satisfy pod quota
Apr 22 13:45:51.685: INFO: Updating replication controller "condition-test"
STEP: Checking rc "condition-test" has no failure condition set
[AfterEach] [sig-apps] ReplicationController
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 22 13:45:51.694: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-8963" for this suite.
•
------------------------------
{"msg":"PASSED [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance]","total":-1,"completed":2,"skipped":32,"failed":0}
------------------------------
[BeforeEach] [sig-network] EndpointSliceMirroring
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 22 13:45:49.332: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename endpointslicemirroring
W0422 13:45:49.357800 21 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Apr 22 13:45:49.358: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-network] EndpointSliceMirroring
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/endpointslicemirroring.go:39
[It] should mirror a custom Endpoints resource through create update and delete [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: mirroring a new custom Endpoint
Apr 22 13:45:49.421: INFO: Waiting for at least 1 EndpointSlice to exist, got 0
STEP: mirroring an update to a custom Endpoint
Apr 22 13:45:51.437: INFO: Expected EndpointSlice to have 10.2.3.4 as address, got 10.1.2.3
STEP: mirroring deletion of a custom Endpoint
Apr 22 13:45:53.454: INFO: Waiting for 0 EndpointSlices to exist, got 1
[AfterEach] [sig-network] EndpointSliceMirroring
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 22 13:45:55.570: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "endpointslicemirroring-8122" for this suite.
•
------------------------------
{"msg":"PASSED [sig-network] EndpointSliceMirroring should mirror a custom Endpoints resource through create update and delete [Conformance]","total":-1,"completed":1,"skipped":27,"failed":0}
------------------------------
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 22 13:45:49.583: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41
[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating a pod to test downward API volume plugin
Apr 22 13:45:49.621: INFO: Waiting up to 5m0s for pod "downwardapi-volume-7937aec7-3e43-418e-9a63-33053c33b2ed" in namespace "downward-api-5588" to be "Succeeded or Failed"
Apr 22 13:45:49.630: INFO: Pod "downwardapi-volume-7937aec7-3e43-418e-9a63-33053c33b2ed": Phase="Pending", Reason="", readiness=false. Elapsed: 9.137228ms
Apr 22 13:45:51.637: INFO: Pod "downwardapi-volume-7937aec7-3e43-418e-9a63-33053c33b2ed": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015864791s
Apr 22 13:45:53.643: INFO: Pod "downwardapi-volume-7937aec7-3e43-418e-9a63-33053c33b2ed": Phase="Pending", Reason="", readiness=false. Elapsed: 4.022145381s
Apr 22 13:45:55.808: INFO: Pod "downwardapi-volume-7937aec7-3e43-418e-9a63-33053c33b2ed": Phase="Pending", Reason="", readiness=false. Elapsed: 6.187330573s
Apr 22 13:45:57.815: INFO: Pod "downwardapi-volume-7937aec7-3e43-418e-9a63-33053c33b2ed": Phase="Pending", Reason="", readiness=false. Elapsed: 8.194467737s
Apr 22 13:45:59.934: INFO: Pod "downwardapi-volume-7937aec7-3e43-418e-9a63-33053c33b2ed": Phase="Pending", Reason="", readiness=false. Elapsed: 10.312930933s
Apr 22 13:46:01.938: INFO: Pod "downwardapi-volume-7937aec7-3e43-418e-9a63-33053c33b2ed": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.317496676s
STEP: Saw pod success
Apr 22 13:46:01.938: INFO: Pod "downwardapi-volume-7937aec7-3e43-418e-9a63-33053c33b2ed" satisfied condition "Succeeded or Failed"
Apr 22 13:46:01.941: INFO: Trying to get logs from node k8s-upgrade-and-conformance-7gf7we-worker-3u7awl pod downwardapi-volume-7937aec7-3e43-418e-9a63-33053c33b2ed container client-container: <nil>
STEP: delete the pod
Apr 22 13:46:01.981: INFO: Waiting for pod downwardapi-volume-7937aec7-3e43-418e-9a63-33053c33b2ed to disappear
Apr 22 13:46:01.984: INFO: Pod downwardapi-volume-7937aec7-3e43-418e-9a63-33053c33b2ed no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 22 13:46:01.984: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-5588" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":2,"skipped":52,"failed":0}
------------------------------
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 22 13:45:49.326: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename crd-publish-openapi
W0422 13:45:49.353197 16 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
Apr 22 13:45:49.354: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] works for multiple CRDs of different groups [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: CRs in different groups (two CRDs) show up in OpenAPI documentation
Apr 22 13:45:49.367: INFO: >>> kubeConfig: /tmp/kubeconfig
Apr 22 13:45:52.061: INFO: >>> kubeConfig: /tmp/kubeconfig
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 22 13:46:02.748: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-868" for this suite.
•
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","total":-1,"completed":1,"skipped":2,"failed":0}
------------------------------
[BeforeEach] [sig-node] InitContainer [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 22 13:45:55.652: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-node] InitContainer [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/init_container.go:162
[It] should invoke init containers on a RestartNever pod [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: creating the pod
Apr 22 13:45:55.878: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [sig-node] InitContainer [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 22 13:46:02.759: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-5062" for this suite.
•
------------------------------
{"msg":"PASSED [sig-node] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance]","total":-1,"completed":2,"skipped":55,"failed":0}
------------------------------
[BeforeEach] [sig-apps] Job
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 22 13:46:02.001: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should adopt matching orphans and release non-matching pods [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating a job
STEP: Ensuring active pods == parallelism
STEP: Orphaning one of the Job's Pods
Apr 22 13:46:04.556: INFO: Successfully updated pod "adopt-release-jf9gf"
STEP: Checking that the Job readopts the Pod
Apr 22 13:46:04.556: INFO: Waiting up to 15m0s for pod "adopt-release-jf9gf" in namespace "job-3110" to be "adopted"
Apr 22 13:46:04.560: INFO: Pod "adopt-release-jf9gf": Phase="Running", Reason="", readiness=true. Elapsed: 3.944251ms
Apr 22 13:46:06.565: INFO: Pod "adopt-release-jf9gf": Phase="Running", Reason="", readiness=true. Elapsed: 2.008912867s
Apr 22 13:46:06.565: INFO: Pod "adopt-release-jf9gf" satisfied condition "adopted"
STEP: Removing the labels from the Job's Pod
Apr 22 13:46:07.077: INFO: Successfully updated pod "adopt-release-jf9gf"
STEP: Checking that the Job releases the Pod
Apr 22 13:46:07.077: INFO: Waiting up to 15m0s for pod "adopt-release-jf9gf" in namespace "job-3110" to be "released"
Apr 22 13:46:07.081: INFO: Pod "adopt-release-jf9gf": Phase="Running", Reason="", readiness=true. Elapsed: 3.919785ms
Apr 22 13:46:09.085: INFO: Pod "adopt-release-jf9gf": Phase="Running", Reason="", readiness=true. Elapsed: 2.007768767s
Apr 22 13:46:09.085: INFO: Pod "adopt-release-jf9gf" satisfied condition "released"
[AfterEach] [sig-apps] Job
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 22 13:46:09.085: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-3110" for this suite.
•
------------------------------
{"msg":"PASSED [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance]","total":-1,"completed":3,"skipped":57,"failed":0}
------------------------------
[BeforeEach] [sig-apps] Job
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 22 13:46:02.939: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating a job
STEP: Ensuring job reaches completions
[AfterEach] [sig-apps] Job
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 22 13:46:12.975: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-7553" for this suite.
•
------------------------------
{"msg":"PASSED [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]","total":-1,"completed":2,"skipped":82,"failed":0}
------------------------------
[BeforeEach] [sig-network] EndpointSlice
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 22 13:46:12.999: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename endpointslice
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-network] EndpointSlice
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/endpointslice.go:49
[It] should support creating EndpointSlice API operations [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: getting /apis
STEP: getting /apis/discovery.k8s.io
STEP: getting /apis/discovery.k8s.io/v1
STEP: creating
STEP: getting
STEP: listing
STEP: watching
Apr 22 13:46:13.046: INFO: starting watch
STEP: cluster-wide listing
STEP: cluster-wide watching
Apr 22 13:46:13.051: INFO: starting watch
STEP: patching
STEP: updating
Apr 22 13:46:13.065: INFO: waiting for watch events with expected annotations
Apr 22 13:46:13.065: INFO: saw patched and updated annotations
STEP: deleting
STEP: deleting a collection
[AfterEach] [sig-network] EndpointSlice
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 22 13:46:13.087: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "endpointslice-9855" for this suite.
•
------------------------------
{"msg":"PASSED [sig-network] EndpointSlice should support creating EndpointSlice API operations [Conformance]","total":-1,"completed":3,"skipped":91,"failed":0}
------------------------------
[BeforeEach] [sig-node] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 22 13:46:09.100: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should be consumable via environment variable [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating configMap configmap-4013/configmap-test-afc8e3d2-4c5b-4459-b0dd-015bbbeeec3e
STEP: Creating a pod to test consume configMaps
Apr 22 13:46:09.135: INFO: Waiting up to 5m0s for pod "pod-configmaps-ba49b0d2-9ce3-4a17-963c-c055d3f5f4bb" in namespace "configmap-4013" to be "Succeeded or Failed"
Apr 22 13:46:09.138: INFO: Pod "pod-configmaps-ba49b0d2-9ce3-4a17-963c-c055d3f5f4bb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.823538ms
Apr 22 13:46:11.142: INFO: Pod "pod-configmaps-ba49b0d2-9ce3-4a17-963c-c055d3f5f4bb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006953556s
Apr 22 13:46:13.146: INFO: Pod "pod-configmaps-ba49b0d2-9ce3-4a17-963c-c055d3f5f4bb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010148953s
STEP: Saw pod success
Apr 22 13:46:13.146: INFO: Pod "pod-configmaps-ba49b0d2-9ce3-4a17-963c-c055d3f5f4bb" satisfied condition "Succeeded or Failed"
Apr 22 13:46:13.148: INFO: Trying to get logs from node k8s-upgrade-and-conformance-7gf7we-worker-wwgoid pod pod-configmaps-ba49b0d2-9ce3-4a17-963c-c055d3f5f4bb container env-test: <nil>
STEP: delete the pod
Apr 22 13:46:13.177: INFO: Waiting for pod pod-configmaps-ba49b0d2-9ce3-4a17-963c-c055d3f5f4bb to disappear
Apr 22 13:46:13.181: INFO: Pod pod-configmaps-ba49b0d2-9ce3-4a17-963c-c055d3f5f4bb no longer exists
[AfterEach] [sig-node] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 22 13:46:13.181: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-4013" for this suite.
•
------------------------------
{"msg":"PASSED [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance]","total":-1,"completed":4,"skipped":61,"failed":0}
------------------------------
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 22 13:45:51.752: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:749
[It] should be able to change the type from ExternalName to ClusterIP [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: creating a service externalname-service with the type=ExternalName in namespace services-5634
STEP: changing the ExternalName service to type=ClusterIP
STEP: creating replication controller externalname-service in namespace services-5634
I0422 13:45:51.850331      17 runners.go:193] Created replication controller with name: externalname-service, namespace: services-5634, replica count: 2
I0422 13:45:54.904410      17 runners.go:193] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0422 13:45:57.904955      17 runners.go:193] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
Apr 22 13:45:57.905: INFO: Creating new exec pod
Apr 22 13:46:02.935: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-5634 exec execpodd4n9b -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalname-service 80'
Apr 22 13:46:08.355: INFO: rc: 1
Apr 22 13:46:08.355: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-5634 exec execpodd4n9b -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalname-service 80:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 externalname-service 80
nc: getaddrinfo: Try again
command terminated with exit code 1

error:
exit status 1
Retrying...
Apr 22 13:46:09.355: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-5634 exec execpodd4n9b -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalname-service 80'
Apr 22 13:46:09.517: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 externalname-service 80\nConnection to externalname-service 80 port [tcp/http] succeeded!\n"
Apr 22 13:46:09.517: INFO: stdout: ""
Apr 22 13:46:10.355: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-5634 exec execpodd4n9b -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalname-service 80'
Apr 22 13:46:15.534: INFO: rc: 1
Apr 22 13:46:15.534: INFO: Service reachability failing with error: error running /usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-5634 exec execpodd4n9b -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalname-service 80:
Command stdout:

stderr:
+ echo hostName
+ nc -v -t -w 2 externalname-service 80
nc: getaddrinfo: Try again
command terminated with exit code 1

error:
exit status 1
Retrying...
Apr 22 13:46:16.356: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-5634 exec execpodd4n9b -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalname-service 80'
Apr 22 13:46:16.514: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 externalname-service 80\nConnection to externalname-service 80 port [tcp/http] succeeded!\n"
Apr 22 13:46:16.515: INFO: stdout: "externalname-service-2rzgf"
Apr 22 13:46:16.515: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-5634 exec execpodd4n9b -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.133.24.199 80'
Apr 22 13:46:16.678: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 10.133.24.199 80\nConnection to 10.133.24.199 80 port [tcp/http] succeeded!\n"
Apr 22 13:46:16.679: INFO: stdout: "externalname-service-2rzgf"
Apr 22 13:46:16.679: INFO: Cleaning up the ExternalName to ClusterIP test service
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 22 13:46:16.702: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-5634" for this suite.
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:753
•
------------------------------
{"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","total":-1,"completed":3,"skipped":52,"failed":0}
------------------------------
[BeforeEach] [sig-storage] Projected configMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 22 13:46:13.196: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating configMap with name projected-configmap-test-volume-2a9b05a4-80e1-4615-abba-8965bcfcd6af
STEP: Creating a pod to test consume configMaps
Apr 22 13:46:13.229: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-f9275792-fc96-481b-a160-d97b88c20a67" in namespace "projected-4648" to be "Succeeded or Failed"
Apr 22 13:46:13.231: INFO: Pod "pod-projected-configmaps-f9275792-fc96-481b-a160-d97b88c20a67": Phase="Pending", Reason="", readiness=false. Elapsed: 2.761555ms
Apr 22 13:46:15.236: INFO: Pod "pod-projected-configmaps-f9275792-fc96-481b-a160-d97b88c20a67": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00702797s
Apr 22 13:46:17.241: INFO: Pod "pod-projected-configmaps-f9275792-fc96-481b-a160-d97b88c20a67": Phase="Pending", Reason="", readiness=false. Elapsed: 4.012260305s
Apr 22 13:46:19.245: INFO: Pod "pod-projected-configmaps-f9275792-fc96-481b-a160-d97b88c20a67": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.016203961s
STEP: Saw pod success
Apr 22 13:46:19.245: INFO: Pod "pod-projected-configmaps-f9275792-fc96-481b-a160-d97b88c20a67" satisfied condition "Succeeded or Failed"
Apr 22 13:46:19.248: INFO: Trying to get logs from node k8s-upgrade-and-conformance-7gf7we-md-0-7868f4c856-k4p2k pod pod-projected-configmaps-f9275792-fc96-481b-a160-d97b88c20a67 container projected-configmap-volume-test: <nil>
STEP: delete the pod
Apr 22 13:46:19.274: INFO: Waiting for pod pod-projected-configmaps-f9275792-fc96-481b-a160-d97b88c20a67 to disappear
Apr 22 13:46:19.277: INFO: Pod pod-projected-configmaps-f9275792-fc96-481b-a160-d97b88c20a67 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 22 13:46:19.277: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4648" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":-1,"completed":5,"skipped":63,"failed":0}
------------------------------
[BeforeEach] [sig-network] DNS
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 22 13:46:16.770: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-4287.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.dns-4287.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-4287.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.dns-4287.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;sleep 1; done
STEP: creating a pod to probe /etc/hosts
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Apr 22 13:46:24.825: INFO: DNS probes using dns-4287/dns-test-d2205d21-0626-47d8-b125-0987f94baf1c succeeded
STEP: deleting the pod
[AfterEach] [sig-network] DNS
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 22 13:46:24.835: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-4287" for this suite.
•
------------------------------
{"msg":"PASSED [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]","total":-1,"completed":4,"skipped":83,"failed":0}
------------------------------
[BeforeEach] [sig-node] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 22 13:46:24.880: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should fail to create ConfigMap with empty key [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating configMap that has name configmap-test-emptyKey-8ed7ef58-148f-4f4d-8580-d8c4101bb609
[AfterEach] [sig-node] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 22 13:46:24.904: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-1981" for this suite.
•
------------------------------
{"msg":"PASSED [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance]","total":-1,"completed":5,"skipped":109,"failed":0}
------------------------------
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 22 13:46:19.302: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] listing custom resource definition objects works [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
Apr 22 13:46:19.326: INFO: >>> kubeConfig: /tmp/kubeconfig
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 22 13:46:25.628: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-7437" for this suite.
•
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works [Conformance]","total":-1,"completed":6,"skipped":73,"failed":0}
------------------------------
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 22 13:46:24.957: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:749
[It] should be able to change the type from NodePort to ExternalName [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: creating a service nodeport-service with the type=NodePort in namespace services-3938
STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service
STEP: creating service externalsvc in namespace services-3938
STEP: creating replication controller externalsvc in namespace services-3938
I0422 13:46:25.064473      17 runners.go:193] Created replication controller with name: externalsvc, namespace: services-3938, replica count: 2
I0422 13:46:28.116681      17 runners.go:193] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
STEP: changing the NodePort service to type=ExternalName
Apr 22 13:46:28.137: INFO: Creating new exec pod
Apr 22 13:46:30.162: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-3938 exec execpodjpqfp -- /bin/sh -x -c nslookup nodeport-service.services-3938.svc.cluster.local'
Apr 22 13:46:30.338: INFO: stderr: "+ nslookup nodeport-service.services-3938.svc.cluster.local\n"
Apr 22 13:46:30.338: INFO: stdout: "Server:\t\t10.128.0.10\nAddress:\t10.128.0.10#53\n\nnodeport-service.services-3938.svc.cluster.local\tcanonical name = externalsvc.services-3938.svc.cluster.local.\nName:\texternalsvc.services-3938.svc.cluster.local\nAddress: 10.139.239.18\n\n"
STEP: deleting ReplicationController externalsvc in namespace services-3938, will wait for the garbage collector to delete the pods
Apr 22 13:46:30.397: INFO: Deleting ReplicationController externalsvc took: 5.041312ms
Apr 22 13:46:30.497: INFO: Terminating ReplicationController externalsvc pods took: 100.563284ms
Apr 22 13:46:32.121: INFO: Cleaning up the NodePort to ExternalName test service
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 22 13:46:32.143: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-3938" for this suite.
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:753
•
------------------------------
{"msg":"PASSED [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance]","total":-1,"completed":6,"skipped":135,"failed":0}
------------------------------
[BeforeEach] [sig-instrumentation] Events API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 22 13:46:32.168: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename events
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-instrumentation] Events API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/instrumentation/events.go:81
[It] should ensure that an event can be fetched, patched, deleted, and listed [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: creating a test event
STEP: listing events in all namespaces
STEP: listing events in test namespace
STEP: listing events with field selection filtering on source
STEP: listing events with field selection filtering on reportingController
STEP: getting the test event
STEP: patching the test event
STEP: getting the test event
STEP: updating the test event
STEP: getting the test event
STEP: deleting the test event
STEP: listing events in all namespaces
STEP: listing events in test namespace
[AfterEach] [sig-instrumentation] Events API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 22 13:46:32.286: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "events-7683" for this suite.
•
------------------------------
{"msg":"PASSED [sig-instrumentation] Events API should ensure that an event can be fetched, patched, deleted, and listed [Conformance]","total":-1,"completed":7,"skipped":141,"failed":0}
------------------------------
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 22 13:46:32.364: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename security-context
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should support container.SecurityContext.RunAsUser And container.SecurityContext.RunAsGroup [LinuxOnly] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating a pod to test pod.Spec.SecurityContext.RunAsUser
Apr 22 13:46:32.391: INFO: Waiting up to 5m0s for pod "security-context-e4f9a929-9631-4f3b-ae0a-21e2779a86d1" in namespace "security-context-8248" to be "Succeeded or Failed"
Apr 22 13:46:32.394: INFO: Pod "security-context-e4f9a929-9631-4f3b-ae0a-21e2779a86d1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.86265ms
Apr 22 13:46:34.398: INFO: Pod "security-context-e4f9a929-9631-4f3b-ae0a-21e2779a86d1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007211102s
Apr 22 13:46:36.403: INFO: Pod "security-context-e4f9a929-9631-4f3b-ae0a-21e2779a86d1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011923265s
STEP: Saw pod success
Apr 22 13:46:36.403: INFO: Pod "security-context-e4f9a929-9631-4f3b-ae0a-21e2779a86d1" satisfied condition "Succeeded or Failed"
Apr 22 13:46:36.406: INFO: Trying to get logs from node k8s-upgrade-and-conformance-7gf7we-md-0-7868f4c856-kmb2d pod security-context-e4f9a929-9631-4f3b-ae0a-21e2779a86d1 container test-container: <nil>
STEP: delete the pod
Apr 22 13:46:36.430: INFO: Waiting for pod security-context-e4f9a929-9631-4f3b-ae0a-21e2779a86d1 to disappear
Apr 22 13:46:36.433: INFO: Pod security-context-e4f9a929-9631-4f3b-ae0a-21e2779a86d1 no longer exists
[AfterEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 22 13:46:36.433: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-8248" for this suite.
•
------------------------------
{"msg":"PASSED [sig-node] Security Context should support container.SecurityContext.RunAsUser And container.SecurityContext.RunAsGroup [LinuxOnly] [Conformance]","total":-1,"completed":8,"skipped":184,"failed":0}
------------------------------
[BeforeEach] [sig-network] Ingress API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 22 13:46:36.540: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename ingress
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should support creating Ingress API operations [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: getting /apis
STEP: getting /apis/networking.k8s.io
STEP: getting /apis/networking.k8s.io/v1
STEP: creating
STEP: getting
STEP: listing
STEP: watching
Apr 22 13:46:36.591: INFO: starting watch
STEP: cluster-wide listing
STEP: cluster-wide watching
Apr 22 13:46:36.596: INFO: starting watch
STEP: patching
STEP: updating
Apr 22 13:46:36.609: INFO: waiting for watch events with expected annotations
Apr 22 13:46:36.609: INFO: saw patched and updated annotations
STEP: patching /status
STEP: updating /status
STEP: get /status
STEP: deleting
STEP: deleting a collection
[AfterEach] [sig-network] Ingress API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 22 13:46:36.643: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "ingress-7958" for this suite.
•
------------------------------
{"msg":"PASSED [sig-network] Ingress API should support creating Ingress API operations [Conformance]","total":-1,"completed":9,"skipped":247,"failed":0}
------------------------------
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 22 13:46:36.699: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should include custom resource definition resources in discovery documents [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: fetching the /apis discovery document
STEP: finding the apiextensions.k8s.io API group in the /apis discovery document
STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis discovery document
STEP: fetching the /apis/apiextensions.k8s.io discovery document
STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis/apiextensions.k8s.io discovery document
STEP: fetching the /apis/apiextensions.k8s.io/v1 discovery document
STEP: finding customresourcedefinitions resources in the /apis/apiextensions.k8s.io/v1 discovery document
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 22 13:46:36.728: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-3584" for this suite.
•
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance]","total":-1,"completed":10,"skipped":278,"failed":0}
------------------------------
[BeforeEach] [sig-storage] Projected configMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 22 13:46:36.760: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating configMap with name projected-configmap-test-volume-4394de56-35b3-428d-b50c-c8021bb72f51
STEP: Creating a pod to test consume configMaps
Apr 22 13:46:36.798: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-d01bf3ff-ee71-4982-a31e-e57a0f2a22a5" in namespace "projected-4835" to be "Succeeded or Failed"
Apr 22 13:46:36.801: INFO: Pod "pod-projected-configmaps-d01bf3ff-ee71-4982-a31e-e57a0f2a22a5": Phase="Pending", Reason="", readiness=false. Elapsed: 3.008487ms
Apr 22 13:46:38.805: INFO: Pod "pod-projected-configmaps-d01bf3ff-ee71-4982-a31e-e57a0f2a22a5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006971452s
Apr 22 13:46:40.809: INFO: Pod "pod-projected-configmaps-d01bf3ff-ee71-4982-a31e-e57a0f2a22a5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010914849s
STEP: Saw pod success
Apr 22 13:46:40.809: INFO: Pod "pod-projected-configmaps-d01bf3ff-ee71-4982-a31e-e57a0f2a22a5" satisfied condition "Succeeded or Failed"
Apr 22 13:46:40.812: INFO: Trying to get logs from node k8s-upgrade-and-conformance-7gf7we-md-0-7868f4c856-kmb2d pod pod-projected-configmaps-d01bf3ff-ee71-4982-a31e-e57a0f2a22a5 container agnhost-container: <nil>
STEP: delete the pod
Apr 22 13:46:40.826: INFO: Waiting for pod pod-projected-configmaps-d01bf3ff-ee71-4982-a31e-e57a0f2a22a5 to disappear
Apr 22 13:46:40.829: INFO: Pod pod-projected-configmaps-d01bf3ff-ee71-4982-a31e-e57a0f2a22a5 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 22 13:46:40.829: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4835" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":-1,"completed":11,"skipped":292,"failed":0}
------------------------------
[BeforeEach] [sig-storage] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 22 13:46:40.883: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating secret with name secret-test-b6a7be63-f54c-4c3e-9746-a3c9b0c2a0f3
STEP: Creating a pod to test consume secrets
Apr 22 13:46:40.921: INFO: Waiting up to 5m0s for pod "pod-secrets-f28b6942-3cd8-4f61-85f7-796cda4f610f" in namespace "secrets-439" to be "Succeeded or Failed"
Apr 22 13:46:40.926: INFO: Pod "pod-secrets-f28b6942-3cd8-4f61-85f7-796cda4f610f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.666136ms
Apr 22 13:46:42.930: INFO: Pod "pod-secrets-f28b6942-3cd8-4f61-85f7-796cda4f610f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00860097s
Apr 22 13:46:44.939: INFO: Pod "pod-secrets-f28b6942-3cd8-4f61-85f7-796cda4f610f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.016753014s
STEP: Saw pod success
Apr 22 13:46:44.939: INFO: Pod "pod-secrets-f28b6942-3cd8-4f61-85f7-796cda4f610f" satisfied condition "Succeeded or Failed"
Apr 22 13:46:44.945: INFO: Trying to get logs from node k8s-upgrade-and-conformance-7gf7we-md-0-7868f4c856-k4p2k pod pod-secrets-f28b6942-3cd8-4f61-85f7-796cda4f610f container secret-volume-test: <nil>
STEP: delete the pod
Apr 22 13:46:44.961: INFO: Waiting for pod pod-secrets-f28b6942-3cd8-4f61-85f7-796cda4f610f to disappear
Apr 22 13:46:44.964: INFO: Pod pod-secrets-f28b6942-3cd8-4f61-85f7-796cda4f610f no longer exists
[AfterEach] [sig-storage] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 22 13:46:44.964: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-439" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":12,"skipped":325,"failed":0}
------------------------------
[BeforeEach] [sig-node] Probing container
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 22 13:46:25.714: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-node] Probing container
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:56
[It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
Apr 22 13:46:25.794: INFO: The status of Pod test-webserver-d9a8efc6-6caa-49c7-b6d7-e12b6e3121da is Pending, waiting for it to be Running (with Ready = true)
Apr 22 13:46:27.798: INFO: The status of Pod test-webserver-d9a8efc6-6caa-49c7-b6d7-e12b6e3121da is Running (Ready = false)
Apr 22 13:46:29.798: INFO: The status of Pod test-webserver-d9a8efc6-6caa-49c7-b6d7-e12b6e3121da is Running (Ready = false)
Apr 22 13:46:31.800: INFO: The status of Pod test-webserver-d9a8efc6-6caa-49c7-b6d7-e12b6e3121da is Running (Ready = false)
Apr 22 13:46:33.799: INFO: The status of Pod test-webserver-d9a8efc6-6caa-49c7-b6d7-e12b6e3121da is Running (Ready = false)
Apr 22 13:46:35.798: INFO: The status of Pod test-webserver-d9a8efc6-6caa-49c7-b6d7-e12b6e3121da is Running (Ready = false)
Apr 22 13:46:37.798: INFO: The status of Pod test-webserver-d9a8efc6-6caa-49c7-b6d7-e12b6e3121da is Running (Ready = false)
Apr 22 13:46:39.798: INFO: The status of Pod test-webserver-d9a8efc6-6caa-49c7-b6d7-e12b6e3121da is Running (Ready = false)
Apr 22 13:46:41.798: INFO: The status of Pod test-webserver-d9a8efc6-6caa-49c7-b6d7-e12b6e3121da is Running (Ready = false)
Apr 22 13:46:43.798: INFO: The status of Pod test-webserver-d9a8efc6-6caa-49c7-b6d7-e12b6e3121da is Running (Ready = false)
Apr 22 13:46:45.798: INFO: The status of Pod test-webserver-d9a8efc6-6caa-49c7-b6d7-e12b6e3121da is Running (Ready = false)
Apr 22 13:46:47.799: INFO: The status of Pod test-webserver-d9a8efc6-6caa-49c7-b6d7-e12b6e3121da is Running (Ready = true)
Apr 22 13:46:47.802: INFO: Container started at 2022-04-22 13:46:26 +0000 UTC, pod became ready at 2022-04-22 13:46:45 +0000 UTC
[AfterEach] [sig-node] Probing container
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 22 13:46:47.802: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-6031" for this suite.
•
------------------------------
{"msg":"PASSED [sig-node] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","total":-1,"completed":7,"skipped":115,"failed":0}
------------------------------
[BeforeEach] [sig-storage] Downward API volume
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 22 13:46:44.976: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41
[It] should provide container's cpu request [NodeConformance] [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating a pod to test downward API volume plugin
Apr 22 13:46:45.005: INFO: Waiting up to 5m0s for pod "downwardapi-volume-ac398701-afe8-4062-ab58-adba6b54b036" in namespace "downward-api-9064" to be "Succeeded or Failed"
Apr 22 13:46:45.008: INFO: Pod "downwardapi-volume-ac398701-afe8-4062-ab58-adba6b54b036": Phase="Pending", Reason="", readiness=false. Elapsed: 3.515941ms
Apr 22 13:46:47.013: INFO: Pod "downwardapi-volume-ac398701-afe8-4062-ab58-adba6b54b036": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007887542s
Apr 22 13:46:49.017: INFO: Pod "downwardapi-volume-ac398701-afe8-4062-ab58-adba6b54b036": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01207717s
STEP: Saw pod success
Apr 22 13:46:49.017: INFO: Pod "downwardapi-volume-ac398701-afe8-4062-ab58-adba6b54b036" satisfied condition "Succeeded or Failed"
Apr 22 13:46:49.020: INFO: Trying to get logs from node k8s-upgrade-and-conformance-7gf7we-md-0-7868f4c856-k4p2k pod downwardapi-volume-ac398701-afe8-4062-ab58-adba6b54b036 container client-container: <nil>
STEP: delete the pod
Apr 22 13:46:49.038: INFO: Waiting for pod downwardapi-volume-ac398701-afe8-4062-ab58-adba6b54b036 to disappear
Apr 22 13:46:49.041: INFO: Pod downwardapi-volume-ac398701-afe8-4062-ab58-adba6b54b036 no longer exists
[AfterEach] [sig-storage] Downward API volume
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 22 13:46:49.041: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-9064" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance]","total":-1,"completed":13,"skipped":326,"failed":0}
------------------------------
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 22 13:46:47.819: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] custom resource defaulting for requests and from storage works [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
Apr 22 13:46:47.843: INFO: >>> kubeConfig: /tmp/kubeconfig
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 22 13:46:50.982: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-3130" for this suite.
•
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works [Conformance]","total":-1,"completed":8,"skipped":121,"failed":0}
------------------------------
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 22 13:46:02.779: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Apr 22 13:46:04.485: INFO: new replicaset for deployment "sample-webhook-deployment" is yet to be created
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Apr 22 13:46:07.509: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Registering a validating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API
Apr 22 13:46:17.528: INFO: Waiting for webhook configuration to be ready...
Apr 22 13:46:27.641: INFO: Waiting for webhook configuration to be ready...
Apr 22 13:46:37.740: INFO: Waiting for webhook configuration to be ready...
Apr 22 13:46:47.838: INFO: Waiting for webhook configuration to be ready...
Apr 22 13:46:57.848: INFO: Waiting for webhook configuration to be ready...
Apr 22 13:46:57.848: FAIL: waiting for webhook configuration to be ready
Unexpected error:
    <*errors.errorString | 0xc0002bc2a0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred

Full Stack Trace
k8s.io/kubernetes/test/e2e/apimachinery.registerValidatingWebhookForWebhookConfigurations(0xc000c23e40, {0xc0032d1cf8, 0x14}, 0xc0020ff450, 0x0)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:1339 +0x7ca
k8s.io/kubernetes/test/e2e/apimachinery.glob..func27.10()
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:275 +0x73
k8s.io/kubernetes/test/e2e.RunE2ETests(0x24e2677)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:133 +0x697
k8s.io/kubernetes/test/e2e.TestE2E(0x2456919)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:136 +0x19
testing.tRunner(0xc000525040, 0x73a1f18)
	/usr/local/go/src/testing/testing.go:1259 +0x102
created by testing.(*T).Run
	/usr/local/go/src/testing/testing.go:1306 +0x35a
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 22 13:46:57.849: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-3064" for this suite.
STEP: Destroying namespace "webhook-3064-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• Failure [55.131 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] [It]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633

  Apr 22 13:46:57.848: waiting for webhook configuration to be ready
  Unexpected error:
      <*errors.errorString | 0xc0002bc2a0>: {
          s: "timed out waiting for the condition",
      }
      timed out waiting for the condition
  occurred

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:1339
------------------------------
[BeforeEach] [sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 22 13:46:49.069: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:749
[It] should serve a basic endpoint from pods [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: creating service endpoint-test2 in namespace services-6008
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-6008 to expose endpoints map[]
Apr 22 13:46:49.112: INFO: Failed go get Endpoints object: endpoints "endpoint-test2" not found
Apr 22 13:46:50.118: INFO: successfully validated that service endpoint-test2 in namespace services-6008 exposes endpoints map[]
STEP: Creating pod pod1 in namespace services-6008
Apr 22 13:46:50.129: INFO: The status of Pod pod1 is Pending, waiting for it to be Running (with Ready = true)
Apr 22 13:46:52.135: INFO: The status of Pod pod1 is Running (Ready = true)
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-6008 to expose endpoints map[pod1:[80]]
Apr 22 13:46:52.150: INFO: successfully validated that service endpoint-test2 in namespace services-6008 exposes endpoints map[pod1:[80]]
STEP: Checking if the Service forwards traffic to pod1
Apr 22 13:46:52.150: INFO: Creating new exec pod
Apr 22 13:46:55.162: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-6008 exec execpodh5xlv -- /bin/sh -x -c echo hostName | nc -v -t -w 2 endpoint-test2 80'
Apr 22 13:46:55.331: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 endpoint-test2 80\nConnection to endpoint-test2 80 port [tcp/http] succeeded!\n"
Apr 22 13:46:55.331: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request"
Apr 22 13:46:55.331: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-6008 exec execpodh5xlv -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.129.82.162 80'
Apr 22 13:46:55.481: INFO: stderr: "+ echo+ hostNamenc\n -v -t -w 2 10.129.82.162 80\nConnection to 10.129.82.162 80 port [tcp/http] succeeded!\n"
Apr 22 13:46:55.481: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request"
STEP: Creating pod pod2 in namespace services-6008
Apr 22 13:46:55.489: INFO: The status of Pod pod2 is Pending, waiting for it to be Running (with Ready = true)
Apr 22 13:46:57.495: INFO: The status of Pod pod2 is Running (Ready = true)
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-6008 to expose endpoints map[pod1:[80] pod2:[80]]
Apr 22 13:46:57.508: INFO: successfully validated that service endpoint-test2 in namespace services-6008 exposes endpoints map[pod1:[80] pod2:[80]]
STEP: Checking if the Service forwards traffic to pod1 and pod2
Apr 22 13:46:58.509: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-6008 exec execpodh5xlv -- /bin/sh -x -c echo hostName | nc -v -t -w 2 endpoint-test2 80'
Apr 22 13:46:58.710: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 endpoint-test2 80\nConnection to endpoint-test2 80 port [tcp/http] succeeded!\n"
Apr 22 13:46:58.710: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request"
Apr 22 13:46:58.710: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-6008 exec execpodh5xlv -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.129.82.162 80'
Apr 22 13:46:58.859: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 10.129.82.162 80\nConnection to 10.129.82.162 80 port [tcp/http] succeeded!\n"
Apr 22 13:46:58.859: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request"
STEP: Deleting pod pod1 in namespace services-6008
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-6008 to expose endpoints map[pod2:[80]]
Apr 22 13:46:58.904: INFO: successfully validated that service endpoint-test2 in namespace services-6008 exposes endpoints map[pod2:[80]]
STEP: Checking if the Service forwards traffic to pod2
Apr 22 13:46:59.905: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-6008 exec execpodh5xlv -- /bin/sh -x -c echo hostName | nc -v -t -w 2 endpoint-test2 80'
Apr 22 13:47:00.106: INFO: stderr: "+ nc -v -t -w 2 endpoint-test2+ 80\necho hostName\nConnection to endpoint-test2 80 port [tcp/http] succeeded!\n"
Apr 22 13:47:00.106: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request"
Apr 22 13:47:00.106: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-6008 exec execpodh5xlv -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.129.82.162 80'
Apr 22 13:47:00.255: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 10.129.82.162 80\nConnection to 10.129.82.162 80 port [tcp/http] succeeded!\n"
Apr 22 13:47:00.255: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request"
STEP: Deleting pod pod2 in namespace services-6008
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-6008 to expose endpoints map[]
Apr 22 13:47:01.277: INFO: successfully validated that service endpoint-test2 in namespace services-6008 exposes endpoints map[]
[AfterEach] [sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 22 13:47:01.293: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-6008" for this suite.
[AfterEach] [sig-network] Services
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:753
•
------------------------------
{"msg":"PASSED [sig-network] Services should serve a basic endpoint from pods [Conformance]","total":-1,"completed":14,"skipped":340,"failed":0}
------------------------------
{"msg":"FAILED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","total":-1,"completed":2,"skipped":59,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]"]}
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 22 13:46:57.917: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Apr 22 13:46:58.553: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Apr 22 13:47:01.575: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Registering a validating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API
STEP: Registering a mutating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API
STEP: Creating a dummy validating-webhook-configuration object
STEP: Deleting the validating-webhook-configuration, which should be possible to remove
STEP: Creating a dummy mutating-webhook-configuration object
STEP: Deleting the mutating-webhook-configuration, which should be possible to remove
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 22 13:47:01.646: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-2618" for this suite.
STEP: Destroying namespace "webhook-2618-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
•
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","total":-1,"completed":3,"skipped":59,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]"]}
------------------------------
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 22 13:46:13.119: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
Apr 22 13:46:13.547: INFO: role binding webhook-auth-reader already exists
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Apr 22 13:46:13.565: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Apr 22 13:46:16.588: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should be able to deny attaching pod [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Registering the webhook via the AdmissionRegistration API
Apr 22 13:46:26.606: INFO: Waiting for webhook configuration to be ready...
Apr 22 13:46:36.720: INFO: Waiting for webhook configuration to be ready...
Apr 22 13:46:46.819: INFO: Waiting for webhook configuration to be ready...
Apr 22 13:46:56.916: INFO: Waiting for webhook configuration to be ready...
Apr 22 13:47:06.934: INFO: Waiting for webhook configuration to be ready...
Apr 22 13:47:06.935: FAIL: waiting for webhook configuration to be ready
Unexpected error:
    <*errors.errorString | 0xc0002482b0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred

Full Stack Trace
k8s.io/kubernetes/test/e2e/apimachinery.registerWebhookForAttachingPod(0xc000adc2c0, {0xc0047ee540, 0xc}, 0xc003a425f0, 0x0)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:939 +0x74a
k8s.io/kubernetes/test/e2e/apimachinery.glob..func27.5()
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:207 +0x45
k8s.io/kubernetes/test/e2e.RunE2ETests(0x24e2677)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:133 +0x697
k8s.io/kubernetes/test/e2e.TestE2E(0x0)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:136 +0x19
testing.tRunner(0xc0004c4d00, 0x73a1f18)
	/usr/local/go/src/testing/testing.go:1259 +0x102
created by testing.(*T).Run
	/usr/local/go/src/testing/testing.go:1306 +0x35a
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 22 13:47:06.937: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-7400" for this suite.
STEP: Destroying namespace "webhook-7400-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• Failure [53.957 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to deny attaching pod [Conformance] [It]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633

  Apr 22 13:47:06.935: waiting for webhook configuration to be ready
  Unexpected error:
      <*errors.errorString | 0xc0002482b0>: {
          s: "timed out waiting for the condition",
      }
      timed out waiting for the condition
  occurred

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:939
------------------------------
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 22 13:47:01.739: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] works for CRD with validation schema [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
Apr 22 13:47:01.758: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: client-side validation (kubectl create and apply) allows request with known and required properties
Apr 22 13:47:03.995: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=crd-publish-openapi-2455 --namespace=crd-publish-openapi-2455 create -f -'
Apr 22 13:47:04.791: INFO: stderr: ""
Apr 22 13:47:04.791: INFO: stdout: "e2e-test-crd-publish-openapi-6276-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n"
Apr 22 13:47:04.791: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=crd-publish-openapi-2455 --namespace=crd-publish-openapi-2455 delete e2e-test-crd-publish-openapi-6276-crds test-foo'
Apr 22 13:47:04.876: INFO: stderr: ""
Apr 22 13:47:04.876: INFO: stdout: "e2e-test-crd-publish-openapi-6276-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n"
Apr 22 13:47:04.876: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=crd-publish-openapi-2455 --namespace=crd-publish-openapi-2455 apply -f -'
Apr 22 13:47:05.086: INFO: stderr: ""
Apr 22 13:47:05.086: INFO: stdout: "e2e-test-crd-publish-openapi-6276-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n"
Apr 22 13:47:05.086: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=crd-publish-openapi-2455 --namespace=crd-publish-openapi-2455 delete e2e-test-crd-publish-openapi-6276-crds test-foo'
Apr 22 13:47:05.162: INFO: stderr: ""
Apr 22 13:47:05.162: INFO: stdout: "e2e-test-crd-publish-openapi-6276-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n"
STEP: client-side validation (kubectl create and apply) rejects request with value outside defined enum values
Apr 22 13:47:05.163: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=crd-publish-openapi-2455 --namespace=crd-publish-openapi-2455 create -f -'
Apr 22 13:47:05.333: INFO: rc: 1
STEP: client-side validation (kubectl create and apply) rejects request with unknown properties when disallowed by the schema
Apr 22 13:47:05.333: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=crd-publish-openapi-2455 --namespace=crd-publish-openapi-2455 create -f -'
Apr 22 13:47:05.488: INFO: rc: 1
Apr 22 13:47:05.488: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=crd-publish-openapi-2455 --namespace=crd-publish-openapi-2455 apply -f -'
Apr 22 13:47:05.650: INFO: rc: 1
STEP: client-side validation (kubectl create and apply) rejects request without required properties
Apr 22 13:47:05.650: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=crd-publish-openapi-2455 --namespace=crd-publish-openapi-2455 create -f -'
Apr 22 13:47:05.811: INFO: rc: 1
Apr 22 13:47:05.811: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=crd-publish-openapi-2455 --namespace=crd-publish-openapi-2455 apply -f -'
Apr 22 13:47:05.976: INFO: rc: 1
STEP: kubectl explain works to explain CR properties
Apr 22 13:47:05.976: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=crd-publish-openapi-2455 explain e2e-test-crd-publish-openapi-6276-crds'
Apr 22 13:47:06.166: INFO: stderr: ""
Apr 22 13:47:06.167: INFO: stdout: "KIND: e2e-test-crd-publish-openapi-6276-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nDESCRIPTION:\n Foo CRD for Testing\n\nFIELDS:\n apiVersion\t<string>\n APIVersion defines the versioned schema of this representation of an\n object. Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t<string>\n Kind is a string value representing the REST resource this object\n represents. Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t<Object>\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t<Object>\n Specification of Foo\n\n status\t<Object>\n Status of Foo\n\n"
STEP: kubectl explain works to explain CR properties recursively
Apr 22 13:47:06.167: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=crd-publish-openapi-2455 explain e2e-test-crd-publish-openapi-6276-crds.metadata'
Apr 22 13:47:06.437: INFO: stderr: ""
Apr 22 13:47:06.437: INFO: stdout: "KIND: e2e-test-crd-publish-openapi-6276-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: metadata <Object>\n\nDESCRIPTION:\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n ObjectMeta is metadata that all persisted resources must have, which\n includes all objects users must create.\n\nFIELDS:\n annotations\t<map[string]string>\n Annotations is an unstructured key value map stored with a resource that\n may be set by external tools to store and retrieve arbitrary metadata. They\n are not queryable and should be preserved when modifying objects. More\n info: http://kubernetes.io/docs/user-guide/annotations\n\n clusterName\t<string>\n The name of the cluster which the object belongs to. This is used to\n distinguish resources with same name and namespace in different clusters.\n This field is not set anywhere right now and apiserver is going to ignore\n it if set in create or update request.\n\n creationTimestamp\t<string>\n CreationTimestamp is a timestamp representing the server time when this\n object was created. It is not guaranteed to be set in happens-before order\n across separate operations. Clients may not set this value. It is\n represented in RFC3339 form and is in UTC.\n\n Populated by the system. Read-only. Null for lists. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n deletionGracePeriodSeconds\t<integer>\n Number of seconds allowed for this object to gracefully terminate before it\n will be removed from the system. Only set when deletionTimestamp is also\n set. May only be shortened. Read-only.\n\n deletionTimestamp\t<string>\n DeletionTimestamp is RFC 3339 date and time at which this resource will be\n deleted. This field is set by the server when a graceful deletion is\n requested by the user, and is not directly settable by a client. The\n resource is expected to be deleted (no longer visible from resource lists,\n and not reachable by name) after the time in this field, once the\n finalizers list is empty. As long as the finalizers list contains items,\n deletion is blocked. Once the deletionTimestamp is set, this value may not\n be unset or be set further into the future, although it may be shortened or\n the resource may be deleted prior to this time. For example, a user may\n request that a pod is deleted in 30 seconds. The Kubelet will react by\n sending a graceful termination signal to the containers in the pod. After\n that 30 seconds, the Kubelet will send a hard termination signal (SIGKILL)\n to the container and after cleanup, remove the pod from the API.
In the\n presence of network partitions, this object may still exist after this\n timestamp, until an administrator or automated process can determine the\n resource is fully terminated. If not set, graceful deletion of the object\n has not been requested.\n\n Populated by the system when a graceful deletion is requested. Read-only.\n More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n finalizers\t<[]string>\n Must be empty before the object is deleted from the registry. Each entry is\n an identifier for the responsible component that will remove the entry from\n the list. If the deletionTimestamp of the object is non-nil, entries in\n this list can only be removed. Finalizers may be processed and removed in\n any order. Order is NOT enforced because it introduces significant risk of\n stuck finalizers. finalizers is a shared field, any actor with permission\n can reorder it. If the finalizer list is processed in order, then this can\n lead to a situation in which the component responsible for the first\n finalizer in the list is waiting for a signal (field value, external\n system, or other) produced by a component responsible for a finalizer later\n in the list, resulting in a deadlock. Without enforced ordering finalizers\n are free to order amongst themselves and are not vulnerable to ordering\n changes in the list.\n\n generateName\t<string>\n GenerateName is an optional prefix, used by the server, to generate a\n unique name ONLY IF the Name field has not been provided. If this field is\n used, the name returned to the client will be different than the name\n passed. This value will also be combined with a unique suffix. 
The provided\n value has the same validation rules as the Name field, and may be truncated\n by the length of the suffix required to make the value unique on the\n server.\n\n If this field is specified and the generated name exists, the server will\n NOT return a 409 - instead, it will either return 201 Created or 500 with\n Reason ServerTimeout indicating a unique name could not be found in the\n time allotted, and the client should retry (optionally after the time\n indicated in the Retry-After header).\n\n Applied only if Name is not specified. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#idempotency\n\n generation\t<integer>\n A sequence number representing a specific generation of the desired state.\n Populated by the system. Read-only.\n\n labels\t<map[string]string>\n Map of string keys and values that can be used to organize and categorize\n (scope and select) objects. May match selectors of replication controllers\n and services. More info: http://kubernetes.io/docs/user-guide/labels\n\n managedFields\t<[]Object>\n ManagedFields maps workflow-id and version to the set of fields that are\n managed by that workflow. This is mostly for internal housekeeping, and\n users typically shouldn't need to set or understand this field. A workflow\n can be the user's name, a controller's name, or the name of a specific\n apply path like \"ci-cd\". The set of fields is always in the version that\n the workflow used when modifying the object.\n\n name\t<string>\n Name must be unique within a namespace. Is required when creating\n resources, although some resources may allow a client to request the\n generation of an appropriate name automatically. Name is primarily intended\n for creation idempotence and configuration definition. Cannot be updated.\n More info: http://kubernetes.io/docs/user-guide/identifiers#names\n\n namespace\t<string>\n Namespace defines the space within which each name must be unique. 
An empty\n namespace is equivalent to the \"default\" namespace, but \"default\" is the\n canonical representation. Not all objects are required to be scoped to a\n namespace - the value of this field for those objects will be empty.\n\n Must be a DNS_LABEL. Cannot be updated. More info:\n http://kubernetes.io/docs/user-guide/namespaces\n\n ownerReferences\t<[]Object>\n List of objects depended by this object. If ALL objects in the list have\n been deleted, this object will be garbage collected. If this object is\n managed by a controller, then an entry in this list will point to this\n controller, with the controller field set to true. There cannot be more\n than one managing controller.\n\n resourceVersion\t<string>\n An opaque value that represents the internal version of this object that\n can be used by clients to determine when objects have changed. May be used\n for optimistic concurrency, change detection, and the watch operation on a\n resource or set of resources. Clients must treat these values as opaque and\n passed unmodified back to the server. They may only be valid for a\n particular resource or set of resources.\n\n Populated by the system. Read-only. Value must be treated as opaque by\n clients and . More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency\n\n selfLink\t<string>\n SelfLink is a URL representing this object. Populated by the system.\n Read-only.\n\n DEPRECATED Kubernetes will stop propagating this field in 1.20 release and\n the field is planned to be removed in 1.21 release.\n\n uid\t<string>\n UID is the unique in time and space value for this object. It is typically\n generated by the server on successful creation of a resource and is not\n allowed to change on PUT operations.\n\n Populated by the system. Read-only. 
More info:\n http://kubernetes.io/docs/user-guide/identifiers#uids\n\n"
Apr 22 13:47:06.438: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=crd-publish-openapi-2455 explain e2e-test-crd-publish-openapi-6276-crds.spec'
Apr 22 13:47:06.758: INFO: stderr: ""
Apr 22 13:47:06.759: INFO: stdout: "KIND: e2e-test-crd-publish-openapi-6276-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: spec <Object>\n\nDESCRIPTION:\n Specification of Foo\n\nFIELDS:\n bars\t<[]Object>\n List of Bars and their specs.\n\n"
Apr 22 13:47:06.759: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=crd-publish-openapi-2455 explain e2e-test-crd-publish-openapi-6276-crds.spec.bars'
Apr 22 13:47:07.116: INFO: stderr: ""
Apr 22 13:47:07.116: INFO: stdout: "KIND: e2e-test-crd-publish-openapi-6276-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: bars <[]Object>\n\nDESCRIPTION:\n List of Bars and their specs.\n\nFIELDS:\n age\t<string>\n Age of Bar.\n\n bazs\t<[]string>\n List of Bazs.\n\n feeling\t<string>\n Whether Bar is feeling great.\n\n name\t<string> -required-\n Name of Bar.\n\n"
STEP: kubectl explain works to return error when explain is called on property that doesn't exist
Apr 22 13:47:07.116: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=crd-publish-openapi-2455 explain e2e-test-crd-publish-openapi-6276-crds.spec.bars2'
Apr 22 13:47:07.535: INFO: rc: 1
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 22 13:47:10.777: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-2455" for this suite.
•
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance]","total":-1,"completed":4,"skipped":86,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]"]}
------------------------------
[BeforeEach] [sig-apps] StatefulSet
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 22 13:46:51.094: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:94
[BeforeEach] Basic StatefulSet functionality [StatefulSetBasic]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:109
STEP: Creating service test in namespace statefulset-6706
[It] should validate Statefulset Status endpoints [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating statefulset ss in namespace statefulset-6706
Apr 22 13:46:51.142: INFO: Found 0 stateful pods, waiting for 1
Apr 22 13:47:01.148: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: Patch Statefulset to include a label
STEP: Getting /status
Apr 22 13:47:01.170: INFO: StatefulSet ss has Conditions: []v1.StatefulSetCondition(nil)
STEP: updating the StatefulSet Status
Apr 22 13:47:01.184: INFO: updatedStatus.Conditions: []v1.StatefulSetCondition{v1.StatefulSetCondition{Type:"StatusUpdate", Status:"True", LastTransitionTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Reason:"E2E", Message:"Set from e2e test"}}
STEP: watching for the statefulset status to be updated
Apr 22 13:47:01.194: INFO: Observed &StatefulSet event: ADDED
Apr 22 13:47:01.194: INFO: Found Statefulset ss in namespace statefulset-6706 with labels: map[e2e:testing] annotations: map[] & Conditions: {StatusUpdate True 0001-01-01 00:00:00 +0000 UTC E2E Set from e2e test}
Apr 22 13:47:01.194: INFO: Statefulset ss has an updated status
STEP: patching the Statefulset Status
Apr 22 13:47:01.194: INFO: Patch payload: {"status":{"conditions":[{"type":"StatusPatched","status":"True"}]}}
Apr 22 13:47:01.206: INFO: Patched status conditions: []v1.StatefulSetCondition{v1.StatefulSetCondition{Type:"StatusPatched", Status:"True", LastTransitionTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Reason:"", Message:""}}
STEP: watching for the Statefulset status to be patched
Apr 22 13:47:01.213: INFO: Observed &StatefulSet event: ADDED
[AfterEach] Basic StatefulSet functionality [StatefulSetBasic]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:120
Apr 22 13:47:01.213: INFO: Deleting all statefulset in ns statefulset-6706
Apr 22 13:47:01.217: INFO: Scaling statefulset ss to 0
Apr 22 13:47:11.245: INFO: Waiting for statefulset status.replicas updated to 0
Apr 22 13:47:11.281: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 22 13:47:11.327: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-6706" for this suite.
•
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should validate Statefulset Status endpoints [Conformance]","total":-1,"completed":9,"skipped":152,"failed":0}
------------------------------
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 22 13:47:10.820: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] creating/deleting custom resource definition objects works [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
Apr 22 13:47:10.858: INFO: >>> kubeConfig: /tmp/kubeconfig
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 22 13:47:11.913: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-7679" for this suite.
•
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance]","total":-1,"completed":5,"skipped":103,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]"]}
------------------------------
[BeforeEach] [sig-api-machinery] Watchers
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 22 13:47:12.412: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should be able to restart watching from the last resource version observed by the previous watch [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: creating a watch on configmaps
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: closing the watch once it receives two notifications
Apr 22 13:47:12.531: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-5480 2b65c321-63d0-4ad4-b3b5-f1454d2daa7c 3746 0 2022-04-22 13:47:12 +0000 UTC <nil> <nil> map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2022-04-22 13:47:12 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,}
Apr 22 13:47:12.531: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-5480 2b65c321-63d0-4ad4-b3b5-f1454d2daa7c 3748 0 2022-04-22 13:47:12 +0000 UTC <nil> <nil> map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2022-04-22 13:47:12 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,}
STEP: modifying the configmap a second time, while the watch is closed
STEP: creating a new watch on configmaps from the last resource version observed by the first watch
STEP: deleting the configmap
STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed
Apr 22 13:47:12.545: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-5480 2b65c321-63d0-4ad4-b3b5-f1454d2daa7c 3749 0 2022-04-22 13:47:12 +0000 UTC <nil> <nil> map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2022-04-22 13:47:12 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
Apr 22 13:47:12.545: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-5480 2b65c321-63d0-4ad4-b3b5-f1454d2daa7c 3750 0 2022-04-22 13:47:12 +0000 UTC <nil> <nil> map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2022-04-22 13:47:12 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
[AfterEach] [sig-api-machinery] Watchers
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 22 13:47:12.545: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-5480" for this suite.
•
------------------------------
{"msg":"PASSED [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance]","total":-1,"completed":6,"skipped":305,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]"]}
------------------------------
{"msg":"FAILED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","total":-1,"completed":3,"skipped":107,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]"]}
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 22 13:47:07.079: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Apr 22 13:47:08.146: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Apr 22 13:47:11.176: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should be able to deny attaching pod [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Registering the webhook via the AdmissionRegistration API
STEP: create a pod
STEP: 'kubectl attach' the pod, should be denied by the webhook
Apr 22 13:47:13.234: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=webhook-1053 attach --namespace=webhook-1053 to-be-attached-pod -i -c=container1'
Apr 22 13:47:13.400: INFO: rc: 1
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 22 13:47:13.420: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-1053" for this suite.
STEP: Destroying namespace "webhook-1053-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
•
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","total":-1,"completed":4,"skipped":107,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]"]}
------------------------------
[BeforeEach] [sig-api-machinery] Garbage collector
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 22 13:47:11.446: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should not be blocked by dependency circle [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
Apr 22 13:47:11.524: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"e797b3da-67fb-47ed-a262-248d8205d9c7", Controller:(*bool)(0xc0047f72d6), BlockOwnerDeletion:(*bool)(0xc0047f72d7)}}
Apr 22 13:47:11.551: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"10b9d3ad-af36-496d-82a2-8cb196b73bf3", Controller:(*bool)(0xc0047f750a), BlockOwnerDeletion:(*bool)(0xc0047f750b)}}
Apr 22 13:47:11.561: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"c09cadd4-8966-40c2-acde-de8a74f2ecdf", Controller:(*bool)(0xc0047f77d6), BlockOwnerDeletion:(*bool)(0xc0047f77d7)}}
[AfterEach] [sig-api-machinery] Garbage collector
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 22 13:47:16.574: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-9243" for this suite.
•
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance]","total":-1,"completed":10,"skipped":182,"failed":0}
------------------------------
[BeforeEach] [sig-network] IngressClass API
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 22 13:47:16.715: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename ingressclass
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-network] IngressClass API
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/ingressclass.go:186
[It] should support creating IngressClass API operations [Conformance]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: getting /apis
STEP: getting /apis/networking.k8s.io
STEP: getting /apis/networking.k8s.iov1
STEP: creating
STEP: getting
STEP: listing
STEP: watching
Apr 22 13:47:16.850: INFO: starting watch
�[1mSTEP�[0m: patching �[1mSTEP�[0m: updating Apr 22 13:47:16.871: INFO: waiting for watch events with expected annotations Apr 22 13:47:16.871: INFO: saw patched and updated annotations �[1mSTEP�[0m: deleting �[1mSTEP�[0m: deleting a collection [AfterEach] [sig-network] IngressClass API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 22 13:47:16.969: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "ingressclass-7387" for this suite. �[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-network] IngressClass API should support creating IngressClass API operations [Conformance]","total":-1,"completed":11,"skipped":210,"failed":0} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Apr 22 13:47:12.571: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename webhook �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace �[1mSTEP�[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 �[1mSTEP�[0m: Setting up server cert �[1mSTEP�[0m: Create role binding to let webhook read 
extension-apiserver-authentication �[1mSTEP�[0m: Deploying the webhook pod �[1mSTEP�[0m: Wait for the deployment to be ready Apr 22 13:47:14.397: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set �[1mSTEP�[0m: Deploying the webhook service �[1mSTEP�[0m: Verifying the service has paired with the endpoint Apr 22 13:47:17.425: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] listing validating webhooks should work [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 �[1mSTEP�[0m: Listing all of the created validation webhooks �[1mSTEP�[0m: Creating a configMap that does not comply to the validation webhook rules �[1mSTEP�[0m: Deleting the collection of validation webhooks �[1mSTEP�[0m: Creating a configMap that does not comply to the validation webhook rules [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 22 13:47:17.653: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "webhook-9121" for this suite. �[1mSTEP�[0m: Destroying namespace "webhook-9121-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
•
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]","total":-1,"completed":7,"skipped":316,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]"]}
------------------------------
[BeforeEach] [sig-apps] DisruptionController
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 22 13:47:13.704: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename disruption
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-apps] DisruptionController
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/disruption.go:69
[It] should block an eviction until the PDB is updated to allow it [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating a pdb that targets all three pods in a test replica set
STEP: Waiting for the pdb to be processed
STEP: First trying to evict a pod which shouldn't be evictable
STEP: Waiting for all pods to be running
Apr 22 13:47:15.990: INFO: pods: 0 < 3
Apr 22 13:47:17.995: INFO: running pods: 0 < 3
STEP: locating a running pod
STEP: Updating the pdb to allow a pod to be evicted
STEP: Waiting for the pdb to be processed
STEP: Trying to evict the same pod we tried earlier which should now be evictable
STEP: Waiting for all pods to be running
STEP: Waiting for the pdb to observed all healthy pods
STEP: Patching the pdb to disallow a pod to be evicted
STEP: Waiting for the pdb to be processed
STEP: Waiting for all pods to be running
STEP: locating a running pod
STEP: Deleting the pdb to allow a pod to be evicted
STEP: Waiting for the pdb to be deleted
STEP: Trying to evict the same pod we tried earlier which should now be evictable
STEP: Waiting for all pods to be running
[AfterEach] [sig-apps] DisruptionController
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 22 13:47:22.111: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "disruption-9923" for this suite.
•
------------------------------
{"msg":"PASSED [sig-apps] DisruptionController should block an eviction until the PDB is updated to allow it [Conformance]","total":-1,"completed":5,"skipped":110,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]"]}
------------------------------
[BeforeEach] [sig-apps] Deployment
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 22 13:47:22.188: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:89
[It] RecreateDeployment should delete old pods and create new ones [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
Apr 22 13:47:22.206: INFO: Creating deployment "test-recreate-deployment"
Apr 22 13:47:22.214: INFO: Waiting deployment "test-recreate-deployment" to be updated to revision 1
Apr 22 13:47:22.227: INFO: deployment "test-recreate-deployment" doesn't have the required revision set
Apr 22 13:47:24.238: INFO: Waiting deployment "test-recreate-deployment" to complete
Apr 22 13:47:24.243: INFO: Triggering a new rollout for deployment 
"test-recreate-deployment" Apr 22 13:47:24.251: INFO: Updating deployment test-recreate-deployment Apr 22 13:47:24.251: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with olds pods [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:83 Apr 22 13:47:24.347: INFO: Deployment "test-recreate-deployment": &Deployment{ObjectMeta:{test-recreate-deployment deployment-6429 b899ddd6-e841-4961-88fb-6bae392b72ba 4764 2 2022-04-22 13:47:22 +0000 UTC <nil> <nil> map[name:sample-pod-3] map[deployment.kubernetes.io/revision:2] [] [] [{e2e.test Update apps/v1 2022-04-22 13:47:24 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{},"f:strategy":{"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2022-04-22 13:47:24 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:replicas":{},"f:unavailableReplicas":{},"f:updatedReplicas":{}}} status}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: 
sample-pod-3,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[name:sample-pod-3] map[] [] [] []} {[] [] [{httpd k8s.gcr.io/e2e-test-images/httpd:2.4.38-2 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc0040b1738 <nil> ClusterFirst map[] <nil> false false false <nil> &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] <nil> nil [] <nil> <nil> <nil> map[] [] <nil> nil}},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2022-04-22 13:47:24 +0000 UTC,LastTransitionTime:2022-04-22 13:47:24 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "test-recreate-deployment-5b99bd5487" is progressing.,LastUpdateTime:2022-04-22 13:47:24 +0000 UTC,LastTransitionTime:2022-04-22 13:47:22 +0000 UTC,},},ReadyReplicas:0,CollisionCount:nil,},} Apr 22 13:47:24.354: INFO: New ReplicaSet "test-recreate-deployment-5b99bd5487" of Deployment "test-recreate-deployment": &ReplicaSet{ObjectMeta:{test-recreate-deployment-5b99bd5487 deployment-6429 8d0fd829-7222-40cc-bfc3-485c2cede81d 4756 1 2022-04-22 13:47:24 +0000 UTC 
<nil> <nil> map[name:sample-pod-3 pod-template-hash:5b99bd5487] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-recreate-deployment b899ddd6-e841-4961-88fb-6bae392b72ba 0xc005abb6c7 0xc005abb6c8}] [] [{kube-controller-manager Update apps/v1 2022-04-22 13:47:24 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b899ddd6-e841-4961-88fb-6bae392b72ba\"}":{}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2022-04-22 13:47:24 +0000 UTC FieldsV1 {"f:status":{"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:replicas":{}}} status}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 5b99bd5487,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[name:sample-pod-3 pod-template-hash:5b99bd5487] map[] [] [] []} {[] [] [{httpd k8s.gcr.io/e2e-test-images/httpd:2.4.38-2 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent 
SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc005abb768 <nil> ClusterFirst map[] <nil> false false false <nil> &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] <nil> nil [] <nil> <nil> <nil> map[] [] <nil> nil}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Apr 22 13:47:24.354: INFO: All old ReplicaSets of Deployment "test-recreate-deployment": Apr 22 13:47:24.354: INFO: &ReplicaSet{ObjectMeta:{test-recreate-deployment-7d659f7dc9 deployment-6429 3be87f54-e9f6-4931-a2e5-55585928afbd 4747 2 2022-04-22 13:47:22 +0000 UTC <nil> <nil> map[name:sample-pod-3 pod-template-hash:7d659f7dc9] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-recreate-deployment b899ddd6-e841-4961-88fb-6bae392b72ba 0xc005abb7c7 0xc005abb7c8}] [] [{kube-controller-manager Update apps/v1 2022-04-22 13:47:22 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b899ddd6-e841-4961-88fb-6bae392b72ba\"}":{}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2022-04-22 13:47:24 +0000 UTC FieldsV1 {"f:status":{"f:observedGeneration":{},"f:replicas":{}}} status}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 7d659f7dc9,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[name:sample-pod-3 pod-template-hash:7d659f7dc9] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.33 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc005abb878 <nil> ClusterFirst map[] <nil> false false false <nil> &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] <nil> nil [] <nil> <nil> <nil> map[] [] <nil> 
nil}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Apr 22 13:47:24.360: INFO: Pod "test-recreate-deployment-5b99bd5487-s7psr" is not available: &Pod{ObjectMeta:{test-recreate-deployment-5b99bd5487-s7psr test-recreate-deployment-5b99bd5487- deployment-6429 1b264f4f-d83b-430f-a11b-c7f0ee19f4f4 4760 0 2022-04-22 13:47:24 +0000 UTC <nil> <nil> map[name:sample-pod-3 pod-template-hash:5b99bd5487] map[] [{apps/v1 ReplicaSet test-recreate-deployment-5b99bd5487 8d0fd829-7222-40cc-bfc3-485c2cede81d 0xc004046927 0xc004046928}] [] [{kube-controller-manager Update v1 2022-04-22 13:47:24 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"8d0fd829-7222-40cc-bfc3-485c2cede81d\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2022-04-22 13:47:24 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-sp6wj,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-sp6wj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:
nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k8s-upgrade-and-conformance-7gf7we-md-0-7868f4c856-k4p2k,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-22 13:47:24 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-22 13:47:24 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-22 13:47:24 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-22 13:47:24 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.7,PodIP:,StartTime:2022-04-22 13:47:24 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 22 13:47:24.360: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-6429" for this suite.
•
------------------------------
{"msg":"PASSED [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance]","total":-1,"completed":6,"skipped":146,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]"]}
------------------------------
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 22 13:47:17.866: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] works for CRD preserving unknown fields at the schema root [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
Apr 22 13:47:17.899: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: client-side validation (kubectl create and apply) allows request with any unknown properties
Apr 22 13:47:20.347: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=crd-publish-openapi-5894 --namespace=crd-publish-openapi-5894 create -f -'
Apr 22 13:47:21.335: INFO: stderr: ""
Apr 22 13:47:21.335: INFO: stdout: "e2e-test-crd-publish-openapi-574-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n"
Apr 22 13:47:21.335: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=crd-publish-openapi-5894 --namespace=crd-publish-openapi-5894 delete e2e-test-crd-publish-openapi-574-crds test-cr'
Apr 22 13:47:21.418: INFO: stderr: ""
Apr 22 13:47:21.418: INFO: stdout: "e2e-test-crd-publish-openapi-574-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n"
Apr 22 13:47:21.418: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=crd-publish-openapi-5894 --namespace=crd-publish-openapi-5894 apply -f -'
Apr 22 13:47:21.745: INFO: stderr: ""
Apr 22 13:47:21.745: INFO: stdout: "e2e-test-crd-publish-openapi-574-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n"
Apr 22 13:47:21.745: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=crd-publish-openapi-5894 --namespace=crd-publish-openapi-5894 delete e2e-test-crd-publish-openapi-574-crds test-cr'
Apr 22 13:47:21.844: INFO: stderr: ""
Apr 22 13:47:21.844: INFO: stdout: "e2e-test-crd-publish-openapi-574-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n"
STEP: kubectl explain works to explain CR
Apr 22 13:47:21.844: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=crd-publish-openapi-5894 explain e2e-test-crd-publish-openapi-574-crds'
Apr 22 13:47:22.068: INFO: stderr: ""
Apr 22 13:47:22.068: INFO: stdout: "KIND: e2e-test-crd-publish-openapi-574-crd\nVERSION: crd-publish-openapi-test-unknown-at-root.example.com/v1\n\nDESCRIPTION:\n <empty>\n"
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 22 13:47:24.478: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-5894" for this suite.
•
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance]","total":-1,"completed":8,"skipped":354,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]"]}
------------------------------
[BeforeEach] [sig-api-machinery] ResourceQuota
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 22 13:47:24.524: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should be able to update and delete ResourceQuota. [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating a ResourceQuota
STEP: Getting a ResourceQuota
STEP: Updating a ResourceQuota
STEP: Verifying a ResourceQuota was modified
STEP: Deleting a ResourceQuota
STEP: Verifying the deleted ResourceQuota
[AfterEach] [sig-api-machinery] ResourceQuota
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 22 13:47:24.631: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-9022" for this suite.
•
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. [Conformance]","total":-1,"completed":9,"skipped":356,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]"]}
------------------------------
[BeforeEach] [sig-network] Service endpoints latency
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 22 13:47:17.109: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename svc-latency
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should not be very high [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
Apr 22 13:47:17.144: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: creating replication controller svc-latency-rc in namespace svc-latency-7745
I0422 13:47:17.164850 19 runners.go:193] Created replication controller with name: svc-latency-rc, namespace: svc-latency-7745, replica count: 1
I0422 13:47:18.217820 19 runners.go:193] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
Apr 22 13:47:18.330: INFO: Created: latency-svc-hq9r9
Apr 22 13:47:18.335: INFO: Got endpoints: latency-svc-hq9r9 [17.246661ms]
Apr 22 13:47:18.350: INFO: Created: latency-svc-p59qw
Apr 22 13:47:18.357: INFO: Got endpoints: latency-svc-p59qw [21.127912ms]
Apr 22 13:47:18.460: INFO: Created: latency-svc-6l6ff
Apr 22 13:47:18.469: INFO: Got endpoints: latency-svc-6l6ff [133.024044ms]
Apr 22 13:47:18.471: INFO: Created: latency-svc-jdxw9
Apr 22 13:47:18.471: 
INFO: Created: latency-svc-5cq94
Apr 22 13:47:18.471: INFO: Created: latency-svc-k9mz7
Apr 22 13:47:18.471: INFO: Created: latency-svc-cnrwf
Apr 22 13:47:18.472: INFO: Created: latency-svc-tdwvl
Apr 22 13:47:18.473: INFO: Created: latency-svc-vh6rk
Apr 22 13:47:18.474: INFO: Created: latency-svc-sv7lr
Apr 22 13:47:18.474: INFO: Created: latency-svc-8jfk8
Apr 22 13:47:18.474: INFO: Created: latency-svc-n6wqt
Apr 22 13:47:18.474: INFO: Created: latency-svc-xjqnp
Apr 22 13:47:18.476: INFO: Created: latency-svc-f4mxk
Apr 22 13:47:18.476: INFO: Created: latency-svc-5q6qd
Apr 22 13:47:18.476: INFO: Created: latency-svc-d9zj2
Apr 22 13:47:18.476: INFO: Created: latency-svc-jdzjs
Apr 22 13:47:18.493: INFO: Got endpoints: latency-svc-xjqnp [157.20945ms]
Apr 22 13:47:18.495: INFO: Created: latency-svc-qn4gc
Apr 22 13:47:18.517: INFO: Got endpoints: latency-svc-jdzjs [160.680033ms]
Apr 22 13:47:18.518: INFO: Got endpoints: latency-svc-tdwvl [181.279833ms]
Apr 22 13:47:18.518: INFO: Got endpoints: latency-svc-n6wqt [182.602482ms]
Apr 22 13:47:18.523: INFO: Got endpoints: latency-svc-cnrwf [186.777313ms]
Apr 22 13:47:18.524: INFO: Got endpoints: latency-svc-vh6rk [187.284089ms]
Apr 22 13:47:18.528: INFO: Created: latency-svc-sr7v9
Apr 22 13:47:18.533: INFO: Got endpoints: latency-svc-sv7lr [196.650891ms]
Apr 22 13:47:18.536: INFO: Got endpoints: latency-svc-d9zj2 [200.167099ms]
Apr 22 13:47:18.536: INFO: Got endpoints: latency-svc-k9mz7 [200.371506ms]
Apr 22 13:47:18.536: INFO: Got endpoints: latency-svc-f4mxk [200.019104ms]
Apr 22 13:47:18.536: INFO: Got endpoints: latency-svc-8jfk8 [200.377073ms]
Apr 22 13:47:18.544: INFO: Got endpoints: latency-svc-jdxw9 [208.133013ms]
Apr 22 13:47:18.565: INFO: Got endpoints: latency-svc-5cq94 [228.20736ms]
Apr 22 13:47:18.565: INFO: Got endpoints: latency-svc-qn4gc [95.56604ms]
Apr 22 13:47:18.567: INFO: Got endpoints: latency-svc-sr7v9 [73.560025ms]
Apr 22 13:47:18.568: INFO: Got endpoints: latency-svc-5q6qd [231.512761ms]
Apr 22 13:47:18.742: INFO: Created: latency-svc-xwlh4
Apr 22 13:47:18.742: INFO: Created: latency-svc-hml82
Apr 22 13:47:18.742: INFO: Created: latency-svc-6gtrf
Apr 22 13:47:18.755: INFO: Created: latency-svc-ppbmx
Apr 22 13:47:18.755: INFO: Got endpoints: latency-svc-xwlh4 [218.722535ms]
Apr 22 13:47:18.755: INFO: Created: latency-svc-24nct
Apr 22 13:47:18.756: INFO: Created: latency-svc-btg8t
Apr 22 13:47:18.756: INFO: Created: latency-svc-pszg9
Apr 22 13:47:18.755: INFO: Created: latency-svc-vn8gb
Apr 22 13:47:18.756: INFO: Created: latency-svc-wplzc
Apr 22 13:47:18.756: INFO: Created: latency-svc-vqtkb
Apr 22 13:47:18.756: INFO: Created: latency-svc-jfm49
Apr 22 13:47:18.756: INFO: Created: latency-svc-8q8h9
Apr 22 13:47:18.756: INFO: Created: latency-svc-w2z4q
Apr 22 13:47:18.756: INFO: Created: latency-svc-xwz2x
Apr 22 13:47:18.760: INFO: Created: latency-svc-wrx4d
Apr 22 13:47:18.760: INFO: Got endpoints: latency-svc-wrx4d [192.641754ms]
Apr 22 13:47:18.766: INFO: Got endpoints: latency-svc-hml82 [247.881458ms]
Apr 22 13:47:18.766: INFO: Got endpoints: latency-svc-24nct [248.334545ms]
Apr 22 13:47:18.766: INFO: Got endpoints: latency-svc-pszg9 [243.00574ms]
Apr 22 13:47:18.771: INFO: Got endpoints: latency-svc-xwz2x [234.872297ms]
Apr 22 13:47:18.775: INFO: Got endpoints: latency-svc-wplzc [210.128963ms]
Apr 22 13:47:18.775: INFO: Got endpoints: latency-svc-vn8gb [210.364502ms]
Apr 22 13:47:18.778: INFO: Got endpoints: latency-svc-btg8t [242.095967ms]
Apr 22 13:47:18.779: INFO: Got endpoints: latency-svc-ppbmx [246.073839ms]
Apr 22 13:47:18.778: INFO: Got endpoints: latency-svc-8q8h9 [211.472285ms]
Apr 22 13:47:18.783: INFO: Got endpoints: latency-svc-w2z4q [266.075303ms]
Apr 22 13:47:18.786: INFO: Created: latency-svc-tnj54
Apr 22 13:47:18.786: INFO: Got endpoints: latency-svc-vqtkb [250.188991ms]
Apr 22 13:47:18.790: INFO: Got endpoints: latency-svc-jfm49 [266.207302ms]
Apr 22 13:47:18.795: INFO: Got endpoints: latency-svc-6gtrf [250.347077ms]
Apr 22 13:47:18.797: INFO: Got endpoints: latency-svc-tnj54 [41.942171ms]
Apr 22 13:47:18.802: INFO: Created: latency-svc-l2tt4
Apr 22 13:47:18.806: INFO: Got endpoints: latency-svc-l2tt4 [45.825018ms]
Apr 22 13:47:18.816: INFO: Created: latency-svc-p8snw
Apr 22 13:47:18.824: INFO: Got endpoints: latency-svc-p8snw [57.972274ms]
Apr 22 13:47:18.827: INFO: Created: latency-svc-bvvrd
Apr 22 13:47:18.833: INFO: Got endpoints: latency-svc-bvvrd [66.449762ms]
Apr 22 13:47:18.848: INFO: Created: latency-svc-skvnj
Apr 22 13:47:18.855: INFO: Got endpoints: latency-svc-skvnj [89.196347ms]
Apr 22 13:47:18.862: INFO: Created: latency-svc-bcrvj
Apr 22 13:47:18.866: INFO: Got endpoints: latency-svc-bcrvj [94.835634ms]
Apr 22 13:47:18.872: INFO: Created: latency-svc-ngvdb
Apr 22 13:47:18.881: INFO: Created: latency-svc-ncv4d
Apr 22 13:47:18.888: INFO: Created: latency-svc-65w7p
Apr 22 13:47:18.897: INFO: Created: latency-svc-w8mnl
Apr 22 13:47:18.907: INFO: Created: latency-svc-lxw56
Apr 22 13:47:18.915: INFO: Created: latency-svc-8ddjb
Apr 22 13:47:18.916: INFO: Got endpoints: latency-svc-ngvdb [140.900399ms]
Apr 22 13:47:18.927: INFO: Created: latency-svc-vpzk2
Apr 22 13:47:18.934: INFO: Created: latency-svc-pzd6p
Apr 22 13:47:18.959: INFO: Created: latency-svc-xmbdj
Apr 22 13:47:18.966: INFO: Created: latency-svc-4xv5s
Apr 22 13:47:18.968: INFO: Got endpoints: latency-svc-ncv4d [192.481093ms]
Apr 22 13:47:18.981: INFO: Created: latency-svc-v7svs
Apr 22 13:47:18.990: INFO: Created: latency-svc-44hww
Apr 22 13:47:19.009: INFO: Created: latency-svc-xnbng
Apr 22 13:47:19.014: INFO: Got endpoints: latency-svc-65w7p [234.901109ms]
Apr 22 13:47:19.025: INFO: Created: latency-svc-vsmdj
Apr 22 13:47:19.033: INFO: Created: latency-svc-7jrrm
Apr 22 13:47:19.046: INFO: Created: latency-svc-9gwrv
Apr 22 13:47:19.055: INFO: Created: latency-svc-6ggzx
Apr 22 13:47:19.070: INFO: Got endpoints: latency-svc-w8mnl [291.16825ms]
Apr 22 13:47:19.078: INFO: Created: latency-svc-gh89s
Apr 22 13:47:19.094: 
INFO: Created: latency-svc-zhcgp Apr 22 13:47:19.114: INFO: Got endpoints: latency-svc-lxw56 [335.427959ms] Apr 22 13:47:19.133: INFO: Created: latency-svc-h9gpf Apr 22 13:47:19.172: INFO: Got endpoints: latency-svc-8ddjb [388.268068ms] Apr 22 13:47:19.189: INFO: Created: latency-svc-zjlfc Apr 22 13:47:19.218: INFO: Got endpoints: latency-svc-vpzk2 [432.061543ms] Apr 22 13:47:19.253: INFO: Created: latency-svc-q6tl6 Apr 22 13:47:19.265: INFO: Got endpoints: latency-svc-pzd6p [474.891801ms] Apr 22 13:47:19.285: INFO: Created: latency-svc-vt4c2 Apr 22 13:47:19.314: INFO: Got endpoints: latency-svc-xmbdj [516.437528ms] Apr 22 13:47:19.329: INFO: Created: latency-svc-54kml Apr 22 13:47:19.363: INFO: Got endpoints: latency-svc-4xv5s [568.519859ms] Apr 22 13:47:19.377: INFO: Created: latency-svc-qsnqq Apr 22 13:47:19.417: INFO: Got endpoints: latency-svc-v7svs [610.315662ms] Apr 22 13:47:19.433: INFO: Created: latency-svc-k5jvd Apr 22 13:47:19.466: INFO: Got endpoints: latency-svc-44hww [642.399131ms] Apr 22 13:47:19.501: INFO: Created: latency-svc-d9p2k Apr 22 13:47:19.517: INFO: Got endpoints: latency-svc-xnbng [683.46662ms] Apr 22 13:47:19.535: INFO: Created: latency-svc-zjqjl Apr 22 13:47:19.563: INFO: Got endpoints: latency-svc-vsmdj [708.043532ms] Apr 22 13:47:19.577: INFO: Created: latency-svc-5dbh2 Apr 22 13:47:19.616: INFO: Got endpoints: latency-svc-7jrrm [750.28007ms] Apr 22 13:47:19.631: INFO: Created: latency-svc-bpnhm Apr 22 13:47:19.663: INFO: Got endpoints: latency-svc-9gwrv [747.203364ms] Apr 22 13:47:19.678: INFO: Created: latency-svc-thgd6 Apr 22 13:47:19.717: INFO: Got endpoints: latency-svc-6ggzx [748.902652ms] Apr 22 13:47:19.728: INFO: Created: latency-svc-2d5gl Apr 22 13:47:19.765: INFO: Got endpoints: latency-svc-gh89s [751.36209ms] Apr 22 13:47:19.776: INFO: Created: latency-svc-wbrks Apr 22 13:47:19.816: INFO: Got endpoints: latency-svc-zhcgp [746.145023ms] Apr 22 13:47:19.827: INFO: Created: latency-svc-rbbsw Apr 22 13:47:19.866: INFO: Got 
endpoints: latency-svc-h9gpf [751.522333ms] Apr 22 13:47:19.876: INFO: Created: latency-svc-j44k8 Apr 22 13:47:19.921: INFO: Got endpoints: latency-svc-zjlfc [749.054695ms] Apr 22 13:47:19.936: INFO: Created: latency-svc-zvvlb Apr 22 13:47:19.965: INFO: Got endpoints: latency-svc-q6tl6 [745.61683ms] Apr 22 13:47:19.976: INFO: Created: latency-svc-fmptr Apr 22 13:47:20.013: INFO: Got endpoints: latency-svc-vt4c2 [748.157127ms] Apr 22 13:47:20.029: INFO: Created: latency-svc-f9pvj Apr 22 13:47:20.064: INFO: Got endpoints: latency-svc-54kml [749.870544ms] Apr 22 13:47:20.077: INFO: Created: latency-svc-bjmd8 Apr 22 13:47:20.114: INFO: Got endpoints: latency-svc-qsnqq [750.455044ms] Apr 22 13:47:20.134: INFO: Created: latency-svc-4qfdj Apr 22 13:47:20.164: INFO: Got endpoints: latency-svc-k5jvd [746.993454ms] Apr 22 13:47:20.178: INFO: Created: latency-svc-htpxg Apr 22 13:47:20.215: INFO: Got endpoints: latency-svc-d9p2k [748.169371ms] Apr 22 13:47:20.227: INFO: Created: latency-svc-kgtvc Apr 22 13:47:20.267: INFO: Got endpoints: latency-svc-zjqjl [750.017691ms] Apr 22 13:47:20.277: INFO: Created: latency-svc-2bb84 Apr 22 13:47:20.314: INFO: Got endpoints: latency-svc-5dbh2 [747.365833ms] Apr 22 13:47:20.329: INFO: Created: latency-svc-qj7bw Apr 22 13:47:20.365: INFO: Got endpoints: latency-svc-bpnhm [748.960473ms] Apr 22 13:47:20.376: INFO: Created: latency-svc-fwkn8 Apr 22 13:47:20.418: INFO: Got endpoints: latency-svc-thgd6 [754.236496ms] Apr 22 13:47:20.435: INFO: Created: latency-svc-l4f8g Apr 22 13:47:20.470: INFO: Got endpoints: latency-svc-2d5gl [753.082775ms] Apr 22 13:47:20.487: INFO: Created: latency-svc-mgt57 Apr 22 13:47:20.517: INFO: Got endpoints: latency-svc-wbrks [751.532089ms] Apr 22 13:47:20.541: INFO: Created: latency-svc-9ts57 Apr 22 13:47:20.571: INFO: Got endpoints: latency-svc-rbbsw [755.285023ms] Apr 22 13:47:20.585: INFO: Created: latency-svc-9ltkp Apr 22 13:47:20.615: INFO: Got endpoints: latency-svc-j44k8 [748.606156ms] Apr 22 13:47:20.629: 
INFO: Created: latency-svc-7nt47 Apr 22 13:47:20.667: INFO: Got endpoints: latency-svc-zvvlb [745.819596ms] Apr 22 13:47:20.680: INFO: Created: latency-svc-kvs74 Apr 22 13:47:20.714: INFO: Got endpoints: latency-svc-fmptr [749.588584ms] Apr 22 13:47:20.727: INFO: Created: latency-svc-5dlm6 Apr 22 13:47:20.763: INFO: Got endpoints: latency-svc-f9pvj [749.932547ms] Apr 22 13:47:20.778: INFO: Created: latency-svc-8x6hz Apr 22 13:47:20.813: INFO: Got endpoints: latency-svc-bjmd8 [748.630728ms] Apr 22 13:47:20.828: INFO: Created: latency-svc-k9mtd Apr 22 13:47:20.866: INFO: Got endpoints: latency-svc-4qfdj [752.49651ms] Apr 22 13:47:20.908: INFO: Created: latency-svc-wtj6b Apr 22 13:47:20.922: INFO: Got endpoints: latency-svc-htpxg [757.139511ms] Apr 22 13:47:20.936: INFO: Created: latency-svc-xqq4q Apr 22 13:47:20.966: INFO: Got endpoints: latency-svc-kgtvc [751.182217ms] Apr 22 13:47:20.989: INFO: Created: latency-svc-bdwgv Apr 22 13:47:21.014: INFO: Got endpoints: latency-svc-2bb84 [747.720586ms] Apr 22 13:47:21.028: INFO: Created: latency-svc-5fc2j Apr 22 13:47:21.063: INFO: Got endpoints: latency-svc-qj7bw [749.383425ms] Apr 22 13:47:21.078: INFO: Created: latency-svc-ctnt7 Apr 22 13:47:21.118: INFO: Got endpoints: latency-svc-fwkn8 [752.248397ms] Apr 22 13:47:21.130: INFO: Created: latency-svc-fw8hp Apr 22 13:47:21.166: INFO: Got endpoints: latency-svc-l4f8g [747.990772ms] Apr 22 13:47:21.184: INFO: Created: latency-svc-fjmj5 Apr 22 13:47:21.213: INFO: Got endpoints: latency-svc-mgt57 [742.882231ms] Apr 22 13:47:21.223: INFO: Created: latency-svc-fbrvn Apr 22 13:47:21.263: INFO: Got endpoints: latency-svc-9ts57 [745.809383ms] Apr 22 13:47:21.272: INFO: Created: latency-svc-lbnfb Apr 22 13:47:21.313: INFO: Got endpoints: latency-svc-9ltkp [742.068266ms] Apr 22 13:47:21.328: INFO: Created: latency-svc-htvj7 Apr 22 13:47:21.368: INFO: Got endpoints: latency-svc-7nt47 [752.040538ms] Apr 22 13:47:21.381: INFO: Created: latency-svc-r8v75 Apr 22 13:47:21.416: INFO: Got 
endpoints: latency-svc-kvs74 [749.288953ms] Apr 22 13:47:21.433: INFO: Created: latency-svc-8gzrt Apr 22 13:47:21.477: INFO: Got endpoints: latency-svc-5dlm6 [762.719602ms] Apr 22 13:47:21.501: INFO: Created: latency-svc-wbh84 Apr 22 13:47:21.527: INFO: Got endpoints: latency-svc-8x6hz [763.83333ms] Apr 22 13:47:21.547: INFO: Created: latency-svc-h8sht Apr 22 13:47:21.581: INFO: Got endpoints: latency-svc-k9mtd [767.560611ms] Apr 22 13:47:21.604: INFO: Created: latency-svc-l7dtm Apr 22 13:47:21.615: INFO: Got endpoints: latency-svc-wtj6b [748.56498ms] Apr 22 13:47:21.640: INFO: Created: latency-svc-hr7dm Apr 22 13:47:21.670: INFO: Got endpoints: latency-svc-xqq4q [748.218151ms] Apr 22 13:47:21.687: INFO: Created: latency-svc-9b92t Apr 22 13:47:21.716: INFO: Got endpoints: latency-svc-bdwgv [749.884167ms] Apr 22 13:47:21.733: INFO: Created: latency-svc-rzs8t Apr 22 13:47:21.768: INFO: Got endpoints: latency-svc-5fc2j [753.396896ms] Apr 22 13:47:21.778: INFO: Created: latency-svc-mm69n Apr 22 13:47:21.815: INFO: Got endpoints: latency-svc-ctnt7 [752.095497ms] Apr 22 13:47:21.828: INFO: Created: latency-svc-tx7tj Apr 22 13:47:21.866: INFO: Got endpoints: latency-svc-fw8hp [747.658553ms] Apr 22 13:47:21.877: INFO: Created: latency-svc-rmjwp Apr 22 13:47:21.914: INFO: Got endpoints: latency-svc-fjmj5 [747.81101ms] Apr 22 13:47:21.925: INFO: Created: latency-svc-92xrx Apr 22 13:47:21.967: INFO: Got endpoints: latency-svc-fbrvn [754.169231ms] Apr 22 13:47:21.980: INFO: Created: latency-svc-l2s52 Apr 22 13:47:22.024: INFO: Got endpoints: latency-svc-lbnfb [761.508535ms] Apr 22 13:47:22.038: INFO: Created: latency-svc-vrpfx Apr 22 13:47:22.065: INFO: Got endpoints: latency-svc-htvj7 [751.557065ms] Apr 22 13:47:22.086: INFO: Created: latency-svc-22w4n Apr 22 13:47:22.115: INFO: Got endpoints: latency-svc-r8v75 [746.993949ms] Apr 22 13:47:22.137: INFO: Created: latency-svc-6m9r6 Apr 22 13:47:22.167: INFO: Got endpoints: latency-svc-8gzrt [750.291843ms] Apr 22 13:47:22.179: 
INFO: Created: latency-svc-qthdh Apr 22 13:47:22.218: INFO: Got endpoints: latency-svc-wbh84 [740.43166ms] Apr 22 13:47:22.242: INFO: Created: latency-svc-qlbrx Apr 22 13:47:22.267: INFO: Got endpoints: latency-svc-h8sht [739.790091ms] Apr 22 13:47:22.282: INFO: Created: latency-svc-5hbww Apr 22 13:47:22.315: INFO: Got endpoints: latency-svc-l7dtm [733.709896ms] Apr 22 13:47:22.333: INFO: Created: latency-svc-8l67j Apr 22 13:47:22.363: INFO: Got endpoints: latency-svc-hr7dm [747.664051ms] Apr 22 13:47:22.376: INFO: Created: latency-svc-jjhfr Apr 22 13:47:22.416: INFO: Got endpoints: latency-svc-9b92t [746.153824ms] Apr 22 13:47:22.427: INFO: Created: latency-svc-tld5w Apr 22 13:47:22.466: INFO: Got endpoints: latency-svc-rzs8t [749.908341ms] Apr 22 13:47:22.487: INFO: Created: latency-svc-mtfp2 Apr 22 13:47:22.518: INFO: Got endpoints: latency-svc-mm69n [750.335117ms] Apr 22 13:47:22.534: INFO: Created: latency-svc-9w7m9 Apr 22 13:47:22.564: INFO: Got endpoints: latency-svc-tx7tj [748.733351ms] Apr 22 13:47:22.575: INFO: Created: latency-svc-648qm Apr 22 13:47:22.613: INFO: Got endpoints: latency-svc-rmjwp [747.595465ms] Apr 22 13:47:22.625: INFO: Created: latency-svc-jklzr Apr 22 13:47:22.666: INFO: Got endpoints: latency-svc-92xrx [751.835785ms] Apr 22 13:47:22.679: INFO: Created: latency-svc-bcthf Apr 22 13:47:22.714: INFO: Got endpoints: latency-svc-l2s52 [747.281101ms] Apr 22 13:47:22.727: INFO: Created: latency-svc-8kd95 Apr 22 13:47:22.763: INFO: Got endpoints: latency-svc-vrpfx [738.952747ms] Apr 22 13:47:22.777: INFO: Created: latency-svc-r8vlz Apr 22 13:47:22.813: INFO: Got endpoints: latency-svc-22w4n [748.131893ms] Apr 22 13:47:22.828: INFO: Created: latency-svc-pn5sg Apr 22 13:47:22.864: INFO: Got endpoints: latency-svc-6m9r6 [749.131351ms] Apr 22 13:47:22.876: INFO: Created: latency-svc-bfnh7 Apr 22 13:47:22.914: INFO: Got endpoints: latency-svc-qthdh [747.258479ms] Apr 22 13:47:22.929: INFO: Created: latency-svc-f872c Apr 22 13:47:22.966: INFO: Got 
endpoints: latency-svc-qlbrx [747.898091ms] Apr 22 13:47:22.977: INFO: Created: latency-svc-4gms8 Apr 22 13:47:23.013: INFO: Got endpoints: latency-svc-5hbww [745.404189ms] Apr 22 13:47:23.024: INFO: Created: latency-svc-zlh68 Apr 22 13:47:23.063: INFO: Got endpoints: latency-svc-8l67j [748.670988ms] Apr 22 13:47:23.078: INFO: Created: latency-svc-95hr9 Apr 22 13:47:23.123: INFO: Got endpoints: latency-svc-jjhfr [759.59305ms] Apr 22 13:47:23.147: INFO: Created: latency-svc-n7rkb Apr 22 13:47:23.163: INFO: Got endpoints: latency-svc-tld5w [746.319813ms] Apr 22 13:47:23.174: INFO: Created: latency-svc-m2rp8 Apr 22 13:47:23.213: INFO: Got endpoints: latency-svc-mtfp2 [747.417611ms] Apr 22 13:47:23.230: INFO: Created: latency-svc-7tkzp Apr 22 13:47:23.264: INFO: Got endpoints: latency-svc-9w7m9 [745.839618ms] Apr 22 13:47:23.285: INFO: Created: latency-svc-8xsv2 Apr 22 13:47:23.314: INFO: Got endpoints: latency-svc-648qm [749.563641ms] Apr 22 13:47:23.328: INFO: Created: latency-svc-lzb4r Apr 22 13:47:23.363: INFO: Got endpoints: latency-svc-jklzr [750.095139ms] Apr 22 13:47:23.376: INFO: Created: latency-svc-czpf6 Apr 22 13:47:23.426: INFO: Got endpoints: latency-svc-bcthf [760.110286ms] Apr 22 13:47:23.439: INFO: Created: latency-svc-ztsbj Apr 22 13:47:23.464: INFO: Got endpoints: latency-svc-8kd95 [749.724065ms] Apr 22 13:47:23.487: INFO: Created: latency-svc-6lkdf Apr 22 13:47:23.519: INFO: Got endpoints: latency-svc-r8vlz [754.477143ms] Apr 22 13:47:23.545: INFO: Created: latency-svc-hkg57 Apr 22 13:47:23.567: INFO: Got endpoints: latency-svc-pn5sg [753.962224ms] Apr 22 13:47:23.584: INFO: Created: latency-svc-zds6j Apr 22 13:47:23.614: INFO: Got endpoints: latency-svc-bfnh7 [749.640198ms] Apr 22 13:47:23.623: INFO: Created: latency-svc-tj778 Apr 22 13:47:23.664: INFO: Got endpoints: latency-svc-f872c [749.851887ms] Apr 22 13:47:23.675: INFO: Created: latency-svc-7xgjz Apr 22 13:47:23.713: INFO: Got endpoints: latency-svc-4gms8 [747.643172ms] Apr 22 13:47:23.725: 
INFO: Created: latency-svc-5lnvn Apr 22 13:47:23.764: INFO: Got endpoints: latency-svc-zlh68 [751.140734ms] Apr 22 13:47:23.775: INFO: Created: latency-svc-gp4zq Apr 22 13:47:23.813: INFO: Got endpoints: latency-svc-95hr9 [750.061821ms] Apr 22 13:47:23.833: INFO: Created: latency-svc-t9kl8 Apr 22 13:47:23.866: INFO: Got endpoints: latency-svc-n7rkb [743.05997ms] Apr 22 13:47:23.876: INFO: Created: latency-svc-6ncwr Apr 22 13:47:23.915: INFO: Got endpoints: latency-svc-m2rp8 [751.948743ms] Apr 22 13:47:23.925: INFO: Created: latency-svc-6hwhq Apr 22 13:47:23.969: INFO: Got endpoints: latency-svc-7tkzp [755.147984ms] Apr 22 13:47:23.982: INFO: Created: latency-svc-stl5d Apr 22 13:47:24.014: INFO: Got endpoints: latency-svc-8xsv2 [749.421755ms] Apr 22 13:47:24.024: INFO: Created: latency-svc-hknj9 Apr 22 13:47:24.065: INFO: Got endpoints: latency-svc-lzb4r [751.12242ms] Apr 22 13:47:24.079: INFO: Created: latency-svc-kd5mz Apr 22 13:47:24.117: INFO: Got endpoints: latency-svc-czpf6 [753.810316ms] Apr 22 13:47:24.131: INFO: Created: latency-svc-dwnmx Apr 22 13:47:24.164: INFO: Got endpoints: latency-svc-ztsbj [737.731419ms] Apr 22 13:47:24.180: INFO: Created: latency-svc-sntz2 Apr 22 13:47:24.212: INFO: Got endpoints: latency-svc-6lkdf [747.982057ms] Apr 22 13:47:24.225: INFO: Created: latency-svc-5dkvb Apr 22 13:47:24.269: INFO: Got endpoints: latency-svc-hkg57 [750.541432ms] Apr 22 13:47:24.285: INFO: Created: latency-svc-zmb64 Apr 22 13:47:24.315: INFO: Got endpoints: latency-svc-zds6j [747.822462ms] Apr 22 13:47:24.336: INFO: Created: latency-svc-6m8rl Apr 22 13:47:24.367: INFO: Got endpoints: latency-svc-tj778 [752.533999ms] Apr 22 13:47:24.378: INFO: Created: latency-svc-tclvm Apr 22 13:47:24.416: INFO: Got endpoints: latency-svc-7xgjz [752.247554ms] Apr 22 13:47:24.435: INFO: Created: latency-svc-2jrjs Apr 22 13:47:24.471: INFO: Got endpoints: latency-svc-5lnvn [757.125271ms] Apr 22 13:47:24.488: INFO: Created: latency-svc-vbrj9 Apr 22 13:47:24.516: INFO: Got 
endpoints: latency-svc-gp4zq [751.968601ms] Apr 22 13:47:24.540: INFO: Created: latency-svc-8jwgp Apr 22 13:47:24.579: INFO: Got endpoints: latency-svc-t9kl8 [765.193098ms] Apr 22 13:47:24.597: INFO: Created: latency-svc-lzrrx Apr 22 13:47:24.615: INFO: Got endpoints: latency-svc-6ncwr [749.560858ms] Apr 22 13:47:24.633: INFO: Created: latency-svc-ftshw Apr 22 13:47:24.666: INFO: Got endpoints: latency-svc-6hwhq [750.923236ms] Apr 22 13:47:24.682: INFO: Created: latency-svc-gww5t Apr 22 13:47:24.718: INFO: Got endpoints: latency-svc-stl5d [749.705055ms] Apr 22 13:47:24.731: INFO: Created: latency-svc-7bfwp Apr 22 13:47:24.763: INFO: Got endpoints: latency-svc-hknj9 [749.383088ms] Apr 22 13:47:24.775: INFO: Created: latency-svc-7ksdb Apr 22 13:47:24.815: INFO: Got endpoints: latency-svc-kd5mz [749.796323ms] Apr 22 13:47:24.836: INFO: Created: latency-svc-rrr94 Apr 22 13:47:24.865: INFO: Got endpoints: latency-svc-dwnmx [747.734081ms] Apr 22 13:47:24.878: INFO: Created: latency-svc-g7k84 Apr 22 13:47:24.914: INFO: Got endpoints: latency-svc-sntz2 [750.440411ms] Apr 22 13:47:24.930: INFO: Created: latency-svc-qmr5r Apr 22 13:47:24.967: INFO: Got endpoints: latency-svc-5dkvb [754.253054ms] Apr 22 13:47:24.978: INFO: Created: latency-svc-rv6rp Apr 22 13:47:25.015: INFO: Got endpoints: latency-svc-zmb64 [745.904524ms] Apr 22 13:47:25.037: INFO: Created: latency-svc-pzbm7 Apr 22 13:47:25.064: INFO: Got endpoints: latency-svc-6m8rl [748.807829ms] Apr 22 13:47:25.079: INFO: Created: latency-svc-vjxdx Apr 22 13:47:25.114: INFO: Got endpoints: latency-svc-tclvm [747.683653ms] Apr 22 13:47:25.131: INFO: Created: latency-svc-gdzsx Apr 22 13:47:25.166: INFO: Got endpoints: latency-svc-2jrjs [749.572061ms] Apr 22 13:47:25.182: INFO: Created: latency-svc-w2djg Apr 22 13:47:25.225: INFO: Got endpoints: latency-svc-vbrj9 [754.582907ms] Apr 22 13:47:25.259: INFO: Created: latency-svc-cs49m Apr 22 13:47:25.269: INFO: Got endpoints: latency-svc-8jwgp [752.81571ms] Apr 22 13:47:25.289: 
INFO: Created: latency-svc-8ltjr Apr 22 13:47:25.314: INFO: Got endpoints: latency-svc-lzrrx [735.185122ms] Apr 22 13:47:25.328: INFO: Created: latency-svc-z7nrf Apr 22 13:47:25.364: INFO: Got endpoints: latency-svc-ftshw [748.120814ms] Apr 22 13:47:25.379: INFO: Created: latency-svc-mhfpc Apr 22 13:47:25.415: INFO: Got endpoints: latency-svc-gww5t [749.842242ms] Apr 22 13:47:25.432: INFO: Created: latency-svc-k8kvt Apr 22 13:47:25.478: INFO: Got endpoints: latency-svc-7bfwp [759.111231ms] Apr 22 13:47:25.518: INFO: Created: latency-svc-24mn4 Apr 22 13:47:25.521: INFO: Got endpoints: latency-svc-7ksdb [758.121326ms] Apr 22 13:47:25.542: INFO: Created: latency-svc-8698r Apr 22 13:47:25.571: INFO: Got endpoints: latency-svc-rrr94 [756.176809ms] Apr 22 13:47:25.597: INFO: Created: latency-svc-sdscg Apr 22 13:47:25.616: INFO: Got endpoints: latency-svc-g7k84 [751.126455ms] Apr 22 13:47:25.638: INFO: Created: latency-svc-gmfcp Apr 22 13:47:25.665: INFO: Got endpoints: latency-svc-qmr5r [748.963301ms] Apr 22 13:47:25.684: INFO: Created: latency-svc-56f6t Apr 22 13:47:25.714: INFO: Got endpoints: latency-svc-rv6rp [747.518954ms] Apr 22 13:47:25.732: INFO: Created: latency-svc-j7gzb Apr 22 13:47:25.765: INFO: Got endpoints: latency-svc-pzbm7 [749.419561ms] Apr 22 13:47:25.778: INFO: Created: latency-svc-5gnk2 Apr 22 13:47:25.817: INFO: Got endpoints: latency-svc-vjxdx [752.687028ms] Apr 22 13:47:25.829: INFO: Created: latency-svc-6pxl7 Apr 22 13:47:25.864: INFO: Got endpoints: latency-svc-gdzsx [750.080346ms] Apr 22 13:47:25.882: INFO: Created: latency-svc-h95x5 Apr 22 13:47:25.913: INFO: Got endpoints: latency-svc-w2djg [747.189914ms] Apr 22 13:47:25.928: INFO: Created: latency-svc-fbww5 Apr 22 13:47:25.963: INFO: Got endpoints: latency-svc-cs49m [737.566248ms] Apr 22 13:47:25.973: INFO: Created: latency-svc-tpsns Apr 22 13:47:26.016: INFO: Got endpoints: latency-svc-8ltjr [747.694645ms] Apr 22 13:47:26.026: INFO: Created: latency-svc-nrd2c Apr 22 13:47:26.063: INFO: Got 
endpoints: latency-svc-z7nrf [749.388154ms] Apr 22 13:47:26.078: INFO: Created: latency-svc-lcpz9 Apr 22 13:47:26.121: INFO: Got endpoints: latency-svc-mhfpc [757.298931ms] Apr 22 13:47:26.136: INFO: Created: latency-svc-hf7lx Apr 22 13:47:26.165: INFO: Got endpoints: latency-svc-k8kvt [749.075987ms] Apr 22 13:47:26.175: INFO: Created: latency-svc-jfp8s Apr 22 13:47:26.219: INFO: Got endpoints: latency-svc-24mn4 [740.827715ms] Apr 22 13:47:26.264: INFO: Got endpoints: latency-svc-8698r [742.377431ms] Apr 22 13:47:26.317: INFO: Got endpoints: latency-svc-sdscg [745.666561ms] Apr 22 13:47:26.365: INFO: Got endpoints: latency-svc-gmfcp [749.010903ms] Apr 22 13:47:26.413: INFO: Got endpoints: latency-svc-56f6t [747.870082ms] Apr 22 13:47:26.464: INFO: Got endpoints: latency-svc-j7gzb [749.522233ms] Apr 22 13:47:26.521: INFO: Got endpoints: latency-svc-5gnk2 [756.65662ms] Apr 22 13:47:26.568: INFO: Got endpoints: latency-svc-6pxl7 [750.613515ms] Apr 22 13:47:26.617: INFO: Got endpoints: latency-svc-h95x5 [752.157311ms] Apr 22 13:47:26.663: INFO: Got endpoints: latency-svc-fbww5 [750.149579ms] Apr 22 13:47:26.713: INFO: Got endpoints: latency-svc-tpsns [749.939425ms] Apr 22 13:47:26.763: INFO: Got endpoints: latency-svc-nrd2c [746.916342ms] Apr 22 13:47:26.813: INFO: Got endpoints: latency-svc-lcpz9 [749.1526ms] Apr 22 13:47:26.863: INFO: Got endpoints: latency-svc-hf7lx [742.171343ms] Apr 22 13:47:26.914: INFO: Got endpoints: latency-svc-jfp8s [749.502793ms] Apr 22 13:47:26.914: INFO: Latencies: [21.127912ms 41.942171ms 45.825018ms 57.972274ms 66.449762ms 73.560025ms 89.196347ms 94.835634ms 95.56604ms 133.024044ms 140.900399ms 157.20945ms 160.680033ms 181.279833ms 182.602482ms 186.777313ms 187.284089ms 192.481093ms 192.641754ms 196.650891ms 200.019104ms 200.167099ms 200.371506ms 200.377073ms 208.133013ms 210.128963ms 210.364502ms 211.472285ms 218.722535ms 228.20736ms 231.512761ms 234.872297ms 234.901109ms 242.095967ms 243.00574ms 246.073839ms 247.881458ms 248.334545ms 
250.188991ms 250.347077ms 266.075303ms 266.207302ms 291.16825ms 335.427959ms 388.268068ms 432.061543ms 474.891801ms 516.437528ms 568.519859ms 610.315662ms 642.399131ms 683.46662ms 708.043532ms 733.709896ms 735.185122ms 737.566248ms 737.731419ms 738.952747ms 739.790091ms 740.43166ms 740.827715ms 742.068266ms 742.171343ms 742.377431ms 742.882231ms 743.05997ms 745.404189ms 745.61683ms 745.666561ms 745.809383ms 745.819596ms 745.839618ms 745.904524ms 746.145023ms 746.153824ms 746.319813ms 746.916342ms 746.993454ms 746.993949ms 747.189914ms 747.203364ms 747.258479ms 747.281101ms 747.365833ms 747.417611ms 747.518954ms 747.595465ms 747.643172ms 747.658553ms 747.664051ms 747.683653ms 747.694645ms 747.720586ms 747.734081ms 747.81101ms 747.822462ms 747.870082ms 747.898091ms 747.982057ms 747.990772ms 748.120814ms 748.131893ms 748.157127ms 748.169371ms 748.218151ms 748.56498ms 748.606156ms 748.630728ms 748.670988ms 748.733351ms 748.807829ms 748.902652ms 748.960473ms 748.963301ms 749.010903ms 749.054695ms 749.075987ms 749.131351ms 749.1526ms 749.288953ms 749.383088ms 749.383425ms 749.388154ms 749.419561ms 749.421755ms 749.502793ms 749.522233ms 749.560858ms 749.563641ms 749.572061ms 749.588584ms 749.640198ms 749.705055ms 749.724065ms 749.796323ms 749.842242ms 749.851887ms 749.870544ms 749.884167ms 749.908341ms 749.932547ms 749.939425ms 750.017691ms 750.061821ms 750.080346ms 750.095139ms 750.149579ms 750.28007ms 750.291843ms 750.335117ms 750.440411ms 750.455044ms 750.541432ms 750.613515ms 750.923236ms 751.12242ms 751.126455ms 751.140734ms 751.182217ms 751.36209ms 751.522333ms 751.532089ms 751.557065ms 751.835785ms 751.948743ms 751.968601ms 752.040538ms 752.095497ms 752.157311ms 752.247554ms 752.248397ms 752.49651ms 752.533999ms 752.687028ms 752.81571ms 753.082775ms 753.396896ms 753.810316ms 753.962224ms 754.169231ms 754.236496ms 754.253054ms 754.477143ms 754.582907ms 755.147984ms 755.285023ms 756.176809ms 756.65662ms 757.125271ms 757.139511ms 757.298931ms 758.121326ms 759.111231ms 
759.59305ms 760.110286ms 761.508535ms 762.719602ms 763.83333ms 765.193098ms 767.560611ms]
Apr 22 13:47:26.914: INFO: 50 %ile: 748.120814ms
Apr 22 13:47:26.914: INFO: 90 %ile: 754.236496ms
Apr 22 13:47:26.914: INFO: 99 %ile: 765.193098ms
Apr 22 13:47:26.914: INFO: Total sample count: 200
[AfterEach] [sig-network] Service endpoints latency
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 22 13:47:26.914: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svc-latency-7745" for this suite.
•
------------------------------
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 22 13:47:24.672: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename security-context
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should support pod.Spec.SecurityContext.RunAsUser And pod.Spec.SecurityContext.RunAsGroup [LinuxOnly] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating a pod to test pod.Spec.SecurityContext.RunAsUser
Apr 22 13:47:24.754: INFO: Waiting up to 5m0s for pod "security-context-594cdaae-5eda-437d-a339-2403616a9248" in namespace "security-context-3360" to be "Succeeded or Failed"
Apr 22 13:47:24.757: INFO: Pod "security-context-594cdaae-5eda-437d-a339-2403616a9248": Phase="Pending", Reason="", readiness=false. Elapsed: 3.409393ms
Apr 22 13:47:26.762: INFO: Pod "security-context-594cdaae-5eda-437d-a339-2403616a9248": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00836748s
Apr 22 13:47:28.767: INFO: Pod "security-context-594cdaae-5eda-437d-a339-2403616a9248": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013082407s
STEP: Saw pod success
Apr 22 13:47:28.767: INFO: Pod "security-context-594cdaae-5eda-437d-a339-2403616a9248" satisfied condition "Succeeded or Failed"
Apr 22 13:47:28.770: INFO: Trying to get logs from node k8s-upgrade-and-conformance-7gf7we-worker-3u7awl pod security-context-594cdaae-5eda-437d-a339-2403616a9248 container test-container: <nil>
STEP: delete the pod
Apr 22 13:47:28.784: INFO: Waiting for pod security-context-594cdaae-5eda-437d-a339-2403616a9248 to disappear
Apr 22 13:47:28.786: INFO: Pod security-context-594cdaae-5eda-437d-a339-2403616a9248 no longer exists
[AfterEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 22 13:47:28.787: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-3360" for this suite.
•
------------------------------
{"msg":"PASSED [sig-node] Security Context should support pod.Spec.SecurityContext.RunAsUser And pod.Spec.SecurityContext.RunAsGroup [LinuxOnly] [Conformance]","total":-1,"completed":10,"skipped":370,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]"]}
[BeforeEach] [sig-node] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 22 13:47:28.798: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should run through a ConfigMap lifecycle [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: creating a ConfigMap
STEP: fetching the ConfigMap
STEP: patching the ConfigMap
STEP: listing all ConfigMaps in all namespaces with a label selector
STEP: deleting the ConfigMap by collection with a label selector
STEP: listing all ConfigMaps in test namespace
[AfterEach] [sig-node] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 22 13:47:28.846: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-973" for this suite.
•
------------------------------
{"msg":"PASSED [sig-node] ConfigMap should run through a ConfigMap lifecycle [Conformance]","total":-1,"completed":11,"skipped":370,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
{"msg":"PASSED [sig-network] Service endpoints latency should not be very high [Conformance]","total":-1,"completed":12,"skipped":248,"failed":0}
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 22 13:47:26.927: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating a pod to test downward API volume plugin
Apr 22 13:47:26.960: INFO: Waiting up to 5m0s for pod "downwardapi-volume-5c504be6-06b3-436f-a4c2-fbe29d1b43ef" in namespace "downward-api-8220" to be "Succeeded or Failed"
Apr 22 13:47:26.964: INFO: Pod "downwardapi-volume-5c504be6-06b3-436f-a4c2-fbe29d1b43ef": Phase="Pending", Reason="", readiness=false. Elapsed: 3.107618ms
Apr 22 13:47:28.969: INFO: Pod "downwardapi-volume-5c504be6-06b3-436f-a4c2-fbe29d1b43ef": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008681734s
Apr 22 13:47:30.973: INFO: Pod "downwardapi-volume-5c504be6-06b3-436f-a4c2-fbe29d1b43ef": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012312873s
STEP: Saw pod success
Apr 22 13:47:30.973: INFO: Pod "downwardapi-volume-5c504be6-06b3-436f-a4c2-fbe29d1b43ef" satisfied condition "Succeeded or Failed"
Apr 22 13:47:30.975: INFO: Trying to get logs from node k8s-upgrade-and-conformance-7gf7we-md-0-7868f4c856-kmb2d pod downwardapi-volume-5c504be6-06b3-436f-a4c2-fbe29d1b43ef container client-container: <nil>
STEP: delete the pod
Apr 22 13:47:30.990: INFO: Waiting for pod downwardapi-volume-5c504be6-06b3-436f-a4c2-fbe29d1b43ef to disappear
Apr 22 13:47:30.992: INFO: Pod downwardapi-volume-5c504be6-06b3-436f-a4c2-fbe29d1b43ef no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 22 13:47:30.993: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-8220" for this suite.
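The Downward API volume test above mounts a file whose contents come from the container's own cpu limit. A minimal manifest sketch of that pattern (the pod and container names here are illustrative, not the generated names in the log):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-example   # illustrative; the test generates a UUID-suffixed name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "cat /etc/podinfo/cpu_limit"]
    resources:
      limits:
        cpu: "500m"                  # the value surfaced through the mounted file
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: cpu_limit
        resourceFieldRef:
          containerName: client-container
          resource: limits.cpu
```

The test then waits for the pod to reach "Succeeded or Failed" and checks the container log, which is why the log shows the Pending → Succeeded phase polling above.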
•
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance]","total":-1,"completed":13,"skipped":248,"failed":0}
SSSS
------------------------------
[BeforeEach] [sig-storage] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 22 13:47:28.905: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating secret with name s-test-opt-del-7cca1675-603e-46a0-a1c9-ef824d28de1b
STEP: Creating secret with name s-test-opt-upd-8aa98492-007f-4c43-8d43-a08225afcd75
STEP: Creating the pod
Apr 22 13:47:28.948: INFO: The status of Pod pod-secrets-c25b6de7-af95-44b4-8767-4bb44b5c3e3e is Pending, waiting for it to be Running (with Ready = true)
Apr 22 13:47:30.952: INFO: The status of Pod pod-secrets-c25b6de7-af95-44b4-8767-4bb44b5c3e3e is Running (Ready = true)
STEP: Deleting secret s-test-opt-del-7cca1675-603e-46a0-a1c9-ef824d28de1b
STEP: Updating secret s-test-opt-upd-8aa98492-007f-4c43-8d43-a08225afcd75
STEP: Creating secret with name s-test-opt-create-c9a100ad-ec02-4cd7-9ce5-18cee4cfb6e7
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 22 13:47:33.028: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-1113" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance]","total":-1,"completed":12,"skipped":392,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]"]}
SSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-node] Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 22 13:47:31.006: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-node] Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:189
[It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
Apr 22 13:47:31.027: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: creating the pod
STEP: submitting the pod to kubernetes
Apr 22 13:47:31.039: INFO: The status of Pod pod-logs-websocket-6cc85a2e-084c-4c5f-a8bb-8ec6a653c8ed is Pending, waiting for it to be Running (with Ready = true)
Apr 22 13:47:33.045: INFO: The status of Pod pod-logs-websocket-6cc85a2e-084c-4c5f-a8bb-8ec6a653c8ed is Running (Ready = true)
[AfterEach] [sig-node] Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 22 13:47:33.073: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-9670" for this suite.
•
------------------------------
{"msg":"PASSED [sig-node] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]","total":-1,"completed":14,"skipped":252,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-api-machinery] Garbage collector
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 22 13:47:33.073: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for deployment deletion to see if the garbage collector mistakenly deletes the rs
STEP: Gathering metrics
Apr 22 13:47:34.215: INFO: The status of Pod kube-controller-manager-k8s-upgrade-and-conformance-7gf7we-control-plane-jsdzr is Running (Ready = true)
Apr 22 13:47:34.397: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 22 13:47:34.397: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-9695" for this suite.
•
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]","total":-1,"completed":13,"skipped":407,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]"]}
SSSSSSSSS
------------------------------
[BeforeEach] [sig-apps] Job
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 22 13:47:01.319: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should delete a job [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating a job
STEP: Ensuring active pods == parallelism
STEP: delete a job
STEP: deleting Job.batch foo in namespace job-3989, will wait for the garbage collector to delete the pods
Apr 22 13:47:03.424: INFO: Deleting Job.batch foo took: 11.819717ms
Apr 22 13:47:03.525: INFO: Terminating Job.batch foo pods took: 101.073777ms
STEP: Ensuring job was deleted
[AfterEach] [sig-apps] Job
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 22 13:47:36.229: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-3989" for this suite.
•
------------------------------
{"msg":"PASSED [sig-apps] Job should delete a job [Conformance]","total":-1,"completed":15,"skipped":347,"failed":0}
SSSS
------------------------------
[BeforeEach] [sig-storage] Projected configMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 22 13:47:33.130: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating configMap with name cm-test-opt-del-5ec9debc-abcb-473b-aded-1a03faa32fea
STEP: Creating configMap with name cm-test-opt-upd-6893b9fa-286e-4ff5-9b87-584f5beb91e8
STEP: Creating the pod
Apr 22 13:47:33.231: INFO: The status of Pod pod-projected-configmaps-00faa662-d7a0-460b-b9c9-ac0a53d469a7 is Pending, waiting for it to be Running (with Ready = true)
Apr 22 13:47:35.237: INFO: The status of Pod pod-projected-configmaps-00faa662-d7a0-460b-b9c9-ac0a53d469a7 is Running (Ready = true)
STEP: Deleting configmap cm-test-opt-del-5ec9debc-abcb-473b-aded-1a03faa32fea
STEP: Updating configmap cm-test-opt-upd-6893b9fa-286e-4ff5-9b87-584f5beb91e8
STEP: Creating configMap with name cm-test-opt-create-d475cf24-92ef-449b-8a00-e900761aa0bb
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 22 13:47:37.312: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5376" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":-1,"completed":15,"skipped":275,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-node] Downward API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 22 13:47:34.442: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating a pod to test downward api env vars
Apr 22 13:47:34.485: INFO: Waiting up to 5m0s for pod "downward-api-08dc9e44-2ae3-40f0-b30d-944d47948130" in namespace "downward-api-7108" to be "Succeeded or Failed"
Apr 22 13:47:34.489: INFO: Pod "downward-api-08dc9e44-2ae3-40f0-b30d-944d47948130": Phase="Pending", Reason="", readiness=false. Elapsed: 3.613839ms
Apr 22 13:47:36.493: INFO: Pod "downward-api-08dc9e44-2ae3-40f0-b30d-944d47948130": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008022611s
Apr 22 13:47:38.499: INFO: Pod "downward-api-08dc9e44-2ae3-40f0-b30d-944d47948130": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013571596s
STEP: Saw pod success
Apr 22 13:47:38.499: INFO: Pod "downward-api-08dc9e44-2ae3-40f0-b30d-944d47948130" satisfied condition "Succeeded or Failed"
Apr 22 13:47:38.502: INFO: Trying to get logs from node k8s-upgrade-and-conformance-7gf7we-worker-wwgoid pod downward-api-08dc9e44-2ae3-40f0-b30d-944d47948130 container dapi-container: <nil>
STEP: delete the pod
Apr 22 13:47:38.515: INFO: Waiting for pod downward-api-08dc9e44-2ae3-40f0-b30d-944d47948130 to disappear
Apr 22 13:47:38.519: INFO: Pod downward-api-08dc9e44-2ae3-40f0-b30d-944d47948130 no longer exists
[AfterEach] [sig-node] Downward API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 22 13:47:38.519: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-7108" for this suite.
•
------------------------------
{"msg":"PASSED [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]","total":-1,"completed":14,"skipped":416,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]"]}
S
------------------------------
[BeforeEach] [sig-node] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 22 13:47:38.532: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should patch a secret [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: creating a secret
STEP: listing secrets in all namespaces to ensure that there are more than zero
STEP: patching the secret
STEP: deleting the secret using a LabelSelector
STEP: listing secrets in all namespaces, searching for label name and value in patch
[AfterEach] [sig-node] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 22 13:47:38.589: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-4059" for this suite.
•
------------------------------
{"msg":"PASSED [sig-node] Secrets should patch a secret [Conformance]","total":-1,"completed":15,"skipped":417,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 22 13:47:36.258: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:244
[It] should create services for rc [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: creating Agnhost RC
Apr 22 13:47:36.277: INFO: namespace kubectl-6673
Apr 22 13:47:36.277: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-6673 create -f -'
Apr 22 13:47:36.580: INFO: stderr: ""
Apr 22 13:47:36.580: INFO: stdout: "replicationcontroller/agnhost-primary created\n"
STEP: Waiting for Agnhost primary to start.
Apr 22 13:47:37.584: INFO: Selector matched 1 pods for map[app:agnhost]
Apr 22 13:47:37.584: INFO: Found 0 / 1
Apr 22 13:47:38.586: INFO: Selector matched 1 pods for map[app:agnhost]
Apr 22 13:47:38.586: INFO: Found 1 / 1
Apr 22 13:47:38.586: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1
Apr 22 13:47:38.589: INFO: Selector matched 1 pods for map[app:agnhost]
Apr 22 13:47:38.589: INFO: ForEach: Found 1 pods from the filter. Now looping through them.
Apr 22 13:47:38.589: INFO: wait on agnhost-primary startup in kubectl-6673
Apr 22 13:47:38.589: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-6673 logs agnhost-primary-spsv2 agnhost-primary'
Apr 22 13:47:38.720: INFO: stderr: ""
Apr 22 13:47:38.720: INFO: stdout: "Paused\n"
STEP: exposing RC
Apr 22 13:47:38.720: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-6673 expose rc agnhost-primary --name=rm2 --port=1234 --target-port=6379'
Apr 22 13:47:38.828: INFO: stderr: ""
Apr 22 13:47:38.828: INFO: stdout: "service/rm2 exposed\n"
Apr 22 13:47:38.831: INFO: Service rm2 in namespace kubectl-6673 found.
Apr 22 13:47:40.835: INFO: Get endpoints failed (interval 2s): endpoints "rm2" not found
Apr 22 13:47:42.836: INFO: Get endpoints failed (interval 2s): endpoints "rm2" not found
Apr 22 13:47:44.835: INFO: Get endpoints failed (interval 2s): endpoints "rm2" not found
STEP: exposing service
Apr 22 13:47:46.838: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-6673 expose service rm2 --name=rm3 --port=2345 --target-port=6379'
Apr 22 13:47:46.927: INFO: stderr: ""
Apr 22 13:47:46.927: INFO: stdout: "service/rm3 exposed\n"
Apr 22 13:47:46.934: INFO: Service rm3 in namespace kubectl-6673 found.
[AfterEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 22 13:47:48.941: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-6673" for this suite.
•
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl expose should create services for rc [Conformance]","total":-1,"completed":16,"skipped":351,"failed":0}
SSSSSSSSS
------------------------------
[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 22 13:47:37.368: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename crd-webhook
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:126
STEP: Setting up server cert
STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication
STEP: Deploying the custom resource conversion webhook pod
STEP: Wait for the deployment to be ready
Apr 22 13:47:37.992: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Apr 22 13:47:41.021: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1
Apr 22 13:47:42.020: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1
Apr 22 13:47:43.021: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1
Apr 22 13:47:44.020: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1
Apr 22 13:47:45.021: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1
Apr 22 13:47:46.021: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1
Apr 22 13:47:47.020: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1
[It] should be able to convert from CR v1 to CR v2 [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
Apr 22 13:47:47.024: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Creating a v1 custom resource
STEP: v2 custom resource should be converted
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 22 13:47:50.134: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-webhook-2479" for this suite.
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:137
•
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","total":-1,"completed":16,"skipped":311,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-apps] ReplicaSet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 22 13:47:48.967: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] Replicaset should have a working scale subresource [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating replica set "test-rs" that asks for more than the allowed pod quota
Apr 22 13:47:49.013: INFO: Pod name sample-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
STEP: getting scale subresource
STEP: updating a scale subresource
STEP: verifying the replicaset Spec.Replicas was modified
STEP: Patch a scale subresource
[AfterEach] [sig-apps] ReplicaSet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 22 13:47:51.047: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replicaset-6872" for this suite.
•
------------------------------
{"msg":"PASSED [sig-apps] ReplicaSet Replicaset should have a working scale subresource [Conformance]","total":-1,"completed":17,"skipped":360,"failed":0}
S
------------------------------
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 22 13:47:51.065: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:749
[It] should delete a collection of services [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: creating a collection of services
Apr 22 13:47:51.090: INFO: Creating e2e-svc-a-hrsms
Apr 22 13:47:51.099: INFO: Creating e2e-svc-b-92gx7
Apr 22 13:47:51.109: INFO: Creating e2e-svc-c-67grq
STEP: deleting service collection
Apr 22 13:47:51.147: INFO: Collection of services has been deleted
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 22 13:47:51.147: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-274" for this suite.
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:753
•
------------------------------
{"msg":"PASSED [sig-network] Services should delete a collection of services [Conformance]","total":-1,"completed":18,"skipped":361,"failed":0}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 22 13:47:50.277: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating a pod to test emptydir 0777 on tmpfs
Apr 22 13:47:50.315: INFO: Waiting up to 5m0s for pod "pod-3c2035a5-2dff-4761-9632-abdc6ab44aed" in namespace "emptydir-3389" to be "Succeeded or Failed"
Apr 22 13:47:50.323: INFO: Pod "pod-3c2035a5-2dff-4761-9632-abdc6ab44aed": Phase="Pending", Reason="", readiness=false. Elapsed: 7.530565ms
Apr 22 13:47:52.326: INFO: Pod "pod-3c2035a5-2dff-4761-9632-abdc6ab44aed": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011212184s
Apr 22 13:47:54.331: INFO: Pod "pod-3c2035a5-2dff-4761-9632-abdc6ab44aed": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.016009827s
STEP: Saw pod success
Apr 22 13:47:54.331: INFO: Pod "pod-3c2035a5-2dff-4761-9632-abdc6ab44aed" satisfied condition "Succeeded or Failed"
Apr 22 13:47:54.334: INFO: Trying to get logs from node k8s-upgrade-and-conformance-7gf7we-worker-3u7awl pod pod-3c2035a5-2dff-4761-9632-abdc6ab44aed container test-container: <nil>
STEP: delete the pod
Apr 22 13:47:54.348: INFO: Waiting for pod pod-3c2035a5-2dff-4761-9632-abdc6ab44aed to disappear
Apr 22 13:47:54.350: INFO: Pod pod-3c2035a5-2dff-4761-9632-abdc6ab44aed no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 22 13:47:54.351: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-3389" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":17,"skipped":355,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-node] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 22 13:47:51.186: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: creating secret secrets-8456/secret-test-1eb06627-afda-4d46-8ae9-a099af7d04cf
STEP: Creating a pod to test consume secrets
Apr 22 13:47:51.221: INFO: Waiting up to 5m0s for pod "pod-configmaps-20ba3fd9-8e15-4bcd-895e-309a33d587ab" in namespace "secrets-8456" to be "Succeeded or Failed"
Apr 22 13:47:51.224: INFO: Pod "pod-configmaps-20ba3fd9-8e15-4bcd-895e-309a33d587ab": Phase="Pending", Reason="", readiness=false. Elapsed: 3.176186ms
Apr 22 13:47:53.229: INFO: Pod "pod-configmaps-20ba3fd9-8e15-4bcd-895e-309a33d587ab": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008003836s
Apr 22 13:47:55.234: INFO: Pod "pod-configmaps-20ba3fd9-8e15-4bcd-895e-309a33d587ab": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013045915s
STEP: Saw pod success
Apr 22 13:47:55.234: INFO: Pod "pod-configmaps-20ba3fd9-8e15-4bcd-895e-309a33d587ab" satisfied condition "Succeeded or Failed"
Apr 22 13:47:55.239: INFO: Trying to get logs from node k8s-upgrade-and-conformance-7gf7we-md-0-7868f4c856-k4p2k pod pod-configmaps-20ba3fd9-8e15-4bcd-895e-309a33d587ab container env-test: <nil>
STEP: delete the pod
Apr 22 13:47:55.263: INFO: Waiting for pod pod-configmaps-20ba3fd9-8e15-4bcd-895e-309a33d587ab to disappear
Apr 22 13:47:55.268: INFO: Pod pod-configmaps-20ba3fd9-8e15-4bcd-895e-309a33d587ab no longer exists
[AfterEach] [sig-node] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 22 13:47:55.269: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-8456" for this suite.
•
------------------------------
{"msg":"PASSED [sig-node] Secrets should be consumable via the environment [NodeConformance] [Conformance]","total":-1,"completed":19,"skipped":382,"failed":0}
------------------------------
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 22 13:47:54.405: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating a pod to test downward API volume plugin
Apr 22 13:47:54.432: INFO: Waiting up to 5m0s for pod "downwardapi-volume-ca69ba60-0cd5-46f5-8939-0a7b01b4bac3" in namespace "downward-api-5603" to be "Succeeded or Failed"
Apr 22 13:47:54.435: INFO: Pod "downwardapi-volume-ca69ba60-0cd5-46f5-8939-0a7b01b4bac3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.847405ms
Apr 22 13:47:56.438: INFO: Pod "downwardapi-volume-ca69ba60-0cd5-46f5-8939-0a7b01b4bac3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006244449s
Apr 22 13:47:58.443: INFO: Pod "downwardapi-volume-ca69ba60-0cd5-46f5-8939-0a7b01b4bac3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01134142s
STEP: Saw pod success
Apr 22 13:47:58.443: INFO: Pod "downwardapi-volume-ca69ba60-0cd5-46f5-8939-0a7b01b4bac3" satisfied condition "Succeeded or Failed"
Apr 22 13:47:58.447: INFO: Trying to get logs from node k8s-upgrade-and-conformance-7gf7we-md-0-7868f4c856-k4p2k pod downwardapi-volume-ca69ba60-0cd5-46f5-8939-0a7b01b4bac3 container client-container: <nil>
STEP: delete the pod
Apr 22 13:47:58.461: INFO: Waiting for pod downwardapi-volume-ca69ba60-0cd5-46f5-8939-0a7b01b4bac3 to disappear
Apr 22 13:47:58.464: INFO: Pod downwardapi-volume-ca69ba60-0cd5-46f5-8939-0a7b01b4bac3 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 22 13:47:58.464: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-5603" for this suite.
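The spec above creates a pod whose downward API volume publishes the container's memory limit; because the container sets no limit, the kubelet falls back to the node's allocatable memory. A minimal manifest of that shape (names and image are illustrative, not the generated ones from this run) looks like:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-example   # illustrative; the e2e test generates a UUID-based name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox:1.36              # illustrative; the suite uses its own test images
    command: ["sh", "-c", "cat /etc/podinfo/memory_limit"]
    # no resources.limits.memory set, so the downward API reports node allocatable memory
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: memory_limit
        resourceFieldRef:
          containerName: client-container
          resource: limits.memory
```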
•
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":-1,"completed":18,"skipped":388,"failed":0}
------------------------------
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 22 13:47:55.338: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:244
[BeforeEach] Kubectl label
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1333
STEP: creating the pod
Apr 22 13:47:55.360: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-9485 create -f -'
Apr 22 13:47:56.312: INFO: stderr: ""
Apr 22 13:47:56.312: INFO: stdout: "pod/pause created\n"
Apr 22 13:47:56.312: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause]
Apr 22 13:47:56.312: INFO: Waiting up to 5m0s for pod "pause" in namespace "kubectl-9485" to be "running and ready"
Apr 22 13:47:56.318: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 6.339596ms
Apr 22 13:47:58.322: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 2.009482512s
Apr 22 13:47:58.322: INFO: Pod "pause" satisfied condition "running and ready"
Apr 22 13:47:58.322: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [pause]
[It] should update the label on a resource [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: adding the label testing-label with value testing-label-value to a pod
Apr 22 13:47:58.322: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-9485 label pods pause testing-label=testing-label-value'
Apr 22 13:47:58.416: INFO: stderr: ""
Apr 22 13:47:58.416: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod has the label testing-label with the value testing-label-value
Apr 22 13:47:58.416: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-9485 get pod pause -L testing-label'
Apr 22 13:47:58.498: INFO: stderr: ""
Apr 22 13:47:58.498: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 2s testing-label-value\n"
STEP: removing the label testing-label of a pod
Apr 22 13:47:58.498: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-9485 label pods pause testing-label-'
Apr 22 13:47:58.597: INFO: stderr: ""
Apr 22 13:47:58.597: INFO: stdout: "pod/pause unlabeled\n"
STEP: verifying the pod doesn't have the label testing-label
Apr 22 13:47:58.597: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-9485 get pod pause -L testing-label'
Apr 22 13:47:58.667: INFO: stderr: ""
Apr 22 13:47:58.667: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 2s \n"
[AfterEach] Kubectl label
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1339
STEP: using delete to clean up resources
Apr 22 13:47:58.667: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-9485 delete --grace-period=0 --force -f -'
Apr 22 13:47:58.746: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Apr 22 13:47:58.746: INFO: stdout: "pod \"pause\" force deleted\n"
Apr 22 13:47:58.746: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-9485 get rc,svc -l name=pause --no-headers'
Apr 22 13:47:58.819: INFO: stderr: "No resources found in kubectl-9485 namespace.\n"
Apr 22 13:47:58.819: INFO: stdout: ""
Apr 22 13:47:58.819: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-9485 get pods -l name=pause -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Apr 22 13:47:58.888: INFO: stderr: ""
Apr 22 13:47:58.888: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 22 13:47:58.888: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-9485" for this suite.
•
------------------------------
[BeforeEach] [sig-node] Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 22 13:47:58.524: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-node] Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:189
[It] should contain environment variables for services [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
Apr 22 13:47:58.563: INFO: The status of Pod server-envvars-d8200267-26c7-487d-a833-d51864de5e78 is Pending, waiting for it to be Running (with Ready = true)
Apr 22 13:48:00.567: INFO: The status of Pod server-envvars-d8200267-26c7-487d-a833-d51864de5e78 is Running (Ready = true)
Apr 22 13:48:00.592: INFO: Waiting up to 5m0s for pod "client-envvars-05d26e2d-3ec8-414f-af2e-d1318996eb71" in namespace "pods-8924" to be "Succeeded or Failed"
Apr 22 13:48:00.598: INFO: Pod "client-envvars-05d26e2d-3ec8-414f-af2e-d1318996eb71": Phase="Pending", Reason="", readiness=false. Elapsed: 5.875883ms
Apr 22 13:48:02.603: INFO: Pod "client-envvars-05d26e2d-3ec8-414f-af2e-d1318996eb71": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011032261s
Apr 22 13:48:04.607: INFO: Pod "client-envvars-05d26e2d-3ec8-414f-af2e-d1318996eb71": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.015212668s
STEP: Saw pod success
Apr 22 13:48:04.607: INFO: Pod "client-envvars-05d26e2d-3ec8-414f-af2e-d1318996eb71" satisfied condition "Succeeded or Failed"
Apr 22 13:48:04.610: INFO: Trying to get logs from node k8s-upgrade-and-conformance-7gf7we-md-0-7868f4c856-k4p2k pod client-envvars-05d26e2d-3ec8-414f-af2e-d1318996eb71 container env3cont: <nil>
STEP: delete the pod
Apr 22 13:48:04.626: INFO: Waiting for pod client-envvars-05d26e2d-3ec8-414f-af2e-d1318996eb71 to disappear
Apr 22 13:48:04.629: INFO: Pod client-envvars-05d26e2d-3ec8-414f-af2e-d1318996eb71 no longer exists
[AfterEach] [sig-node] Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 22 13:48:04.629: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-8924" for this suite.
•
------------------------------
{"msg":"PASSED [sig-node] Pods should contain environment variables for services [NodeConformance] [Conformance]","total":-1,"completed":19,"skipped":415,"failed":0}
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl label should update the label on a resource [Conformance]","total":-1,"completed":20,"skipped":414,"failed":0}
[BeforeEach] [sig-apps] DisruptionController
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 22 13:47:58.900: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename disruption
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-apps] DisruptionController
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/disruption.go:69
[It] should create a PodDisruptionBudget [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: creating the pdb
STEP: Waiting for the pdb to be processed
STEP: updating the pdb
STEP: Waiting for the pdb to be processed
STEP: patching the pdb
STEP: Waiting for the pdb to be processed
STEP: Waiting for the pdb to be deleted
[AfterEach] [sig-apps] DisruptionController
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 22 13:48:04.966: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "disruption-3506" for this suite.
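The DisruptionController spec above creates, updates, and patches a PodDisruptionBudget and waits for each change to be processed. A minimal PDB of the kind it exercises (names and selector are illustrative, not taken from this run) looks like:

```yaml
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: example-pdb        # illustrative; the e2e test uses a generated name
  namespace: disruption-3506
spec:
  minAvailable: 1          # the test then updates and patches fields like this one
  selector:
    matchLabels:
      app: example         # illustrative selector
```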
•
------------------------------
{"msg":"PASSED [sig-apps] DisruptionController should create a PodDisruptionBudget [Conformance]","total":-1,"completed":21,"skipped":414,"failed":0}
------------------------------
[BeforeEach] [sig-api-machinery] Discovery
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 22 13:48:04.706: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename discovery
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-api-machinery] Discovery
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/discovery.go:39
STEP: Setting up server cert
[It] should validate PreferredVersion for each APIGroup [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
Apr 22 13:48:05.367: INFO: Checking APIGroup: apiregistration.k8s.io
Apr 22 13:48:05.368: INFO: PreferredVersion.GroupVersion: apiregistration.k8s.io/v1
Apr 22 13:48:05.368: INFO: Versions found [{apiregistration.k8s.io/v1 v1}]
Apr 22 13:48:05.368: INFO: apiregistration.k8s.io/v1 matches apiregistration.k8s.io/v1
Apr 22 13:48:05.368: INFO: Checking APIGroup: apps
Apr 22 13:48:05.369: INFO: PreferredVersion.GroupVersion: apps/v1
Apr 22 13:48:05.369: INFO: Versions found [{apps/v1 v1}]
Apr 22 13:48:05.369: INFO: apps/v1 matches apps/v1
Apr 22 13:48:05.369: INFO: Checking APIGroup: events.k8s.io
Apr 22 13:48:05.370: INFO: PreferredVersion.GroupVersion: events.k8s.io/v1
Apr 22 13:48:05.371: INFO: Versions found [{events.k8s.io/v1 v1} {events.k8s.io/v1beta1 v1beta1}]
Apr 22 13:48:05.371: INFO: events.k8s.io/v1 matches events.k8s.io/v1
Apr 22 13:48:05.371: INFO: Checking APIGroup: authentication.k8s.io
Apr 22 13:48:05.372: INFO: PreferredVersion.GroupVersion: authentication.k8s.io/v1
Apr 22 13:48:05.372: INFO: Versions found [{authentication.k8s.io/v1 v1}]
Apr 22 13:48:05.372: INFO: authentication.k8s.io/v1 matches authentication.k8s.io/v1
Apr 22 13:48:05.372: INFO: Checking APIGroup: authorization.k8s.io
Apr 22 13:48:05.373: INFO: PreferredVersion.GroupVersion: authorization.k8s.io/v1
Apr 22 13:48:05.373: INFO: Versions found [{authorization.k8s.io/v1 v1}]
Apr 22 13:48:05.373: INFO: authorization.k8s.io/v1 matches authorization.k8s.io/v1
Apr 22 13:48:05.373: INFO: Checking APIGroup: autoscaling
Apr 22 13:48:05.374: INFO: PreferredVersion.GroupVersion: autoscaling/v2
Apr 22 13:48:05.374: INFO: Versions found [{autoscaling/v2 v2} {autoscaling/v1 v1} {autoscaling/v2beta1 v2beta1} {autoscaling/v2beta2 v2beta2}]
Apr 22 13:48:05.374: INFO: autoscaling/v2 matches autoscaling/v2
Apr 22 13:48:05.374: INFO: Checking APIGroup: batch
Apr 22 13:48:05.376: INFO: PreferredVersion.GroupVersion: batch/v1
Apr 22 13:48:05.376: INFO: Versions found [{batch/v1 v1} {batch/v1beta1 v1beta1}]
Apr 22 13:48:05.376: INFO: batch/v1 matches batch/v1
Apr 22 13:48:05.376: INFO: Checking APIGroup: certificates.k8s.io
Apr 22 13:48:05.377: INFO: PreferredVersion.GroupVersion: certificates.k8s.io/v1
Apr 22 13:48:05.377: INFO: Versions found [{certificates.k8s.io/v1 v1}]
Apr 22 13:48:05.377: INFO: certificates.k8s.io/v1 matches certificates.k8s.io/v1
Apr 22 13:48:05.377: INFO: Checking APIGroup: networking.k8s.io
Apr 22 13:48:05.378: INFO: PreferredVersion.GroupVersion: networking.k8s.io/v1
Apr 22 13:48:05.378: INFO: Versions found [{networking.k8s.io/v1 v1}]
Apr 22 13:48:05.378: INFO: networking.k8s.io/v1 matches networking.k8s.io/v1
Apr 22 13:48:05.378: INFO: Checking APIGroup: policy
Apr 22 13:48:05.379: INFO: PreferredVersion.GroupVersion: policy/v1
Apr 22 13:48:05.379: INFO: Versions found [{policy/v1 v1} {policy/v1beta1 v1beta1}]
Apr 22 13:48:05.379: INFO: policy/v1 matches policy/v1
Apr 22 13:48:05.379: INFO: Checking APIGroup: rbac.authorization.k8s.io
Apr 22 13:48:05.380: INFO: PreferredVersion.GroupVersion: rbac.authorization.k8s.io/v1
Apr 22 13:48:05.380: INFO: Versions found [{rbac.authorization.k8s.io/v1 v1}]
Apr 22 13:48:05.380: INFO: rbac.authorization.k8s.io/v1 matches rbac.authorization.k8s.io/v1
Apr 22 13:48:05.380: INFO: Checking APIGroup: storage.k8s.io
Apr 22 13:48:05.381: INFO: PreferredVersion.GroupVersion: storage.k8s.io/v1
Apr 22 13:48:05.381: INFO: Versions found [{storage.k8s.io/v1 v1} {storage.k8s.io/v1beta1 v1beta1}]
Apr 22 13:48:05.381: INFO: storage.k8s.io/v1 matches storage.k8s.io/v1
Apr 22 13:48:05.381: INFO: Checking APIGroup: admissionregistration.k8s.io
Apr 22 13:48:05.382: INFO: PreferredVersion.GroupVersion: admissionregistration.k8s.io/v1
Apr 22 13:48:05.382: INFO: Versions found [{admissionregistration.k8s.io/v1 v1}]
Apr 22 13:48:05.382: INFO: admissionregistration.k8s.io/v1 matches admissionregistration.k8s.io/v1
Apr 22 13:48:05.382: INFO: Checking APIGroup: apiextensions.k8s.io
Apr 22 13:48:05.383: INFO: PreferredVersion.GroupVersion: apiextensions.k8s.io/v1
Apr 22 13:48:05.383: INFO: Versions found [{apiextensions.k8s.io/v1 v1}]
Apr 22 13:48:05.383: INFO: apiextensions.k8s.io/v1 matches apiextensions.k8s.io/v1
Apr 22 13:48:05.383: INFO: Checking APIGroup: scheduling.k8s.io
Apr 22 13:48:05.385: INFO: PreferredVersion.GroupVersion: scheduling.k8s.io/v1
Apr 22 13:48:05.385: INFO: Versions found [{scheduling.k8s.io/v1 v1}]
Apr 22 13:48:05.385: INFO: scheduling.k8s.io/v1 matches scheduling.k8s.io/v1
Apr 22 13:48:05.385: INFO: Checking APIGroup: coordination.k8s.io
Apr 22 13:48:05.386: INFO: PreferredVersion.GroupVersion: coordination.k8s.io/v1
Apr 22 13:48:05.386: INFO: Versions found [{coordination.k8s.io/v1 v1}]
Apr 22 13:48:05.386: INFO: coordination.k8s.io/v1 matches coordination.k8s.io/v1
Apr 22 13:48:05.386: INFO: Checking APIGroup: node.k8s.io
Apr 22 13:48:05.387: INFO: PreferredVersion.GroupVersion: node.k8s.io/v1
Apr 22 13:48:05.387: INFO: Versions found [{node.k8s.io/v1 v1} {node.k8s.io/v1beta1 v1beta1}]
Apr 22 13:48:05.387: INFO: node.k8s.io/v1 matches node.k8s.io/v1
Apr 22 13:48:05.387: INFO: Checking APIGroup: discovery.k8s.io
Apr 22 13:48:05.389: INFO: PreferredVersion.GroupVersion: discovery.k8s.io/v1
Apr 22 13:48:05.389: INFO: Versions found [{discovery.k8s.io/v1 v1} {discovery.k8s.io/v1beta1 v1beta1}]
Apr 22 13:48:05.389: INFO: discovery.k8s.io/v1 matches discovery.k8s.io/v1
Apr 22 13:48:05.389: INFO: Checking APIGroup: flowcontrol.apiserver.k8s.io
Apr 22 13:48:05.390: INFO: PreferredVersion.GroupVersion: flowcontrol.apiserver.k8s.io/v1beta2
Apr 22 13:48:05.390: INFO: Versions found [{flowcontrol.apiserver.k8s.io/v1beta2 v1beta2} {flowcontrol.apiserver.k8s.io/v1beta1 v1beta1}]
Apr 22 13:48:05.390: INFO: flowcontrol.apiserver.k8s.io/v1beta2 matches flowcontrol.apiserver.k8s.io/v1beta2
[AfterEach] [sig-api-machinery] Discovery
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 22 13:48:05.390: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "discovery-2212" for this suite.
•
------------------------------
{"msg":"PASSED [sig-api-machinery] Discovery should validate PreferredVersion for each APIGroup [Conformance]","total":-1,"completed":20,"skipped":464,"failed":0}
------------------------------
[BeforeEach] [sig-node] Kubelet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 22 13:48:04.991: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-node] Kubelet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/kubelet.go:38
[It] should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
Apr 22 13:48:05.021: INFO: The status of Pod busybox-host-aliases9a4fefa2-aca3-4f21-9fe1-b3baa81bddc9 is Pending, waiting for it to be Running (with Ready = true)
Apr 22 13:48:07.026: INFO: The status of Pod busybox-host-aliases9a4fefa2-aca3-4f21-9fe1-b3baa81bddc9 is Running (Ready = true)
[AfterEach] [sig-node] Kubelet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 22 13:48:07.033: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-5581" for this suite.
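The Kubelet spec above schedules a busybox pod with `hostAliases` and verifies the kubelet writes the aliases into the container's /etc/hosts. A minimal pod of that shape (name, IP, and hostname are illustrative) looks like:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: busybox-host-aliases-example   # illustrative; the e2e test appends a UUID
spec:
  restartPolicy: Never
  hostAliases:
  - ip: "127.0.0.1"                    # illustrative alias entry
    hostnames: ["test.hostalias"]
  containers:
  - name: busybox
    image: busybox:1.36
    # the alias above should appear as a line in the container's /etc/hosts
    command: ["sh", "-c", "cat /etc/hosts"]
```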
•
------------------------------
{"msg":"PASSED [sig-node] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":22,"skipped":426,"failed":0}
------------------------------
[BeforeEach] [sig-apps] ReplicationController
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 22 13:48:05.406: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-apps] ReplicationController
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:54
[It] should release no longer matching pods [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Given a ReplicationController is created
STEP: When the matched label of one of its pods change
Apr 22 13:48:05.439: INFO: Pod name pod-release: Found 0 pods out of 1
Apr 22 13:48:10.448: INFO: Pod name pod-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicationController
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 22 13:48:11.464: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-8614" for this suite.
•
------------------------------
{"msg":"PASSED [sig-apps] ReplicationController should release no longer matching pods [Conformance]","total":-1,"completed":21,"skipped":468,"failed":0}
------------------------------
[BeforeEach] [sig-node] Variable Expansion
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 22 13:48:11.573: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating a pod to test env composition
Apr 22 13:48:11.604: INFO: Waiting up to 5m0s for pod "var-expansion-462b4624-11c7-4cfc-b7cb-95b7ce784d96" in namespace "var-expansion-2842" to be "Succeeded or Failed"
Apr 22 13:48:11.609: INFO: Pod "var-expansion-462b4624-11c7-4cfc-b7cb-95b7ce784d96": Phase="Pending", Reason="", readiness=false. Elapsed: 4.996377ms
Apr 22 13:48:13.614: INFO: Pod "var-expansion-462b4624-11c7-4cfc-b7cb-95b7ce784d96": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010032182s
Apr 22 13:48:15.619: INFO: Pod "var-expansion-462b4624-11c7-4cfc-b7cb-95b7ce784d96": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.014344391s
STEP: Saw pod success
Apr 22 13:48:15.619: INFO: Pod "var-expansion-462b4624-11c7-4cfc-b7cb-95b7ce784d96" satisfied condition "Succeeded or Failed"
Apr 22 13:48:15.622: INFO: Trying to get logs from node k8s-upgrade-and-conformance-7gf7we-worker-3u7awl pod var-expansion-462b4624-11c7-4cfc-b7cb-95b7ce784d96 container dapi-container: <nil>
STEP: delete the pod
Apr 22 13:48:15.634: INFO: Waiting for pod var-expansion-462b4624-11c7-4cfc-b7cb-95b7ce784d96 to disappear
Apr 22 13:48:15.638: INFO: Pod var-expansion-462b4624-11c7-4cfc-b7cb-95b7ce784d96 no longer exists
[AfterEach] [sig-node] Variable Expansion
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 22 13:48:15.638: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-2842" for this suite.
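The Variable Expansion spec above verifies that one env var can be composed from another using `$(VAR)` references, which the kubelet expands before starting the container. A minimal pod of that shape (names and values are illustrative) looks like:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: var-expansion-example   # illustrative; the e2e test generates a UUID-based name
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox:1.36
    command: ["sh", "-c", "env"]
    env:
    - name: FOO
      value: "foo-value"
    - name: COMPOSED
      value: "prefix-$(FOO)-suffix"   # $(FOO) is expanded to foo-value by the kubelet
```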
•
------------------------------
{"msg":"PASSED [sig-node] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance]","total":-1,"completed":22,"skipped":528,"failed":0}
------------------------------
[BeforeEach] [sig-node] Downward API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 22 13:48:15.667: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should provide host IP as an env var [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating a pod to test downward api env vars
Apr 22 13:48:15.696: INFO: Waiting up to 5m0s for pod "downward-api-269e1f1b-cd77-4426-940a-a6802e038991" in namespace "downward-api-2018" to be "Succeeded or Failed"
Apr 22 13:48:15.700: INFO: Pod "downward-api-269e1f1b-cd77-4426-940a-a6802e038991": Phase="Pending", Reason="", readiness=false. Elapsed: 2.993632ms
Apr 22 13:48:17.703: INFO: Pod "downward-api-269e1f1b-cd77-4426-940a-a6802e038991": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006784734s
Apr 22 13:48:19.707: INFO: Pod "downward-api-269e1f1b-cd77-4426-940a-a6802e038991": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010958638s
STEP: Saw pod success
Apr 22 13:48:19.708: INFO: Pod "downward-api-269e1f1b-cd77-4426-940a-a6802e038991" satisfied condition "Succeeded or Failed"
Apr 22 13:48:19.711: INFO: Trying to get logs from node k8s-upgrade-and-conformance-7gf7we-md-0-7868f4c856-kmb2d pod downward-api-269e1f1b-cd77-4426-940a-a6802e038991 container dapi-container: <nil>
STEP: delete the pod
Apr 22 13:48:19.726: INFO: Waiting for pod downward-api-269e1f1b-cd77-4426-940a-a6802e038991 to disappear
Apr 22 13:48:19.729: INFO: Pod downward-api-269e1f1b-cd77-4426-940a-a6802e038991 no longer exists
[AfterEach] [sig-node] Downward API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 22 13:48:19.729: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-2018" for this suite.
•
------------------------------
{"msg":"PASSED [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance]","total":-1,"completed":23,"skipped":543,"failed":0}
------------------------------
[BeforeEach] [sig-node] Downward API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 22 13:48:19.765: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating a pod to test downward api env vars
Apr 22 13:48:19.792: INFO: Waiting up to 5m0s for pod "downward-api-b21812bc-df2e-4f21-8364-4a1f0615e113" in namespace "downward-api-7338" to be "Succeeded or Failed"
Apr 22 13:48:19.795: INFO: Pod "downward-api-b21812bc-df2e-4f21-8364-4a1f0615e113": Phase="Pending", Reason="", readiness=false. Elapsed: 2.657154ms
Apr 22 13:48:21.799: INFO: Pod "downward-api-b21812bc-df2e-4f21-8364-4a1f0615e113": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006722778s
Apr 22 13:48:23.804: INFO: Pod "downward-api-b21812bc-df2e-4f21-8364-4a1f0615e113": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012351108s
STEP: Saw pod success
Apr 22 13:48:23.804: INFO: Pod "downward-api-b21812bc-df2e-4f21-8364-4a1f0615e113" satisfied condition "Succeeded or Failed"
Apr 22 13:48:23.807: INFO: Trying to get logs from node k8s-upgrade-and-conformance-7gf7we-worker-wwgoid pod downward-api-b21812bc-df2e-4f21-8364-4a1f0615e113 container dapi-container: <nil>
STEP: delete the pod
Apr 22 13:48:23.822: INFO: Waiting for pod downward-api-b21812bc-df2e-4f21-8364-4a1f0615e113 to disappear
Apr 22 13:48:23.824: INFO: Pod downward-api-b21812bc-df2e-4f21-8364-4a1f0615e113 no longer exists
[AfterEach] [sig-node] Downward API
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 22 13:48:23.825: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-7338" for this suite.
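The two Downward API specs above inject pod metadata (host IP, and pod name, namespace, and IP) into the container as env vars via `fieldRef`. A minimal pod covering the fields these tests exercise (names are illustrative) looks like:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downward-api-example   # illustrative; the e2e test generates a UUID-based name
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox:1.36
    command: ["sh", "-c", "env"]
    env:
    - name: POD_NAME
      valueFrom:
        fieldRef:
          fieldPath: metadata.name
    - name: POD_NAMESPACE
      valueFrom:
        fieldRef:
          fieldPath: metadata.namespace
    - name: POD_IP
      valueFrom:
        fieldRef:
          fieldPath: status.podIP
    - name: HOST_IP
      valueFrom:
        fieldRef:
          fieldPath: status.hostIP
```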
•
------------------------------
{"msg":"PASSED [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]","total":-1,"completed":24,"skipped":565,"failed":0}
------------------------------
[BeforeEach] [sig-api-machinery] Watchers
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 22 13:48:23.900: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should receive events on concurrent watches in same order [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: getting a starting resourceVersion
STEP: starting a background goroutine to produce watch events
STEP: creating watches starting from each resource version of the events produced and verifying they all receive resource versions in the same order
[AfterEach] [sig-api-machinery] Watchers
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 22 13:48:26.707: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-7837" for this suite.
•
------------------------------
{"msg":"PASSED [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance]","total":-1,"completed":25,"skipped":616,"failed":0}
------------------------------
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 22 13:48:26.852: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating a pod to test downward API volume plugin
Apr 22 13:48:26.883: INFO: Waiting up to 5m0s for pod "downwardapi-volume-e270ad55-727b-4bde-925c-c34c4e4afa7e" in namespace "downward-api-5007" to be "Succeeded or Failed"
Apr 22 13:48:26.886: INFO: Pod "downwardapi-volume-e270ad55-727b-4bde-925c-c34c4e4afa7e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.785262ms
Apr 22 13:48:28.890: INFO: Pod "downwardapi-volume-e270ad55-727b-4bde-925c-c34c4e4afa7e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006939631s
Apr 22 13:48:30.895: INFO: Pod "downwardapi-volume-e270ad55-727b-4bde-925c-c34c4e4afa7e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011028864s
STEP: Saw pod success
Apr 22 13:48:30.895: INFO: Pod "downwardapi-volume-e270ad55-727b-4bde-925c-c34c4e4afa7e" satisfied condition "Succeeded or Failed"
Apr 22 13:48:30.898: INFO: Trying to get logs from node k8s-upgrade-and-conformance-7gf7we-md-0-7868f4c856-k4p2k pod downwardapi-volume-e270ad55-727b-4bde-925c-c34c4e4afa7e container client-container: <nil>
STEP: delete the pod
Apr 22 13:48:30.914: INFO: Waiting for pod downwardapi-volume-e270ad55-727b-4bde-925c-c34c4e4afa7e to disappear
Apr 22 13:48:30.918: INFO: Pod downwardapi-volume-e270ad55-727b-4bde-925c-c34c4e4afa7e no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 22 13:48:30.918: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-5007" for this suite.
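The framework lines above poll the test pod every couple of seconds until it reaches a terminal phase ("Succeeded or Failed"), for up to 5 minutes. A minimal shell sketch of that wait loop, with the phase lookup stubbed out (in the real framework it would come from something like `kubectl get pod -o jsonpath='{.status.phase}'`; the function names here are illustrative, not from the framework source):

```shell
#!/bin/sh
# is_terminal mirrors the "Succeeded or Failed" condition the e2e
# framework waits for; any other phase means keep polling.
is_terminal() {
  case "$1" in
    Succeeded|Failed) return 0 ;;
    *) return 1 ;;
  esac
}

# Stubbed phase source: reports Pending twice, then Succeeded.
# Sets the global $phase rather than printing, so the counter survives.
get_phase() {
  n=$((n + 1))
  if [ "$n" -lt 3 ]; then phase=Pending; else phase=Succeeded; fi
}

# wait_for_pod polls until the pod is terminal or ~5m elapse
# (150 iterations at a 2-second period, as in the log timestamps).
wait_for_pod() {
  for _ in $(seq 1 150); do
    get_phase
    if is_terminal "$phase"; then
      echo "$phase"
      return 0
    fi
    sleep 2
  done
  return 1
}

n=0
wait_for_pod
```

With the stub above the loop observes Pending twice and then prints `Succeeded`, matching the Pending/Pending/Succeeded progression in the log.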
•
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":-1,"completed":26,"skipped":643,"failed":0}
------------------------------
[BeforeEach] [sig-storage] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 22 13:48:30.943: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating configMap with name configmap-test-volume-932b152f-ded2-4b02-832e-0f92ef2dbd7f
STEP: Creating a pod to test consume configMaps
Apr 22 13:48:30.979: INFO: Waiting up to 5m0s for pod "pod-configmaps-d9e93b0c-2bab-4ce6-b963-8d1758a0acfd" in namespace "configmap-7946" to be "Succeeded or Failed"
Apr 22 13:48:30.983: INFO: Pod "pod-configmaps-d9e93b0c-2bab-4ce6-b963-8d1758a0acfd": Phase="Pending", Reason="", readiness=false. Elapsed: 4.603079ms
Apr 22 13:48:32.989: INFO: Pod "pod-configmaps-d9e93b0c-2bab-4ce6-b963-8d1758a0acfd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00985174s
Apr 22 13:48:34.993: INFO: Pod "pod-configmaps-d9e93b0c-2bab-4ce6-b963-8d1758a0acfd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.014486828s
STEP: Saw pod success
Apr 22 13:48:34.993: INFO: Pod "pod-configmaps-d9e93b0c-2bab-4ce6-b963-8d1758a0acfd" satisfied condition "Succeeded or Failed"
Apr 22 13:48:34.996: INFO: Trying to get logs from node k8s-upgrade-and-conformance-7gf7we-md-0-7868f4c856-k4p2k pod pod-configmaps-d9e93b0c-2bab-4ce6-b963-8d1758a0acfd container agnhost-container: <nil>
STEP: delete the pod
Apr 22 13:48:35.011: INFO: Waiting for pod pod-configmaps-d9e93b0c-2bab-4ce6-b963-8d1758a0acfd to disappear
Apr 22 13:48:35.014: INFO: Pod pod-configmaps-d9e93b0c-2bab-4ce6-b963-8d1758a0acfd no longer exists
[AfterEach] [sig-storage] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 22 13:48:35.014: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-7946" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":-1,"completed":27,"skipped":651,"failed":0}
------------------------------
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 22 13:47:38.653: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Apr 22 13:47:39.314: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Apr 22 13:47:42.334: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
Apr 22 13:47:43.334: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
Apr 22 13:47:44.334: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
Apr 22 13:47:45.333: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
Apr 22 13:47:46.334: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should be able to deny custom resource creation, update and deletion [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
Apr 22 13:47:46.338: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Registering the custom resource webhook via the AdmissionRegistration API
Apr 22 13:47:56.864: INFO: Waiting for webhook configuration to be ready...
Apr 22 13:48:06.973: INFO: Waiting for webhook configuration to be ready...
Apr 22 13:48:17.077: INFO: Waiting for webhook configuration to be ready...
Apr 22 13:48:27.176: INFO: Waiting for webhook configuration to be ready...
Apr 22 13:48:37.185: INFO: Waiting for webhook configuration to be ready...
Apr 22 13:48:37.186: FAIL: waiting for webhook configuration to be ready
Unexpected error:
    <*errors.errorString | 0xc0002bc2a0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred

Full Stack Trace
k8s.io/kubernetes/test/e2e/apimachinery.registerWebhookForCustomResource(0xc000c23e40, {0xc004cb4320, 0xc}, 0xc003e761e0, 0xc00252e3e0, 0x0)
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:1727 +0x7ea
k8s.io/kubernetes/test/e2e/apimachinery.glob..func27.6()
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:224 +0xad
k8s.io/kubernetes/test/e2e.RunE2ETests(0x24e2677)
    _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:133 +0x697
k8s.io/kubernetes/test/e2e.TestE2E(0x2456919)
    _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:136 +0x19
testing.tRunner(0xc000525040, 0x73a1f18)
    /usr/local/go/src/testing/testing.go:1259 +0x102
created by testing.(*T).Run
    /usr/local/go/src/testing/testing.go:1306 +0x35a
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 22 13:48:37.697: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-7159" for this suite.
STEP: Destroying namespace "webhook-7159-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• Failure [59.102 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to deny custom resource creation, update and deletion [Conformance] [It]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633

  Apr 22 13:48:37.186: waiting for webhook configuration to be ready
  Unexpected error:
      <*errors.errorString | 0xc0002bc2a0>: {
          s: "timed out waiting for the condition",
      }
      timed out waiting for the condition
  occurred

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:1727
------------------------------
[BeforeEach] [sig-storage] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 22 13:48:35.044: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating configMap with name configmap-test-volume-map-437f8b44-4a91-4ecd-9c56-b7d26283f913
STEP: Creating a pod to test consume configMaps
Apr 22 13:48:35.076: INFO: Waiting up to 5m0s for pod "pod-configmaps-5ed74a9f-b638-4d62-a33f-5af54a5e9f52" in namespace "configmap-4838" to be "Succeeded or Failed"
Apr 22 13:48:35.080: INFO: Pod "pod-configmaps-5ed74a9f-b638-4d62-a33f-5af54a5e9f52": Phase="Pending", Reason="", readiness=false. Elapsed: 3.08322ms
Apr 22 13:48:37.083: INFO: Pod "pod-configmaps-5ed74a9f-b638-4d62-a33f-5af54a5e9f52": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006731692s
Apr 22 13:48:39.089: INFO: Pod "pod-configmaps-5ed74a9f-b638-4d62-a33f-5af54a5e9f52": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011984983s
STEP: Saw pod success
Apr 22 13:48:39.089: INFO: Pod "pod-configmaps-5ed74a9f-b638-4d62-a33f-5af54a5e9f52" satisfied condition "Succeeded or Failed"
Apr 22 13:48:39.092: INFO: Trying to get logs from node k8s-upgrade-and-conformance-7gf7we-worker-3u7awl pod pod-configmaps-5ed74a9f-b638-4d62-a33f-5af54a5e9f52 container agnhost-container: <nil>
STEP: delete the pod
Apr 22 13:48:39.110: INFO: Waiting for pod pod-configmaps-5ed74a9f-b638-4d62-a33f-5af54a5e9f52 to disappear
Apr 22 13:48:39.112: INFO: Pod pod-configmaps-5ed74a9f-b638-4d62-a33f-5af54a5e9f52 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 22 13:48:39.112: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-4838" for this suite.
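The DNS conformance test that follows injects long one-liner probe loops into "wheezy" and "jessie" pods: each iteration runs `dig` over UDP (`+notcp`) and TCP (`+tcp`) for a set of names and writes an `OK` marker file per successful lookup. A sketch of how one of those probe commands is composed (service and namespace names are copied from this run; the `probe_cmd` helper is illustrative, not from the test source, and the doubled `$$` in the log is template escaping that becomes a single `$` inside the pod):

```shell
#!/bin/sh
# probe_cmd composes one dig-based probe of the form seen in the log:
#   check="$(dig <transport> +noall +answer +search <name> <type>)" \
#     && test -n "$check" && echo OK > /results/<result>
# transport is +notcp (UDP) or +tcp; result is the marker-file name.
probe_cmd() {
  transport=$1 name=$2 rrtype=$3 result=$4
  printf 'check="$(dig %s +noall +answer +search %s %s)" && test -n "$check" && echo OK > /results/%s\n' \
    "$transport" "$name" "$rrtype" "$result"
}

# Two of the probes from this run: an A lookup over UDP and an SRV
# lookup over TCP, mirroring entries in the wheezy command string.
probe_cmd +notcp dns-test-service.dns-9708.svc A wheezy_udp@dns-test-service.dns-9708.svc
probe_cmd +tcp _http._tcp.dns-test-service.dns-9708.svc SRV wheezy_tcp@_http._tcp.dns-test-service.dns-9708.svc
```

The test then polls the pod for those `/results/*` marker files; the repeated "Unable to read wheezy_udp@dns-test-service ..." records below are that polling failing because the probes never wrote their markers.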
•
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":28,"skipped":666,"failed":0}
------------------------------
[BeforeEach] [sig-network] DNS
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 22 13:48:07.074: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-9708 A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-9708;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-9708 A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-9708;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-9708.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-9708.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-9708.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-9708.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-9708.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-9708.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-9708.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-9708.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-9708.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-9708.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-9708.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-9708.svc;check="$$(dig +notcp +noall +answer +search 60.58.132.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.132.58.60_udp@PTR;check="$$(dig +tcp +noall +answer +search 60.58.132.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.132.58.60_tcp@PTR;sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-9708 A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-9708;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-9708 A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-9708;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-9708.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-9708.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-9708.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-9708.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-9708.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-9708.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-9708.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-9708.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-9708.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-9708.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-9708.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-9708.svc;check="$$(dig +notcp +noall +answer +search 60.58.132.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.132.58.60_udp@PTR;check="$$(dig +tcp +noall +answer +search 60.58.132.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.132.58.60_tcp@PTR;sleep 1; done
STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Apr 22 13:48:09.150: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-9708/dns-test-b67df10b-fe12-4089-a488-5aa3a0aa9963: the server could not find the requested resource (get pods dns-test-b67df10b-fe12-4089-a488-5aa3a0aa9963)
Apr 22 13:48:09.155: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-9708/dns-test-b67df10b-fe12-4089-a488-5aa3a0aa9963: the server could not find the requested resource (get pods dns-test-b67df10b-fe12-4089-a488-5aa3a0aa9963)
Apr 22 13:48:09.158: INFO: Unable to read wheezy_udp@dns-test-service.dns-9708 from pod dns-9708/dns-test-b67df10b-fe12-4089-a488-5aa3a0aa9963: the server could not find the requested resource (get pods dns-test-b67df10b-fe12-4089-a488-5aa3a0aa9963)
Apr 22 13:48:09.162: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9708 from pod dns-9708/dns-test-b67df10b-fe12-4089-a488-5aa3a0aa9963: the server could not find the requested resource (get pods dns-test-b67df10b-fe12-4089-a488-5aa3a0aa9963)
Apr 22 13:48:09.165: INFO: Unable to read wheezy_udp@dns-test-service.dns-9708.svc from pod dns-9708/dns-test-b67df10b-fe12-4089-a488-5aa3a0aa9963: the server could not find the requested resource (get pods dns-test-b67df10b-fe12-4089-a488-5aa3a0aa9963)
Apr 22 13:48:09.169: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9708.svc from pod dns-9708/dns-test-b67df10b-fe12-4089-a488-5aa3a0aa9963: the server could not find the requested resource (get pods dns-test-b67df10b-fe12-4089-a488-5aa3a0aa9963)
Apr 22 13:48:09.172: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-9708.svc from pod dns-9708/dns-test-b67df10b-fe12-4089-a488-5aa3a0aa9963: the server could not find the requested resource (get pods dns-test-b67df10b-fe12-4089-a488-5aa3a0aa9963)
Apr 22 13:48:09.175: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-9708.svc from pod dns-9708/dns-test-b67df10b-fe12-4089-a488-5aa3a0aa9963: the server could not find the requested resource (get pods dns-test-b67df10b-fe12-4089-a488-5aa3a0aa9963)
Apr 22 13:48:09.191: INFO: Unable to read jessie_udp@dns-test-service from pod dns-9708/dns-test-b67df10b-fe12-4089-a488-5aa3a0aa9963: the server could not find the requested resource (get pods dns-test-b67df10b-fe12-4089-a488-5aa3a0aa9963)
Apr 22 13:48:09.194: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-9708/dns-test-b67df10b-fe12-4089-a488-5aa3a0aa9963: the server could not find the requested resource (get pods dns-test-b67df10b-fe12-4089-a488-5aa3a0aa9963)
Apr 22 13:48:09.197: INFO: Unable to read jessie_udp@dns-test-service.dns-9708 from pod dns-9708/dns-test-b67df10b-fe12-4089-a488-5aa3a0aa9963: the server could not find the requested resource (get pods dns-test-b67df10b-fe12-4089-a488-5aa3a0aa9963)
Apr 22 13:48:09.199: INFO: Unable to read jessie_tcp@dns-test-service.dns-9708 from pod dns-9708/dns-test-b67df10b-fe12-4089-a488-5aa3a0aa9963: the server could not find the requested resource (get pods dns-test-b67df10b-fe12-4089-a488-5aa3a0aa9963)
Apr 22 13:48:09.202: INFO: Unable to read jessie_udp@dns-test-service.dns-9708.svc from pod dns-9708/dns-test-b67df10b-fe12-4089-a488-5aa3a0aa9963: the server could not find the requested resource (get pods dns-test-b67df10b-fe12-4089-a488-5aa3a0aa9963)
Apr 22 13:48:09.205: INFO: Unable to read jessie_tcp@dns-test-service.dns-9708.svc from pod dns-9708/dns-test-b67df10b-fe12-4089-a488-5aa3a0aa9963: the server could not find the requested resource (get pods dns-test-b67df10b-fe12-4089-a488-5aa3a0aa9963)
Apr 22 13:48:09.208: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-9708.svc from pod dns-9708/dns-test-b67df10b-fe12-4089-a488-5aa3a0aa9963: the server could not find the requested resource (get pods dns-test-b67df10b-fe12-4089-a488-5aa3a0aa9963)
Apr 22 13:48:09.211: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-9708.svc from pod dns-9708/dns-test-b67df10b-fe12-4089-a488-5aa3a0aa9963: the server could not find the requested resource (get pods dns-test-b67df10b-fe12-4089-a488-5aa3a0aa9963)
Apr 22 13:48:09.223: INFO: Lookups using dns-9708/dns-test-b67df10b-fe12-4089-a488-5aa3a0aa9963 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-9708 wheezy_tcp@dns-test-service.dns-9708 wheezy_udp@dns-test-service.dns-9708.svc wheezy_tcp@dns-test-service.dns-9708.svc wheezy_udp@_http._tcp.dns-test-service.dns-9708.svc wheezy_tcp@_http._tcp.dns-test-service.dns-9708.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-9708 jessie_tcp@dns-test-service.dns-9708 jessie_udp@dns-test-service.dns-9708.svc jessie_tcp@dns-test-service.dns-9708.svc jessie_udp@_http._tcp.dns-test-service.dns-9708.svc jessie_tcp@_http._tcp.dns-test-service.dns-9708.svc]
Apr 22 13:48:14.230: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-9708/dns-test-b67df10b-fe12-4089-a488-5aa3a0aa9963: the server could not find the requested resource (get pods dns-test-b67df10b-fe12-4089-a488-5aa3a0aa9963)
Apr 22 13:48:14.234: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-9708/dns-test-b67df10b-fe12-4089-a488-5aa3a0aa9963: the server could not find the requested resource (get pods dns-test-b67df10b-fe12-4089-a488-5aa3a0aa9963)
Apr 22 13:48:14.239: INFO: Unable to read wheezy_udp@dns-test-service.dns-9708 from pod dns-9708/dns-test-b67df10b-fe12-4089-a488-5aa3a0aa9963: the server could not find the requested resource (get pods dns-test-b67df10b-fe12-4089-a488-5aa3a0aa9963)
Apr 22 13:48:14.245: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9708 from pod dns-9708/dns-test-b67df10b-fe12-4089-a488-5aa3a0aa9963: the server could not find the requested resource (get pods dns-test-b67df10b-fe12-4089-a488-5aa3a0aa9963)
Apr 22 13:48:14.253: INFO: Unable to read wheezy_udp@dns-test-service.dns-9708.svc from pod dns-9708/dns-test-b67df10b-fe12-4089-a488-5aa3a0aa9963: the server could not find the requested resource (get pods dns-test-b67df10b-fe12-4089-a488-5aa3a0aa9963)
Apr 22 13:48:14.258: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9708.svc from pod dns-9708/dns-test-b67df10b-fe12-4089-a488-5aa3a0aa9963: the server could not find the requested resource (get pods dns-test-b67df10b-fe12-4089-a488-5aa3a0aa9963)
Apr 22 13:48:14.263: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-9708.svc from pod dns-9708/dns-test-b67df10b-fe12-4089-a488-5aa3a0aa9963: the server could not find the requested resource (get pods dns-test-b67df10b-fe12-4089-a488-5aa3a0aa9963)
Apr 22 13:48:14.267: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-9708.svc from pod dns-9708/dns-test-b67df10b-fe12-4089-a488-5aa3a0aa9963: the server could not find the requested resource (get pods dns-test-b67df10b-fe12-4089-a488-5aa3a0aa9963)
Apr 22 13:48:14.305: INFO: Unable to read jessie_udp@dns-test-service from pod dns-9708/dns-test-b67df10b-fe12-4089-a488-5aa3a0aa9963: the server could not find the requested resource (get pods dns-test-b67df10b-fe12-4089-a488-5aa3a0aa9963)
Apr 22 13:48:14.310: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-9708/dns-test-b67df10b-fe12-4089-a488-5aa3a0aa9963: the server could not find the requested resource (get pods dns-test-b67df10b-fe12-4089-a488-5aa3a0aa9963)
Apr 22 13:48:14.314: INFO: Unable to read jessie_udp@dns-test-service.dns-9708 from pod dns-9708/dns-test-b67df10b-fe12-4089-a488-5aa3a0aa9963: the server could not find the requested resource (get pods dns-test-b67df10b-fe12-4089-a488-5aa3a0aa9963)
Apr 22 13:48:14.319: INFO: Unable to read jessie_tcp@dns-test-service.dns-9708 from pod dns-9708/dns-test-b67df10b-fe12-4089-a488-5aa3a0aa9963: the server could not find the requested resource (get pods dns-test-b67df10b-fe12-4089-a488-5aa3a0aa9963)
Apr 22 13:48:14.323: INFO: Unable to read jessie_udp@dns-test-service.dns-9708.svc from pod dns-9708/dns-test-b67df10b-fe12-4089-a488-5aa3a0aa9963: the server could not find the requested resource (get pods dns-test-b67df10b-fe12-4089-a488-5aa3a0aa9963)
Apr 22 13:48:14.327: INFO: Unable to read jessie_tcp@dns-test-service.dns-9708.svc from pod dns-9708/dns-test-b67df10b-fe12-4089-a488-5aa3a0aa9963: the server could not find the requested resource (get pods dns-test-b67df10b-fe12-4089-a488-5aa3a0aa9963)
Apr 22 13:48:14.330: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-9708.svc from pod dns-9708/dns-test-b67df10b-fe12-4089-a488-5aa3a0aa9963: the server could not find the requested resource (get pods dns-test-b67df10b-fe12-4089-a488-5aa3a0aa9963)
Apr 22 13:48:14.334: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-9708.svc from pod dns-9708/dns-test-b67df10b-fe12-4089-a488-5aa3a0aa9963: the server could not find the requested resource (get pods dns-test-b67df10b-fe12-4089-a488-5aa3a0aa9963)
Apr 22 13:48:14.350: INFO: Lookups using dns-9708/dns-test-b67df10b-fe12-4089-a488-5aa3a0aa9963 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-9708 wheezy_tcp@dns-test-service.dns-9708 wheezy_udp@dns-test-service.dns-9708.svc wheezy_tcp@dns-test-service.dns-9708.svc wheezy_udp@_http._tcp.dns-test-service.dns-9708.svc wheezy_tcp@_http._tcp.dns-test-service.dns-9708.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-9708 jessie_tcp@dns-test-service.dns-9708 jessie_udp@dns-test-service.dns-9708.svc jessie_tcp@dns-test-service.dns-9708.svc jessie_udp@_http._tcp.dns-test-service.dns-9708.svc jessie_tcp@_http._tcp.dns-test-service.dns-9708.svc]
Apr 22 13:48:19.227: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-9708/dns-test-b67df10b-fe12-4089-a488-5aa3a0aa9963: the server could not find the requested resource (get pods dns-test-b67df10b-fe12-4089-a488-5aa3a0aa9963)
Apr 22 13:48:19.230: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-9708/dns-test-b67df10b-fe12-4089-a488-5aa3a0aa9963: the server could not find the requested resource (get pods dns-test-b67df10b-fe12-4089-a488-5aa3a0aa9963)
Apr 22 13:48:19.234: INFO: Unable to read wheezy_udp@dns-test-service.dns-9708 from pod dns-9708/dns-test-b67df10b-fe12-4089-a488-5aa3a0aa9963: the server could not find the requested resource (get pods dns-test-b67df10b-fe12-4089-a488-5aa3a0aa9963)
Apr 22 13:48:19.237: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9708 from pod dns-9708/dns-test-b67df10b-fe12-4089-a488-5aa3a0aa9963: the server could not find the requested resource (get pods dns-test-b67df10b-fe12-4089-a488-5aa3a0aa9963)
Apr 22 13:48:19.240: INFO: Unable to read wheezy_udp@dns-test-service.dns-9708.svc from pod dns-9708/dns-test-b67df10b-fe12-4089-a488-5aa3a0aa9963: the server could not find the requested resource (get pods dns-test-b67df10b-fe12-4089-a488-5aa3a0aa9963)
Apr 22 13:48:19.244: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9708.svc from pod dns-9708/dns-test-b67df10b-fe12-4089-a488-5aa3a0aa9963: the server could not find the requested resource (get pods dns-test-b67df10b-fe12-4089-a488-5aa3a0aa9963)
Apr 22 13:48:19.247: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-9708.svc from pod dns-9708/dns-test-b67df10b-fe12-4089-a488-5aa3a0aa9963: the server could not find the requested resource (get pods dns-test-b67df10b-fe12-4089-a488-5aa3a0aa9963)
Apr 22 13:48:19.251: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-9708.svc from pod dns-9708/dns-test-b67df10b-fe12-4089-a488-5aa3a0aa9963: the server could not find the requested resource (get pods dns-test-b67df10b-fe12-4089-a488-5aa3a0aa9963)
Apr 22 13:48:19.269: INFO: Unable to read jessie_udp@dns-test-service from pod dns-9708/dns-test-b67df10b-fe12-4089-a488-5aa3a0aa9963: the server could not find the requested resource (get pods dns-test-b67df10b-fe12-4089-a488-5aa3a0aa9963)
Apr 22 13:48:19.272: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-9708/dns-test-b67df10b-fe12-4089-a488-5aa3a0aa9963: the server could not find the requested resource (get pods dns-test-b67df10b-fe12-4089-a488-5aa3a0aa9963)
Apr 22 13:48:19.274: INFO: Unable to read jessie_udp@dns-test-service.dns-9708 from pod dns-9708/dns-test-b67df10b-fe12-4089-a488-5aa3a0aa9963: the server could not find the requested resource (get pods dns-test-b67df10b-fe12-4089-a488-5aa3a0aa9963)
Apr 22 13:48:19.278: INFO: Unable to read jessie_tcp@dns-test-service.dns-9708 from pod dns-9708/dns-test-b67df10b-fe12-4089-a488-5aa3a0aa9963: the server could not find the requested resource (get pods dns-test-b67df10b-fe12-4089-a488-5aa3a0aa9963)
Apr 22 13:48:19.281: INFO: Unable to read jessie_udp@dns-test-service.dns-9708.svc from pod dns-9708/dns-test-b67df10b-fe12-4089-a488-5aa3a0aa9963: the server could not find the requested resource (get pods dns-test-b67df10b-fe12-4089-a488-5aa3a0aa9963)
Apr 22 13:48:19.283: INFO: Unable to read jessie_tcp@dns-test-service.dns-9708.svc from pod dns-9708/dns-test-b67df10b-fe12-4089-a488-5aa3a0aa9963: the server could not find the requested resource (get pods dns-test-b67df10b-fe12-4089-a488-5aa3a0aa9963)
Apr 22 13:48:19.286: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-9708.svc from pod dns-9708/dns-test-b67df10b-fe12-4089-a488-5aa3a0aa9963: the server could not find the requested resource (get pods dns-test-b67df10b-fe12-4089-a488-5aa3a0aa9963)
Apr 22 13:48:19.289: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-9708.svc from pod dns-9708/dns-test-b67df10b-fe12-4089-a488-5aa3a0aa9963: the server could not find the requested resource (get pods dns-test-b67df10b-fe12-4089-a488-5aa3a0aa9963)
Apr 22 13:48:19.300: INFO: Lookups using dns-9708/dns-test-b67df10b-fe12-4089-a488-5aa3a0aa9963 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-9708 wheezy_tcp@dns-test-service.dns-9708 wheezy_udp@dns-test-service.dns-9708.svc wheezy_tcp@dns-test-service.dns-9708.svc wheezy_udp@_http._tcp.dns-test-service.dns-9708.svc wheezy_tcp@_http._tcp.dns-test-service.dns-9708.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-9708 jessie_tcp@dns-test-service.dns-9708 jessie_udp@dns-test-service.dns-9708.svc jessie_tcp@dns-test-service.dns-9708.svc jessie_udp@_http._tcp.dns-test-service.dns-9708.svc jessie_tcp@_http._tcp.dns-test-service.dns-9708.svc]
Apr 22 13:48:24.230: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-9708/dns-test-b67df10b-fe12-4089-a488-5aa3a0aa9963: the server could not find the requested resource (get pods dns-test-b67df10b-fe12-4089-a488-5aa3a0aa9963)
Apr 22 13:48:24.238: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-9708/dns-test-b67df10b-fe12-4089-a488-5aa3a0aa9963: the server could not find the requested resource (get pods dns-test-b67df10b-fe12-4089-a488-5aa3a0aa9963)
Apr 22 13:48:24.242: INFO: Unable to read wheezy_udp@dns-test-service.dns-9708 from pod dns-9708/dns-test-b67df10b-fe12-4089-a488-5aa3a0aa9963: the server could not find the requested resource (get pods dns-test-b67df10b-fe12-4089-a488-5aa3a0aa9963)
Apr 22 13:48:24.245: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9708 from pod dns-9708/dns-test-b67df10b-fe12-4089-a488-5aa3a0aa9963: the server could not find the requested resource (get pods dns-test-b67df10b-fe12-4089-a488-5aa3a0aa9963)
Apr 22 13:48:24.248: INFO: Unable to read wheezy_udp@dns-test-service.dns-9708.svc from pod dns-9708/dns-test-b67df10b-fe12-4089-a488-5aa3a0aa9963: the server could not find the requested resource (get pods dns-test-b67df10b-fe12-4089-a488-5aa3a0aa9963)
Apr 22 13:48:24.252: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9708.svc from pod dns-9708/dns-test-b67df10b-fe12-4089-a488-5aa3a0aa9963: the server could not find the requested resource (get pods dns-test-b67df10b-fe12-4089-a488-5aa3a0aa9963)
Apr 22 13:48:24.255: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-9708.svc from pod dns-9708/dns-test-b67df10b-fe12-4089-a488-5aa3a0aa9963: the server could not find the requested resource (get pods dns-test-b67df10b-fe12-4089-a488-5aa3a0aa9963)
Apr 22 13:48:24.259: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-9708.svc from pod dns-9708/dns-test-b67df10b-fe12-4089-a488-5aa3a0aa9963: the server could not find the requested resource (get pods dns-test-b67df10b-fe12-4089-a488-5aa3a0aa9963)
Apr 22 13:48:24.276: INFO: Unable to read jessie_udp@dns-test-service from pod dns-9708/dns-test-b67df10b-fe12-4089-a488-5aa3a0aa9963: the server could not find the requested resource (get pods dns-test-b67df10b-fe12-4089-a488-5aa3a0aa9963)
Apr 22 13:48:24.279: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-9708/dns-test-b67df10b-fe12-4089-a488-5aa3a0aa9963: the server could not find the requested resource (get pods dns-test-b67df10b-fe12-4089-a488-5aa3a0aa9963)
Apr 22 13:48:24.282: INFO: Unable to read jessie_udp@dns-test-service.dns-9708 from pod dns-9708/dns-test-b67df10b-fe12-4089-a488-5aa3a0aa9963: the server could not find the requested resource (get pods dns-test-b67df10b-fe12-4089-a488-5aa3a0aa9963)
Apr 22 13:48:24.286: INFO: Unable to read jessie_tcp@dns-test-service.dns-9708 from pod dns-9708/dns-test-b67df10b-fe12-4089-a488-5aa3a0aa9963: the server could not find the requested resource (get pods dns-test-b67df10b-fe12-4089-a488-5aa3a0aa9963)
Apr 22 13:48:24.291: INFO: Unable to read jessie_udp@dns-test-service.dns-9708.svc from pod dns-9708/dns-test-b67df10b-fe12-4089-a488-5aa3a0aa9963: the server could not find the requested resource (get pods dns-test-b67df10b-fe12-4089-a488-5aa3a0aa9963)
Apr 22 13:48:24.295: INFO: Unable to read jessie_tcp@dns-test-service.dns-9708.svc from pod dns-9708/dns-test-b67df10b-fe12-4089-a488-5aa3a0aa9963: the server could not find the requested resource (get pods dns-test-b67df10b-fe12-4089-a488-5aa3a0aa9963)
Apr 22 13:48:24.299: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-9708.svc from pod dns-9708/dns-test-b67df10b-fe12-4089-a488-5aa3a0aa9963: the server could not find the requested resource (get pods dns-test-b67df10b-fe12-4089-a488-5aa3a0aa9963)
Apr 22 13:48:24.302: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-9708.svc from pod dns-9708/dns-test-b67df10b-fe12-4089-a488-5aa3a0aa9963: the server could not find the requested resource (get pods dns-test-b67df10b-fe12-4089-a488-5aa3a0aa9963)
Apr 22 13:48:24.317: INFO: Lookups using dns-9708/dns-test-b67df10b-fe12-4089-a488-5aa3a0aa9963 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service
wheezy_udp@dns-test-service.dns-9708 wheezy_tcp@dns-test-service.dns-9708 wheezy_udp@dns-test-service.dns-9708.svc wheezy_tcp@dns-test-service.dns-9708.svc wheezy_udp@_http._tcp.dns-test-service.dns-9708.svc wheezy_tcp@_http._tcp.dns-test-service.dns-9708.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-9708 jessie_tcp@dns-test-service.dns-9708 jessie_udp@dns-test-service.dns-9708.svc jessie_tcp@dns-test-service.dns-9708.svc jessie_udp@_http._tcp.dns-test-service.dns-9708.svc jessie_tcp@_http._tcp.dns-test-service.dns-9708.svc] Apr 22 13:48:29.228: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-9708/dns-test-b67df10b-fe12-4089-a488-5aa3a0aa9963: the server could not find the requested resource (get pods dns-test-b67df10b-fe12-4089-a488-5aa3a0aa9963) Apr 22 13:48:29.232: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-9708/dns-test-b67df10b-fe12-4089-a488-5aa3a0aa9963: the server could not find the requested resource (get pods dns-test-b67df10b-fe12-4089-a488-5aa3a0aa9963) Apr 22 13:48:29.235: INFO: Unable to read wheezy_udp@dns-test-service.dns-9708 from pod dns-9708/dns-test-b67df10b-fe12-4089-a488-5aa3a0aa9963: the server could not find the requested resource (get pods dns-test-b67df10b-fe12-4089-a488-5aa3a0aa9963) Apr 22 13:48:29.239: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9708 from pod dns-9708/dns-test-b67df10b-fe12-4089-a488-5aa3a0aa9963: the server could not find the requested resource (get pods dns-test-b67df10b-fe12-4089-a488-5aa3a0aa9963) Apr 22 13:48:29.242: INFO: Unable to read wheezy_udp@dns-test-service.dns-9708.svc from pod dns-9708/dns-test-b67df10b-fe12-4089-a488-5aa3a0aa9963: the server could not find the requested resource (get pods dns-test-b67df10b-fe12-4089-a488-5aa3a0aa9963) Apr 22 13:48:29.246: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9708.svc from pod dns-9708/dns-test-b67df10b-fe12-4089-a488-5aa3a0aa9963: the server could not find the 
requested resource (get pods dns-test-b67df10b-fe12-4089-a488-5aa3a0aa9963) Apr 22 13:48:29.249: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-9708.svc from pod dns-9708/dns-test-b67df10b-fe12-4089-a488-5aa3a0aa9963: the server could not find the requested resource (get pods dns-test-b67df10b-fe12-4089-a488-5aa3a0aa9963) Apr 22 13:48:29.253: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-9708.svc from pod dns-9708/dns-test-b67df10b-fe12-4089-a488-5aa3a0aa9963: the server could not find the requested resource (get pods dns-test-b67df10b-fe12-4089-a488-5aa3a0aa9963) Apr 22 13:48:29.269: INFO: Unable to read jessie_udp@dns-test-service from pod dns-9708/dns-test-b67df10b-fe12-4089-a488-5aa3a0aa9963: the server could not find the requested resource (get pods dns-test-b67df10b-fe12-4089-a488-5aa3a0aa9963) Apr 22 13:48:29.271: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-9708/dns-test-b67df10b-fe12-4089-a488-5aa3a0aa9963: the server could not find the requested resource (get pods dns-test-b67df10b-fe12-4089-a488-5aa3a0aa9963) Apr 22 13:48:29.274: INFO: Unable to read jessie_udp@dns-test-service.dns-9708 from pod dns-9708/dns-test-b67df10b-fe12-4089-a488-5aa3a0aa9963: the server could not find the requested resource (get pods dns-test-b67df10b-fe12-4089-a488-5aa3a0aa9963) Apr 22 13:48:29.277: INFO: Unable to read jessie_tcp@dns-test-service.dns-9708 from pod dns-9708/dns-test-b67df10b-fe12-4089-a488-5aa3a0aa9963: the server could not find the requested resource (get pods dns-test-b67df10b-fe12-4089-a488-5aa3a0aa9963) Apr 22 13:48:29.280: INFO: Unable to read jessie_udp@dns-test-service.dns-9708.svc from pod dns-9708/dns-test-b67df10b-fe12-4089-a488-5aa3a0aa9963: the server could not find the requested resource (get pods dns-test-b67df10b-fe12-4089-a488-5aa3a0aa9963) Apr 22 13:48:29.282: INFO: Unable to read jessie_tcp@dns-test-service.dns-9708.svc from pod dns-9708/dns-test-b67df10b-fe12-4089-a488-5aa3a0aa9963: the 
server could not find the requested resource (get pods dns-test-b67df10b-fe12-4089-a488-5aa3a0aa9963) Apr 22 13:48:29.285: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-9708.svc from pod dns-9708/dns-test-b67df10b-fe12-4089-a488-5aa3a0aa9963: the server could not find the requested resource (get pods dns-test-b67df10b-fe12-4089-a488-5aa3a0aa9963) Apr 22 13:48:29.288: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-9708.svc from pod dns-9708/dns-test-b67df10b-fe12-4089-a488-5aa3a0aa9963: the server could not find the requested resource (get pods dns-test-b67df10b-fe12-4089-a488-5aa3a0aa9963) Apr 22 13:48:29.300: INFO: Lookups using dns-9708/dns-test-b67df10b-fe12-4089-a488-5aa3a0aa9963 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-9708 wheezy_tcp@dns-test-service.dns-9708 wheezy_udp@dns-test-service.dns-9708.svc wheezy_tcp@dns-test-service.dns-9708.svc wheezy_udp@_http._tcp.dns-test-service.dns-9708.svc wheezy_tcp@_http._tcp.dns-test-service.dns-9708.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-9708 jessie_tcp@dns-test-service.dns-9708 jessie_udp@dns-test-service.dns-9708.svc jessie_tcp@dns-test-service.dns-9708.svc jessie_udp@_http._tcp.dns-test-service.dns-9708.svc jessie_tcp@_http._tcp.dns-test-service.dns-9708.svc] Apr 22 13:48:34.231: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-9708/dns-test-b67df10b-fe12-4089-a488-5aa3a0aa9963: the server could not find the requested resource (get pods dns-test-b67df10b-fe12-4089-a488-5aa3a0aa9963) Apr 22 13:48:34.235: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-9708/dns-test-b67df10b-fe12-4089-a488-5aa3a0aa9963: the server could not find the requested resource (get pods dns-test-b67df10b-fe12-4089-a488-5aa3a0aa9963) Apr 22 13:48:34.238: INFO: Unable to read wheezy_udp@dns-test-service.dns-9708 from pod dns-9708/dns-test-b67df10b-fe12-4089-a488-5aa3a0aa9963: 
the server could not find the requested resource (get pods dns-test-b67df10b-fe12-4089-a488-5aa3a0aa9963) Apr 22 13:48:34.242: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9708 from pod dns-9708/dns-test-b67df10b-fe12-4089-a488-5aa3a0aa9963: the server could not find the requested resource (get pods dns-test-b67df10b-fe12-4089-a488-5aa3a0aa9963) Apr 22 13:48:34.246: INFO: Unable to read wheezy_udp@dns-test-service.dns-9708.svc from pod dns-9708/dns-test-b67df10b-fe12-4089-a488-5aa3a0aa9963: the server could not find the requested resource (get pods dns-test-b67df10b-fe12-4089-a488-5aa3a0aa9963) Apr 22 13:48:34.250: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9708.svc from pod dns-9708/dns-test-b67df10b-fe12-4089-a488-5aa3a0aa9963: the server could not find the requested resource (get pods dns-test-b67df10b-fe12-4089-a488-5aa3a0aa9963) Apr 22 13:48:34.254: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-9708.svc from pod dns-9708/dns-test-b67df10b-fe12-4089-a488-5aa3a0aa9963: the server could not find the requested resource (get pods dns-test-b67df10b-fe12-4089-a488-5aa3a0aa9963) Apr 22 13:48:34.257: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-9708.svc from pod dns-9708/dns-test-b67df10b-fe12-4089-a488-5aa3a0aa9963: the server could not find the requested resource (get pods dns-test-b67df10b-fe12-4089-a488-5aa3a0aa9963) Apr 22 13:48:34.274: INFO: Unable to read jessie_udp@dns-test-service from pod dns-9708/dns-test-b67df10b-fe12-4089-a488-5aa3a0aa9963: the server could not find the requested resource (get pods dns-test-b67df10b-fe12-4089-a488-5aa3a0aa9963) Apr 22 13:48:34.277: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-9708/dns-test-b67df10b-fe12-4089-a488-5aa3a0aa9963: the server could not find the requested resource (get pods dns-test-b67df10b-fe12-4089-a488-5aa3a0aa9963) Apr 22 13:48:34.280: INFO: Unable to read jessie_udp@dns-test-service.dns-9708 from pod 
dns-9708/dns-test-b67df10b-fe12-4089-a488-5aa3a0aa9963: the server could not find the requested resource (get pods dns-test-b67df10b-fe12-4089-a488-5aa3a0aa9963) Apr 22 13:48:34.282: INFO: Unable to read jessie_tcp@dns-test-service.dns-9708 from pod dns-9708/dns-test-b67df10b-fe12-4089-a488-5aa3a0aa9963: the server could not find the requested resource (get pods dns-test-b67df10b-fe12-4089-a488-5aa3a0aa9963) Apr 22 13:48:34.285: INFO: Unable to read jessie_udp@dns-test-service.dns-9708.svc from pod dns-9708/dns-test-b67df10b-fe12-4089-a488-5aa3a0aa9963: the server could not find the requested resource (get pods dns-test-b67df10b-fe12-4089-a488-5aa3a0aa9963) Apr 22 13:48:34.289: INFO: Unable to read jessie_tcp@dns-test-service.dns-9708.svc from pod dns-9708/dns-test-b67df10b-fe12-4089-a488-5aa3a0aa9963: the server could not find the requested resource (get pods dns-test-b67df10b-fe12-4089-a488-5aa3a0aa9963) Apr 22 13:48:34.292: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-9708.svc from pod dns-9708/dns-test-b67df10b-fe12-4089-a488-5aa3a0aa9963: the server could not find the requested resource (get pods dns-test-b67df10b-fe12-4089-a488-5aa3a0aa9963) Apr 22 13:48:34.295: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-9708.svc from pod dns-9708/dns-test-b67df10b-fe12-4089-a488-5aa3a0aa9963: the server could not find the requested resource (get pods dns-test-b67df10b-fe12-4089-a488-5aa3a0aa9963) Apr 22 13:48:34.308: INFO: Lookups using dns-9708/dns-test-b67df10b-fe12-4089-a488-5aa3a0aa9963 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-9708 wheezy_tcp@dns-test-service.dns-9708 wheezy_udp@dns-test-service.dns-9708.svc wheezy_tcp@dns-test-service.dns-9708.svc wheezy_udp@_http._tcp.dns-test-service.dns-9708.svc wheezy_tcp@_http._tcp.dns-test-service.dns-9708.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-9708 
jessie_tcp@dns-test-service.dns-9708 jessie_udp@dns-test-service.dns-9708.svc jessie_tcp@dns-test-service.dns-9708.svc jessie_udp@_http._tcp.dns-test-service.dns-9708.svc jessie_tcp@_http._tcp.dns-test-service.dns-9708.svc]
Apr 22 13:48:39.312: INFO: DNS probes using dns-9708/dns-test-b67df10b-fe12-4089-a488-5aa3a0aa9963 succeeded
STEP: deleting the pod
STEP: deleting the test service
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 22 13:48:39.400: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-9708" for this suite.
•
------------------------------
{"msg":"PASSED [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]","total":-1,"completed":23,"skipped":450,"failed":0}
SSS
------------------------------
{"msg":"FAILED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","total":-1,"completed":15,"skipped":444,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]"]}
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 22 13:48:37.757: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Apr 22 13:48:38.271: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Apr 22 13:48:41.296: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should be able to deny custom resource creation, update and deletion [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
Apr 22 13:48:41.300: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Registering the custom resource webhook via the AdmissionRegistration API
STEP: Creating a custom resource that should be denied by the webhook
STEP: Creating a custom resource whose deletion would be denied by the webhook
STEP: Updating the custom resource with disallowed data should be denied
STEP: Deleting the custom resource should be denied
STEP: Remove the offending key and value from the custom resource data
STEP: Deleting the updated custom resource should be successful
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 22 13:48:44.435: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-6592" for this suite.
STEP: Destroying namespace "webhook-6592-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
•
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","total":-1,"completed":16,"skipped":444,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]"]}
[BeforeEach] [sig-storage] Projected configMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 22 13:48:44.498: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating configMap with name projected-configmap-test-volume-map-68adbb87-89e8-4564-b321-d91e654d7392
STEP: Creating a pod to test consume configMaps
Apr 22 13:48:44.600: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-63a12f6e-467d-4da0-aad2-443764fc90fd" in namespace "projected-3589" to be "Succeeded or Failed"
Apr 22 13:48:44.606: INFO: Pod "pod-projected-configmaps-63a12f6e-467d-4da0-aad2-443764fc90fd": Phase="Pending", Reason="", readiness=false. Elapsed: 5.992034ms
Apr 22 13:48:46.611: INFO: Pod "pod-projected-configmaps-63a12f6e-467d-4da0-aad2-443764fc90fd": Phase="Running", Reason="", readiness=false. Elapsed: 2.010418065s
Apr 22 13:48:48.614: INFO: Pod "pod-projected-configmaps-63a12f6e-467d-4da0-aad2-443764fc90fd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.014207678s
STEP: Saw pod success
Apr 22 13:48:48.614: INFO: Pod "pod-projected-configmaps-63a12f6e-467d-4da0-aad2-443764fc90fd" satisfied condition "Succeeded or Failed"
Apr 22 13:48:48.617: INFO: Trying to get logs from node k8s-upgrade-and-conformance-7gf7we-md-0-7868f4c856-k4p2k pod pod-projected-configmaps-63a12f6e-467d-4da0-aad2-443764fc90fd container agnhost-container: <nil>
STEP: delete the pod
Apr 22 13:48:48.628: INFO: Waiting for pod pod-projected-configmaps-63a12f6e-467d-4da0-aad2-443764fc90fd to disappear
Apr 22 13:48:48.630: INFO: Pod pod-projected-configmaps-63a12f6e-467d-4da0-aad2-443764fc90fd no longer exists
[AfterEach] [sig-storage] Projected configMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 22 13:48:48.630: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3589" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":-1,"completed":17,"skipped":444,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-node] PodTemplates
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 22 13:48:48.674: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename podtemplate
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should delete a collection of pod templates [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Create set of pod templates
Apr 22 13:48:48.704: INFO: created test-podtemplate-1
Apr 22 13:48:48.708: INFO: created test-podtemplate-2
Apr 22 13:48:48.712: INFO: created test-podtemplate-3
STEP: get a list of pod templates with a label in the current namespace
STEP: delete collection of pod templates
Apr 22 13:48:48.715: INFO: requesting DeleteCollection of pod templates
STEP: check that the list of pod templates matches the requested quantity
Apr 22 13:48:48.727: INFO: requesting list of pod templates to confirm quantity
[AfterEach] [sig-node] PodTemplates
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 22 13:48:48.731: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "podtemplate-5166" for this suite.
•
------------------------------
{"msg":"PASSED [sig-node] PodTemplates should delete a collection of pod templates [Conformance]","total":-1,"completed":18,"skipped":472,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]"]}
SSSSSSS
------------------------------
[BeforeEach] [sig-storage] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 22 13:48:48.757: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating configMap with name configmap-test-volume-deaf733c-303e-4a82-b356-be6097d0e702
STEP: Creating a pod to test consume configMaps
Apr 22 13:48:48.789: INFO: Waiting up to 5m0s for pod "pod-configmaps-b45a1c95-747c-4e6f-9791-86d44c89d865" in namespace "configmap-13" to be "Succeeded or Failed"
Apr 22 13:48:48.799: INFO: Pod "pod-configmaps-b45a1c95-747c-4e6f-9791-86d44c89d865": Phase="Pending", Reason="", readiness=false. Elapsed: 10.609354ms
Apr 22 13:48:50.803: INFO: Pod "pod-configmaps-b45a1c95-747c-4e6f-9791-86d44c89d865": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014655491s
Apr 22 13:48:52.807: INFO: Pod "pod-configmaps-b45a1c95-747c-4e6f-9791-86d44c89d865": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.018651114s
STEP: Saw pod success
Apr 22 13:48:52.807: INFO: Pod "pod-configmaps-b45a1c95-747c-4e6f-9791-86d44c89d865" satisfied condition "Succeeded or Failed"
Apr 22 13:48:52.810: INFO: Trying to get logs from node k8s-upgrade-and-conformance-7gf7we-md-0-7868f4c856-k4p2k pod pod-configmaps-b45a1c95-747c-4e6f-9791-86d44c89d865 container agnhost-container: <nil>
STEP: delete the pod
Apr 22 13:48:52.826: INFO: Waiting for pod pod-configmaps-b45a1c95-747c-4e6f-9791-86d44c89d865 to disappear
Apr 22 13:48:52.829: INFO: Pod pod-configmaps-b45a1c95-747c-4e6f-9791-86d44c89d865 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 22 13:48:52.829: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-13" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":-1,"completed":19,"skipped":479,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]"]}
SSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-storage] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 22 13:48:52.863: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should be immutable if `immutable` field is set [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[AfterEach] [sig-storage] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 22 13:48:52.922: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-7846" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be immutable if `immutable` field is set [Conformance]","total":-1,"completed":20,"skipped":496,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-node] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 22 13:48:53.002: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating configMap configmap-9424/configmap-test-52b3d0db-cb9e-4c97-b635-96b783a9e803
STEP: Creating a pod to test consume configMaps
Apr 22 13:48:53.036: INFO: Waiting up to 5m0s for pod "pod-configmaps-d9f6badf-6c6a-466c-a2e8-05c7c01a584f" in namespace "configmap-9424" to be "Succeeded or Failed"
Apr 22 13:48:53.040: INFO: Pod "pod-configmaps-d9f6badf-6c6a-466c-a2e8-05c7c01a584f": Phase="Pending", Reason="", readiness=false. Elapsed: 3.866738ms
Apr 22 13:48:55.044: INFO: Pod "pod-configmaps-d9f6badf-6c6a-466c-a2e8-05c7c01a584f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007982539s
Apr 22 13:48:57.049: INFO: Pod "pod-configmaps-d9f6badf-6c6a-466c-a2e8-05c7c01a584f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013335943s
STEP: Saw pod success
Apr 22 13:48:57.049: INFO: Pod "pod-configmaps-d9f6badf-6c6a-466c-a2e8-05c7c01a584f" satisfied condition "Succeeded or Failed"
Apr 22 13:48:57.052: INFO: Trying to get logs from node k8s-upgrade-and-conformance-7gf7we-md-0-7868f4c856-k4p2k pod pod-configmaps-d9f6badf-6c6a-466c-a2e8-05c7c01a584f container env-test: <nil>
STEP: delete the pod
Apr 22 13:48:57.066: INFO: Waiting for pod pod-configmaps-d9f6badf-6c6a-466c-a2e8-05c7c01a584f to disappear
Apr 22 13:48:57.068: INFO: Pod pod-configmaps-d9f6badf-6c6a-466c-a2e8-05c7c01a584f no longer exists
[AfterEach] [sig-node] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 22 13:48:57.068: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-9424" for this suite.
�[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance]","total":-1,"completed":21,"skipped":546,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]"]} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Apr 22 13:48:57.091: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename webhook �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace �[1mSTEP�[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 �[1mSTEP�[0m: Setting up server cert �[1mSTEP�[0m: Create role binding to let webhook read extension-apiserver-authentication �[1mSTEP�[0m: Deploying the webhook pod �[1mSTEP�[0m: Wait for the deployment to be ready Apr 22 13:48:57.419: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set �[1mSTEP�[0m: Deploying the webhook service �[1mSTEP�[0m: Verifying the service has paired with the endpoint Apr 22 13:49:00.439: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] listing mutating webhooks should work [Conformance] 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: Listing all of the created validation webhooks STEP: Creating a configMap that should be mutated STEP: Deleting the collection of validation webhooks STEP: Creating a configMap that should not be mutated [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 22 13:49:00.558: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-4102" for this suite. STEP: Destroying namespace "webhook-4102-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","total":-1,"completed":22,"skipped":554,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]"]} SSS ------------------------------ [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 22 13:49:00.620: INFO: >>> kubeConfig: /tmp/kubeconfig STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace STEP: 
Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:244 [It] should check if kubectl diff finds a difference for Deployments [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: create deployment with httpd image Apr 22 13:49:00.660: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-7021 create -f -' Apr 22 13:49:01.470: INFO: stderr: "" Apr 22 13:49:01.470: INFO: stdout: "deployment.apps/httpd-deployment created\n" STEP: verify diff finds difference between live and declared image Apr 22 13:49:01.470: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-7021 diff -f -' Apr 22 13:49:01.755: INFO: rc: 1 Apr 22 13:49:01.755: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-7021 delete -f -' Apr 22 13:49:01.850: INFO: stderr: "" Apr 22 13:49:01.850: INFO: stdout: "deployment.apps \"httpd-deployment\" deleted\n" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 22 13:49:01.850: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-7021" for this suite. 
• ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl diff should check if kubectl diff finds a difference for Deployments [Conformance]","total":-1,"completed":23,"skipped":557,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 22 13:49:01.956: INFO: >>> kubeConfig: /tmp/kubeconfig STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-node] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/init_container.go:162 [It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: creating the pod Apr 22 13:49:01.979: INFO: PodSpec: initContainers in 
spec.initContainers [AfterEach] [sig-node] InitContainer [NodeConformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 22 13:49:06.248: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-7269" for this suite. • ------------------------------ {"msg":"PASSED [sig-node] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]","total":-1,"completed":24,"skipped":598,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 22 13:49:06.286: INFO: >>> kubeConfig: /tmp/kubeconfig STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: Creating projection with secret that has name 
projected-secret-test-8023c666-b655-44f5-aac7-5b20c8c604b2 STEP: Creating a pod to test consume secrets Apr 22 13:49:06.319: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-00579c9c-91d3-420d-ab76-f36f5b95cc1c" in namespace "projected-114" to be "Succeeded or Failed" Apr 22 13:49:06.322: INFO: Pod "pod-projected-secrets-00579c9c-91d3-420d-ab76-f36f5b95cc1c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.861292ms Apr 22 13:49:08.331: INFO: Pod "pod-projected-secrets-00579c9c-91d3-420d-ab76-f36f5b95cc1c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011807335s Apr 22 13:49:10.335: INFO: Pod "pod-projected-secrets-00579c9c-91d3-420d-ab76-f36f5b95cc1c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.015890756s STEP: Saw pod success Apr 22 13:49:10.335: INFO: Pod "pod-projected-secrets-00579c9c-91d3-420d-ab76-f36f5b95cc1c" satisfied condition "Succeeded or Failed" Apr 22 13:49:10.338: INFO: Trying to get logs from node k8s-upgrade-and-conformance-7gf7we-md-0-7868f4c856-k4p2k pod pod-projected-secrets-00579c9c-91d3-420d-ab76-f36f5b95cc1c container projected-secret-volume-test: <nil> STEP: delete the pod Apr 22 13:49:10.358: INFO: Waiting for pod pod-projected-secrets-00579c9c-91d3-420d-ab76-f36f5b95cc1c to disappear Apr 22 13:49:10.361: INFO: Pod pod-projected-secrets-00579c9c-91d3-420d-ab76-f36f5b95cc1c no longer exists [AfterEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 22 13:49:10.361: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-114" for this suite. 
• ------------------------------ {"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":25,"skipped":620,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 22 13:49:10.442: INFO: >>> kubeConfig: /tmp/kubeconfig STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:89 [It] should validate Deployment Status endpoints [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: creating a Deployment Apr 22 
13:49:10.472: INFO: Creating simple deployment test-deployment-2z9wq Apr 22 13:49:10.486: INFO: deployment "test-deployment-2z9wq" doesn't have the required revision set STEP: Getting /status Apr 22 13:49:12.506: INFO: Deployment test-deployment-2z9wq has Conditions: [{Available True 2022-04-22 13:49:11 +0000 UTC 2022-04-22 13:49:11 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2022-04-22 13:49:11 +0000 UTC 2022-04-22 13:49:10 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-deployment-2z9wq-764bc7c4b7" has successfully progressed.}] STEP: updating Deployment Status Apr 22 13:49:12.514: INFO: updatedStatus.Conditions: []v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:time.Date(2022, time.April, 22, 13, 49, 11, 0, time.Local), LastTransitionTime:time.Date(2022, time.April, 22, 13, 49, 11, 0, time.Local), Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.April, 22, 13, 49, 11, 0, time.Local), LastTransitionTime:time.Date(2022, time.April, 22, 13, 49, 10, 0, time.Local), Reason:"NewReplicaSetAvailable", Message:"ReplicaSet \"test-deployment-2z9wq-764bc7c4b7\" has successfully progressed."}, v1.DeploymentCondition{Type:"StatusUpdate", Status:"True", LastUpdateTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), LastTransitionTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Reason:"E2E", Message:"Set from e2e test"}} STEP: watching for the Deployment status to be updated Apr 22 13:49:12.517: INFO: Observed &Deployment event: ADDED Apr 22 13:49:12.517: INFO: Observed Deployment test-deployment-2z9wq in namespace deployment-3654 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Progressing True 2022-04-22 13:49:10 +0000 UTC 2022-04-22 13:49:10 +0000 UTC NewReplicaSetCreated Created new replica 
set "test-deployment-2z9wq-764bc7c4b7"} Apr 22 13:49:12.517: INFO: Observed &Deployment event: MODIFIED Apr 22 13:49:12.517: INFO: Observed Deployment test-deployment-2z9wq in namespace deployment-3654 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Progressing True 2022-04-22 13:49:10 +0000 UTC 2022-04-22 13:49:10 +0000 UTC NewReplicaSetCreated Created new replica set "test-deployment-2z9wq-764bc7c4b7"} Apr 22 13:49:12.517: INFO: Observed Deployment test-deployment-2z9wq in namespace deployment-3654 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Available False 2022-04-22 13:49:10 +0000 UTC 2022-04-22 13:49:10 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} Apr 22 13:49:12.518: INFO: Observed &Deployment event: MODIFIED Apr 22 13:49:12.518: INFO: Observed Deployment test-deployment-2z9wq in namespace deployment-3654 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Available False 2022-04-22 13:49:10 +0000 UTC 2022-04-22 13:49:10 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} Apr 22 13:49:12.518: INFO: Observed Deployment test-deployment-2z9wq in namespace deployment-3654 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Progressing True 2022-04-22 13:49:10 +0000 UTC 2022-04-22 13:49:10 +0000 UTC ReplicaSetUpdated ReplicaSet "test-deployment-2z9wq-764bc7c4b7" is progressing.} Apr 22 13:49:12.518: INFO: Observed &Deployment event: MODIFIED Apr 22 13:49:12.518: INFO: Observed Deployment test-deployment-2z9wq in namespace deployment-3654 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Available True 2022-04-22 13:49:11 +0000 UTC 2022-04-22 13:49:11 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} Apr 22 13:49:12.518: INFO: Observed Deployment test-deployment-2z9wq in namespace deployment-3654 with annotations: 
map[deployment.kubernetes.io/revision:1] & Conditions: {Progressing True 2022-04-22 13:49:11 +0000 UTC 2022-04-22 13:49:10 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-deployment-2z9wq-764bc7c4b7" has successfully progressed.} Apr 22 13:49:12.518: INFO: Observed &Deployment event: MODIFIED Apr 22 13:49:12.518: INFO: Observed Deployment test-deployment-2z9wq in namespace deployment-3654 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Available True 2022-04-22 13:49:11 +0000 UTC 2022-04-22 13:49:11 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} Apr 22 13:49:12.518: INFO: Observed Deployment test-deployment-2z9wq in namespace deployment-3654 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Progressing True 2022-04-22 13:49:11 +0000 UTC 2022-04-22 13:49:10 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-deployment-2z9wq-764bc7c4b7" has successfully progressed.} Apr 22 13:49:12.518: INFO: Found Deployment test-deployment-2z9wq in namespace deployment-3654 with labels: map[e2e:testing name:httpd] annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {StatusUpdate True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC E2E Set from e2e test} Apr 22 13:49:12.518: INFO: Deployment test-deployment-2z9wq has an updated status STEP: patching the Statefulset Status Apr 22 13:49:12.518: INFO: Patch payload: {"status":{"conditions":[{"type":"StatusPatched","status":"True"}]}} Apr 22 13:49:12.525: INFO: Patched status conditions: []v1.DeploymentCondition{v1.DeploymentCondition{Type:"StatusPatched", Status:"True", LastUpdateTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), LastTransitionTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Reason:"", Message:""}} STEP: watching for the Deployment status to be patched Apr 22 13:49:12.527: INFO: Observed &Deployment event: ADDED Apr 22 13:49:12.527: INFO: Observed deployment test-deployment-2z9wq in 
namespace deployment-3654 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Progressing True 2022-04-22 13:49:10 +0000 UTC 2022-04-22 13:49:10 +0000 UTC NewReplicaSetCreated Created new replica set "test-deployment-2z9wq-764bc7c4b7"} Apr 22 13:49:12.527: INFO: Observed &Deployment event: MODIFIED Apr 22 13:49:12.527: INFO: Observed deployment test-deployment-2z9wq in namespace deployment-3654 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Progressing True 2022-04-22 13:49:10 +0000 UTC 2022-04-22 13:49:10 +0000 UTC NewReplicaSetCreated Created new replica set "test-deployment-2z9wq-764bc7c4b7"} Apr 22 13:49:12.527: INFO: Observed deployment test-deployment-2z9wq in namespace deployment-3654 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Available False 2022-04-22 13:49:10 +0000 UTC 2022-04-22 13:49:10 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} Apr 22 13:49:12.527: INFO: Observed &Deployment event: MODIFIED Apr 22 13:49:12.527: INFO: Observed deployment test-deployment-2z9wq in namespace deployment-3654 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Available False 2022-04-22 13:49:10 +0000 UTC 2022-04-22 13:49:10 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} Apr 22 13:49:12.527: INFO: Observed deployment test-deployment-2z9wq in namespace deployment-3654 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Progressing True 2022-04-22 13:49:10 +0000 UTC 2022-04-22 13:49:10 +0000 UTC ReplicaSetUpdated ReplicaSet "test-deployment-2z9wq-764bc7c4b7" is progressing.} Apr 22 13:49:12.528: INFO: Observed &Deployment event: MODIFIED Apr 22 13:49:12.528: INFO: Observed deployment test-deployment-2z9wq in namespace deployment-3654 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Available True 2022-04-22 13:49:11 +0000 UTC 2022-04-22 13:49:11 +0000 
UTC MinimumReplicasAvailable Deployment has minimum availability.} Apr 22 13:49:12.528: INFO: Observed deployment test-deployment-2z9wq in namespace deployment-3654 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Progressing True 2022-04-22 13:49:11 +0000 UTC 2022-04-22 13:49:10 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-deployment-2z9wq-764bc7c4b7" has successfully progressed.} Apr 22 13:49:12.528: INFO: Observed &Deployment event: MODIFIED Apr 22 13:49:12.528: INFO: Observed deployment test-deployment-2z9wq in namespace deployment-3654 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Available True 2022-04-22 13:49:11 +0000 UTC 2022-04-22 13:49:11 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} Apr 22 13:49:12.528: INFO: Observed deployment test-deployment-2z9wq in namespace deployment-3654 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {Progressing True 2022-04-22 13:49:11 +0000 UTC 2022-04-22 13:49:10 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-deployment-2z9wq-764bc7c4b7" has successfully progressed.} Apr 22 13:49:12.528: INFO: Observed deployment test-deployment-2z9wq in namespace deployment-3654 with annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {StatusUpdate True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC E2E Set from e2e test} Apr 22 13:49:12.528: INFO: Observed &Deployment event: MODIFIED Apr 22 13:49:12.528: INFO: Found deployment test-deployment-2z9wq in namespace deployment-3654 with labels: map[e2e:testing name:httpd] annotations: map[deployment.kubernetes.io/revision:1] & Conditions: {StatusPatched True 0001-01-01 00:00:00 +0000 UTC 0001-01-01 00:00:00 +0000 UTC } Apr 22 13:49:12.528: INFO: Deployment test-deployment-2z9wq has a patched status [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:83 Apr 22 
13:49:12.531: INFO: Deployment "test-deployment-2z9wq": &Deployment{ObjectMeta:{test-deployment-2z9wq deployment-3654 0f487ab1-fa01-44f7-8f49-8dd9cd056c70 7759 1 2022-04-22 13:49:10 +0000 UTC <nil> <nil> map[e2e:testing name:httpd] map[deployment.kubernetes.io/revision:1] [] [] [{e2e.test Update apps/v1 2022-04-22 13:49:10 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:e2e":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:e2e":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2022-04-22 13:49:11 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:updatedReplicas":{}}} status} {e2e.test Update apps/v1 2022-04-22 13:49:12 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"StatusPatched\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:status":{},"f:type":{}}}}} status}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{e2e: testing,name: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[e2e:testing name:httpd] map[] [] [] []} {[] [] [{httpd k8s.gcr.io/e2e-test-images/httpd:2.4.38-2 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent 
SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc003fc7378 <nil> ClusterFirst map[] <nil> false false false <nil> &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] <nil> nil [] <nil> <nil> <nil> map[] [] <nil> nil}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:StatusPatched,Status:True,Reason:,Message:,LastUpdateTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:0001-01-01 00:00:00 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} Apr 22 13:49:12.537: INFO: New ReplicaSet "test-deployment-2z9wq-764bc7c4b7" of Deployment "test-deployment-2z9wq": &ReplicaSet{ObjectMeta:{test-deployment-2z9wq-764bc7c4b7 deployment-3654 b23fc211-a46d-4ac1-a9b9-f48262c0837f 7754 1 2022-04-22 13:49:10 +0000 UTC <nil> <nil> map[e2e:testing name:httpd pod-template-hash:764bc7c4b7] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-deployment-2z9wq 0f487ab1-fa01-44f7-8f49-8dd9cd056c70 0xc003fc7707 0xc003fc7708}] [] [{kube-controller-manager Update apps/v1 2022-04-22 13:49:10 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:e2e":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"0f487ab1-fa01-44f7-8f49-8dd9cd056c70\"}":{}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:e2e":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2022-04-22 13:49:11 +0000 UTC FieldsV1 {"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}} status}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{e2e: testing,name: httpd,pod-template-hash: 764bc7c4b7,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[e2e:testing name:httpd pod-template-hash:764bc7c4b7] map[] [] [] []} {[] [] [{httpd k8s.gcr.io/e2e-test-images/httpd:2.4.38-2 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc003fc77b8 <nil> ClusterFirst map[] <nil> false false false <nil> 
&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] <nil> nil [] <nil> <nil> <nil> map[] [] <nil> nil}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Apr 22 13:49:12.540: INFO: Pod "test-deployment-2z9wq-764bc7c4b7-6kscj" is available: &Pod{ObjectMeta:{test-deployment-2z9wq-764bc7c4b7-6kscj test-deployment-2z9wq-764bc7c4b7- deployment-3654 6a56fd9c-c6ab-4f46-9788-0e788ff05b16 7753 0 2022-04-22 13:49:10 +0000 UTC <nil> <nil> map[e2e:testing name:httpd pod-template-hash:764bc7c4b7] map[] [{apps/v1 ReplicaSet test-deployment-2z9wq-764bc7c4b7 b23fc211-a46d-4ac1-a9b9-f48262c0837f 0xc003a81b37 0xc003a81b38}] [] [{kube-controller-manager Update v1 2022-04-22 13:49:10 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:e2e":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b23fc211-a46d-4ac1-a9b9-f48262c0837f\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2022-04-22 13:49:11 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"192.168.2.42\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-5rkn5,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceLi
st{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-5rkn5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k8s-upgrade-and-conformance-7gf7we-md-0-7868f4c856-k4p2k,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralConta
iner{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-22 13:49:10 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-22 13:49:11 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-22 13:49:11 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-22 13:49:10 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.7,PodIP:192.168.2.42,StartTime:2022-04-22 13:49:10 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2022-04-22 13:49:11 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,ImageID:k8s.gcr.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3,ContainerID:containerd://6d78eb2712f8bba4acf0810592a37f6045b87b25b4f435f1478ac80eb9788709,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:192.168.2.42,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 22 13:49:12.540: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-3654" for this suite. 
• ------------------------------ {"msg":"PASSED [sig-apps] Deployment should validate Deployment Status endpoints [Conformance]","total":-1,"completed":26,"skipped":672,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]"]} ------------------------------ [BeforeEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 22 13:49:12.584: INFO: >>> kubeConfig: /tmp/kubeconfig STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:54 [It] should serve a basic image on each replica with a public image [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: Creating replication controller my-hostname-basic-0a79e578-f941-40c1-86ed-4c928a765f5a Apr 22 13:49:12.616: INFO: Pod name my-hostname-basic-0a79e578-f941-40c1-86ed-4c928a765f5a: Found 0 pods out of 1 Apr 22 13:49:17.622: INFO: Pod name my-hostname-basic-0a79e578-f941-40c1-86ed-4c928a765f5a: Found 1 pods out of 1 Apr 22 13:49:17.622: INFO: Ensuring all pods for ReplicationController 
"my-hostname-basic-0a79e578-f941-40c1-86ed-4c928a765f5a" are running Apr 22 13:49:17.625: INFO: Pod "my-hostname-basic-0a79e578-f941-40c1-86ed-4c928a765f5a-sj29q" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-22 13:49:12 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-22 13:49:13 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-22 13:49:13 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-22 13:49:12 +0000 UTC Reason: Message:}]) Apr 22 13:49:17.625: INFO: Trying to dial the pod Apr 22 13:49:22.636: INFO: Controller my-hostname-basic-0a79e578-f941-40c1-86ed-4c928a765f5a: Got expected result from replica 1 [my-hostname-basic-0a79e578-f941-40c1-86ed-4c928a765f5a-sj29q]: "my-hostname-basic-0a79e578-f941-40c1-86ed-4c928a765f5a-sj29q", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicationController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 22 13:49:22.636: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-2888" for this suite. 
• ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance]","total":-1,"completed":27,"skipped":692,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]"]} ------------------------------ [BeforeEach] [sig-node] Lease /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 22 13:49:22.650: INFO: >>> kubeConfig: /tmp/kubeconfig STEP: Building a namespace api object, basename lease-test STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [It] lease API should be available [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 [AfterEach] [sig-node] Lease /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 22 13:49:22.713: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "lease-test-1057" for this suite. 
• ------------------------------ {"msg":"PASSED [sig-node] Lease lease API should be available [Conformance]","total":-1,"completed":28,"skipped":695,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]"]} ------------------------------ [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 22 13:49:22.727: INFO: >>> kubeConfig: /tmp/kubeconfig STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:56 [It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 [AfterEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 22 13:50:22.756: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-3803" for this suite. 
• ------------------------------ {"msg":"PASSED [sig-node] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]","total":-1,"completed":29,"skipped":698,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]"]} ------------------------------ [BeforeEach] [sig-instrumentation] Events /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 22 13:50:22.774: INFO: >>> kubeConfig: /tmp/kubeconfig STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should ensure that an event can be fetched, patched, deleted, and listed [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: creating a test event STEP: listing all events in all namespaces STEP: patching the test event STEP: fetching the test event STEP: deleting the test event STEP: listing all events in all namespaces [AfterEach] [sig-instrumentation] Events /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 22 13:50:22.840: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "events-2275" for this suite. 
• ------------------------------ {"msg":"PASSED [sig-instrumentation] Events should ensure that an event can be fetched, patched, deleted, and listed [Conformance]","total":-1,"completed":30,"skipped":700,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]"]} ------------------------------ [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 22 13:50:22.889: INFO: >>> kubeConfig: /tmp/kubeconfig STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46 [It] should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 Apr 22 13:50:22.921: INFO: Waiting up to 5m0s for pod "alpine-nnp-false-432eb4b2-9c2b-449f-b802-f37c377557f6" in namespace "security-context-test-6567" to be "Succeeded or Failed" Apr 22 13:50:22.925: INFO: Pod 
"alpine-nnp-false-432eb4b2-9c2b-449f-b802-f37c377557f6": Phase="Pending", Reason="", readiness=false. Elapsed: 3.514031ms Apr 22 13:50:24.929: INFO: Pod "alpine-nnp-false-432eb4b2-9c2b-449f-b802-f37c377557f6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008222464s Apr 22 13:50:26.933: INFO: Pod "alpine-nnp-false-432eb4b2-9c2b-449f-b802-f37c377557f6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012128511s Apr 22 13:50:26.933: INFO: Pod "alpine-nnp-false-432eb4b2-9c2b-449f-b802-f37c377557f6" satisfied condition "Succeeded or Failed" [AfterEach] [sig-node] Security Context /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 22 13:50:26.946: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-6567" for this suite. • ------------------------------ {"msg":"PASSED [sig-node] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":31,"skipped":731,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]"]} ------------------------------ [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 22 13:50:26.973: INFO: >>> kubeConfig: /tmp/kubeconfig STEP: Building a namespace api object, basename emptydir 
STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: Creating a pod to test emptydir 0666 on tmpfs Apr 22 13:50:27.003: INFO: Waiting up to 5m0s for pod "pod-a4c8e451-3db4-4dbe-949d-ac476e9bde5a" in namespace "emptydir-7986" to be "Succeeded or Failed" Apr 22 13:50:27.007: INFO: Pod "pod-a4c8e451-3db4-4dbe-949d-ac476e9bde5a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.895105ms Apr 22 13:50:29.011: INFO: Pod "pod-a4c8e451-3db4-4dbe-949d-ac476e9bde5a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007525799s Apr 22 13:50:31.016: INFO: Pod "pod-a4c8e451-3db4-4dbe-949d-ac476e9bde5a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012548008s STEP: Saw pod success Apr 22 13:50:31.016: INFO: Pod "pod-a4c8e451-3db4-4dbe-949d-ac476e9bde5a" satisfied condition "Succeeded or Failed" Apr 22 13:50:31.019: INFO: Trying to get logs from node k8s-upgrade-and-conformance-7gf7we-worker-wwgoid pod pod-a4c8e451-3db4-4dbe-949d-ac476e9bde5a container test-container: <nil> STEP: delete the pod Apr 22 13:50:31.033: INFO: Waiting for pod pod-a4c8e451-3db4-4dbe-949d-ac476e9bde5a to disappear Apr 22 13:50:31.036: INFO: Pod pod-a4c8e451-3db4-4dbe-949d-ac476e9bde5a no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 22 13:50:31.036: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-7986" for this suite. 
• ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":32,"skipped":745,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]"]} ------------------------------ [BeforeEach] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 22 13:50:31.099: INFO: >>> kubeConfig: /tmp/kubeconfig STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should delete pods created by rc when not orphaning [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: create the rc STEP: delete the rc STEP: wait for all pods to be garbage collected STEP: Gathering metrics Apr 22 13:50:41.163: INFO: The status of Pod kube-controller-manager-k8s-upgrade-and-conformance-7gf7we-control-plane-jsdzr 
is Running (Ready = true) Apr 22 13:50:41.265: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 22 13:50:41.265: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-1024" for this suite. 
• ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance]","total":-1,"completed":33,"skipped":792,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]"]} ------------------------------ [BeforeEach] [sig-apps] DisruptionController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 22 13:50:41.302: INFO: >>> kubeConfig: /tmp/kubeconfig STEP: Building a namespace api object, basename disruption STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-apps] DisruptionController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/disruption.go:69 [It] should observe PodDisruptionBudget status updated [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: Waiting for the pdb to be processed STEP: Waiting for all pods to be running Apr 22 13:50:43.360: INFO: running pods: 0 < 3 [AfterEach] [sig-apps] DisruptionController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 22 13:50:45.367: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: 
Destroying namespace "disruption-7759" for this suite. • ------------------------------ {"msg":"PASSED [sig-apps] DisruptionController should observe PodDisruptionBudget status updated [Conformance]","total":-1,"completed":34,"skipped":811,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]"]} ------------------------------ [BeforeEach] [sig-api-machinery] server version /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 22 13:50:45.407: INFO: >>> kubeConfig: /tmp/kubeconfig STEP: Building a namespace api object, basename server-version STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should find the server version [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: Request ServerVersion STEP: Confirm major version Apr 22 13:50:45.434: INFO: Major version: 1 STEP: Confirm minor version Apr 22 13:50:45.434: INFO: cleanMinorVersion: 23 Apr 22 13:50:45.434: INFO: Minor version: 23 [AfterEach] [sig-api-machinery] server version /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 22 13:50:45.434: INFO: Waiting up to 3m0s for all (but 0) 
nodes to be ready STEP: Destroying namespace "server-version-8841" for this suite. • ------------------------------ {"msg":"PASSED [sig-api-machinery] server version should find the server version [Conformance]","total":-1,"completed":35,"skipped":835,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]"]} ------------------------------ [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 22 13:48:39.439: INFO: >>> kubeConfig: /tmp/kubeconfig STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:56 [It] should have monotonically increasing restart count [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: Creating pod liveness-d1914eac-0fb7-47fc-a0f7-9a687264b4e5 in namespace container-probe-2737 Apr 22 13:48:41.484: INFO: Started pod liveness-d1914eac-0fb7-47fc-a0f7-9a687264b4e5 in namespace container-probe-2737 
STEP: checking the pod's current state and verifying that restartCount is present Apr 22 13:48:41.489: INFO: Initial restart count of pod liveness-d1914eac-0fb7-47fc-a0f7-9a687264b4e5 is 0 Apr 22 13:49:01.569: INFO: Restart count of pod container-probe-2737/liveness-d1914eac-0fb7-47fc-a0f7-9a687264b4e5 is now 1 (20.080673924s elapsed) Apr 22 13:49:21.629: INFO: Restart count of pod container-probe-2737/liveness-d1914eac-0fb7-47fc-a0f7-9a687264b4e5 is now 2 (40.140325303s elapsed) Apr 22 13:49:41.673: INFO: Restart count of pod container-probe-2737/liveness-d1914eac-0fb7-47fc-a0f7-9a687264b4e5 is now 3 (1m0.184681681s elapsed) Apr 22 13:50:01.719: INFO: Restart count of pod container-probe-2737/liveness-d1914eac-0fb7-47fc-a0f7-9a687264b4e5 is now 4 (1m20.230543572s elapsed) Apr 22 13:51:01.865: INFO: Restart count of pod container-probe-2737/liveness-d1914eac-0fb7-47fc-a0f7-9a687264b4e5 is now 5 (2m20.375934781s elapsed) STEP: deleting the pod [AfterEach] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 22 13:51:01.874: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-2737" for this suite. 
• [SLOW TEST:142.448 seconds] [sig-node] Probing container /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23 should have monotonically increasing restart count [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 ------------------------------ {"msg":"PASSED [sig-node] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","total":-1,"completed":24,"skipped":453,"failed":0} ------------------------------ [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 22 13:51:01.893: INFO: >>> kubeConfig: /tmp/kubeconfig STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:244 [BeforeEach] Kubectl run pod /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1539 [It] should create a pod from an image when restart is Never [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: running the image k8s.gcr.io/e2e-test-images/httpd:2.4.38-2 Apr 22 13:51:01.933: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-7772 run e2e-test-httpd-pod --restart=Never --pod-running-timeout=2m0s --image=k8s.gcr.io/e2e-test-images/httpd:2.4.38-2' Apr 22 13:51:02.014: INFO: stderr: "" Apr 22 13:51:02.014: INFO: 
stdout: "pod/e2e-test-httpd-pod created\n" STEP: verifying the pod e2e-test-httpd-pod was created [AfterEach] Kubectl run pod /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1543 Apr 22 13:51:02.022: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-7772 delete pods e2e-test-httpd-pod' Apr 22 13:51:03.840: INFO: stderr: "" Apr 22 13:51:03.840: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 22 13:51:03.840: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-7772" for this suite. • ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never [Conformance]","total":-1,"completed":25,"skipped":455,"failed":0} ------------------------------ [BeforeEach] [sig-node] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 22 13:51:03.883: INFO: >>> kubeConfig: /tmp/kubeconfig STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should be consumable from pods in env vars [NodeConformance] [Conformance] 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: Creating secret with name secret-test-93178482-871b-4046-a892-8ae7556b82b5 STEP: Creating a pod to test consume secrets Apr 22 13:51:03.916: INFO: Waiting up to 5m0s for pod "pod-secrets-30628155-ddf2-40c5-b53b-60e49a634360" in namespace "secrets-1616" to be "Succeeded or Failed" Apr 22 13:51:03.919: INFO: Pod "pod-secrets-30628155-ddf2-40c5-b53b-60e49a634360": Phase="Pending", Reason="", readiness=false. Elapsed: 3.06349ms Apr 22 13:51:05.924: INFO: Pod "pod-secrets-30628155-ddf2-40c5-b53b-60e49a634360": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007957859s Apr 22 13:51:07.929: INFO: Pod "pod-secrets-30628155-ddf2-40c5-b53b-60e49a634360": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01298063s STEP: Saw pod success Apr 22 13:51:07.929: INFO: Pod "pod-secrets-30628155-ddf2-40c5-b53b-60e49a634360" satisfied condition "Succeeded or Failed" Apr 22 13:51:07.932: INFO: Trying to get logs from node k8s-upgrade-and-conformance-7gf7we-md-0-7868f4c856-k4p2k pod pod-secrets-30628155-ddf2-40c5-b53b-60e49a634360 container secret-env-test: <nil> STEP: delete the pod Apr 22 13:51:07.951: INFO: Waiting for pod pod-secrets-30628155-ddf2-40c5-b53b-60e49a634360 to disappear Apr 22 13:51:07.954: INFO: Pod pod-secrets-30628155-ddf2-40c5-b53b-60e49a634360 no longer exists [AfterEach] [sig-node] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 22 13:51:07.954: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-1616" for this suite. 
------------------------------
{"msg":"PASSED [sig-node] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance]","total":-1,"completed":26,"skipped":484,"failed":0}
------------------------------
[BeforeEach] [sig-node] Kubelet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 22 13:51:08.006: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-node] Kubelet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/kubelet.go:38
[It] should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
Apr 22 13:51:08.034: INFO: The status of Pod busybox-readonly-fsab3e7dd3-9b7e-4372-922c-7fcbe2301038 is Pending, waiting for it to be Running (with Ready = true)
Apr 22 13:51:10.039: INFO: The status of Pod busybox-readonly-fsab3e7dd3-9b7e-4372-922c-7fcbe2301038 is Running (Ready = true)
[AfterEach] [sig-node] Kubelet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 22 13:51:10.047: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-7305" for this suite.
------------------------------
{"msg":"PASSED [sig-node] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":27,"skipped":522,"failed":0}
------------------------------
[BeforeEach] [sig-auth] ServiceAccounts
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 22 13:51:10.061: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should guarantee kube-root-ca.crt exist in any namespace [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
Apr 22 13:51:10.090: INFO: Got root ca configmap in namespace "svcaccounts-2390"
Apr 22 13:51:10.093: INFO: Deleted root ca configmap in namespace "svcaccounts-2390"
STEP: waiting for a new root ca configmap created
Apr 22 13:51:10.598: INFO: Recreated root ca configmap in namespace "svcaccounts-2390"
Apr 22 13:51:10.602: INFO: Updated root ca configmap in namespace "svcaccounts-2390"
STEP: waiting for the root ca configmap reconciled
Apr 22 13:51:11.107: INFO: Reconciled root ca configmap in namespace "svcaccounts-2390"
[AfterEach] [sig-auth] ServiceAccounts
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 22 13:51:11.107: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-2390" for this suite.
------------------------------
{"msg":"PASSED [sig-auth] ServiceAccounts should guarantee kube-root-ca.crt exist in any namespace [Conformance]","total":-1,"completed":28,"skipped":526,"failed":0}
------------------------------
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 22 13:50:45.480: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Performing setup for networking test in namespace pod-network-test-6171
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Apr 22 13:50:45.496: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Apr 22 13:50:45.536: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Apr 22 13:50:47.541: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 22 13:50:49.541: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 22 13:50:51.541: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 22 13:50:53.540: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 22 13:50:55.541: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 22 13:50:57.540: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 22 13:50:59.541: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 22 13:51:01.541: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 22 13:51:03.542: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 22 13:51:05.541: INFO: The status of Pod netserver-0 is Running (Ready = true)
Apr 22 13:51:05.550: INFO: The status of Pod netserver-1 is Running (Ready = true)
Apr 22 13:51:05.558: INFO: The status of Pod netserver-2 is Running (Ready = true)
Apr 22 13:51:05.563: INFO: The status of Pod netserver-3 is Running (Ready = true)
STEP: Creating test pods
Apr 22 13:51:07.592: INFO: Setting MaxTries for pod polling to 46 for networking test based on endpoint count 4
Apr 22 13:51:07.592: INFO: Going to poll 192.168.2.48 on port 8081 at least 0 times, with a maximum of 46 tries before failing
Apr 22 13:51:07.594: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 192.168.2.48 8081 | grep -v '^\s*$'] Namespace:pod-network-test-6171 PodName:host-test-container-pod ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Apr 22 13:51:07.594: INFO: >>> kubeConfig: /tmp/kubeconfig
Apr 22 13:51:07.595: INFO: ExecWithOptions: Clientset creation
Apr 22 13:51:07.595: INFO: ExecWithOptions: execute(POST https://172.18.0.3:6443/api/v1/namespaces/pod-network-test-6171/pods/host-test-container-pod/exec?command=%2Fbin%2Fsh&command=-c&command=echo+hostName+%7C+nc+-w+1+-u+192.168.2.48+8081+%7C+grep+-v+%27%5E%5Cs%2A%24%27&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true %!s(MISSING))
Apr 22 13:51:08.688: INFO: Found all 1 expected endpoints: [netserver-0]
Apr 22 13:51:08.688: INFO: Going to poll 192.168.0.18 on port 8081 at least 0 times, with a maximum of 46 tries before failing
Apr 22 13:51:08.691: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 192.168.0.18 8081 | grep -v '^\s*$'] Namespace:pod-network-test-6171 PodName:host-test-container-pod ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Apr 22 13:51:08.691: INFO: >>> kubeConfig: /tmp/kubeconfig
Apr 22 13:51:08.692: INFO: ExecWithOptions: Clientset creation
Apr 22 13:51:08.692: INFO: ExecWithOptions: execute(POST https://172.18.0.3:6443/api/v1/namespaces/pod-network-test-6171/pods/host-test-container-pod/exec?command=%2Fbin%2Fsh&command=-c&command=echo+hostName+%7C+nc+-w+1+-u+192.168.0.18+8081+%7C+grep+-v+%27%5E%5Cs%2A%24%27&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true %!s(MISSING))
Apr 22 13:51:09.763: INFO: Found all 1 expected endpoints: [netserver-1]
Apr 22 13:51:09.763: INFO: Going to poll 192.168.3.22 on port 8081 at least 0 times, with a maximum of 46 tries before failing
Apr 22 13:51:09.766: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 192.168.3.22 8081 | grep -v '^\s*$'] Namespace:pod-network-test-6171 PodName:host-test-container-pod ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Apr 22 13:51:09.766: INFO: >>> kubeConfig: /tmp/kubeconfig
Apr 22 13:51:09.767: INFO: ExecWithOptions: Clientset creation
Apr 22 13:51:09.767: INFO: ExecWithOptions: execute(POST https://172.18.0.3:6443/api/v1/namespaces/pod-network-test-6171/pods/host-test-container-pod/exec?command=%2Fbin%2Fsh&command=-c&command=echo+hostName+%7C+nc+-w+1+-u+192.168.3.22+8081+%7C+grep+-v+%27%5E%5Cs%2A%24%27&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true %!s(MISSING))
Apr 22 13:51:10.850: INFO: Found all 1 expected endpoints: [netserver-2]
Apr 22 13:51:10.850: INFO: Going to poll 192.168.4.17 on port 8081 at least 0 times, with a maximum of 46 tries before failing
Apr 22 13:51:10.856: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 192.168.4.17 8081 | grep -v '^\s*$'] Namespace:pod-network-test-6171 PodName:host-test-container-pod ContainerName:agnhost-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Apr 22 13:51:10.856: INFO: >>> kubeConfig: /tmp/kubeconfig
Apr 22 13:51:10.857: INFO: ExecWithOptions: Clientset creation
Apr 22 13:51:10.857: INFO: ExecWithOptions: execute(POST https://172.18.0.3:6443/api/v1/namespaces/pod-network-test-6171/pods/host-test-container-pod/exec?command=%2Fbin%2Fsh&command=-c&command=echo+hostName+%7C+nc+-w+1+-u+192.168.4.17+8081+%7C+grep+-v+%27%5E%5Cs%2A%24%27&container=agnhost-container&container=agnhost-container&stderr=true&stdout=true %!s(MISSING))
Apr 22 13:51:11.943: INFO: Found all 1 expected endpoints: [netserver-3]
[AfterEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 22 13:51:11.943: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-6171" for this suite.
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":36,"skipped":862,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]"]}
------------------------------
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 22 13:51:11.157: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating a pod to test emptydir 0666 on node default medium
Apr 22 13:51:11.181: INFO: Waiting up to 5m0s for pod "pod-e37c6e96-b898-4c5c-8d81-a7db045eebc7" in namespace "emptydir-7543" to be "Succeeded or Failed"
Apr 22 13:51:11.184: INFO: Pod "pod-e37c6e96-b898-4c5c-8d81-a7db045eebc7": Phase="Pending", Reason="", readiness=false. Elapsed: 3.092007ms
Apr 22 13:51:13.189: INFO: Pod "pod-e37c6e96-b898-4c5c-8d81-a7db045eebc7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008392555s
Apr 22 13:51:15.194: INFO: Pod "pod-e37c6e96-b898-4c5c-8d81-a7db045eebc7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012645624s
STEP: Saw pod success
Apr 22 13:51:15.194: INFO: Pod "pod-e37c6e96-b898-4c5c-8d81-a7db045eebc7" satisfied condition "Succeeded or Failed"
Apr 22 13:51:15.196: INFO: Trying to get logs from node k8s-upgrade-and-conformance-7gf7we-md-0-7868f4c856-kmb2d pod pod-e37c6e96-b898-4c5c-8d81-a7db045eebc7 container test-container: <nil>
STEP: delete the pod
Apr 22 13:51:15.214: INFO: Waiting for pod pod-e37c6e96-b898-4c5c-8d81-a7db045eebc7 to disappear
Apr 22 13:51:15.217: INFO: Pod pod-e37c6e96-b898-4c5c-8d81-a7db045eebc7 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 22 13:51:15.217: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-7543" for this suite.
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":29,"skipped":556,"failed":0}
------------------------------
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 22 13:51:11.967: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41
[It] should provide container's memory request [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating a pod to test downward API volume plugin
Apr 22 13:51:11.997: INFO: Waiting up to 5m0s for pod "downwardapi-volume-f54e198d-e577-4480-8c08-177fab0d8677" in namespace "projected-1581" to be "Succeeded or Failed"
Apr 22 13:51:12.001: INFO: Pod "downwardapi-volume-f54e198d-e577-4480-8c08-177fab0d8677": Phase="Pending", Reason="", readiness=false. Elapsed: 3.627707ms
Apr 22 13:51:14.006: INFO: Pod "downwardapi-volume-f54e198d-e577-4480-8c08-177fab0d8677": Phase="Running", Reason="", readiness=false. Elapsed: 2.007837279s
Apr 22 13:51:16.010: INFO: Pod "downwardapi-volume-f54e198d-e577-4480-8c08-177fab0d8677": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01229097s
STEP: Saw pod success
Apr 22 13:51:16.010: INFO: Pod "downwardapi-volume-f54e198d-e577-4480-8c08-177fab0d8677" satisfied condition "Succeeded or Failed"
Apr 22 13:51:16.014: INFO: Trying to get logs from node k8s-upgrade-and-conformance-7gf7we-worker-wwgoid pod downwardapi-volume-f54e198d-e577-4480-8c08-177fab0d8677 container client-container: <nil>
STEP: delete the pod
Apr 22 13:51:16.033: INFO: Waiting for pod downwardapi-volume-f54e198d-e577-4480-8c08-177fab0d8677 to disappear
Apr 22 13:51:16.035: INFO: Pod downwardapi-volume-f54e198d-e577-4480-8c08-177fab0d8677 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 22 13:51:16.035: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1581" for this suite.
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance]","total":-1,"completed":37,"skipped":867,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]"]}
------------------------------
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 22 13:51:16.091: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:244
[It] should support proxy with --port 0 [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: starting the proxy server
Apr 22 13:51:16.114: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-5316 proxy -p 0 --disable-filter'
STEP: curling proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 22 13:51:16.199: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-5316" for this suite.
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support proxy with --port 0 [Conformance]","total":-1,"completed":38,"skipped":899,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]"]}
------------------------------
[BeforeEach] [sig-apps] ReplicaSet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 22 13:51:15.258: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should serve a basic image on each replica with a public image [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
Apr 22 13:51:15.277: INFO: Creating ReplicaSet my-hostname-basic-0673c1a0-893c-4bbe-9209-3f899b52c0b1
Apr 22 13:51:15.285: INFO: Pod name my-hostname-basic-0673c1a0-893c-4bbe-9209-3f899b52c0b1: Found 0 pods out of 1
Apr 22 13:51:20.298: INFO: Pod name my-hostname-basic-0673c1a0-893c-4bbe-9209-3f899b52c0b1: Found 1 pods out of 1
Apr 22 13:51:20.298: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-0673c1a0-893c-4bbe-9209-3f899b52c0b1" is running
Apr 22 13:51:20.302: INFO: Pod "my-hostname-basic-0673c1a0-893c-4bbe-9209-3f899b52c0b1-ww9h8" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-22 13:51:15 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-22 13:51:16 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-22 13:51:16 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2022-04-22 13:51:15 +0000 UTC Reason: Message:}])
Apr 22 13:51:20.302: INFO: Trying to dial the pod
Apr 22 13:51:25.313: INFO: Controller my-hostname-basic-0673c1a0-893c-4bbe-9209-3f899b52c0b1: Got expected result from replica 1 [my-hostname-basic-0673c1a0-893c-4bbe-9209-3f899b52c0b1-ww9h8]: "my-hostname-basic-0673c1a0-893c-4bbe-9209-3f899b52c0b1-ww9h8", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicaSet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 22 13:51:25.313: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replicaset-2002" for this suite.
------------------------------
{"msg":"PASSED [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance]","total":-1,"completed":30,"skipped":577,"failed":0}
------------------------------
[BeforeEach] [sig-node] Kubelet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 22 13:51:25.330: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-node] Kubelet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/kubelet.go:38
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/kubelet.go:82
[It] should have an terminated reason [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[AfterEach] [sig-node] Kubelet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 22 13:51:29.374: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-3676" for this suite.
------------------------------
{"msg":"PASSED [sig-node] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance]","total":-1,"completed":31,"skipped":582,"failed":0}
------------------------------
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 22 13:51:29.496: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41
[It] should provide podname only [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating a pod to test downward API volume plugin
Apr 22 13:51:29.522: INFO: Waiting up to 5m0s for pod "downwardapi-volume-f2270904-02d8-4bc1-a9f8-76086179d4d6" in namespace "projected-6202" to be "Succeeded or Failed"
Apr 22 13:51:29.526: INFO: Pod "downwardapi-volume-f2270904-02d8-4bc1-a9f8-76086179d4d6": Phase="Pending", Reason="", readiness=false. Elapsed: 3.503045ms
Apr 22 13:51:31.531: INFO: Pod "downwardapi-volume-f2270904-02d8-4bc1-a9f8-76086179d4d6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009145667s
Apr 22 13:51:33.536: INFO: Pod "downwardapi-volume-f2270904-02d8-4bc1-a9f8-76086179d4d6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013779399s
STEP: Saw pod success
Apr 22 13:51:33.536: INFO: Pod "downwardapi-volume-f2270904-02d8-4bc1-a9f8-76086179d4d6" satisfied condition "Succeeded or Failed"
Apr 22 13:51:33.540: INFO: Trying to get logs from node k8s-upgrade-and-conformance-7gf7we-md-0-7868f4c856-k4p2k pod downwardapi-volume-f2270904-02d8-4bc1-a9f8-76086179d4d6 container client-container: <nil>
STEP: delete the pod
Apr 22 13:51:33.557: INFO: Waiting for pod downwardapi-volume-f2270904-02d8-4bc1-a9f8-76086179d4d6 to disappear
Apr 22 13:51:33.559: INFO: Pod downwardapi-volume-f2270904-02d8-4bc1-a9f8-76086179d4d6 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 22 13:51:33.559: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6202" for this suite.
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance]","total":-1,"completed":32,"skipped":669,"failed":0}
------------------------------
[BeforeEach] [sig-storage] Subpath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 22 13:51:16.245: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with configmap pod [Excluded:WindowsDocker] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating pod pod-subpath-test-configmap-gpp7
STEP: Creating a pod to test atomic-volume-subpath
Apr 22 13:51:16.279: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-gpp7" in namespace "subpath-4490" to be "Succeeded or Failed"
Apr 22 13:51:16.285: INFO: Pod "pod-subpath-test-configmap-gpp7": Phase="Pending", Reason="", readiness=false. Elapsed: 5.481064ms
Apr 22 13:51:18.292: INFO: Pod "pod-subpath-test-configmap-gpp7": Phase="Running", Reason="", readiness=true. Elapsed: 2.012893941s
Apr 22 13:51:20.298: INFO: Pod "pod-subpath-test-configmap-gpp7": Phase="Running", Reason="", readiness=true. Elapsed: 4.018395803s
Apr 22 13:51:22.318: INFO: Pod "pod-subpath-test-configmap-gpp7": Phase="Running", Reason="", readiness=true. Elapsed: 6.03922418s
Apr 22 13:51:24.323: INFO: Pod "pod-subpath-test-configmap-gpp7": Phase="Running", Reason="", readiness=true. Elapsed: 8.04367526s
Apr 22 13:51:26.327: INFO: Pod "pod-subpath-test-configmap-gpp7": Phase="Running", Reason="", readiness=true. Elapsed: 10.048274467s
Apr 22 13:51:28.331: INFO: Pod "pod-subpath-test-configmap-gpp7": Phase="Running", Reason="", readiness=true. Elapsed: 12.052051377s
Apr 22 13:51:30.335: INFO: Pod "pod-subpath-test-configmap-gpp7": Phase="Running", Reason="", readiness=true. Elapsed: 14.056093368s
Apr 22 13:51:32.343: INFO: Pod "pod-subpath-test-configmap-gpp7": Phase="Running", Reason="", readiness=true. Elapsed: 16.063991976s
Apr 22 13:51:34.347: INFO: Pod "pod-subpath-test-configmap-gpp7": Phase="Running", Reason="", readiness=true. Elapsed: 18.067702456s
Apr 22 13:51:36.352: INFO: Pod "pod-subpath-test-configmap-gpp7": Phase="Running", Reason="", readiness=true. Elapsed: 20.072466229s
Apr 22 13:51:38.356: INFO: Pod "pod-subpath-test-configmap-gpp7": Phase="Running", Reason="", readiness=false. Elapsed: 22.077119118s
Apr 22 13:51:40.361: INFO: Pod "pod-subpath-test-configmap-gpp7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.081545174s
STEP: Saw pod success
Apr 22 13:51:40.361: INFO: Pod "pod-subpath-test-configmap-gpp7" satisfied condition "Succeeded or Failed"
Apr 22 13:51:40.365: INFO: Trying to get logs from node k8s-upgrade-and-conformance-7gf7we-worker-wwgoid pod pod-subpath-test-configmap-gpp7 container test-container-subpath-configmap-gpp7: <nil>
STEP: delete the pod
Apr 22 13:51:40.381: INFO: Waiting for pod pod-subpath-test-configmap-gpp7 to disappear
Apr 22 13:51:40.386: INFO: Pod pod-subpath-test-configmap-gpp7 no longer exists
STEP: Deleting pod pod-subpath-test-configmap-gpp7
Apr 22 13:51:40.386: INFO: Deleting pod "pod-subpath-test-configmap-gpp7" in namespace "subpath-4490"
[AfterEach] [sig-storage] Subpath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 22 13:51:40.389: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-4490" for this suite.
�[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [Excluded:WindowsDocker] [Conformance]","total":-1,"completed":39,"skipped":923,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]"]} �[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Apr 22 13:51:33.606: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename services �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace �[1mSTEP�[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:749 [It] should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 �[1mSTEP�[0m: creating service in namespace services-1405 �[1mSTEP�[0m: creating service affinity-clusterip in namespace services-1405 �[1mSTEP�[0m: creating replication controller affinity-clusterip in namespace services-1405 I0422 13:51:33.640058 17 runners.go:193] Created replication controller with name: affinity-clusterip, namespace: services-1405, replica count: 3 I0422 13:51:36.691860 17 runners.go:193] affinity-clusterip Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 
runningButNotReady Apr 22 13:51:36.697: INFO: Creating new exec pod Apr 22 13:51:39.709: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-1405 exec execpod-affinity4d2q8 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-clusterip 80' Apr 22 13:51:39.898: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 affinity-clusterip 80\nConnection to affinity-clusterip 80 port [tcp/http] succeeded!\n" Apr 22 13:51:39.898: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" Apr 22 13:51:39.898: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-1405 exec execpod-affinity4d2q8 -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.131.171.248 80' Apr 22 13:51:40.065: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 10.131.171.248 80\nConnection to 10.131.171.248 80 port [tcp/http] succeeded!\n" Apr 22 13:51:40.065: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request" Apr 22 13:51:40.065: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-1405 exec execpod-affinity4d2q8 -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://10.131.171.248:80/ ; done' Apr 22 13:51:40.322: INFO: stderr: "+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.131.171.248:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.131.171.248:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.131.171.248:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.131.171.248:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.131.171.248:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.131.171.248:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.131.171.248:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.131.171.248:80/\n+ echo\n+ curl -q -s --connect-timeout 2 
http://10.131.171.248:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.131.171.248:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.131.171.248:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.131.171.248:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.131.171.248:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.131.171.248:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.131.171.248:80/\n+ echo\n+ curl -q -s --connect-timeout 2 http://10.131.171.248:80/\n" Apr 22 13:51:40.322: INFO: stdout: "\naffinity-clusterip-w27h2\naffinity-clusterip-w27h2\naffinity-clusterip-w27h2\naffinity-clusterip-w27h2\naffinity-clusterip-w27h2\naffinity-clusterip-w27h2\naffinity-clusterip-w27h2\naffinity-clusterip-w27h2\naffinity-clusterip-w27h2\naffinity-clusterip-w27h2\naffinity-clusterip-w27h2\naffinity-clusterip-w27h2\naffinity-clusterip-w27h2\naffinity-clusterip-w27h2\naffinity-clusterip-w27h2\naffinity-clusterip-w27h2" Apr 22 13:51:40.322: INFO: Received response from host: affinity-clusterip-w27h2 Apr 22 13:51:40.322: INFO: Received response from host: affinity-clusterip-w27h2 Apr 22 13:51:40.322: INFO: Received response from host: affinity-clusterip-w27h2 Apr 22 13:51:40.322: INFO: Received response from host: affinity-clusterip-w27h2 Apr 22 13:51:40.322: INFO: Received response from host: affinity-clusterip-w27h2 Apr 22 13:51:40.322: INFO: Received response from host: affinity-clusterip-w27h2 Apr 22 13:51:40.322: INFO: Received response from host: affinity-clusterip-w27h2 Apr 22 13:51:40.322: INFO: Received response from host: affinity-clusterip-w27h2 Apr 22 13:51:40.322: INFO: Received response from host: affinity-clusterip-w27h2 Apr 22 13:51:40.322: INFO: Received response from host: affinity-clusterip-w27h2 Apr 22 13:51:40.322: INFO: Received response from host: affinity-clusterip-w27h2 Apr 22 13:51:40.322: INFO: Received response from host: affinity-clusterip-w27h2 Apr 22 13:51:40.322: INFO: Received response from host: 
affinity-clusterip-w27h2 Apr 22 13:51:40.322: INFO: Received response from host: affinity-clusterip-w27h2 Apr 22 13:51:40.322: INFO: Received response from host: affinity-clusterip-w27h2 Apr 22 13:51:40.322: INFO: Received response from host: affinity-clusterip-w27h2 Apr 22 13:51:40.322: INFO: Cleaning up the exec pod STEP: deleting ReplicationController affinity-clusterip in namespace services-1405, will wait for the garbage collector to delete the pods Apr 22 13:51:40.390: INFO: Deleting ReplicationController affinity-clusterip took: 5.995246ms Apr 22 13:51:40.492: INFO: Terminating ReplicationController affinity-clusterip pods took: 101.790096ms [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 22 13:51:42.804: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-1405" for this suite. [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:753 • ------------------------------ [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 22 13:51:40.407: INFO: >>> kubeConfig: /tmp/kubeconfig STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: Creating a pod to test emptydir 0666 on node default medium Apr 22 13:51:40.468: INFO: Waiting up to 5m0s for pod 
"pod-9fea5b74-a168-44ed-a29e-524d6f3b7e48" in namespace "emptydir-5755" to be "Succeeded or Failed" Apr 22 13:51:40.472: INFO: Pod "pod-9fea5b74-a168-44ed-a29e-524d6f3b7e48": Phase="Pending", Reason="", readiness=false. Elapsed: 3.898417ms Apr 22 13:51:42.475: INFO: Pod "pod-9fea5b74-a168-44ed-a29e-524d6f3b7e48": Phase="Pending", Reason="", readiness=false. Elapsed: 2.0073619s Apr 22 13:51:44.480: INFO: Pod "pod-9fea5b74-a168-44ed-a29e-524d6f3b7e48": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011508713s STEP: Saw pod success Apr 22 13:51:44.480: INFO: Pod "pod-9fea5b74-a168-44ed-a29e-524d6f3b7e48" satisfied condition "Succeeded or Failed" Apr 22 13:51:44.482: INFO: Trying to get logs from node k8s-upgrade-and-conformance-7gf7we-md-0-7868f4c856-k4p2k pod pod-9fea5b74-a168-44ed-a29e-524d6f3b7e48 container test-container: <nil> STEP: delete the pod Apr 22 13:51:44.496: INFO: Waiting for pod pod-9fea5b74-a168-44ed-a29e-524d6f3b7e48 to disappear Apr 22 13:51:44.499: INFO: Pod pod-9fea5b74-a168-44ed-a29e-524d6f3b7e48 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 22 13:51:44.499: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-5755" for this suite. 
• ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":40,"skipped":926,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]"]} ------------------------------ [BeforeEach] [sig-apps] CronJob /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 22 13:51:44.549: INFO: >>> kubeConfig: /tmp/kubeconfig STEP: Building a namespace api object, basename cronjob STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should support CronJob API operations [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: Creating a cronjob STEP: creating STEP: getting STEP: listing STEP: watching Apr 22 13:51:44.584: INFO: starting watch STEP: cluster-wide listing STEP: cluster-wide watching Apr 22 13:51:44.589: INFO: starting watch STEP: patching STEP: updating Apr 22 13:51:44.603: INFO: waiting for watch events with expected annotations Apr 22 13:51:44.603: INFO: saw 
patched and updated annotations STEP: patching /status STEP: updating /status STEP: get /status STEP: deleting STEP: deleting a collection [AfterEach] [sig-apps] CronJob /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 22 13:51:44.637: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "cronjob-7944" for this suite. • ------------------------------ {"msg":"PASSED [sig-apps] CronJob should support CronJob API operations [Conformance]","total":-1,"completed":41,"skipped":959,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]"]} ------------------------------ [BeforeEach] [sig-auth] Certificates API [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 22 13:51:44.673: INFO: >>> kubeConfig: /tmp/kubeconfig STEP: Building a namespace api object, basename certificates STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should support CSR API operations [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: getting /apis STEP: getting /apis/certificates.k8s.io STEP: getting 
/apis/certificates.k8s.io/v1 STEP: creating STEP: getting STEP: listing STEP: watching Apr 22 13:51:45.660: INFO: starting watch STEP: patching STEP: updating Apr 22 13:51:45.672: INFO: waiting for watch events with expected annotations Apr 22 13:51:45.672: INFO: saw patched and updated annotations STEP: getting /approval STEP: patching /approval STEP: updating /approval STEP: getting /status STEP: patching /status STEP: updating /status STEP: deleting STEP: deleting a collection [AfterEach] [sig-auth] Certificates API [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 22 13:51:45.717: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "certificates-2069" for this suite. • ------------------------------ {"msg":"PASSED [sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","total":-1,"completed":42,"skipped":978,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]"]} ------------------------------ {"msg":"PASSED [sig-network] 
Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","total":-1,"completed":33,"skipped":700,"failed":0} [BeforeEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 22 13:51:42.819: INFO: >>> kubeConfig: /tmp/kubeconfig STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: Creating a pod to test emptydir 0644 on tmpfs Apr 22 13:51:42.853: INFO: Waiting up to 5m0s for pod "pod-63e43303-37da-4b10-b686-9d39eea69839" in namespace "emptydir-7943" to be "Succeeded or Failed" Apr 22 13:51:42.856: INFO: Pod "pod-63e43303-37da-4b10-b686-9d39eea69839": Phase="Pending", Reason="", readiness=false. Elapsed: 3.047403ms Apr 22 13:51:44.862: INFO: Pod "pod-63e43303-37da-4b10-b686-9d39eea69839": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008352894s Apr 22 13:51:46.866: INFO: Pod "pod-63e43303-37da-4b10-b686-9d39eea69839": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.012787515s STEP: Saw pod success Apr 22 13:51:46.866: INFO: Pod "pod-63e43303-37da-4b10-b686-9d39eea69839" satisfied condition "Succeeded or Failed" Apr 22 13:51:46.869: INFO: Trying to get logs from node k8s-upgrade-and-conformance-7gf7we-worker-wwgoid pod pod-63e43303-37da-4b10-b686-9d39eea69839 container test-container: <nil> STEP: delete the pod Apr 22 13:51:46.882: INFO: Waiting for pod pod-63e43303-37da-4b10-b686-9d39eea69839 to disappear Apr 22 13:51:46.885: INFO: Pod pod-63e43303-37da-4b10-b686-9d39eea69839 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 22 13:51:46.885: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-7943" for this suite. • ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":34,"skipped":700,"failed":0} ------------------------------ [BeforeEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 22 13:51:45.788: INFO: >>> kubeConfig: /tmp/kubeconfig STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] 
[NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: Creating configMap with name configmap-test-volume-822c220a-eab2-43d3-aa68-6e01977d5ec0 STEP: Creating a pod to test consume configMaps Apr 22 13:51:45.815: INFO: Waiting up to 5m0s for pod "pod-configmaps-d94b1035-9372-48ed-aae9-c2b473322f67" in namespace "configmap-8334" to be "Succeeded or Failed" Apr 22 13:51:45.817: INFO: Pod "pod-configmaps-d94b1035-9372-48ed-aae9-c2b473322f67": Phase="Pending", Reason="", readiness=false. Elapsed: 2.604528ms Apr 22 13:51:47.823: INFO: Pod "pod-configmaps-d94b1035-9372-48ed-aae9-c2b473322f67": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008313047s Apr 22 13:51:49.827: INFO: Pod "pod-configmaps-d94b1035-9372-48ed-aae9-c2b473322f67": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012288556s STEP: Saw pod success Apr 22 13:51:49.827: INFO: Pod "pod-configmaps-d94b1035-9372-48ed-aae9-c2b473322f67" satisfied condition "Succeeded or Failed" Apr 22 13:51:49.830: INFO: Trying to get logs from node k8s-upgrade-and-conformance-7gf7we-worker-wwgoid pod pod-configmaps-d94b1035-9372-48ed-aae9-c2b473322f67 container agnhost-container: <nil> STEP: delete the pod Apr 22 13:51:49.844: INFO: Waiting for pod pod-configmaps-d94b1035-9372-48ed-aae9-c2b473322f67 to disappear Apr 22 13:51:49.846: INFO: Pod pod-configmaps-d94b1035-9372-48ed-aae9-c2b473322f67 no longer exists [AfterEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 22 13:51:49.846: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-8334" for this suite. 
• ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":43,"skipped":1024,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]"]} ------------------------------ [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 22 13:51:46.933: INFO: >>> kubeConfig: /tmp/kubeconfig STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:189 [It] should run through the lifecycle of Pods and PodStatus [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: creating a Pod with a static label STEP: watching for Pod to be ready Apr 22 13:51:46.964: INFO: observed Pod pod-test in namespace pods-5563 in phase Pending with labels: map[test-pod-static:true] & conditions [] Apr 22 13:51:46.967: INFO: observed Pod pod-test in namespace pods-5563 in phase Pending with labels: map[test-pod-static:true] & conditions [{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-04-22 13:51:46 +0000 UTC }] Apr 22 13:51:46.978: INFO: observed Pod pod-test in namespace pods-5563 in 
phase Pending with labels: map[test-pod-static:true] & conditions [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-04-22 13:51:46 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-04-22 13:51:46 +0000 UTC ContainersNotReady containers with unready status: [pod-test]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-04-22 13:51:46 +0000 UTC ContainersNotReady containers with unready status: [pod-test]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-04-22 13:51:46 +0000 UTC }] Apr 22 13:51:47.971: INFO: Found Pod pod-test in namespace pods-5563 in phase Running with labels: map[test-pod-static:true] & conditions [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-04-22 13:51:46 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2022-04-22 13:51:47 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-04-22 13:51:47 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-04-22 13:51:46 +0000 UTC }] STEP: patching the Pod with a new Label and updated data Apr 22 13:51:47.982: INFO: observed event type ADDED STEP: getting the Pod and ensuring that it's patched STEP: replacing the Pod's status Ready condition to False STEP: check the Pod again to ensure its Ready conditions are False STEP: deleting the Pod via a Collection with a LabelSelector STEP: watching for the Pod to be deleted Apr 22 13:51:48.005: INFO: observed event type ADDED Apr 22 13:51:48.005: INFO: observed event type MODIFIED Apr 22 13:51:48.005: INFO: observed event type MODIFIED Apr 22 13:51:48.005: INFO: observed event type MODIFIED Apr 22 13:51:48.006: INFO: observed event type MODIFIED Apr 22 13:51:48.006: INFO: observed event type MODIFIED Apr 22 13:51:48.006: INFO: observed event type MODIFIED Apr 22 13:51:48.503: INFO: observed event type MODIFIED Apr 22 13:51:50.981: INFO: observed event type MODIFIED Apr 22 13:51:50.987: INFO: observed event type MODIFIED [AfterEach] [sig-node] Pods 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 22 13:51:50.995: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-5563" for this suite. • ------------------------------ {"msg":"PASSED [sig-node] Pods should run through the lifecycle of Pods and PodStatus [Conformance]","total":-1,"completed":35,"skipped":730,"failed":0} ------------------------------ [BeforeEach] [sig-node] PodTemplates /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 22 13:51:51.022: INFO: >>> kubeConfig: /tmp/kubeconfig STEP: Building a namespace api object, basename podtemplate STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should run the lifecycle of PodTemplates [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 [AfterEach] [sig-node] PodTemplates /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 22 13:51:51.062: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "podtemplate-9996" for this suite. 
• ------------------------------ {"msg":"PASSED [sig-node] PodTemplates should run the lifecycle of PodTemplates [Conformance]","total":-1,"completed":36,"skipped":742,"failed":0} ------------------------------ [BeforeEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 22 13:51:51.079: INFO: >>> kubeConfig: /tmp/kubeconfig STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: Creating secret with name projected-secret-test-7091fd92-5a1e-4486-afa0-f6439eed9b16 STEP: Creating a pod to test consume secrets Apr 22 13:51:51.112: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-8286a818-b387-42a6-ad64-213de1ef9cd0" in namespace "projected-9893" to be "Succeeded or Failed" Apr 22 13:51:51.115: INFO: Pod "pod-projected-secrets-8286a818-b387-42a6-ad64-213de1ef9cd0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.907697ms Apr 22 13:51:53.119: INFO: Pod "pod-projected-secrets-8286a818-b387-42a6-ad64-213de1ef9cd0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006915629s Apr 22 13:51:55.124: INFO: Pod "pod-projected-secrets-8286a818-b387-42a6-ad64-213de1ef9cd0": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.011657565s STEP: Saw pod success Apr 22 13:51:55.124: INFO: Pod "pod-projected-secrets-8286a818-b387-42a6-ad64-213de1ef9cd0" satisfied condition "Succeeded or Failed" Apr 22 13:51:55.127: INFO: Trying to get logs from node k8s-upgrade-and-conformance-7gf7we-md-0-7868f4c856-k4p2k pod pod-projected-secrets-8286a818-b387-42a6-ad64-213de1ef9cd0 container secret-volume-test: <nil> STEP: delete the pod Apr 22 13:51:55.142: INFO: Waiting for pod pod-projected-secrets-8286a818-b387-42a6-ad64-213de1ef9cd0 to disappear Apr 22 13:51:55.145: INFO: Pod pod-projected-secrets-8286a818-b387-42a6-ad64-213de1ef9cd0 no longer exists [AfterEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 22 13:51:55.145: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9893" for this suite. • ------------------------------ {"msg":"PASSED [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":-1,"completed":37,"skipped":748,"failed":0} ------------------------------ [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 22 13:51:49.863: INFO: >>> kubeConfig: /tmp/kubeconfig STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:189 [It] should delete a collection of pods [Conformance] 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: Create set of pods Apr 22 13:51:49.889: INFO: created test-pod-1 Apr 22 13:51:51.896: INFO: running and ready test-pod-1 Apr 22 13:51:51.899: INFO: created test-pod-2 Apr 22 13:51:53.916: INFO: running and ready test-pod-2 Apr 22 13:51:53.922: INFO: created test-pod-3 Apr 22 13:51:55.929: INFO: running and ready test-pod-3 STEP: waiting for all 3 pods to be located STEP: waiting for all pods to be deleted Apr 22 13:51:55.960: INFO: Pod quantity 3 is different from expected quantity 0 Apr 22 13:51:56.964: INFO: Pod quantity 2 is different from expected quantity 0 Apr 22 13:51:57.964: INFO: Pod quantity 1 is different from expected quantity 0 [AfterEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 22 13:51:58.964: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-8481" for this suite. 
• ------------------------------ {"msg":"PASSED [sig-node] Pods should delete a collection of pods [Conformance]","total":-1,"completed":44,"skipped":1030,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]"]} ------------------------------ [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 22 13:51:55.167: INFO: >>> kubeConfig: /tmp/kubeconfig STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume 
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41
[It] should provide container's memory request [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating a pod to test downward API volume plugin
Apr 22 13:51:55.194: INFO: Waiting up to 5m0s for pod "downwardapi-volume-eb962a93-b725-4ffd-b1de-a8054cbc7476" in namespace "downward-api-3526" to be "Succeeded or Failed"
Apr 22 13:51:55.197: INFO: Pod "downwardapi-volume-eb962a93-b725-4ffd-b1de-a8054cbc7476": Phase="Pending", Reason="", readiness=false. Elapsed: 2.773072ms
Apr 22 13:51:57.202: INFO: Pod "downwardapi-volume-eb962a93-b725-4ffd-b1de-a8054cbc7476": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007439903s
Apr 22 13:51:59.206: INFO: Pod "downwardapi-volume-eb962a93-b725-4ffd-b1de-a8054cbc7476": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011784055s
STEP: Saw pod success
Apr 22 13:51:59.206: INFO: Pod "downwardapi-volume-eb962a93-b725-4ffd-b1de-a8054cbc7476" satisfied condition "Succeeded or Failed"
Apr 22 13:51:59.209: INFO: Trying to get logs from node k8s-upgrade-and-conformance-7gf7we-worker-wwgoid pod downwardapi-volume-eb962a93-b725-4ffd-b1de-a8054cbc7476 container client-container: <nil>
STEP: delete the pod
Apr 22 13:51:59.222: INFO: Waiting for pod downwardapi-volume-eb962a93-b725-4ffd-b1de-a8054cbc7476 to disappear
Apr 22 13:51:59.224: INFO: Pod downwardapi-volume-eb962a93-b725-4ffd-b1de-a8054cbc7476 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 22 13:51:59.224: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-3526" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance]","total":-1,"completed":38,"skipped":759,"failed":0}
------------------------------
[BeforeEach] [sig-node] InitContainer [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 22 13:51:59.086: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-node] InitContainer [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/init_container.go:162
[It] should invoke init containers on a RestartAlways pod [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: creating the pod
Apr 22 13:51:59.106: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [sig-node] InitContainer [NodeConformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 22 13:52:02.029: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-1443" for this suite.
•
------------------------------
{"msg":"PASSED [sig-node] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance]","total":-1,"completed":45,"skipped":1111,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]"]}
------------------------------
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 22 13:52:02.126: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:749
[It] should complete a service status lifecycle [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: creating a Service
STEP: watching for the Service to be added
Apr 22 13:52:02.169: INFO: Found Service test-service-ldj8z in namespace services-5018 with labels: map[test-service-static:true] & ports [{http TCP <nil> 80 {0 80 } 0}]
Apr 22 13:52:02.169: INFO: Service test-service-ldj8z created
STEP: Getting /status
Apr 22 13:52:02.173: INFO: Service test-service-ldj8z has LoadBalancer: {[]}
STEP: patching the ServiceStatus
STEP: watching for the Service to be patched
Apr 22 13:52:02.181: INFO: observed Service test-service-ldj8z in namespace services-5018 with annotations: map[] & LoadBalancer: {[]}
Apr 22 13:52:02.181: INFO: Found Service test-service-ldj8z in namespace services-5018 with annotations: map[patchedstatus:true] & LoadBalancer: {[{203.0.113.1 []}]}
Apr 22 13:52:02.181: INFO: Service test-service-ldj8z has service status patched
STEP: updating the ServiceStatus
Apr 22 13:52:02.188: INFO: updatedStatus.Conditions: []v1.Condition{v1.Condition{Type:"StatusUpdate", Status:"True", ObservedGeneration:0, LastTransitionTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Reason:"E2E", Message:"Set from e2e test"}}
STEP: watching for the Service to be updated
Apr 22 13:52:02.190: INFO: Observed Service test-service-ldj8z in namespace services-5018 with annotations: map[] & Conditions: {[]}
Apr 22 13:52:02.191: INFO: Observed event: &Service{ObjectMeta:{test-service-ldj8z services-5018 b78c55f7-17b6-4925-a982-4c069648382a 9247 0 2022-04-22 13:52:02 +0000 UTC <nil> <nil> map[test-service-static:true] map[patchedstatus:true] [] [] [{e2e.test Update v1 2022-04-22 13:52:02 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:test-service-static":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":80,\"protocol\":\"TCP\"}":{".":{},"f:name":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:sessionAffinity":{},"f:type":{}}} } {e2e.test Update v1 2022-04-22 13:52:02 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:patchedstatus":{}}},"f:status":{"f:loadBalancer":{"f:ingress":{}}}} status}]},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:http,Protocol:TCP,Port:80,TargetPort:{0 80 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{},ClusterIP:10.130.225.73,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.130.225.73],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{LoadBalancerIngress{IP:203.0.113.1,Hostname:,Ports:[]PortStatus{},},},},Conditions:[]Condition{},},}
Apr 22 13:52:02.191: INFO: Found Service test-service-ldj8z in namespace services-5018 with annotations: map[patchedstatus:true] & Conditions: [{StatusUpdate True 0 0001-01-01 00:00:00 +0000 UTC E2E Set from e2e test}]
Apr 22 13:52:02.191: INFO: Service test-service-ldj8z has service status updated
STEP: patching the service
STEP: watching for the Service to be patched
Apr 22 13:52:02.205: INFO: observed Service test-service-ldj8z in namespace services-5018 with labels: map[test-service-static:true]
Apr 22 13:52:02.205: INFO: observed Service test-service-ldj8z in namespace services-5018 with labels: map[test-service-static:true]
Apr 22 13:52:02.205: INFO: observed Service test-service-ldj8z in namespace services-5018 with labels: map[test-service-static:true]
Apr 22 13:52:02.205: INFO: Found Service test-service-ldj8z in namespace services-5018 with labels: map[test-service:patched test-service-static:true]
Apr 22 13:52:02.205: INFO: Service test-service-ldj8z patched
STEP: deleting the service
STEP: watching for the Service to be deleted
Apr 22 13:52:02.221: INFO: Observed event: ADDED
Apr 22 13:52:02.221: INFO: Observed event: MODIFIED
Apr 22 13:52:02.221: INFO: Observed event: MODIFIED
Apr 22 13:52:02.221: INFO: Observed event: MODIFIED
Apr 22 13:52:02.221: INFO: Found Service test-service-ldj8z in namespace services-5018 with labels: map[test-service:patched test-service-static:true] & annotations: map[patchedstatus:true]
Apr 22 13:52:02.221: INFO: Service test-service-ldj8z deleted
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 22 13:52:02.221: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-5018" for this suite.
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:753
•
------------------------------
{"msg":"PASSED [sig-network] Services should complete a service status lifecycle [Conformance]","total":-1,"completed":46,"skipped":1175,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]"]}
[BeforeEach] [sig-apps] ReplicationController
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 22 13:52:02.232: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-apps] ReplicationController
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/rc.go:54
[It] should adopt matching pods on creation [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Given a Pod with a 'name' label pod-adoption is created
Apr 22 13:52:02.262: INFO: The status of Pod pod-adoption is Pending, waiting for it to be Running (with Ready = true)
Apr 22 13:52:04.265: INFO: The status of Pod pod-adoption is Running (Ready = true)
STEP: When a replication controller with a matching selector is created
STEP: Then the orphan pod is adopted
[AfterEach] [sig-apps] ReplicationController
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 22 13:52:05.277: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-8004" for this suite.
•
------------------------------
{"msg":"PASSED [sig-apps] ReplicationController should adopt matching pods on creation [Conformance]","total":-1,"completed":47,"skipped":1175,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]"]}
------------------------------
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 22 13:51:59.237: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Apr 22 13:51:59.568: INFO: new replicaset for deployment "sample-webhook-deployment" is yet to be created
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Apr 22 13:52:02.589: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should mutate custom resource [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
Apr 22 13:52:02.593: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Registering the mutating webhook for custom resource e2e-test-webhook-60-crds.webhook.example.com via the AdmissionRegistration API
STEP: Creating a custom resource that should be mutated by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 22 13:52:05.645: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-4319" for this suite.
STEP: Destroying namespace "webhook-4319-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
•
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","total":-1,"completed":39,"skipped":760,"failed":0}
------------------------------
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 22 13:52:05.738: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating a pod to test emptydir volume type on node default medium
Apr 22 13:52:05.785: INFO: Waiting up to 5m0s for pod "pod-8a269739-bd26-467f-adb2-3ddfdcbef4d9" in namespace "emptydir-7436" to be "Succeeded or Failed"
Apr 22 13:52:05.799: INFO: Pod "pod-8a269739-bd26-467f-adb2-3ddfdcbef4d9": Phase="Pending", Reason="", readiness=false. Elapsed: 13.734969ms
Apr 22 13:52:07.804: INFO: Pod "pod-8a269739-bd26-467f-adb2-3ddfdcbef4d9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018893827s
Apr 22 13:52:09.809: INFO: Pod "pod-8a269739-bd26-467f-adb2-3ddfdcbef4d9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.023788456s
STEP: Saw pod success
Apr 22 13:52:09.809: INFO: Pod "pod-8a269739-bd26-467f-adb2-3ddfdcbef4d9" satisfied condition "Succeeded or Failed"
Apr 22 13:52:09.812: INFO: Trying to get logs from node k8s-upgrade-and-conformance-7gf7we-worker-wwgoid pod pod-8a269739-bd26-467f-adb2-3ddfdcbef4d9 container test-container: <nil>
STEP: delete the pod
Apr 22 13:52:09.826: INFO: Waiting for pod pod-8a269739-bd26-467f-adb2-3ddfdcbef4d9 to disappear
Apr 22 13:52:09.829: INFO: Pod pod-8a269739-bd26-467f-adb2-3ddfdcbef4d9 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 22 13:52:09.829: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-7436" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":40,"skipped":778,"failed":0}
------------------------------
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 22 13:52:05.301: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41
[It] should update annotations on modification [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating the pod
Apr 22 13:52:05.331: INFO: The status of Pod annotationupdate4918035f-e3e9-4524-a591-e134b71ec7d1 is Pending, waiting for it to be Running (with Ready = true)
Apr 22 13:52:07.334: INFO: The status of Pod annotationupdate4918035f-e3e9-4524-a591-e134b71ec7d1 is Running (Ready = true)
Apr 22 13:52:07.857: INFO: Successfully updated pod "annotationupdate4918035f-e3e9-4524-a591-e134b71ec7d1"
[AfterEach] [sig-storage] Downward API volume
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 22 13:52:11.877: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-2973" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance]","total":-1,"completed":48,"skipped":1185,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]"]}
------------------------------
[BeforeEach] [sig-node] PreStop
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 22 13:52:11.937: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename prestop
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-node] PreStop
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:157
[It] should call prestop when killing a pod [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating server pod server in namespace prestop-8555
STEP: Waiting for pods to come up.
STEP: Creating tester pod tester in namespace prestop-8555
STEP: Deleting pre-stop pod
Apr 22 13:52:21.003: INFO: Saw: {
	"Hostname": "server",
	"Sent": null,
	"Received": {
		"prestop": 1
	},
	"Errors": null,
	"Log": [
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up."
	],
	"StillContactingPeers": true
}
STEP: Deleting the server pod
[AfterEach] [sig-node] PreStop
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 22 13:52:21.011: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "prestop-8555" for this suite.
•
------------------------------
{"msg":"PASSED [sig-node] PreStop should call prestop when killing a pod [Conformance]","total":-1,"completed":49,"skipped":1221,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]"]}
------------------------------
[BeforeEach] [sig-storage] Projected configMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 22 13:52:21.027: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating projection with configMap that has name projected-configmap-test-upd-9bd75d97-d73c-4138-a2eb-c1eb04f59afa
STEP: Creating the pod
Apr 22 13:52:21.077: INFO: The status of Pod pod-projected-configmaps-ab3d03b6-51a0-41c7-ac84-e82b6148c1f7 is Pending, waiting for it to be Running (with Ready = true)
Apr 22 13:52:23.082: INFO: The status of Pod pod-projected-configmaps-ab3d03b6-51a0-41c7-ac84-e82b6148c1f7 is Running (Ready = true)
STEP: Updating configmap projected-configmap-test-upd-9bd75d97-d73c-4138-a2eb-c1eb04f59afa
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 22 13:52:25.109: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "projected-4913" for this suite. �[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance]","total":-1,"completed":50,"skipped":1222,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]"]} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-network] DNS /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Apr 22 13:48:39.134: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename dns �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace �[1mSTEP�[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should provide DNS for pods for Hostname [LinuxOnly] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 �[1mSTEP�[0m: Creating a test headless service �[1mSTEP�[0m: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-601.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-2.dns-test-service-2.dns-601.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/wheezy_hosts@dns-querier-2;sleep 1; done �[1mSTEP�[0m: Running these commands on jessie: for i in 
`seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-601.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-2.dns-test-service-2.dns-601.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/jessie_hosts@dns-querier-2;sleep 1; done �[1mSTEP�[0m: creating a pod to probe DNS �[1mSTEP�[0m: submitting the pod to kubernetes �[1mSTEP�[0m: retrieving the pod �[1mSTEP�[0m: looking for the results for each expected name from probers Apr 22 13:52:20.521: INFO: Unable to read wheezy_hosts@dns-querier-2.dns-test-service-2.dns-601.svc.cluster.local from pod dns-601/dns-test-f14c904e-be15-43a0-8264-3cfb4ebc073c: the server is currently unable to handle the request (get pods dns-test-f14c904e-be15-43a0-8264-3cfb4ebc073c) Apr 22 13:53:47.188: FAIL: Unable to read wheezy_hosts@dns-querier-2 from pod dns-601/dns-test-f14c904e-be15-43a0-8264-3cfb4ebc073c: Get "https://172.18.0.3:6443/api/v1/namespaces/dns-601/pods/dns-test-f14c904e-be15-43a0-8264-3cfb4ebc073c/proxy/results/wheezy_hosts@dns-querier-2": context deadline exceeded Full Stack Trace k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.ConditionFunc.WithContext.func1({0x7f205468fb78, 0x0}) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:220 +0x1b k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtectionWithContext({0x7a11748, 0xc000056080}, 0xc005293ab0) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:233 +0x7c k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7a11748, 0xc000056080}, 0xf0, 0x2ce3745, 0x68) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:580 +0x38 k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7a11748, 0xc000056080}, 
0x4a, 0xc005293b40, 0x245df47) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:526 +0x4a k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x78f7e00, 0xc00005c880, 0xc005293b88) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:512 +0x50 k8s.io/kubernetes/test/e2e/network.assertFilesContain({0xc004c52d80, 0x4, 0x4}, {0x7117e94, 0x7}, 0xc004ba5000, {0x7b442b0, 0xc004d30180}, 0x0, {0x0, ...}) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns_common.go:447 +0x1c5 k8s.io/kubernetes/test/e2e/network.assertFilesExist(...) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns_common.go:441 k8s.io/kubernetes/test/e2e/network.validateDNSResults(0xc00085e9a0, 0xc004ba5000, {0xc004c52d80, 0x4, 0x4}) /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns_common.go:504 +0x470 k8s.io/kubernetes/test/e2e/network.glob..func2.7() /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns.go:279 +0x8b4 k8s.io/kubernetes/test/e2e.RunE2ETests(0x24e2677) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:133 +0x697 k8s.io/kubernetes/test/e2e.TestE2E(0x2456919) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:136 +0x19 testing.tRunner(0xc000ada680, 0x73a1f18) /usr/local/go/src/testing/testing.go:1259 +0x102 created by testing.(*T).Run /usr/local/go/src/testing/testing.go:1306 +0x35a E0422 13:53:47.192617 19 runtime.go:78] Observed a panic: ginkgowrapper.FailurePanic{Message:"Apr 22 13:53:47.191: Unable to read wheezy_hosts@dns-querier-2 from pod dns-601/dns-test-f14c904e-be15-43a0-8264-3cfb4ebc073c: Get 
\"https://172.18.0.3:6443/api/v1/namespaces/dns-601/pods/dns-test-f14c904e-be15-43a0-8264-3cfb4ebc073c/proxy/results/wheezy_hosts@dns-querier-2\": context deadline exceeded", Filename:"/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go", Line:220, FullStackTrace:"k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.ConditionFunc.WithContext.func1({0x7f205468fb78, 0x0})\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:220 +0x1b\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtectionWithContext({0x7a11748, 0xc000056080}, 0xc005293ab0)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:233 +0x7c\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7a11748, 0xc000056080}, 0xf0, 0x2ce3745, 0x68)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:580 +0x38\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7a11748, 0xc000056080}, 0x4a, 0xc005293b40, 0x245df47)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:526 +0x4a\nk8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x78f7e00, 0xc00005c880, 0xc005293b88)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:512 +0x50\nk8s.io/kubernetes/test/e2e/network.assertFilesContain({0xc004c52d80, 0x4, 0x4}, {0x7117e94, 0x7}, 0xc004ba5000, {0x7b442b0, 0xc004d30180}, 0x0, {0x0, ...})\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns_common.go:447 
+0x1c5\nk8s.io/kubernetes/test/e2e/network.assertFilesExist(...)\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns_common.go:441\nk8s.io/kubernetes/test/e2e/network.validateDNSResults(0xc00085e9a0, 0xc004ba5000, {0xc004c52d80, 0x4, 0x4})\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns_common.go:504 +0x470\nk8s.io/kubernetes/test/e2e/network.glob..func2.7()\n\t/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns.go:279 +0x8b4\nk8s.io/kubernetes/test/e2e.RunE2ETests(0x24e2677)\n\t_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:133 +0x697\nk8s.io/kubernetes/test/e2e.TestE2E(0x2456919)\n\t_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:136 +0x19\ntesting.tRunner(0xc000ada680, 0x73a1f18)\n\t/usr/local/go/src/testing/testing.go:1259 +0x102\ncreated by testing.(*T).Run\n\t/usr/local/go/src/testing/testing.go:1306 +0x35a"} ( Your test failed. Ginkgo panics to prevent subsequent assertions from running. Normally Ginkgo rescues this panic so you shouldn't see it. But, if you make an assertion in a goroutine, Ginkgo can't capture the panic. To circumvent this, you should call defer GinkgoRecover() at the top of the goroutine that caused this panic. 
)
goroutine 112 [running]:
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime.logPanic({0x6c7d0a0, 0xc004d38480})
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:74 +0x7d
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime.HandleCrash({0x0, 0x0, 0xc00007c290})
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:48 +0x75
panic({0x6c7d0a0, 0xc004d38480})
	/usr/local/go/src/runtime/panic.go:1038 +0x215
k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail.func1()
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/ginkgowrapper/wrapper.go:63 +0x73
panic({0x6311960, 0x78ee270})
	/usr/local/go/src/runtime/panic.go:1038 +0x215
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo.Fail({0xc002eaa280, 0x12d}, {0xc005293548, 0x0, 0x40})
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/ginkgo_dsl.go:260 +0xdd
k8s.io/kubernetes/test/e2e/framework/ginkgowrapper.Fail({0xc002eaa280, 0x12d}, {0xc005293628, 0x710f04a, 0xc005293650})
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/ginkgowrapper/wrapper.go:67 +0x1a7
k8s.io/kubernetes/test/e2e/framework.Failf({0x71bffaf, 0x2d}, {0xc005293898, 0x0, 0x0})
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/log.go:51 +0x131
k8s.io/kubernetes/test/e2e/network.assertFilesContain.func1()
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns_common.go:464 +0x889
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.ConditionFunc.WithContext.func1({0x7f205468fb78, 0x0})
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:220 +0x1b
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.runConditionWithCrashProtectionWithContext({0x7a11748, 0xc000056080}, 0xc005293ab0)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:233 +0x7c
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poll({0x7a11748, 0xc000056080}, 0xf0, 0x2ce3745, 0x68)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:580 +0x38
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediateWithContext({0x7a11748, 0xc000056080}, 0x4a, 0xc005293b40, 0x245df47)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:526 +0x4a
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x78f7e00, 0xc00005c880, 0xc005293b88)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:512 +0x50
k8s.io/kubernetes/test/e2e/network.assertFilesContain({0xc004c52d80, 0x4, 0x4}, {0x7117e94, 0x7}, 0xc004ba5000, {0x7b442b0, 0xc004d30180}, 0x0, {0x0, ...})
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns_common.go:447 +0x1c5
k8s.io/kubernetes/test/e2e/network.assertFilesExist(...)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns_common.go:441
k8s.io/kubernetes/test/e2e/network.validateDNSResults(0xc00085e9a0, 0xc004ba5000, {0xc004c52d80, 0x4, 0x4})
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns_common.go:504 +0x470
k8s.io/kubernetes/test/e2e/network.glob..func2.7()
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns.go:279 +0x8b4
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes.(*runner).runSync(0xc000846000)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/runner.go:113 +0xba
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes.(*runner).run(0xc0052955c8)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/runner.go:64 +0x125
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes.(*ItNode).Run(0x0)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/it_node.go:26 +0x7b
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec.(*Spec).runSample(0xc002cfe000, 0xc005295990, {0x78f7e00, 0xc00005c880})
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec/spec.go:215 +0x2a9
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec.(*Spec).Run(0xc002cfe000, {0x78f7e00, 0xc00005c880})
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec/spec.go:138 +0xe7
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner.(*SpecRunner).runSpec(0xc0045c0000, 0xc002cfe000)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner/spec_runner.go:200 +0xe5
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner.(*SpecRunner).runSpecs(0xc0045c0000)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner/spec_runner.go:170 +0x1a5
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner.(*SpecRunner).Run(0xc0045c0000)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner/spec_runner.go:66 +0xc5
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/suite.(*Suite).Run(0xc00019a070, {0x7f20547bd570, 0xc000ada680}, {0x714ecb8, 0x40}, {0xc000ca6120, 0x3, 0x3}, {0x7a69ad8, 0xc00005c880}, ...)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/suite/suite.go:79 +0x4d2
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo.RunSpecsWithCustomReporters({0x78fe760, 0xc000ada680}, {0x714ecb8, 0x14}, {0xc000ca51c0, 0x3, 0x6})
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/ginkgo_dsl.go:219 +0x185
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo.RunSpecsWithDefaultAndCustomReporters({0x78fe760, 0xc000ada680}, {0x714ecb8, 0x14}, {0xc000c39be0, 0x2, 0x2})
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/ginkgo_dsl.go:207 +0xf9
k8s.io/kubernetes/test/e2e.RunE2ETests(0x24e2677)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:133 +0x697
k8s.io/kubernetes/test/e2e.TestE2E(0x2456919)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:136 +0x19
testing.tRunner(0xc000ada680, 0x73a1f18)
	/usr/local/go/src/testing/testing.go:1259 +0x102
created by testing.(*T).Run
	/usr/local/go/src/testing/testing.go:1306 +0x35a
STEP:
deleting the pod
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 22 13:53:47.233: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-601" for this suite.
• Failure [308.129 seconds]
[sig-network] DNS
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
  should provide DNS for pods for Hostname [LinuxOnly] [Conformance] [It]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633

  Apr 22 13:53:47.191: Unable to read wheezy_hosts@dns-querier-2 from pod dns-601/dns-test-f14c904e-be15-43a0-8264-3cfb4ebc073c: Get "https://172.18.0.3:6443/api/v1/namespaces/dns-601/pods/dns-test-f14c904e-be15-43a0-8264-3cfb4ebc073c/proxy/results/wheezy_hosts@dns-querier-2": context deadline exceeded

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:220
------------------------------
{"msg":"FAILED [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]","total":-1,"completed":28,"skipped":673,"failed":1,"failures":["[sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]"]}
[BeforeEach] [sig-network] DNS
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 22 13:53:47.270: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should provide DNS for pods for Hostname [LinuxOnly]
[Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-336.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-2.dns-test-service-2.dns-336.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/wheezy_hosts@dns-querier-2;sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-336.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-2.dns-test-service-2.dns-336.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/jessie_hosts@dns-querier-2;sleep 1; done
STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Apr 22 13:53:57.334: INFO: DNS probes using dns-336/dns-test-7a8d1bfd-e224-4f6c-816f-d4f756681ad6 succeeded
STEP: deleting the pod
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 22 13:53:57.355: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-336" for this suite.
•
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]","total":-1,"completed":29,"skipped":673,"failed":1,"failures":["[sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]"]}
SSS
------------------------------
[BeforeEach] [sig-node] Variable Expansion
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 22 13:53:57.373: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should allow substituting values in a container's command [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating a pod to test substitution in container's command
Apr 22 13:53:57.399: INFO: Waiting up to 5m0s for pod "var-expansion-4167328a-4d4e-491c-a394-40cc80da3799" in namespace "var-expansion-1100" to be "Succeeded or Failed"
Apr 22 13:53:57.404: INFO: Pod "var-expansion-4167328a-4d4e-491c-a394-40cc80da3799": Phase="Pending", Reason="", readiness=false. Elapsed: 5.219375ms
Apr 22 13:53:59.411: INFO: Pod "var-expansion-4167328a-4d4e-491c-a394-40cc80da3799": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012110165s
Apr 22 13:54:01.415: INFO: Pod "var-expansion-4167328a-4d4e-491c-a394-40cc80da3799": Phase="Pending", Reason="", readiness=false. Elapsed: 4.016094105s
Apr 22 13:54:03.420: INFO: Pod "var-expansion-4167328a-4d4e-491c-a394-40cc80da3799": Phase="Pending", Reason="", readiness=false.
Elapsed: 6.02037211s
Apr 22 13:54:05.423: INFO: Pod "var-expansion-4167328a-4d4e-491c-a394-40cc80da3799": Phase="Pending", Reason="", readiness=false. Elapsed: 8.023815971s
Apr 22 13:54:07.429: INFO: Pod "var-expansion-4167328a-4d4e-491c-a394-40cc80da3799": Phase="Pending", Reason="", readiness=false. Elapsed: 10.030244574s
Apr 22 13:54:09.433: INFO: Pod "var-expansion-4167328a-4d4e-491c-a394-40cc80da3799": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.03401416s
STEP: Saw pod success
Apr 22 13:54:09.433: INFO: Pod "var-expansion-4167328a-4d4e-491c-a394-40cc80da3799" satisfied condition "Succeeded or Failed"
Apr 22 13:54:09.436: INFO: Trying to get logs from node k8s-upgrade-and-conformance-7gf7we-worker-3u7awl pod var-expansion-4167328a-4d4e-491c-a394-40cc80da3799 container dapi-container: <nil>
STEP: delete the pod
Apr 22 13:54:09.462: INFO: Waiting for pod var-expansion-4167328a-4d4e-491c-a394-40cc80da3799 to disappear
Apr 22 13:54:09.464: INFO: Pod var-expansion-4167328a-4d4e-491c-a394-40cc80da3799 no longer exists
[AfterEach] [sig-node] Variable Expansion
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 22 13:54:09.464: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-1100" for this suite.
•
------------------------------
{"msg":"PASSED [sig-node] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance]","total":-1,"completed":30,"skipped":676,"failed":1,"failures":["[sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]"]}
[BeforeEach] [sig-api-machinery] Watchers
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 22 13:54:09.475: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should observe add, update, and delete watch notifications on configmaps [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: creating a watch on configmaps with label A
STEP: creating a watch on configmaps with label B
STEP: creating a watch on configmaps with label A or B
STEP: creating a configmap with label A and ensuring the correct watchers observe the notification
Apr 22 13:54:09.510: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-7276 14ce476d-66df-45dc-bdc0-9d97b05b8a1c 9863 0 2022-04-22 13:54:09 +0000 UTC <nil> <nil> map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2022-04-22 13:54:09 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,}
Apr 22 13:54:09.510: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-7276 14ce476d-66df-45dc-bdc0-9d97b05b8a1c 9863 0 2022-04-22 13:54:09 +0000 UTC <nil> <nil> map[watch-this-configmap:multiple-watchers-A] map[] [] []
[{e2e.test Update v1 2022-04-22 13:54:09 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,}
STEP: modifying configmap A and ensuring the correct watchers observe the notification
Apr 22 13:54:09.518: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-7276 14ce476d-66df-45dc-bdc0-9d97b05b8a1c 9864 0 2022-04-22 13:54:09 +0000 UTC <nil> <nil> map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2022-04-22 13:54:09 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,}
Apr 22 13:54:09.519: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-7276 14ce476d-66df-45dc-bdc0-9d97b05b8a1c 9864 0 2022-04-22 13:54:09 +0000 UTC <nil> <nil> map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2022-04-22 13:54:09 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,}
STEP: modifying configmap A again and ensuring the correct watchers observe the notification
Apr 22 13:54:09.525: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-7276 14ce476d-66df-45dc-bdc0-9d97b05b8a1c 9865 0 2022-04-22 13:54:09 +0000 UTC <nil> <nil> map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2022-04-22 13:54:09 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
Apr 22 13:54:09.525: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-7276
14ce476d-66df-45dc-bdc0-9d97b05b8a1c 9865 0 2022-04-22 13:54:09 +0000 UTC <nil> <nil> map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2022-04-22 13:54:09 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
STEP: deleting configmap A and ensuring the correct watchers observe the notification
Apr 22 13:54:09.532: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-7276 14ce476d-66df-45dc-bdc0-9d97b05b8a1c 9866 0 2022-04-22 13:54:09 +0000 UTC <nil> <nil> map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2022-04-22 13:54:09 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
Apr 22 13:54:09.532: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-7276 14ce476d-66df-45dc-bdc0-9d97b05b8a1c 9866 0 2022-04-22 13:54:09 +0000 UTC <nil> <nil> map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2022-04-22 13:54:09 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
STEP: creating a configmap with label B and ensuring the correct watchers observe the notification
Apr 22 13:54:09.536: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-7276 e1b214a9-d416-4dd9-a038-12dbe5733da8 9867 0 2022-04-22 13:54:09 +0000 UTC <nil> <nil> map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2022-04-22 13:54:09 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}
}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,}
Apr 22 13:54:09.536: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-7276 e1b214a9-d416-4dd9-a038-12dbe5733da8 9867 0 2022-04-22 13:54:09 +0000 UTC <nil> <nil> map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2022-04-22 13:54:09 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,}
STEP: deleting configmap B and ensuring the correct watchers observe the notification
Apr 22 13:54:19.545: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-7276 e1b214a9-d416-4dd9-a038-12dbe5733da8 9895 0 2022-04-22 13:54:09 +0000 UTC <nil> <nil> map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2022-04-22 13:54:09 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,}
Apr 22 13:54:19.545: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-7276 e1b214a9-d416-4dd9-a038-12dbe5733da8 9895 0 2022-04-22 13:54:09 +0000 UTC <nil> <nil> map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2022-04-22 13:54:09 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}} }]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,}
[AfterEach] [sig-api-machinery] Watchers
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 22 13:54:29.548: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-7276" for this suite.
•
------------------------------
{"msg":"PASSED [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance]","total":-1,"completed":31,"skipped":676,"failed":1,"failures":["[sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]"]}
S
------------------------------
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 22 13:47:24.387: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:244
[BeforeEach] Update Demo
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:296
[It] should scale a replication controller [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: creating a replication controller
Apr 22 13:47:24.413: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-5064 create -f -'
Apr 22 13:47:25.190: INFO: stderr: ""
Apr 22 13:47:25.190: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Apr 22 13:47:25.190: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-5064 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo'
Apr 22 13:47:25.321: INFO: stderr: ""
Apr 22 13:47:25.321: INFO: stdout: "update-demo-nautilus-s5tpx update-demo-nautilus-xhscc "
Apr 22 13:47:25.321: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-5064 get pods update-demo-nautilus-s5tpx -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}'
Apr 22 13:47:25.431: INFO: stderr: ""
Apr 22 13:47:25.431: INFO: stdout: ""
Apr 22 13:47:25.431: INFO: update-demo-nautilus-s5tpx is created but not running
Apr 22 13:47:30.435: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-5064 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo'
Apr 22 13:47:30.505: INFO: stderr: ""
Apr 22 13:47:30.505: INFO: stdout: "update-demo-nautilus-s5tpx update-demo-nautilus-xhscc "
Apr 22 13:47:30.505: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-5064 get pods update-demo-nautilus-s5tpx -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}'
Apr 22 13:47:30.570: INFO: stderr: ""
Apr 22 13:47:30.570: INFO: stdout: "true"
Apr 22 13:47:30.571: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-5064 get pods update-demo-nautilus-s5tpx -o template --template={{if (exists .
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' Apr 22 13:47:30.638: INFO: stderr: "" Apr 22 13:47:30.638: INFO: stdout: "k8s.gcr.io/e2e-test-images/nautilus:1.5" Apr 22 13:47:30.638: INFO: validating pod update-demo-nautilus-s5tpx Apr 22 13:51:04.745: INFO: update-demo-nautilus-s5tpx is running right image but validator function failed: the server is currently unable to handle the request (get pods update-demo-nautilus-s5tpx) Apr 22 13:51:09.750: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-5064 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Apr 22 13:51:09.839: INFO: stderr: "" Apr 22 13:51:09.840: INFO: stdout: "update-demo-nautilus-s5tpx update-demo-nautilus-xhscc " Apr 22 13:51:09.840: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-5064 get pods update-demo-nautilus-s5tpx -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Apr 22 13:51:09.915: INFO: stderr: "" Apr 22 13:51:09.915: INFO: stdout: "true" Apr 22 13:51:09.915: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-5064 get pods update-demo-nautilus-s5tpx -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' Apr 22 13:51:09.986: INFO: stderr: "" Apr 22 13:51:09.986: INFO: stdout: "k8s.gcr.io/e2e-test-images/nautilus:1.5" Apr 22 13:51:09.986: INFO: validating pod update-demo-nautilus-s5tpx Apr 22 13:54:43.881: INFO: update-demo-nautilus-s5tpx is running right image but validator function failed: the server is currently unable to handle the request (get pods update-demo-nautilus-s5tpx) Apr 22 13:54:48.883: FAIL: Timed out after 300 seconds waiting for name=update-demo pods to reach valid state Full Stack Trace k8s.io/kubernetes/test/e2e/kubectl.glob..func1.6.3() /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:327 +0x22f k8s.io/kubernetes/test/e2e.RunE2ETests(0x24e2677) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:133 +0x697 k8s.io/kubernetes/test/e2e.TestE2E(0x0) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:136 +0x19 testing.tRunner(0xc0004c4d00, 0x73a1f18) /usr/local/go/src/testing/testing.go:1259 +0x102 created by testing.(*T).Run /usr/local/go/src/testing/testing.go:1306 +0x35a �[1mSTEP�[0m: using delete to clean up resources Apr 22 13:54:48.884: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-5064 delete --grace-period=0 --force -f -' Apr 22 13:54:48.967: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n"
Apr 22 13:54:48.967: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
Apr 22 13:54:48.967: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-5064 get rc,svc -l name=update-demo --no-headers'
Apr 22 13:54:49.069: INFO: stderr: "No resources found in kubectl-5064 namespace.\n"
Apr 22 13:54:49.069: INFO: stdout: ""
Apr 22 13:54:49.069: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-5064 get pods -l name=update-demo -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Apr 22 13:54:49.165: INFO: stderr: ""
Apr 22 13:54:49.165: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 22 13:54:49.165: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-5064" for this suite.
• Failure [444.790 seconds]
[sig-cli] Kubectl client
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Update Demo
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:294
    should scale a replication controller [Conformance] [It]
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633

    Apr 22 13:54:48.883: Timed out after 300 seconds waiting for name=update-demo pods to reach valid state

    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:327
------------------------------
[BeforeEach] [sig-apps] Deployment
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 22 13:54:29.561: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:89
[It] deployment should support rollover [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
Apr 22 13:54:29.596: INFO: Pod name rollover-pod: Found 0 pods out of 1
Apr 22 13:54:34.603: INFO: Pod name rollover-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Apr 22 13:54:34.603: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready
Apr 22 13:54:36.608: INFO: Creating deployment "test-rollover-deployment"
Apr 22 13:54:36.615: INFO: Make sure deployment "test-rollover-deployment"
performs scaling operations Apr 22 13:54:38.623: INFO: Check revision of new replica set for deployment "test-rollover-deployment" Apr 22 13:54:38.629: INFO: Ensure that both replica sets have 1 created replica Apr 22 13:54:38.635: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update Apr 22 13:54:38.645: INFO: Updating deployment test-rollover-deployment Apr 22 13:54:38.645: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller Apr 22 13:54:40.651: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2 Apr 22 13:54:40.658: INFO: Make sure deployment "test-rollover-deployment" is complete Apr 22 13:54:40.664: INFO: all replica sets need to contain the pod-template-hash label Apr 22 13:54:40.664: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:time.Date(2022, time.April, 22, 13, 54, 36, 0, time.Local), LastTransitionTime:time.Date(2022, time.April, 22, 13, 54, 36, 0, time.Local), Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.April, 22, 13, 54, 40, 0, time.Local), LastTransitionTime:time.Date(2022, time.April, 22, 13, 54, 36, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-668b7f667d\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 22 13:54:42.673: INFO: all replica sets need to contain the pod-template-hash label Apr 22 13:54:42.673: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", 
Status:"True", LastUpdateTime:time.Date(2022, time.April, 22, 13, 54, 36, 0, time.Local), LastTransitionTime:time.Date(2022, time.April, 22, 13, 54, 36, 0, time.Local), Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.April, 22, 13, 54, 40, 0, time.Local), LastTransitionTime:time.Date(2022, time.April, 22, 13, 54, 36, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-668b7f667d\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 22 13:54:44.673: INFO: all replica sets need to contain the pod-template-hash label Apr 22 13:54:44.673: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:time.Date(2022, time.April, 22, 13, 54, 36, 0, time.Local), LastTransitionTime:time.Date(2022, time.April, 22, 13, 54, 36, 0, time.Local), Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.April, 22, 13, 54, 40, 0, time.Local), LastTransitionTime:time.Date(2022, time.April, 22, 13, 54, 36, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-668b7f667d\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 22 13:54:46.672: INFO: all replica sets need to contain the pod-template-hash label Apr 22 13:54:46.672: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:time.Date(2022, time.April, 22, 13, 54, 36, 0, time.Local), 
LastTransitionTime:time.Date(2022, time.April, 22, 13, 54, 36, 0, time.Local), Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.April, 22, 13, 54, 40, 0, time.Local), LastTransitionTime:time.Date(2022, time.April, 22, 13, 54, 36, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-668b7f667d\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 22 13:54:48.671: INFO: all replica sets need to contain the pod-template-hash label Apr 22 13:54:48.672: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:time.Date(2022, time.April, 22, 13, 54, 36, 0, time.Local), LastTransitionTime:time.Date(2022, time.April, 22, 13, 54, 36, 0, time.Local), Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.April, 22, 13, 54, 40, 0, time.Local), LastTransitionTime:time.Date(2022, time.April, 22, 13, 54, 36, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-668b7f667d\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 22 13:54:50.672: INFO: Apr 22 13:54:50.672: INFO: Ensure that both old replica sets have no replicas [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:83 Apr 22 13:54:50.682: INFO: Deployment "test-rollover-deployment": &Deployment{ObjectMeta:{test-rollover-deployment deployment-2028 ac896328-115f-4278-a0d1-cb5763c6693d 10064 2 2022-04-22 13:54:36 +0000 UTC <nil> <nil> map[name:rollover-pod] map[deployment.kubernetes.io/revision:2] [] [] 
[{e2e.test Update apps/v1 2022-04-22 13:54:38 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:minReadySeconds":{},"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2022-04-22 13:54:50 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:updatedReplicas":{}}} status}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[name:rollover-pod] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.33 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc004ae0848 
<nil> ClusterFirst map[] <nil> false false false <nil> &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] <nil> nil [] <nil> <nil> <nil> map[] [] <nil> nil}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2022-04-22 13:54:36 +0000 UTC,LastTransitionTime:2022-04-22 13:54:36 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rollover-deployment-668b7f667d" has successfully progressed.,LastUpdateTime:2022-04-22 13:54:50 +0000 UTC,LastTransitionTime:2022-04-22 13:54:36 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} Apr 22 13:54:50.685: INFO: New ReplicaSet "test-rollover-deployment-668b7f667d" of Deployment "test-rollover-deployment": &ReplicaSet{ObjectMeta:{test-rollover-deployment-668b7f667d deployment-2028 a2b1ef5d-a1b0-45ed-aa07-e6f82253bbe9 10051 2 2022-04-22 13:54:38 +0000 UTC <nil> <nil> map[name:rollover-pod pod-template-hash:668b7f667d] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-rollover-deployment ac896328-115f-4278-a0d1-cb5763c6693d 0xc004ae0d07 0xc004ae0d08}] [] [{kube-controller-manager Update apps/v1 2022-04-22 13:54:38 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ac896328-115f-4278-a0d1-cb5763c6693d\"}":{}}},"f:spec":{"f:minReadySeconds":{},"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2022-04-22 13:54:50 +0000 UTC FieldsV1 {"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}} status}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 668b7f667d,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[name:rollover-pod pod-template-hash:668b7f667d] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.33 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc004ae0db8 <nil> ClusterFirst map[] <nil> false false false <nil> 
&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] <nil> nil [] <nil> <nil> <nil> map[] [] <nil> nil}},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Apr 22 13:54:50.685: INFO: All old ReplicaSets of Deployment "test-rollover-deployment": Apr 22 13:54:50.685: INFO: &ReplicaSet{ObjectMeta:{test-rollover-controller deployment-2028 e6102c44-c1e0-4af8-b977-703c555fa3bb 10063 2 2022-04-22 13:54:29 +0000 UTC <nil> <nil> map[name:rollover-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2] [{apps/v1 Deployment test-rollover-deployment ac896328-115f-4278-a0d1-cb5763c6693d 0xc004ae0bd7 0xc004ae0bd8}] [] [{e2e.test Update apps/v1 2022-04-22 13:54:29 +0000 UTC FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2022-04-22 13:54:50 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ac896328-115f-4278-a0d1-cb5763c6693d\"}":{}}},"f:spec":{"f:replicas":{}}} } {kube-controller-manager Update apps/v1 2022-04-22 13:54:50 +0000 UTC FieldsV1 {"f:status":{"f:observedGeneration":{},"f:replicas":{}}} 
status}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[name:rollover-pod pod:httpd] map[] [] [] []} {[] [] [{httpd k8s.gcr.io/e2e-test-images/httpd:2.4.38-2 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc004ae0c98 <nil> ClusterFirst map[] <nil> false false false <nil> PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] <nil> nil [] <nil> <nil> <nil> map[] [] <nil> nil}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Apr 22 13:54:50.686: INFO: &ReplicaSet{ObjectMeta:{test-rollover-deployment-784bc44b77 deployment-2028 13b5a0e3-ff51-455b-99bd-07d94c0c2b7d 9990 2 2022-04-22 13:54:36 +0000 UTC <nil> <nil> map[name:rollover-pod pod-template-hash:784bc44b77] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-rollover-deployment ac896328-115f-4278-a0d1-cb5763c6693d 0xc004ae0e17 0xc004ae0e18}] [] [{kube-controller-manager Update apps/v1 2022-04-22 13:54:36 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ac896328-115f-4278-a0d1-cb5763c6693d\"}":{}}},"f:spec":{"f:minReadySeconds":{},"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"redis-slave\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2022-04-22 13:54:38 +0000 UTC FieldsV1 {"f:status":{"f:observedGeneration":{},"f:replicas":{}}} status}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 784bc44b77,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[name:rollover-pod pod-template-hash:784bc44b77] map[] [] [] []} {[] [] [{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc004ae0ec8 <nil> ClusterFirst map[] <nil> false false false <nil> &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] <nil> nil [] 
<nil> <nil> <nil> map[] [] <nil> nil}},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Apr 22 13:54:50.690: INFO: Pod "test-rollover-deployment-668b7f667d-ssns2" is available: &Pod{ObjectMeta:{test-rollover-deployment-668b7f667d-ssns2 test-rollover-deployment-668b7f667d- deployment-2028 27f62fd8-1a2d-4afb-ac36-0eabf343cfd7 10005 0 2022-04-22 13:54:38 +0000 UTC <nil> <nil> map[name:rollover-pod pod-template-hash:668b7f667d] map[] [{apps/v1 ReplicaSet test-rollover-deployment-668b7f667d a2b1ef5d-a1b0-45ed-aa07-e6f82253bbe9 0xc00455fc97 0xc00455fc98}] [] [{kube-controller-manager Update v1 2022-04-22 13:54:38 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"a2b1ef5d-a1b0-45ed-aa07-e6f82253bbe9\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2022-04-22 13:54:40 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"192.168.3.26\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-6ddx7,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:k8s.gcr.io/e2e-test-images/agnhost:2.33,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-6ddx7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:
nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k8s-upgrade-and-conformance-7gf7we-worker-3u7awl,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-22 13:54:38 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-22 13:54:40 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-22 13:54:40 
+0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-22 13:54:38 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.6,PodIP:192.168.3.26,StartTime:2022-04-22 13:54:38 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2022-04-22 13:54:39 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/agnhost:2.33,ImageID:k8s.gcr.io/e2e-test-images/agnhost@sha256:5b3a9f1c71c09c00649d8374224642ff7029ce91a721ec9132e6ed45fa73fd43,ContainerID:containerd://09c6bcea2aa6efb0a4ce965115cc4f16d6c7bc222b4bda39db33677d23e89d97,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:192.168.3.26,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 22 13:54:50.690: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-2028" for this suite. 
• ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should support rollover [Conformance]","total":-1,"completed":32,"skipped":677,"failed":1,"failures":["[sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]"]} SSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 22 13:54:50.725: INFO: >>> kubeConfig: /tmp/kubeconfig STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: Creating a pod to test downward API volume plugin Apr 22 13:54:50.757: INFO: Waiting up to 5m0s for pod "downwardapi-volume-2bf9f3f8-3561-4fc7-bd03-c4dca9594183" in namespace "projected-7210" to be "Succeeded or Failed" Apr 22 13:54:50.763: INFO: Pod "downwardapi-volume-2bf9f3f8-3561-4fc7-bd03-c4dca9594183": Phase="Pending", Reason="", readiness=false. Elapsed: 6.213404ms Apr 22 13:54:52.767: INFO: Pod "downwardapi-volume-2bf9f3f8-3561-4fc7-bd03-c4dca9594183": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.010366496s Apr 22 13:54:54.771: INFO: Pod "downwardapi-volume-2bf9f3f8-3561-4fc7-bd03-c4dca9594183": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.014236911s STEP: Saw pod success Apr 22 13:54:54.771: INFO: Pod "downwardapi-volume-2bf9f3f8-3561-4fc7-bd03-c4dca9594183" satisfied condition "Succeeded or Failed" Apr 22 13:54:54.775: INFO: Trying to get logs from node k8s-upgrade-and-conformance-7gf7we-worker-3u7awl pod downwardapi-volume-2bf9f3f8-3561-4fc7-bd03-c4dca9594183 container client-container: <nil> STEP: delete the pod Apr 22 13:54:54.790: INFO: Waiting for pod downwardapi-volume-2bf9f3f8-3561-4fc7-bd03-c4dca9594183 to disappear Apr 22 13:54:54.795: INFO: Pod downwardapi-volume-2bf9f3f8-3561-4fc7-bd03-c4dca9594183 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 22 13:54:54.795: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7210" for this suite. 
• ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":-1,"completed":33,"skipped":690,"failed":1,"failures":["[sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 22 13:54:54.836: INFO: >>> kubeConfig: /tmp/kubeconfig STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41 [It] should provide podname only [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: Creating a pod to test downward API volume plugin Apr 22 13:54:54.865: INFO: Waiting up to 5m0s for pod "downwardapi-volume-5badaad5-9d7f-428b-817e-f7bfcc39ad17" in namespace "downward-api-4959" to be "Succeeded or Failed" Apr 22 13:54:54.868: INFO: Pod "downwardapi-volume-5badaad5-9d7f-428b-817e-f7bfcc39ad17": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.955558ms Apr 22 13:54:56.872: INFO: Pod "downwardapi-volume-5badaad5-9d7f-428b-817e-f7bfcc39ad17": Phase="Pending", Reason="", readiness=false. Elapsed: 2.0068063s Apr 22 13:54:58.877: INFO: Pod "downwardapi-volume-5badaad5-9d7f-428b-817e-f7bfcc39ad17": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011327625s STEP: Saw pod success Apr 22 13:54:58.877: INFO: Pod "downwardapi-volume-5badaad5-9d7f-428b-817e-f7bfcc39ad17" satisfied condition "Succeeded or Failed" Apr 22 13:54:58.880: INFO: Trying to get logs from node k8s-upgrade-and-conformance-7gf7we-worker-3u7awl pod downwardapi-volume-5badaad5-9d7f-428b-817e-f7bfcc39ad17 container client-container: <nil> STEP: delete the pod Apr 22 13:54:58.901: INFO: Waiting for pod downwardapi-volume-5badaad5-9d7f-428b-817e-f7bfcc39ad17 to disappear Apr 22 13:54:58.904: INFO: Pod downwardapi-volume-5badaad5-9d7f-428b-817e-f7bfcc39ad17 no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 22 13:54:58.904: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-4959" for this suite. 
• ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]","total":-1,"completed":34,"skipped":713,"failed":1,"failures":["[sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]"]} SSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 22 13:52:25.127: INFO: >>> kubeConfig: /tmp/kubeconfig STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should verify that a failing subpath expansion can be modified during the lifecycle of a container [Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: creating the pod with failed condition STEP: updating the pod Apr 22 13:54:25.679: INFO: Successfully updated pod "var-expansion-946db698-de97-498a-babb-8fa138d8b96c" STEP: waiting for pod running STEP: deleting the pod gracefully Apr 22 13:54:27.687: INFO: Deleting pod "var-expansion-946db698-de97-498a-babb-8fa138d8b96c" in namespace "var-expansion-9247" Apr 22 13:54:27.693: INFO: Wait up to 5m0s for pod "var-expansion-946db698-de97-498a-babb-8fa138d8b96c" to be fully deleted [AfterEach] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 22 13:54:59.702: INFO: Waiting up to 3m0s for all (but 0) nodes 
to be ready
STEP: Destroying namespace "var-expansion-9247" for this suite.
• [SLOW TEST:154.583 seconds]
[sig-node] Variable Expansion
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should verify that a failing subpath expansion can be modified during the lifecycle of a container [Slow] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
------------------------------
{"msg":"PASSED [sig-node] Variable Expansion should verify that a failing subpath expansion can be modified during the lifecycle of a container [Slow] [Conformance]","total":-1,"completed":51,"skipped":1226,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]"]}
------------------------------
[BeforeEach] [sig-storage] Projected secret
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 22 13:54:59.779: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating projection with secret that has name projected-secret-test-088c3ab1-a807-4e09-9a86-cedf7abaa745
STEP: Creating a pod to test consume secrets
Apr 22 13:54:59.807: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-fcc28dc9-c759-4808-901c-7504505ba777" in namespace "projected-3440" to be "Succeeded or Failed"
Apr 22 13:54:59.812: INFO: Pod "pod-projected-secrets-fcc28dc9-c759-4808-901c-7504505ba777": Phase="Pending", Reason="", readiness=false. Elapsed: 4.135182ms
Apr 22 13:55:01.816: INFO: Pod "pod-projected-secrets-fcc28dc9-c759-4808-901c-7504505ba777": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008953675s
Apr 22 13:55:03.821: INFO: Pod "pod-projected-secrets-fcc28dc9-c759-4808-901c-7504505ba777": Phase="Succeeded", Reason="", readiness=false.
Elapsed: 4.013271776s
STEP: Saw pod success
Apr 22 13:55:03.821: INFO: Pod "pod-projected-secrets-fcc28dc9-c759-4808-901c-7504505ba777" satisfied condition "Succeeded or Failed"
Apr 22 13:55:03.824: INFO: Trying to get logs from node k8s-upgrade-and-conformance-7gf7we-md-0-7868f4c856-kmb2d pod pod-projected-secrets-fcc28dc9-c759-4808-901c-7504505ba777 container projected-secret-volume-test: <nil>
STEP: delete the pod
Apr 22 13:55:03.850: INFO: Waiting for pod pod-projected-secrets-fcc28dc9-c759-4808-901c-7504505ba777 to disappear
Apr 22 13:55:03.853: INFO: Pod pod-projected-secrets-fcc28dc9-c759-4808-901c-7504505ba777 no longer exists
[AfterEach] [sig-storage] Projected secret
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 22 13:55:03.853: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3440" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":52,"skipped":1279,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]"]}
------------------------------
[BeforeEach] [sig-storage] Projected configMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 22 13:55:03.895: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating configMap with name projected-configmap-test-volume-31363c55-1fcd-49c6-b495-af816e32ca24
STEP: Creating a pod to test consume configMaps
Apr 22 13:55:03.929: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-8746537e-b568-4809-87d0-d7233ddf5a5a" in namespace "projected-5008" to be "Succeeded or Failed"
Apr 22 13:55:03.932: INFO: Pod "pod-projected-configmaps-8746537e-b568-4809-87d0-d7233ddf5a5a": Phase="Pending", Reason="", readiness=false. Elapsed: 3.382237ms
Apr 22 13:55:05.937: INFO: Pod "pod-projected-configmaps-8746537e-b568-4809-87d0-d7233ddf5a5a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00789552s
Apr 22 13:55:07.942: INFO: Pod "pod-projected-configmaps-8746537e-b568-4809-87d0-d7233ddf5a5a": Phase="Succeeded", Reason="", readiness=false.
Elapsed: 4.012844709s
STEP: Saw pod success
Apr 22 13:55:07.942: INFO: Pod "pod-projected-configmaps-8746537e-b568-4809-87d0-d7233ddf5a5a" satisfied condition "Succeeded or Failed"
Apr 22 13:55:07.945: INFO: Trying to get logs from node k8s-upgrade-and-conformance-7gf7we-md-0-7868f4c856-k4p2k pod pod-projected-configmaps-8746537e-b568-4809-87d0-d7233ddf5a5a container agnhost-container: <nil>
STEP: delete the pod
Apr 22 13:55:07.968: INFO: Waiting for pod pod-projected-configmaps-8746537e-b568-4809-87d0-d7233ddf5a5a to disappear
Apr 22 13:55:07.971: INFO: Pod pod-projected-configmaps-8746537e-b568-4809-87d0-d7233ddf5a5a no longer exists
[AfterEach] [sig-storage] Projected configMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 22 13:55:07.971: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5008" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":-1,"completed":53,"skipped":1304,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]"]}
------------------------------
[BeforeEach] [sig-storage] Projected secret
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 22 13:55:07.985: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating projection with secret that has name projected-secret-test-map-29e44d3b-f735-4bcb-8f61-fa03fcf12817
STEP: Creating a pod to test consume secrets
Apr 22 13:55:08.027: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-dec401c7-4809-4cb2-b19f-61edd1ded923" in namespace "projected-6509" to be "Succeeded or Failed"
Apr 22 13:55:08.030: INFO: Pod "pod-projected-secrets-dec401c7-4809-4cb2-b19f-61edd1ded923": Phase="Pending", Reason="", readiness=false. Elapsed: 3.265174ms
Apr 22 13:55:10.035: INFO: Pod "pod-projected-secrets-dec401c7-4809-4cb2-b19f-61edd1ded923": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008472939s
Apr 22 13:55:12.048: INFO: Pod "pod-projected-secrets-dec401c7-4809-4cb2-b19f-61edd1ded923": Phase="Succeeded", Reason="", readiness=false.
Elapsed: 4.020921716s
STEP: Saw pod success
Apr 22 13:55:12.048: INFO: Pod "pod-projected-secrets-dec401c7-4809-4cb2-b19f-61edd1ded923" satisfied condition "Succeeded or Failed"
Apr 22 13:55:12.051: INFO: Trying to get logs from node k8s-upgrade-and-conformance-7gf7we-md-0-7868f4c856-kmb2d pod pod-projected-secrets-dec401c7-4809-4cb2-b19f-61edd1ded923 container projected-secret-volume-test: <nil>
STEP: delete the pod
Apr 22 13:55:12.067: INFO: Waiting for pod pod-projected-secrets-dec401c7-4809-4cb2-b19f-61edd1ded923 to disappear
Apr 22 13:55:12.070: INFO: Pod pod-projected-secrets-dec401c7-4809-4cb2-b19f-61edd1ded923 no longer exists
[AfterEach] [sig-storage] Projected secret
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 22 13:55:12.070: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6509" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":54,"skipped":1306,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]"]}
------------------------------
[BeforeEach] [sig-storage] Subpath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 22 13:55:12.092: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a
default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with projected pod [Excluded:WindowsDocker] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating pod pod-subpath-test-projected-f64c
STEP: Creating a pod to test atomic-volume-subpath
Apr 22 13:55:12.148: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-f64c" in namespace "subpath-4293" to be "Succeeded or Failed"
Apr 22 13:55:12.154: INFO: Pod "pod-subpath-test-projected-f64c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.488913ms
Apr 22 13:55:14.159: INFO: Pod "pod-subpath-test-projected-f64c": Phase="Running", Reason="", readiness=true. Elapsed: 2.010695348s
Apr 22 13:55:16.163: INFO: Pod "pod-subpath-test-projected-f64c": Phase="Running", Reason="", readiness=true. Elapsed: 4.015266789s
Apr 22 13:55:18.167: INFO: Pod "pod-subpath-test-projected-f64c": Phase="Running", Reason="", readiness=true. Elapsed: 6.019610196s
Apr 22 13:55:20.172: INFO: Pod "pod-subpath-test-projected-f64c": Phase="Running", Reason="", readiness=true. Elapsed: 8.023949674s
Apr 22 13:55:22.176: INFO: Pod "pod-subpath-test-projected-f64c": Phase="Running", Reason="", readiness=true. Elapsed: 10.028645532s
Apr 22 13:55:24.181: INFO: Pod "pod-subpath-test-projected-f64c": Phase="Running", Reason="", readiness=true. Elapsed: 12.033549625s
Apr 22 13:55:26.186: INFO: Pod "pod-subpath-test-projected-f64c": Phase="Running", Reason="", readiness=true. Elapsed: 14.038490044s
Apr 22 13:55:28.192: INFO: Pod "pod-subpath-test-projected-f64c": Phase="Running", Reason="", readiness=true.
Elapsed: 16.043939715s
Apr 22 13:55:30.195: INFO: Pod "pod-subpath-test-projected-f64c": Phase="Running", Reason="", readiness=true. Elapsed: 18.047620407s
Apr 22 13:55:32.200: INFO: Pod "pod-subpath-test-projected-f64c": Phase="Running", Reason="", readiness=true. Elapsed: 20.052264346s
Apr 22 13:55:34.207: INFO: Pod "pod-subpath-test-projected-f64c": Phase="Running", Reason="", readiness=false. Elapsed: 22.059358353s
Apr 22 13:55:36.212: INFO: Pod "pod-subpath-test-projected-f64c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.063711346s
STEP: Saw pod success
Apr 22 13:55:36.212: INFO: Pod "pod-subpath-test-projected-f64c" satisfied condition "Succeeded or Failed"
Apr 22 13:55:36.215: INFO: Trying to get logs from node k8s-upgrade-and-conformance-7gf7we-md-0-7868f4c856-kmb2d pod pod-subpath-test-projected-f64c container test-container-subpath-projected-f64c: <nil>
STEP: delete the pod
Apr 22 13:55:36.232: INFO: Waiting for pod pod-subpath-test-projected-f64c to disappear
Apr 22 13:55:36.235: INFO: Pod pod-subpath-test-projected-f64c no longer exists
STEP: Deleting pod pod-subpath-test-projected-f64c
Apr 22 13:55:36.235: INFO: Deleting pod "pod-subpath-test-projected-f64c" in namespace "subpath-4293"
[AfterEach] [sig-storage] Subpath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 22 13:55:36.238: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-4293" for this suite.
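The atomic-writer subpath test above mounts a projected volume into a pod via `subPath`. A minimal sketch of that shape of pod spec (the names, image, and paths here are illustrative, not the ones generated by the e2e framework):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-subpath-projected-example   # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: test-container-subpath
    image: registry.k8s.io/e2e-test-images/agnhost:2.39   # assumed test image
    command: ["sh", "-c", "cat /test-volume/data && sleep 1"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
      subPath: path/to/sub      # mount only this subpath of the projected volume
  volumes:
  - name: test-volume
    projected:
      sources:
      - configMap:
          name: my-configmap    # assumed to exist in the namespace
```

The test's point is that projected volumes use atomic-writer semantics (symlinked data directories swapped on update), and subpaths into them must still resolve correctly while the pod runs.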
•
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [Excluded:WindowsDocker] [Conformance]","total":-1,"completed":55,"skipped":1312,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]"]}
------------------------------
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 22 13:55:36.302: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41
[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating a pod to test downward API volume plugin
Apr 22 13:55:36.342: INFO: Waiting up to 5m0s for pod "downwardapi-volume-8f4b3460-1a63-43c7-a14c-bc0ac54adb95" in namespace "projected-3689" to be "Succeeded or Failed"
Apr 22 13:55:36.345: INFO: Pod "downwardapi-volume-8f4b3460-1a63-43c7-a14c-bc0ac54adb95": Phase="Pending", Reason="", readiness=false. Elapsed: 3.702927ms
Apr 22 13:55:38.350: INFO: Pod "downwardapi-volume-8f4b3460-1a63-43c7-a14c-bc0ac54adb95": Phase="Pending", Reason="", readiness=false. Elapsed: 2.0081264s
Apr 22 13:55:40.354: INFO: Pod "downwardapi-volume-8f4b3460-1a63-43c7-a14c-bc0ac54adb95": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012448003s
STEP: Saw pod success
Apr 22 13:55:40.354: INFO: Pod "downwardapi-volume-8f4b3460-1a63-43c7-a14c-bc0ac54adb95" satisfied condition "Succeeded or Failed"
Apr 22 13:55:40.357: INFO: Trying to get logs from node k8s-upgrade-and-conformance-7gf7we-md-0-7868f4c856-k4p2k pod downwardapi-volume-8f4b3460-1a63-43c7-a14c-bc0ac54adb95 container client-container: <nil>
STEP: delete the pod
Apr 22 13:55:40.372: INFO: Waiting for pod downwardapi-volume-8f4b3460-1a63-43c7-a14c-bc0ac54adb95 to disappear
Apr 22 13:55:40.377: INFO: Pod downwardapi-volume-8f4b3460-1a63-43c7-a14c-bc0ac54adb95 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 22 13:55:40.377: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3689" for this suite.
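The DefaultMode behaviour checked above can be reproduced with a pod spec along these lines (a sketch; the names, image, and mode value are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-example   # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: registry.k8s.io/e2e-test-images/agnhost:2.39   # assumed test image
    command: ["sh", "-c", "stat -c '%a' /etc/podinfo/podname"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      defaultMode: 0400         # every projected file is created with this mode
      sources:
      - downwardAPI:
          items:
          - path: podname
            fieldRef:
              fieldPath: metadata.name
```

With `defaultMode: 0400`, the test expects the projected file to carry mode `-r--------` on disk, which is what the pod's log output is checked against.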
•
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":56,"skipped":1344,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]"]}
------------------------------
[BeforeEach] [sig-node] Container Runtime
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 22 13:55:40.401: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should report termination message from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [Excluded:WindowsDocker] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Apr 22 13:55:44.459: INFO: Expected: &{OK} to match Container's Termination Message: OK --
STEP: delete the container
[AfterEach] [sig-node] Container Runtime
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 22
13:55:44.471: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-9488" for this suite.
•
------------------------------
{"msg":"PASSED [sig-node] Container Runtime blackbox test on terminated container should report termination message from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [Excluded:WindowsDocker] [NodeConformance] [Conformance]","total":-1,"completed":57,"skipped":1354,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]"]}
------------------------------
[BeforeEach] [sig-node] Variable Expansion
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 22 13:55:44.531: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should fail substituting values in a volume subpath with backticks [Slow] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
Apr 22 13:55:46.561: INFO: Deleting pod "var-expansion-849b9301-9510-4916-8d10-00e1a70af406" in namespace "var-expansion-2643"
Apr 22 13:55:46.566: INFO: Wait up to 5m0s for pod "var-expansion-849b9301-9510-4916-8d10-00e1a70af406" to be fully deleted
[AfterEach] [sig-node] Variable Expansion
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 22 13:55:48.579: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-2643" for this suite.
•
------------------------------
{"msg":"PASSED [sig-node] Variable Expansion should fail substituting values in a volume subpath with backticks [Slow] [Conformance]","total":-1,"completed":58,"skipped":1389,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]"]}
------------------------------
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 22 13:54:58.938: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read
extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Apr 22 13:54:59.496: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Apr 22 13:55:02.520: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should unconditionally reject operations on fail closed webhook [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Registering a webhook that server cannot talk to, with fail closed policy, via the AdmissionRegistration API
Apr 22 13:55:12.537: INFO: Waiting for webhook configuration to be ready...
Apr 22 13:55:22.648: INFO: Waiting for webhook configuration to be ready...
Apr 22 13:55:32.750: INFO: Waiting for webhook configuration to be ready...
Apr 22 13:55:42.849: INFO: Waiting for webhook configuration to be ready...
Apr 22 13:55:52.859: INFO: Waiting for webhook configuration to be ready...
Apr 22 13:55:52.859: FAIL: waiting for webhook configuration to be ready
Unexpected error:
    <*errors.errorString | 0xc0002482b0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred

Full Stack Trace
k8s.io/kubernetes/test/e2e/apimachinery.registerFailClosedWebhook(0xc000a16840, {0xc003660d20, 0xc}, 0x0, 0x0)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:1253 +0x56a
k8s.io/kubernetes/test/e2e/apimachinery.glob..func27.7()
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:237 +0x45
k8s.io/kubernetes/test/e2e.RunE2ETests(0x24e2677)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:133 +0x697
k8s.io/kubernetes/test/e2e.TestE2E(0x2456919)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:136 +0x19
testing.tRunner(0xc000ada680, 0x73a1f18)
	/usr/local/go/src/testing/testing.go:1259 +0x102
created by testing.(*T).Run
	/usr/local/go/src/testing/testing.go:1306 +0x35a
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 22 13:55:52.860: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-9362" for this suite.
STEP: Destroying namespace "webhook-9362-markers" for this suite.
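The failure above happens while the test registers a webhook whose backend is deliberately unreachable and whose `failurePolicy` is `Fail`, so that requests the API server cannot admit are rejected outright. A minimal sketch of that kind of configuration (all names, the namespace, and the selector label are illustrative, not the e2e framework's generated values):

```yaml
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  name: fail-closed-example          # illustrative name
webhooks:
- name: fail-closed.example.com
  failurePolicy: Fail                # reject requests when the webhook cannot be reached
  sideEffects: None
  admissionReviewVersions: ["v1"]
  clientConfig:
    service:
      name: no-such-service          # deliberately unreachable backend
      namespace: default
      path: /validate
  rules:
  - apiGroups: [""]
    apiVersions: ["v1"]
    operations: ["CREATE"]
    resources: ["configmaps"]
  namespaceSelector:
    matchLabels:
      fail-closed-test: "true"       # scope the webhook to a labelled test namespace
```

The registration itself timed out here ("Waiting for webhook configuration to be ready..." five times over ~40s), which is the flake the retried run further down then passes.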
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
• Failure [54.008 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should unconditionally reject operations on fail closed webhook [Conformance] [It]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633

  Apr 22 13:55:52.859: waiting for webhook configuration to be ready
  Unexpected error:
      <*errors.errorString | 0xc0002482b0>: {
          s: "timed out waiting for the condition",
      }
      timed out waiting for the condition
  occurred

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:1253
------------------------------
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 22 13:55:48.602: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
Apr 22 13:55:49.116: INFO: role binding webhook-auth-reader already exists
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be
ready
Apr 22 13:55:49.128: INFO: new replicaset for deployment "sample-webhook-deployment" is yet to be created
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Apr 22 13:55:52.148: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should mutate custom resource with different stored version [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
Apr 22 13:55:52.152: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Registering the mutating webhook for custom resource e2e-test-webhook-8381-crds.webhook.example.com via the AdmissionRegistration API
STEP: Creating a custom resource while v1 is storage version
STEP: Patching Custom Resource Definition to set v2 as storage
STEP: Patching the custom resource while v2 is storage version
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 22 13:55:55.316: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-4676" for this suite.
STEP: Destroying namespace "webhook-4676-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
•
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","total":-1,"completed":59,"skipped":1396,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]"]}
------------------------------
{"msg":"FAILED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","total":-1,"completed":34,"skipped":731,"failed":2,"failures":["[sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]"]}
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 22 13:55:52.948: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in
namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 �[1mSTEP�[0m: Setting up server cert �[1mSTEP�[0m: Create role binding to let webhook read extension-apiserver-authentication �[1mSTEP�[0m: Deploying the webhook pod �[1mSTEP�[0m: Wait for the deployment to be ready Apr 22 13:55:53.374: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set �[1mSTEP�[0m: Deploying the webhook service �[1mSTEP�[0m: Verifying the service has paired with the endpoint Apr 22 13:55:56.427: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should unconditionally reject operations on fail closed webhook [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 �[1mSTEP�[0m: Registering a webhook that server cannot talk to, with fail closed policy, via the AdmissionRegistration API �[1mSTEP�[0m: create a namespace for the webhook �[1mSTEP�[0m: create a configmap should be unconditionally rejected by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 22 13:55:56.489: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "webhook-6353" for this suite. �[1mSTEP�[0m: Destroying namespace "webhook-6353-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
•
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","total":-1,"completed":35,"skipped":731,"failed":2,"failures":["[sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 22 13:55:56.603: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:244
[It] should check is all data is printed [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
Apr 22 13:55:56.625: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-9975 version'
Apr 22 13:55:56.693: INFO: stderr: ""
Apr 22 13:55:56.694: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"23\", GitVersion:\"v1.23.6\", GitCommit:\"ad3338546da947756e8a88aa6822e9c11e7eac22\", GitTreeState:\"clean\", BuildDate:\"2022-04-14T08:49:13Z\", GoVersion:\"go1.17.9\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"23\", GitVersion:\"v1.23.6\", GitCommit:\"d34db33f\", GitTreeState:\"clean\", BuildDate:\"2022-04-21T19:32:12Z\", GoVersion:\"go1.17.9\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 22 13:55:56.694: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-9975" for this suite.
•
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl version should check is all data is printed [Conformance]","total":-1,"completed":36,"skipped":763,"failed":2,"failures":["[sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-node] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 22 13:55:56.741: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should fail to create secret due to empty secret key [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating projection with secret that has name secret-emptykey-test-e570ce78-581d-4708-9fcd-45b162a78b60
[AfterEach] [sig-node] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 22 13:55:56.765: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-6920" for this suite.
•
------------------------------
{"msg":"PASSED [sig-node] Secrets should fail to create secret due to empty secret key [Conformance]","total":-1,"completed":37,"skipped":788,"failed":2,"failures":["[sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]"]}
SSSS
------------------------------
[BeforeEach] [sig-node] Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 22 13:55:55.438: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-node] Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:189
[It] should be updated [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: creating the pod
STEP: submitting the pod to kubernetes
Apr 22 13:55:55.493: INFO: The status of Pod pod-update-eaa52855-0b73-4919-9ebe-63617d6905a0 is Pending, waiting for it to be Running (with Ready = true)
Apr 22 13:55:57.499: INFO: The status of Pod pod-update-eaa52855-0b73-4919-9ebe-63617d6905a0 is Pending, waiting for it to be Running (with Ready = true)
Apr 22 13:55:59.519: INFO: The status of Pod pod-update-eaa52855-0b73-4919-9ebe-63617d6905a0 is Pending, waiting for it to be Running (with Ready = true)
Apr 22 13:56:01.533: INFO: The status of Pod pod-update-eaa52855-0b73-4919-9ebe-63617d6905a0 is Pending, waiting for it to be Running (with Ready = true)
Apr 22 13:56:03.507: INFO: The status of Pod pod-update-eaa52855-0b73-4919-9ebe-63617d6905a0 is Pending, waiting for it to be Running (with Ready = true)
Apr 22 13:56:05.510: INFO: The status of Pod pod-update-eaa52855-0b73-4919-9ebe-63617d6905a0 is Running (Ready = true)
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Apr 22 13:56:06.049: INFO: Successfully updated pod "pod-update-eaa52855-0b73-4919-9ebe-63617d6905a0"
STEP: verifying the updated pod is in kubernetes
Apr 22 13:56:06.068: INFO: Pod update OK
[AfterEach] [sig-node] Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 22 13:56:06.077: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-8786" for this suite.
•
------------------------------
{"msg":"PASSED [sig-node] Pods should be updated [NodeConformance] [Conformance]","total":-1,"completed":60,"skipped":1419,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]"]}
SSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-api-machinery] Garbage collector
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 22 13:55:56.792: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
Apr 22 13:56:03.096: INFO: 80 pods remaining
Apr 22 13:56:03.096: INFO: 80 pods has nil DeletionTimestamp
Apr 22 13:56:03.096: INFO:
Apr 22 13:56:04.043: INFO: 71 pods remaining
Apr 22 13:56:04.043: INFO: 69 pods has nil DeletionTimestamp
Apr 22 13:56:04.043: INFO:
Apr 22 13:56:04.969: INFO: 60 pods remaining
Apr 22 13:56:04.969: INFO: 60 pods has nil DeletionTimestamp
Apr 22 13:56:04.969: INFO:
Apr 22 13:56:05.931: INFO: 40 pods remaining
Apr 22 13:56:05.931: INFO: 40 pods has nil DeletionTimestamp
Apr 22 13:56:05.931: INFO:
Apr 22 13:56:06.947: INFO: 31 pods remaining
Apr 22 13:56:06.947: INFO: 31 pods has nil DeletionTimestamp
Apr 22 13:56:06.947: INFO:
Apr 22 13:56:07.932: INFO: 20 pods remaining
Apr 22 13:56:07.933: INFO: 20 pods has nil DeletionTimestamp
Apr 22 13:56:07.933: INFO:
STEP: Gathering metrics
Apr 22 13:56:08.989: INFO: The status of Pod kube-controller-manager-k8s-upgrade-and-conformance-7gf7we-control-plane-jsdzr is Running (Ready = true)
Apr 22 13:56:09.243: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 22 13:56:09.243: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-3130" for this suite.
•
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]","total":-1,"completed":38,"skipped":792,"failed":2,"failures":["[sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]"]}
SSSSS
------------------------------
[BeforeEach] [sig-node] Probing container
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 22 13:52:09.848: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-node] Probing container
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/container_probe.go:56
[It] should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating pod liveness-43483aab-1d22-4c44-86c1-6901f5cb32c0 in namespace container-probe-3928
Apr 22 13:52:11.882: INFO: Started pod liveness-43483aab-1d22-4c44-86c1-6901f5cb32c0 in namespace container-probe-3928
STEP: checking the pod's current state and verifying that restartCount is present
Apr 22 13:52:11.885: INFO: Initial restart count of pod liveness-43483aab-1d22-4c44-86c1-6901f5cb32c0 is 0
STEP: deleting the pod
[AfterEach] [sig-node] Probing container
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 22 13:56:12.610: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-3928" for this suite.
• [SLOW TEST:242.782 seconds]
[sig-node] Probing container
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
  should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
------------------------------
{"msg":"PASSED [sig-node] Probing container should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance]","total":-1,"completed":41,"skipped":786,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 22 13:56:09.275: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating a pod to test downward API volume plugin
Apr 22 13:56:09.337: INFO: Waiting up to 5m0s for pod "downwardapi-volume-49b32577-66a9-4dc5-89f7-6c667a230ebc" in namespace "projected-789" to be "Succeeded or Failed"
Apr 22 13:56:09.364: INFO: Pod "downwardapi-volume-49b32577-66a9-4dc5-89f7-6c667a230ebc": Phase="Pending", Reason="", readiness=false. Elapsed: 26.654729ms
Apr 22 13:56:11.373: INFO: Pod "downwardapi-volume-49b32577-66a9-4dc5-89f7-6c667a230ebc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.035084898s
Apr 22 13:56:13.385: INFO: Pod "downwardapi-volume-49b32577-66a9-4dc5-89f7-6c667a230ebc": Phase="Pending", Reason="", readiness=false. Elapsed: 4.047736087s
Apr 22 13:56:15.390: INFO: Pod "downwardapi-volume-49b32577-66a9-4dc5-89f7-6c667a230ebc": Phase="Pending", Reason="", readiness=false. Elapsed: 6.052741318s
Apr 22 13:56:17.395: INFO: Pod "downwardapi-volume-49b32577-66a9-4dc5-89f7-6c667a230ebc": Phase="Pending", Reason="", readiness=false. Elapsed: 8.057840284s
Apr 22 13:56:19.400: INFO: Pod "downwardapi-volume-49b32577-66a9-4dc5-89f7-6c667a230ebc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.06202941s
STEP: Saw pod success
Apr 22 13:56:19.400: INFO: Pod "downwardapi-volume-49b32577-66a9-4dc5-89f7-6c667a230ebc" satisfied condition "Succeeded or Failed"
Apr 22 13:56:19.403: INFO: Trying to get logs from node k8s-upgrade-and-conformance-7gf7we-worker-3u7awl pod downwardapi-volume-49b32577-66a9-4dc5-89f7-6c667a230ebc container client-container: <nil>
STEP: delete the pod
Apr 22 13:56:19.422: INFO: Waiting for pod downwardapi-volume-49b32577-66a9-4dc5-89f7-6c667a230ebc to disappear
Apr 22 13:56:19.425: INFO: Pod downwardapi-volume-49b32577-66a9-4dc5-89f7-6c667a230ebc no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 22 13:56:19.425: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-789" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":-1,"completed":39,"skipped":797,"failed":2,"failures":["[sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 22 13:56:12.774: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Apr 22 13:56:13.714: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Apr 22 13:56:15.725: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.April, 22, 13, 56, 13, 0, time.Local), LastTransitionTime:time.Date(2022, time.April, 22, 13, 56, 13, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.April, 22, 13, 56, 13, 0, time.Local), LastTransitionTime:time.Date(2022, time.April, 22, 13, 56, 13, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78948c58f6\" is progressing."}}, CollisionCount:(*int32)(nil)}
Apr 22 13:56:17.729: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.April, 22, 13, 56, 13, 0, time.Local), LastTransitionTime:time.Date(2022, time.April, 22, 13, 56, 13, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.April, 22, 13, 56, 13, 0, time.Local), LastTransitionTime:time.Date(2022, time.April, 22, 13, 56, 13, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78948c58f6\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Apr 22 13:56:20.744: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should honor timeout [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Setting timeout (1s) shorter than webhook latency (5s)
STEP: Registering slow webhook via the AdmissionRegistration API
STEP: Request fails when timeout (1s) is shorter than slow webhook latency (5s)
STEP: Having no error when timeout is shorter than webhook latency and failure policy is ignore
STEP: Registering slow webhook via the AdmissionRegistration API
STEP: Having no error when timeout is longer than webhook latency
STEP: Registering slow webhook via the AdmissionRegistration API
STEP: Having no error when timeout is empty (defaulted to 10s in v1)
STEP: Registering slow webhook via the AdmissionRegistration API
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 22 13:56:32.845: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-5250" for this suite.
STEP: Destroying namespace "webhook-5250-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
•
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","total":-1,"completed":42,"skipped":838,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-storage] Projected secret
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 22 13:56:32.970: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating secret with name s-test-opt-del-fd10641d-dbe9-46cc-963c-6e6314c51b5f
STEP: Creating secret with name s-test-opt-upd-d66aad73-e75d-4c04-985b-f34ef20ac82a
STEP: Creating the pod
Apr 22 13:56:33.047: INFO: The status of Pod pod-projected-secrets-24adb39b-4e52-4449-81ab-de2d12cffd8c is Pending, waiting for it to be Running (with Ready = true)
Apr 22 13:56:35.053: INFO: The status of Pod pod-projected-secrets-24adb39b-4e52-4449-81ab-de2d12cffd8c is Running (Ready = true)
STEP: Deleting secret s-test-opt-del-fd10641d-dbe9-46cc-963c-6e6314c51b5f
STEP: Updating secret s-test-opt-upd-d66aad73-e75d-4c04-985b-f34ef20ac82a
STEP: Creating secret with name s-test-opt-create-3b61e4c4-2bbe-47b7-a4c4-ae8984fac40d
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected secret
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 22 13:56:39.124: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2602" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance]","total":-1,"completed":43,"skipped":872,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 22 13:56:39.154: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating a pod to test emptydir 0777 on node default medium
Apr 22 13:56:39.186: INFO: Waiting up to 5m0s for pod "pod-35d43079-8ef1-460e-ba35-595e3463019d" in namespace "emptydir-347" to be "Succeeded or Failed"
Apr 22 13:56:39.189: INFO: Pod "pod-35d43079-8ef1-460e-ba35-595e3463019d": Phase="Pending", Reason="", readiness=false. Elapsed: 3.365585ms
Apr 22 13:56:41.194: INFO: Pod "pod-35d43079-8ef1-460e-ba35-595e3463019d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008183836s
Apr 22 13:56:43.199: INFO: Pod "pod-35d43079-8ef1-460e-ba35-595e3463019d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013299379s
STEP: Saw pod success
Apr 22 13:56:43.199: INFO: Pod "pod-35d43079-8ef1-460e-ba35-595e3463019d" satisfied condition "Succeeded or Failed"
Apr 22 13:56:43.202: INFO: Trying to get logs from node k8s-upgrade-and-conformance-7gf7we-worker-3u7awl pod pod-35d43079-8ef1-460e-ba35-595e3463019d container test-container: <nil>
STEP: delete the pod
Apr 22 13:56:43.219: INFO: Waiting for pod pod-35d43079-8ef1-460e-ba35-595e3463019d to disappear
Apr 22 13:56:43.224: INFO: Pod pod-35d43079-8ef1-460e-ba35-595e3463019d no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 22 13:56:43.224: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-347" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":44,"skipped":886,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-apps] StatefulSet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 22 13:56:06.157: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:94
[BeforeEach] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:109
STEP: Creating service test in namespace statefulset-8952
[It] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating stateful set ss in namespace statefulset-8952
STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-8952
Apr 22 13:56:06.267: INFO: Found 0 stateful pods, waiting for 1
Apr 22 13:56:16.272: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Pending - Ready=false
Apr 22 13:56:26.271: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod
Apr 22 13:56:26.275: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-8952 exec ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Apr 22 13:56:26.438: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n"
Apr 22 13:56:26.438: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Apr 22 13:56:26.438: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'
Apr 22 13:56:26.442: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true
Apr 22 13:56:36.446: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Apr 22 13:56:36.446: INFO: Waiting for statefulset status.replicas updated to 0
Apr 22 13:56:36.460: INFO: POD   NODE                                              PHASE    GRACE  CONDITIONS
Apr 22 13:56:36.460: INFO: ss-0  k8s-upgrade-and-conformance-7gf7we-worker-wwgoid  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-04-22 13:56:06 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-04-22 13:56:27 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-04-22 13:56:27 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-04-22 13:56:06 +0000 UTC }]
Apr 22 13:56:36.460: INFO:
Apr 22 13:56:36.460: INFO: StatefulSet ss has not reached scale 3, at 1
Apr 22 13:56:37.467: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.996712183s
Apr 22 13:56:38.473: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.989287339s
Apr 22 13:56:39.478: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.984176552s
Apr 22 13:56:40.483: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.978107818s
Apr 22 13:56:41.488: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.973355591s
Apr 22 13:56:42.494: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.968210859s
Apr 22 13:56:43.499: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.962455534s
Apr 22 13:56:44.504: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.95786997s
Apr 22 13:56:45.509: INFO: Verifying statefulset ss doesn't scale past 3 for another 952.264461ms
STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-8952
Apr 22 13:56:46.513: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-8952 exec ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Apr 22 13:56:46.681: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n"
Apr 22 13:56:46.681: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Apr 22 13:56:46.681: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'
Apr 22 13:56:46.681: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-8952 exec ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Apr 22 13:56:46.845: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\n"
Apr 22 13:56:46.845: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Apr 22 13:56:46.845: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'
Apr 22 13:56:46.846: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-8952 exec ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Apr 22 13:56:47.023: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\n"
Apr 22 13:56:47.023: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Apr 22 13:56:47.023: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'
Apr 22 13:56:47.027: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=false
Apr 22 13:56:57.032: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Apr 22 13:56:57.032: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Apr 22 13:56:57.032: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Scale down will not halt with unhealthy stateful pod
Apr 22 13:56:57.036: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-8952 exec ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Apr 22 13:56:57.194: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n"
Apr 22 13:56:57.194: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Apr 22 13:56:57.194: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'
Apr 22 13:56:57.194: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-8952 exec ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ 
|| true' Apr 22 13:56:57.349: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" Apr 22 13:56:57.349: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Apr 22 13:56:57.349: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Apr 22 13:56:57.349: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=statefulset-8952 exec ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Apr 22 13:56:57.508: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" Apr 22 13:56:57.508: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Apr 22 13:56:57.508: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Apr 22 13:56:57.508: INFO: Waiting for statefulset status.replicas updated to 0 Apr 22 13:56:57.513: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 1 Apr 22 13:57:07.521: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Apr 22 13:57:07.521: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Apr 22 13:57:07.521: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Apr 22 13:57:07.533: INFO: POD NODE PHASE GRACE CONDITIONS Apr 22 13:57:07.533: INFO: ss-0 k8s-upgrade-and-conformance-7gf7we-worker-wwgoid Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-04-22 13:56:06 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-04-22 13:56:57 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-04-22 13:56:57 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 
2022-04-22 13:56:06 +0000 UTC }] Apr 22 13:57:07.533: INFO: ss-1 k8s-upgrade-and-conformance-7gf7we-worker-3u7awl Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-04-22 13:56:36 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-04-22 13:56:57 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-04-22 13:56:57 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-04-22 13:56:36 +0000 UTC }] Apr 22 13:57:07.533: INFO: ss-2 k8s-upgrade-and-conformance-7gf7we-md-0-7868f4c856-k4p2k Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-04-22 13:56:36 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-04-22 13:56:58 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-04-22 13:56:58 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-04-22 13:56:36 +0000 UTC }] Apr 22 13:57:07.533: INFO: Apr 22 13:57:07.533: INFO: StatefulSet ss has not reached scale 0, at 3 Apr 22 13:57:08.538: INFO: POD NODE PHASE GRACE CONDITIONS Apr 22 13:57:08.538: INFO: ss-0 k8s-upgrade-and-conformance-7gf7we-worker-wwgoid Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-04-22 13:56:06 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-04-22 13:56:57 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-04-22 13:56:57 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-04-22 13:56:06 +0000 UTC }] Apr 22 13:57:08.538: INFO: ss-1 k8s-upgrade-and-conformance-7gf7we-worker-3u7awl Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-04-22 13:56:36 +0000 UTC } {Ready False 
0001-01-01 00:00:00 +0000 UTC 2022-04-22 13:56:57 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-04-22 13:56:57 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-04-22 13:56:36 +0000 UTC }] Apr 22 13:57:08.538: INFO: Apr 22 13:57:08.538: INFO: StatefulSet ss has not reached scale 0, at 2 Apr 22 13:57:09.542: INFO: Verifying statefulset ss doesn't scale past 0 for another 7.990261375s Apr 22 13:57:10.546: INFO: Verifying statefulset ss doesn't scale past 0 for another 6.98636177s Apr 22 13:57:11.550: INFO: Verifying statefulset ss doesn't scale past 0 for another 5.981941287s Apr 22 13:57:12.559: INFO: Verifying statefulset ss doesn't scale past 0 for another 4.977864166s Apr 22 13:57:13.563: INFO: Verifying statefulset ss doesn't scale past 0 for another 3.96873627s Apr 22 13:57:14.566: INFO: Verifying statefulset ss doesn't scale past 0 for another 2.965214491s Apr 22 13:57:15.571: INFO: Verifying statefulset ss doesn't scale past 0 for another 1.960508044s Apr 22 13:57:16.575: INFO: Verifying statefulset ss doesn't scale past 0 for another 956.482568ms �[1mSTEP�[0m: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespacestatefulset-8952 Apr 22 13:57:17.579: INFO: Scaling statefulset ss to 0 Apr 22 13:57:17.588: INFO: Waiting for statefulset status.replicas updated to 0 [AfterEach] Basic StatefulSet functionality [StatefulSetBasic] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:120 Apr 22 13:57:17.591: INFO: Deleting all statefulset in ns statefulset-8952 Apr 22 13:57:17.593: INFO: Scaling statefulset ss to 0 Apr 22 13:57:17.603: INFO: Waiting for statefulset status.replicas updated to 0 Apr 22 13:57:17.606: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 22 13:57:17.616: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "statefulset-8952" for this suite. �[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]","total":-1,"completed":61,"skipped":1438,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]"]} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Apr 22 13:57:17.678: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename services �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace �[1mSTEP�[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:749 [It] should be able to change the type from ExternalName 
to NodePort [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 �[1mSTEP�[0m: creating a service externalname-service with the type=ExternalName in namespace services-748 �[1mSTEP�[0m: changing the ExternalName service to type=NodePort �[1mSTEP�[0m: creating replication controller externalname-service in namespace services-748 I0422 13:57:17.720900 21 runners.go:193] Created replication controller with name: externalname-service, namespace: services-748, replica count: 2 I0422 13:57:20.772407 21 runners.go:193] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Apr 22 13:57:20.772: INFO: Creating new exec pod Apr 22 13:57:23.787: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-748 exec execpodxgkxd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 externalname-service 80' Apr 22 13:57:24.110: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 externalname-service 80\nConnection to externalname-service 80 port [tcp/http] succeeded!\n" Apr 22 13:57:24.110: INFO: stdout: "externalname-service-f5gh9" Apr 22 13:57:24.110: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-748 exec execpodxgkxd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.131.146.214 80' Apr 22 13:57:24.265: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 10.131.146.214 80\nConnection to 10.131.146.214 80 port [tcp/http] succeeded!\n" Apr 22 13:57:24.265: INFO: stdout: "" Apr 22 13:57:25.265: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-748 exec execpodxgkxd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.131.146.214 80' Apr 22 13:57:25.450: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 10.131.146.214 80\nConnection to 10.131.146.214 80 port [tcp/http] succeeded!\n" Apr 22 13:57:25.450: INFO: stdout: 
"externalname-service-x7zcm" Apr 22 13:57:25.450: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-748 exec execpodxgkxd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 172.18.0.6 31289' Apr 22 13:57:25.606: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 172.18.0.6 31289\nConnection to 172.18.0.6 31289 port [tcp/*] succeeded!\n" Apr 22 13:57:25.607: INFO: stdout: "externalname-service-x7zcm" Apr 22 13:57:25.607: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-748 exec execpodxgkxd -- /bin/sh -x -c echo hostName | nc -v -t -w 2 172.18.0.4 31289' Apr 22 13:57:25.749: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 172.18.0.4 31289\nConnection to 172.18.0.4 31289 port [tcp/*] succeeded!\n" Apr 22 13:57:25.749: INFO: stdout: "externalname-service-x7zcm" Apr 22 13:57:25.749: INFO: Cleaning up the ExternalName to NodePort test service [AfterEach] [sig-network] Services /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 22 13:57:25.774: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "services-748" for this suite. 
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:753
•
------------------------------
{"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","total":-1,"completed":62,"skipped":1480,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]"]}
------------------------------
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 22 13:56:43.303: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Apr 22 13:56:43.957: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Apr 22 13:56:46.981: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should mutate pod and apply defaults after mutation [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Registering the mutating pod webhook via the AdmissionRegistration API
Apr 22 13:56:57.000: INFO: Waiting for webhook configuration to be ready...
Apr 22 13:57:07.112: INFO: Waiting for webhook configuration to be ready...
Apr 22 13:57:17.216: INFO: Waiting for webhook configuration to be ready...
Apr 22 13:57:27.310: INFO: Waiting for webhook configuration to be ready...
Apr 22 13:57:37.321: INFO: Waiting for webhook configuration to be ready...
Apr 22 13:57:37.321: FAIL: waiting for webhook configuration to be ready
Unexpected error:
    <*errors.withStack | 0xc001c88c60> — see summary below; reported error:
    <*errors.errorString | 0xc00033c2a0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred

Full Stack Trace
k8s.io/kubernetes/test/e2e/apimachinery.registerMutatingWebhookForPod(0xc0007b2580, {0xc00522f380, 0xc}, 0xc00403a8c0, 0x0)
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:1033 +0x745
k8s.io/kubernetes/test/e2e/apimachinery.glob..func27.9()
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:262 +0x45
k8s.io/kubernetes/test/e2e.RunE2ETests(0x24e2677)
    _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:133 +0x697
k8s.io/kubernetes/test/e2e.TestE2E(0x2456919)
    _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:136 +0x19
testing.tRunner(0xc0000b5860, 0x73a1f18)
    /usr/local/go/src/testing/testing.go:1259 +0x102
created by testing.(*T).Run
    /usr/local/go/src/testing/testing.go:1306 +0x35a
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 22 13:57:37.322: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-9044" for this suite.
STEP: Destroying namespace "webhook-9044-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
• Failure [54.068 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate pod and apply defaults after mutation [Conformance] [It]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633

  Apr 22 13:57:37.321: waiting for webhook configuration to be ready
  Unexpected error:
      <*errors.errorString | 0xc00033c2a0>: {
          s: "timed out waiting for the condition",
      }
      timed out waiting for the condition
  occurred

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:1033
------------------------------
[BeforeEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 22 13:57:25.806: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should function for intra-pod communication: udp [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Performing setup for networking test in namespace pod-network-test-1752
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Apr 22 13:57:25.826: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Apr 22 13:57:25.945: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Apr 22 13:57:27.950: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 22 13:57:29.949: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 22 13:57:31.950: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 22 13:57:33.949: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 22 13:57:35.949: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 22 13:57:37.950: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 22 13:57:39.949: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 22 13:57:41.949: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 22 13:57:43.948: INFO: The status of Pod netserver-0 is Running (Ready = false)
Apr 22 13:57:45.950: INFO: The status of Pod netserver-0 is Running (Ready = true)
Apr 22 13:57:45.957: INFO: The status of Pod netserver-1 is Running (Ready = true)
Apr 22 13:57:45.962: INFO: The status of Pod netserver-2 is Running (Ready = false)
Apr 22 13:57:47.966: INFO: The status of Pod netserver-2 is Running (Ready = true)
Apr 22 13:57:47.972: INFO: The status of Pod netserver-3 is Running (Ready = true)
STEP: Creating test pods
Apr 22 13:57:49.989: INFO: Setting MaxTries for pod polling to 46 for networking test based on endpoint count 4
Apr 22 13:57:49.989: INFO: Breadth first check of 192.168.2.106 on host 172.18.0.7...
Apr 22 13:57:49.992: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://192.168.2.107:9080/dial?request=hostname&protocol=udp&host=192.168.2.106&port=8081&tries=1'] Namespace:pod-network-test-1752 PodName:test-container-pod ContainerName:webserver Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Apr 22 13:57:49.992: INFO: >>> kubeConfig: /tmp/kubeconfig
Apr 22 13:57:49.993: INFO: ExecWithOptions: Clientset creation
Apr 22 13:57:49.993: INFO: ExecWithOptions: execute(POST https://172.18.0.3:6443/api/v1/namespaces/pod-network-test-1752/pods/test-container-pod/exec?command=%2Fbin%2Fsh&command=-c&command=curl+-g+-q+-s+%27http%3A%2F%2F192.168.2.107%3A9080%2Fdial%3Frequest%3Dhostname%26protocol%3Dudp%26host%3D192.168.2.106%26port%3D8081%26tries%3D1%27&container=webserver&container=webserver&stderr=true&stdout=true %!s(MISSING))
Apr 22 13:57:50.071: INFO: Waiting for responses: map[]
Apr 22 13:57:50.071: INFO: reached 192.168.2.106 after 0/1 tries
Apr 22 13:57:50.071: INFO: Breadth first check of 192.168.0.54 on host 172.18.0.4...
Apr 22 13:57:50.074: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://192.168.2.107:9080/dial?request=hostname&protocol=udp&host=192.168.0.54&port=8081&tries=1'] Namespace:pod-network-test-1752 PodName:test-container-pod ContainerName:webserver Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Apr 22 13:57:50.074: INFO: >>> kubeConfig: /tmp/kubeconfig
Apr 22 13:57:50.075: INFO: ExecWithOptions: Clientset creation
Apr 22 13:57:50.075: INFO: ExecWithOptions: execute(POST https://172.18.0.3:6443/api/v1/namespaces/pod-network-test-1752/pods/test-container-pod/exec?command=%2Fbin%2Fsh&command=-c&command=curl+-g+-q+-s+%27http%3A%2F%2F192.168.2.107%3A9080%2Fdial%3Frequest%3Dhostname%26protocol%3Dudp%26host%3D192.168.0.54%26port%3D8081%26tries%3D1%27&container=webserver&container=webserver&stderr=true&stdout=true %!s(MISSING))
Apr 22 13:57:50.163: INFO: Waiting for responses: map[]
Apr 22 13:57:50.163: INFO: reached 192.168.0.54 after 0/1 tries
Apr 22 13:57:50.163: INFO: Breadth first check of 192.168.3.61 on host 172.18.0.6...
Apr 22 13:57:50.166: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://192.168.2.107:9080/dial?request=hostname&protocol=udp&host=192.168.3.61&port=8081&tries=1'] Namespace:pod-network-test-1752 PodName:test-container-pod ContainerName:webserver Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Apr 22 13:57:50.166: INFO: >>> kubeConfig: /tmp/kubeconfig
Apr 22 13:57:50.167: INFO: ExecWithOptions: Clientset creation
Apr 22 13:57:50.167: INFO: ExecWithOptions: execute(POST https://172.18.0.3:6443/api/v1/namespaces/pod-network-test-1752/pods/test-container-pod/exec?command=%2Fbin%2Fsh&command=-c&command=curl+-g+-q+-s+%27http%3A%2F%2F192.168.2.107%3A9080%2Fdial%3Frequest%3Dhostname%26protocol%3Dudp%26host%3D192.168.3.61%26port%3D8081%26tries%3D1%27&container=webserver&container=webserver&stderr=true&stdout=true %!s(MISSING))
Apr 22 13:57:50.254: INFO: Waiting for responses: map[]
Apr 22 13:57:50.254: INFO: reached 192.168.3.61 after 0/1 tries
Apr 22 13:57:50.254: INFO: Breadth first check of 192.168.4.55 on host 172.18.0.5...
Apr 22 13:57:50.259: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://192.168.2.107:9080/dial?request=hostname&protocol=udp&host=192.168.4.55&port=8081&tries=1'] Namespace:pod-network-test-1752 PodName:test-container-pod ContainerName:webserver Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false Quiet:false}
Apr 22 13:57:50.259: INFO: >>> kubeConfig: /tmp/kubeconfig
Apr 22 13:57:50.260: INFO: ExecWithOptions: Clientset creation
Apr 22 13:57:50.260: INFO: ExecWithOptions: execute(POST https://172.18.0.3:6443/api/v1/namespaces/pod-network-test-1752/pods/test-container-pod/exec?command=%2Fbin%2Fsh&command=-c&command=curl+-g+-q+-s+%27http%3A%2F%2F192.168.2.107%3A9080%2Fdial%3Frequest%3Dhostname%26protocol%3Dudp%26host%3D192.168.4.55%26port%3D8081%26tries%3D1%27&container=webserver&container=webserver&stderr=true&stdout=true %!s(MISSING))
Apr 22 13:57:50.365: INFO: Waiting for responses: map[]
Apr 22 13:57:50.365: INFO: reached 192.168.4.55 after 0/1 tries
Apr 22 13:57:50.365: INFO: Going to retry 0 out of 4 pods....
[AfterEach] [sig-network] Networking
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 22 13:57:50.365: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-1752" for this suite.
•
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance]","total":-1,"completed":63,"skipped":1486,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]"]}
------------------------------
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 22 13:57:50.384: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating a pod to test emptydir 0644 on node default medium
Apr 22 13:57:50.417: INFO: Waiting up to 5m0s for pod "pod-fdafc560-e010-4e54-b86d-9658f1cf63d0" in namespace "emptydir-5141" to be "Succeeded or Failed"
Apr 22 13:57:50.420: INFO: Pod "pod-fdafc560-e010-4e54-b86d-9658f1cf63d0": Phase="Pending", Reason="", readiness=false. Elapsed: 3.061924ms
Apr 22 13:57:52.424: INFO: Pod "pod-fdafc560-e010-4e54-b86d-9658f1cf63d0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.0071301s
Apr 22 13:57:54.428: INFO: Pod "pod-fdafc560-e010-4e54-b86d-9658f1cf63d0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011113165s
STEP: Saw pod success
Apr 22 13:57:54.428: INFO: Pod "pod-fdafc560-e010-4e54-b86d-9658f1cf63d0" satisfied condition "Succeeded or Failed"
Apr 22 13:57:54.431: INFO: Trying to get logs from node k8s-upgrade-and-conformance-7gf7we-md-0-7868f4c856-k4p2k pod pod-fdafc560-e010-4e54-b86d-9658f1cf63d0 container test-container: <nil>
STEP: delete the pod
Apr 22 13:57:54.445: INFO: Waiting for pod pod-fdafc560-e010-4e54-b86d-9658f1cf63d0 to disappear
Apr 22 13:57:54.447: INFO: Pod pod-fdafc560-e010-4e54-b86d-9658f1cf63d0 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 22 13:57:54.447: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-5141" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":64,"skipped":1492,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]"]}
------------------------------
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 22 13:57:54.533: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating a pod to test downward API volume plugin
Apr 22 13:57:54.556: INFO: Waiting up to 5m0s for pod "downwardapi-volume-226125c4-ce7f-4f6b-bd26-713e1d8619e4" in namespace "projected-1414" to be "Succeeded or Failed"
Apr 22 13:57:54.560: INFO: Pod "downwardapi-volume-226125c4-ce7f-4f6b-bd26-713e1d8619e4": Phase="Pending", Reason="", readiness=false. Elapsed: 3.414436ms
Apr 22 13:57:56.564: INFO: Pod "downwardapi-volume-226125c4-ce7f-4f6b-bd26-713e1d8619e4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007213392s
Apr 22 13:57:58.568: INFO: Pod "downwardapi-volume-226125c4-ce7f-4f6b-bd26-713e1d8619e4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011387247s
STEP: Saw pod success
Apr 22 13:57:58.568: INFO: Pod "downwardapi-volume-226125c4-ce7f-4f6b-bd26-713e1d8619e4" satisfied condition "Succeeded or Failed"
Apr 22 13:57:58.571: INFO: Trying to get logs from node k8s-upgrade-and-conformance-7gf7we-worker-wwgoid pod downwardapi-volume-226125c4-ce7f-4f6b-bd26-713e1d8619e4 container client-container: <nil>
STEP: delete the pod
Apr 22 13:57:58.595: INFO: Waiting for pod downwardapi-volume-226125c4-ce7f-4f6b-bd26-713e1d8619e4 to disappear
Apr 22 13:57:58.598: INFO: Pod downwardapi-volume-226125c4-ce7f-4f6b-bd26-713e1d8619e4 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 22 13:57:58.598: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1414" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance]","total":-1,"completed":65,"skipped":1550,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]"]}
------------------------------
[BeforeEach] [sig-node] Docker Containers
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 22 13:57:58.614: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating a pod to test override all
Apr 22 13:57:58.640: INFO: Waiting up to 5m0s for pod "client-containers-fdcb348a-d335-40c9-8415-c54f9733a4e4" in namespace "containers-570" to be "Succeeded or Failed"
Apr 22 13:57:58.642: INFO: Pod "client-containers-fdcb348a-d335-40c9-8415-c54f9733a4e4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.067216ms
Apr 22 13:58:00.646: INFO: Pod "client-containers-fdcb348a-d335-40c9-8415-c54f9733a4e4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005363483s
Apr 22 13:58:02.650: INFO: Pod "client-containers-fdcb348a-d335-40c9-8415-c54f9733a4e4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.009887525s
STEP: Saw pod success
Apr 22 13:58:02.650: INFO: Pod "client-containers-fdcb348a-d335-40c9-8415-c54f9733a4e4" satisfied condition "Succeeded or Failed"
Apr 22 13:58:02.653: INFO: Trying to get logs from node k8s-upgrade-and-conformance-7gf7we-worker-wwgoid pod client-containers-fdcb348a-d335-40c9-8415-c54f9733a4e4 container agnhost-container: <nil>
STEP: delete the pod
Apr 22 13:58:02.667: INFO: Waiting for pod client-containers-fdcb348a-d335-40c9-8415-c54f9733a4e4 to disappear
Apr 22 13:58:02.670: INFO: Pod client-containers-fdcb348a-d335-40c9-8415-c54f9733a4e4 no longer exists
[AfterEach] [sig-node] Docker Containers
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 22 13:58:02.670: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-570" for this suite.
•
------------------------------
{"msg":"PASSED [sig-node] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance]","total":-1,"completed":66,"skipped":1554,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]"]}
------------------------------
[BeforeEach] [sig-storage] Projected configMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 22 13:58:02.739: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating configMap with name projected-configmap-test-volume-map-f76a7daa-83d0-4ef6-ab14-b7ece43be4e9
STEP: Creating a pod to test consume configMaps
Apr 22 13:58:02.773: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-66f69030-37d6-42c7-9034-5d1193af19d5" in namespace "projected-1257" to be "Succeeded or Failed"
Apr 22 13:58:02.779: INFO: Pod "pod-projected-configmaps-66f69030-37d6-42c7-9034-5d1193af19d5": Phase="Pending", Reason="", readiness=false. Elapsed: 5.689496ms
Apr 22 13:58:04.784: INFO: Pod "pod-projected-configmaps-66f69030-37d6-42c7-9034-5d1193af19d5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010099803s
Apr 22 13:58:06.788: INFO: Pod "pod-projected-configmaps-66f69030-37d6-42c7-9034-5d1193af19d5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.0138934s
STEP: Saw pod success
Apr 22 13:58:06.788: INFO: Pod "pod-projected-configmaps-66f69030-37d6-42c7-9034-5d1193af19d5" satisfied condition "Succeeded or Failed"
Apr 22 13:58:06.791: INFO: Trying to get logs from node k8s-upgrade-and-conformance-7gf7we-worker-wwgoid pod pod-projected-configmaps-66f69030-37d6-42c7-9034-5d1193af19d5 container agnhost-container: <nil>
STEP: delete the pod
Apr 22 13:58:06.805: INFO: Waiting for pod pod-projected-configmaps-66f69030-37d6-42c7-9034-5d1193af19d5 to disappear
Apr 22 13:58:06.807: INFO: Pod pod-projected-configmaps-66f69030-37d6-42c7-9034-5d1193af19d5 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 22 13:58:06.807: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1257" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":67,"skipped":1594,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]"]}
------------------------------
[BeforeEach] [sig-storage] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 22 13:58:06.855: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating configMap with name cm-test-opt-del-bc643b3d-f929-4bbb-8812-285560905eda
STEP: Creating configMap with name cm-test-opt-upd-ef65ee93-623f-480c-8537-ea19469bdc9a
STEP: Creating the pod
Apr 22 13:58:06.895: INFO: The status of Pod pod-configmaps-67075d6b-c898-4821-a86d-7f2b52983b43 is Pending, waiting for it to be Running (with Ready = true)
Apr 22 13:58:08.900: INFO: The status of Pod pod-configmaps-67075d6b-c898-4821-a86d-7f2b52983b43 is Running (Ready = true)
STEP: Deleting configmap cm-test-opt-del-bc643b3d-f929-4bbb-8812-285560905eda
STEP: Updating configmap cm-test-opt-upd-ef65ee93-623f-480c-8537-ea19469bdc9a
STEP: Creating configMap with name cm-test-opt-create-9ac13525-c7c8-4b43-91dd-0f65b082105b
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 22 13:58:10.954: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-168" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":-1,"completed":68,"skipped":1626,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]"]}
------------------------------
{"msg":"FAILED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","total":-1,"completed":44,"skipped":934,"failed":1,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]"]}
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 22 13:57:37.373: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Apr 22 13:57:37.963: INFO: new replicaset for deployment "sample-webhook-deployment" is yet to be created
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Apr 22 13:57:40.984: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should mutate pod and apply defaults after mutation [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Registering the mutating pod webhook via the AdmissionRegistration API
Apr 22 13:57:51.005: INFO: Waiting for webhook configuration to be ready...
Apr 22 13:58:01.116: INFO: Waiting for webhook configuration to be ready...
Apr 22 13:58:11.225: INFO: Waiting for webhook configuration to be ready...
Apr 22 13:58:21.315: INFO: Waiting for webhook configuration to be ready...
Apr 22 13:58:31.326: INFO: Waiting for webhook configuration to be ready...
Apr 22 13:58:31.326: FAIL: waiting for webhook configuration to be ready
Unexpected error:
    <*errors.errorString | 0xc00033c2a0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred

Full Stack Trace
k8s.io/kubernetes/test/e2e/apimachinery.registerMutatingWebhookForPod(0xc0007b2580, {0xc0045ae7e0, 0xc}, 0xc004678460, 0x0)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:1033 +0x745
k8s.io/kubernetes/test/e2e/apimachinery.glob..func27.9()
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:262 +0x45
k8s.io/kubernetes/test/e2e.RunE2ETests(0x24e2677)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:133 +0x697
k8s.io/kubernetes/test/e2e.TestE2E(0x2456919)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:136 +0x19
testing.tRunner(0xc0000b5860, 0x73a1f18)
	/usr/local/go/src/testing/testing.go:1259 +0x102
created by testing.(*T).Run
	/usr/local/go/src/testing/testing.go:1306 +0x35a
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 22 13:58:31.327: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-9013" for this suite.
STEP: Destroying namespace "webhook-9013-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
• Failure [54.015 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate pod and apply defaults after mutation [Conformance] [It]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633

  Apr 22 13:58:31.326: waiting for webhook configuration to be ready
  Unexpected error:
      <*errors.errorString | 0xc00033c2a0>: {
          s: "timed out waiting for the condition",
      }
      timed out waiting for the condition
  occurred

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:1033
------------------------------
{"msg":"FAILED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","total":-1,"completed":44,"skipped":934,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]"]}
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 22 13:58:31.390: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Apr 22 13:58:31.747: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Apr 22 13:58:34.766: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should mutate pod and apply defaults after mutation [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Registering the mutating pod webhook via the AdmissionRegistration API
Apr 22 13:58:44.784: INFO: Waiting for webhook configuration to be ready...
Apr 22 13:58:54.894: INFO: Waiting for webhook configuration to be ready...
Apr 22 13:59:04.998: INFO: Waiting for webhook configuration to be ready...
Apr 22 13:59:15.095: INFO: Waiting for webhook configuration to be ready...
Apr 22 13:59:25.105: INFO: Waiting for webhook configuration to be ready...
Apr 22 13:59:25.105: FAIL: waiting for webhook configuration to be ready
Unexpected error:
    <*errors.errorString | 0xc00033c2a0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred

Full Stack Trace
k8s.io/kubernetes/test/e2e/apimachinery.registerMutatingWebhookForPod(0xc0007b2580, {0xc003c4d810, 0xb}, 0xc00454a870, 0x0)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:1033 +0x745
k8s.io/kubernetes/test/e2e/apimachinery.glob..func27.9()
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:262 +0x45
k8s.io/kubernetes/test/e2e.RunE2ETests(0x24e2677)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:133 +0x697
k8s.io/kubernetes/test/e2e.TestE2E(0x2456919)
	_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:136 +0x19
testing.tRunner(0xc0000b5860, 0x73a1f18)
	/usr/local/go/src/testing/testing.go:1259 +0x102
created by testing.(*T).Run
	/usr/local/go/src/testing/testing.go:1306 +0x35a
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 22 13:59:25.106: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-334" for this suite.
STEP: Destroying namespace "webhook-334-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
• Failure [53.787 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate pod and apply defaults after mutation [Conformance] [It]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633

  Apr 22 13:59:25.105: waiting for webhook configuration to be ready
  Unexpected error:
      <*errors.errorString | 0xc00033c2a0>: {
          s: "timed out waiting for the condition",
      }
      timed out waiting for the condition
  occurred

  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:1033
------------------------------
{"msg":"FAILED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","total":-1,"completed":44,"skipped":934,"failed":3,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]"]}
------------------------------
[BeforeEach] [sig-storage] Subpath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 22 13:59:25.239: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with downward pod [Excluded:WindowsDocker] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating pod pod-subpath-test-downwardapi-l9d7
STEP: Creating a pod to test atomic-volume-subpath
Apr 22 13:59:25.294: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-l9d7" in namespace "subpath-465" to be "Succeeded or Failed"
Apr 22 13:59:25.300: INFO: Pod "pod-subpath-test-downwardapi-l9d7": Phase="Pending", Reason="", readiness=false. Elapsed: 5.713285ms
Apr 22 13:59:27.307: INFO: Pod "pod-subpath-test-downwardapi-l9d7": Phase="Running", Reason="", readiness=true. Elapsed: 2.012577222s
Apr 22 13:59:29.312: INFO: Pod "pod-subpath-test-downwardapi-l9d7": Phase="Running", Reason="", readiness=true. Elapsed: 4.017259823s
Apr 22 13:59:31.316: INFO: Pod "pod-subpath-test-downwardapi-l9d7": Phase="Running", Reason="", readiness=true. Elapsed: 6.021605524s
Apr 22 13:59:33.321: INFO: Pod "pod-subpath-test-downwardapi-l9d7": Phase="Running", Reason="", readiness=true. Elapsed: 8.026381603s
Apr 22 13:59:35.325: INFO: Pod "pod-subpath-test-downwardapi-l9d7": Phase="Running", Reason="", readiness=true. Elapsed: 10.030324s
Apr 22 13:59:37.331: INFO: Pod "pod-subpath-test-downwardapi-l9d7": Phase="Running", Reason="", readiness=true. Elapsed: 12.036384277s
Apr 22 13:59:39.337: INFO: Pod "pod-subpath-test-downwardapi-l9d7": Phase="Running", Reason="", readiness=true. Elapsed: 14.042297777s
Apr 22 13:59:41.341: INFO: Pod "pod-subpath-test-downwardapi-l9d7": Phase="Running", Reason="", readiness=true. Elapsed: 16.046802864s
Apr 22 13:59:43.345: INFO: Pod "pod-subpath-test-downwardapi-l9d7": Phase="Running", Reason="", readiness=true. Elapsed: 18.050394127s
Apr 22 13:59:45.349: INFO: Pod "pod-subpath-test-downwardapi-l9d7": Phase="Running", Reason="", readiness=true. Elapsed: 20.054918687s
Apr 22 13:59:47.355: INFO: Pod "pod-subpath-test-downwardapi-l9d7": Phase="Running", Reason="", readiness=false. Elapsed: 22.060174961s
Apr 22 13:59:49.359: INFO: Pod "pod-subpath-test-downwardapi-l9d7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.064764252s
STEP: Saw pod success
Apr 22 13:59:49.359: INFO: Pod "pod-subpath-test-downwardapi-l9d7" satisfied condition "Succeeded or Failed"
Apr 22 13:59:49.362: INFO: Trying to get logs from node k8s-upgrade-and-conformance-7gf7we-md-0-7868f4c856-k4p2k pod pod-subpath-test-downwardapi-l9d7 container test-container-subpath-downwardapi-l9d7: <nil>
STEP: delete the pod
Apr 22 13:59:49.383: INFO: Waiting for pod pod-subpath-test-downwardapi-l9d7 to disappear
Apr 22 13:59:49.386: INFO: Pod pod-subpath-test-downwardapi-l9d7 no longer exists
STEP: Deleting pod pod-subpath-test-downwardapi-l9d7
Apr 22 13:59:49.386: INFO: Deleting pod "pod-subpath-test-downwardapi-l9d7" in namespace "subpath-465"
[AfterEach] [sig-storage] Subpath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 22 13:59:49.389: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-465" for this suite.
�[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [Excluded:WindowsDocker] [Conformance]","total":-1,"completed":45,"skipped":966,"failed":3,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]"]} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m {"msg":"FAILED [sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance]","total":-1,"completed":6,"skipped":149,"failed":2,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","[sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance]"]} [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Apr 22 13:54:49.180: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename kubectl �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace �[1mSTEP�[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:244 [BeforeEach] Update Demo /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:296 [It] should scale a replication controller [Conformance] 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 �[1mSTEP�[0m: creating a replication controller Apr 22 13:54:49.208: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-8453 create -f -' Apr 22 13:54:50.079: INFO: stderr: "" Apr 22 13:54:50.079: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" �[1mSTEP�[0m: waiting for all containers in name=update-demo pods to come up. Apr 22 13:54:50.079: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-8453 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Apr 22 13:54:50.193: INFO: stderr: "" Apr 22 13:54:50.193: INFO: stdout: "update-demo-nautilus-2cgh7 update-demo-nautilus-lrqlb " Apr 22 13:54:50.193: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-8453 get pods update-demo-nautilus-2cgh7 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Apr 22 13:54:50.283: INFO: stderr: "" Apr 22 13:54:50.283: INFO: stdout: "" Apr 22 13:54:50.283: INFO: update-demo-nautilus-2cgh7 is created but not running Apr 22 13:54:55.290: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-8453 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Apr 22 13:54:55.376: INFO: stderr: "" Apr 22 13:54:55.376: INFO: stdout: "update-demo-nautilus-2cgh7 update-demo-nautilus-lrqlb " Apr 22 13:54:55.376: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-8453 get pods update-demo-nautilus-2cgh7 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}}' Apr 22 13:54:55.464: INFO: stderr: "" Apr 22 13:54:55.465: INFO: stdout: "true" Apr 22 13:54:55.465: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-8453 get pods update-demo-nautilus-2cgh7 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' Apr 22 13:54:55.557: INFO: stderr: "" Apr 22 13:54:55.557: INFO: stdout: "k8s.gcr.io/e2e-test-images/nautilus:1.5" Apr 22 13:54:55.557: INFO: validating pod update-demo-nautilus-2cgh7 Apr 22 13:54:55.562: INFO: got data: { "image": "nautilus.jpg" } Apr 22 13:54:55.562: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Apr 22 13:54:55.562: INFO: update-demo-nautilus-2cgh7 is verified up and running Apr 22 13:54:55.562: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-8453 get pods update-demo-nautilus-lrqlb -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Apr 22 13:54:55.648: INFO: stderr: "" Apr 22 13:54:55.649: INFO: stdout: "true" Apr 22 13:54:55.649: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-8453 get pods update-demo-nautilus-lrqlb -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' Apr 22 13:54:55.732: INFO: stderr: "" Apr 22 13:54:55.732: INFO: stdout: "k8s.gcr.io/e2e-test-images/nautilus:1.5" Apr 22 13:54:55.732: INFO: validating pod update-demo-nautilus-lrqlb Apr 22 13:54:55.748: INFO: got data: { "image": "nautilus.jpg" } Apr 22 13:54:55.748: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . 
Apr 22 13:54:55.748: INFO: update-demo-nautilus-lrqlb is verified up and running �[1mSTEP�[0m: scaling down the replication controller Apr 22 13:54:55.750: INFO: scanned /root for discovery docs: <nil> Apr 22 13:54:55.750: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-8453 scale rc update-demo-nautilus --replicas=1 --timeout=5m' Apr 22 13:54:56.867: INFO: stderr: "" Apr 22 13:54:56.867: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" �[1mSTEP�[0m: waiting for all containers in name=update-demo pods to come up. Apr 22 13:54:56.867: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-8453 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Apr 22 13:54:56.943: INFO: stderr: "" Apr 22 13:54:56.943: INFO: stdout: "update-demo-nautilus-2cgh7 update-demo-nautilus-lrqlb " �[1mSTEP�[0m: Replicas for name=update-demo: expected=1 actual=2 Apr 22 13:55:01.944: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-8453 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Apr 22 13:55:02.026: INFO: stderr: "" Apr 22 13:55:02.026: INFO: stdout: "update-demo-nautilus-2cgh7 " Apr 22 13:55:02.026: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-8453 get pods update-demo-nautilus-2cgh7 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Apr 22 13:55:02.112: INFO: stderr: "" Apr 22 13:55:02.113: INFO: stdout: "true" Apr 22 13:55:02.113: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-8453 get pods update-demo-nautilus-2cgh7 -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' Apr 22 13:55:02.197: INFO: stderr: "" Apr 22 13:55:02.197: INFO: stdout: "k8s.gcr.io/e2e-test-images/nautilus:1.5" Apr 22 13:55:02.197: INFO: validating pod update-demo-nautilus-2cgh7 Apr 22 13:55:02.201: INFO: got data: { "image": "nautilus.jpg" } Apr 22 13:55:02.201: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Apr 22 13:55:02.201: INFO: update-demo-nautilus-2cgh7 is verified up and running STEP: scaling up the replication controller Apr 22 13:55:02.202: INFO: scanned /root for discovery docs: <nil> Apr 22 13:55:02.202: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-8453 scale rc update-demo-nautilus --replicas=2 --timeout=5m' Apr 22 13:55:03.293: INFO: stderr: "" Apr 22 13:55:03.293: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. Apr 22 13:55:03.293: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-8453 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Apr 22 13:55:03.370: INFO: stderr: "" Apr 22 13:55:03.370: INFO: stdout: "update-demo-nautilus-2cgh7 update-demo-nautilus-5wnt6 " Apr 22 13:55:03.370: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-8453 get pods update-demo-nautilus-2cgh7 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Apr 22 13:55:03.444: INFO: stderr: "" Apr 22 13:55:03.444: INFO: stdout: "true" Apr 22 13:55:03.444: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-8453 get pods update-demo-nautilus-2cgh7 -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' Apr 22 13:55:03.516: INFO: stderr: "" Apr 22 13:55:03.516: INFO: stdout: "k8s.gcr.io/e2e-test-images/nautilus:1.5" Apr 22 13:55:03.517: INFO: validating pod update-demo-nautilus-2cgh7 Apr 22 13:55:03.521: INFO: got data: { "image": "nautilus.jpg" } Apr 22 13:55:03.521: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Apr 22 13:55:03.521: INFO: update-demo-nautilus-2cgh7 is verified up and running Apr 22 13:55:03.522: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-8453 get pods update-demo-nautilus-5wnt6 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Apr 22 13:55:03.592: INFO: stderr: "" Apr 22 13:55:03.592: INFO: stdout: "true" Apr 22 13:55:03.593: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-8453 get pods update-demo-nautilus-5wnt6 -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' Apr 22 13:55:03.662: INFO: stderr: "" Apr 22 13:55:03.662: INFO: stdout: "k8s.gcr.io/e2e-test-images/nautilus:1.5" Apr 22 13:55:03.662: INFO: validating pod update-demo-nautilus-5wnt6 Apr 22 13:58:37.353: INFO: update-demo-nautilus-5wnt6 is running right image but validator function failed: the server is currently unable to handle the request (get pods update-demo-nautilus-5wnt6) Apr 22 13:58:42.354: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-8453 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Apr 22 13:58:42.429: INFO: stderr: "" Apr 22 13:58:42.429: INFO: stdout: "update-demo-nautilus-2cgh7 update-demo-nautilus-5wnt6 " Apr 22 13:58:42.429: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-8453 get pods update-demo-nautilus-2cgh7 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Apr 22 13:58:42.499: INFO: stderr: "" Apr 22 13:58:42.499: INFO: stdout: "true" Apr 22 13:58:42.499: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-8453 get pods update-demo-nautilus-2cgh7 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' Apr 22 13:58:42.571: INFO: stderr: "" Apr 22 13:58:42.572: INFO: stdout: "k8s.gcr.io/e2e-test-images/nautilus:1.5" Apr 22 13:58:42.572: INFO: validating pod update-demo-nautilus-2cgh7 Apr 22 13:58:42.576: INFO: got data: { "image": "nautilus.jpg" } Apr 22 13:58:42.576: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . 
Apr 22 13:58:42.576: INFO: update-demo-nautilus-2cgh7 is verified up and running Apr 22 13:58:42.576: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-8453 get pods update-demo-nautilus-5wnt6 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Apr 22 13:58:42.651: INFO: stderr: "" Apr 22 13:58:42.651: INFO: stdout: "true" Apr 22 13:58:42.651: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-8453 get pods update-demo-nautilus-5wnt6 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' Apr 22 13:58:42.717: INFO: stderr: "" Apr 22 13:58:42.717: INFO: stdout: "k8s.gcr.io/e2e-test-images/nautilus:1.5" Apr 22 13:58:42.717: INFO: validating pod update-demo-nautilus-5wnt6 Apr 22 14:02:16.489: INFO: update-demo-nautilus-5wnt6 is running right image but validator function failed: the server is currently unable to handle the request (get pods update-demo-nautilus-5wnt6) Apr 22 14:02:21.490: FAIL: Timed out after 300 seconds waiting for name=update-demo pods to reach valid state Full Stack Trace k8s.io/kubernetes/test/e2e/kubectl.glob..func1.6.3() /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:335 +0x505 k8s.io/kubernetes/test/e2e.RunE2ETests(0x24e2677) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:133 +0x697 k8s.io/kubernetes/test/e2e.TestE2E(0x0) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:136 +0x19 testing.tRunner(0xc0004c4d00, 0x73a1f18) /usr/local/go/src/testing/testing.go:1259 +0x102 created by testing.(*T).Run /usr/local/go/src/testing/testing.go:1306 +0x35a STEP: using delete to clean up resources Apr 22 14:02:21.490: INFO: Running '/usr/local/bin/kubectl 
--kubeconfig=/tmp/kubeconfig --namespace=kubectl-8453 delete --grace-period=0 --force -f -' Apr 22 14:02:21.573: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Apr 22 14:02:21.573: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" Apr 22 14:02:21.573: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-8453 get rc,svc -l name=update-demo --no-headers' Apr 22 14:02:21.683: INFO: stderr: "No resources found in kubectl-8453 namespace.\n" Apr 22 14:02:21.683: INFO: stdout: "" Apr 22 14:02:21.683: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-8453 get pods -l name=update-demo -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Apr 22 14:02:21.788: INFO: stderr: "" Apr 22 14:02:21.788: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 22 14:02:21.788: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-8453" for this suite. 
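The `--template` checks repeated throughout this log use kubectl's Go-template output, which layers an `exists` helper on top of the standard library's `text/template`. The sketch below shows how the running-state check evaluates against a pod's JSON; the `exists` function here is a hypothetical re-implementation for illustration, not kubectl's actual code.

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"text/template"
)

// exists walks a chain of keys through decoded JSON maps and reports whether
// the final value is present and non-nil (assumed behavior of kubectl's helper).
func exists(obj interface{}, keys ...string) bool {
	for _, k := range keys {
		m, ok := obj.(map[string]interface{})
		if !ok {
			return false
		}
		if obj, ok = m[k]; !ok {
			return false
		}
	}
	return obj != nil
}

// renderCheck evaluates the same template the e2e test passes to kubectl
// against a pod's JSON: it prints "true" only when a container named
// "update-demo" reports a running state.
func renderCheck(podJSON string) string {
	const check = `{{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}`
	var pod interface{}
	if err := json.Unmarshal([]byte(podJSON), &pod); err != nil {
		panic(err)
	}
	t := template.Must(template.New("check").Funcs(template.FuncMap{"exists": exists}).Parse(check))
	var buf bytes.Buffer
	if err := t.Execute(&buf, pod); err != nil {
		panic(err)
	}
	return buf.String()
}

func main() {
	running := `{"status":{"containerStatuses":[{"name":"update-demo","state":{"running":{}}}]}}`
	pending := `{"status":{}}`
	fmt.Printf("running pod -> %q, pending pod -> %q\n", renderCheck(running), renderCheck(pending))
}
```

This is why the log alternates between `stdout: ""` (pod created but not yet running) and `stdout: "true"` for the same command.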
• Failure [452.619 seconds] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Update Demo /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:294 should scale a replication controller [Conformance] [It] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 Apr 22 14:02:21.490: Timed out after 300 seconds waiting for name=update-demo pods to reach valid state /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:335 ------------------------------ [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 22 13:56:19.491: INFO: >>> kubeConfig: /tmp/kubeconfig STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:244 [BeforeEach] Update Demo /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:296 [It] should create and stop a replication controller [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: creating a replication controller Apr 22 13:56:19.519: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-9085 create -f -' Apr 22 13:56:20.546: INFO: stderr: "" Apr 22 13:56:20.546: INFO: stdout: "replicationcontroller/update-demo-nautilus 
created\n" STEP: waiting for all containers in name=update-demo pods to come up. Apr 22 13:56:20.546: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-9085 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Apr 22 13:56:20.645: INFO: stderr: "" Apr 22 13:56:20.646: INFO: stdout: "update-demo-nautilus-5sljg update-demo-nautilus-hqtt2 " Apr 22 13:56:20.646: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-9085 get pods update-demo-nautilus-5sljg -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Apr 22 13:56:20.725: INFO: stderr: "" Apr 22 13:56:20.725: INFO: stdout: "" Apr 22 13:56:20.725: INFO: update-demo-nautilus-5sljg is created but not running Apr 22 13:56:25.726: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-9085 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Apr 22 13:56:25.799: INFO: stderr: "" Apr 22 13:56:25.799: INFO: stdout: "update-demo-nautilus-5sljg update-demo-nautilus-hqtt2 " Apr 22 13:56:25.799: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-9085 get pods update-demo-nautilus-5sljg -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Apr 22 13:56:25.871: INFO: stderr: "" Apr 22 13:56:25.871: INFO: stdout: "true" Apr 22 13:56:25.871: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-9085 get pods update-demo-nautilus-5sljg -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' Apr 22 13:56:25.942: INFO: stderr: "" Apr 22 13:56:25.942: INFO: stdout: "k8s.gcr.io/e2e-test-images/nautilus:1.5" Apr 22 13:56:25.942: INFO: validating pod update-demo-nautilus-5sljg Apr 22 13:59:59.273: INFO: update-demo-nautilus-5sljg is running right image but validator function failed: the server is currently unable to handle the request (get pods update-demo-nautilus-5sljg) Apr 22 14:00:04.274: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-9085 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Apr 22 14:00:04.345: INFO: stderr: "" Apr 22 14:00:04.345: INFO: stdout: "update-demo-nautilus-5sljg update-demo-nautilus-hqtt2 " Apr 22 14:00:04.345: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-9085 get pods update-demo-nautilus-5sljg -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Apr 22 14:00:04.418: INFO: stderr: "" Apr 22 14:00:04.418: INFO: stdout: "true" Apr 22 14:00:04.418: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-9085 get pods update-demo-nautilus-5sljg -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' Apr 22 14:00:04.485: INFO: stderr: "" Apr 22 14:00:04.485: INFO: stdout: "k8s.gcr.io/e2e-test-images/nautilus:1.5" Apr 22 14:00:04.485: INFO: validating pod update-demo-nautilus-5sljg Apr 22 14:03:38.409: INFO: update-demo-nautilus-5sljg is running right image but validator function failed: the server is currently unable to handle the request (get pods update-demo-nautilus-5sljg) Apr 22 14:03:43.410: FAIL: Timed out after 300 seconds waiting for name=update-demo pods to reach valid state Full Stack Trace k8s.io/kubernetes/test/e2e/kubectl.glob..func1.6.2() /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:314 +0x225 k8s.io/kubernetes/test/e2e.RunE2ETests(0x24e2677) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:133 +0x697 k8s.io/kubernetes/test/e2e.TestE2E(0x2456919) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:136 +0x19 testing.tRunner(0xc000ada680, 0x73a1f18) /usr/local/go/src/testing/testing.go:1259 +0x102 created by testing.(*T).Run /usr/local/go/src/testing/testing.go:1306 +0x35a STEP: using delete to clean up resources Apr 22 14:03:43.410: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-9085 delete --grace-period=0 --force -f -' Apr 22 14:03:43.483: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Apr 22 14:03:43.483: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" Apr 22 14:03:43.483: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-9085 get rc,svc -l name=update-demo --no-headers' Apr 22 14:03:43.580: INFO: stderr: "No resources found in kubectl-9085 namespace.\n" Apr 22 14:03:43.580: INFO: stdout: "" Apr 22 14:03:43.580: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-9085 get pods -l name=update-demo -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Apr 22 14:03:43.672: INFO: stderr: "" Apr 22 14:03:43.672: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 22 14:03:43.672: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-9085" for this suite. 
• Failure [444.191 seconds] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Update Demo /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:294 should create and stop a replication controller [Conformance] [It] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 Apr 22 14:03:43.410: Timed out after 300 seconds waiting for name=update-demo pods to reach valid state /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:314 ------------------------------ [BeforeEach] [sig-apps] CronJob /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 22 13:59:49.407: INFO: >>> kubeConfig: /tmp/kubeconfig STEP: Building a namespace api object, basename cronjob STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should not schedule new jobs when ForbidConcurrent [Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: Creating a ForbidConcurrent cronjob STEP: Ensuring a job is scheduled STEP: Ensuring exactly one is scheduled STEP: Ensuring exactly one running job exists by listing jobs explicitly STEP: Ensuring no more jobs are scheduled STEP: Removing cronjob [AfterEach] [sig-apps] CronJob /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 22 14:05:01.473: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace 
"cronjob-8915" for this suite. • [SLOW TEST:312.086 seconds] [sig-apps] CronJob /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should not schedule new jobs when ForbidConcurrent [Slow] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 ------------------------------ {"msg":"PASSED [sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","total":-1,"completed":46,"skipped":971,"failed":3,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]"]} SSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 22 14:05:01.518: INFO: >>> kubeConfig: /tmp/kubeconfig STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41 [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: 
Creating a pod to test downward API volume plugin Apr 22 14:05:01.555: INFO: Waiting up to 5m0s for pod "downwardapi-volume-3fac9609-7520-4096-9498-5fb8f9a40d59" in namespace "downward-api-5672" to be "Succeeded or Failed" Apr 22 14:05:01.571: INFO: Pod "downwardapi-volume-3fac9609-7520-4096-9498-5fb8f9a40d59": Phase="Pending", Reason="", readiness=false. Elapsed: 16.249579ms Apr 22 14:05:03.579: INFO: Pod "downwardapi-volume-3fac9609-7520-4096-9498-5fb8f9a40d59": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023830091s Apr 22 14:05:05.585: INFO: Pod "downwardapi-volume-3fac9609-7520-4096-9498-5fb8f9a40d59": Phase="Running", Reason="", readiness=false. Elapsed: 4.029654937s Apr 22 14:05:07.593: INFO: Pod "downwardapi-volume-3fac9609-7520-4096-9498-5fb8f9a40d59": Phase="Running", Reason="", readiness=false. Elapsed: 6.038351507s Apr 22 14:05:09.598: INFO: Pod "downwardapi-volume-3fac9609-7520-4096-9498-5fb8f9a40d59": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.042555814s STEP: Saw pod success Apr 22 14:05:09.598: INFO: Pod "downwardapi-volume-3fac9609-7520-4096-9498-5fb8f9a40d59" satisfied condition "Succeeded or Failed" Apr 22 14:05:09.601: INFO: Trying to get logs from node k8s-upgrade-and-conformance-7gf7we-worker-3u7awl pod downwardapi-volume-3fac9609-7520-4096-9498-5fb8f9a40d59 container client-container: <nil> STEP: delete the pod Apr 22 14:05:09.640: INFO: Waiting for pod downwardapi-volume-3fac9609-7520-4096-9498-5fb8f9a40d59 to disappear Apr 22 14:05:09.644: INFO: Pod downwardapi-volume-3fac9609-7520-4096-9498-5fb8f9a40d59 no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 22 14:05:09.644: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-5672" for this suite. 
• ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":-1,"completed":47,"skipped":979,"failed":3,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]"]} SSSSSSS ------------------------------ [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 22 14:05:09.669: INFO: >>> kubeConfig: /tmp/kubeconfig STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Apr 22 14:05:10.072: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Apr 22 14:05:12.090: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, 
Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.April, 22, 14, 5, 10, 0, time.Local), LastTransitionTime:time.Date(2022, time.April, 22, 14, 5, 10, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.April, 22, 14, 5, 10, 0, time.Local), LastTransitionTime:time.Date(2022, time.April, 22, 14, 5, 10, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78948c58f6\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 22 14:05:14.096: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.April, 22, 14, 5, 10, 0, time.Local), LastTransitionTime:time.Date(2022, time.April, 22, 14, 5, 10, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.April, 22, 14, 5, 10, 0, time.Local), LastTransitionTime:time.Date(2022, time.April, 22, 14, 5, 10, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78948c58f6\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 22 14:05:16.097: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.April, 22, 14, 5, 10, 0, time.Local), LastTransitionTime:time.Date(2022, time.April, 22, 14, 5, 10, 0, time.Local), 
Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.April, 22, 14, 5, 10, 0, time.Local), LastTransitionTime:time.Date(2022, time.April, 22, 14, 5, 10, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78948c58f6\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 22 14:05:18.095: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.April, 22, 14, 5, 10, 0, time.Local), LastTransitionTime:time.Date(2022, time.April, 22, 14, 5, 10, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.April, 22, 14, 5, 10, 0, time.Local), LastTransitionTime:time.Date(2022, time.April, 22, 14, 5, 10, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78948c58f6\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Apr 22 14:05:21.109: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate configmap [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: Registering the mutating configmap webhook via the AdmissionRegistration API STEP: create a configmap that should be updated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 22 14:05:21.179: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-8534" for this suite. STEP: Destroying namespace "webhook-8534-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance]","total":-1,"completed":48,"skipped":986,"failed":3,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]"]} S ------------------------------ [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 22 14:05:21.266: INFO: >>> kubeConfig: /tmp/kubeconfig STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [It] removes definition from spec when one version gets changed to not be served [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: set up a multi version CRD Apr 22 
14:05:21.316: INFO: >>> kubeConfig: /tmp/kubeconfig STEP: mark a version not served STEP: check the unserved version gets removed STEP: check the other version is not changed [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 22 14:05:37.669: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-4404" for this suite. • ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance]","total":-1,"completed":49,"skipped":987,"failed":3,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]"]} 
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Variable Expansion /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 22 14:05:37.979: INFO: >>> kubeConfig: /tmp/kubeconfig STEP: Building a namespace api object, basename var-expansion 
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should allow substituting values in a container's args [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating a pod to test substitution in container's args
Apr 22 14:05:38.017: INFO: Waiting up to 5m0s for pod "var-expansion-7f3e956d-28c8-4912-9681-acdcf54ea4cb" in namespace "var-expansion-3355" to be "Succeeded or Failed"
Apr 22 14:05:38.024: INFO: Pod "var-expansion-7f3e956d-28c8-4912-9681-acdcf54ea4cb": Phase="Pending", Reason="", readiness=false. Elapsed: 6.636774ms
Apr 22 14:05:40.028: INFO: Pod "var-expansion-7f3e956d-28c8-4912-9681-acdcf54ea4cb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010855055s
Apr 22 14:05:42.036: INFO: Pod "var-expansion-7f3e956d-28c8-4912-9681-acdcf54ea4cb": Phase="Pending", Reason="", readiness=false. Elapsed: 4.01854062s
Apr 22 14:05:44.041: INFO: Pod "var-expansion-7f3e956d-28c8-4912-9681-acdcf54ea4cb": Phase="Pending", Reason="", readiness=false. Elapsed: 6.023421784s
Apr 22 14:05:46.048: INFO: Pod "var-expansion-7f3e956d-28c8-4912-9681-acdcf54ea4cb": Phase="Pending", Reason="", readiness=false. Elapsed: 8.030603229s
Apr 22 14:05:48.053: INFO: Pod "var-expansion-7f3e956d-28c8-4912-9681-acdcf54ea4cb": Phase="Succeeded", Reason="", readiness=false.
Elapsed: 10.035838704s
STEP: Saw pod success
Apr 22 14:05:48.053: INFO: Pod "var-expansion-7f3e956d-28c8-4912-9681-acdcf54ea4cb" satisfied condition "Succeeded or Failed"
Apr 22 14:05:48.059: INFO: Trying to get logs from node k8s-upgrade-and-conformance-7gf7we-md-0-7868f4c856-k4p2k pod var-expansion-7f3e956d-28c8-4912-9681-acdcf54ea4cb container dapi-container: <nil>
STEP: delete the pod
Apr 22 14:05:48.084: INFO: Waiting for pod var-expansion-7f3e956d-28c8-4912-9681-acdcf54ea4cb to disappear
Apr 22 14:05:48.087: INFO: Pod var-expansion-7f3e956d-28c8-4912-9681-acdcf54ea4cb no longer exists
[AfterEach] [sig-node] Variable Expansion
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 22 14:05:48.087: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-3355" for this suite.
•
------------------------------
{"msg":"PASSED [sig-node] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance]","total":-1,"completed":50,"skipped":1149,"failed":3,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]"]}
------------------------------
[BeforeEach] [sig-storage] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 22 14:05:48.119: INFO: >>> kubeConfig:
/tmp/kubeconfig
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] binary data should be reflected in volume [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating configMap with name configmap-test-upd-a32a7e5f-88a9-4bd9-8f19-7d24e87dbd82
STEP: Creating the pod
STEP: Waiting for pod with text data
STEP: Waiting for pod with binary data
[AfterEach] [sig-storage] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 22 14:05:50.178: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-2021" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance]","total":-1,"completed":51,"skipped":1164,"failed":3,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]"]}
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 22 14:05:50.188: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for
kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:244
[It] should check if kubectl can dry-run update Pods [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: running the image k8s.gcr.io/e2e-test-images/httpd:2.4.38-2
Apr 22 14:05:50.214: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-873 run e2e-test-httpd-pod --image=k8s.gcr.io/e2e-test-images/httpd:2.4.38-2 --pod-running-timeout=2m0s --labels=run=e2e-test-httpd-pod'
Apr 22 14:05:50.314: INFO: stderr: ""
Apr 22 14:05:50.314: INFO: stdout: "pod/e2e-test-httpd-pod created\n"
STEP: replace the image in the pod with server-side dry-run
Apr 22 14:05:50.314: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-873 patch pod e2e-test-httpd-pod -p {"spec":{"containers":[{"name": "e2e-test-httpd-pod","image": "k8s.gcr.io/e2e-test-images/busybox:1.29-2"}]}} --dry-run=server'
Apr 22 14:05:51.004: INFO: stderr: ""
Apr 22 14:05:51.004: INFO: stdout: "pod/e2e-test-httpd-pod patched\n"
STEP: verifying the pod e2e-test-httpd-pod has the right image k8s.gcr.io/e2e-test-images/httpd:2.4.38-2
Apr 22 14:05:51.010: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-873 delete pods e2e-test-httpd-pod'
Apr 22 14:05:54.056: INFO: stderr: ""
Apr 22 14:05:54.056: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 22 14:05:54.056: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-873" for this suite.
•
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl server-side dry-run should check if kubectl can dry-run update Pods [Conformance]","total":-1,"completed":52,"skipped":1164,"failed":3,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]"]}
------------------------------
[BeforeEach] [sig-auth] ServiceAccounts
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 22 14:05:54.070: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] ServiceAccountIssuerDiscovery should support OIDC discovery of service account issuer [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
Apr 22 14:05:54.107: INFO: created pod
Apr 22 14:05:54.107: INFO: Waiting up to 5m0s for pod "oidc-discovery-validator" in namespace "svcaccounts-6797" to be "Succeeded or Failed"
Apr 22 14:05:54.109: INFO: Pod "oidc-discovery-validator": Phase="Pending", Reason="", readiness=false. Elapsed: 2.459267ms
Apr 22 14:05:56.114: INFO: Pod "oidc-discovery-validator": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007387199s
Apr 22 14:05:58.119: INFO: Pod "oidc-discovery-validator": Phase="Succeeded", Reason="", readiness=false.
Elapsed: 4.012435389s
STEP: Saw pod success
Apr 22 14:05:58.119: INFO: Pod "oidc-discovery-validator" satisfied condition "Succeeded or Failed"
Apr 22 14:06:28.120: INFO: polling logs
Apr 22 14:06:28.126: INFO: Pod logs:
2022/04/22 14:05:54 OK: Got token
2022/04/22 14:05:54 validating with in-cluster discovery
2022/04/22 14:05:54 OK: got issuer https://kubernetes.default.svc.cluster.local
2022/04/22 14:05:54 Full, not-validated claims: openidmetadata.claims{Claims:jwt.Claims{Issuer:"https://kubernetes.default.svc.cluster.local", Subject:"system:serviceaccount:svcaccounts-6797:default", Audience:jwt.Audience{"oidc-discovery-test"}, Expiry:1650636954, NotBefore:1650636354, IssuedAt:1650636354, ID:""}, Kubernetes:openidmetadata.kubeClaims{Namespace:"svcaccounts-6797", ServiceAccount:openidmetadata.kubeName{Name:"default", UID:"a120e2b1-9cde-4002-8b4f-eafa3b6c1a51"}}}
2022/04/22 14:05:54 OK: Constructed OIDC provider for issuer https://kubernetes.default.svc.cluster.local
2022/04/22 14:05:54 OK: Validated signature on JWT
2022/04/22 14:05:54 OK: Got valid claims from token!
2022/04/22 14:05:54 Full, validated claims: &openidmetadata.claims{Claims:jwt.Claims{Issuer:"https://kubernetes.default.svc.cluster.local", Subject:"system:serviceaccount:svcaccounts-6797:default", Audience:jwt.Audience{"oidc-discovery-test"}, Expiry:1650636954, NotBefore:1650636354, IssuedAt:1650636354, ID:""}, Kubernetes:openidmetadata.kubeClaims{Namespace:"svcaccounts-6797", ServiceAccount:openidmetadata.kubeName{Name:"default", UID:"a120e2b1-9cde-4002-8b4f-eafa3b6c1a51"}}}
Apr 22 14:06:28.126: INFO: completed pod
[AfterEach] [sig-auth] ServiceAccounts
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 22 14:06:28.131: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-6797" for this suite.
•
------------------------------
{"msg":"PASSED [sig-auth] ServiceAccounts ServiceAccountIssuerDiscovery should support OIDC discovery of service account issuer [Conformance]","total":-1,"completed":53,"skipped":1166,"failed":3,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]"]}
------------------------------
[BeforeEach] [sig-node] Docker Containers
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 22 14:06:28.158: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[AfterEach] [sig-node] Docker Containers
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 22 14:06:30.200: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-3455" for this suite.
•
------------------------------
{"msg":"PASSED [sig-node] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance]","total":-1,"completed":54,"skipped":1177,"failed":3,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]"]}
------------------------------
[BeforeEach] [sig-api-machinery] ResourceQuota
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 22 14:06:30.229: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
[AfterEach] [sig-api-machinery] ResourceQuota
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 22 14:06:37.265: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-4288" for this suite.
•
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]","total":-1,"completed":55,"skipped":1190,"failed":3,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]"]}
------------------------------
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 22 14:06:37.299: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46
[It] should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
Apr 22 14:06:37.328: INFO: Waiting up to 5m0s for pod "busybox-readonly-false-01776d81-5572-4a3c-ad95-03670fb9513b" in namespace "security-context-test-8582" to be "Succeeded or Failed"
Apr 22 14:06:37.331: INFO: Pod
"busybox-readonly-false-01776d81-5572-4a3c-ad95-03670fb9513b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.647898ms
Apr 22 14:06:39.335: INFO: Pod "busybox-readonly-false-01776d81-5572-4a3c-ad95-03670fb9513b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007144621s
Apr 22 14:06:41.340: INFO: Pod "busybox-readonly-false-01776d81-5572-4a3c-ad95-03670fb9513b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011946445s
Apr 22 14:06:41.340: INFO: Pod "busybox-readonly-false-01776d81-5572-4a3c-ad95-03670fb9513b" satisfied condition "Succeeded or Failed"
[AfterEach] [sig-node] Security Context
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 22 14:06:41.340: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-8582" for this suite.
•
------------------------------
{"msg":"PASSED [sig-node] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]","total":-1,"completed":56,"skipped":1209,"failed":3,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]"]}
------------------------------
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 22 14:06:41.493: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:749
[It] should be able to change the type from ClusterIP to ExternalName [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: creating a service clusterip-service with the type=ClusterIP in namespace services-556
STEP: Creating active service to test reachability when its FQDN is referred as
externalName for another service
STEP: creating service externalsvc in namespace services-556
STEP: creating replication controller externalsvc in namespace services-556
I0422 14:06:41.572837 17 runners.go:193] Created replication controller with name: externalsvc, namespace: services-556, replica count: 2
I0422 14:06:44.624286 17 runners.go:193] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
STEP: changing the ClusterIP service to type=ExternalName
Apr 22 14:06:44.640: INFO: Creating new exec pod
Apr 22 14:06:46.656: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-556 exec execpodnfxp6 -- /bin/sh -x -c nslookup clusterip-service.services-556.svc.cluster.local'
Apr 22 14:06:46.969: INFO: stderr: "+ nslookup clusterip-service.services-556.svc.cluster.local\n"
Apr 22 14:06:46.969: INFO: stdout: "Server:\t\t10.128.0.10\nAddress:\t10.128.0.10#53\n\nclusterip-service.services-556.svc.cluster.local\tcanonical name = externalsvc.services-556.svc.cluster.local.\nName:\texternalsvc.services-556.svc.cluster.local\nAddress: 10.137.44.63\n\n"
STEP: deleting ReplicationController externalsvc in namespace services-556, will wait for the garbage collector to delete the pods
Apr 22 14:06:47.029: INFO: Deleting ReplicationController externalsvc took: 5.414052ms
Apr 22 14:06:47.129: INFO: Terminating ReplicationController externalsvc pods took: 100.595305ms
Apr 22 14:06:49.240: INFO: Cleaning up the ClusterIP to ExternalName test service
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 22 14:06:49.248: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-556" for this suite.
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:753
•
------------------------------
{"msg":"PASSED [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance]","total":-1,"completed":57,"skipped":1304,"failed":3,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]"]}
------------------------------
[BeforeEach] [sig-node] Container Lifecycle Hook
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 22 14:06:49.281: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/lifecycle_hook.go:53
STEP: create the container to handle the HTTPGet hook request.
Apr 22 14:06:49.313: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true)
Apr 22 14:06:51.317: INFO: The status of Pod pod-handle-http-request is Running (Ready = true)
[It] should execute prestop http hook properly [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: create the pod with lifecycle hook
Apr 22 14:06:51.328: INFO: The status of Pod pod-with-prestop-http-hook is Pending, waiting for it to be Running (with Ready = true)
Apr 22 14:06:53.332: INFO: The status of Pod pod-with-prestop-http-hook is Running (Ready = true)
STEP: delete the pod with lifecycle hook
Apr 22 14:06:53.342: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Apr 22 14:06:53.345: INFO: Pod pod-with-prestop-http-hook still exists
Apr 22 14:06:55.345: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Apr 22 14:06:55.348: INFO: Pod pod-with-prestop-http-hook still exists
Apr 22 14:06:57.346: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Apr 22 14:06:57.349: INFO: Pod pod-with-prestop-http-hook no longer exists
STEP: check prestop hook
[AfterEach] [sig-node] Container Lifecycle Hook
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 22 14:06:57.366: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-8576" for this suite.
•
------------------------------
{"msg":"PASSED [sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]","total":-1,"completed":58,"skipped":1322,"failed":3,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]"]}
------------------------------
[BeforeEach] [sig-node] Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 22 14:06:57.439: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-node] Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:189
[It] should be submitted and removed [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: creating the pod
STEP: setting up watch
STEP: submitting the pod to kubernetes
Apr 22 14:06:57.464: INFO: observed the pod list
STEP: verifying the pod is in kubernetes
STEP: verifying pod creation was observed
STEP: deleting the pod gracefully
STEP: verifying pod deletion was observed
[AfterEach] [sig-node] Pods
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 22 14:07:01.955: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-2408" for this suite.
•
------------------------------
{"msg":"PASSED [sig-node] Pods should be submitted and removed [NodeConformance] [Conformance]","total":-1,"completed":59,"skipped":1371,"failed":3,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]"]}
------------------------------
[BeforeEach] [sig-api-machinery] ResourceQuota
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 22 14:07:01.975: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a replica set.
[Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a ReplicaSet
STEP: Ensuring resource quota status captures replicaset creation
STEP: Deleting a ReplicaSet
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 22 14:07:13.043: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-6182" for this suite.
•
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. [Conformance]","total":-1,"completed":60,"skipped":1376,"failed":3,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]"]}
------------------------------
[BeforeEach] [sig-storage] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 22 14:07:13.066: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned
in namespace
[It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating secret with name secret-test-c6e0aaa0-4feb-4f5c-9070-14eb4fffcc4a
STEP: Creating a pod to test consume secrets
Apr 22 14:07:13.118: INFO: Waiting up to 5m0s for pod "pod-secrets-296d2547-f8ca-4e56-a8a8-08b7cccf7281" in namespace "secrets-37" to be "Succeeded or Failed"
Apr 22 14:07:13.121: INFO: Pod "pod-secrets-296d2547-f8ca-4e56-a8a8-08b7cccf7281": Phase="Pending", Reason="", readiness=false. Elapsed: 2.665456ms
Apr 22 14:07:15.124: INFO: Pod "pod-secrets-296d2547-f8ca-4e56-a8a8-08b7cccf7281": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00644276s
Apr 22 14:07:17.128: INFO: Pod "pod-secrets-296d2547-f8ca-4e56-a8a8-08b7cccf7281": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010557986s
STEP: Saw pod success
Apr 22 14:07:17.129: INFO: Pod "pod-secrets-296d2547-f8ca-4e56-a8a8-08b7cccf7281" satisfied condition "Succeeded or Failed"
Apr 22 14:07:17.131: INFO: Trying to get logs from node k8s-upgrade-and-conformance-7gf7we-md-0-7868f4c856-k4p2k pod pod-secrets-296d2547-f8ca-4e56-a8a8-08b7cccf7281 container secret-volume-test: <nil>
STEP: delete the pod
Apr 22 14:07:17.148: INFO: Waiting for pod pod-secrets-296d2547-f8ca-4e56-a8a8-08b7cccf7281 to disappear
Apr 22 14:07:17.151: INFO: Pod pod-secrets-296d2547-f8ca-4e56-a8a8-08b7cccf7281 no longer exists
[AfterEach] [sig-storage] Secrets
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 22 14:07:17.151: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-37" for this suite.
STEP: Destroying namespace "secret-namespace-9422" for this suite.
�[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]","total":-1,"completed":61,"skipped":1383,"failed":3,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]"]} �[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Apr 22 14:07:17.169: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename kubectl �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace �[1mSTEP�[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:244 [It] should check if Kubernetes control plane services is included in cluster-info [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 �[1mSTEP�[0m: validating cluster-info Apr 22 14:07:17.190: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-3947 cluster-info' Apr 22 14:07:17.266: INFO: stderr: "" Apr 22 14:07:17.266: INFO: stdout: "\x1b[0;32mKubernetes control plane\x1b[0m is running at \x1b[0;33mhttps://172.18.0.3:6443\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl 
cluster-info dump'.\n" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 22 14:07:17.266: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "kubectl-3947" for this suite. �[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes control plane services is included in cluster-info [Conformance]","total":-1,"completed":62,"skipped":1386,"failed":3,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]"]} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-api-machinery] Servers with support for Table transformation /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Apr 22 14:07:17.290: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename tables �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace �[1mSTEP�[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-api-machinery] Servers with support for Table transformation /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/table_conversion.go:47 [It] should return a 406 for a backend which does not implement metadata [Conformance] 
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[AfterEach] [sig-api-machinery] Servers with support for Table transformation
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 22 14:07:17.314: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "tables-3810" for this suite.
•
------------------------------
{"msg":"PASSED [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance]","total":-1,"completed":63,"skipped":1396,"failed":3,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]"]}
SSSSSSSSSSSSSSSSS
------------------------------
[BeforeEach] [sig-node] Container Lifecycle Hook
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 22 14:07:17.351: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/lifecycle_hook.go:53
STEP: create the container to handle the HTTPGet hook request.
Apr 22 14:07:17.384: INFO: The status of Pod pod-handle-http-request is Pending, waiting for it to be Running (with Ready = true)
Apr 22 14:07:19.389: INFO: The status of Pod pod-handle-http-request is Running (Ready = true)
[It] should execute poststart http hook properly [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: create the pod with lifecycle hook
Apr 22 14:07:19.401: INFO: The status of Pod pod-with-poststart-http-hook is Pending, waiting for it to be Running (with Ready = true)
Apr 22 14:07:21.405: INFO: The status of Pod pod-with-poststart-http-hook is Running (Ready = true)
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Apr 22 14:07:21.443: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Apr 22 14:07:21.450: INFO: Pod pod-with-poststart-http-hook still exists
Apr 22 14:07:23.451: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Apr 22 14:07:23.454: INFO: Pod pod-with-poststart-http-hook still exists
Apr 22 14:07:25.452: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Apr 22 14:07:25.455: INFO: Pod pod-with-poststart-http-hook no longer exists
[AfterEach] [sig-node] Container Lifecycle Hook
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 22 14:07:25.455: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-5656" for this suite.
•
------------------------------
{"msg":"PASSED [sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance]","total":-1,"completed":64,"skipped":1413,"failed":3,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]"]}
SSSSSSS
------------------------------
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 22 14:07:25.477: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Apr 22 14:07:25.874: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Apr 22 14:07:28.894: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should be able to deny pod and configmap creation [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Registering the webhook via the AdmissionRegistration API
Apr 22 14:07:38.916: INFO: Waiting for webhook configuration to be ready...
Apr 22 14:07:49.028: INFO: Waiting for webhook configuration to be ready...
Apr 22 14:07:59.129: INFO: Waiting for webhook configuration to be ready...
Apr 22 14:08:09.227: INFO: Waiting for webhook configuration to be ready...
Apr 22 14:08:19.240: INFO: Waiting for webhook configuration to be ready...
Apr 22 14:08:19.240: FAIL: waiting for webhook configuration to be ready
Unexpected error:
    <*errors.errorString | 0xc00033c2a0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred

Full Stack Trace
k8s.io/kubernetes/test/e2e/apimachinery.registerWebhook(0xc0007b2580, {0xc003f60da0, 0xc}, 0x0, 0x0)
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:887 +0x5ea
k8s.io/kubernetes/test/e2e/apimachinery.glob..func27.4()
    /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:195 +0x45
k8s.io/kubernetes/test/e2e.RunE2ETests(0x24e2677)
    _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:133 +0x697
k8s.io/kubernetes/test/e2e.TestE2E(0x2456919)
    _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:136 +0x19
testing.tRunner(0xc0000b5860, 0x73a1f18)
    /usr/local/go/src/testing/testing.go:1259 +0x102
created by testing.(*T).Run
    /usr/local/go/src/testing/testing.go:1306 +0x35a
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 22 14:08:19.241: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace
"webhook-9870" for this suite. �[1mSTEP�[0m: Destroying namespace "webhook-9870-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 �[91m�[1m• Failure [53.822 seconds]�[0m [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] �[90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23�[0m �[91m�[1mshould be able to deny pod and configmap creation [Conformance] [It]�[0m �[90m/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633�[0m �[91mApr 22 14:08:19.240: waiting for webhook configuration to be ready Unexpected error: <*errors.errorString | 0xc00033c2a0>: { s: "timed out waiting for the condition", } timed out waiting for the condition occurred�[0m /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:887 �[90m------------------------------�[0m {"msg":"FAILED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","total":-1,"completed":64,"skipped":1420,"failed":4,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]"]} [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Apr 22 14:08:19.300: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename webhook �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace �[1mSTEP�[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 �[1mSTEP�[0m: Setting up server cert �[1mSTEP�[0m: Create role binding to let webhook read extension-apiserver-authentication �[1mSTEP�[0m: Deploying the webhook pod �[1mSTEP�[0m: Wait for the deployment to be ready Apr 22 14:08:20.010: INFO: new replicaset for deployment "sample-webhook-deployment" is yet to be created Apr 22 14:08:22.020: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2022, time.April, 22, 14, 8, 20, 0, time.Local), LastTransitionTime:time.Date(2022, time.April, 22, 14, 8, 20, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2022, time.April, 22, 14, 8, 20, 0, time.Local), LastTransitionTime:time.Date(2022, time.April, 22, 14, 8, 20, 0, time.Local), Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-78948c58f6\" is progressing."}}, CollisionCount:(*int32)(nil)} �[1mSTEP�[0m: Deploying the webhook service �[1mSTEP�[0m: Verifying the service has paired with the endpoint Apr 22 14:08:25.034: INFO: Waiting for amount of service:e2e-test-webhook 
endpoints to be 1 [It] should be able to deny pod and configmap creation [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 �[1mSTEP�[0m: Registering the webhook via the AdmissionRegistration API �[1mSTEP�[0m: create a pod that should be denied by the webhook �[1mSTEP�[0m: create a pod that causes the webhook to hang �[1mSTEP�[0m: create a configmap that should be denied by the webhook �[1mSTEP�[0m: create a configmap that should be admitted by the webhook �[1mSTEP�[0m: update (PUT) the admitted configmap to a non-compliant one should be rejected by the webhook �[1mSTEP�[0m: update (PATCH) the admitted configmap to a non-compliant one should be rejected by the webhook �[1mSTEP�[0m: create a namespace that bypass the webhook �[1mSTEP�[0m: create a configmap that violates the webhook policy but is in a whitelisted namespace [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 22 14:08:35.145: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "webhook-8474" for this suite. �[1mSTEP�[0m: Destroying namespace "webhook-8474-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 �[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","total":-1,"completed":65,"skipped":1420,"failed":4,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]"]} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Apr 22 14:08:35.223: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename projected �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace �[1mSTEP�[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41 [It] should update annotations on modification [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 �[1mSTEP�[0m: Creating the pod Apr 22 
14:08:35.271: INFO: The status of Pod annotationupdatee6d67955-a2c7-4b69-96bb-b3e2f014cc30 is Pending, waiting for it to be Running (with Ready = true)
Apr 22 14:08:37.280: INFO: The status of Pod annotationupdatee6d67955-a2c7-4b69-96bb-b3e2f014cc30 is Running (Ready = true)
Apr 22 14:08:37.800: INFO: Successfully updated pod "annotationupdatee6d67955-a2c7-4b69-96bb-b3e2f014cc30"
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 22 14:08:41.823: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9509" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance]","total":-1,"completed":66,"skipped":1429,"failed":4,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]"]}
S
------------------------------
[BeforeEach] [sig-storage] Subpath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 22 14:08:41.840: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with secret pod [Excluded:WindowsDocker] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating pod pod-subpath-test-secret-xrpv
STEP: Creating a pod to test atomic-volume-subpath
Apr 22 14:08:41.881: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-xrpv" in namespace "subpath-4533" to be "Succeeded or Failed"
Apr 22 14:08:41.883: INFO: Pod "pod-subpath-test-secret-xrpv": Phase="Pending", Reason="", readiness=false. Elapsed: 2.391766ms
Apr 22 14:08:43.888: INFO: Pod "pod-subpath-test-secret-xrpv": Phase="Running", Reason="", readiness=true. Elapsed: 2.007206906s
Apr 22 14:08:45.892: INFO: Pod "pod-subpath-test-secret-xrpv": Phase="Running", Reason="", readiness=true. Elapsed: 4.011114726s
Apr 22 14:08:47.898: INFO: Pod "pod-subpath-test-secret-xrpv": Phase="Running", Reason="", readiness=true. Elapsed: 6.016733973s
Apr 22 14:08:49.902: INFO: Pod "pod-subpath-test-secret-xrpv": Phase="Running", Reason="", readiness=true. Elapsed: 8.021159501s
Apr 22 14:08:51.907: INFO: Pod "pod-subpath-test-secret-xrpv": Phase="Running", Reason="", readiness=true. Elapsed: 10.025771947s
Apr 22 14:08:53.912: INFO: Pod "pod-subpath-test-secret-xrpv": Phase="Running", Reason="", readiness=true. Elapsed: 12.031161343s
Apr 22 14:08:55.916: INFO: Pod "pod-subpath-test-secret-xrpv": Phase="Running", Reason="", readiness=true. Elapsed: 14.035597042s
Apr 22 14:08:57.921: INFO: Pod "pod-subpath-test-secret-xrpv": Phase="Running", Reason="", readiness=true. Elapsed: 16.040256463s
Apr 22 14:08:59.926: INFO: Pod "pod-subpath-test-secret-xrpv": Phase="Running", Reason="", readiness=true. Elapsed: 18.044681391s
Apr 22 14:09:01.931: INFO: Pod "pod-subpath-test-secret-xrpv": Phase="Running", Reason="", readiness=true. Elapsed: 20.049706468s
Apr 22 14:09:03.935: INFO: Pod "pod-subpath-test-secret-xrpv": Phase="Running", Reason="", readiness=false. Elapsed: 22.054509238s
Apr 22 14:09:05.940: INFO: Pod "pod-subpath-test-secret-xrpv": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.059039599s
STEP: Saw pod success
Apr 22 14:09:05.940: INFO: Pod "pod-subpath-test-secret-xrpv" satisfied condition "Succeeded or Failed"
Apr 22 14:09:05.943: INFO: Trying to get logs from node k8s-upgrade-and-conformance-7gf7we-worker-3u7awl pod pod-subpath-test-secret-xrpv container test-container-subpath-secret-xrpv: <nil>
STEP: delete the pod
Apr 22 14:09:05.965: INFO: Waiting for pod pod-subpath-test-secret-xrpv to disappear
Apr 22 14:09:05.969: INFO: Pod pod-subpath-test-secret-xrpv no longer exists
STEP: Deleting pod pod-subpath-test-secret-xrpv
Apr 22 14:09:05.969: INFO: Deleting pod "pod-subpath-test-secret-xrpv" in namespace "subpath-4533"
[AfterEach] [sig-storage] Subpath
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 22 14:09:05.972: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-4533" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [Excluded:WindowsDocker] [Conformance]","total":-1,"completed":67,"skipped":1430,"failed":4,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]"]}
SSSSSS
------------------------------
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 22 14:09:05.991: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should not conflict [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
Apr 22 14:09:06.030: INFO: The status of Pod pod-secrets-3f548da3-0cc4-4b6e-bd8d-ddf98fdbb9ac is Pending, waiting for it to be Running (with Ready = true)
Apr 22 14:09:08.035: INFO: The status of Pod pod-secrets-3f548da3-0cc4-4b6e-bd8d-ddf98fdbb9ac is Running (Ready = true)
STEP: Cleaning up the secret
STEP: Cleaning up the configmap
STEP: Cleaning up the pod
[AfterEach] [sig-storage] EmptyDir wrapper volumes
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 22 14:09:08.054: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "emptydir-wrapper-893" for this suite. �[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance]","total":-1,"completed":68,"skipped":1436,"failed":4,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]"]} [BeforeEach] [sig-instrumentation] Events /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Apr 22 14:09:08.065: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename events �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace �[1mSTEP�[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should delete a collection of events [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 �[1mSTEP�[0m: Create set of events Apr 22 14:09:08.090: INFO: created test-event-1 Apr 22 14:09:08.095: INFO: created test-event-2 Apr 22 14:09:08.099: INFO: created test-event-3 �[1mSTEP�[0m: get a list of Events with a label in the current namespace �[1mSTEP�[0m: delete collection of events Apr 22 14:09:08.102: INFO: requesting DeleteCollection of events 
�[1mSTEP�[0m: check that the list of events matches the requested quantity Apr 22 14:09:08.115: INFO: requesting list of events to confirm quantity [AfterEach] [sig-instrumentation] Events /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 22 14:09:08.118: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "events-5543" for this suite. �[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-instrumentation] Events should delete a collection of events [Conformance]","total":-1,"completed":69,"skipped":1436,"failed":4,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]"]} �[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m�[36mS�[0m �[90m------------------------------�[0m [BeforeEach] [sig-node] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Apr 22 14:09:08.140: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename kubelet-test �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace �[1mSTEP�[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-node] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/kubelet.go:38 [It] should print the output to logs [NodeConformance] [Conformance] 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 Apr 22 14:09:08.167: INFO: The status of Pod busybox-scheduling-a5cb6afc-7f48-480f-bd60-db60822547b0 is Pending, waiting for it to be Running (with Ready = true) Apr 22 14:09:10.172: INFO: The status of Pod busybox-scheduling-a5cb6afc-7f48-480f-bd60-db60822547b0 is Running (Ready = true) [AfterEach] [sig-node] Kubelet /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 22 14:09:10.181: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "kubelet-test-5220" for this suite. �[32m•�[0m �[90m------------------------------�[0m {"msg":"PASSED [sig-node] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance]","total":-1,"completed":70,"skipped":1445,"failed":4,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]"]} [BeforeEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 �[1mSTEP�[0m: Creating a kubernetes client Apr 22 14:09:10.205: INFO: >>> kubeConfig: /tmp/kubeconfig �[1mSTEP�[0m: Building a namespace api object, basename deployment �[1mSTEP�[0m: Waiting for a default service account to be provisioned in namespace �[1mSTEP�[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-apps] Deployment 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:89 [It] RollingUpdateDeployment should delete old pods and create new ones [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 Apr 22 14:09:10.242: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted) Apr 22 14:09:10.253: INFO: Pod name sample-pod: Found 0 pods out of 1 Apr 22 14:09:15.258: INFO: Pod name sample-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Apr 22 14:09:15.258: INFO: Creating deployment "test-rolling-update-deployment" Apr 22 14:09:15.266: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has Apr 22 14:09:15.276: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created Apr 22 14:09:17.284: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected Apr 22 14:09:17.287: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted) [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:83 Apr 22 14:09:17.296: INFO: Deployment "test-rolling-update-deployment": &Deployment{ObjectMeta:{test-rolling-update-deployment deployment-3537 d5724414-f5e0-4186-8953-87601e0b3147 15615 1 2022-04-22 14:09:15 +0000 UTC <nil> <nil> map[name:sample-pod] map[deployment.kubernetes.io/revision:3546343826724305833] [] [] [{e2e.test Update apps/v1 2022-04-22 14:09:15 +0000 UTC FieldsV1 
{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{},"f:strategy":{"f:rollingUpdate":{".":{},"f:maxSurge":{},"f:maxUnavailable":{}},"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2022-04-22 14:09:16 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:availableReplicas":{},"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{},"f:updatedReplicas":{}}} status}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[name:sample-pod] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.33 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc00377e848 <nil> ClusterFirst map[] <nil> false false false <nil> 
&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] <nil> nil [] <nil> <nil> <nil> map[] [] <nil> nil}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2022-04-22 14:09:15 +0000 UTC,LastTransitionTime:2022-04-22 14:09:15 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rolling-update-deployment-796dbc4547" has successfully progressed.,LastUpdateTime:2022-04-22 14:09:16 +0000 UTC,LastTransitionTime:2022-04-22 14:09:15 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} Apr 22 14:09:17.301: INFO: New ReplicaSet "test-rolling-update-deployment-796dbc4547" of Deployment "test-rolling-update-deployment": &ReplicaSet{ObjectMeta:{test-rolling-update-deployment-796dbc4547 deployment-3537 bc6135a7-18a2-46d1-87f3-d88fdd51117b 15605 1 2022-04-22 14:09:15 +0000 UTC <nil> <nil> map[name:sample-pod pod-template-hash:796dbc4547] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305833] [{apps/v1 Deployment test-rolling-update-deployment d5724414-f5e0-4186-8953-87601e0b3147 0xc002b745e7 0xc002b745e8}] [] [{kube-controller-manager Update apps/v1 2022-04-22 14:09:15 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d5724414-f5e0-4186-8953-87601e0b3147\"}":{}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2022-04-22 14:09:16 +0000 UTC FieldsV1 {"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}} status}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 796dbc4547,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[name:sample-pod pod-template-hash:796dbc4547] map[] [] [] []} {[] [] [{agnhost k8s.gcr.io/e2e-test-images/agnhost:2.33 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc002b74698 <nil> ClusterFirst map[] <nil> false false false <nil> &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil 
default-scheduler [] [] <nil> nil [] <nil> <nil> <nil> map[] [] <nil> nil}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Apr 22 14:09:17.301: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment": Apr 22 14:09:17.301: INFO: &ReplicaSet{ObjectMeta:{test-rolling-update-controller deployment-3537 be7623e6-7600-4945-ab59-71d8db7d9c1d 15614 2 2022-04-22 14:09:10 +0000 UTC <nil> <nil> map[name:sample-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305832] [{apps/v1 Deployment test-rolling-update-deployment d5724414-f5e0-4186-8953-87601e0b3147 0xc002b744b7 0xc002b744b8}] [] [{e2e.test Update apps/v1 2022-04-22 14:09:10 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2022-04-22 14:09:16 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d5724414-f5e0-4186-8953-87601e0b3147\"}":{}}},"f:spec":{"f:replicas":{}}} } {kube-controller-manager Update apps/v1 2022-04-22 14:09:16 +0000 UTC FieldsV1 {"f:status":{"f:observedGeneration":{},"f:replicas":{}}} status}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: 
sample-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[name:sample-pod pod:httpd] map[] [] [] []} {[] [] [{httpd k8s.gcr.io/e2e-test-images/httpd:2.4.38-2 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc002b74578 <nil> ClusterFirst map[] <nil> false false false <nil> PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] <nil> nil [] <nil> <nil> <nil> map[] [] <nil> nil}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Apr 22 14:09:17.305: INFO: Pod "test-rolling-update-deployment-796dbc4547-7kxmx" is available: &Pod{ObjectMeta:{test-rolling-update-deployment-796dbc4547-7kxmx test-rolling-update-deployment-796dbc4547- deployment-3537 8f934410-6322-4e4c-a255-6b5e92779abe 15604 0 2022-04-22 14:09:15 +0000 UTC <nil> <nil> map[name:sample-pod pod-template-hash:796dbc4547] map[] [{apps/v1 ReplicaSet test-rolling-update-deployment-796dbc4547 bc6135a7-18a2-46d1-87f3-d88fdd51117b 0xc000e3e207 0xc000e3e208}] [] [{kube-controller-manager Update v1 2022-04-22 14:09:15 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bc6135a7-18a2-46d1-87f3-d88fdd51117b\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } 
{kubelet Update v1 2022-04-22 14:09:16 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"192.168.2.124\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-tz7kf,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:k8s.gcr.io/e2e-test-images/agnhost:2.33,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:
[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-tz7kf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k8s-upgrade-and-conformance-7gf7we-md-0-7868f4c856-k4p2k,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]Topo
logySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-22 14:09:15 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-22 14:09:16 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-22 14:09:16 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-22 14:09:15 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.7,PodIP:192.168.2.124,StartTime:2022-04-22 14:09:15 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2022-04-22 14:09:15 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/agnhost:2.33,ImageID:k8s.gcr.io/e2e-test-images/agnhost@sha256:5b3a9f1c71c09c00649d8374224642ff7029ce91a721ec9132e6ed45fa73fd43,ContainerID:containerd://3535111a664cb5de0706391e2b38437b9a1e0be0bb3242c80f3ae1f4bb9032af,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:192.168.2.124,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 22 14:09:17.305: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-3537" for this suite. 
• ------------------------------ {"msg":"PASSED [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance]","total":-1,"completed":71,"skipped":1445,"failed":4,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]"]} SSSSSSSSSSS ------------------------------ [BeforeEach] [sig-auth] ServiceAccounts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 22 14:09:17.334: INFO: >>> kubeConfig: /tmp/kubeconfig STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should allow opting out of API token automount [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: getting the auto-created API token Apr 22 14:09:17.880: INFO: created pod pod-service-account-defaultsa Apr 22 14:09:17.881: INFO: pod pod-service-account-defaultsa service account token volume mount: true Apr 22 14:09:17.886: INFO: created pod pod-service-account-mountsa Apr 22 14:09:17.886: INFO: pod pod-service-account-mountsa service account token volume mount: true Apr 22 14:09:17.894: INFO: created pod 
pod-service-account-nomountsa Apr 22 14:09:17.894: INFO: pod pod-service-account-nomountsa service account token volume mount: false Apr 22 14:09:17.906: INFO: created pod pod-service-account-defaultsa-mountspec Apr 22 14:09:17.906: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true Apr 22 14:09:17.918: INFO: created pod pod-service-account-mountsa-mountspec Apr 22 14:09:17.918: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true Apr 22 14:09:17.927: INFO: created pod pod-service-account-nomountsa-mountspec Apr 22 14:09:17.927: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true Apr 22 14:09:17.941: INFO: created pod pod-service-account-defaultsa-nomountspec Apr 22 14:09:17.942: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false Apr 22 14:09:17.951: INFO: created pod pod-service-account-mountsa-nomountspec Apr 22 14:09:17.951: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false Apr 22 14:09:17.956: INFO: created pod pod-service-account-nomountsa-nomountspec Apr 22 14:09:17.956: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false [AfterEach] [sig-auth] ServiceAccounts /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 22 14:09:17.956: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-7732" for this suite. 
• ------------------------------ {"msg":"PASSED [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance]","total":-1,"completed":72,"skipped":1456,"failed":4,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]"]} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 22 14:09:18.040: INFO: >>> kubeConfig: /tmp/kubeconfig STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: Creating projection with secret that has name projected-secret-test-e7ef25a5-5b97-4e4f-9165-a41e502423c1 STEP: Creating a pod to test consume secrets Apr 22 14:09:18.077: 
INFO: Waiting up to 5m0s for pod "pod-projected-secrets-9f36a0cb-6bdb-4820-8968-c804db2df7c7" in namespace "projected-849" to be "Succeeded or Failed" Apr 22 14:09:18.080: INFO: Pod "pod-projected-secrets-9f36a0cb-6bdb-4820-8968-c804db2df7c7": Phase="Pending", Reason="", readiness=false. Elapsed: 3.613807ms Apr 22 14:09:20.085: INFO: Pod "pod-projected-secrets-9f36a0cb-6bdb-4820-8968-c804db2df7c7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007933684s Apr 22 14:09:22.088: INFO: Pod "pod-projected-secrets-9f36a0cb-6bdb-4820-8968-c804db2df7c7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011616389s STEP: Saw pod success Apr 22 14:09:22.088: INFO: Pod "pod-projected-secrets-9f36a0cb-6bdb-4820-8968-c804db2df7c7" satisfied condition "Succeeded or Failed" Apr 22 14:09:22.091: INFO: Trying to get logs from node k8s-upgrade-and-conformance-7gf7we-worker-3u7awl pod pod-projected-secrets-9f36a0cb-6bdb-4820-8968-c804db2df7c7 container projected-secret-volume-test: <nil> STEP: delete the pod Apr 22 14:09:22.105: INFO: Waiting for pod pod-projected-secrets-9f36a0cb-6bdb-4820-8968-c804db2df7c7 to disappear Apr 22 14:09:22.107: INFO: Pod pod-projected-secrets-9f36a0cb-6bdb-4820-8968-c804db2df7c7 no longer exists [AfterEach] [sig-storage] Projected secret /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 22 14:09:22.107: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-849" for this suite. 
• ------------------------------ {"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance]","total":-1,"completed":73,"skipped":1491,"failed":4,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]"]} SSSSSSSSSS ------------------------------ [BeforeEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 22 14:09:22.130: INFO: >>> kubeConfig: /tmp/kubeconfig STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should be immutable if `immutable` field is set [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 [AfterEach] [sig-storage] ConfigMap /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 22 14:09:22.183: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-1321" for this suite. 
• ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be immutable if `immutable` field is set [Conformance]","total":-1,"completed":74,"skipped":1501,"failed":4,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]"]} SSSSSSSS ------------------------------ [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 22 14:09:22.201: INFO: >>> kubeConfig: /tmp/kubeconfig STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:244 [It] should check if kubectl describe prints relevant information for rc and pods [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 Apr 22 14:09:22.221: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-4047 create -f -' Apr 22 14:09:22.509: INFO: stderr: "" Apr 22 14:09:22.509: INFO: stdout: "replicationcontroller/agnhost-primary created\n" Apr 22 14:09:22.510: INFO: Running '/usr/local/bin/kubectl 
--kubeconfig=/tmp/kubeconfig --namespace=kubectl-4047 create -f -' Apr 22 14:09:22.726: INFO: stderr: "" Apr 22 14:09:22.726: INFO: stdout: "service/agnhost-primary created\n" STEP: Waiting for Agnhost primary to start. Apr 22 14:09:23.730: INFO: Selector matched 1 pods for map[app:agnhost] Apr 22 14:09:23.730: INFO: Found 1 / 1 Apr 22 14:09:23.730: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Apr 22 14:09:23.733: INFO: Selector matched 1 pods for map[app:agnhost] Apr 22 14:09:23.733: INFO: ForEach: Found 1 pods from the filter. Now looping through them. Apr 22 14:09:23.733: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-4047 describe pod agnhost-primary-tltr4' Apr 22 14:09:23.818: INFO: stderr: "" Apr 22 14:09:23.818: INFO: stdout: "Name: agnhost-primary-tltr4\nNamespace: kubectl-4047\nPriority: 0\nNode: k8s-upgrade-and-conformance-7gf7we-worker-3u7awl/172.18.0.6\nStart Time: Fri, 22 Apr 2022 14:09:22 +0000\nLabels: app=agnhost\n role=primary\nAnnotations: <none>\nStatus: Running\nIP: 192.168.3.78\nIPs:\n IP: 192.168.3.78\nControlled By: ReplicationController/agnhost-primary\nContainers:\n agnhost-primary:\n Container ID: containerd://9a86acda743783fbb5e4f6c8dd60e53ed951e5883141a8f9eeb7bcca245b8e3e\n Image: k8s.gcr.io/e2e-test-images/agnhost:2.33\n Image ID: k8s.gcr.io/e2e-test-images/agnhost@sha256:5b3a9f1c71c09c00649d8374224642ff7029ce91a721ec9132e6ed45fa73fd43\n Port: 6379/TCP\n Host Port: 0/TCP\n State: Running\n Started: Fri, 22 Apr 2022 14:09:23 +0000\n Ready: True\n Restart Count: 0\n Environment: <none>\n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-lwxp8 (ro)\nConditions:\n Type Status\n Initialized True \n Ready True \n ContainersReady True \n PodScheduled True \nVolumes:\n kube-api-access-lwxp8:\n Type: Projected (a volume that contains injected data from multiple sources)\n TokenExpirationSeconds: 3607\n ConfigMapName: kube-root-ca.crt\n 
ConfigMapOptional: <nil>\n DownwardAPI: true\nQoS Class: BestEffort\nNode-Selectors: <none>\nTolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s\n node.kubernetes.io/unreachable:NoExecute op=Exists for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled 1s default-scheduler Successfully assigned kubectl-4047/agnhost-primary-tltr4 to k8s-upgrade-and-conformance-7gf7we-worker-3u7awl\n Normal Pulled 0s kubelet Container image \"k8s.gcr.io/e2e-test-images/agnhost:2.33\" already present on machine\n Normal Created 0s kubelet Created container agnhost-primary\n Normal Started 0s kubelet Started container agnhost-primary\n" Apr 22 14:09:23.818: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-4047 describe rc agnhost-primary' Apr 22 14:09:23.932: INFO: stderr: "" Apr 22 14:09:23.933: INFO: stdout: "Name: agnhost-primary\nNamespace: kubectl-4047\nSelector: app=agnhost,role=primary\nLabels: app=agnhost\n role=primary\nAnnotations: <none>\nReplicas: 1 current / 1 desired\nPods Status: 1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n Labels: app=agnhost\n role=primary\n Containers:\n agnhost-primary:\n Image: k8s.gcr.io/e2e-test-images/agnhost:2.33\n Port: 6379/TCP\n Host Port: 0/TCP\n Environment: <none>\n Mounts: <none>\n Volumes: <none>\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal SuccessfulCreate 1s replication-controller Created pod: agnhost-primary-tltr4\n" Apr 22 14:09:23.933: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-4047 describe service agnhost-primary' Apr 22 14:09:24.019: INFO: stderr: "" Apr 22 14:09:24.019: INFO: stdout: "Name: agnhost-primary\nNamespace: kubectl-4047\nLabels: app=agnhost\n role=primary\nAnnotations: <none>\nSelector: app=agnhost,role=primary\nType: ClusterIP\nIP Family Policy: SingleStack\nIP Families: IPv4\nIP: 10.131.34.129\nIPs: 
10.131.34.129\nPort: <unset> 6379/TCP\nTargetPort: agnhost-server/TCP\nEndpoints: 192.168.3.78:6379\nSession Affinity: None\nEvents: <none>\n" Apr 22 14:09:24.023: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-4047 describe node k8s-upgrade-and-conformance-7gf7we-control-plane-jsdzr' Apr 22 14:09:24.128: INFO: stderr: "" Apr 22 14:09:24.128: INFO: stdout: "Name: k8s-upgrade-and-conformance-7gf7we-control-plane-jsdzr\nRoles: control-plane,master\nLabels: beta.kubernetes.io/arch=amd64\n beta.kubernetes.io/os=linux\n kubernetes.io/arch=amd64\n kubernetes.io/hostname=k8s-upgrade-and-conformance-7gf7we-control-plane-jsdzr\n kubernetes.io/os=linux\n node-role.kubernetes.io/control-plane=\n node-role.kubernetes.io/master=\n node.kubernetes.io/exclude-from-external-load-balancers=\nAnnotations: cluster.x-k8s.io/cluster-name: k8s-upgrade-and-conformance-7gf7we\n cluster.x-k8s.io/cluster-namespace: k8s-upgrade-and-conformance-ee37ij\n cluster.x-k8s.io/machine: k8s-upgrade-and-conformance-7gf7we-control-plane-jsdzr\n cluster.x-k8s.io/owner-kind: KubeadmControlPlane\n cluster.x-k8s.io/owner-name: k8s-upgrade-and-conformance-7gf7we-control-plane\n kubeadm.alpha.kubernetes.io/cri-socket: /var/run/containerd/containerd.sock\n node.alpha.kubernetes.io/ttl: 0\n volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp: Fri, 22 Apr 2022 13:41:11 +0000\nTaints: node-role.kubernetes.io/master:NoSchedule\nUnschedulable: false\nLease:\n HolderIdentity: k8s-upgrade-and-conformance-7gf7we-control-plane-jsdzr\n AcquireTime: <unset>\n RenewTime: Fri, 22 Apr 2022 14:09:23 +0000\nConditions:\n Type Status LastHeartbeatTime LastTransitionTime Reason Message\n ---- ------ ----------------- ------------------ ------ -------\n MemoryPressure False Fri, 22 Apr 2022 14:07:41 +0000 Fri, 22 Apr 2022 13:41:11 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available\n DiskPressure False Fri, 22 Apr 2022 14:07:41 +0000 Fri, 
22 Apr 2022 13:41:11 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure\n PIDPressure False Fri, 22 Apr 2022 14:07:41 +0000 Fri, 22 Apr 2022 13:41:11 +0000 KubeletHasSufficientPID kubelet has sufficient PID available\n Ready True Fri, 22 Apr 2022 14:07:41 +0000 Fri, 22 Apr 2022 13:42:09 +0000 KubeletReady kubelet is posting ready status\nAddresses:\n InternalIP: 172.18.0.9\n Hostname: k8s-upgrade-and-conformance-7gf7we-control-plane-jsdzr\nCapacity:\n cpu: 8\n ephemeral-storage: 253882800Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 65865228Ki\n pods: 110\nAllocatable:\n cpu: 8\n ephemeral-storage: 253882800Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 65865228Ki\n pods: 110\nSystem Info:\n Machine ID: b2370f42582e440eb7a8be2d855465e3\n System UUID: e28de886-d36d-4371-a2a1-9c6f08bd8177\n Boot ID: e4a9b0be-4cf7-4a61-b199-a143ada523e9\n Kernel Version: 5.4.0-1061-gke\n OS Image: Ubuntu 20.10\n Operating System: linux\n Architecture: amd64\n Container Runtime Version: containerd://1.5.1\n Kubelet Version: v1.23.6\n Kube-Proxy Version: v1.23.6\nPodCIDR: 192.168.6.0/24\nPodCIDRs: 192.168.6.0/24\nProviderID: docker:////k8s-upgrade-and-conformance-7gf7we-control-plane-jsdzr\nNon-terminated Pods: (6 in total)\n Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits Age\n --------- ---- ------------ ---------- --------------- ------------- ---\n kube-system etcd-k8s-upgrade-and-conformance-7gf7we-control-plane-jsdzr 100m (1%) 0 (0%) 100Mi (0%) 0 (0%) 28m\n kube-system kindnet-szdxg 100m (1%) 100m (1%) 50Mi (0%) 50Mi (0%) 28m\n kube-system kube-apiserver-k8s-upgrade-and-conformance-7gf7we-control-plane-jsdzr 250m (3%) 0 (0%) 0 (0%) 0 (0%) 28m\n kube-system kube-controller-manager-k8s-upgrade-and-conformance-7gf7we-control-plane-jsdzr 200m (2%) 0 (0%) 0 (0%) 0 (0%) 28m\n kube-system kube-proxy-khvt8 0 (0%) 0 (0%) 0 (0%) 0 (0%) 26m\n kube-system kube-scheduler-k8s-upgrade-and-conformance-7gf7we-control-plane-jsdzr 100m (1%) 0 (0%) 0 (0%) 0 (0%) 
28m\nAllocated resources:\n (Total limits may be over 100 percent, i.e., overcommitted.)\n Resource Requests Limits\n -------- -------- ------\n cpu 750m (9%) 100m (1%)\n memory 150Mi (0%) 50Mi (0%)\n ephemeral-storage 0 (0%) 0 (0%)\n hugepages-1Gi 0 (0%) 0 (0%)\n hugepages-2Mi 0 (0%) 0 (0%)\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Starting 27m kube-proxy \n Normal Starting 26m kube-proxy \n Normal Starting 28m kubelet Starting kubelet.\n Normal NodeHasSufficientMemory 28m (x2 over 28m) kubelet Node k8s-upgrade-and-conformance-7gf7we-control-plane-jsdzr status is now: NodeHasSufficientMemory\n Normal NodeHasSufficientPID 28m (x2 over 28m) kubelet Node k8s-upgrade-and-conformance-7gf7we-control-plane-jsdzr status is now: NodeHasSufficientPID\n Warning CheckLimitsForResolvConf 28m kubelet Resolv.conf file '/etc/resolv.conf' contains search line consisting of more than 3 domains!\n Normal NodeAllocatableEnforced 28m kubelet Updated Node Allocatable limit across pods\n Warning InvalidDiskCapacity 28m kubelet invalid capacity 0 on image filesystem\n Normal NodeHasNoDiskPressure 28m (x2 over 28m) kubelet Node k8s-upgrade-and-conformance-7gf7we-control-plane-jsdzr status is now: NodeHasNoDiskPressure\n Normal NodeReady 27m kubelet Node k8s-upgrade-and-conformance-7gf7we-control-plane-jsdzr status is now: NodeReady\n" Apr 22 14:09:24.128: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-4047 describe namespace kubectl-4047' Apr 22 14:09:24.203: INFO: stderr: "" Apr 22 14:09:24.203: INFO: stdout: "Name: kubectl-4047\nLabels: e2e-framework=kubectl\n e2e-run=8b1e114b-4930-46de-ba37-b140f0705bbd\n kubernetes.io/metadata.name=kubectl-4047\nAnnotations: <none>\nStatus: Active\n\nNo resource quota.\n\nNo LimitRange resource.\n" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 22 14:09:24.203: INFO: 
Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-4047" for this suite. • ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance]","total":-1,"completed":75,"skipped":1509,"failed":4,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]"]} ------------------------------ [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 22 14:09:24.234: INFO: >>> kubeConfig: /tmp/kubeconfig STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:244 [BeforeEach] Kubectl replace /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1573 [It] should update a single-container pod's image [Conformance] 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 �[1mSTEP�[0m: running the image k8s.gcr.io/e2e-test-images/httpd:2.4.38-2 Apr 22 14:09:24.257: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-8359 run e2e-test-httpd-pod --image=k8s.gcr.io/e2e-test-images/httpd:2.4.38-2 --pod-running-timeout=2m0s --labels=run=e2e-test-httpd-pod' Apr 22 14:09:24.330: INFO: stderr: "" Apr 22 14:09:24.330: INFO: stdout: "pod/e2e-test-httpd-pod created\n" �[1mSTEP�[0m: verifying the pod e2e-test-httpd-pod is running �[1mSTEP�[0m: verifying the pod e2e-test-httpd-pod was created Apr 22 14:09:29.384: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-8359 get pod e2e-test-httpd-pod -o json' Apr 22 14:09:29.518: INFO: stderr: "" Apr 22 14:09:29.518: INFO: stdout: "{\n \"apiVersion\": \"v1\",\n \"kind\": \"Pod\",\n \"metadata\": {\n \"creationTimestamp\": \"2022-04-22T14:09:24Z\",\n \"labels\": {\n \"run\": \"e2e-test-httpd-pod\"\n },\n \"name\": \"e2e-test-httpd-pod\",\n \"namespace\": \"kubectl-8359\",\n \"resourceVersion\": \"15813\",\n \"uid\": \"6d718973-eb70-4e6f-bcfa-57ac5841c47d\"\n },\n \"spec\": {\n \"containers\": [\n {\n \"image\": \"k8s.gcr.io/e2e-test-images/httpd:2.4.38-2\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"name\": \"e2e-test-httpd-pod\",\n \"resources\": {},\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"volumeMounts\": [\n {\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n \"name\": \"kube-api-access-q6kl9\",\n \"readOnly\": true\n }\n ]\n }\n ],\n \"dnsPolicy\": \"ClusterFirst\",\n \"enableServiceLinks\": true,\n \"nodeName\": \"k8s-upgrade-and-conformance-7gf7we-md-0-7868f4c856-kmb2d\",\n \"preemptionPolicy\": \"PreemptLowerPriority\",\n \"priority\": 0,\n \"restartPolicy\": \"Always\",\n \"schedulerName\": \"default-scheduler\",\n 
\"securityContext\": {},\n \"serviceAccount\": \"default\",\n \"serviceAccountName\": \"default\",\n \"terminationGracePeriodSeconds\": 30,\n \"tolerations\": [\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n },\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n }\n ],\n \"volumes\": [\n {\n \"name\": \"kube-api-access-q6kl9\",\n \"projected\": {\n \"defaultMode\": 420,\n \"sources\": [\n {\n \"serviceAccountToken\": {\n \"expirationSeconds\": 3607,\n \"path\": \"token\"\n }\n },\n {\n \"configMap\": {\n \"items\": [\n {\n \"key\": \"ca.crt\",\n \"path\": \"ca.crt\"\n }\n ],\n \"name\": \"kube-root-ca.crt\"\n }\n },\n {\n \"downwardAPI\": {\n \"items\": [\n {\n \"fieldRef\": {\n \"apiVersion\": \"v1\",\n \"fieldPath\": \"metadata.namespace\"\n },\n \"path\": \"namespace\"\n }\n ]\n }\n }\n ]\n }\n }\n ]\n },\n \"status\": {\n \"conditions\": [\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2022-04-22T14:09:24Z\",\n \"status\": \"True\",\n \"type\": \"Initialized\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2022-04-22T14:09:25Z\",\n \"status\": \"True\",\n \"type\": \"Ready\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2022-04-22T14:09:25Z\",\n \"status\": \"True\",\n \"type\": \"ContainersReady\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2022-04-22T14:09:24Z\",\n \"status\": \"True\",\n \"type\": \"PodScheduled\"\n }\n ],\n \"containerStatuses\": [\n {\n \"containerID\": \"containerd://56c62a4ae3ff5ab3fe43e5ba3459ae53aeb03d6cab9da6444b05ccdf37ffc033\",\n \"image\": \"k8s.gcr.io/e2e-test-images/httpd:2.4.38-2\",\n \"imageID\": \"k8s.gcr.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3\",\n \"lastState\": {},\n \"name\": \"e2e-test-httpd-pod\",\n \"ready\": true,\n 
\"restartCount\": 0,\n \"started\": true,\n \"state\": {\n \"running\": {\n \"startedAt\": \"2022-04-22T14:09:25Z\"\n }\n }\n }\n ],\n \"hostIP\": \"172.18.0.4\",\n \"phase\": \"Running\",\n \"podIP\": \"192.168.0.61\",\n \"podIPs\": [\n {\n \"ip\": \"192.168.0.61\"\n }\n ],\n \"qosClass\": \"BestEffort\",\n \"startTime\": \"2022-04-22T14:09:24Z\"\n }\n}\n" �[1mSTEP�[0m: replace the image in the pod Apr 22 14:09:29.518: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-8359 replace -f -' Apr 22 14:09:29.754: INFO: stderr: "" Apr 22 14:09:29.754: INFO: stdout: "pod/e2e-test-httpd-pod replaced\n" �[1mSTEP�[0m: verifying the pod e2e-test-httpd-pod has the right image k8s.gcr.io/e2e-test-images/busybox:1.29-2 [AfterEach] Kubectl replace /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1577 Apr 22 14:09:29.762: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-8359 delete pods e2e-test-httpd-pod' Apr 22 14:09:31.956: INFO: stderr: "" Apr 22 14:09:31.956: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 22 14:09:31.957: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "kubectl-8359" for this suite. 
• ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image [Conformance]","total":-1,"completed":76,"skipped":1526,"failed":4,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]"]} ------------------------------ [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 22 14:09:32.055: INFO: >>> kubeConfig: /tmp/kubeconfig STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [It] works for CRD without validation schema [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 Apr 22 14:09:32.092: INFO: >>> kubeConfig: /tmp/kubeconfig STEP: client-side validation (kubectl create and apply) allows 
request with any unknown properties Apr 22 14:09:34.250: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=crd-publish-openapi-1023 --namespace=crd-publish-openapi-1023 create -f -' Apr 22 14:09:35.109: INFO: stderr: "" Apr 22 14:09:35.109: INFO: stdout: "e2e-test-crd-publish-openapi-2326-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n" Apr 22 14:09:35.109: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=crd-publish-openapi-1023 --namespace=crd-publish-openapi-1023 delete e2e-test-crd-publish-openapi-2326-crds test-cr' Apr 22 14:09:35.183: INFO: stderr: "" Apr 22 14:09:35.183: INFO: stdout: "e2e-test-crd-publish-openapi-2326-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n" Apr 22 14:09:35.183: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=crd-publish-openapi-1023 --namespace=crd-publish-openapi-1023 apply -f -' Apr 22 14:09:35.387: INFO: stderr: "" Apr 22 14:09:35.387: INFO: stdout: "e2e-test-crd-publish-openapi-2326-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n" Apr 22 14:09:35.387: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=crd-publish-openapi-1023 --namespace=crd-publish-openapi-1023 delete e2e-test-crd-publish-openapi-2326-crds test-cr' Apr 22 14:09:35.461: INFO: stderr: "" Apr 22 14:09:35.461: INFO: stdout: "e2e-test-crd-publish-openapi-2326-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n" �[1mSTEP�[0m: kubectl explain works to explain CR without validation schema Apr 22 14:09:35.461: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=crd-publish-openapi-1023 explain e2e-test-crd-publish-openapi-2326-crds' Apr 22 14:09:35.635: INFO: stderr: "" Apr 22 14:09:35.635: INFO: stdout: "KIND: e2e-test-crd-publish-openapi-2326-crd\nVERSION: crd-publish-openapi-test-empty.example.com/v1\n\nDESCRIPTION:\n <empty>\n" [AfterEach] 
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 22 14:09:37.800: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-1023" for this suite. • ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance]","total":-1,"completed":77,"skipped":1562,"failed":4,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]"]} ------------------------------ {"msg":"FAILED [sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance]","total":-1,"completed":6,"skipped":149,"failed":3,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","[sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance]","[sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance]"]} [BeforeEach] [sig-cli] Kubectl client 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 22 14:02:21.803: INFO: >>> kubeConfig: /tmp/kubeconfig STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:244 [BeforeEach] Update Demo /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:296 [It] should scale a replication controller [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: creating a replication controller Apr 22 14:02:21.831: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-1335 create -f -' Apr 22 14:02:22.023: INFO: stderr: "" Apr 22 14:02:22.023: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. Apr 22 14:02:22.023: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-1335 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Apr 22 14:02:22.115: INFO: stderr: "" Apr 22 14:02:22.115: INFO: stdout: "update-demo-nautilus-54vb7 update-demo-nautilus-j5p9p " Apr 22 14:02:22.115: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-1335 get pods update-demo-nautilus-54vb7 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}}' Apr 22 14:02:22.188: INFO: stderr: "" Apr 22 14:02:22.188: INFO: stdout: "" Apr 22 14:02:22.188: INFO: update-demo-nautilus-54vb7 is created but not running Apr 22 14:02:27.190: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-1335 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Apr 22 14:02:27.262: INFO: stderr: "" Apr 22 14:02:27.262: INFO: stdout: "update-demo-nautilus-54vb7 update-demo-nautilus-j5p9p " Apr 22 14:02:27.262: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-1335 get pods update-demo-nautilus-54vb7 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Apr 22 14:02:27.331: INFO: stderr: "" Apr 22 14:02:27.331: INFO: stdout: "true" Apr 22 14:02:27.331: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-1335 get pods update-demo-nautilus-54vb7 -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' Apr 22 14:02:27.402: INFO: stderr: "" Apr 22 14:02:27.402: INFO: stdout: "k8s.gcr.io/e2e-test-images/nautilus:1.5" Apr 22 14:02:27.402: INFO: validating pod update-demo-nautilus-54vb7 Apr 22 14:06:01.769: INFO: update-demo-nautilus-54vb7 is running right image but validator function failed: the server is currently unable to handle the request (get pods update-demo-nautilus-54vb7) Apr 22 14:06:06.771: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-1335 get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo' Apr 22 14:06:06.855: INFO: stderr: "" Apr 22 14:06:06.855: INFO: stdout: "update-demo-nautilus-54vb7 update-demo-nautilus-j5p9p " Apr 22 14:06:06.855: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-1335 get pods update-demo-nautilus-54vb7 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}}' Apr 22 14:06:06.926: INFO: stderr: "" Apr 22 14:06:06.926: INFO: stdout: "true" Apr 22 14:06:06.926: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-1335 get pods update-demo-nautilus-54vb7 -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}}' Apr 22 14:06:06.993: INFO: stderr: "" Apr 22 14:06:06.993: INFO: stdout: "k8s.gcr.io/e2e-test-images/nautilus:1.5" Apr 22 14:06:06.993: INFO: validating pod update-demo-nautilus-54vb7 Apr 22 14:09:40.905: INFO: update-demo-nautilus-54vb7 is running right image but validator function failed: the server is currently unable to handle the request (get pods update-demo-nautilus-54vb7) Apr 22 14:09:45.906: FAIL: Timed out after 300 seconds waiting for name=update-demo pods to reach valid state Full Stack Trace k8s.io/kubernetes/test/e2e/kubectl.glob..func1.6.3() /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:327 +0x22f k8s.io/kubernetes/test/e2e.RunE2ETests(0x24e2677) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:133 +0x697 k8s.io/kubernetes/test/e2e.TestE2E(0x0) _output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:136 +0x19 testing.tRunner(0xc0004c4d00, 0x73a1f18) /usr/local/go/src/testing/testing.go:1259 +0x102 created by testing.(*T).Run /usr/local/go/src/testing/testing.go:1306 +0x35a �[1mSTEP�[0m: using delete to clean up resources Apr 22 14:09:45.906: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-1335 delete --grace-period=0 --force -f -' Apr 22 14:09:45.981: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Apr 22 14:09:45.981: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" Apr 22 14:09:45.981: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-1335 get rc,svc -l name=update-demo --no-headers' Apr 22 14:09:46.082: INFO: stderr: "No resources found in kubectl-1335 namespace.\n" Apr 22 14:09:46.082: INFO: stdout: "" Apr 22 14:09:46.082: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-1335 get pods -l name=update-demo -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Apr 22 14:09:46.176: INFO: stderr: "" Apr 22 14:09:46.176: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 22 14:09:46.176: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "kubectl-1335" for this suite. 
• Failure [444.383 seconds] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Update Demo /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:294 should scale a replication controller [Conformance] [It] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 Apr 22 14:09:45.906: Timed out after 300 seconds waiting for name=update-demo pods to reach valid state /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:327 ------------------------------ {"msg":"FAILED [sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance]","total":-1,"completed":6,"skipped":149,"failed":4,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","[sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance]","[sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance]","[sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance]"]} ------------------------------ [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 22 14:09:37.855: INFO: >>> kubeConfig: /tmp/kubeconfig STEP: Building a namespace api object, basename pods STEP: 
Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:189 [It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: creating the pod STEP: submitting the pod to kubernetes Apr 22 14:09:37.885: INFO: The status of Pod pod-update-activedeadlineseconds-79adf5bb-2be9-4d3f-9723-c67eeb7da9c1 is Pending, waiting for it to be Running (with Ready = true) Apr 22 14:09:39.890: INFO: The status of Pod pod-update-activedeadlineseconds-79adf5bb-2be9-4d3f-9723-c67eeb7da9c1 is Running (Ready = true) STEP: verifying the pod is in kubernetes STEP: updating the pod Apr 22 14:09:40.408: INFO: Successfully updated pod "pod-update-activedeadlineseconds-79adf5bb-2be9-4d3f-9723-c67eeb7da9c1" Apr 22 14:09:40.408: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-79adf5bb-2be9-4d3f-9723-c67eeb7da9c1" in namespace "pods-6398" to be "terminated due to deadline exceeded" Apr 22 14:09:40.417: INFO: Pod "pod-update-activedeadlineseconds-79adf5bb-2be9-4d3f-9723-c67eeb7da9c1": Phase="Running", Reason="", readiness=true. Elapsed: 9.466298ms Apr 22 14:09:42.422: INFO: Pod "pod-update-activedeadlineseconds-79adf5bb-2be9-4d3f-9723-c67eeb7da9c1": Phase="Running", Reason="", readiness=true. Elapsed: 2.014082471s Apr 22 14:09:44.426: INFO: Pod "pod-update-activedeadlineseconds-79adf5bb-2be9-4d3f-9723-c67eeb7da9c1": Phase="Running", Reason="", readiness=true. Elapsed: 4.017621546s Apr 22 14:09:46.430: INFO: Pod "pod-update-activedeadlineseconds-79adf5bb-2be9-4d3f-9723-c67eeb7da9c1": Phase="Failed", Reason="DeadlineExceeded", readiness=false. 
Elapsed: 6.021961465s Apr 22 14:09:46.430: INFO: Pod "pod-update-activedeadlineseconds-79adf5bb-2be9-4d3f-9723-c67eeb7da9c1" satisfied condition "terminated due to deadline exceeded" [AfterEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 22 14:09:46.430: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-6398" for this suite. • ------------------------------ {"msg":"PASSED [sig-node] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]","total":-1,"completed":78,"skipped":1591,"failed":4,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]"]} ------------------------------ [BeforeEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 22 14:09:46.232: INFO: >>> kubeConfig: /tmp/kubeconfig STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to 
be provisioned in namespace �[1mSTEP�[0m: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 �[1mSTEP�[0m: Creating secret with name secret-test-map-1d55c3fe-58d0-41a5-8394-b81297792d40 �[1mSTEP�[0m: Creating a pod to test consume secrets Apr 22 14:09:46.261: INFO: Waiting up to 5m0s for pod "pod-secrets-ca7b338a-625e-4538-91a0-f3861aeb3f2a" in namespace "secrets-9852" to be "Succeeded or Failed" Apr 22 14:09:46.264: INFO: Pod "pod-secrets-ca7b338a-625e-4538-91a0-f3861aeb3f2a": Phase="Pending", Reason="", readiness=false. Elapsed: 3.215428ms Apr 22 14:09:48.269: INFO: Pod "pod-secrets-ca7b338a-625e-4538-91a0-f3861aeb3f2a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008284495s Apr 22 14:09:50.272: INFO: Pod "pod-secrets-ca7b338a-625e-4538-91a0-f3861aeb3f2a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011090702s �[1mSTEP�[0m: Saw pod success Apr 22 14:09:50.272: INFO: Pod "pod-secrets-ca7b338a-625e-4538-91a0-f3861aeb3f2a" satisfied condition "Succeeded or Failed" Apr 22 14:09:50.274: INFO: Trying to get logs from node k8s-upgrade-and-conformance-7gf7we-md-0-7868f4c856-k4p2k pod pod-secrets-ca7b338a-625e-4538-91a0-f3861aeb3f2a container secret-volume-test: <nil> �[1mSTEP�[0m: delete the pod Apr 22 14:09:50.288: INFO: Waiting for pod pod-secrets-ca7b338a-625e-4538-91a0-f3861aeb3f2a to disappear Apr 22 14:09:50.290: INFO: Pod pod-secrets-ca7b338a-625e-4538-91a0-f3861aeb3f2a no longer exists [AfterEach] [sig-storage] Secrets /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 22 14:09:50.290: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "secrets-9852" for this suite. 
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":-1,"completed":7,"skipped":180,"failed":4,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","[sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance]","[sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance]","[sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance]"]}
------------------------------
[BeforeEach] [sig-node] Kubelet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 22 14:09:50.304: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-node] Kubelet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/kubelet.go:38
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/kubelet.go:82
[It] should be possible to delete [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[AfterEach] [sig-node] Kubelet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 22 14:09:50.339: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-8666" for this suite.
------------------------------
{"msg":"PASSED [sig-node] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance]","total":-1,"completed":8,"skipped":184,"failed":4,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","[sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance]","[sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance]","[sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance]"]}
------------------------------
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 22 14:09:50.390: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Apr 22 14:09:51.010: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Apr 22 14:09:54.034: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should mutate custom resource with pruning [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
Apr 22 14:09:54.037: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Registering the mutating webhook for custom resource e2e-test-webhook-4837-crds.webhook.example.com via the AdmissionRegistration API
STEP: Creating a custom resource that should be mutated by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 22 14:09:57.139: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-6513" for this suite.
STEP: Destroying namespace "webhook-6513-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","total":-1,"completed":9,"skipped":203,"failed":4,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","[sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance]","[sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance]","[sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance]"]}
------------------------------
[BeforeEach] [sig-storage] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 22 14:09:57.209: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating configMap with name configmap-test-volume-map-e38da304-b6e9-4958-825c-b38f295243d4
STEP: Creating a pod to test consume configMaps
Apr 22 14:09:57.261: INFO: Waiting up to 5m0s for pod "pod-configmaps-d4071446-0a7f-4ea7-9c74-2b5e42d90764" in namespace "configmap-6602" to be "Succeeded or Failed"
Apr 22 14:09:57.266: INFO: Pod "pod-configmaps-d4071446-0a7f-4ea7-9c74-2b5e42d90764": Phase="Pending", Reason="", readiness=false. Elapsed: 4.191453ms
Apr 22 14:09:59.269: INFO: Pod "pod-configmaps-d4071446-0a7f-4ea7-9c74-2b5e42d90764": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008089639s
Apr 22 14:10:01.274: INFO: Pod "pod-configmaps-d4071446-0a7f-4ea7-9c74-2b5e42d90764": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013017645s
STEP: Saw pod success
Apr 22 14:10:01.274: INFO: Pod "pod-configmaps-d4071446-0a7f-4ea7-9c74-2b5e42d90764" satisfied condition "Succeeded or Failed"
Apr 22 14:10:01.278: INFO: Trying to get logs from node k8s-upgrade-and-conformance-7gf7we-md-0-7868f4c856-k4p2k pod pod-configmaps-d4071446-0a7f-4ea7-9c74-2b5e42d90764 container agnhost-container: <nil>
STEP: delete the pod
Apr 22 14:10:01.301: INFO: Waiting for pod pod-configmaps-d4071446-0a7f-4ea7-9c74-2b5e42d90764 to disappear
Apr 22 14:10:01.304: INFO: Pod pod-configmaps-d4071446-0a7f-4ea7-9c74-2b5e42d90764 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 22 14:10:01.304: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-6602" for this suite.
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":-1,"completed":10,"skipped":204,"failed":4,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","[sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance]","[sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance]","[sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance]"]}
------------------------------
[BeforeEach] [sig-apps] Deployment
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 22 14:10:01.317: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in
namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:89
[It] should run the lifecycle of a Deployment [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: creating a Deployment
STEP: waiting for Deployment to be created
STEP: waiting for all Replicas to be Ready
Apr 22 14:10:01.350: INFO: observed Deployment test-deployment in namespace deployment-1818 with ReadyReplicas 0 and labels map[test-deployment-static:true]
Apr 22 14:10:01.350: INFO: observed Deployment test-deployment in namespace deployment-1818 with ReadyReplicas 0 and labels map[test-deployment-static:true]
Apr 22 14:10:01.354: INFO: observed Deployment test-deployment in namespace deployment-1818 with ReadyReplicas 0 and labels map[test-deployment-static:true]
Apr 22 14:10:01.354: INFO: observed Deployment test-deployment in namespace deployment-1818 with ReadyReplicas 0 and labels map[test-deployment-static:true]
Apr 22 14:10:01.367: INFO: observed Deployment test-deployment in namespace deployment-1818 with ReadyReplicas 0 and labels map[test-deployment-static:true]
Apr 22 14:10:01.367: INFO: observed Deployment test-deployment in namespace deployment-1818 with ReadyReplicas 0 and labels map[test-deployment-static:true]
Apr 22 14:10:01.397: INFO: observed Deployment test-deployment in namespace deployment-1818 with ReadyReplicas 0 and labels map[test-deployment-static:true]
Apr 22 14:10:01.397: INFO: observed Deployment test-deployment in namespace deployment-1818 with ReadyReplicas 0 and labels map[test-deployment-static:true]
Apr 22 14:10:02.365: INFO: observed Deployment test-deployment in namespace deployment-1818 with ReadyReplicas 1 and labels map[test-deployment-static:true]
Apr 22 14:10:02.365: INFO: observed Deployment test-deployment in namespace deployment-1818 with ReadyReplicas 1 and labels map[test-deployment-static:true]
Apr 22 14:10:02.650: INFO: observed Deployment test-deployment in namespace deployment-1818 with ReadyReplicas 2 and labels map[test-deployment-static:true]
STEP: patching the Deployment
Apr 22 14:10:02.656: INFO: observed event type ADDED
STEP: waiting for Replicas to scale
Apr 22 14:10:02.659: INFO: observed Deployment test-deployment in namespace deployment-1818 with ReadyReplicas 0
Apr 22 14:10:02.659: INFO: observed Deployment test-deployment in namespace deployment-1818 with ReadyReplicas 0
Apr 22 14:10:02.659: INFO: observed Deployment test-deployment in namespace deployment-1818 with ReadyReplicas 0
Apr 22 14:10:02.660: INFO: observed Deployment test-deployment in namespace deployment-1818 with ReadyReplicas 0
Apr 22 14:10:02.660: INFO: observed Deployment test-deployment in namespace deployment-1818 with ReadyReplicas 0
Apr 22 14:10:02.660: INFO: observed Deployment test-deployment in namespace deployment-1818 with ReadyReplicas 0
Apr 22 14:10:02.660: INFO: observed Deployment test-deployment in namespace deployment-1818 with ReadyReplicas 0
Apr 22 14:10:02.660: INFO: observed Deployment test-deployment in namespace deployment-1818 with ReadyReplicas 0
Apr 22 14:10:02.660: INFO: observed Deployment test-deployment in namespace deployment-1818 with ReadyReplicas 1
Apr 22 14:10:02.660: INFO: observed Deployment test-deployment in namespace deployment-1818 with ReadyReplicas 1
Apr 22 14:10:02.660: INFO: observed Deployment test-deployment in namespace deployment-1818 with ReadyReplicas 2
Apr 22 14:10:02.660: INFO: observed Deployment test-deployment in namespace deployment-1818 with ReadyReplicas 2
Apr 22 14:10:02.660: INFO: observed Deployment test-deployment in namespace deployment-1818 with ReadyReplicas 2
Apr 22 14:10:02.660: INFO: observed Deployment test-deployment in namespace deployment-1818 with ReadyReplicas 2
Apr 22 14:10:02.664: INFO: observed Deployment test-deployment in namespace deployment-1818 with ReadyReplicas 2
Apr 22 14:10:02.665: INFO: observed Deployment test-deployment in namespace deployment-1818 with ReadyReplicas 2
Apr 22 14:10:02.677: INFO: observed Deployment test-deployment in namespace deployment-1818 with ReadyReplicas 2
Apr 22 14:10:02.677: INFO: observed Deployment test-deployment in namespace deployment-1818 with ReadyReplicas 2
Apr 22 14:10:02.689: INFO: observed Deployment test-deployment in namespace deployment-1818 with ReadyReplicas 1
Apr 22 14:10:02.689: INFO: observed Deployment test-deployment in namespace deployment-1818 with ReadyReplicas 1
Apr 22 14:10:02.702: INFO: observed Deployment test-deployment in namespace deployment-1818 with ReadyReplicas 1
Apr 22 14:10:02.702: INFO: observed Deployment test-deployment in namespace deployment-1818 with ReadyReplicas 1
Apr 22 14:10:03.671: INFO: observed Deployment test-deployment in namespace deployment-1818 with ReadyReplicas 2
Apr 22 14:10:03.671: INFO: observed Deployment test-deployment in namespace deployment-1818 with ReadyReplicas 2
Apr 22 14:10:03.683: INFO: observed Deployment test-deployment in namespace deployment-1818 with ReadyReplicas 1
STEP: listing Deployments
Apr 22 14:10:03.687: INFO: Found test-deployment with labels: map[test-deployment:patched test-deployment-static:true]
STEP: updating the Deployment
Apr 22 14:10:03.697: INFO: observed Deployment test-deployment in namespace deployment-1818 with ReadyReplicas 1
STEP: fetching the DeploymentStatus
Apr 22 14:10:03.709: INFO: observed Deployment test-deployment in namespace deployment-1818 with ReadyReplicas 1 and labels map[test-deployment:updated test-deployment-static:true]
Apr 22 14:10:03.709: INFO: observed Deployment test-deployment in namespace deployment-1818 with ReadyReplicas 1 and labels map[test-deployment:updated test-deployment-static:true]
Apr 22 14:10:03.723: INFO: observed Deployment test-deployment in namespace deployment-1818 with ReadyReplicas 1 and labels map[test-deployment:updated test-deployment-static:true]
Apr 22 14:10:03.738: INFO: observed Deployment test-deployment in namespace deployment-1818 with ReadyReplicas 1 and labels map[test-deployment:updated test-deployment-static:true]
Apr 22 14:10:03.745: INFO: observed Deployment test-deployment in namespace deployment-1818 with ReadyReplicas 1 and labels map[test-deployment:updated test-deployment-static:true]
Apr 22 14:10:04.674: INFO: observed Deployment test-deployment in namespace deployment-1818 with ReadyReplicas 2 and labels map[test-deployment:updated test-deployment-static:true]
Apr 22 14:10:04.694: INFO: observed Deployment test-deployment in namespace deployment-1818 with ReadyReplicas 2 and labels map[test-deployment:updated test-deployment-static:true]
Apr 22 14:10:04.710: INFO: observed Deployment test-deployment in namespace deployment-1818 with ReadyReplicas 2 and labels map[test-deployment:updated test-deployment-static:true]
Apr 22 14:10:04.719: INFO: observed Deployment test-deployment in namespace deployment-1818 with ReadyReplicas 2 and labels map[test-deployment:updated test-deployment-static:true]
Apr 22 14:10:04.732: INFO: observed Deployment test-deployment in namespace deployment-1818 with ReadyReplicas 2 and labels map[test-deployment:updated test-deployment-static:true]
Apr 22 14:10:06.387: INFO: observed Deployment test-deployment in namespace deployment-1818 with ReadyReplicas 3 and labels map[test-deployment:updated test-deployment-static:true]
STEP: patching the DeploymentStatus
STEP: fetching the DeploymentStatus
Apr 22 14:10:06.421: INFO: observed Deployment test-deployment in namespace deployment-1818 with ReadyReplicas 1
Apr 22 14:10:06.422: INFO: observed Deployment test-deployment in namespace deployment-1818 with ReadyReplicas 1
Apr 22 14:10:06.422: INFO: observed Deployment test-deployment in namespace deployment-1818 with ReadyReplicas 1
Apr 22 14:10:06.422: INFO: observed Deployment test-deployment in namespace deployment-1818 with ReadyReplicas 1
Apr 22 14:10:06.422: INFO: observed Deployment test-deployment in namespace deployment-1818 with ReadyReplicas 1
Apr 22 14:10:06.422: INFO: observed Deployment test-deployment in namespace deployment-1818 with ReadyReplicas 2
Apr 22 14:10:06.422: INFO: observed Deployment test-deployment in namespace deployment-1818 with ReadyReplicas 2
Apr 22 14:10:06.422: INFO: observed Deployment test-deployment in namespace deployment-1818 with ReadyReplicas 2
Apr 22 14:10:06.422: INFO: observed Deployment test-deployment in namespace deployment-1818 with ReadyReplicas 2
Apr 22 14:10:06.422: INFO: observed Deployment test-deployment in namespace deployment-1818 with ReadyReplicas 2
Apr 22 14:10:06.423: INFO: observed Deployment test-deployment in namespace deployment-1818 with ReadyReplicas 3
STEP: deleting the Deployment
Apr 22 14:10:06.432: INFO: observed event type MODIFIED
Apr 22 14:10:06.432: INFO: observed event type MODIFIED
Apr 22 14:10:06.432: INFO: observed event type MODIFIED
Apr 22 14:10:06.432: INFO: observed event type MODIFIED
Apr 22 14:10:06.432: INFO: observed event type MODIFIED
Apr 22 14:10:06.432: INFO: observed event type MODIFIED
Apr 22 14:10:06.432: INFO: observed event type MODIFIED
Apr 22 14:10:06.432: INFO: observed event type MODIFIED
Apr 22 14:10:06.432: INFO: observed event type MODIFIED
Apr 22 14:10:06.433: INFO: observed event type MODIFIED
Apr 22 14:10:06.433: INFO: observed event type MODIFIED
Apr 22 14:10:06.433: INFO: observed event type MODIFIED
[AfterEach] [sig-apps] Deployment
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:83
Apr 22 14:10:06.435: INFO: Log out all the ReplicaSets if there is no deployment created
Apr 22 14:10:06.444: INFO: ReplicaSet "test-deployment-5ddd8b47d8": &ReplicaSet{ObjectMeta:{test-deployment-5ddd8b47d8 deployment-1818 
d43dabb2-290b-47f1-b905-0aaf9e27c8ce 16337 4 2022-04-22 14:10:02 +0000 UTC <nil> <nil> map[pod-template-hash:5ddd8b47d8 test-deployment-static:true] map[deployment.kubernetes.io/desired-replicas:2 deployment.kubernetes.io/max-replicas:3 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-deployment 42af76aa-4576-49f8-8602-f4364849f4da 0xc001aca567 0xc001aca568}] [] [{kube-controller-manager Update apps/v1 2022-04-22 14:10:02 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:pod-template-hash":{},"f:test-deployment-static":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"42af76aa-4576-49f8-8602-f4364849f4da\"}":{}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:pod-template-hash":{},"f:test-deployment-static":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"test-deployment\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2022-04-22 14:10:06 +0000 UTC FieldsV1 {"f:status":{"f:observedGeneration":{},"f:replicas":{}}} status}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{pod-template-hash: 5ddd8b47d8,test-deployment-static: true,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[pod-template-hash:5ddd8b47d8 test-deployment-static:true] map[] [] [] []} {[] [] [{test-deployment k8s.gcr.io/pause:3.6 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent 
SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc001aca5f0 <nil> ClusterFirst map[] <nil> false false false <nil> &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] <nil> nil [] <nil> <nil> <nil> map[] [] <nil> nil}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:4,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Apr 22 14:10:06.453: INFO: pod: "test-deployment-5ddd8b47d8-8whpw": &Pod{ObjectMeta:{test-deployment-5ddd8b47d8-8whpw test-deployment-5ddd8b47d8- deployment-1818 dfcaca34-4c54-42c0-9da7-d003551f766b 16332 0 2022-04-22 14:10:02 +0000 UTC 2022-04-22 14:10:07 +0000 UTC 0xc001acaa68 map[pod-template-hash:5ddd8b47d8 test-deployment-static:true] map[] [{apps/v1 ReplicaSet test-deployment-5ddd8b47d8 d43dabb2-290b-47f1-b905-0aaf9e27c8ce 0xc001acaa97 0xc001acaa98}] [] [{kube-controller-manager Update v1 2022-04-22 14:10:02 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:pod-template-hash":{},"f:test-deployment-static":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"d43dabb2-290b-47f1-b905-0aaf9e27c8ce\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"test-deployment\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2022-04-22 14:10:03 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"192.168.2.132\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-8tggv,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:test-deployment,Image:k8s.gcr.io/pause:3.6,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Reque
sts:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-8tggv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*1,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k8s-upgrade-and-conformance-7gf7we-md-0-7868f4c856-k4p2k,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},Set
HostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-22 14:10:02 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-22 14:10:03 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-22 14:10:03 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-22 14:10:02 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.7,PodIP:192.168.2.132,StartTime:2022-04-22 14:10:02 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:test-deployment,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2022-04-22 14:10:03 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/pause:3.6,ImageID:k8s.gcr.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,ContainerID:containerd://94f12e00c2b568e7b67165ae407cf61177a8d7653a77723428962dd0aef527b5,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:192.168.2.132,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 22 14:10:06.453: INFO: ReplicaSet "test-deployment-6cdc5bc678": &ReplicaSet{ObjectMeta:{test-deployment-6cdc5bc678 deployment-1818 7d5303ce-28cb-4fe3-85c4-e66c4b53a562 16253 3 2022-04-22 14:10:01 +0000 UTC <nil> <nil> map[pod-template-hash:6cdc5bc678 test-deployment-static:true] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-deployment 42af76aa-4576-49f8-8602-f4364849f4da 0xc001aca657 
0xc001aca658}] [] [{kube-controller-manager Update apps/v1 2022-04-22 14:10:01 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:pod-template-hash":{},"f:test-deployment-static":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"42af76aa-4576-49f8-8602-f4364849f4da\"}":{}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:pod-template-hash":{},"f:test-deployment-static":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"test-deployment\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2022-04-22 14:10:03 +0000 UTC FieldsV1 {"f:status":{"f:observedGeneration":{},"f:replicas":{}}} status}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{pod-template-hash: 6cdc5bc678,test-deployment-static: true,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[pod-template-hash:6cdc5bc678 test-deployment-static:true] map[] [] [] []} {[] [] [{test-deployment k8s.gcr.io/e2e-test-images/agnhost:2.33 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc001aca6e0 <nil> ClusterFirst map[] <nil> false false false <nil> 
&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] <nil> nil [] <nil> <nil> <nil> map[] [] <nil> nil}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Apr 22 14:10:06.457: INFO: ReplicaSet "test-deployment-854fdc678": &ReplicaSet{ObjectMeta:{test-deployment-854fdc678 deployment-1818 7cb34fa7-5984-413a-8b3d-788bf74aaf81 16327 2 2022-04-22 14:10:03 +0000 UTC <nil> <nil> map[pod-template-hash:854fdc678 test-deployment-static:true] map[deployment.kubernetes.io/desired-replicas:2 deployment.kubernetes.io/max-replicas:3 deployment.kubernetes.io/revision:3] [{apps/v1 Deployment test-deployment 42af76aa-4576-49f8-8602-f4364849f4da 0xc001aca747 0xc001aca748}] [] [{kube-controller-manager Update apps/v1 2022-04-22 14:10:03 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:pod-template-hash":{},"f:test-deployment-static":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"42af76aa-4576-49f8-8602-f4364849f4da\"}":{}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:pod-template-hash":{},"f:test-deployment-static":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"test-deployment\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}} } {kube-controller-manager Update apps/v1 2022-04-22 14:10:04 +0000 UTC FieldsV1 
{"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}} status}]},Spec:ReplicaSetSpec{Replicas:*2,Selector:&v1.LabelSelector{MatchLabels:map[string]string{pod-template-hash: 854fdc678,test-deployment-static: true,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[pod-template-hash:854fdc678 test-deployment-static:true] map[] [] [] []} {[] [] [{test-deployment k8s.gcr.io/e2e-test-images/httpd:2.4.38-2 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc001aca7d0 <nil> ClusterFirst map[] <nil> false false false <nil> &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} [] nil default-scheduler [] [] <nil> nil [] <nil> <nil> <nil> map[] [] <nil> nil}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:2,FullyLabeledReplicas:2,ObservedGeneration:2,ReadyReplicas:2,AvailableReplicas:2,Conditions:[]ReplicaSetCondition{},},} Apr 22 14:10:06.464: INFO: pod: "test-deployment-854fdc678-76mfl": &Pod{ObjectMeta:{test-deployment-854fdc678-76mfl test-deployment-854fdc678- deployment-1818 bf72f6f3-38ac-401c-8d6e-79c2bb284c76 16326 0 2022-04-22 14:10:04 +0000 UTC <nil> <nil> map[pod-template-hash:854fdc678 test-deployment-static:true] map[] [{apps/v1 ReplicaSet test-deployment-854fdc678 7cb34fa7-5984-413a-8b3d-788bf74aaf81 0xc000853c47 0xc000853c48}] [] [{kube-controller-manager Update v1 2022-04-22 14:10:04 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:pod-template-hash":{},"f:test-deployment-static":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7cb34fa7-5984-413a-8b3d-788bf74aaf81\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"test-deployment\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2022-04-22 14:10:06 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"192.168.3.80\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-mmx2q,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:test-deployment,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-mmx2q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,R
unAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*1,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k8s-upgrade-and-conformance-7gf7we-worker-3u7awl,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-22 14:10:04 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-22 14:10:06 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-22 
14:10:06 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-22 14:10:04 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.6,PodIP:192.168.3.80,StartTime:2022-04-22 14:10:04 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:test-deployment,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2022-04-22 14:10:05 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,ImageID:k8s.gcr.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3,ContainerID:containerd://98a268bf03937a17aba5ece11bac65e74c068437074adbe588cf1a24b037526d,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:192.168.3.80,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Apr 22 14:10:06.464: INFO: pod: "test-deployment-854fdc678-xlx25": &Pod{ObjectMeta:{test-deployment-854fdc678-xlx25 test-deployment-854fdc678- deployment-1818 263de754-36a8-4ad9-96e7-edc7e622c9db 16290 0 2022-04-22 14:10:03 +0000 UTC <nil> <nil> map[pod-template-hash:854fdc678 test-deployment-static:true] map[] [{apps/v1 ReplicaSet test-deployment-854fdc678 7cb34fa7-5984-413a-8b3d-788bf74aaf81 0xc000853e37 0xc000853e38}] [] [{kube-controller-manager Update v1 2022-04-22 14:10:03 +0000 UTC FieldsV1 
{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:pod-template-hash":{},"f:test-deployment-static":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"7cb34fa7-5984-413a-8b3d-788bf74aaf81\"}":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"test-deployment\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}} } {kubelet Update v1 2022-04-22 14:10:04 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"192.168.2.133\"}":{".":{},"f:ip":{}}},"f:startTime":{}}} 
status}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-zkdtx,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:test-deployment,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-zkdtx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,R
unAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*1,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:k8s-upgrade-and-conformance-7gf7we-md-0-7868f4c856-k4p2k,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,OS:nil,},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-22 14:10:03 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-22 14:10:04 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2022-04-22 14:10:04 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2022-04-22 14:10:03 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.7,PodIP:192.168.2.133,StartTime:2022-04-22 14:10:03 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:test-deployment,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2022-04-22 14:10:04 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-2,ImageID:k8s.gcr.io/e2e-test-images/httpd@sha256:1b9d1b2f36cb2dbee1960e82a9344aeb11bd4c4c03abf5e1853e0559c23855e3,ContainerID:containerd://8ba2da878855676a99e23633b3bc0cb6e06ba9fe9ff2ce967cc6b2fdb2b3dcb0,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:192.168.2.133,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 22 14:10:06.465: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-1818" for this suite. 
• ------------------------------ {"msg":"PASSED [sig-apps] Deployment should run the lifecycle of a Deployment [Conformance]","total":-1,"completed":11,"skipped":206,"failed":4,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","[sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance]","[sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance]","[sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance]"]} SSSSS ------------------------------ [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 22 14:10:06.493: INFO: >>> kubeConfig: /tmp/kubeconfig STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:244 [BeforeEach] Kubectl logs /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1411 STEP: creating an pod Apr 22 14:10:06.517: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-101 run logs-generator --image=k8s.gcr.io/e2e-test-images/agnhost:2.33 --restart=Never --pod-running-timeout=2m0s -- logs-generator --log-lines-total 100 --run-duration 20s' Apr 22 14:10:06.617: INFO: stderr: "" Apr 22 14:10:06.617: INFO: stdout: "pod/logs-generator created\n" [It] should be able to retrieve and filter logs [Conformance] 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: Waiting for log generator to start. Apr 22 14:10:06.617: INFO: Waiting up to 5m0s for 1 pods to be running and ready, or succeeded: [logs-generator] Apr 22 14:10:06.617: INFO: Waiting up to 5m0s for pod "logs-generator" in namespace "kubectl-101" to be "running and ready, or succeeded" Apr 22 14:10:06.623: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 5.387044ms Apr 22 14:10:08.629: INFO: Pod "logs-generator": Phase="Running", Reason="", readiness=true. Elapsed: 2.011733452s Apr 22 14:10:08.629: INFO: Pod "logs-generator" satisfied condition "running and ready, or succeeded" Apr 22 14:10:08.629: INFO: Wanted all 1 pods to be running and ready, or succeeded. Result: true. Pods: [logs-generator] STEP: checking for a matching strings Apr 22 14:10:08.629: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-101 logs logs-generator logs-generator' Apr 22 14:10:08.721: INFO: stderr: "" Apr 22 14:10:08.721: INFO: stdout: "I0422 14:10:07.289804 1 logs_generator.go:76] 0 POST /api/v1/namespaces/ns/pods/6tg 229\nI0422 14:10:07.489960 1 logs_generator.go:76] 1 PUT /api/v1/namespaces/default/pods/mfbb 201\nI0422 14:10:07.690099 1 logs_generator.go:76] 2 PUT /api/v1/namespaces/default/pods/wntq 271\nI0422 14:10:07.890213 1 logs_generator.go:76] 3 POST /api/v1/namespaces/kube-system/pods/qgbd 353\nI0422 14:10:08.090531 1 logs_generator.go:76] 4 GET /api/v1/namespaces/default/pods/vdp 293\nI0422 14:10:08.289878 1 logs_generator.go:76] 5 POST /api/v1/namespaces/default/pods/mgh 476\nI0422 14:10:08.490276 1 logs_generator.go:76] 6 PUT /api/v1/namespaces/ns/pods/r9wt 292\nI0422 14:10:08.690695 1 logs_generator.go:76] 7 GET /api/v1/namespaces/kube-system/pods/pr8 467\n" STEP: limiting log lines Apr 22 14:10:08.722: INFO: Running '/usr/local/bin/kubectl 
--kubeconfig=/tmp/kubeconfig --namespace=kubectl-101 logs logs-generator logs-generator --tail=1' Apr 22 14:10:08.798: INFO: stderr: "" Apr 22 14:10:08.798: INFO: stdout: "I0422 14:10:08.690695 1 logs_generator.go:76] 7 GET /api/v1/namespaces/kube-system/pods/pr8 467\n" Apr 22 14:10:08.798: INFO: got output "I0422 14:10:08.690695 1 logs_generator.go:76] 7 GET /api/v1/namespaces/kube-system/pods/pr8 467\n" STEP: limiting log bytes Apr 22 14:10:08.798: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-101 logs logs-generator logs-generator --limit-bytes=1' Apr 22 14:10:08.876: INFO: stderr: "" Apr 22 14:10:08.876: INFO: stdout: "I" Apr 22 14:10:08.876: INFO: got output "I" STEP: exposing timestamps Apr 22 14:10:08.876: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-101 logs logs-generator logs-generator --tail=1 --timestamps' Apr 22 14:10:08.955: INFO: stderr: "" Apr 22 14:10:08.955: INFO: stdout: "2022-04-22T14:10:08.890246542Z I0422 14:10:08.890047 1 logs_generator.go:76] 8 POST /api/v1/namespaces/ns/pods/qzh 306\n" Apr 22 14:10:08.955: INFO: got output "2022-04-22T14:10:08.890246542Z I0422 14:10:08.890047 1 logs_generator.go:76] 8 POST /api/v1/namespaces/ns/pods/qzh 306\n" STEP: restricting to a time range Apr 22 14:10:11.455: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-101 logs logs-generator logs-generator --since=1s' Apr 22 14:10:11.537: INFO: stderr: "" Apr 22 14:10:11.537: INFO: stdout: "I0422 14:10:10.689953 1 logs_generator.go:76] 17 POST /api/v1/namespaces/default/pods/jh6 308\nI0422 14:10:10.890332 1 logs_generator.go:76] 18 PUT /api/v1/namespaces/default/pods/btx 550\nI0422 14:10:11.090808 1 logs_generator.go:76] 19 PUT /api/v1/namespaces/ns/pods/v84 274\nI0422 14:10:11.290218 1 logs_generator.go:76] 20 POST /api/v1/namespaces/default/pods/kl2 565\nI0422 14:10:11.494299 1 logs_generator.go:76] 21 GET 
/api/v1/namespaces/default/pods/7mxd 301\n" Apr 22 14:10:11.538: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-101 logs logs-generator logs-generator --since=24h' Apr 22 14:10:11.647: INFO: stderr: "" Apr 22 14:10:11.647: INFO: stdout: "I0422 14:10:07.289804 1 logs_generator.go:76] 0 POST /api/v1/namespaces/ns/pods/6tg 229\nI0422 14:10:07.489960 1 logs_generator.go:76] 1 PUT /api/v1/namespaces/default/pods/mfbb 201\nI0422 14:10:07.690099 1 logs_generator.go:76] 2 PUT /api/v1/namespaces/default/pods/wntq 271\nI0422 14:10:07.890213 1 logs_generator.go:76] 3 POST /api/v1/namespaces/kube-system/pods/qgbd 353\nI0422 14:10:08.090531 1 logs_generator.go:76] 4 GET /api/v1/namespaces/default/pods/vdp 293\nI0422 14:10:08.289878 1 logs_generator.go:76] 5 POST /api/v1/namespaces/default/pods/mgh 476\nI0422 14:10:08.490276 1 logs_generator.go:76] 6 PUT /api/v1/namespaces/ns/pods/r9wt 292\nI0422 14:10:08.690695 1 logs_generator.go:76] 7 GET /api/v1/namespaces/kube-system/pods/pr8 467\nI0422 14:10:08.890047 1 logs_generator.go:76] 8 POST /api/v1/namespaces/ns/pods/qzh 306\nI0422 14:10:09.090530 1 logs_generator.go:76] 9 POST /api/v1/namespaces/kube-system/pods/5wb 293\nI0422 14:10:09.289932 1 logs_generator.go:76] 10 POST /api/v1/namespaces/kube-system/pods/tqs 514\nI0422 14:10:09.490442 1 logs_generator.go:76] 11 PUT /api/v1/namespaces/ns/pods/6rz5 253\nI0422 14:10:09.690836 1 logs_generator.go:76] 12 PUT /api/v1/namespaces/default/pods/lvc 261\nI0422 14:10:09.890268 1 logs_generator.go:76] 13 PUT /api/v1/namespaces/ns/pods/xrk7 258\nI0422 14:10:10.090700 1 logs_generator.go:76] 14 PUT /api/v1/namespaces/default/pods/h8s 466\nI0422 14:10:10.290049 1 logs_generator.go:76] 15 PUT /api/v1/namespaces/default/pods/xwg6 335\nI0422 14:10:10.490555 1 logs_generator.go:76] 16 GET /api/v1/namespaces/ns/pods/vtx 319\nI0422 14:10:10.689953 1 logs_generator.go:76] 17 POST /api/v1/namespaces/default/pods/jh6 308\nI0422 14:10:10.890332 1 
logs_generator.go:76] 18 PUT /api/v1/namespaces/default/pods/btx 550\nI0422 14:10:11.090808 1 logs_generator.go:76] 19 PUT /api/v1/namespaces/ns/pods/v84 274\nI0422 14:10:11.290218 1 logs_generator.go:76] 20 POST /api/v1/namespaces/default/pods/kl2 565\nI0422 14:10:11.494299 1 logs_generator.go:76] 21 GET /api/v1/namespaces/default/pods/7mxd 301\n" [AfterEach] Kubectl logs /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1416 Apr 22 14:10:11.647: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=kubectl-101 delete pod logs-generator' Apr 22 14:10:12.702: INFO: stderr: "" Apr 22 14:10:12.702: INFO: stdout: "pod \"logs-generator\" deleted\n" [AfterEach] [sig-cli] Kubectl client /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 22 14:10:12.702: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-101" for this suite. 
• ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]","total":-1,"completed":12,"skipped":211,"failed":4,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","[sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance]","[sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance]","[sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance]"]} SSSSSS ------------------------------ [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 22 14:10:12.722: INFO: >>> kubeConfig: /tmp/kubeconfig STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-node] Pods /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:189 [It] should get a host IP [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: creating pod Apr 22 14:10:12.757: INFO: The status of Pod pod-hostip-f4d6955c-41d9-4497-bafd-f037e2bb746e is Pending, waiting for it to be Running (with Ready = true) Apr 22 14:10:14.762: INFO: The status of Pod pod-hostip-f4d6955c-41d9-4497-bafd-f037e2bb746e is Running (Ready = true) Apr 22 14:10:14.767: INFO: Pod pod-hostip-f4d6955c-41d9-4497-bafd-f037e2bb746e has hostIP: 172.18.0.7 [AfterEach] [sig-node] Pods 
/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 22 14:10:14.767: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-3243" for this suite. • ------------------------------ {"msg":"PASSED [sig-node] Pods should get a host IP [NodeConformance] [Conformance]","total":-1,"completed":13,"skipped":217,"failed":4,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","[sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance]","[sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance]","[sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance]"]} SSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-node] Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 22 14:10:14.797: INFO: >>> kubeConfig: /tmp/kubeconfig STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [It] should provide pod UID as env vars [NodeConformance] [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 STEP: Creating a pod to test downward api env vars Apr 22 14:10:14.825: INFO: Waiting up to 5m0s for pod "downward-api-c86b2622-c0d0-4161-adfc-a7e848af4062" in namespace "downward-api-6068" to be "Succeeded or Failed" Apr 22 14:10:14.828: INFO: Pod 
"downward-api-c86b2622-c0d0-4161-adfc-a7e848af4062": Phase="Pending", Reason="", readiness=false. Elapsed: 2.645841ms Apr 22 14:10:16.832: INFO: Pod "downward-api-c86b2622-c0d0-4161-adfc-a7e848af4062": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006957077s Apr 22 14:10:18.837: INFO: Pod "downward-api-c86b2622-c0d0-4161-adfc-a7e848af4062": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011844883s �[1mSTEP�[0m: Saw pod success Apr 22 14:10:18.837: INFO: Pod "downward-api-c86b2622-c0d0-4161-adfc-a7e848af4062" satisfied condition "Succeeded or Failed" Apr 22 14:10:18.840: INFO: Trying to get logs from node k8s-upgrade-and-conformance-7gf7we-md-0-7868f4c856-k4p2k pod downward-api-c86b2622-c0d0-4161-adfc-a7e848af4062 container dapi-container: <nil> �[1mSTEP�[0m: delete the pod Apr 22 14:10:18.854: INFO: Waiting for pod downward-api-c86b2622-c0d0-4161-adfc-a7e848af4062 to disappear Apr 22 14:10:18.857: INFO: Pod downward-api-c86b2622-c0d0-4161-adfc-a7e848af4062 no longer exists [AfterEach] [sig-node] Downward API /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 22 14:10:18.857: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "downward-api-6068" for this suite. 
• ------------------------------ {"msg":"PASSED [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance]","total":-1,"completed":14,"skipped":235,"failed":4,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","[sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance]","[sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance]","[sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance]"]} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [BeforeEach] [sig-apps] DisruptionController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 22 14:10:18.898: INFO: >>> kubeConfig: /tmp/kubeconfig STEP: Building a namespace api object, basename disruption STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in namespace [BeforeEach] [sig-apps] DisruptionController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/disruption.go:69 [BeforeEach] Listing PodDisruptionBudgets for all namespaces /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185 STEP: Creating a kubernetes client Apr 22 14:10:18.918: INFO: >>> kubeConfig: /tmp/kubeconfig STEP: Building a namespace api object, basename disruption-2 STEP: Waiting for a default service account to be provisioned in namespace STEP: Waiting for kube-root-ca.crt to be provisioned in 
namespace [It] should list and delete a collection of PodDisruptionBudgets [Conformance] /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633 �[1mSTEP�[0m: Waiting for the pdb to be processed �[1mSTEP�[0m: Waiting for the pdb to be processed �[1mSTEP�[0m: Waiting for the pdb to be processed �[1mSTEP�[0m: listing a collection of PDBs across all namespaces �[1mSTEP�[0m: listing a collection of PDBs in namespace disruption-3574 �[1mSTEP�[0m: deleting a collection of PDBs �[1mSTEP�[0m: Waiting for the PDB collection to be deleted [AfterEach] Listing PodDisruptionBudgets for all namespaces /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 22 14:10:24.993: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "disruption-2-4002" for this suite. [AfterEach] [sig-apps] DisruptionController /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186 Apr 22 14:10:25.001: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready �[1mSTEP�[0m: Destroying namespace "disruption-3574" for this suite. 
•
------------------------------
{"msg":"PASSED [sig-apps] DisruptionController Listing PodDisruptionBudgets for all namespaces should list and delete a collection of PodDisruptionBudgets [Conformance]","total":-1,"completed":15,"skipped":256,"failed":4,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","[sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance]","[sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance]","[sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance]"]}
------------------------------
[BeforeEach] [sig-storage] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 22 14:10:25.035: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Creating configMap with name configmap-test-volume-map-3b5549f8-5dd0-4713-986f-2aa2036f2889
STEP: Creating a pod to test consume configMaps
Apr 22 14:10:25.062: INFO: Waiting up to 5m0s for pod "pod-configmaps-dcb8ae51-4cba-4e95-981c-5d5cad2de70e" in namespace "configmap-9818" to be "Succeeded or Failed"
Apr 22 14:10:25.065: INFO: Pod "pod-configmaps-dcb8ae51-4cba-4e95-981c-5d5cad2de70e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.487782ms
Apr 22 14:10:27.069: INFO: Pod "pod-configmaps-dcb8ae51-4cba-4e95-981c-5d5cad2de70e": Phase="Running", Reason="", readiness=false. Elapsed: 2.006801242s
Apr 22 14:10:29.073: INFO: Pod "pod-configmaps-dcb8ae51-4cba-4e95-981c-5d5cad2de70e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010830738s
STEP: Saw pod success
Apr 22 14:10:29.073: INFO: Pod "pod-configmaps-dcb8ae51-4cba-4e95-981c-5d5cad2de70e" satisfied condition "Succeeded or Failed"
Apr 22 14:10:29.076: INFO: Trying to get logs from node k8s-upgrade-and-conformance-7gf7we-md-0-7868f4c856-k4p2k pod pod-configmaps-dcb8ae51-4cba-4e95-981c-5d5cad2de70e container agnhost-container: <nil>
STEP: delete the pod
Apr 22 14:10:29.097: INFO: Waiting for pod pod-configmaps-dcb8ae51-4cba-4e95-981c-5d5cad2de70e to disappear
Apr 22 14:10:29.100: INFO: Pod pod-configmaps-dcb8ae51-4cba-4e95-981c-5d5cad2de70e no longer exists
[AfterEach] [sig-storage] ConfigMap
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 22 14:10:29.100: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-9818" for this suite.
•
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":-1,"completed":16,"skipped":272,"failed":4,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","[sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance]","[sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance]","[sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance]"]}
------------------------------
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 22 14:10:29.224: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:749
[It] should provide secure master service [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 22 14:10:29.246: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-4990" for this suite.
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:753
•
------------------------------
{"msg":"PASSED [sig-network] Services should provide secure master service [Conformance]","total":-1,"completed":17,"skipped":360,"failed":4,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","[sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance]","[sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance]","[sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance]"]}
------------------------------
[BeforeEach] [sig-apps] ReplicaSet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 22 14:10:29.262: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should adopt matching pods on creation and release no longer matching pods [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Given a Pod with a 'name' label pod-adoption-release is created
Apr 22 14:10:29.291: INFO: The status of Pod pod-adoption-release is Pending, waiting for it to be Running (with Ready = true)
Apr 22 14:10:31.295: INFO: The status of Pod pod-adoption-release is Running (Ready = true)
STEP: When a replicaset with a matching selector is created
STEP: Then the orphan pod is adopted
STEP: When the matched label of one of its pods change
Apr 22 14:10:32.312: INFO: Pod name pod-adoption-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicaSet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 22 14:10:33.329: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replicaset-3343" for this suite.
•
------------------------------
{"msg":"PASSED [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance]","total":-1,"completed":18,"skipped":364,"failed":4,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","[sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance]","[sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance]","[sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance]"]}
------------------------------
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 22 14:10:33.344: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:749
[It] should have session affinity work for NodePort service [LinuxOnly] [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: creating service in namespace services-9644
STEP: creating service affinity-nodeport in namespace services-9644
STEP: creating replication controller affinity-nodeport in namespace services-9644
I0422 14:10:33.389912      16 runners.go:193] Created replication controller with name: affinity-nodeport, namespace: services-9644, replica count: 3
I0422 14:10:36.441724      16 runners.go:193] affinity-nodeport Pods: 3 out of 3 created, 3 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
Apr 22 14:10:36.452: INFO: Creating new exec pod
Apr 22 14:10:39.472: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-9644 exec execpod-affinity5q5bc -- /bin/sh -x -c echo hostName | nc -v -t -w 2 affinity-nodeport 80'
Apr 22 14:10:39.647: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 affinity-nodeport 80\nConnection to affinity-nodeport 80 port [tcp/http] succeeded!\n"
Apr 22 14:10:39.647: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request"
Apr 22 14:10:39.647: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-9644 exec execpod-affinity5q5bc -- /bin/sh -x -c echo hostName | nc -v -t -w 2 10.135.252.162 80'
Apr 22 14:10:39.816: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 10.135.252.162 80\nConnection to 10.135.252.162 80 port [tcp/http] succeeded!\n"
Apr 22 14:10:39.816: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request"
Apr 22 14:10:39.816: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-9644 exec execpod-affinity5q5bc -- /bin/sh -x -c echo hostName | nc -v -t -w 2 172.18.0.6 31532'
Apr 22 14:10:39.986: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 172.18.0.6 31532\nConnection to 172.18.0.6 31532 port [tcp/*] succeeded!\n"
Apr 22 14:10:39.986: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request"
Apr 22 14:10:39.986: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-9644 exec execpod-affinity5q5bc -- /bin/sh -x -c echo hostName | nc -v -t -w 2 172.18.0.4 31532'
Apr 22 14:10:40.141: INFO: stderr: "+ echo hostName\n+ nc -v -t -w 2 172.18.0.4 31532\nConnection to 172.18.0.4 31532 port [tcp/*] succeeded!\n"
Apr 22 14:10:40.141: INFO: stdout: "HTTP/1.1 400 Bad Request\r\nContent-Type: text/plain; charset=utf-8\r\nConnection: close\r\n\r\n400 Bad Request"
Apr 22 14:10:40.141: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig --namespace=services-9644 exec execpod-affinity5q5bc -- /bin/sh -x -c for i in $(seq 0 15); do echo; curl -q -s --connect-timeout 2 http://172.18.0.7:31532/ ; done'
Apr 22 14:10:40.377: INFO: stderr: "+ seq 0 15\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.7:31532/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.7:31532/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.7:31532/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.7:31532/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.7:31532/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.7:31532/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.7:31532/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.7:31532/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.7:31532/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.7:31532/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.7:31532/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.7:31532/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.7:31532/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.7:31532/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.7:31532/\n+ echo\n+ curl -q -s --connect-timeout 2 http://172.18.0.7:31532/\n"
Apr 22 14:10:40.377: INFO: stdout: "\naffinity-nodeport-cb2dv\naffinity-nodeport-cb2dv\naffinity-nodeport-cb2dv\naffinity-nodeport-cb2dv\naffinity-nodeport-cb2dv\naffinity-nodeport-cb2dv\naffinity-nodeport-cb2dv\naffinity-nodeport-cb2dv\naffinity-nodeport-cb2dv\naffinity-nodeport-cb2dv\naffinity-nodeport-cb2dv\naffinity-nodeport-cb2dv\naffinity-nodeport-cb2dv\naffinity-nodeport-cb2dv\naffinity-nodeport-cb2dv\naffinity-nodeport-cb2dv"
Apr 22 14:10:40.377: INFO: Received response from host: affinity-nodeport-cb2dv
Apr 22 14:10:40.377: INFO: Received response from host: affinity-nodeport-cb2dv
Apr 22 14:10:40.377: INFO: Received response from host: affinity-nodeport-cb2dv
Apr 22 14:10:40.377: INFO: Received response from host: affinity-nodeport-cb2dv
Apr 22 14:10:40.377: INFO: Received response from host: affinity-nodeport-cb2dv
Apr 22 14:10:40.377: INFO: Received response from host: affinity-nodeport-cb2dv
Apr 22 14:10:40.377: INFO: Received response from host: affinity-nodeport-cb2dv
Apr 22 14:10:40.377: INFO: Received response from host: affinity-nodeport-cb2dv
Apr 22 14:10:40.377: INFO: Received response from host: affinity-nodeport-cb2dv
Apr 22 14:10:40.377: INFO: Received response from host: affinity-nodeport-cb2dv
Apr 22 14:10:40.377: INFO: Received response from host: affinity-nodeport-cb2dv
Apr 22 14:10:40.377: INFO: Received response from host: affinity-nodeport-cb2dv
Apr 22 14:10:40.377: INFO: Received response from host: affinity-nodeport-cb2dv
Apr 22 14:10:40.377: INFO: Received response from host: affinity-nodeport-cb2dv
Apr 22 14:10:40.377: INFO: Received response from host: affinity-nodeport-cb2dv
Apr 22 14:10:40.377: INFO: Received response from host: affinity-nodeport-cb2dv
Apr 22 14:10:40.378: INFO: Cleaning up the exec pod
STEP: deleting ReplicationController affinity-nodeport in namespace services-9644, will wait for the garbage collector to delete the pods
Apr 22 14:10:40.446: INFO: Deleting ReplicationController affinity-nodeport took: 5.05071ms
Apr 22 14:10:40.546: INFO: Terminating ReplicationController affinity-nodeport pods took: 100.474455ms
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 22 14:10:42.869: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-9644" for this suite.
[AfterEach] [sig-network] Services
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:753
•
------------------------------
{"msg":"PASSED [sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","total":-1,"completed":19,"skipped":367,"failed":4,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","[sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance]","[sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance]","[sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance]"]}
------------------------------
[BeforeEach] [sig-apps] ReplicaSet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 22 14:10:42.926: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[It] should validate Replicaset Status endpoints [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: Create a Replicaset
STEP: Verify that the required pods have come up.
Apr 22 14:10:42.961: INFO: Pod name sample-pod: Found 0 pods out of 1
Apr 22 14:10:47.966: INFO: Pod name sample-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
STEP: Getting /status
Apr 22 14:10:47.971: INFO: Replicaset test-rs has Conditions: []
STEP: updating the Replicaset Status
Apr 22 14:10:47.978: INFO: updatedStatus.Conditions: []v1.ReplicaSetCondition{v1.ReplicaSetCondition{Type:"StatusUpdate", Status:"True", LastTransitionTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Reason:"E2E", Message:"Set from e2e test"}}
STEP: watching for the ReplicaSet status to be updated
Apr 22 14:10:47.980: INFO: Observed &ReplicaSet event: ADDED
Apr 22 14:10:47.981: INFO: Observed &ReplicaSet event: MODIFIED
Apr 22 14:10:47.982: INFO: Observed &ReplicaSet event: MODIFIED
Apr 22 14:10:47.982: INFO: Observed &ReplicaSet event: MODIFIED
Apr 22 14:10:47.982: INFO: Found replicaset test-rs in namespace replicaset-4079 with labels: map[name:sample-pod pod:httpd] annotations: map[] & Conditions: [{StatusUpdate True 0001-01-01 00:00:00 +0000 UTC E2E Set from e2e test}]
Apr 22 14:10:47.982: INFO: Replicaset test-rs has an updated status
STEP: patching the Replicaset Status
Apr 22 14:10:47.982: INFO: Patch payload: {"status":{"conditions":[{"type":"StatusPatched","status":"True"}]}}
Apr 22 14:10:47.987: INFO: Patched status conditions: []v1.ReplicaSetCondition{v1.ReplicaSetCondition{Type:"StatusPatched", Status:"True", LastTransitionTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Reason:"", Message:""}}
STEP: watching for the Replicaset status to be patched
Apr 22 14:10:47.990: INFO: Observed &ReplicaSet event: ADDED
Apr 22 14:10:47.990: INFO: Observed &ReplicaSet event: MODIFIED
Apr 22 14:10:47.990: INFO: Observed &ReplicaSet event: MODIFIED
Apr 22 14:10:47.990: INFO: Observed &ReplicaSet event: MODIFIED
Apr 22 14:10:47.990: INFO: Observed replicaset test-rs in namespace replicaset-4079 with annotations: map[] & Conditions: {StatusUpdate True 0001-01-01 00:00:00 +0000 UTC E2E Set from e2e test}
Apr 22 14:10:47.990: INFO: Observed &ReplicaSet event: MODIFIED
Apr 22 14:10:47.990: INFO: Found replicaset test-rs in namespace replicaset-4079 with labels: map[name:sample-pod pod:httpd] annotations: map[] & Conditions: {StatusPatched True 0001-01-01 00:00:00 +0000 UTC  }
Apr 22 14:10:47.990: INFO: Replicaset test-rs has a patched status
[AfterEach] [sig-apps] ReplicaSet
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
Apr 22 14:10:47.990: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replicaset-4079" for this suite.
•
------------------------------
{"msg":"PASSED [sig-apps] ReplicaSet should validate Replicaset Status endpoints [Conformance]","total":-1,"completed":20,"skipped":402,"failed":4,"failures":["[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","[sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance]","[sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance]","[sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance]"]}
------------------------------
{"msg":"FAILED [sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]","total":-1,"completed":39,"skipped":826,"failed":3,"failures":["[sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]","[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","[sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]"]}
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
STEP: Creating a kubernetes client
Apr 22 14:03:43.684: INFO: >>> kubeConfig: /tmp/kubeconfig
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Waiting for kube-root-ca.crt to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:244
[BeforeEach] Update Demo
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:296
[It] should create and stop a replication controller [Conformance]
  /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:633
STEP: creating a replication controller
Apr 22 14:03:43.708: INFO: R