error during ../vertical-pod-autoscaler/hack/run-e2e.sh actuation: exit status 1
from junit_runner.xml
Kubernetes e2e suite [It] [sig-autoscaling] [VPA] [actuation] [v1] Actuation applies recommendations only on restart when update mode is Initial
Kubernetes e2e suite [It] [sig-autoscaling] [VPA] [actuation] [v1] Actuation by default does not evict pods in a 1-Pod CronJob
Kubernetes e2e suite [It] [sig-autoscaling] [VPA] [actuation] [v1] Actuation by default does not evict pods in a 1-Pod Deployment
Kubernetes e2e suite [It] [sig-autoscaling] [VPA] [actuation] [v1] Actuation by default does not evict pods in a 1-Pod Job
Kubernetes e2e suite [It] [sig-autoscaling] [VPA] [actuation] [v1] Actuation by default does not evict pods in a 1-Pod ReplicaSet
Kubernetes e2e suite [It] [sig-autoscaling] [VPA] [actuation] [v1] Actuation by default does not evict pods in a 1-Pod ReplicationController
Kubernetes e2e suite [It] [sig-autoscaling] [VPA] [actuation] [v1] Actuation by default does not evict pods in a 1-Pod StatefulSet
Kubernetes e2e suite [It] [sig-autoscaling] [VPA] [actuation] [v1] Actuation does not act on injected sidecars
Kubernetes e2e suite [It] [sig-autoscaling] [VPA] [actuation] [v1] Actuation evicts pods in a multiple-replica CronJob
Kubernetes e2e suite [It] [sig-autoscaling] [VPA] [actuation] [v1] Actuation evicts pods in a multiple-replica Deployment
Kubernetes e2e suite [It] [sig-autoscaling] [VPA] [actuation] [v1] Actuation evicts pods in a multiple-replica Job
Kubernetes e2e suite [It] [sig-autoscaling] [VPA] [actuation] [v1] Actuation evicts pods in a multiple-replica ReplicaSet
Kubernetes e2e suite [It] [sig-autoscaling] [VPA] [actuation] [v1] Actuation evicts pods in a multiple-replica ReplicationController
Kubernetes e2e suite [It] [sig-autoscaling] [VPA] [actuation] [v1] Actuation evicts pods in a multiple-replica StatefulSet
Kubernetes e2e suite [It] [sig-autoscaling] [VPA] [actuation] [v1] Actuation never applies recommendations when update mode is Off
Kubernetes e2e suite [It] [sig-autoscaling] [VPA] [actuation] [v1] Actuation observes container max in LimitRange
Kubernetes e2e suite [It] [sig-autoscaling] [VPA] [actuation] [v1] Actuation observes container min in LimitRange
Kubernetes e2e suite [It] [sig-autoscaling] [VPA] [actuation] [v1] Actuation observes pod disruption budget
Kubernetes e2e suite [It] [sig-autoscaling] [VPA] [actuation] [v1] Actuation observes pod max in LimitRange
Kubernetes e2e suite [It] [sig-autoscaling] [VPA] [actuation] [v1] Actuation observes pod min in LimitRange
Kubernetes e2e suite [It] [sig-autoscaling] [VPA] [actuation] [v1] Actuation stops when pods get pending
Kubernetes e2e suite [It] [sig-autoscaling] [VPA] [actuation] [v1] Actuation when configured, evicts pods in a 1-Pod CronJob
Kubernetes e2e suite [It] [sig-autoscaling] [VPA] [actuation] [v1] Actuation when configured, evicts pods in a 1-Pod Deployment
Kubernetes e2e suite [It] [sig-autoscaling] [VPA] [actuation] [v1] Actuation when configured, evicts pods in a 1-Pod Job
Kubernetes e2e suite [It] [sig-autoscaling] [VPA] [actuation] [v1] Actuation when configured, evicts pods in a 1-Pod ReplicaSet
Kubernetes e2e suite [It] [sig-autoscaling] [VPA] [actuation] [v1] Actuation when configured, evicts pods in a 1-Pod ReplicationController
Kubernetes e2e suite [It] [sig-autoscaling] [VPA] [actuation] [v1] Actuation when configured, evicts pods in a 1-Pod StatefulSet
Kubernetes e2e suite [SynchronizedAfterSuite]
Kubernetes e2e suite [SynchronizedBeforeSuite]
kubetest Check APIReachability
kubetest Deferred TearDown
kubetest DumpClusterLogs
kubetest Extract
kubetest GetDeployer
kubetest Prepare
kubetest TearDown
kubetest TearDown Previous
kubetest Timeout
kubetest Up
kubetest diffResources
kubetest list nodes
kubetest listResources After
kubetest listResources Before
kubetest listResources Down
kubetest listResources Up
kubetest test setup
Kubernetes e2e suite [It] [sig-autoscaling] [VPA] [admission-controller] [v1] Admission-controller accepts valid and rejects invalid VPA object
Kubernetes e2e suite [It] [sig-autoscaling] [VPA] [admission-controller] [v1] Admission-controller caps request according to container max limit set in LimitRange
Kubernetes e2e suite [It] [sig-autoscaling] [VPA] [admission-controller] [v1] Admission-controller caps request according to pod max limit set in LimitRange
Kubernetes e2e suite [It] [sig-autoscaling] [VPA] [admission-controller] [v1] Admission-controller caps request to max set in VPA
Kubernetes e2e suite [It] [sig-autoscaling] [VPA] [admission-controller] [v1] Admission-controller doesn't block patches
Kubernetes e2e suite [It] [sig-autoscaling] [VPA] [admission-controller] [v1] Admission-controller keeps limits equal to request
Kubernetes e2e suite [It] [sig-autoscaling] [VPA] [admission-controller] [v1] Admission-controller keeps limits to request ratio constant
Kubernetes e2e suite [It] [sig-autoscaling] [VPA] [admission-controller] [v1] Admission-controller keeps limits unchanged when container controlled values is requests only
Kubernetes e2e suite [It] [sig-autoscaling] [VPA] [admission-controller] [v1] Admission-controller leaves users request when no recommendation
Kubernetes e2e suite [It] [sig-autoscaling] [VPA] [admission-controller] [v1] Admission-controller passes empty request when no recommendation and no user-specified request
Kubernetes e2e suite [It] [sig-autoscaling] [VPA] [admission-controller] [v1] Admission-controller raises request according to container min limit set in LimitRange
Kubernetes e2e suite [It] [sig-autoscaling] [VPA] [admission-controller] [v1] Admission-controller raises request according to pod min limit set in LimitRange
Kubernetes e2e suite [It] [sig-autoscaling] [VPA] [admission-controller] [v1] Admission-controller raises request to min set in VPA
Kubernetes e2e suite [It] [sig-autoscaling] [VPA] [admission-controller] [v1] Admission-controller starts pod with default request when one container has a recommendation and one other one doesn't when a limit range applies
Kubernetes e2e suite [It] [sig-autoscaling] [VPA] [admission-controller] [v1] Admission-controller starts pod with recommendation when one container has a recommendation and one other one doesn't
Kubernetes e2e suite [It] [sig-autoscaling] [VPA] [admission-controller] [v1] Admission-controller starts pods with default request when recommendation includes an extra container when a limit range applies
Kubernetes e2e suite [It] [sig-autoscaling] [VPA] [admission-controller] [v1] Admission-controller starts pods with new recommended request
Kubernetes e2e suite [It] [sig-autoscaling] [VPA] [admission-controller] [v1] Admission-controller starts pods with new recommended request when recommendation includes an extra container
Kubernetes e2e suite [It] [sig-autoscaling] [VPA] [admission-controller] [v1] Admission-controller starts pods with old recommended request when recommendation has only a container that doesn't match
Kubernetes e2e suite [It] [sig-autoscaling] [VPA] [admission-controller] [v1] Admission-controller starts pods with old recommended request when recommendation has only a container that doesn't match when a limit range applies
Kubernetes e2e suite [It] [sig-autoscaling] [VPA] [full-vpa] [v1] OOMing pods under VPA have memory requests growing with OOMs
Kubernetes e2e suite [It] [sig-autoscaling] [VPA] [full-vpa] [v1] Pods under VPA have cpu requests growing with usage
Kubernetes e2e suite [It] [sig-autoscaling] [VPA] [full-vpa] [v1] Pods under VPA have memory requests growing with usage
Kubernetes e2e suite [It] [sig-autoscaling] [VPA] [full-vpa] [v1] Pods under VPA with default recommender explicitly configured have cpu requests growing with usage
Kubernetes e2e suite [It] [sig-autoscaling] [VPA] [full-vpa] [v1] Pods under VPA with non-recognized recommender explicitly configured deployment not updated by non-recognized recommender
Kubernetes e2e suite [It] [sig-autoscaling] [VPA] [recommender] [v1] Checkpoints with missing VPA objects are garbage collected
Kubernetes e2e suite [It] [sig-autoscaling] [VPA] [recommender] [v1] VPA CRD object doesn't drop lower/upper after recommender's restart
Kubernetes e2e suite [It] [sig-autoscaling] [VPA] [recommender] [v1] VPA CRD object only containers not-opted-out get recommendations
Kubernetes e2e suite [It] [sig-autoscaling] [VPA] [recommender] [v1] VPA CRD object respects max allowed recommendation
Kubernetes e2e suite [It] [sig-autoscaling] [VPA] [recommender] [v1] VPA CRD object respects min allowed recommendation
Kubernetes e2e suite [It] [sig-autoscaling] [VPA] [recommender] [v1] VPA CRD object serves recommendation
Kubernetes e2e suite [It] [sig-autoscaling] [VPA] [recommender] [v1] VPA CRD object serves recommendation for CronJob
Kubernetes e2e suite [It] [sig-autoscaling] [VPA] [recommender] [v1] VPA CRD object with no containers opted out all containers get recommendations
Kubernetes e2e suite [It] [sig-autoscaling] [VPA] [updater] [v1] Updater doesn't evict pods when Admission Controller status unavailable
Kubernetes e2e suite [It] [sig-autoscaling] [VPA] [updater] [v1] Updater evicts pods when Admission Controller status available
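The VPA behaviors exercised above (recommend-only mode, apply-on-restart, and eviction-based actuation) are selected by the `updatePolicy.updateMode` field of the VerticalPodAutoscaler object, and the LimitRange/min/max tests constrain it through `resourcePolicy`. A minimal sketch of such an object, with hypothetical `hamster-vpa` / `hamster` names and illustrative resource bounds:

```yaml
apiVersion: autoscaling.k8s.io/v1
kind: VerticalPodAutoscaler
metadata:
  name: hamster-vpa          # hypothetical name
spec:
  targetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: hamster            # hypothetical target workload
  updatePolicy:
    updateMode: "Initial"    # "Off" = recommend only, never apply;
                             # "Initial" = apply only when pods are (re)started;
                             # "Recreate"/"Auto" = evict pods to apply recommendations
  resourcePolicy:
    containerPolicies:
    - containerName: "*"
      minAllowed:            # floor for recommendations ("respects min allowed")
        cpu: 100m
        memory: 50Mi
      maxAllowed:            # cap for recommendations ("respects max allowed")
        cpu: "1"
        memory: 500Mi
```

The admission-controller tests above then check that pod requests created under this object are capped or raised to these bounds and to any namespace LimitRange.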
Kubernetes e2e suite [It] [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: sctp [LinuxOnly][Feature:SCTPConnectivity][Disruptive]
Kubernetes e2e suite [It] [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-network] Networking Granular Checks: Pods should function for node-pod communication: sctp [LinuxOnly][Feature:SCTPConnectivity][Disruptive]
Kubernetes e2e suite [It] [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance]
Kubernetes e2e suite [It] [sig-node] ConfigMap should run through a ConfigMap lifecycle [Conformance]
Kubernetes e2e suite [It] [sig-node] ConfigMap should update ConfigMap successfully
Kubernetes e2e suite [It] [sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart https hook properly [MinimumKubeletVersion:1.23] [NodeConformance]
Kubernetes e2e suite [It] [sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop https hook properly [MinimumKubeletVersion:1.23] [NodeConformance]
Kubernetes e2e suite [It] [sig-node] Container Runtime blackbox test on terminated container should report termination message as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Container Runtime blackbox test on terminated container should report termination message from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Container Runtime blackbox test on terminated container should report termination message from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Container Runtime blackbox test on terminated container should report termination message if TerminationMessagePath is set [NodeConformance]
Kubernetes e2e suite [It] [sig-node] Container Runtime blackbox test on terminated container should report termination message if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Container Runtime blackbox test when running a container with a new image should be able to pull from private registry with secret [NodeConformance]
Kubernetes e2e suite [It] [sig-node] Container Runtime blackbox test when running a container with a new image should be able to pull image [NodeConformance]
Kubernetes e2e suite [It] [sig-node] Container Runtime blackbox test when running a container with a new image should not be able to pull from private registry without secret [NodeConformance]
Kubernetes e2e suite [It] [sig-node] Container Runtime blackbox test when running a container with a new image should not be able to pull image from invalid registry [NodeConformance]
Kubernetes e2e suite [It] [sig-node] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Containers should be able to override the image's default arguments (container cmd) [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Containers should be able to override the image's default command (container entrypoint) [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Downward API [Serial] [Disruptive] [NodeFeature:DownwardAPIHugePages] Downward API tests for hugepages should provide container's limits.hugepages-<pagesize> and requests.hugepages-<pagesize> as env vars
Kubernetes e2e suite [It] [sig-node] Downward API [Serial] [Disruptive] [NodeFeature:DownwardAPIHugePages] Downward API tests for hugepages should provide default limits.hugepages-<pagesize> from node allocatable
Kubernetes e2e suite [It] [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Downward API should provide host IP and pod IP as an env var if pod uses host network [LinuxOnly]
Kubernetes e2e suite [It] [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Ephemeral Containers [NodeConformance] will start an ephemeral container in an existing pod [Conformance]
Kubernetes e2e suite [It] [sig-node] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance]
Kubernetes e2e suite [It] [sig-node] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance]
Kubernetes e2e suite [It] [sig-node] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
Kubernetes e2e suite [It] [sig-node] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance]
Kubernetes e2e suite [It] [sig-node] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Kubelet when scheduling an agnhost Pod with hostAliases should write entries to /etc/hosts [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Lease lease API should be available [Conformance]
Kubernetes e2e suite [It] [sig-node] NodeLease NodeLease should have OwnerReferences set
Kubernetes e2e suite [It] [sig-node] NodeLease NodeLease the kubelet should create and update a lease in the kube-node-lease namespace
Kubernetes e2e suite [It] [sig-node] NodeLease NodeLease the kubelet should report node status infrequently
Kubernetes e2e suite [It] [sig-node] PodOSRejection [NodeConformance] Kubelet should reject pod when the node OS doesn't match pod's OS
Kubernetes e2e suite [It] [sig-node] PodTemplates should delete a collection of pod templates [Conformance]
Kubernetes e2e suite [It] [sig-node] PodTemplates should replace a pod template [Conformance]
Kubernetes e2e suite [It] [sig-node] PodTemplates should run the lifecycle of PodTemplates [Conformance]
Kubernetes e2e suite [It] [sig-node] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Pods should be submitted and removed [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Pods should be updated [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Pods should cap back-off at MaxContainerBackOff [Slow][NodeConformance]
Kubernetes e2e suite [It] [sig-node] Pods should contain environment variables for services [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Pods should delete a collection of pods [Conformance]
Kubernetes e2e suite [It] [sig-node] Pods should get a host IP [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Pods should have their auto-restart back-off timer reset on image update [Slow][NodeConformance]
Kubernetes e2e suite [It] [sig-node] Pods should patch a pod status [Conformance]
Kubernetes e2e suite [It] [sig-node] Pods should run through the lifecycle of Pods and PodStatus [Conformance]
Kubernetes e2e suite [It] [sig-node] Pods should support pod readiness gates [NodeConformance]
Kubernetes e2e suite [It] [sig-node] Pods should support remote command execution over websockets [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] PrivilegedPod [NodeConformance] should enable privileged commands [LinuxOnly]
Kubernetes e2e suite [It] [sig-node] Probing container should *not* be restarted by liveness probe because startup probe delays it
Kubernetes e2e suite [It] [sig-node] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Probing container should *not* be restarted with a GRPC liveness probe [NodeConformance]
Kubernetes e2e suite [It] [sig-node] Probing container should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Probing container should *not* be restarted with a non-local redirect http liveness probe
Kubernetes e2e suite [It] [sig-node] Probing container should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Probing container should be ready immediately after startupProbe succeeds
Kubernetes e2e suite [It] [sig-node] Probing container should be restarted by liveness probe after startup probe enables it
Kubernetes e2e suite [It] [sig-node] Probing container should be restarted startup probe fails
Kubernetes e2e suite [It] [sig-node] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Probing container should be restarted with a GRPC liveness probe [NodeConformance]
Kubernetes e2e suite [It] [sig-node] Probing container should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Probing container should be restarted with a failing exec liveness probe that took longer than the timeout
Kubernetes e2e suite [It] [sig-node] Probing container should be restarted with a local redirect http liveness probe
Kubernetes e2e suite [It] [sig-node] Probing container should be restarted with an exec liveness probe with timeout [MinimumKubeletVersion:1.20] [NodeConformance]
Kubernetes e2e suite [It] [sig-node] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Probing container should mark readiness on pods to false and disable liveness probes while pod is in progress of terminating
Kubernetes e2e suite [It] [sig-node] Probing container should mark readiness on pods to false while pod is in progress of terminating when a pod has a readiness probe
Kubernetes e2e suite [It] [sig-node] Probing container should not be ready with an exec readiness probe timeout [MinimumKubeletVersion:1.20] [NodeConformance]
Kubernetes e2e suite [It] [sig-node] Probing container should override timeoutGracePeriodSeconds when LivenessProbe field is set [Feature:ProbeTerminationGracePeriod]
Kubernetes e2e suite [It] [sig-node] Probing container should override timeoutGracePeriodSeconds when StartupProbe field is set [Feature:ProbeTerminationGracePeriod]
Kubernetes e2e suite [It] [sig-node] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
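The probe tests above cover the interaction where a startup probe suppresses the liveness probe until it first succeeds. A minimal sketch of a pod spec with that shape, assuming the agnhost `netexec` test image and hypothetical names:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: probe-demo          # hypothetical name
spec:
  containers:
  - name: app
    # assumption: agnhost netexec serving HTTP on port 8080
    image: registry.k8s.io/e2e-test-images/agnhost:2.43
    args: ["netexec", "--http-port=8080"]
    startupProbe:           # liveness checks do not run until this succeeds
      httpGet:
        path: /
        port: 8080
      failureThreshold: 30  # allows up to 30 x 2s = 60s for slow startup
      periodSeconds: 2
    livenessProbe:          # after startup succeeds, repeated failures restart the container
      httpGet:
        path: /
        port: 8080
      periodSeconds: 5
```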
Kubernetes e2e suite [It] [sig-node] RuntimeClass should support RuntimeClasses API operations [Conformance]
Kubernetes e2e suite [It] [sig-node] RuntimeClass should reject a Pod requesting a RuntimeClass with an unconfigured handler [NodeFeature:RuntimeHandler]
Kubernetes e2e suite [It] [sig-node] RuntimeClass should reject a Pod requesting a deleted RuntimeClass [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] RuntimeClass should reject a Pod requesting a non-existent RuntimeClass [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] RuntimeClass should run a Pod requesting a RuntimeClass with a configured handler [NodeFeature:RuntimeHandler]
Kubernetes e2e suite [It] [sig-node] RuntimeClass should schedule a Pod requesting a RuntimeClass and initialize its Overhead [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] RuntimeClass should schedule a Pod requesting a RuntimeClass without PodOverhead [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Secrets should be consumable via the environment [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Secrets should fail to create secret due to empty secret key [Conformance]
Kubernetes e2e suite [It] [sig-node] Secrets should patch a secret [Conformance]
Kubernetes e2e suite [It] [sig-node] Security Context When creating a container with runAsNonRoot should not run with an explicit root user ID [LinuxOnly]
Kubernetes e2e suite [It] [sig-node] Security Context When creating a container with runAsNonRoot should not run without a specified user ID
Kubernetes e2e suite [It] [sig-node] Security Context When creating a container with runAsNonRoot should run with an explicit non-root user ID [LinuxOnly]
Kubernetes e2e suite [It] [sig-node] Security Context When creating a container with runAsNonRoot should run with an image specified user ID
Kubernetes e2e suite [It] [sig-node] Security Context When creating a container with runAsUser should run the container with uid 0 [LinuxOnly] [NodeConformance]
Kubernetes e2e suite [It] [sig-node] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Security Context When creating a pod with HostUsers must create the user namespace if set to false [LinuxOnly] [Feature:UserNamespacesStatelessPodsSupport]
Kubernetes e2e suite [It] [sig-node] Security Context When creating a pod with HostUsers must not create the user namespace if set to true [LinuxOnly] [Feature:UserNamespacesStatelessPodsSupport]
Kubernetes e2e suite [It] [sig-node] Security Context When creating a pod with HostUsers should mount all volumes with proper permissions with hostUsers=false [LinuxOnly] [Feature:UserNamespacesStatelessPodsSupport]
Kubernetes e2e suite [It] [sig-node] Security Context When creating a pod with HostUsers should set FSGroup to user inside the container with hostUsers=false [LinuxOnly] [Feature:UserNamespacesStatelessPodsSupport]
Kubernetes e2e suite [It] [sig-node] Security Context When creating a pod with privileged should run the container as privileged when true [LinuxOnly] [NodeFeature:HostAccess]
Kubernetes e2e suite [It] [sig-node] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Security Context When creating a pod with readOnlyRootFilesystem should run the container with readonly rootfs when readOnlyRootFilesystem=true [LinuxOnly] [NodeConformance]
Kubernetes e2e suite [It] [sig-node] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Security Context when creating containers with AllowPrivilegeEscalation should allow privilege escalation when not explicitly set and uid != 0 [LinuxOnly] [NodeConformance]
Kubernetes e2e suite [It] [sig-node] Security Context when creating containers with AllowPrivilegeEscalation should allow privilege escalation when true [LinuxOnly] [NodeConformance]
Kubernetes e2e suite [It] [sig-node] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Sysctls [LinuxOnly] [NodeConformance] should not launch unsafe, but not explicitly enabled sysctls on the node [MinimumKubeletVersion:1.21]
Kubernetes e2e suite [It] [sig-node] Sysctls [LinuxOnly] [NodeConformance] should reject invalid sysctls [MinimumKubeletVersion:1.21] [Conformance]
Kubernetes e2e suite [It] [sig-node] Sysctls [LinuxOnly] [NodeConformance] should support sysctls [MinimumKubeletVersion:1.21] [Conformance]
Kubernetes e2e suite [It] [sig-node] Sysctls [LinuxOnly] [NodeConformance] should support sysctls with slashes as separator [MinimumKubeletVersion:1.23]
Kubernetes e2e suite [It] [sig-node] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-node] Variable Expansion should allow substituting values in a volume subpath [Conformance]
Kubernetes e2e suite [It] [sig-node] Variable Expansion should fail substituting values in a volume subpath with absolute path [Slow] [Conformance]
Kubernetes e2e suite [It] [sig-node] Variable Expansion should fail substituting values in a volume subpath with backticks [Slow] [Conformance]
Kubernetes e2e suite [It] [sig-node] Variable Expansion should succeed in writing subpaths in container [Slow] [Conformance]
Kubernetes e2e suite [It] [sig-node] Variable Expansion should verify that a failing subpath expansion can be modified during the lifecycle of a container [Slow] [Conformance]
Kubernetes e2e suite [It] [sig-storage] ConfigMap Should fail non-optional pod creation due to configMap object does not exist [Slow]
Kubernetes e2e suite [It] [sig-storage] ConfigMap Should fail non-optional pod creation due to the key in the configMap object does not exist [Slow]
Kubernetes e2e suite [It] [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] ConfigMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] ConfigMap should be consumable from pods in volume as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup]
Kubernetes e2e suite [It] [sig-storage] ConfigMap should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeFeature:FSGroup]
Kubernetes e2e suite [It] [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup]
Kubernetes e2e suite [It] [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] ConfigMap should be immutable if `immutable` field is set [Conformance]
Kubernetes e2e suite [It] [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] Downward API [Serial] [Disruptive] [Feature:EphemeralStorage] Downward API tests for local ephemeral storage should provide container's limits.ephemeral-storage and requests.ephemeral-storage as env vars
Kubernetes e2e suite [It] [sig-storage] Downward API [Serial] [Disruptive] [Feature:EphemeralStorage] Downward API tests for local ephemeral storage should provide default limits.ephemeral-storage from node allocatable
Kubernetes e2e suite [It] [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] Downward API volume should provide podname as non-root with fsgroup [LinuxOnly] [NodeFeature:FSGroup]
Kubernetes e2e suite [It] [sig-storage] Downward API volume should provide podname as non-root with fsgroup and defaultMode [LinuxOnly] [NodeFeature:FSGroup]
Kubernetes e2e suite [It] [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] EmptyDir volumes pod should support memory backed volumes of specified size
Kubernetes e2e suite [It] [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance]
Kubernetes e2e suite [It] [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] EmptyDir volumes when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] files with FSGroup ownership should support (root,0644,tmpfs)
Kubernetes e2e suite [It] [sig-storage] EmptyDir volumes when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] new files should be created with FSGroup ownership when container is non-root
Kubernetes e2e suite [It] [sig-storage] EmptyDir volumes when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] new files should be created with FSGroup ownership when container is root
Kubernetes e2e suite [It] [sig-storage] EmptyDir volumes when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] nonexistent volume subPath should have the correct mode and owner using FSGroup
Kubernetes e2e suite [It] [sig-storage] EmptyDir volumes when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] volume on default medium should have the correct mode using FSGroup
Kubernetes e2e suite [It] [sig-storage] EmptyDir volumes when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] volume on tmpfs should have the correct mode using FSGroup
Kubernetes e2e suite [It] [sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance]
Kubernetes e2e suite [It] [sig-storage] HostPath should support r/w [NodeConformance]
Kubernetes e2e suite [It] [sig-storage] HostPath should support subPath [NodeConformance]
Kubernetes e2e suite [It] [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] Projected configMap Should fail non-optional pod creation due to configMap object does not exist [Slow]
Kubernetes e2e suite [It] [sig-storage] Projected configMap Should fail non-optional pod creation due to the key in the configMap object does not exist [Slow]
Kubernetes e2e suite [It] [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] Projected configMap should be consumable from pods in volume as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup]
Kubernetes e2e suite [It] [sig-storage] Projected configMap should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeFeature:FSGroup]
Kubernetes e2e suite [It] [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup]
Kubernetes e2e suite [It] [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] Projected downwardAPI should provide podname as non-root with fsgroup [LinuxOnly] [NodeFeature:FSGroup]
Kubernetes e2e suite [It] [sig-storage] Projected downwardAPI should provide podname as non-root with fsgroup and defaultMode [LinuxOnly] [NodeFeature:FSGroup]
Kubernetes e2e suite [It] [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] Projected secret Should fail non-optional pod creation due to secret object does not exist [Slow]
Kubernetes e2e suite [It] [sig-storage] Projected secret Should fail non-optional pod creation due to the key in the secret object does not exist [Slow]
Kubernetes e2e suite [It] [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] Projected secret should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance]
Kubernetes e2e suite [It] [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] Secrets Should fail non-optional pod creation due to secret object does not exist [Slow]
Kubernetes e2e suite [It] [sig-storage] Secrets Should fail non-optional pod creation due to the key in the secret object does not exist [Slow]
Kubernetes e2e suite [It] [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
Kubernetes e2e suite [It] [sig-storage] Secrets should be immutable if `immutable` field is set [Conformance]
Kubernetes e2e suite [It] [sig-storage] Volumes NFSv3 should be mountable for NFSv3
Kubernetes e2e suite [It] [sig-storage] Volumes NFSv4 should be mountable for NFSv4
... skipping 398 lines ...
W0326 09:25:17.989] Looking for address 'bootstrap-e2e-master-ip'
W0326 09:25:17.989] Using master: bootstrap-e2e-master (external IP: 35.247.71.114; internal IP: (not set))
I0326 09:25:18.090] Group is stable
I0326 09:25:18.090] Waiting up to 300 seconds for cluster initialization.
I0326 09:25:23.017]
I0326 09:25:23.017] This will continually check to see if the API for kubernetes is reachable.
I0326 09:25:23.017] This may time out if there was some uncaught error during start up.
I0326 09:25:23.017]
I0326 09:25:55.840] .........Kubernetes cluster created.
I0326 09:25:55.970] Cluster "k8s-gce-1-5_bootstrap-e2e" set.
I0326 09:25:56.098] User "k8s-gce-1-5_bootstrap-e2e" set.
I0326 09:25:56.227] Context "k8s-gce-1-5_bootstrap-e2e" created.
I0326 09:25:56.356] Switched to context "k8s-gce-1-5_bootstrap-e2e".
... skipping 23 lines ...
I0326 09:26:35.586] bootstrap-e2e-minion-group-chkn   Ready   <none>   16s   v1.28.0-alpha.0.9+8f15859afc9cfa
I0326 09:26:35.586] bootstrap-e2e-minion-group-q94j   Ready   <none>   13s   v1.28.0-alpha.0.9+8f15859afc9cfa
I0326 09:26:35.586] bootstrap-e2e-minion-group-wczw   Ready   <none>   9s    v1.28.0-alpha.0.9+8f15859afc9cfa
I0326 09:26:35.586] Validate output:
W0326 09:26:35.771] Warning: v1 ComponentStatus is deprecated in v1.19+
W0326 09:26:35.778] Done, listing cluster services:
I0326 09:26:35.879] NAME                 STATUS    MESSAGE                         ERROR
I0326 09:26:36.037] etcd-0               Healthy   {"health":"true","reason":""}
I0326 09:26:36.038] etcd-1               Healthy   {"health":"true","reason":""}
I0326 09:26:36.038] controller-manager   Healthy   ok
I0326 09:26:36.038] scheduler            Healthy   ok
I0326 09:26:36.038] Cluster validation succeeded
I0326 09:26:36.038] Kubernetes control plane is running at https://35.247.71.114
... skipping 45 lines ...
W0326 09:27:02.730] To show all fields in table format, please see the examples in --help.
W0326 09:27:02.730]
W0326 09:27:02.730] Listed 0 items.
W0326 09:27:04.408] Listed 0 items.
W0326 09:27:05.628] 2023/03/26 09:27:05 process.go:155: Step './cluster/gce/list-resources.sh' finished in 16.222897855s
W0326 09:27:05.628] 2023/03/26 09:27:05 process.go:153: Running: ../vertical-pod-autoscaler/hack/run-e2e.sh actuation
W0326 09:27:05.982] Error from server (NotFound): error when deleting "STDIN": customresourcedefinitions.apiextensions.k8s.io "verticalpodautoscalercheckpoints.autoscaling.k8s.io" not found
W0326 09:27:06.850] Error from server (NotFound): error when deleting "STDIN": customresourcedefinitions.apiextensions.k8s.io "verticalpodautoscalers.autoscaling.k8s.io" not found
W0326 09:27:06.851] Error from server (NotFound): error when deleting "STDIN": clusterroles.rbac.authorization.k8s.io "system:metrics-reader" not found
W0326 09:27:07.044] Error from server (NotFound): error when deleting "STDIN": clusterroles.rbac.authorization.k8s.io "system:vpa-actor" not found
W0326 09:27:07.045] Error from server (NotFound): error when deleting "STDIN": clusterroles.rbac.authorization.k8s.io "system:vpa-checkpoint-actor" not found
W0326 09:27:07.045] Error from server (NotFound): error when deleting "STDIN": clusterroles.rbac.authorization.k8s.io "system:evictioner" not found
W0326 09:27:07.045] Error from server (NotFound): error when deleting "STDIN": clusterrolebindings.rbac.authorization.k8s.io "system:metrics-reader" not found
W0326 09:27:07.045] Error from server (NotFound): error when deleting "STDIN": clusterrolebindings.rbac.authorization.k8s.io "system:vpa-actor" not found
W0326 09:27:07.045] Error from server (NotFound): error when deleting "STDIN": clusterrolebindings.rbac.authorization.k8s.io "system:vpa-checkpoint-actor" not found
W0326 09:27:07.045] Error from server (NotFound): error when deleting "STDIN": clusterroles.rbac.authorization.k8s.io "system:vpa-target-reader" not found
W0326 09:27:07.045] Error from server (NotFound): error when deleting "STDIN": clusterrolebindings.rbac.authorization.k8s.io "system:vpa-target-reader-binding" not found
W0326 09:27:07.045] Error from server (NotFound): error when deleting "STDIN": clusterrolebindings.rbac.authorization.k8s.io "system:vpa-evictioner-binding" not found
W0326 09:27:07.045] Error from server (NotFound): error when deleting "STDIN": serviceaccounts "vpa-admission-controller" not found
W0326 09:27:07.045] Error from server (NotFound): error when deleting "STDIN": serviceaccounts "vpa-recommender" not found
W0326 09:27:07.045] Error from server (NotFound): error when deleting "STDIN": serviceaccounts "vpa-updater" not found
W0326 09:27:07.045] Error from server (NotFound): error when deleting "STDIN": clusterroles.rbac.authorization.k8s.io "system:vpa-admission-controller" not found
W0326 09:27:07.045] Error from server (NotFound): error when deleting "STDIN": clusterrolebindings.rbac.authorization.k8s.io "system:vpa-admission-controller" not found
W0326 09:27:07.046] Error from server (NotFound): error when deleting "STDIN": clusterroles.rbac.authorization.k8s.io "system:vpa-status-reader" not found
W0326 09:27:07.046] Error from server (NotFound): error when deleting "STDIN": clusterrolebindings.rbac.authorization.k8s.io "system:vpa-status-reader-binding" not found
W0326 09:27:07.046] Error from server (NotFound): error when deleting "STDIN": deployments.apps "vpa-updater" not found
W0326 09:27:07.240] Error from server (NotFound): error when deleting "STDIN": deployments.apps "vpa-recommender" not found
I0326 09:27:07.341] - IPProtocol: tcp
I0326 09:27:07.341]   ports:
I0326 09:27:07.341]   - 30000-32767
I0326 09:27:07.341] - IPProtocol: udp
I0326 09:27:07.341]   ports:
I0326 09:27:07.341]   - 30000-32767
... skipping 11 lines ...
I0326 09:27:07.342] selfLink: https://www.googleapis.com/compute/v1/projects/k8s-gce-1-5/global/firewalls/bootstrap-e2e-minion-nodeports
I0326 09:27:07.342] sourceRanges:
I0326 09:27:07.342] - 0.0.0.0/0
I0326 09:27:07.342] targetTags:
I0326 09:27:07.342] - bootstrap-e2e-minion
I0326 09:27:07.342] Deleting VPA Admission Controller certs.
W0326 09:27:07.447] Error from server (NotFound): secrets "vpa-tls-certs" not found
W0326 09:27:07.523] Warning: deleting cluster-scoped resources, not scoped to the provided namespace
I0326 09:27:07.623] Unregistering VPA admission controller webhook
W0326 09:27:07.724] Error from server (NotFound): mutatingwebhookconfigurations.admissionregistration.k8s.io "vpa-webhook-config" not found
W0326 09:27:07.907] Error from server (NotFound): error when deleting "STDIN": deployments.apps "vpa-admission-controller" not found
W0326 09:27:08.192] Error from server (NotFound): error when deleting "STDIN": services "vpa-webhook" not found
W0326 09:27:08.192] resource mapping not found for name: "verticalpodautoscalers.autoscaling.k8s.io" namespace: "" from "STDIN": no matches for kind "CustomResourceDefinition" in version "apiextensions.k8s.io/v1beta1"
I0326 09:27:09.639] Configuring registry authentication
W0326 09:27:10.427] ensure CRDs are installed first
W0326 09:27:10.428] resource mapping not found for name: "verticalpodautoscalercheckpoints.autoscaling.k8s.io" namespace: "" from "STDIN": no matches for kind "CustomResourceDefinition" in version "apiextensions.k8s.io/v1beta1"
W0326 09:27:10.428] ensure CRDs are installed first
W0326 09:27:10.428] Adding credentials for all GCR repositories.
... skipping 847 lines ...
I0326 09:58:28.134] Mar 26 09:58:28.134: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:3, UpdatedReplicas:3, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:3, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 26, 9, 58, 20, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 26, 9, 58, 20, 0, time.Local), Reason:"NewReplicaSetAvailable", Message:"ReplicaSet \"hamster-deployment-5966dcf4d9\" has successfully progressed."}, v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 26, 9, 58, 25, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 26, 9, 58, 25, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}}, CollisionCount:(*int32)(nil)}
I0326 09:58:30.230] STEP: Setting up a VPA CRD 03/26/23 09:58:30.228
I0326 09:58:30.283] Mar 26 09:58:30.228: INFO: >>> kubeConfig: /workspace/.kube/config
I0326 09:58:30.289] W0326 09:58:30.283147   42392 warnings.go:70] autoscaling.k8s.io/v1beta2 API is deprecated
I0326 09:58:30.332] STEP: Setting up LimitRange with min limits - CPU: 100m, memory: 0 03/26/23 09:58:30.284
I0326 09:58:30.337] STEP: Waiting for pods to be evicted, hoping it won't happen, sleep for 3m0s 03/26/23 09:58:30.331
I0326 10:01:30.464] Mar 26 10:01:30.462: FAIL: there should be no pod evictions
I0326 10:01:30.468] Expected
I0326 10:01:30.468]     <int>: 1
I0326 10:01:30.469] to equal
I0326 10:01:30.469]     <int>: 0
I0326 10:01:30.469]
I0326 10:01:30.469] Full Stack Trace
I0326 10:01:30.469] command-line-arguments.CheckNoPodsEvicted(0x300c160?, 0xc00091b850?)
I0326 10:01:30.469] 	/go/src/k8s.io/autoscaler/vertical-pod-autoscaler/e2e/v1beta2/common.go:501 +0x2e2
I0326 10:01:30.469] command-line-arguments.glob..func1.12()
I0326 10:01:30.469] 	/go/src/k8s.io/autoscaler/vertical-pod-autoscaler/e2e/v1beta2/actuation.go:243 +0x4e5
I0326 10:01:30.470] STEP: dump namespace information after failure 03/26/23 10:01:30.463
I0326 10:01:30.507] STEP: Destroying namespace "vertical-pod-autoscaling-6963" for this suite. 03/26/23 10:01:30.463
I0326 10:01:30.507] ------------------------------
I0326 10:01:30.507] • [FAILED] [189.974 seconds]
I0326 10:01:30.507] [sig-autoscaling] [VPA] [actuation] [v1beta2] Actuation
I0326 10:01:30.508] /go/src/k8s.io/autoscaler/vertical-pod-autoscaler/e2e/v1beta2/common.go:71
I0326 10:01:30.508] [It] observes container min in LimitRange
I0326 10:01:30.508] /go/src/k8s.io/autoscaler/vertical-pod-autoscaler/e2e/v1beta2/actuation.go:227
I0326 10:01:30.508]
I0326 10:01:30.508] Begin Captured GinkgoWriter Output >>
... skipping 7 lines ...
I0326 10:01:30.508] Mar 26 09:58:28.134: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:3, UpdatedReplicas:3, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:3, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:time.Date(2023, time.March, 26, 9, 58, 20, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 26, 9, 58, 20, 0, time.Local), Reason:"NewReplicaSetAvailable", Message:"ReplicaSet \"hamster-deployment-5966dcf4d9\" has successfully progressed."}, v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:time.Date(2023, time.March, 26, 9, 58, 25, 0, time.Local), LastTransitionTime:time.Date(2023, time.March, 26, 9, 58, 25, 0, time.Local), Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}}, CollisionCount:(*int32)(nil)}
I0326 10:01:30.508] STEP: Setting up a VPA CRD 03/26/23 09:58:30.228
I0326 10:01:30.508] Mar 26 09:58:30.228: INFO: >>> kubeConfig: /workspace/.kube/config
I0326 10:01:30.508] W0326 09:58:30.283147   42392 warnings.go:70] autoscaling.k8s.io/v1beta2 API is deprecated
I0326 10:01:30.508] STEP: Setting up LimitRange with min limits - CPU: 100m, memory: 0 03/26/23 09:58:30.284
I0326 10:01:30.508] STEP: Waiting for pods to be evicted, hoping it won't happen, sleep for 3m0s 03/26/23 09:58:30.331
I0326 10:01:30.508] Mar 26 10:01:30.462: FAIL: there should be no pod evictions
I0326 10:01:30.508] Expected
I0326 10:01:30.508]     <int>: 1
I0326 10:01:30.509] to equal
I0326 10:01:30.509]     <int>: 0
I0326 10:01:30.509]
I0326 10:01:30.509] Full Stack Trace
... skipping 14 lines ...
I0326 10:01:30.636]
I0326 10:01:30.636] There were additional failures detected after the initial failure:
I0326 10:01:30.636] [PANICKED]
I0326 10:01:30.636] Test Panicked
I0326 10:01:30.636] In [DeferCleanup (Each)] at: /usr/local/go/src/runtime/panic.go:260
I0326 10:01:30.636]
I0326 10:01:30.636] runtime error: invalid memory address or nil pointer dereference
I0326 10:01:30.636]
I0326 10:01:30.636] Full Stack Trace
I0326 10:01:30.637] k8s.io/kubernetes/test/e2e/framework.(*Framework).dumpNamespaceInfo.func1()
I0326 10:01:30.637] 	/go/src/k8s.io/autoscaler/vertical-pod-autoscaler/e2e/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:274 +0x5c
I0326 10:01:30.637] k8s.io/kubernetes/test/e2e/framework.(*Framework).dumpNamespaceInfo(0xc000bac1e0)
I0326 10:01:30.637] 	/go/src/k8s.io/autoscaler/vertical-pod-autoscaler/e2e/vendor/k8s.io/kubernetes/test/e2e/framework/framework.go:271 +0x139
... skipping 174 lines ...
I0326 10:11:09.148] [ReportAfterSuite] Kubernetes e2e JUnit report
I0326 10:11:09.148] /go/src/k8s.io/autoscaler/vertical-pod-autoscaler/e2e/vendor/k8s.io/kubernetes/test/e2e/framework/test_context.go:529
I0326 10:11:09.148] ------------------------------
I0326 10:11:09.149]
I0326 10:11:09.149]
I0326 10:11:09.150] Summarizing 1 Failure:
I0326 10:11:09.150] [FAIL] [sig-autoscaling] [VPA] [actuation] [v1beta2] Actuation [It] observes container min in LimitRange
I0326 10:11:09.150] /go/src/k8s.io/autoscaler/vertical-pod-autoscaler/e2e/v1beta2/common.go:501
I0326 10:11:09.150]
I0326 10:11:09.150] Ran 15 of 283 Specs in 2306.732 seconds
I0326 10:11:09.150] FAIL! -- 14 Passed | 1 Failed | 0 Pending | 268 Skipped
I0326 10:11:09.150] --- FAIL: TestE2E (2306.75s)
I0326 10:11:09.150] FAIL
I0326 10:11:09.151] FAIL	command-line-arguments	2306.826s
I0326 10:11:09.177] FAIL
I0326 10:11:26.901] Mar 26 10:11:26.901: INFO: The --provider flag is not set. Continuing as if --provider=skeleton had been used.
I0326 10:11:26.917] === RUN   TestE2E
I0326 10:11:26.917] Running Suite: Kubernetes e2e suite - /go/src/k8s.io/autoscaler/vertical-pod-autoscaler/e2e/v1
I0326 10:11:26.918] ==============================================================================================
I0326 10:11:26.918] Random Seed: 1679825486
I0326 10:11:26.918]
... skipping 1144 lines ...
I0326 11:12:05.758] [ReportAfterSuite] PASSED [0.003 seconds]
I0326 11:12:05.758] [ReportAfterSuite] Kubernetes e2e JUnit report
I0326 11:12:05.758] /go/src/k8s.io/autoscaler/vertical-pod-autoscaler/e2e/vendor/k8s.io/kubernetes/test/e2e/framework/test_context.go:529
I0326 11:12:05.758] ------------------------------
I0326 11:12:05.758]
I0326 11:12:05.758] Ran 27 of 304 Specs in 3638.833 seconds
I0326 11:12:05.758] SUCCESS! -- 27 Passed | 0 Failed | 0 Pending | 277 Skipped
I0326 11:12:05.758] --- PASS: TestE2E (3638.85s)
I0326 11:12:05.758] PASS
I0326 11:12:05.759] ok	command-line-arguments	3638.995s
I0326 11:12:05.854] /go/src/k8s.io/autoscaler/kubernetes
I0326 11:12:05.866] v1beta2 test result: 1
I0326 11:12:05.866] Please check v1beta2 "go test" logs!
I0326 11:12:05.866] v1 test result: 0
I0326 11:12:05.866] Tests failed
I0326 11:12:05.866] Checking for custom logdump instances, if any
I0326 11:12:05.870] ----------------------------------------------------------------------------------------------------
I0326 11:12:05.871] k/k version of the log-dump.sh script is deprecated!
I0326 11:12:05.871] Please migrate your test job to use test-infra's repo version of log-dump.sh!
I0326 11:12:05.871] Migration steps can be found in the readme file.
I0326 11:12:05.871] ----------------------------------------------------------------------------------------------------
... skipping 16 lines ...
W0326 11:13:19.044] Specify --start=70407 in the next get-serial-port-output invocation to get only the new output starting from here.
W0326 11:13:19.044] scp: /var/log/cloud-controller-manager.log*: No such file or directory
W0326 11:13:19.343] scp: /var/log/cluster-autoscaler.log*: No such file or directory
W0326 11:13:19.634] scp: /var/log/fluentd.log*: No such file or directory
W0326 11:13:19.634] scp: /var/log/kubelet.cov*: No such file or directory
W0326 11:13:19.634] scp: /var/log/startupscript.log*: No such file or directory
W0326 11:13:19.641] ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
I0326 11:13:19.890] Dumping logs from nodes locally to '/workspace/_artifacts'
I0326 11:14:24.941] Detecting nodes in the cluster
I0326 11:14:24.941] Changing logfiles to be world-readable for download
I0326 11:14:25.473] Changing logfiles to be world-readable for download
I0326 11:14:25.557] Changing logfiles to be world-readable for download
I0326 11:14:29.575] Copying 'kube-proxy.log containers/konnectivity-agent-*.log fluentd.log node-problem-detector.log kubelet.cov startupscript.log' from bootstrap-e2e-minion-group-q94j
... skipping 6 lines ...
W0326 11:14:31.569]
W0326 11:14:33.695] Specify --start=138542 in the next get-serial-port-output invocation to get only the new output starting from here.
W0326 11:14:33.695] scp: /var/log/fluentd.log*: No such file or directory
W0326 11:14:33.696] scp: /var/log/node-problem-detector.log*: No such file or directory
W0326 11:14:33.696] scp: /var/log/kubelet.cov*: No such file or directory
W0326 11:14:33.700] scp: /var/log/startupscript.log*: No such file or directory
W0326 11:14:33.701] ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
W0326 11:14:34.098] scp: /var/log/fluentd.log*: No such file or directory
W0326 11:14:34.098] scp: /var/log/node-problem-detector.log*: No such file or directory
W0326 11:14:34.098] scp: /var/log/kubelet.cov*: No such file or directory
W0326 11:14:34.104] scp: /var/log/startupscript.log*: No such file or directory
W0326 11:14:34.104] ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
W0326 11:14:34.393] scp: /var/log/fluentd.log*: No such file or directory
W0326 11:14:34.394] scp: /var/log/node-problem-detector.log*: No such file or directory
W0326 11:14:34.394] scp: /var/log/kubelet.cov*: No such file or directory
W0326 11:14:34.394] scp: /var/log/startupscript.log*: No such file or directory
W0326 11:14:34.398] ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
W0326 11:14:39.953] INSTANCE_GROUPS=bootstrap-e2e-minion-group
I0326 11:14:41.511] Failures for bootstrap-e2e-minion-group (if any):
W0326 11:14:43.140] NODE_NAMES=bootstrap-e2e-minion-group-chkn bootstrap-e2e-minion-group-q94j bootstrap-e2e-minion-group-wczw
W0326 11:14:43.140] 2023/03/26 11:14:43 process.go:155: Step './cluster/log-dump/log-dump.sh /workspace/_artifacts' finished in 2m37.283350032s
W0326 11:14:55.856] 2023/03/26 11:14:43 e2e.go:476: Listing resources...
W0326 11:14:55.857] 2023/03/26 11:14:43 process.go:153: Running: ./cluster/gce/list-resources.sh
... skipping 74 lines ...
W0326 11:22:48.233] Listed 0 items.
W0326 11:22:50.040] Listed 0 items.
W0326 11:22:51.303] 2023/03/26 11:22:51 process.go:155: Step './cluster/gce/list-resources.sh' finished in 16.021845011s W0326 11:22:51.305] 2023/03/26 11:22:51 process.go:153: Running: diff -sw -U0 -F^\[.*\]$ /workspace/_artifacts/gcp-resources-before.txt /workspace/_artifacts/gcp-resources-after.txt W0326 11:22:51.307] 2023/03/26 11:22:51 process.go:155: Step 'diff -sw -U0 -F^\[.*\]$ /workspace/_artifacts/gcp-resources-before.txt /workspace/_artifacts/gcp-resources-after.txt' finished in 2.352101ms W0326 11:22:51.309] 2023/03/26 11:22:51 process.go:96: Saved XML output to /workspace/_artifacts/junit_runner.xml. W0326 11:22:51.318] 2023/03/26 11:22:51 main.go:328: Something went wrong: encountered 1 errors: [error during ../vertical-pod-autoscaler/hack/run-e2e.sh actuation: exit status 1] W0326 11:22:51.331] Traceback (most recent call last): W0326 11:22:51.331] File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_e2e.py", line 723, in <module> W0326 11:22:51.333] main(parse_args()) W0326 11:22:51.333] File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_e2e.py", line 569, in main W0326 11:22:51.334] mode.start(runner_args) W0326 11:22:51.334] File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_e2e.py", line 228, in start W0326 11:22:51.334] check_env(env, self.command, *args) W0326 11:22:51.335] File "/workspace/./test-infra/jenkins/../scenarios/kubernetes_e2e.py", line 111, in check_env W0326 11:22:51.335] subprocess.check_call(cmd, env=env) W0326 11:22:51.335] File "/usr/lib/python3.9/subprocess.py", line 373, in check_call W0326 11:22:51.335] raise CalledProcessError(retcode, cmd) W0326 11:22:51.336] subprocess.CalledProcessError: Command '('kubetest', '--dump=/workspace/_artifacts', '--gcp-service-account=/etc/service-account/service-account.json', '--up', '--down', '--provider=gce', '--cluster=bootstrap-e2e', '--gcp-network=bootstrap-e2e', '--check-leaked-resources', '--extract=ci/latest', '--gcp-node-image=gci', 
'--gcp-zone=us-west1-b', '--test-cmd=../vertical-pod-autoscaler/hack/run-e2e.sh', '--test-cmd-args=actuation', '--timeout=180m')' returned non-zero exit status 1. I0326 11:22:51.368] Done E0326 11:22:51.369] Command failed I0326 11:22:51.369] process 435 exited with code 1 after 121.5m E0326 11:22:51.369] FAIL: ci-kubernetes-e2e-autoscaling-vpa-actuation I0326 11:22:51.371] Call: gcloud auth activate-service-account --key-file=/etc/service-account/service-account.json W0326 11:22:52.286] Activated service account credentials for: [pr-kubekins@kubernetes-jenkins-pull.iam.gserviceaccount.com] I0326 11:22:52.450] process 45202 exited with code 0 after 0.0m I0326 11:22:52.450] Call: gcloud config get-value account I0326 11:22:53.339] process 45212 exited with code 0 after 0.0m I0326 11:22:53.340] Will upload results to gs://kubernetes-jenkins/logs using pr-kubekins@kubernetes-jenkins-pull.iam.gserviceaccount.com I0326 11:22:53.340] Upload result and artifacts... I0326 11:22:53.340] Gubernator results at https://gubernator.k8s.io/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-autoscaling-vpa-actuation/1639920323983839232 I0326 11:22:53.341] Call: gsutil ls gs://kubernetes-jenkins/logs/ci-kubernetes-e2e-autoscaling-vpa-actuation/1639920323983839232/artifacts W0326 11:22:54.952] CommandException: One or more URLs matched no objects. E0326 11:22:55.200] Command failed I0326 11:22:55.200] process 45222 exited with code 1 after 0.0m W0326 11:22:55.200] Remote dir gs://kubernetes-jenkins/logs/ci-kubernetes-e2e-autoscaling-vpa-actuation/1639920323983839232/artifacts not exist yet I0326 11:22:55.201] Call: gsutil -m -q -o GSUtil:use_magicfile=True cp -r -c -z log,txt,xml /workspace/_artifacts gs://kubernetes-jenkins/logs/ci-kubernetes-e2e-autoscaling-vpa-actuation/1639920323983839232/artifacts I0326 11:23:04.947] process 45356 exited with code 0 after 0.2m I0326 11:23:04.948] Call: git rev-parse HEAD I0326 11:23:04.954] process 46001 exited with code 0 after 0.0m ... 
skipping 13 lines ...
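The traceback above shows the mechanism by which the test failure becomes a job failure: the prow scenario wrapper (`scenarios/kubernetes_e2e.py`) runs the `kubetest` command through `subprocess.check_call`, which raises `CalledProcessError` whenever the child exits non-zero, and the uncaught exception makes the wrapper itself exit 1 ("Command failed"). A minimal sketch of that pattern, where `check_env` is a hypothetical reimplementation based only on the frame names in the traceback and the `SystemExit(1)` child is a stand-in for the real `kubetest` invocation:

```python
# Sketch (assumed, not the real kubernetes_e2e.py source): run a command with
# extra environment variables and let a non-zero exit surface as an exception.
import os
import subprocess
import sys


def check_env(cmd, **env):
    """Run cmd with env vars merged over the current environment.

    subprocess.check_call waits for the child and raises CalledProcessError
    if it exits non-zero -- the same path seen in the log's traceback.
    """
    merged = {**os.environ, **env}
    subprocess.check_call(cmd, env=merged)


try:
    # Stand-in for the kubetest call; the child exits with status 1,
    # just as run-e2e.sh did in this run.
    check_env([sys.executable, "-c", "raise SystemExit(1)"])
except subprocess.CalledProcessError as err:
    # err.returncode carries the child's exit status, matching the log's
    # "returned non-zero exit status 1".
    print(f"Command failed with exit status {err.returncode}")
```

Note that `check_call` reports only the exit status, not the child's output; that is why the actual test failure has to be dug out of the dumped artifacts (`junit_runner.xml` and the per-node logs) rather than the traceback itself.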