PR | jiayingz: Update ppc64le cuda-vector-add 2.0 to be based on Cuda10 base image. |
Result | SUCCESS |
Tests | 0 failed / 632 succeeded |
Started | |
Elapsed | 17m36s |
Revision | |
Builder | gke-prow-containerd-pool-99179761-dxnn |
Refs | master:805a9e70 73939:d66cd673 |
infra-commit | 49d8112d8 |
job-version | v1.14.0-alpha.2.538+66637af2a928f1 |
pod | 657ff3de-2e57-11e9-a65a-0a580a6c0819 |
repo | k8s.io/kubernetes |
repo-commit | 66637af2a928f11894c1bee86b3bedca1ea75265 |
repos | k8s.io/kubernetes: master:805a9e703698d0a8a86f405f861f9e3fd91b29c6, 73939:d66cd673775b18cc9c2c7ab61d5bb51b48600d25
revision | v1.14.0-alpha.2.538+66637af2a928f1 |
Deferred TearDown
DumpClusterLogs
Node Tests
TearDown
TearDown Previous
Timeout
Up
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance]
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance]
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance]
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance]
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]
[k8s.io] Container Runtime Conformance Test container runtime conformance blackbox test when running a container with a new image should be able to pull from private registry with credential provider [NodeConformance]
[k8s.io] Container Runtime Conformance Test container runtime conformance blackbox test when running a container with a new image should be able to pull from private registry with credential provider [NodeConformance]
[k8s.io] Container Runtime Conformance Test container runtime conformance blackbox test when running a container with a new image should be able to pull from private registry with credential provider [NodeConformance]
[k8s.io] Container Runtime Conformance Test container runtime conformance blackbox test when running a container with a new image should be able to pull from private registry with credential provider [NodeConformance]
[k8s.io] Container Runtime blackbox test when running a container with a new image should be able to pull from private registry with secret [LinuxOnly] [NodeConformance]
[k8s.io] Container Runtime blackbox test when running a container with a new image should be able to pull from private registry with secret [LinuxOnly] [NodeConformance]
[k8s.io] Container Runtime blackbox test when running a container with a new image should be able to pull from private registry with secret [LinuxOnly] [NodeConformance]
[k8s.io] Container Runtime blackbox test when running a container with a new image should be able to pull from private registry with secret [LinuxOnly] [NodeConformance]
[k8s.io] Container Runtime blackbox test when running a container with a new image should be able to pull image from docker hub [LinuxOnly] [NodeConformance]
[k8s.io] Container Runtime blackbox test when running a container with a new image should be able to pull image from docker hub [LinuxOnly] [NodeConformance]
[k8s.io] Container Runtime blackbox test when running a container with a new image should be able to pull image from docker hub [LinuxOnly] [NodeConformance]
[k8s.io] Container Runtime blackbox test when running a container with a new image should be able to pull image from docker hub [LinuxOnly] [NodeConformance]
[k8s.io] Container Runtime blackbox test when running a container with a new image should be able to pull image from gcr.io [LinuxOnly] [NodeConformance]
[k8s.io] Container Runtime blackbox test when running a container with a new image should be able to pull image from gcr.io [LinuxOnly] [NodeConformance]
[k8s.io] Container Runtime blackbox test when running a container with a new image should be able to pull image from gcr.io [LinuxOnly] [NodeConformance]
[k8s.io] Container Runtime blackbox test when running a container with a new image should be able to pull image from gcr.io [LinuxOnly] [NodeConformance]
[k8s.io] Container Runtime blackbox test when running a container with a new image should not be able to pull from private registry without secret [NodeConformance]
[k8s.io] Container Runtime blackbox test when running a container with a new image should not be able to pull from private registry without secret [NodeConformance]
[k8s.io] Container Runtime blackbox test when running a container with a new image should not be able to pull from private registry without secret [NodeConformance]
[k8s.io] Container Runtime blackbox test when running a container with a new image should not be able to pull from private registry without secret [NodeConformance]
[k8s.io] Container Runtime blackbox test when running a container with a new image should not be able to pull image from invalid registry [NodeConformance]
[k8s.io] Container Runtime blackbox test when running a container with a new image should not be able to pull image from invalid registry [NodeConformance]
[k8s.io] Container Runtime blackbox test when running a container with a new image should not be able to pull image from invalid registry [NodeConformance]
[k8s.io] Container Runtime blackbox test when running a container with a new image should not be able to pull image from invalid registry [NodeConformance]
[k8s.io] Container Runtime blackbox test when running a container with a new image should not be able to pull non-existing image from gcr.io [NodeConformance]
[k8s.io] Container Runtime blackbox test when running a container with a new image should not be able to pull non-existing image from gcr.io [NodeConformance]
[k8s.io] Container Runtime blackbox test when running a container with a new image should not be able to pull non-existing image from gcr.io [NodeConformance]
[k8s.io] Container Runtime blackbox test when running a container with a new image should not be able to pull non-existing image from gcr.io [NodeConformance]
[k8s.io] Container Runtime blackbox test when starting a container that exits should report termination message as empty when pod succeeds and TerminationMessagePolicy FallbackToLogOnError is set [NodeConformance] [LinuxOnly]
[k8s.io] Container Runtime blackbox test when starting a container that exits should report termination message as empty when pod succeeds and TerminationMessagePolicy FallbackToLogOnError is set [NodeConformance] [LinuxOnly]
[k8s.io] Container Runtime blackbox test when starting a container that exits should report termination message as empty when pod succeeds and TerminationMessagePolicy FallbackToLogOnError is set [NodeConformance] [LinuxOnly]
[k8s.io] Container Runtime blackbox test when starting a container that exits should report termination message as empty when pod succeeds and TerminationMessagePolicy FallbackToLogOnError is set [NodeConformance] [LinuxOnly]
[k8s.io] Container Runtime blackbox test when starting a container that exits should report termination message from file when pod succeeds and TerminationMessagePolicy FallbackToLogOnError is set [NodeConformance] [LinuxOnly]
[k8s.io] Container Runtime blackbox test when starting a container that exits should report termination message from file when pod succeeds and TerminationMessagePolicy FallbackToLogOnError is set [NodeConformance] [LinuxOnly]
[k8s.io] Container Runtime blackbox test when starting a container that exits should report termination message from file when pod succeeds and TerminationMessagePolicy FallbackToLogOnError is set [NodeConformance] [LinuxOnly]
[k8s.io] Container Runtime blackbox test when starting a container that exits should report termination message from file when pod succeeds and TerminationMessagePolicy FallbackToLogOnError is set [NodeConformance] [LinuxOnly]
[k8s.io] Container Runtime blackbox test when starting a container that exits should report termination message from log output if TerminationMessagePolicy FallbackToLogOnError is set [NodeConformance] [LinuxOnly]
[k8s.io] Container Runtime blackbox test when starting a container that exits should report termination message from log output if TerminationMessagePolicy FallbackToLogOnError is set [NodeConformance] [LinuxOnly]
[k8s.io] Container Runtime blackbox test when starting a container that exits should report termination message from log output if TerminationMessagePolicy FallbackToLogOnError is set [NodeConformance] [LinuxOnly]
[k8s.io] Container Runtime blackbox test when starting a container that exits should report termination message from log output if TerminationMessagePolicy FallbackToLogOnError is set [NodeConformance] [LinuxOnly]
[k8s.io] Container Runtime blackbox test when starting a container that exits should report termination message if TerminationMessagePath is set [NodeConformance] [LinuxOnly]
[k8s.io] Container Runtime blackbox test when starting a container that exits should report termination message if TerminationMessagePath is set [NodeConformance] [LinuxOnly]
[k8s.io] Container Runtime blackbox test when starting a container that exits should report termination message if TerminationMessagePath is set [NodeConformance] [LinuxOnly]
[k8s.io] Container Runtime blackbox test when starting a container that exits should report termination message if TerminationMessagePath is set [NodeConformance] [LinuxOnly]
[k8s.io] Container Runtime blackbox test when starting a container that exits should report termination message if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [LinuxOnly]
[k8s.io] Container Runtime blackbox test when starting a container that exits should report termination message if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [LinuxOnly]
[k8s.io] Container Runtime blackbox test when starting a container that exits should report termination message if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [LinuxOnly]
[k8s.io] Container Runtime blackbox test when starting a container that exits should report termination message if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [LinuxOnly]
[k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance]
[k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance]
[k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance]
[k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance]
[k8s.io] ContainerLogPath [NodeConformance] Pod with a container printed log to stdout should print log to correct cri log path
[k8s.io] ContainerLogPath [NodeConformance] Pod with a container printed log to stdout should print log to correct cri log path
[k8s.io] ContainerLogPath [NodeConformance] Pod with a container printed log to stdout should print log to correct cri log path
[k8s.io] ContainerLogPath [NodeConformance] Pod with a container printed log to stdout should print log to correct cri log path
[k8s.io] ContainerLogPath [NodeConformance] Pod with a container printed log to stdout should print log to correct log path
[k8s.io] ContainerLogPath [NodeConformance] Pod with a container printed log to stdout should print log to correct log path
[k8s.io] ContainerLogPath [NodeConformance] Pod with a container printed log to stdout should print log to correct log path
[k8s.io] ContainerLogPath [NodeConformance] Pod with a container printed log to stdout should print log to correct log path
[k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
[k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
[k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
[k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
[k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
[k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
[k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
[k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
[k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance]
[k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance]
[k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance]
[k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance]
[k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance]
[k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance]
[k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance]
[k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance]
[k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance]
[k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance]
[k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance]
[k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance]
[k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance]
[k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance]
[k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance]
[k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance]
[k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
[k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
[k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
[k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
[k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance]
[k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance]
[k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance]
[k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance]
[k8s.io] Kubelet Cgroup Manager Pod containers [NodeConformance] On scheduling a BestEffort Pod Pod containers should have been created under the BestEffort cgroup
[k8s.io] Kubelet Cgroup Manager Pod containers [NodeConformance] On scheduling a BestEffort Pod Pod containers should have been created under the BestEffort cgroup
[k8s.io] Kubelet Cgroup Manager Pod containers [NodeConformance] On scheduling a BestEffort Pod Pod containers should have been created under the BestEffort cgroup
[k8s.io] Kubelet Cgroup Manager Pod containers [NodeConformance] On scheduling a BestEffort Pod Pod containers should have been created under the BestEffort cgroup
[k8s.io] Kubelet Cgroup Manager Pod containers [NodeConformance] On scheduling a Burstable Pod Pod containers should have been created under the Burstable cgroup
[k8s.io] Kubelet Cgroup Manager Pod containers [NodeConformance] On scheduling a Burstable Pod Pod containers should have been created under the Burstable cgroup
[k8s.io] Kubelet Cgroup Manager Pod containers [NodeConformance] On scheduling a Burstable Pod Pod containers should have been created under the Burstable cgroup
[k8s.io] Kubelet Cgroup Manager Pod containers [NodeConformance] On scheduling a Burstable Pod Pod containers should have been created under the Burstable cgroup
[k8s.io] Kubelet Cgroup Manager Pod containers [NodeConformance] On scheduling a Guaranteed Pod Pod containers should have been created under the cgroup-root
[k8s.io] Kubelet Cgroup Manager Pod containers [NodeConformance] On scheduling a Guaranteed Pod Pod containers should have been created under the cgroup-root
[k8s.io] Kubelet Cgroup Manager Pod containers [NodeConformance] On scheduling a Guaranteed Pod Pod containers should have been created under the cgroup-root
[k8s.io] Kubelet Cgroup Manager Pod containers [NodeConformance] On scheduling a Guaranteed Pod Pod containers should have been created under the cgroup-root
[k8s.io] Kubelet Cgroup Manager QOS containers On enabling QOS cgroup hierarchy Top level QoS containers should have been created [NodeConformance]
[k8s.io] Kubelet Cgroup Manager QOS containers On enabling QOS cgroup hierarchy Top level QoS containers should have been created [NodeConformance]
[k8s.io] Kubelet Cgroup Manager QOS containers On enabling QOS cgroup hierarchy Top level QoS containers should have been created [NodeConformance]
[k8s.io] Kubelet Cgroup Manager QOS containers On enabling QOS cgroup hierarchy Top level QoS containers should have been created [NodeConformance]
[k8s.io] Kubelet Volume Manager Volume Manager On terminatation of pod with memory backed volume should remove the volume from the node [NodeConformance]
[k8s.io] Kubelet Volume Manager Volume Manager On terminatation of pod with memory backed volume should remove the volume from the node [NodeConformance]
[k8s.io] Kubelet Volume Manager Volume Manager On terminatation of pod with memory backed volume should remove the volume from the node [NodeConformance]
[k8s.io] Kubelet Volume Manager Volume Manager On terminatation of pod with memory backed volume should remove the volume from the node [NodeConformance]
[k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [NodeConformance] [Conformance]
[k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [NodeConformance] [Conformance]
[k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [NodeConformance] [Conformance]
[k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [NodeConformance] [Conformance]
[k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance]
[k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance]
[k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance]
[k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance]
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance]
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance]
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance]
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance]
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance]
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance]
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance]
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance]
[k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
[k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
[k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
[k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
[k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
[k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
[k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
[k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
[k8s.io] MirrorPod when create a mirror pod should be recreated when mirror pod forcibly deleted [NodeConformance]
[k8s.io] MirrorPod when create a mirror pod should be recreated when mirror pod forcibly deleted [NodeConformance]
[k8s.io] MirrorPod when create a mirror pod should be recreated when mirror pod forcibly deleted [NodeConformance]
[k8s.io] MirrorPod when create a mirror pod should be recreated when mirror pod forcibly deleted [NodeConformance]
[k8s.io] MirrorPod when create a mirror pod should be recreated when mirror pod gracefully deleted [NodeConformance]
[k8s.io] MirrorPod when create a mirror pod should be recreated when mirror pod gracefully deleted [NodeConformance]
[k8s.io] MirrorPod when create a mirror pod should be recreated when mirror pod gracefully deleted [NodeConformance]
[k8s.io] MirrorPod when create a mirror pod should be recreated when mirror pod gracefully deleted [NodeConformance]
[k8s.io] MirrorPod when create a mirror pod should be updated when static pod updated [NodeConformance]
[k8s.io] MirrorPod when create a mirror pod should be updated when static pod updated [NodeConformance]
[k8s.io] MirrorPod when create a mirror pod should be updated when static pod updated [NodeConformance]
[k8s.io] MirrorPod when create a mirror pod should be updated when static pod updated [NodeConformance]
[k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
[k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
[k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
[k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
[k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance]
[k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance]
[k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance]
[k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance]
[k8s.io] Pods should be updated [NodeConformance] [Conformance]
[k8s.io] Pods should be updated [NodeConformance] [Conformance]
[k8s.io] Pods should be updated [NodeConformance] [Conformance]
[k8s.io] Pods should be updated [NodeConformance] [Conformance]
[k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]
[k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]
[k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]
[k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]
[k8s.io] Pods should get a host IP [NodeConformance] [Conformance]
[k8s.io] Pods should get a host IP [NodeConformance] [Conformance]
[k8s.io] Pods should get a host IP [NodeConformance] [Conformance]
[k8s.io] Pods should get a host IP [NodeConformance] [Conformance]
[k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance]
[k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance]
[k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance]
[k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance]
[k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
[k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
[k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
[k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
[k8s.io] PrivilegedPod [NodeConformance] should enable privileged commands [LinuxOnly]
[k8s.io] PrivilegedPod [NodeConformance] should enable privileged commands [LinuxOnly]
[k8s.io] PrivilegedPod [NodeConformance] should enable privileged commands [LinuxOnly]
[k8s.io] PrivilegedPod [NodeConformance] should enable privileged commands [LinuxOnly]
[k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
[k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
[k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
[k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
[k8s.io] Probing container should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
[k8s.io] Probing container should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
[k8s.io] Probing container should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
[k8s.io] Probing container should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
[k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
[k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
[k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
[k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
[k8s.io] Probing container should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
[k8s.io] Probing container should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
[k8s.io] Probing container should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
[k8s.io] Probing container should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
[k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]
[k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]
[k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]
[k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]
[k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
[k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
[k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
[k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
[k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
[k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
[k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
[k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
[k8s.io] Security Context When creating a container with runAsUser should run the container with uid 0 [LinuxOnly] [NodeConformance]
[k8s.io] Security Context When creating a container with runAsUser should run the container with uid 0 [LinuxOnly] [NodeConformance]
[k8s.io] Security Context When creating a container with runAsUser should run the container with uid 0 [LinuxOnly] [NodeConformance]
[k8s.io] Security Context When creating a container with runAsUser should run the container with uid 0 [LinuxOnly] [NodeConformance]
[k8s.io] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance]
[k8s.io] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance]
[k8s.io] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance]
[k8s.io] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance]
[k8s.io] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance]
[k8s.io] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance]
[k8s.io] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance]
[k8s.io] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance]
[k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with readonly rootfs when readOnlyRootFilesystem=true [LinuxOnly] [NodeConformance]
[k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with readonly rootfs when readOnlyRootFilesystem=true [LinuxOnly] [NodeConformance]
[k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with readonly rootfs when readOnlyRootFilesystem=true [LinuxOnly] [NodeConformance]
[k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with readonly rootfs when readOnlyRootFilesystem=true [LinuxOnly] [NodeConformance]
[k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance]
[k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance]
[k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance]
[k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance]
[k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should allow privilege escalation when not explicitly set and uid != 0 [LinuxOnly] [NodeConformance]
[k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should allow privilege escalation when not explicitly set and uid != 0 [LinuxOnly] [NodeConformance]
[k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should allow privilege escalation when not explicitly set and uid != 0 [LinuxOnly] [NodeConformance]
[k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should allow privilege escalation when not explicitly set and uid != 0 [LinuxOnly] [NodeConformance]
[k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should allow privilege escalation when true [LinuxOnly] [NodeConformance]
[k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should allow privilege escalation when true [LinuxOnly] [NodeConformance]
[k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should allow privilege escalation when true [LinuxOnly] [NodeConformance]
[k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should allow privilege escalation when true [LinuxOnly] [NodeConformance]
[k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance]
[k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance]
[k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance]
[k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance]
[k8s.io] Summary API [NodeConformance] when querying /stats/summary should report resource usage through the stats api
[k8s.io] Summary API [NodeConformance] when querying /stats/summary should report resource usage through the stats api
[k8s.io] Summary API [NodeConformance] when querying /stats/summary should report resource usage through the stats api
[k8s.io] Summary API [NodeConformance] when querying /stats/summary should report resource usage through the stats api
[k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance]
[k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance]
[k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance]
[k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance]
[k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance]
[k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance]
[k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance]
[k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance]
[k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance]
[k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance]
[k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance]
[k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance]
[sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance]
[sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance]
[sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance]
[sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance]
[sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance]
[sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance]
[sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance]
[sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance]
[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
[sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
[sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
[sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
[sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
[sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
[sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
[sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
[sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
[sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance]
[sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance]
[sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance]
[sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance]
[sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance]
[sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance]
[sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance]
[sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance]
[sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
[sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
[sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
[sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
[sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
[sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
[sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
[sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
[sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance]
[sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance]
[sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance]
[sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance]
[sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance]
[sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance]
[sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance]
[sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance]
[sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
[sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
[sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
[sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
[sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance]
[sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance]
[sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance]
[sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance]
[sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance]
[sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance]
[sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance]
[sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance]
[sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance]
[sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance]
[sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance]
[sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance]
[sig-storage] ConfigMap should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
[sig-storage] ConfigMap should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
[sig-storage] ConfigMap should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
[sig-storage] ConfigMap should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
[sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
[sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
[sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
[sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
[sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
[sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
[sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
[sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
[sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
[sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
[sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
[sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
[sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
[sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
[sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
[sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
[sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
[sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
[sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
[sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
[sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance]
[sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance]
[sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance]
[sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance]
[sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance]
[sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance]
[sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance]
[sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance]
[sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance]
[sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance]
[sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance]
[sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance]
[sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance]
[sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance]
[sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance]
[sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance]
[sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance]
[sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance]
[sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance]
[sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance]
[sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
[sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
[sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
[sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
[sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
[sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
[sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
[sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
[sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]
[sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]
[sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]
[sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]
[sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
[sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
[sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
[sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
[sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
[sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
[sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
[sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
[sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance]
[sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance]
[sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance]
[sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance]
[sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance]
[sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance]
[sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance]
[sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance]
[sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
[sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
[sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
[sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
[sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
[sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
[sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
[sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
[sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
[sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
[sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
[sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
[sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
[sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
[sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
[sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
[sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
[sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
[sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
[sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
[sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
[sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
[sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
[sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
[sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
[sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
[sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
[sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
[sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
[sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
[sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
[sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
[sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
[sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
[sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
[sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
[sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
[sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
[sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
[sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
[sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
[sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
[sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
[sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
[sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
[sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
[sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
[sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
[sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
[sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
[sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
[sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
[sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
[sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
[sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
[sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
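(Illustrative aside, not part of the job output.) The EmptyDir cases above all reduce to mounting an emptyDir volume with a given medium (default disk vs. tmpfs), user (root vs. non-root) and file mode, then checking the resulting mode from inside the container. A minimal Go sketch of that pod shape using the upstream k8s.io/api types; the names, image and UID here are assumptions, not values taken from these tests:

    package main

    import (
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
    	uid := int64(1001) // hypothetical non-root UID for the "non-root" variants
    	pod := corev1.Pod{
    		ObjectMeta: metav1.ObjectMeta{Name: "emptydir-demo"},
    		Spec: corev1.PodSpec{
    			SecurityContext: &corev1.PodSecurityContext{RunAsUser: &uid},
    			Volumes: []corev1.Volume{{
    				Name: "scratch",
    				VolumeSource: corev1.VolumeSource{
    					// Medium "Memory" is the tmpfs variant; leave it empty for the default medium.
    					EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumMemory},
    				},
    			}},
    			Containers: []corev1.Container{{
    				Name:         "checker",
    				Image:        "busybox",
    				Command:      []string{"sh", "-c", "stat -c %a /scratch"}, // report the mount's mode
    				VolumeMounts: []corev1.VolumeMount{{Name: "scratch", MountPath: "/scratch"}},
    			}},
    		},
    	}
    	fmt.Println(pod.Name)
    }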
[sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
[sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
[sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
[sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
[sig-storage] HostPath should support r/w [NodeConformance]
[sig-storage] HostPath should support r/w [NodeConformance]
[sig-storage] HostPath should support r/w [NodeConformance]
[sig-storage] HostPath should support r/w [NodeConformance]
[sig-storage] HostPath should support subPath [NodeConformance]
[sig-storage] HostPath should support subPath [NodeConformance]
[sig-storage] HostPath should support subPath [NodeConformance]
[sig-storage] HostPath should support subPath [NodeConformance]
[sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
[sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
[sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
[sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
[sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance]
[sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance]
[sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance]
[sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance]
[sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance]
[sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance]
[sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance]
[sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance]
[sig-storage] Projected configMap should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
[sig-storage] Projected configMap should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
[sig-storage] Projected configMap should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
[sig-storage] Projected configMap should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
[sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
[sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
[sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
[sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
[sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
[sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
[sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
[sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
[sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
[sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
[sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
[sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
[sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
[sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
[sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
[sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
[sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
[sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
[sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
[sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
[sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance]
[sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance]
[sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance]
[sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance]
[sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance]
[sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance]
[sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance]
[sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance]
[sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance]
[sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance]
[sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance]
[sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance]
[sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance]
[sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance]
[sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance]
[sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance]
[sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance]
[sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance]
[sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance]
[sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance]
[sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
[sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
[sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
[sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
[sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
[sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
[sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
[sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
[sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance]
[sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance]
[sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance]
[sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance]
[sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
[sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
[sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
[sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
[sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
[sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
[sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
[sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
[sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance]
[sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance]
[sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance]
[sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance]
[sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance]
[sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance]
[sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance]
[sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance]
[sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance]
[sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance]
[sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance]
[sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance]
[sig-storage] Projected secret should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance]
[sig-storage] Projected secret should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance]
[sig-storage] Projected secret should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance]
[sig-storage] Projected secret should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance]
[sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance]
[sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance]
[sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance]
[sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance]
[sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
[sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
[sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
[sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
[sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
[sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
[sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
[sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
[sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
[sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
[sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
[sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
[sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
[sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
[sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
[sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
[sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
[sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
[sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
[sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
[sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance]
[sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance]
[sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance]
[sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance]
[sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
[sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
[sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
[sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
[sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance]
[sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance]
[sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance]
[sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance]
[sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
[sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
[sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
[sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
[sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
[sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
[sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
[sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
[sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
[sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
[sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
[sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
[sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
[sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
[sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
[sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
[sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
[sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
[sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
[sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
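(Illustrative aside, not part of the job output.) Many of the Secrets and Projected cases above differ only in how the volume source is tuned: defaultMode, per-item mode, key-to-path mappings, or running as non-root with an fsGroup. A small sketch of the secret-volume shape those variants exercise; the secret name, keys and mode values are assumptions:

    package main

    import (
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    )

    func main() {
    	defaultMode := int32(0400) // the "defaultMode set" variants mount every file with a mode like this
    	itemMode := int32(0444)    // the "Item Mode set" variants override the mode for one key
    	vol := corev1.Volume{
    		Name: "creds",
    		VolumeSource: corev1.VolumeSource{
    			Secret: &corev1.SecretVolumeSource{
    				SecretName:  "demo-secret", // hypothetical secret
    				DefaultMode: &defaultMode,
    				Items: []corev1.KeyToPath{{
    					Key:  "token",
    					Path: "mapped/token", // the "with mappings" variants remap keys to paths like this
    					Mode: &itemMode,
    				}},
    			},
    		},
    	}
    	fmt.Printf("%+v\n", vol)
    }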
test setup
[k8s.io] AppArmor [Feature:AppArmor][NodeFeature:AppArmor] when running with AppArmor should enforce a permissive profile
[k8s.io] AppArmor [Feature:AppArmor][NodeFeature:AppArmor] when running with AppArmor should enforce a permissive profile
[k8s.io] AppArmor [Feature:AppArmor][NodeFeature:AppArmor] when running with AppArmor should enforce a permissive profile
[k8s.io] AppArmor [Feature:AppArmor][NodeFeature:AppArmor] when running with AppArmor should enforce a profile blocking writes
[k8s.io] AppArmor [Feature:AppArmor][NodeFeature:AppArmor] when running with AppArmor should enforce a profile blocking writes
[k8s.io] AppArmor [Feature:AppArmor][NodeFeature:AppArmor] when running with AppArmor should enforce a profile blocking writes
[k8s.io] AppArmor [Feature:AppArmor][NodeFeature:AppArmor] when running with AppArmor should reject an unloaded profile
[k8s.io] AppArmor [Feature:AppArmor][NodeFeature:AppArmor] when running with AppArmor should reject an unloaded profile
[k8s.io] AppArmor [Feature:AppArmor][NodeFeature:AppArmor] when running with AppArmor should reject an unloaded profile
[k8s.io] AppArmor [Feature:AppArmor][NodeFeature:AppArmor] when running without AppArmor should reject a pod with an AppArmor profile
[k8s.io] Container Manager Misc [Serial] Validate OOM score adjustments [NodeFeature:OOMScoreAdj] once the node is setup pod infra containers oom-score-adj should be -998 and best effort container's should be 1000
[k8s.io] Container Manager Misc [Serial] Validate OOM score adjustments [NodeFeature:OOMScoreAdj] once the node is setup pod infra containers oom-score-adj should be -998 and best effort container's should be 1000
[k8s.io] Container Manager Misc [Serial] Validate OOM score adjustments [NodeFeature:OOMScoreAdj] once the node is setup pod infra containers oom-score-adj should be -998 and best effort container's should be 1000
[k8s.io] Container Manager Misc [Serial] Validate OOM score adjustments [NodeFeature:OOMScoreAdj] once the node is setup pod infra containers oom-score-adj should be -998 and best effort container's should be 1000
[k8s.io] Container Manager Misc [Serial] Validate OOM score adjustments [NodeFeature:OOMScoreAdj] once the node is setup Kubelet's oom-score-adj should be -999
[k8s.io] Container Manager Misc [Serial] Validate OOM score adjustments [NodeFeature:OOMScoreAdj] once the node is setup Kubelet's oom-score-adj should be -999
[k8s.io] Container Manager Misc [Serial] Validate OOM score adjustments [NodeFeature:OOMScoreAdj] once the node is setup Kubelet's oom-score-adj should be -999
[k8s.io] Container Manager Misc [Serial] Validate OOM score adjustments [NodeFeature:OOMScoreAdj] once the node is setup Kubelet's oom-score-adj should be -999
[k8s.io] Container Manager Misc [Serial] Validate OOM score adjustments [NodeFeature:OOMScoreAdj] once the node is setup burstable container's oom-score-adj should be between [2, 1000)
[k8s.io] Container Manager Misc [Serial] Validate OOM score adjustments [NodeFeature:OOMScoreAdj] once the node is setup burstable container's oom-score-adj should be between [2, 1000)
[k8s.io] Container Manager Misc [Serial] Validate OOM score adjustments [NodeFeature:OOMScoreAdj] once the node is setup burstable container's oom-score-adj should be between [2, 1000)
[k8s.io] Container Manager Misc [Serial] Validate OOM score adjustments [NodeFeature:OOMScoreAdj] once the node is setup burstable container's oom-score-adj should be between [2, 1000)
[k8s.io] Container Manager Misc [Serial] Validate OOM score adjustments [NodeFeature:OOMScoreAdj] once the node is setup container runtime's oom-score-adj should be -999
[k8s.io] Container Manager Misc [Serial] Validate OOM score adjustments [NodeFeature:OOMScoreAdj] once the node is setup container runtime's oom-score-adj should be -999
[k8s.io] Container Manager Misc [Serial] Validate OOM score adjustments [NodeFeature:OOMScoreAdj] once the node is setup container runtime's oom-score-adj should be -999
[k8s.io] Container Manager Misc [Serial] Validate OOM score adjustments [NodeFeature:OOMScoreAdj] once the node is setup container runtime's oom-score-adj should be -999
[k8s.io] Container Manager Misc [Serial] Validate OOM score adjustments [NodeFeature:OOMScoreAdj] once the node is setup guaranteed container's oom-score-adj should be -998
[k8s.io] Container Manager Misc [Serial] Validate OOM score adjustments [NodeFeature:OOMScoreAdj] once the node is setup guaranteed container's oom-score-adj should be -998
[k8s.io] Container Manager Misc [Serial] Validate OOM score adjustments [NodeFeature:OOMScoreAdj] once the node is setup guaranteed container's oom-score-adj should be -998
[k8s.io] Container Manager Misc [Serial] Validate OOM score adjustments [NodeFeature:OOMScoreAdj] once the node is setup guaranteed container's oom-score-adj should be -998
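(Illustrative aside, not part of the job output.) The OOM-score checks above read /proc/<pid>/oom_score_adj for the kubelet, the container runtime, pod infra containers and workload containers, and compare it against the value expected for each QoS class. A minimal sketch of that read; the helper name is an assumption:

    package main

    import (
    	"fmt"
    	"os"
    	"path/filepath"
    	"strconv"
    	"strings"
    )

    // oomScoreAdj returns the kernel's oom_score_adj value for a process,
    // the same file the Container Manager tests above inspect.
    func oomScoreAdj(pid int) (int, error) {
    	data, err := os.ReadFile(filepath.Join("/proc", strconv.Itoa(pid), "oom_score_adj"))
    	if err != nil {
    		return 0, err
    	}
    	return strconv.Atoi(strings.TrimSpace(string(data)))
    }

    func main() {
    	// Example: read our own score. Per the cases above, a kubelet is expected at -999,
    	// pod infra and guaranteed containers at -998, burstable containers in [2, 1000).
    	score, err := oomScoreAdj(os.Getpid())
    	if err != nil {
    		fmt.Println("read failed:", err)
    		return
    	}
    	fmt.Println("oom_score_adj:", score)
    }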
[k8s.io] ContainerLogRotation [Slow] [Serial] [Disruptive] when a container generates a lot of log should be rotated and limited to a fixed amount of files
[k8s.io] ContainerLogRotation [Slow] [Serial] [Disruptive] when a container generates a lot of log should be rotated and limited to a fixed amount of files
[k8s.io] ContainerLogRotation [Slow] [Serial] [Disruptive] when a container generates a lot of log should be rotated and limited to a fixed amount of files
[k8s.io] ContainerLogRotation [Slow] [Serial] [Disruptive] when a container generates a lot of log should be rotated and limited to a fixed amount of files
[k8s.io] CriticalPod [Serial] [Disruptive] [NodeFeature:CriticalPod] when we need to admit a critical pod should be able to create and delete a critical pod
[k8s.io] CriticalPod [Serial] [Disruptive] [NodeFeature:CriticalPod] when we need to admit a critical pod should be able to create and delete a critical pod
[k8s.io] CriticalPod [Serial] [Disruptive] [NodeFeature:CriticalPod] when we need to admit a critical pod should be able to create and delete a critical pod
[k8s.io] CriticalPod [Serial] [Disruptive] [NodeFeature:CriticalPod] when we need to admit a critical pod should be able to create and delete a critical pod
[k8s.io] Density [Serial] [Slow] create a batch of pods latency/resource should be within limit when create 10 pods with 0s interval
[k8s.io] Density [Serial] [Slow] create a batch of pods latency/resource should be within limit when create 10 pods with 0s interval
[k8s.io] Density [Serial] [Slow] create a batch of pods latency/resource should be within limit when create 10 pods with 0s interval
[k8s.io] Density [Serial] [Slow] create a batch of pods latency/resource should be within limit when create 10 pods with 0s interval
[k8s.io] Density [Serial] [Slow] create a batch of pods latency/resource should be within limit when create 10 pods with 0s interval [Benchmark][NodeSpecialFeature:Benchmark]
[k8s.io] Density [Serial] [Slow] create a batch of pods latency/resource should be within limit when create 10 pods with 0s interval [Benchmark][NodeSpecialFeature:Benchmark]
[k8s.io] Density [Serial] [Slow] create a batch of pods latency/resource should be within limit when create 10 pods with 0s interval [Benchmark][NodeSpecialFeature:Benchmark]
[k8s.io] Density [Serial] [Slow] create a batch of pods latency/resource should be within limit when create 10 pods with 0s interval [Benchmark][NodeSpecialFeature:Benchmark]
[k8s.io] Density [Serial] [Slow] create a batch of pods latency/resource should be within limit when create 10 pods with 100ms interval [Benchmark][NodeSpecialFeature:Benchmark]
[k8s.io] Density [Serial] [Slow] create a batch of pods latency/resource should be within limit when create 10 pods with 100ms interval [Benchmark][NodeSpecialFeature:Benchmark]
[k8s.io] Density [Serial] [Slow] create a batch of pods latency/resource should be within limit when create 10 pods with 100ms interval [Benchmark][NodeSpecialFeature:Benchmark]
[k8s.io] Density [Serial] [Slow] create a batch of pods latency/resource should be within limit when create 10 pods with 100ms interval [Benchmark][NodeSpecialFeature:Benchmark]
[k8s.io] Density [Serial] [Slow] create a batch of pods latency/resource should be within limit when create 10 pods with 300ms interval [Benchmark][NodeSpecialFeature:Benchmark]
[k8s.io] Density [Serial] [Slow] create a batch of pods latency/resource should be within limit when create 10 pods with 300ms interval [Benchmark][NodeSpecialFeature:Benchmark]
[k8s.io] Density [Serial] [Slow] create a batch of pods latency/resource should be within limit when create 10 pods with 300ms interval [Benchmark][NodeSpecialFeature:Benchmark]
[k8s.io] Density [Serial] [Slow] create a batch of pods latency/resource should be within limit when create 10 pods with 300ms interval [Benchmark][NodeSpecialFeature:Benchmark]
[k8s.io] Density [Serial] [Slow] create a batch of pods latency/resource should be within limit when create 105 pods with 0s interval [Benchmark][NodeSpecialFeature:Benchmark]
[k8s.io] Density [Serial] [Slow] create a batch of pods latency/resource should be within limit when create 105 pods with 0s interval [Benchmark][NodeSpecialFeature:Benchmark]
[k8s.io] Density [Serial] [Slow] create a batch of pods latency/resource should be within limit when create 105 pods with 0s interval [Benchmark][NodeSpecialFeature:Benchmark]
[k8s.io] Density [Serial] [Slow] create a batch of pods latency/resource should be within limit when create 105 pods with 0s interval [Benchmark][NodeSpecialFeature:Benchmark]
[k8s.io] Density [Serial] [Slow] create a batch of pods latency/resource should be within limit when create 105 pods with 100ms interval [Benchmark][NodeSpecialFeature:Benchmark]
[k8s.io] Density [Serial] [Slow] create a batch of pods latency/resource should be within limit when create 105 pods with 100ms interval [Benchmark][NodeSpecialFeature:Benchmark]
[k8s.io] Density [Serial] [Slow] create a batch of pods latency/resource should be within limit when create 105 pods with 100ms interval [Benchmark][NodeSpecialFeature:Benchmark]
[k8s.io] Density [Serial] [Slow] create a batch of pods latency/resource should be within limit when create 105 pods with 100ms interval [Benchmark][NodeSpecialFeature:Benchmark]
[k8s.io] Density [Serial] [Slow] create a batch of pods latency/resource should be within limit when create 105 pods with 300ms interval [Benchmark][NodeSpecialFeature:Benchmark]
[k8s.io] Density [Serial] [Slow] create a batch of pods latency/resource should be within limit when create 105 pods with 300ms interval [Benchmark][NodeSpecialFeature:Benchmark]
[k8s.io] Density [Serial] [Slow] create a batch of pods latency/resource should be within limit when create 105 pods with 300ms interval [Benchmark][NodeSpecialFeature:Benchmark]
[k8s.io] Density [Serial] [Slow] create a batch of pods latency/resource should be within limit when create 105 pods with 300ms interval [Benchmark][NodeSpecialFeature:Benchmark]
[k8s.io] Density [Serial] [Slow] create a batch of pods latency/resource should be within limit when create 35 pods with 0s interval [Benchmark][NodeSpecialFeature:Benchmark]
[k8s.io] Density [Serial] [Slow] create a batch of pods latency/resource should be within limit when create 35 pods with 0s interval [Benchmark][NodeSpecialFeature:Benchmark]
[k8s.io] Density [Serial] [Slow] create a batch of pods latency/resource should be within limit when create 35 pods with 0s interval [Benchmark][NodeSpecialFeature:Benchmark]
[k8s.io] Density [Serial] [Slow] create a batch of pods latency/resource should be within limit when create 35 pods with 0s interval [Benchmark][NodeSpecialFeature:Benchmark]
[k8s.io] Density [Serial] [Slow] create a batch of pods latency/resource should be within limit when create 35 pods with 100ms interval [Benchmark][NodeSpecialFeature:Benchmark]
[k8s.io] Density [Serial] [Slow] create a batch of pods latency/resource should be within limit when create 35 pods with 100ms interval [Benchmark][NodeSpecialFeature:Benchmark]
[k8s.io] Density [Serial] [Slow] create a batch of pods latency/resource should be within limit when create 35 pods with 100ms interval [Benchmark][NodeSpecialFeature:Benchmark]
[k8s.io] Density [Serial] [Slow] create a batch of pods latency/resource should be within limit when create 35 pods with 100ms interval [Benchmark][NodeSpecialFeature:Benchmark]
[k8s.io] Density [Serial] [Slow] create a batch of pods latency/resource should be within limit when create 35 pods with 300ms interval [Benchmark][NodeSpecialFeature:Benchmark]
[k8s.io] Density [Serial] [Slow] create a batch of pods latency/resource should be within limit when create 35 pods with 300ms interval [Benchmark][NodeSpecialFeature:Benchmark]
[k8s.io] Density [Serial] [Slow] create a batch of pods latency/resource should be within limit when create 35 pods with 300ms interval [Benchmark][NodeSpecialFeature:Benchmark]
[k8s.io] Density [Serial] [Slow] create a batch of pods latency/resource should be within limit when create 35 pods with 300ms interval [Benchmark][NodeSpecialFeature:Benchmark]
[k8s.io] Density [Serial] [Slow] create a batch of pods with higher API QPS latency/resource should be within limit when create 105 pods with 0s interval (QPS 60) [Benchmark][NodeSpecialFeature:Benchmark]
[k8s.io] Density [Serial] [Slow] create a batch of pods with higher API QPS latency/resource should be within limit when create 105 pods with 0s interval (QPS 60) [Benchmark][NodeSpecialFeature:Benchmark]
[k8s.io] Density [Serial] [Slow] create a batch of pods with higher API QPS latency/resource should be within limit when create 105 pods with 0s interval (QPS 60) [Benchmark][NodeSpecialFeature:Benchmark]
[k8s.io] Density [Serial] [Slow] create a batch of pods with higher API QPS latency/resource should be within limit when create 105 pods with 0s interval (QPS 60) [Benchmark][NodeSpecialFeature:Benchmark]
[k8s.io] Density [Serial] [Slow] create a batch of pods with higher API QPS latency/resource should be within limit when create 105 pods with 100ms interval (QPS 60) [Benchmark][NodeSpecialFeature:Benchmark]
[k8s.io] Density [Serial] [Slow] create a batch of pods with higher API QPS latency/resource should be within limit when create 105 pods with 100ms interval (QPS 60) [Benchmark][NodeSpecialFeature:Benchmark]
[k8s.io] Density [Serial] [Slow] create a batch of pods with higher API QPS latency/resource should be within limit when create 105 pods with 100ms interval (QPS 60) [Benchmark][NodeSpecialFeature:Benchmark]
[k8s.io] Density [Serial] [Slow] create a batch of pods with higher API QPS latency/resource should be within limit when create 105 pods with 100ms interval (QPS 60) [Benchmark][NodeSpecialFeature:Benchmark]
[k8s.io] Density [Serial] [Slow] create a batch of pods with higher API QPS latency/resource should be within limit when create 105 pods with 300ms interval (QPS 60) [Benchmark][NodeSpecialFeature:Benchmark]
[k8s.io] Density [Serial] [Slow] create a batch of pods with higher API QPS latency/resource should be within limit when create 105 pods with 300ms interval (QPS 60) [Benchmark][NodeSpecialFeature:Benchmark]
[k8s.io] Density [Serial] [Slow] create a batch of pods with higher API QPS latency/resource should be within limit when create 105 pods with 300ms interval (QPS 60) [Benchmark][NodeSpecialFeature:Benchmark]
[k8s.io] Density [Serial] [Slow] create a batch of pods with higher API QPS latency/resource should be within limit when create 105 pods with 300ms interval (QPS 60) [Benchmark][NodeSpecialFeature:Benchmark]
[k8s.io] Density [Serial] [Slow] create a sequence of pods latency/resource should be within limit when create 10 pods with 50 background pods
[k8s.io] Density [Serial] [Slow] create a sequence of pods latency/resource should be within limit when create 10 pods with 50 background pods
[k8s.io] Density [Serial] [Slow] create a sequence of pods latency/resource should be within limit when create 10 pods with 50 background pods
[k8s.io] Density [Serial] [Slow] create a sequence of pods latency/resource should be within limit when create 10 pods with 50 background pods
[k8s.io] Density [Serial] [Slow] create a sequence of pods latency/resource should be within limit when create 10 pods with 50 background pods [Benchmark][NodeSpeicalFeature:Benchmark]
[k8s.io] Density [Serial] [Slow] create a sequence of pods latency/resource should be within limit when create 10 pods with 50 background pods [Benchmark][NodeSpeicalFeature:Benchmark]
[k8s.io] Density [Serial] [Slow] create a sequence of pods latency/resource should be within limit when create 10 pods with 50 background pods [Benchmark][NodeSpeicalFeature:Benchmark]
[k8s.io] Density [Serial] [Slow] create a sequence of pods latency/resource should be within limit when create 10 pods with 50 background pods [Benchmark][NodeSpeicalFeature:Benchmark]
[k8s.io] Density [Serial] [Slow] create a sequence of pods latency/resource should be within limit when create 30 pods with 50 background pods [Benchmark][NodeSpeicalFeature:Benchmark]
[k8s.io] Density [Serial] [Slow] create a sequence of pods latency/resource should be within limit when create 30 pods with 50 background pods [Benchmark][NodeSpeicalFeature:Benchmark]
[k8s.io] Density [Serial] [Slow] create a sequence of pods latency/resource should be within limit when create 30 pods with 50 background pods [Benchmark][NodeSpeicalFeature:Benchmark]
[k8s.io] Density [Serial] [Slow] create a sequence of pods latency/resource should be within limit when create 30 pods with 50 background pods [Benchmark][NodeSpeicalFeature:Benchmark]
[k8s.io] Density [Serial] [Slow] create a sequence of pods latency/resource should be within limit when create 50 pods with 50 background pods [Benchmark][NodeSpeicalFeature:Benchmark]
[k8s.io] Density [Serial] [Slow] create a sequence of pods latency/resource should be within limit when create 50 pods with 50 background pods [Benchmark][NodeSpeicalFeature:Benchmark]
[k8s.io] Density [Serial] [Slow] create a sequence of pods latency/resource should be within limit when create 50 pods with 50 background pods [Benchmark][NodeSpeicalFeature:Benchmark]
[k8s.io] Density [Serial] [Slow] create a sequence of pods latency/resource should be within limit when create 50 pods with 50 background pods [Benchmark][NodeSpeicalFeature:Benchmark]
[k8s.io] Device Plugin [Feature:DevicePluginProbe][NodeFeature:DevicePluginProbe][Serial] DevicePlugin Verifies the Kubelet device plugin functionality.
[k8s.io] Device Plugin [Feature:DevicePluginProbe][NodeFeature:DevicePluginProbe][Serial] DevicePlugin Verifies the Kubelet device plugin functionality.
[k8s.io] Device Plugin [Feature:DevicePluginProbe][NodeFeature:DevicePluginProbe][Serial] DevicePlugin Verifies the Kubelet device plugin functionality.
[k8s.io] Device Plugin [Feature:DevicePluginProbe][NodeFeature:DevicePluginProbe][Serial] DevicePlugin Verifies the Kubelet device plugin functionality.
[k8s.io] Device Plugin [Feature:DevicePlugin][NodeFeature:DevicePlugin][Serial] DevicePlugin Verifies the Kubelet device plugin functionality.
[k8s.io] Device Plugin [Feature:DevicePlugin][NodeFeature:DevicePlugin][Serial] DevicePlugin Verifies the Kubelet device plugin functionality.
[k8s.io] Device Plugin [Feature:DevicePlugin][NodeFeature:DevicePlugin][Serial] DevicePlugin Verifies the Kubelet device plugin functionality.
[k8s.io] Device Plugin [Feature:DevicePlugin][NodeFeature:DevicePlugin][Serial] DevicePlugin Verifies the Kubelet device plugin functionality.
[k8s.io] Docker features [Feature:Docker][Legacy:Docker] when live-restore is enabled [Serial] [Slow] [Disruptive] containers should not be disrupted when the daemon shuts down and restarts
[k8s.io] Docker features [Feature:Docker][Legacy:Docker] when live-restore is enabled [Serial] [Slow] [Disruptive] containers should not be disrupted when the daemon shuts down and restarts
[k8s.io] Docker features [Feature:Docker][Legacy:Docker] when live-restore is enabled [Serial] [Slow] [Disruptive] containers should not be disrupted when the daemon shuts down and restarts
[k8s.io] Docker features [Feature:Docker][Legacy:Docker] when live-restore is enabled [Serial] [Slow] [Disruptive] containers should not be disrupted when the daemon shuts down and restarts
[k8s.io] Downward API [Serial] [Disruptive] [NodeFeature:EphemeralStorage] Downward API tests for local ephemeral storage should provide container's limits.ephemeral-storage and requests.ephemeral-storage as env vars
[k8s.io] Downward API [Serial] [Disruptive] [NodeFeature:EphemeralStorage] Downward API tests for local ephemeral storage should provide container's limits.ephemeral-storage and requests.ephemeral-storage as env vars
[k8s.io] Downward API [Serial] [Disruptive] [NodeFeature:EphemeralStorage] Downward API tests for local ephemeral storage should provide container's limits.ephemeral-storage and requests.ephemeral-storage as env vars
[k8s.io] Downward API [Serial] [Disruptive] [NodeFeature:EphemeralStorage] Downward API tests for local ephemeral storage should provide container's limits.ephemeral-storage and requests.ephemeral-storage as env vars
[k8s.io] Downward API [Serial] [Disruptive] [NodeFeature:EphemeralStorage] Downward API tests for local ephemeral storage should provide default limits.ephemeral-storage from node allocatable
[k8s.io] Downward API [Serial] [Disruptive] [NodeFeature:EphemeralStorage] Downward API tests for local ephemeral storage should provide default limits.ephemeral-storage from node allocatable
[k8s.io] Downward API [Serial] [Disruptive] [NodeFeature:EphemeralStorage] Downward API tests for local ephemeral storage should provide default limits.ephemeral-storage from node allocatable
[k8s.io] Downward API [Serial] [Disruptive] [NodeFeature:EphemeralStorage] Downward API tests for local ephemeral storage should provide default limits.ephemeral-storage from node allocatable
[k8s.io] GKE system requirements [NodeConformance][Feature:GKEEnv][NodeFeature:GKEEnv] The GCR is accessible
[k8s.io] GKE system requirements [NodeConformance][Feature:GKEEnv][NodeFeature:GKEEnv] The GCR is accessible
[k8s.io] GKE system requirements [NodeConformance][Feature:GKEEnv][NodeFeature:GKEEnv] The GCR is accessible
[k8s.io] GKE system requirements [NodeConformance][Feature:GKEEnv][NodeFeature:GKEEnv] The GCR is accessible
[k8s.io] GKE system requirements [NodeConformance][Feature:GKEEnv][NodeFeature:GKEEnv] The docker configuration validation should pass
[k8s.io] GKE system requirements [NodeConformance][Feature:GKEEnv][NodeFeature:GKEEnv] The docker configuration validation should pass
[k8s.io] GKE system requirements [NodeConformance][Feature:GKEEnv][NodeFeature:GKEEnv] The docker configuration validation should pass
[k8s.io] GKE system requirements [NodeConformance][Feature:GKEEnv][NodeFeature:GKEEnv] The docker configuration validation should pass
[k8s.io] GKE system requirements [NodeConformance][Feature:GKEEnv][NodeFeature:GKEEnv] The docker container network should work
[k8s.io] GKE system requirements [NodeConformance][Feature:GKEEnv][NodeFeature:GKEEnv] The docker container network should work
[k8s.io] GKE system requirements [NodeConformance][Feature:GKEEnv][NodeFeature:GKEEnv] The docker container network should work
[k8s.io] GKE system requirements [NodeConformance][Feature:GKEEnv][NodeFeature:GKEEnv] The docker container network should work
[k8s.io] GKE system requirements [NodeConformance][Feature:GKEEnv][NodeFeature:GKEEnv] The docker daemon should support AppArmor and seccomp
[k8s.io] GKE system requirements [NodeConformance][Feature:GKEEnv][NodeFeature:GKEEnv] The docker daemon should support AppArmor and seccomp
[k8s.io] GKE system requirements [NodeConformance][Feature:GKEEnv][NodeFeature:GKEEnv] The docker daemon should support AppArmor and seccomp
[k8s.io] GKE system requirements [NodeConformance][Feature:GKEEnv][NodeFeature:GKEEnv] The docker daemon should support AppArmor and seccomp
[k8s.io] GKE system requirements [NodeConformance][Feature:GKEEnv][NodeFeature:GKEEnv] The docker storage driver should work
[k8s.io] GKE system requirements [NodeConformance][Feature:GKEEnv][NodeFeature:GKEEnv] The docker storage driver should work
[k8s.io] GKE system requirements [NodeConformance][Feature:GKEEnv][NodeFeature:GKEEnv] The docker storage driver should work
[k8s.io] GKE system requirements [NodeConformance][Feature:GKEEnv][NodeFeature:GKEEnv] The docker storage driver should work
[k8s.io] GKE system requirements [NodeConformance][Feature:GKEEnv][NodeFeature:GKEEnv] The iptable rules should work (required by kube-proxy)
[k8s.io] GKE system requirements [NodeConformance][Feature:GKEEnv][NodeFeature:GKEEnv] The iptable rules should work (required by kube-proxy)
[k8s.io] GKE system requirements [NodeConformance][Feature:GKEEnv][NodeFeature:GKEEnv] The iptable rules should work (required by kube-proxy)
[k8s.io] GKE system requirements [NodeConformance][Feature:GKEEnv][NodeFeature:GKEEnv] The iptable rules should work (required by kube-proxy)
[k8s.io] GKE system requirements [NodeConformance][Feature:GKEEnv][NodeFeature:GKEEnv] The required processes should be running
[k8s.io] GKE system requirements [NodeConformance][Feature:GKEEnv][NodeFeature:GKEEnv] The required processes should be running
[k8s.io] GKE system requirements [NodeConformance][Feature:GKEEnv][NodeFeature:GKEEnv] The required processes should be running
[k8s.io] GKE system requirements [NodeConformance][Feature:GKEEnv][NodeFeature:GKEEnv] The required processes should be running
[k8s.io] GarbageCollect [Serial][NodeFeature:GarbageCollect] Garbage Collection Test: Many Pods with Many Restarting Containers Should eventually garbage collect containers when we exceed the number of dead containers per container
[k8s.io] GarbageCollect [Serial][NodeFeature:GarbageCollect] Garbage Collection Test: Many Pods with Many Restarting Containers Should eventually garbage collect containers when we exceed the number of dead containers per container
[k8s.io] GarbageCollect [Serial][NodeFeature:GarbageCollect] Garbage Collection Test: Many Pods with Many Restarting Containers Should eventually garbage collect containers when we exceed the number of dead containers per container
[k8s.io] GarbageCollect [Serial][NodeFeature:GarbageCollect] Garbage Collection Test: Many Pods with Many Restarting Containers Should eventually garbage collect containers when we exceed the number of dead containers per container
[k8s.io] GarbageCollect [Serial][NodeFeature:GarbageCollect] Garbage Collection Test: Many Restarting Containers Should eventually garbage collect containers when we exceed the number of dead containers per container
[k8s.io] GarbageCollect [Serial][NodeFeature:GarbageCollect] Garbage Collection Test: Many Restarting Containers Should eventually garbage collect containers when we exceed the number of dead containers per container
[k8s.io] GarbageCollect [Serial][NodeFeature:GarbageCollect] Garbage Collection Test: Many Restarting Containers Should eventually garbage collect containers when we exceed the number of dead containers per container
[k8s.io] GarbageCollect [Serial][NodeFeature:GarbageCollect] Garbage Collection Test: Many Restarting Containers Should eventually garbage collect containers when we exceed the number of dead containers per container
[k8s.io] GarbageCollect [Serial][NodeFeature:GarbageCollect] Garbage Collection Test: One Non-restarting Container Should eventually garbage collect containers when we exceed the number of dead containers per container
[k8s.io] GarbageCollect [Serial][NodeFeature:GarbageCollect] Garbage Collection Test: One Non-restarting Container Should eventually garbage collect containers when we exceed the number of dead containers per container
[k8s.io] GarbageCollect [Serial][NodeFeature:GarbageCollect] Garbage Collection Test: One Non-restarting Container Should eventually garbage collect containers when we exceed the number of dead containers per container
[k8s.io] GarbageCollect [Serial][NodeFeature:GarbageCollect] Garbage Collection Test: One Non-restarting Container Should eventually garbage collect containers when we exceed the number of dead containers per container
[k8s.io] ImageGCNoEviction [Slow] [Serial] [Disruptive][NodeFeature:Eviction] when we run containers that should cause DiskPressure should eventually evict all of the correct pods
[k8s.io] ImageGCNoEviction [Slow] [Serial] [Disruptive][NodeFeature:Eviction] when we run containers that should cause DiskPressure should eventually evict all of the correct pods
[k8s.io] ImageGCNoEviction [Slow] [Serial] [Disruptive][NodeFeature:Eviction] when we run containers that should cause DiskPressure should eventually evict all of the correct pods
[k8s.io] ImageGCNoEviction [Slow] [Serial] [Disruptive][NodeFeature:Eviction] when we run containers that should cause DiskPressure should eventually evict all of the correct pods
[k8s.io] ImageID [NodeFeature: ImageID] should be set to the manifest digest (from RepoDigests) when available
[k8s.io] ImageID [NodeFeature: ImageID] should be set to the manifest digest (from RepoDigests) when available
[k8s.io] ImageID [NodeFeature: ImageID] should be set to the manifest digest (from RepoDigests) when available
[k8s.io] ImageID [NodeFeature: ImageID] should be set to the manifest digest (from RepoDigests) when available
[k8s.io] InodeEviction [Slow] [Serial] [Disruptive][NodeFeature:Eviction] when we run containers that should cause DiskPressure should eventually evict all of the correct pods
[k8s.io] InodeEviction [Slow] [Serial] [Disruptive][NodeFeature:Eviction] when we run containers that should cause DiskPressure should eventually evict all of the correct pods
[k8s.io] InodeEviction [Slow] [Serial] [Disruptive][NodeFeature:Eviction] when we run containers that should cause DiskPressure should eventually evict all of the correct pods
[k8s.io] InodeEviction [Slow] [Serial] [Disruptive][NodeFeature:Eviction] when we run containers that should cause DiskPressure should eventually evict all of the correct pods
[k8s.io] LocalStorageCapacityIsolationEviction [Slow] [Serial] [Disruptive] [Feature:LocalStorageCapacityIsolation][NodeFeature:Eviction] when we run containers that should cause evictions due to pod local storage violations should eventually evict all of the correct pods
[k8s.io] LocalStorageCapacityIsolationEviction [Slow] [Serial] [Disruptive] [Feature:LocalStorageCapacityIsolation][NodeFeature:Eviction] when we run containers that should cause evictions due to pod local storage violations should eventually evict all of the correct pods
[k8s.io] LocalStorageCapacityIsolationEviction [Slow] [Serial] [Disruptive] [Feature:LocalStorageCapacityIsolation][NodeFeature:Eviction] when we run containers that should cause evictions due to pod local storage violations should eventually evict all of the correct pods
[k8s.io] LocalStorageCapacityIsolationEviction [Slow] [Serial] [Disruptive] [Feature:LocalStorageCapacityIsolation][NodeFeature:Eviction] when we run containers that should cause evictions due to pod local storage violations should eventually evict all of the correct pods
[k8s.io] LocalStorageEviction [Slow] [Serial] [Disruptive][NodeFeature:Eviction] when we run containers that should cause DiskPressure should eventually evict all of the correct pods
[k8s.io] LocalStorageEviction [Slow] [Serial] [Disruptive][NodeFeature:Eviction] when we run containers that should cause DiskPressure should eventually evict all of the correct pods
[k8s.io] LocalStorageEviction [Slow] [Serial] [Disruptive][NodeFeature:Eviction] when we run containers that should cause DiskPressure should eventually evict all of the correct pods
[k8s.io] LocalStorageEviction [Slow] [Serial] [Disruptive][NodeFeature:Eviction] when we run containers that should cause DiskPressure should eventually evict all of the correct pods
[k8s.io] LocalStorageSoftEviction [Slow] [Serial] [Disruptive][NodeFeature:Eviction] when we run containers that should cause DiskPressure should eventually evict all of the correct pods
[k8s.io] LocalStorageSoftEviction [Slow] [Serial] [Disruptive][NodeFeature:Eviction] when we run containers that should cause DiskPressure should eventually evict all of the correct pods
[k8s.io] LocalStorageSoftEviction [Slow] [Serial] [Disruptive][NodeFeature:Eviction] when we run containers that should cause DiskPressure should eventually evict all of the correct pods
[k8s.io] LocalStorageSoftEviction [Slow] [Serial] [Disruptive][NodeFeature:Eviction] when we run containers that should cause DiskPressure should eventually evict all of the correct pods
[k8s.io] MemoryAllocatableEviction [Slow] [Serial] [Disruptive][NodeFeature:Eviction] when we run containers that should cause MemoryPressure should eventually evict all of the correct pods
[k8s.io] MemoryAllocatableEviction [Slow] [Serial] [Disruptive][NodeFeature:Eviction] when we run containers that should cause MemoryPressure should eventually evict all of the correct pods
[k8s.io] MemoryAllocatableEviction [Slow] [Serial] [Disruptive][NodeFeature:Eviction] when we run containers that should cause MemoryPressure should eventually evict all of the correct pods
[k8s.io] MemoryAllocatableEviction [Slow] [Serial] [Disruptive][NodeFeature:Eviction] when we run containers that should cause MemoryPressure should eventually evict all of the correct pods
[k8s.io] NVIDIA GPU Device Plugin [Feature:GPUDevicePlugin][NodeFeature:GPUDevicePlugin][Serial] [Disruptive] DevicePlugin checks that when Kubelet restarts exclusive GPU assignation to pods is kept.
[k8s.io] NVIDIA GPU Device Plugin [Feature:GPUDevicePlugin][NodeFeature:GPUDevicePlugin][Serial] [Disruptive] DevicePlugin checks that when Kubelet restarts exclusive GPU assignation to pods is kept.
[k8s.io] NVIDIA GPU Device Plugin [Feature:GPUDevicePlugin][NodeFeature:GPUDevicePlugin][Serial] [Disruptive] DevicePlugin checks that when Kubelet restarts exclusive GPU assignation to pods is kept.
[k8s.io] NVIDIA GPU Device Plugin [Feature:GPUDevicePlugin][NodeFeature:GPUDevicePlugin][Serial] [Disruptive] DevicePlugin checks that when Kubelet restarts exclusive GPU assignation to pods is kept.
[k8s.io] Node Container Manager [Serial] Validate Node Allocatable [NodeFeature:NodeAllocatable] set's up the node and runs the test
[k8s.io] Node Container Manager [Serial] Validate Node Allocatable [NodeFeature:NodeAllocatable] set's up the node and runs the test
[k8s.io] Node Container Manager [Serial] Validate Node Allocatable [NodeFeature:NodeAllocatable] set's up the node and runs the test
[k8s.io] Node Container Manager [Serial] Validate Node Allocatable [NodeFeature:NodeAllocatable] set's up the node and runs the test
[k8s.io] NodeLease when the NodeLease feature is enabled the kubelet should create and update a lease in the kube-node-lease namespace
[k8s.io] NodeLease when the NodeLease feature is enabled the kubelet should create and update a lease in the kube-node-lease namespace
[k8s.io] NodeLease when the NodeLease feature is enabled the kubelet should create and update a lease in the kube-node-lease namespace
[k8s.io] NodeLease when the NodeLease feature is enabled the kubelet should create and update a lease in the kube-node-lease namespace
[k8s.io] NodeLease when the NodeLease feature is enabled the kubelet should report node status infrequently
[k8s.io] NodeLease when the NodeLease feature is enabled the kubelet should report node status infrequently
[k8s.io] NodeLease when the NodeLease feature is enabled the kubelet should report node status infrequently
[k8s.io] NodeLease when the NodeLease feature is enabled the kubelet should report node status infrequently
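The NodeLease cases above cover the kubelet heartbeating through Lease objects in the kube-node-lease namespace instead of frequent full NodeStatus updates. As a rough illustration only (not taken from this job; the node name, duration, and timestamps are made up), the kubelet maintains an object shaped roughly like this:

```go
package main

import (
	"fmt"
	"time"

	coordinationv1beta1 "k8s.io/api/coordination/v1beta1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	holder := "example-node" // placeholder node name
	leaseSeconds := int32(40)
	renew := metav1.NewMicroTime(time.Now())

	// One Lease per node lives in kube-node-lease; the kubelet renews it on a
	// short interval while reporting full node status much less frequently.
	lease := coordinationv1beta1.Lease{
		ObjectMeta: metav1.ObjectMeta{Name: holder, Namespace: "kube-node-lease"},
		Spec: coordinationv1beta1.LeaseSpec{
			HolderIdentity:       &holder,
			LeaseDurationSeconds: &leaseSeconds,
			RenewTime:            &renew,
		},
	}
	fmt.Printf("%s/%s renewed, duration %ds\n", lease.Namespace, lease.Name, *lease.Spec.LeaseDurationSeconds)
}
```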
[k8s.io] NodeProblemDetector [NodeFeature:NodeProblemDetector] [k8s.io] SystemLogMonitor should generate node condition and events for corresponding errors
[k8s.io] NodeProblemDetector [NodeFeature:NodeProblemDetector] [k8s.io] SystemLogMonitor should generate node condition and events for corresponding errors
[k8s.io] NodeProblemDetector [NodeFeature:NodeProblemDetector] [k8s.io] SystemLogMonitor should generate node condition and events for corresponding errors
[k8s.io] NodeProblemDetector [NodeFeature:NodeProblemDetector] [k8s.io] SystemLogMonitor should generate node condition and events for corresponding errors
[k8s.io] Pods should cap back-off at MaxContainerBackOff [Slow][NodeConformance]
[k8s.io] Pods should cap back-off at MaxContainerBackOff [Slow][NodeConformance]
[k8s.io] Pods should cap back-off at MaxContainerBackOff [Slow][NodeConformance]
[k8s.io] Pods should cap back-off at MaxContainerBackOff [Slow][NodeConformance]
[k8s.io] Pods should have their auto-restart back-off timer reset on image update [Slow][NodeConformance]
[k8s.io] Pods should have their auto-restart back-off timer reset on image update [Slow][NodeConformance]
[k8s.io] Pods should have their auto-restart back-off timer reset on image update [Slow][NodeConformance]
[k8s.io] Pods should have their auto-restart back-off timer reset on image update [Slow][NodeConformance]
[k8s.io] Pods should support pod readiness gates [NodeFeature:PodReadinessGate]
[k8s.io] Pods should support pod readiness gates [NodeFeature:PodReadinessGate]
[k8s.io] Pods should support pod readiness gates [NodeFeature:PodReadinessGate]
[k8s.io] Pods should support pod readiness gates [NodeFeature:PodReadinessGate]
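The pod readiness gates cases exercise the feature where a Pod declares extra readiness conditions that an external controller must set before the Pod is reported Ready. A minimal sketch, not the spec the suite itself uses; the condition type "example.com/feature-1" is a placeholder:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// The kubelet will not report this Pod as Ready until something patches
	// status.conditions with the gated condition type set to "True".
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "readiness-gate-demo"},
		Spec: corev1.PodSpec{
			ReadinessGates: []corev1.PodReadinessGate{
				{ConditionType: corev1.PodConditionType("example.com/feature-1")},
			},
			Containers: []corev1.Container{
				{Name: "app", Image: "k8s.gcr.io/pause:3.1"},
			},
		},
	}
	fmt.Printf("%+v\n", pod.Spec.ReadinessGates)
}
```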
[k8s.io] PriorityLocalStorageEvictionOrdering [Slow] [Serial] [Disruptive][NodeFeature:Eviction] when we run containers that should cause DiskPressure should eventually evict all of the correct pods
[k8s.io] PriorityLocalStorageEvictionOrdering [Slow] [Serial] [Disruptive][NodeFeature:Eviction] when we run containers that should cause DiskPressure should eventually evict all of the correct pods
[k8s.io] PriorityLocalStorageEvictionOrdering [Slow] [Serial] [Disruptive][NodeFeature:Eviction] when we run containers that should cause DiskPressure should eventually evict all of the correct pods
[k8s.io] PriorityLocalStorageEvictionOrdering [Slow] [Serial] [Disruptive][NodeFeature:Eviction] when we run containers that should cause DiskPressure should eventually evict all of the correct pods
[k8s.io] PriorityMemoryEvictionOrdering [Slow] [Serial] [Disruptive][NodeFeature:Eviction] when we run containers that should cause MemoryPressure should eventually evict all of the correct pods
[k8s.io] PriorityMemoryEvictionOrdering [Slow] [Serial] [Disruptive][NodeFeature:Eviction] when we run containers that should cause MemoryPressure should eventually evict all of the correct pods
[k8s.io] PriorityMemoryEvictionOrdering [Slow] [Serial] [Disruptive][NodeFeature:Eviction] when we run containers that should cause MemoryPressure should eventually evict all of the correct pods
[k8s.io] PriorityMemoryEvictionOrdering [Slow] [Serial] [Disruptive][NodeFeature:Eviction] when we run containers that should cause MemoryPressure should eventually evict all of the correct pods
[k8s.io] PriorityPidEvictionOrdering [Slow] [Serial] [Disruptive][NodeFeature:Eviction] when we run containers that should cause PIDPressure should eventually evict all of the correct pods
[k8s.io] PriorityPidEvictionOrdering [Slow] [Serial] [Disruptive][NodeFeature:Eviction] when we run containers that should cause PIDPressure should eventually evict all of the correct pods
[k8s.io] PriorityPidEvictionOrdering [Slow] [Serial] [Disruptive][NodeFeature:Eviction] when we run containers that should cause PIDPressure should eventually evict all of the correct pods
[k8s.io] PriorityPidEvictionOrdering [Slow] [Serial] [Disruptive][NodeFeature:Eviction] when we run containers that should cause PIDPressure should eventually evict all of the correct pods
[k8s.io] Probing container should be restarted with a docker exec liveness probe with timeout
[k8s.io] Probing container should be restarted with a docker exec liveness probe with timeout
[k8s.io] Probing container should be restarted with a docker exec liveness probe with timeout
[k8s.io] Probing container should be restarted with a docker exec liveness probe with timeout
[k8s.io] Restart [Serial] [Slow] [Disruptive] [NodeFeature:ContainerRuntimeRestart] Container Runtime Network should recover from ip leak
[k8s.io] Restart [Serial] [Slow] [Disruptive] [NodeFeature:ContainerRuntimeRestart] Container Runtime Network should recover from ip leak
[k8s.io] Restart [Serial] [Slow] [Disruptive] [NodeFeature:ContainerRuntimeRestart] Container Runtime Network should recover from ip leak
[k8s.io] Restart [Serial] [Slow] [Disruptive] [NodeFeature:ContainerRuntimeRestart] Container Runtime Network should recover from ip leak
[k8s.io] Security Context When creating a pod with privileged should run the container as privileged when true [NodeFeature:HostAccess]
[k8s.io] Security Context When creating a pod with privileged should run the container as privileged when true [NodeFeature:HostAccess]
[k8s.io] Security Context When creating a pod with privileged should run the container as privileged when true [NodeFeature:HostAccess]
[k8s.io] Security Context When creating a pod with privileged should run the container as privileged when true [NodeFeature:HostAccess]
[k8s.io] Security Context when creating a pod in the host IPC namespace should not show the shared memory ID in the non-hostIPC containers [NodeFeature:HostAccess]
[k8s.io] Security Context when creating a pod in the host IPC namespace should not show the shared memory ID in the non-hostIPC containers [NodeFeature:HostAccess]
[k8s.io] Security Context when creating a pod in the host IPC namespace should not show the shared memory ID in the non-hostIPC containers [NodeFeature:HostAccess]
[k8s.io] Security Context when creating a pod in the host IPC namespace should not show the shared memory ID in the non-hostIPC containers [NodeFeature:HostAccess]
[k8s.io] Security Context when creating a pod in the host IPC namespace should show the shared memory ID in the host IPC containers [NodeFeature:HostAccess]
[k8s.io] Security Context when creating a pod in the host IPC namespace should show the shared memory ID in the host IPC containers [NodeFeature:HostAccess]
[k8s.io] Security Context when creating a pod in the host IPC namespace should show the shared memory ID in the host IPC containers [NodeFeature:HostAccess]
[k8s.io] Security Context when creating a pod in the host IPC namespace should show the shared memory ID in the host IPC containers [NodeFeature:HostAccess]
[k8s.io] Security Context when creating a pod in the host PID namespace should not show its pid in the non-hostpid containers [NodeFeature:HostAccess]
[k8s.io] Security Context when creating a pod in the host PID namespace should not show its pid in the non-hostpid containers [NodeFeature:HostAccess]
[k8s.io] Security Context when creating a pod in the host PID namespace should not show its pid in the non-hostpid containers [NodeFeature:HostAccess]
[k8s.io] Security Context when creating a pod in the host PID namespace should not show its pid in the non-hostpid containers [NodeFeature:HostAccess]
[k8s.io] Security Context when creating a pod in the host PID namespace should show its pid in the host PID namespace [NodeFeature:HostAccess]
[k8s.io] Security Context when creating a pod in the host PID namespace should show its pid in the host PID namespace [NodeFeature:HostAccess]
[k8s.io] Security Context when creating a pod in the host PID namespace should show its pid in the host PID namespace [NodeFeature:HostAccess]
[k8s.io] Security Context when creating a pod in the host PID namespace should show its pid in the host PID namespace [NodeFeature:HostAccess]
[k8s.io] Security Context when creating a pod in the host network namespace should listen on same port in the host network containers [NodeFeature:HostAccess]
[k8s.io] Security Context when creating a pod in the host network namespace should listen on same port in the host network containers [NodeFeature:HostAccess]
[k8s.io] Security Context when creating a pod in the host network namespace should listen on same port in the host network containers [NodeFeature:HostAccess]
[k8s.io] Security Context when creating a pod in the host network namespace should listen on same port in the host network containers [NodeFeature:HostAccess]
[k8s.io] Security Context when creating a pod in the host network namespace shouldn't show the same port in the non-hostnetwork containers [NodeFeature:HostAccess]
[k8s.io] Security Context when creating a pod in the host network namespace shouldn't show the same port in the non-hostnetwork containers [NodeFeature:HostAccess]
[k8s.io] Security Context when creating a pod in the host network namespace shouldn't show the same port in the non-hostnetwork containers [NodeFeature:HostAccess]
[k8s.io] Security Context when creating a pod in the host network namespace shouldn't show the same port in the non-hostnetwork containers [NodeFeature:HostAccess]
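The host IPC/PID/network Security Context cases check what is and is not visible from containers that opt into the node's namespaces. A minimal, hypothetical spec (not the one the suite deploys) just flips the three booleans on the PodSpec:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Pod sharing the node's IPC, PID and network namespaces: its containers
	// can see host shared-memory segments, host processes and host ports,
	// while ordinary pods on the same node cannot.
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "host-namespaces-demo"},
		Spec: corev1.PodSpec{
			HostIPC:     true,
			HostPID:     true,
			HostNetwork: true,
			Containers: []corev1.Container{
				{Name: "shell", Image: "busybox", Command: []string{"sleep", "3600"}},
			},
		},
	}
	fmt.Println(pod.Spec.HostIPC, pod.Spec.HostPID, pod.Spec.HostNetwork)
}
```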
[k8s.io] Security Context when pod PID namespace is configurable [Feature:ShareProcessNamespace][NodeAlphaFeature:ShareProcessNamespace] containers in pods using isolated PID namespaces should all receive PID 1
[k8s.io] Security Context when pod PID namespace is configurable [Feature:ShareProcessNamespace][NodeAlphaFeature:ShareProcessNamespace] containers in pods using isolated PID namespaces should all receive PID 1
[k8s.io] Security Context when pod PID namespace is configurable [Feature:ShareProcessNamespace][NodeAlphaFeature:ShareProcessNamespace] containers in pods using isolated PID namespaces should all receive PID 1
[k8s.io] Security Context when pod PID namespace is configurable [Feature:ShareProcessNamespace][NodeAlphaFeature:ShareProcessNamespace] containers in pods using isolated PID namespaces should all receive PID 1
[k8s.io] Security Context when pod PID namespace is configurable [Feature:ShareProcessNamespace][NodeAlphaFeature:ShareProcessNamespace] processes in containers sharing a pod namespace should be able to see each other [Alpha]
[k8s.io] Security Context when pod PID namespace is configurable [Feature:ShareProcessNamespace][NodeAlphaFeature:ShareProcessNamespace] processes in containers sharing a pod namespace should be able to see each other [Alpha]
[k8s.io] Security Context when pod PID namespace is configurable [Feature:ShareProcessNamespace][NodeAlphaFeature:ShareProcessNamespace] processes in containers sharing a pod namespace should be able to see each other [Alpha]
[k8s.io] Security Context when pod PID namespace is configurable [Feature:ShareProcessNamespace][NodeAlphaFeature:ShareProcessNamespace] processes in containers sharing a pod namespace should be able to see each other [Alpha]
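The ShareProcessNamespace cases cover the alpha feature where all containers in a pod share one PID namespace. A sketch assuming the feature gate is enabled; names and images are illustrative:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	share := true
	// With ShareProcessNamespace set, processes in one container are visible
	// from the other containers of the same pod; left unset, each container
	// gets its own PID namespace and its main process runs as PID 1.
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "shared-pid-demo"},
		Spec: corev1.PodSpec{
			ShareProcessNamespace: &share,
			Containers: []corev1.Container{
				{Name: "main", Image: "busybox", Command: []string{"sleep", "3600"}},
				{Name: "sidecar", Image: "busybox", Command: []string{"sleep", "3600"}},
			},
		},
	}
	fmt.Println(*pod.Spec.ShareProcessNamespace)
}
```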
[k8s.io] Sysctls [NodeFeature:Sysctls] should not launch unsafe, but not explicitly enabled sysctls on the node
[k8s.io] Sysctls [NodeFeature:Sysctls] should not launch unsafe, but not explicitly enabled sysctls on the node
[k8s.io] Sysctls [NodeFeature:Sysctls] should not launch unsafe, but not explicitly enabled sysctls on the node
[k8s.io] Sysctls [NodeFeature:Sysctls] should not launch unsafe, but not explicitly enabled sysctls on the node
[k8s.io] Sysctls [NodeFeature:Sysctls] should reject invalid sysctls
[k8s.io] Sysctls [NodeFeature:Sysctls] should reject invalid sysctls
[k8s.io] Sysctls [NodeFeature:Sysctls] should reject invalid sysctls
[k8s.io] Sysctls [NodeFeature:Sysctls] should reject invalid sysctls
[k8s.io] Sysctls [NodeFeature:Sysctls] should support sysctls
[k8s.io] Sysctls [NodeFeature:Sysctls] should support sysctls
[k8s.io] Sysctls [NodeFeature:Sysctls] should support sysctls
[k8s.io] Sysctls [NodeFeature:Sysctls] should support sysctls
[k8s.io] Sysctls [NodeFeature:Sysctls] should support unsafe sysctls which are actually whitelisted
[k8s.io] Sysctls [NodeFeature:Sysctls] should support unsafe sysctls which are actually whitelisted
[k8s.io] Sysctls [NodeFeature:Sysctls] should support unsafe sysctls which are actually whitelisted
[k8s.io] Sysctls [NodeFeature:Sysctls] should support unsafe sysctls which are actually whitelisted
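The Sysctls cases validate that safe sysctls are applied, invalid ones are rejected, and unsafe ones only run when explicitly allowed on the node. A hedged sketch of the pod side of that contract; kernel.shm_rmid_forced is one of the sysctls treated as safe, and the value here is arbitrary:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Pod requesting a namespaced sysctl through its security context; the
	// kubelet rejects the pod if the sysctl is neither in the safe set nor
	// whitelisted on the node.
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "sysctl-demo"},
		Spec: corev1.PodSpec{
			SecurityContext: &corev1.PodSecurityContext{
				Sysctls: []corev1.Sysctl{
					{Name: "kernel.shm_rmid_forced", Value: "1"},
				},
			},
			Containers: []corev1.Container{
				{Name: "app", Image: "busybox", Command: []string{"sleep", "3600"}},
			},
		},
	}
	fmt.Printf("%+v\n", pod.Spec.SecurityContext.Sysctls)
}
```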
[k8s.io] Variable Expansion should allow substituting values in a volume subpath [Feature:VolumeSubpathEnvExpansion][NodeAlphaFeature:VolumeSubpathEnvExpansion]
[k8s.io] Variable Expansion should allow substituting values in a volume subpath [Feature:VolumeSubpathEnvExpansion][NodeAlphaFeature:VolumeSubpathEnvExpansion]
[k8s.io] Variable Expansion should allow substituting values in a volume subpath [Feature:VolumeSubpathEnvExpansion][NodeAlphaFeature:VolumeSubpathEnvExpansion]
[k8s.io] Variable Expansion should allow substituting values in a volume subpath [Feature:VolumeSubpathEnvExpansion][NodeAlphaFeature:VolumeSubpathEnvExpansion]
[k8s.io] Variable Expansion should fail substituting values in a volume subpath with absolute path [Feature:VolumeSubpathEnvExpansion][NodeAlphaFeature:VolumeSubpathEnvExpansion][Slow]
[k8s.io] Variable Expansion should fail substituting values in a volume subpath with absolute path [Feature:VolumeSubpathEnvExpansion][NodeAlphaFeature:VolumeSubpathEnvExpansion][Slow]
[k8s.io] Variable Expansion should fail substituting values in a volume subpath with absolute path [Feature:VolumeSubpathEnvExpansion][NodeAlphaFeature:VolumeSubpathEnvExpansion][Slow]
[k8s.io] Variable Expansion should fail substituting values in a volume subpath with absolute path [Feature:VolumeSubpathEnvExpansion][NodeAlphaFeature:VolumeSubpathEnvExpansion][Slow]
[k8s.io] Variable Expansion should fail substituting values in a volume subpath with backticks [Feature:VolumeSubpathEnvExpansion][NodeAlphaFeature:VolumeSubpathEnvExpansion][Slow]
[k8s.io] Variable Expansion should fail substituting values in a volume subpath with backticks [Feature:VolumeSubpathEnvExpansion][NodeAlphaFeature:VolumeSubpathEnvExpansion][Slow]
[k8s.io] Variable Expansion should fail substituting values in a volume subpath with backticks [Feature:VolumeSubpathEnvExpansion][NodeAlphaFeature:VolumeSubpathEnvExpansion][Slow]
[k8s.io] Variable Expansion should fail substituting values in a volume subpath with backticks [Feature:VolumeSubpathEnvExpansion][NodeAlphaFeature:VolumeSubpathEnvExpansion][Slow]
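The volume-subpath Variable Expansion cases cover the alpha VolumeSubpathEnvExpansion feature, where $(VAR) references in a mount's subPathExpr are expanded from the container environment, and absolute paths or backticks are expected to fail validation. A hypothetical minimal spec, assuming the feature gate is on:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// SubPathExpr is expanded from the container's environment, so this pod
	// writes under a per-pod directory inside the shared emptyDir volume.
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "subpath-expansion-demo"},
		Spec: corev1.PodSpec{
			Volumes: []corev1.Volume{
				{Name: "work", VolumeSource: corev1.VolumeSource{EmptyDir: &corev1.EmptyDirVolumeSource{}}},
			},
			Containers: []corev1.Container{
				{
					Name:    "writer",
					Image:   "busybox",
					Command: []string{"sleep", "3600"},
					Env: []corev1.EnvVar{
						{Name: "POD_NAME", ValueFrom: &corev1.EnvVarSource{
							FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.name"},
						}},
					},
					VolumeMounts: []corev1.VolumeMount{
						{Name: "work", MountPath: "/data", SubPathExpr: "$(POD_NAME)"},
					},
				},
			},
		},
	}
	fmt.Println(pod.Spec.Containers[0].VolumeMounts[0].SubPathExpr)
}
```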
[k8s.io] [Feature:DynamicKubeletConfig][NodeFeature:DynamicKubeletConfig][Serial][Disruptive] delete and recreate ConfigMap: error while ConfigMap is absent: status and events should match expectations
[k8s.io] [Feature:DynamicKubeletConfig][NodeFeature:DynamicKubeletConfig][Serial][Disruptive] delete and recreate ConfigMap: error while ConfigMap is absent: status and events should match expectations
[k8s.io] [Feature:DynamicKubeletConfig][NodeFeature:DynamicKubeletConfig][Serial][Disruptive] delete and recreate ConfigMap: error while ConfigMap is absent: status and events should match expectations
[k8s.io] [Feature:DynamicKubeletConfig][NodeFeature:DynamicKubeletConfig][Serial][Disruptive] delete and recreate ConfigMap: error while ConfigMap is absent: status and events should match expectations
[k8s.io] [Feature:DynamicKubeletConfig][NodeFeature:DynamicKubeletConfig][Serial][Disruptive] delete and recreate ConfigMap: state transitions: status and events should match expectations
[k8s.io] [Feature:DynamicKubeletConfig][NodeFeature:DynamicKubeletConfig][Serial][Disruptive] delete and recreate ConfigMap: state transitions: status and events should match expectations
[k8s.io] [Feature:DynamicKubeletConfig][NodeFeature:DynamicKubeletConfig][Serial][Disruptive] delete and recreate ConfigMap: state transitions: status and events should match expectations
[k8s.io] [Feature:DynamicKubeletConfig][NodeFeature:DynamicKubeletConfig][Serial][Disruptive] delete and recreate ConfigMap: state transitions: status and events should match expectations
[k8s.io] [Feature:DynamicKubeletConfig][NodeFeature:DynamicKubeletConfig][Serial][Disruptive] update ConfigMap in-place: recover to last-known-good version: status and events should match expectations
[k8s.io] [Feature:DynamicKubeletConfig][NodeFeature:DynamicKubeletConfig][Serial][Disruptive] update ConfigMap in-place: recover to last-known-good version: status and events should match expectations
[k8s.io] [Feature:DynamicKubeletConfig][NodeFeature:DynamicKubeletConfig][Serial][Disruptive] update ConfigMap in-place: recover to last-known-good version: status and events should match expectations
[k8s.io] [Feature:DynamicKubeletConfig][NodeFeature:DynamicKubeletConfig][Serial][Disruptive] update ConfigMap in-place: recover to last-known-good version: status and events should match expectations
[k8s.io] [Feature:DynamicKubeletConfig][NodeFeature:DynamicKubeletConfig][Serial][Disruptive] update ConfigMap in-place: state transitions: status and events should match expectations
[k8s.io] [Feature:DynamicKubeletConfig][NodeFeature:DynamicKubeletConfig][Serial][Disruptive] update ConfigMap in-place: state transitions: status and events should match expectations
[k8s.io] [Feature:DynamicKubeletConfig][NodeFeature:DynamicKubeletConfig][Serial][Disruptive] update ConfigMap in-place: state transitions: status and events should match expectations
[k8s.io] [Feature:DynamicKubeletConfig][NodeFeature:DynamicKubeletConfig][Serial][Disruptive] update ConfigMap in-place: state transitions: status and events should match expectations
[k8s.io] [Feature:DynamicKubeletConfig][NodeFeature:DynamicKubeletConfig][Serial][Disruptive] update Node.Spec.ConfigSource: 100 update stress test: status and events should match expectations
[k8s.io] [Feature:DynamicKubeletConfig][NodeFeature:DynamicKubeletConfig][Serial][Disruptive] update Node.Spec.ConfigSource: 100 update stress test: status and events should match expectations
[k8s.io] [Feature:DynamicKubeletConfig][NodeFeature:DynamicKubeletConfig][Serial][Disruptive] update Node.Spec.ConfigSource: 100 update stress test: status and events should match expectations
[k8s.io] [Feature:DynamicKubeletConfig][NodeFeature:DynamicKubeletConfig][Serial][Disruptive] update Node.Spec.ConfigSource: 100 update stress test: status and events should match expectations
[k8s.io] [Feature:DynamicKubeletConfig][NodeFeature:DynamicKubeletConfig][Serial][Disruptive] update Node.Spec.ConfigSource: non-nil last-known-good to a new non-nil last-known-good status and events should match expectations
[k8s.io] [Feature:DynamicKubeletConfig][NodeFeature:DynamicKubeletConfig][Serial][Disruptive] update Node.Spec.ConfigSource: non-nil last-known-good to a new non-nil last-known-good status and events should match expectations
[k8s.io] [Feature:DynamicKubeletConfig][NodeFeature:DynamicKubeletConfig][Serial][Disruptive] update Node.Spec.ConfigSource: non-nil last-known-good to a new non-nil last-known-good status and events should match expectations
[k8s.io] [Feature:DynamicKubeletConfig][NodeFeature:DynamicKubeletConfig][Serial][Disruptive] update Node.Spec.ConfigSource: non-nil last-known-good to a new non-nil last-known-good status and events should match expectations
[k8s.io] [Feature:DynamicKubeletConfig][NodeFeature:DynamicKubeletConfig][Serial][Disruptive] update Node.Spec.ConfigSource: recover to last-known-good ConfigMap.KubeletConfigKey: status and events should match expectations
[k8s.io] [Feature:DynamicKubeletConfig][NodeFeature:DynamicKubeletConfig][Serial][Disruptive] update Node.Spec.ConfigSource: recover to last-known-good ConfigMap.KubeletConfigKey: status and events should match expectations
[k8s.io] [Feature:DynamicKubeletConfig][NodeFeature:DynamicKubeletConfig][Serial][Disruptive] update Node.Spec.ConfigSource: recover to last-known-good ConfigMap.KubeletConfigKey: status and events should match expectations
[k8s.io] [Feature:DynamicKubeletConfig][NodeFeature:DynamicKubeletConfig][Serial][Disruptive] update Node.Spec.ConfigSource: recover to last-known-good ConfigMap.KubeletConfigKey: status and events should match expectations
[k8s.io] [Feature:DynamicKubeletConfig][NodeFeature:DynamicKubeletConfig][Serial][Disruptive] update Node.Spec.ConfigSource: recover to last-known-good ConfigMap: status and events should match expectations
[k8s.io] [Feature:DynamicKubeletConfig][NodeFeature:DynamicKubeletConfig][Serial][Disruptive] update Node.Spec.ConfigSource: recover to last-known-good ConfigMap: status and events should match expectations
[k8s.io] [Feature:DynamicKubeletConfig][NodeFeature:DynamicKubeletConfig][Serial][Disruptive] update Node.Spec.ConfigSource: recover to last-known-good ConfigMap: status and events should match expectations
[k8s.io] [Feature:DynamicKubeletConfig][NodeFeature:DynamicKubeletConfig][Serial][Disruptive] update Node.Spec.ConfigSource: recover to last-known-good ConfigMap: status and events should match expectations
[k8s.io] [Feature:DynamicKubeletConfig][NodeFeature:DynamicKubeletConfig][Serial][Disruptive] update Node.Spec.ConfigSource: state transitions: status and events should match expectations
[k8s.io] [Feature:DynamicKubeletConfig][NodeFeature:DynamicKubeletConfig][Serial][Disruptive] update Node.Spec.ConfigSource: state transitions: status and events should match expectations
[k8s.io] [Feature:DynamicKubeletConfig][NodeFeature:DynamicKubeletConfig][Serial][Disruptive] update Node.Spec.ConfigSource: state transitions: status and events should match expectations
[k8s.io] [Feature:DynamicKubeletConfig][NodeFeature:DynamicKubeletConfig][Serial][Disruptive] update Node.Spec.ConfigSource: state transitions: status and events should match expectations
[sig-api-machinery] Secrets should fail to create secret in volume due to empty secret key
[sig-api-machinery] Secrets should fail to create secret in volume due to empty secret key
[sig-api-machinery] Secrets should fail to create secret in volume due to empty secret key
[sig-api-machinery] Secrets should fail to create secret in volume due to empty secret key
[sig-node] CPU Manager [Serial] [Feature:CPUManager][NodeAlphaFeature:CPUManager] With kubeconfig updated with static CPU Manager policy run the CPU Manager tests should assign CPUs as expected based on the Pod spec
[sig-node] CPU Manager [Serial] [Feature:CPUManager][NodeAlphaFeature:CPUManager] With kubeconfig updated with static CPU Manager policy run the CPU Manager tests should assign CPUs as expected based on the Pod spec
[sig-node] CPU Manager [Serial] [Feature:CPUManager][NodeAlphaFeature:CPUManager] With kubeconfig updated with static CPU Manager policy run the CPU Manager tests should assign CPUs as expected based on the Pod spec
[sig-node] CPU Manager [Serial] [Feature:CPUManager][NodeAlphaFeature:CPUManager] With kubeconfig updated with static CPU Manager policy run the CPU Manager tests should assign CPUs as expected based on the Pod spec
[sig-node] ConfigMap should fail to create configMap in volume due to empty configmap key
[sig-node] ConfigMap should fail to create configMap in volume due to empty configmap key
[sig-node] ConfigMap should fail to create configMap in volume due to empty configmap key
[sig-node] ConfigMap should fail to create configMap in volume due to empty configmap key
[sig-node] Dockershim [Serial] [Disruptive] [Feature:Docker][Legacy:Docker] When all containers in pod are missing should complete pod sandbox clean up based on the information in sandbox checkpoint
[sig-node] Dockershim [Serial] [Disruptive] [Feature:Docker][Legacy:Docker] When all containers in pod are missing should complete pod sandbox clean up based on the information in sandbox checkpoint
[sig-node] Dockershim [Serial] [Disruptive] [Feature:Docker][Legacy:Docker] When all containers in pod are missing should complete pod sandbox clean up based on the information in sandbox checkpoint
[sig-node] Dockershim [Serial] [Disruptive] [Feature:Docker][Legacy:Docker] When all containers in pod are missing should complete pod sandbox clean up based on the information in sandbox checkpoint
[sig-node] Dockershim [Serial] [Disruptive] [Feature:Docker][Legacy:Docker] When checkpoint file is corrupted should complete pod sandbox clean up
[sig-node] Dockershim [Serial] [Disruptive] [Feature:Docker][Legacy:Docker] When checkpoint file is corrupted should complete pod sandbox clean up
[sig-node] Dockershim [Serial] [Disruptive] [Feature:Docker][Legacy:Docker] When checkpoint file is corrupted should complete pod sandbox clean up
[sig-node] Dockershim [Serial] [Disruptive] [Feature:Docker][Legacy:Docker] When checkpoint file is corrupted should complete pod sandbox clean up
[sig-node] Dockershim [Serial] [Disruptive] [Feature:Docker][Legacy:Docker] When pod sandbox checkpoint is missing should complete pod sandbox clean up
[sig-node] Dockershim [Serial] [Disruptive] [Feature:Docker][Legacy:Docker] When pod sandbox checkpoint is missing should complete pod sandbox clean up
[sig-node] Dockershim [Serial] [Disruptive] [Feature:Docker][Legacy:Docker] When pod sandbox checkpoint is missing should complete pod sandbox clean up
[sig-node] Dockershim [Serial] [Disruptive] [Feature:Docker][Legacy:Docker] When pod sandbox checkpoint is missing should complete pod sandbox clean up
[sig-node] Dockershim [Serial] [Disruptive] [Feature:Docker][Legacy:Docker] should clean up pod sandbox checkpoint after pod deletion
[sig-node] Dockershim [Serial] [Disruptive] [Feature:Docker][Legacy:Docker] should clean up pod sandbox checkpoint after pod deletion
[sig-node] Dockershim [Serial] [Disruptive] [Feature:Docker][Legacy:Docker] should clean up pod sandbox checkpoint after pod deletion
[sig-node] Dockershim [Serial] [Disruptive] [Feature:Docker][Legacy:Docker] should clean up pod sandbox checkpoint after pod deletion
[sig-node] Dockershim [Serial] [Disruptive] [Feature:Docker][Legacy:Docker] should remove dangling checkpoint file
[sig-node] Dockershim [Serial] [Disruptive] [Feature:Docker][Legacy:Docker] should remove dangling checkpoint file
[sig-node] Dockershim [Serial] [Disruptive] [Feature:Docker][Legacy:Docker] should remove dangling checkpoint file
[sig-node] Dockershim [Serial] [Disruptive] [Feature:Docker][Legacy:Docker] should remove dangling checkpoint file
[sig-node] HugePages [Serial] [Feature:HugePages][NodeFeature:HugePages] With config updated with hugepages feature enabled should assign hugepages as expected based on the Pod spec
[sig-node] HugePages [Serial] [Feature:HugePages][NodeFeature:HugePages] With config updated with hugepages feature enabled should assign hugepages as expected based on the Pod spec
[sig-node] HugePages [Serial] [Feature:HugePages][NodeFeature:HugePages] With config updated with hugepages feature enabled should assign hugepages as expected based on the Pod spec
[sig-node] HugePages [Serial] [Feature:HugePages][NodeFeature:HugePages] With config updated with hugepages feature enabled should assign hugepages as expected based on the Pod spec
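The HugePages case checks that hugepage-backed memory is assigned according to the pod spec. A rough sketch, assuming the node pre-allocates 2Mi hugepages; the sizes and names are illustrative, not what the suite requests:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Hugepages are requested like any other resource and consumed through an
	// emptyDir volume backed by the HugePages medium.
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "hugepages-demo"},
		Spec: corev1.PodSpec{
			Volumes: []corev1.Volume{
				{Name: "hugepage", VolumeSource: corev1.VolumeSource{
					EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumHugePages},
				}},
			},
			Containers: []corev1.Container{
				{
					Name:  "app",
					Image: "busybox",
					Resources: corev1.ResourceRequirements{
						Limits: corev1.ResourceList{
							corev1.ResourceName("hugepages-2Mi"): resource.MustParse("64Mi"),
							corev1.ResourceMemory:                resource.MustParse("64Mi"),
						},
					},
					VolumeMounts: []corev1.VolumeMount{{Name: "hugepage", MountPath: "/hugepages"}},
				},
			},
		},
	}
	fmt.Printf("%v\n", pod.Spec.Containers[0].Resources.Limits)
}
```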
[sig-node] Node Performance Testing [Serial] [Slow] Run node performance testing with pre-defined workloads run each pre-defined workload
[sig-node] Node Performance Testing [Serial] [Slow] Run node performance testing with pre-defined workloads run each pre-defined workload
[sig-node] Node Performance Testing [Serial] [Slow] Run node performance testing with pre-defined workloads run each pre-defined workload
[sig-node] Node Performance Testing [Serial] [Slow] Run node performance testing with pre-defined workloads run each pre-defined workload
[sig-node] PodPidsLimit [Serial] [Feature:SupportPodPidsLimit][NodeFeature:SupportPodPidsLimit] With config updated with pids feature enabled should set pids.max for Pod
[sig-node] PodPidsLimit [Serial] [Feature:SupportPodPidsLimit][NodeFeature:SupportPodPidsLimit] With config updated with pids feature enabled should set pids.max for Pod
[sig-node] PodPidsLimit [Serial] [Feature:SupportPodPidsLimit][NodeFeature:SupportPodPidsLimit] With config updated with pids feature enabled should set pids.max for Pod
[sig-node] PodPidsLimit [Serial] [Feature:SupportPodPidsLimit][NodeFeature:SupportPodPidsLimit] With config updated with pids feature enabled should set pids.max for Pod
[sig-node] Resource-usage [Serial] [Slow] regular resource usage tracking resource tracking for 0 pods per node [Benchmark]
[sig-node] Resource-usage [Serial] [Slow] regular resource usage tracking resource tracking for 0 pods per node [Benchmark]
[sig-node] Resource-usage [Serial] [Slow] regular resource usage tracking resource tracking for 0 pods per node [Benchmark]
[sig-node] Resource-usage [Serial] [Slow] regular resource usage tracking resource tracking for 0 pods per node [Benchmark]
[sig-node] Resource-usage [Serial] [Slow] regular resource usage tracking resource tracking for 10 pods per node
[sig-node] Resource-usage [Serial] [Slow] regular resource usage tracking resource tracking for 10 pods per node
[sig-node] Resource-usage [Serial] [Slow] regular resource usage tracking resource tracking for 10 pods per node
[sig-node] Resource-usage [Serial] [Slow] regular resource usage tracking resource tracking for 10 pods per node
[sig-node] Resource-usage [Serial] [Slow] regular resource usage tracking resource tracking for 10 pods per node [Benchmark]
[sig-node] Resource-usage [Serial] [Slow] regular resource usage tracking resource tracking for 10 pods per node [Benchmark]
[sig-node] Resource-usage [Serial] [Slow] regular resource usage tracking resource tracking for 10 pods per node [Benchmark]
[sig-node] Resource-usage [Serial] [Slow] regular resource usage tracking resource tracking for 10 pods per node [Benchmark]
[sig-node] Resource-usage [Serial] [Slow] regular resource usage tracking resource tracking for 105 pods per node [Benchmark]
[sig-node] Resource-usage [Serial] [Slow] regular resource usage tracking resource tracking for 105 pods per node [Benchmark]
[sig-node] Resource-usage [Serial] [Slow] regular resource usage tracking resource tracking for 105 pods per node [Benchmark]
[sig-node] Resource-usage [Serial] [Slow] regular resource usage tracking resource tracking for 105 pods per node [Benchmark]
[sig-node] Resource-usage [Serial] [Slow] regular resource usage tracking resource tracking for 35 pods per node [Benchmark]
[sig-node] Resource-usage [Serial] [Slow] regular resource usage tracking resource tracking for 35 pods per node [Benchmark]
[sig-node] Resource-usage [Serial] [Slow] regular resource usage tracking resource tracking for 35 pods per node [Benchmark]
[sig-node] Resource-usage [Serial] [Slow] regular resource usage tracking resource tracking for 35 pods per node [Benchmark]
[sig-storage] ConfigMap Should fail non-optional pod creation due to configMap object does not exist [Slow]
[sig-storage] ConfigMap Should fail non-optional pod creation due to configMap object does not exist [Slow]
[sig-storage] ConfigMap Should fail non-optional pod creation due to configMap object does not exist [Slow]
[sig-storage] ConfigMap Should fail non-optional pod creation due to configMap object does not exist [Slow]
[sig-storage] ConfigMap Should fail non-optional pod creation due to the key in the configMap object does not exist [Slow]
[sig-storage] ConfigMap Should fail non-optional pod creation due to the key in the configMap object does not exist [Slow]
[sig-storage] ConfigMap Should fail non-optional pod creation due to the key in the configMap object does not exist [Slow]
[sig-storage] ConfigMap Should fail non-optional pod creation due to the key in the configMap object does not exist [Slow]
[sig-storage] ConfigMap should be consumable from pods in volume as non-root with FSGroup [NodeFeature:FSGroup]
[sig-storage] ConfigMap should be consumable from pods in volume as non-root with FSGroup [NodeFeature:FSGroup]
[sig-storage] ConfigMap should be consumable from pods in volume as non-root with FSGroup [NodeFeature:FSGroup]
[sig-storage] ConfigMap should be consumable from pods in volume as non-root with FSGroup [NodeFeature:FSGroup]
[sig-storage] ConfigMap should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeFeature:FSGroup]
[sig-storage] ConfigMap should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeFeature:FSGroup]
[sig-storage] ConfigMap should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeFeature:FSGroup]
[sig-storage] ConfigMap should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeFeature:FSGroup]
[sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root with FSGroup [NodeFeature:FSGroup]
[sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root with FSGroup [NodeFeature:FSGroup]
[sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root with FSGroup [NodeFeature:FSGroup]
[sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root with FSGroup [NodeFeature:FSGroup]
[sig-storage] Downward API volume should provide podname as non-root with fsgroup [NodeFeature:FSGroup]
[sig-storage] Downward API volume should provide podname as non-root with fsgroup [NodeFeature:FSGroup]
[sig-storage] Downward API volume should provide podname as non-root with fsgroup [NodeFeature:FSGroup]
[sig-storage] Downward API volume should provide podname as non-root with fsgroup [NodeFeature:FSGroup]
[sig-storage] Downward API volume should provide podname as non-root with fsgroup and defaultMode [NodeFeature:FSGroup]
[sig-storage] Downward API volume should provide podname as non-root with fsgroup and defaultMode [NodeFeature:FSGroup]
[sig-storage] Downward API volume should provide podname as non-root with fsgroup and defaultMode [NodeFeature:FSGroup]
[sig-storage] Downward API volume should provide podname as non-root with fsgroup and defaultMode [NodeFeature:FSGroup]
[sig-storage] EmptyDir volumes when FSGroup is specified [NodeFeature:FSGroup] files with FSGroup ownership should support (root,0644,tmpfs)
[sig-storage] EmptyDir volumes when FSGroup is specified [NodeFeature:FSGroup] files with FSGroup ownership should support (root,0644,tmpfs)
[sig-storage] EmptyDir volumes when FSGroup is specified [NodeFeature:FSGroup] files with FSGroup ownership should support (root,0644,tmpfs)
[sig-storage] EmptyDir volumes when FSGroup is specified [NodeFeature:FSGroup] files with FSGroup ownership should support (root,0644,tmpfs)
[sig-storage] EmptyDir volumes when FSGroup is specified [NodeFeature:FSGroup] new files should be created with FSGroup ownership when container is non-root
[sig-storage] EmptyDir volumes when FSGroup is specified [NodeFeature:FSGroup] new files should be created with FSGroup ownership when container is non-root
[sig-storage] EmptyDir volumes when FSGroup is specified [NodeFeature:FSGroup] new files should be created with FSGroup ownership when container is non-root
[sig-storage] EmptyDir volumes when FSGroup is specified [NodeFeature:FSGroup] new files should be created with FSGroup ownership when container is non-root
[sig-storage] EmptyDir volumes when FSGroup is specified [NodeFeature:FSGroup] new files should be created with FSGroup ownership when container is root
[sig-storage] EmptyDir volumes when FSGroup is specified [NodeFeature:FSGroup] new files should be created with FSGroup ownership when container is root
[sig-storage] EmptyDir volumes when FSGroup is specified [NodeFeature:FSGroup] new files should be created with FSGroup ownership when container is root
[sig-storage] EmptyDir volumes when FSGroup is specified [NodeFeature:FSGroup] new files should be created with FSGroup ownership when container is root
[sig-storage] EmptyDir volumes when FSGroup is specified [NodeFeature:FSGroup] nonexistent volume subPath should have the correct mode and owner using FSGroup
[sig-storage] EmptyDir volumes when FSGroup is specified [NodeFeature:FSGroup] nonexistent volume subPath should have the correct mode and owner using FSGroup
[sig-storage] EmptyDir volumes when FSGroup is specified [NodeFeature:FSGroup] nonexistent volume subPath should have the correct mode and owner using FSGroup
[sig-storage] EmptyDir volumes when FSGroup is specified [NodeFeature:FSGroup] nonexistent volume subPath should have the correct mode and owner using FSGroup
[sig-storage] EmptyDir volumes when FSGroup is specified [NodeFeature:FSGroup] volume on default medium should have the correct mode using FSGroup
[sig-storage] EmptyDir volumes when FSGroup is specified [NodeFeature:FSGroup] volume on default medium should have the correct mode using FSGroup
[sig-storage] EmptyDir volumes when FSGroup is specified [NodeFeature:FSGroup] volume on default medium should have the correct mode using FSGroup
[sig-storage] EmptyDir volumes when FSGroup is specified [NodeFeature:FSGroup] volume on default medium should have the correct mode using FSGroup
[sig-storage] EmptyDir volumes when FSGroup is specified [NodeFeature:FSGroup] volume on tmpfs should have the correct mode using FSGroup
[sig-storage] EmptyDir volumes when FSGroup is specified [NodeFeature:FSGroup] volume on tmpfs should have the correct mode using FSGroup
[sig-storage] EmptyDir volumes when FSGroup is specified [NodeFeature:FSGroup] volume on tmpfs should have the correct mode using FSGroup
[sig-storage] EmptyDir volumes when FSGroup is specified [NodeFeature:FSGroup] volume on tmpfs should have the correct mode using FSGroup
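The FSGroup cases above verify ownership and mode of volume contents when pod.spec.securityContext.fsGroup is set, for both root and non-root containers. A minimal, hypothetical example; the user and group IDs are arbitrary:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	fsGroup := int64(1234)
	runAsUser := int64(1000)
	// Files created in the tmpfs-backed emptyDir should be owned by fsGroup
	// and the volume made group-accessible, whether the container runs as
	// root or as the non-root runAsUser.
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "fsgroup-demo"},
		Spec: corev1.PodSpec{
			SecurityContext: &corev1.PodSecurityContext{
				FSGroup:   &fsGroup,
				RunAsUser: &runAsUser,
			},
			Volumes: []corev1.Volume{
				{Name: "scratch", VolumeSource: corev1.VolumeSource{
					EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumMemory},
				}},
			},
			Containers: []corev1.Container{
				{
					Name:         "app",
					Image:        "busybox",
					Command:      []string{"sleep", "3600"},
					VolumeMounts: []corev1.VolumeMount{{Name: "scratch", MountPath: "/data"}},
				},
			},
		},
	}
	fmt.Println(*pod.Spec.SecurityContext.FSGroup)
}
```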
[sig-storage] GCP Volumes GlusterFS should be mountable
[sig-storage] GCP Volumes GlusterFS should be mountable
[sig-storage] GCP Volumes GlusterFS should be mountable
[sig-storage] GCP Volumes GlusterFS should be mountable
[sig-storage] GCP Volumes NFSv3 should be mountable for NFSv3
[sig-storage] GCP Volumes NFSv3 should be mountable for NFSv3
[sig-storage] GCP Volumes NFSv3 should be mountable for NFSv3
[sig-storage] GCP Volumes NFSv3 should be mountable for NFSv3
[sig-storage] GCP Volumes NFSv4 should be mountable for NFSv4
[sig-storage] GCP Volumes NFSv4 should be mountable for NFSv4
[sig-storage] GCP Volumes NFSv4 should be mountable for NFSv4
[sig-storage] GCP Volumes NFSv4 should be mountable for NFSv4
[sig-storage] Projected configMap Should fail non-optional pod creation due to configMap object does not exist [Slow]
[sig-storage] Projected configMap Should fail non-optional pod creation due to configMap object does not exist [Slow]
[sig-storage] Projected configMap Should fail non-optional pod creation due to configMap object does not exist [Slow]
[sig-storage] Projected configMap Should fail non-optional pod creation due to configMap object does not exist [Slow]
[sig-storage] Projected configMap Should fail non-optional pod creation due to the key in the configMap object does not exist [Slow]
[sig-storage] Projected configMap Should fail non-optional pod creation due to the key in the configMap object does not exist [Slow]
[sig-storage] Projected configMap Should fail non-optional pod creation due to the key in the configMap object does not exist [Slow]
[sig-storage] Projected configMap Should fail non-optional pod creation due to the key in the configMap object does not exist [Slow]
[sig-storage] Projected configMap should be consumable from pods in volume as non-root with FSGroup [NodeFeature:FSGroup]
[sig-storage] Projected configMap should be consumable from pods in volume as non-root with FSGroup [NodeFeature:FSGroup]
[sig-storage] Projected configMap should be consumable from pods in volume as non-root with FSGroup [NodeFeature:FSGroup]
[sig-storage] Projected configMap should be consumable from pods in volume as non-root with FSGroup [NodeFeature:FSGroup]
[sig-storage] Projected configMap should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeFeature:FSGroup]
[sig-storage] Projected configMap should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeFeature:FSGroup]
[sig-storage] Projected configMap should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeFeature:FSGroup]
[sig-storage] Projected configMap should be consumable from pods in volume as non-root with defaultMode and fsGroup set [NodeFeature:FSGroup]
[sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root with FSGroup [NodeFeature:FSGroup]
[sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root with FSGroup [NodeFeature:FSGroup]
[sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root with FSGroup [NodeFeature:FSGroup]
[sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root with FSGroup [NodeFeature:FSGroup]
[sig-storage] Projected downwardAPI should provide podname as non-root with fsgroup [NodeFeature:FSGroup]
[sig-storage] Projected downwardAPI should provide podname as non-root with fsgroup [NodeFeature:FSGroup]
[sig-storage] Projected downwardAPI should provide podname as non-root with fsgroup [NodeFeature:FSGroup]
[sig-storage] Projected downwardAPI should provide podname as non-root with fsgroup [NodeFeature:FSGroup]
[sig-storage] Projected downwardAPI should provide podname as non-root with fsgroup and defaultMode [NodeFeature:FSGroup]
[sig-storage] Projected downwardAPI should provide podname as non-root with fsgroup and defaultMode [NodeFeature:FSGroup]
[sig-storage] Projected downwardAPI should provide podname as non-root with fsgroup and defaultMode [NodeFeature:FSGroup]
[sig-storage] Projected downwardAPI should provide podname as non-root with fsgroup and defaultMode [NodeFeature:FSGroup]
[sig-storage] Projected secret Should fail non-optional pod creation due to secret object does not exist [Slow]
[sig-storage] Projected secret Should fail non-optional pod creation due to secret object does not exist [Slow]
[sig-storage] Projected secret Should fail non-optional pod creation due to secret object does not exist [Slow]
[sig-storage] Projected secret Should fail non-optional pod creation due to secret object does not exist [Slow]
[sig-storage] Projected secret Should fail non-optional pod creation due to the key in the secret object does not exist [Slow]
[sig-storage] Projected secret Should fail non-optional pod creation due to the key in the secret object does not exist [Slow]
[sig-storage] Projected secret Should fail non-optional pod creation due to the key in the secret object does not exist [Slow]
[sig-storage] Projected secret Should fail non-optional pod creation due to the key in the secret object does not exist [Slow]
[sig-storage] Secrets Should fail non-optional pod creation due to secret object does not exist [Slow]
[sig-storage] Secrets Should fail non-optional pod creation due to secret object does not exist [Slow]
[sig-storage] Secrets Should fail non-optional pod creation due to secret object does not exist [Slow]
[sig-storage] Secrets Should fail non-optional pod creation due to secret object does not exist [Slow]
[sig-storage] Secrets Should fail non-optional pod creation due to the key in the secret object does not exist [Slow]
[sig-storage] Secrets Should fail non-optional pod creation due to the key in the secret object does not exist [Slow]
[sig-storage] Secrets Should fail non-optional pod creation due to the key in the secret object does not exist [Slow]
[sig-storage] Secrets Should fail non-optional pod creation due to the key in the secret object does not exist [Slow]