Result: FAILURE
Tests: 712 failed / 75 succeeded
Started: 2020-02-05 19:45
Elapsed: 1h32m
Builder: gke-prow-default-pool-cf4891d4-r4zq
pod: cb0493e7-484f-11ea-996d-0a03f2419e8d
resultstore: https://source.cloud.google.com/results/invocations/097ca166-1516-4392-89df-5f3548cab5f6/targets/test
infra-commit: 7454984be
job-version: v1.16.7-beta.0.23+0a70c2fa6d4642
master_os_image:
node_os_image: cos-77-12371-89-0
revision: v1.16.7-beta.0.23+0a70c2fa6d4642

Test Failures


Cluster downgrade hpa-upgrade 11m35s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Cluster\sdowngrade\shpa\-upgrade$'
Unexpected error:
    <*errors.StatusError | 0xc0031ee140>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {
                SelfLink: "",
                ResourceVersion: "",
                Continue: "",
                RemainingItemCount: nil,
            },
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/namespaces/hpa-upgrade-8061/replicationcontrollers/res-cons-upgrade\\\": the server has received too many requests and has asked us to try again later\") has prevented the request from succeeding (get replicationcontrollers res-cons-upgrade)",
            Reason: "InternalError",
            Details: {
                Name: "res-cons-upgrade",
                Group: "",
                Kind: "replicationcontrollers",
                UID: "",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/namespaces/hpa-upgrade-8061/replicationcontrollers/res-cons-upgrade\": the server has received too many requests and has asked us to try again later",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/namespaces/hpa-upgrade-8061/replicationcontrollers/res-cons-upgrade\": the server has received too many requests and has asked us to try again later") has prevented the request from succeeding (get replicationcontrollers res-cons-upgrade)
occurred

k8s.io/kubernetes/test/e2e/common.(*ResourceConsumer).GetReplicas(0xc003334a50, 0xc0033dbd40)
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/autoscaling_utils.go:337 +0x66c
k8s.io/kubernetes/test/e2e/common.(*ResourceConsumer).WaitForReplicas.func1(0xc0025f9b78, 0xc0025f9b60, 0x2)
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/autoscaling_utils.go:369 +0x37
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.WaitFor(0xc0031887e0, 0xc0028f6360, 0xc0033dbd40, 0x0, 0x0)
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:426 +0x137
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.pollInternal(0xc0031887e0, 0xc0028f6360, 0x0, 0x0)
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:312 +0x8a
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.pollImmediateInternal(0xc0031887e0, 0xc0028f6360, 0xc0031887e0, 0xc0028f6360)
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:337 +0x70
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.PollImmediate(0x4a817c800, 0xd18c2e2800, 0xc0028f6360, 0x1, 0x27)
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:326 +0x4d
k8s.io/kubernetes/test/e2e/common.(*ResourceConsumer).WaitForReplicas(0xc003334a50, 0x3, 0xd18c2e2800)
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/autoscaling_utils.go:368 +0x81
k8s.io/kubernetes/test/e2e/upgrades.(*HPAUpgradeTest).test(0x7ea44e0)
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/upgrades/horizontal_pod_autoscalers.go:90 +0x3e3
k8s.io/kubernetes/test/e2e/upgrades.(*HPAUpgradeTest).Setup(0x7ea44e0, 0xc000559540)
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/upgrades/horizontal_pod_autoscalers.go:60 +0x1f7
k8s.io/kubernetes/test/e2e/lifecycle.(*chaosMonkeyAdapter).Test(0xc002998c40, 0xc00299b880)
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/lifecycle/cluster_upgrade.go:395 +0x2c1
k8s.io/kubernetes/test/e2e/chaosmonkey.(*Chaosmonkey).Do.func1(0xc00299b880, 0xc002974c00)
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/chaosmonkey/chaosmonkey.go:90 +0x76
created by k8s.io/kubernetes/test/e2e/chaosmonkey.(*Chaosmonkey).Do
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/chaosmonkey/chaosmonkey.go:87 +0xa7
from junit_upgradeupgrades.xml
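
The error above is a client-go *errors.StatusError: Code 500, Reason "InternalError", wrapping the apiserver's "too many requests" overload message, and the stack trace shows it escaping a wait.PollImmediate loop (the hex arguments appear to decode to a 20s poll interval and a 15m timeout). Below is a minimal Go sketch, not the e2e framework's actual code, of how such an error can be classified as retryable with the k8s.io/apimachinery helpers; getReplicas is a hypothetical stand-in for the real API call.

package main

import (
	"fmt"
	"time"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	"k8s.io/apimachinery/pkg/util/wait"
)

// getReplicas stands in for the real GetReplicas call in autoscaling_utils.go
// (hypothetical; it would read the replica count from the RC under test).
func getReplicas() (int, error) {
	return 0, fmt.Errorf("not implemented in this sketch")
}

// waitForReplicas polls until the target count is reached, treating apiserver
// overload (the 500 "too many requests" above, or a genuine 429) as retryable
// rather than as an immediate test failure.
func waitForReplicas(want int) error {
	return wait.PollImmediate(20*time.Second, 15*time.Minute, func() (bool, error) {
		got, err := getReplicas()
		switch {
		case err == nil:
			return got == want, nil
		case apierrors.IsInternalError(err) || apierrors.IsTooManyRequests(err):
			return false, nil // transient overload: keep polling
		default:
			return false, err // anything else aborts the wait
		}
	})
}

func main() {
	if err := waitForReplicas(3); err != nil {
		fmt.Println("wait failed:", err)
	}
}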



Kubernetes e2e suite [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance] 15s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[k8s\.io\]\sContainer\sLifecycle\sHook\swhen\screate\sa\spod\swith\slifecycle\shook\sshould\sexecute\spoststart\sexec\shook\sproperly\s\[NodeConformance\]\s\[Conformance\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
Unexpected error:
    <*errors.withStack | 0xc001c1ff00>: {
        error: {
            cause: {
                ErrStatus: {
                    TypeMeta: {Kind: "", APIVersion: ""},
                    ListMeta: {
                        SelfLink: "",
                        ResourceVersion: "",
                        Continue: "",
                        RemainingItemCount: nil,
                    },
                    Status: "Failure",
                    Message: "rolebindings.rbac.authorization.k8s.io \"container-lifecycle-hook-7677--e2e-test-privileged-psp\" is forbidden: user \"pr-kubekins@kubernetes-jenkins-pull.iam.gserviceaccount.com\" (groups=[\"system:authenticated\"]) is attempting to grant RBAC permissions not currently held:\n{APIGroups:[\"extensions\"], Resources:[\"podsecuritypolicies\"], ResourceNames:[\"e2e-test-privileged-psp\"], Verbs:[\"use\"]}",
                    Reason: "Forbidden",
                    Details: {
                        Name: "container-lifecycle-hook-7677--e2e-test-privileged-psp",
                        Group: "rbac.authorization.k8s.io",
                        Kind: "rolebindings",
                        UID: "",
                        Causes: nil,
                        RetryAfterSeconds: 0,
                    },
                    Code: 403,
                },
            },
            msg: "binding ClusterRole/e2e-test-privileged-psp into \"container-lifecycle-hook-7677\" for [{ServiceAccount  default container-lifecycle-hook-7677}]",
        },
        stack: [0x15a53ee, 0x15eeda1, 0x15eed28, 0x15c416d, 0x15c297b, 0x7ac6dc, 0x7ac34f, 0x7ac774, 0x7b2441, 0x7b2064, 0x7b7acf, 0x7b75e4, 0x7b6e27, 0x7b948e, 0x7bbfb7, 0x7bbcfd, 0x36fd267, 0x370025b, 0x507960, 0x4607c1],
    }
    binding ClusterRole/e2e-test-privileged-psp into "container-lifecycle-hook-7677" for [{ServiceAccount  default container-lifecycle-hook-7677}]: rolebindings.rbac.authorization.k8s.io "container-lifecycle-hook-7677--e2e-test-privileged-psp" is forbidden: user "pr-kubekins@kubernetes-jenkins-pull.iam.gserviceaccount.com" (groups=["system:authenticated"]) is attempting to grant RBAC permissions not currently held:
    {APIGroups:["extensions"], Resources:["podsecuritypolicies"], ResourceNames:["e2e-test-privileged-psp"], Verbs:["use"]}
occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/psp_util.go:151
				
from junit_skew08.xml
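
This 403 comes from RBAC escalation prevention: the test framework tries to bind ClusterRole/e2e-test-privileged-psp into the fresh namespace, but a user may only grant permissions it already holds. Below is a small illustrative Go sketch (not the framework's code) of the one rule quoted in the Forbidden message, built with k8s.io/api/rbac/v1; granting this rule to the pr-kubekins service account, or giving it bind/escalate rights on the ClusterRole, would let the binding through.

package main

import (
	"fmt"

	rbacv1 "k8s.io/api/rbac/v1"
)

func main() {
	// The exact permission block quoted in the Forbidden message. To create a
	// RoleBinding that grants it, the binding user must already hold it (or
	// hold "bind"/"escalate" on the referenced ClusterRole).
	pspUse := rbacv1.PolicyRule{
		APIGroups:     []string{"extensions"},
		Resources:     []string{"podsecuritypolicies"},
		ResourceNames: []string{"e2e-test-privileged-psp"},
		Verbs:         []string{"use"},
	}
	fmt.Printf("%+v\n", pspUse)
}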



Kubernetes e2e suite [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance] 15s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[k8s\.io\]\sContainer\sLifecycle\sHook\swhen\screate\sa\spod\swith\slifecycle\shook\sshould\sexecute\spoststart\shttp\shook\sproperly\s\[NodeConformance\]\s\[Conformance\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
Unexpected error:
    <*errors.withStack | 0xc0029a3560>: {
        error: {
            cause: {
                ErrStatus: {
                    TypeMeta: {Kind: "", APIVersion: ""},
                    ListMeta: {
                        SelfLink: "",
                        ResourceVersion: "",
                        Continue: "",
                        RemainingItemCount: nil,
                    },
                    Status: "Failure",
                    Message: "rolebindings.rbac.authorization.k8s.io \"container-lifecycle-hook-5497--e2e-test-privileged-psp\" is forbidden: user \"pr-kubekins@kubernetes-jenkins-pull.iam.gserviceaccount.com\" (groups=[\"system:authenticated\"]) is attempting to grant RBAC permissions not currently held:\n{APIGroups:[\"extensions\"], Resources:[\"podsecuritypolicies\"], ResourceNames:[\"e2e-test-privileged-psp\"], Verbs:[\"use\"]}",
                    Reason: "Forbidden",
                    Details: {
                        Name: "container-lifecycle-hook-5497--e2e-test-privileged-psp",
                        Group: "rbac.authorization.k8s.io",
                        Kind: "rolebindings",
                        UID: "",
                        Causes: nil,
                        RetryAfterSeconds: 0,
                    },
                    Code: 403,
                },
            },
            msg: "binding ClusterRole/e2e-test-privileged-psp into \"container-lifecycle-hook-5497\" for [{ServiceAccount  default container-lifecycle-hook-5497}]",
        },
        stack: [0x15a53ee, 0x15eeda1, 0x15eed28, 0x15c416d, 0x15c297b, 0x7ac6dc, 0x7ac34f, 0x7ac774, 0x7b2441, 0x7b2064, 0x7b7acf, 0x7b75e4, 0x7b6e27, 0x7b948e, 0x7bbfb7, 0x7bbcfd, 0x36fd267, 0x370025b, 0x507960, 0x4607c1],
    }
    binding ClusterRole/e2e-test-privileged-psp into "container-lifecycle-hook-5497" for [{ServiceAccount  default container-lifecycle-hook-5497}]: rolebindings.rbac.authorization.k8s.io "container-lifecycle-hook-5497--e2e-test-privileged-psp" is forbidden: user "pr-kubekins@kubernetes-jenkins-pull.iam.gserviceaccount.com" (groups=["system:authenticated"]) is attempting to grant RBAC permissions not currently held:
    {APIGroups:["extensions"], Resources:["podsecuritypolicies"], ResourceNames:["e2e-test-privileged-psp"], Verbs:["use"]}
occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/psp_util.go:151
				
from junit_skew21.xml



Kubernetes e2e suite [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance] 44s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[k8s\.io\]\sContainer\sLifecycle\sHook\swhen\screate\sa\spod\swith\slifecycle\shook\sshould\sexecute\sprestop\sexec\shook\sproperly\s\[NodeConformance\]\s\[Conformance\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Error creating Pod
Unexpected error:
    <*errors.StatusError | 0xc0002afe00>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {
                SelfLink: "",
                ResourceVersion: "",
                Continue: "",
                RemainingItemCount: nil,
            },
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/namespaces/container-lifecycle-hook-411/pods\\\": the server has received too many requests and has asked us to try again later\") has prevented the request from succeeding (post pods)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "pods",
                UID: "",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/namespaces/container-lifecycle-hook-411/pods\": the server has received too many requests and has asked us to try again later",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/namespaces/container-lifecycle-hook-411/pods\": the server has received too many requests and has asked us to try again later") has prevented the request from succeeding (post pods)
occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:80
				
from junit_skew04.xml



Kubernetes e2e suite [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance] 13s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[k8s\.io\]\sContainer\sLifecycle\sHook\swhen\screate\sa\spod\swith\slifecycle\shook\sshould\sexecute\sprestop\shttp\shook\sproperly\s\[NodeConformance\]\s\[Conformance\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
Unexpected error:
    <*errors.StatusError | 0xc0024b4780>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {
                SelfLink: "",
                ResourceVersion: "",
                Continue: "",
                RemainingItemCount: nil,
            },
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/namespaces/container-lifecycle-hook-1224/serviceaccounts?fieldSelector=metadata.name%3Ddefault&amp;watch=true\\\": the server has received too many requests and has asked us to try again later\") has prevented the request from succeeding (get serviceaccounts)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceaccounts",
                UID: "",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/namespaces/container-lifecycle-hook-1224/serviceaccounts?fieldSelector=metadata.name%3Ddefault&amp;watch=true\": the server has received too many requests and has asked us to try again later",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/namespaces/container-lifecycle-hook-1224/serviceaccounts?fieldSelector=metadata.name%3Ddefault&watch=true\": the server has received too many requests and has asked us to try again later") has prevented the request from succeeding (get serviceaccounts)
occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:217
				
from junit_skew07.xml



Kubernetes e2e suite [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] 24s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[k8s\.io\]\sContainer\sRuntime\sblackbox\stest\son\sterminated\scontainer\sshould\sreport\stermination\smessage\s\[LinuxOnly\]\sas\sempty\swhen\spod\ssucceeds\sand\sTerminationMessagePolicy\sFallbackToLogsOnError\sis\sset\s\[NodeConformance\]\s\[Conformance\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  5 20:42:30.920: Couldn't delete ns: "container-runtime-1047": an error on the server ("Internal Server Error: \"/apis/apps/v1/namespaces/container-runtime-1047/daemonsets\": the server has received too many requests and has asked us to try again later") has prevented the request from succeeding (&errors.StatusError{ErrStatus:v1.Status{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ListMeta:v1.ListMeta{SelfLink:"", ResourceVersion:"", Continue:"", RemainingItemCount:(*int64)(nil)}, Status:"Failure", Message:"an error on the server (\"Internal Server Error: \\\"/apis/apps/v1/namespaces/container-runtime-1047/daemonsets\\\": the server has received too many requests and has asked us to try again later\") has prevented the request from succeeding", Reason:"InternalError", Details:(*v1.StatusDetails)(0xc00222ff80), Code:500}})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:335
				
from junit_skew19.xml
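
Here the test body passed but teardown failed: the framework deletes the test namespace and then lists its remaining contents, and one of those list calls (daemonsets) hit the same overloaded apiserver. Below is a hedged Go sketch of a teardown helper that rides out the transient 500s instead of failing on the first one; deleteNamespaceAndWait is hypothetical, and it assumes a context-taking client-go (newer than the 1.16-era client vendored here).

package teardown

import (
	"context"
	"time"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// deleteNamespaceAndWait issues the delete, then polls until the namespace is
// actually gone, tolerating transient apiserver overload along the way.
func deleteNamespaceAndWait(c kubernetes.Interface, ns string) error {
	err := c.CoreV1().Namespaces().Delete(context.TODO(), ns, metav1.DeleteOptions{})
	if err != nil && !apierrors.IsNotFound(err) {
		return err
	}
	return wait.PollImmediate(5*time.Second, 5*time.Minute, func() (bool, error) {
		_, err := c.CoreV1().Namespaces().Get(context.TODO(), ns, metav1.GetOptions{})
		switch {
		case apierrors.IsNotFound(err):
			return true, nil // namespace fully removed
		case apierrors.IsInternalError(err) || apierrors.IsTooManyRequests(err):
			return false, nil // overloaded apiserver: keep waiting
		case err != nil:
			return false, err
		default:
			return false, nil // still terminating
		}
	})
}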



Kubernetes e2e suite [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] 44s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[k8s\.io\]\sContainer\sRuntime\sblackbox\stest\son\sterminated\scontainer\sshould\sreport\stermination\smessage\s\[LinuxOnly\]\sfrom\sfile\swhen\spod\ssucceeds\sand\sTerminationMessagePolicy\sFallbackToLogsOnError\sis\sset\s\[NodeConformance\]\s\[Conformance\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  5 20:59:15.367: Couldn't delete ns: "container-runtime-7659": an error on the server ("Internal Server Error: \"/api/v1/namespaces/container-runtime-7659/podtemplates\": the server has received too many requests and has asked us to try again later") has prevented the request from succeeding (&errors.StatusError{ErrStatus:v1.Status{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ListMeta:v1.ListMeta{SelfLink:"", ResourceVersion:"", Continue:"", RemainingItemCount:(*int64)(nil)}, Status:"Failure", Message:"an error on the server (\"Internal Server Error: \\\"/api/v1/namespaces/container-runtime-7659/podtemplates\\\": the server has received too many requests and has asked us to try again later\") has prevented the request from succeeding", Reason:"InternalError", Details:(*v1.StatusDetails)(0xc0033c67e0), Code:500}})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:335
				
from junit_skew03.xml



Kubernetes e2e suite [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] 14s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[k8s\.io\]\sContainer\sRuntime\sblackbox\stest\son\sterminated\scontainer\sshould\sreport\stermination\smessage\s\[LinuxOnly\]\sfrom\slog\soutput\sif\sTerminationMessagePolicy\sFallbackToLogsOnError\sis\sset\s\[NodeConformance\]\s\[Conformance\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
Unexpected error:
    <*errors.withStack | 0xc00179ca20>: {
        error: {
            cause: {
                ErrStatus: {
                    TypeMeta: {Kind: "", APIVersion: ""},
                    ListMeta: {
                        SelfLink: "",
                        ResourceVersion: "",
                        Continue: "",
                        RemainingItemCount: nil,
                    },
                    Status: "Failure",
                    Message: "an error on the server (\"Internal Server Error: \\\"/apis/rbac.authorization.k8s.io/v1beta1/namespaces/container-runtime-5848/rolebindings\\\": the server has received too many requests and has asked us to try again later\") has prevented the request from succeeding (post rolebindings.rbac.authorization.k8s.io)",
                    Reason: "InternalError",
                    Details: {
                        Name: "",
                        Group: "rbac.authorization.k8s.io",
                        Kind: "rolebindings",
                        UID: "",
                        Causes: [
                            {
                                Type: "UnexpectedServerResponse",
                                Message: "Internal Server Error: \"/apis/rbac.authorization.k8s.io/v1beta1/namespaces/container-runtime-5848/rolebindings\": the server has received too many requests and has asked us to try again later",
                                Field: "",
                            },
                        ],
                        RetryAfterSeconds: 0,
                    },
                    Code: 500,
                },
            },
            msg: "binding ClusterRole/e2e-test-privileged-psp into \"container-runtime-5848\" for [{ServiceAccount  default container-runtime-5848}]",
        },
        stack: [0x15a53ee, 0x15eeda1, 0x15eed28, 0x15c416d, 0x15c297b, 0x7ac6dc, 0x7ac34f, 0x7ac774, 0x7b2441, 0x7b2064, 0x7b7acf, 0x7b75e4, 0x7b6e27, 0x7b948e, 0x7bbfb7, 0x7bbcfd, 0x36fd267, 0x370025b, 0x507960, 0x4607c1],
    }
    binding ClusterRole/e2e-test-privileged-psp into "container-runtime-5848" for [{ServiceAccount  default container-runtime-5848}]: an error on the server ("Internal Server Error: \"/apis/rbac.authorization.k8s.io/v1beta1/namespaces/container-runtime-5848/rolebindings\": the server has received too many requests and has asked us to try again later") has prevented the request from succeeding (post rolebindings.rbac.authorization.k8s.io)
occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/psp_util.go:151
				
from junit_skew14.xml



Kubernetes e2e suite [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set [NodeConformance] 25s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[k8s\.io\]\sContainer\sRuntime\sblackbox\stest\son\sterminated\scontainer\sshould\sreport\stermination\smessage\s\[LinuxOnly\]\sif\sTerminationMessagePath\sis\sset\s\[NodeConformance\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:164
Error creating Pod
Unexpected error:
    <*errors.StatusError | 0xc0015090e0>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {
                SelfLink: "",
                ResourceVersion: "",
                Continue: "",
                RemainingItemCount: nil,
            },
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/namespaces/container-runtime-2941/pods\\\": the server has received too many requests and has asked us to try again later\") has prevented the request from succeeding (post pods)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "pods",
                UID: "",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/namespaces/container-runtime-2941/pods\": the server has received too many requests and has asked us to try again later",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/namespaces/container-runtime-2941/pods\": the server has received too many requests and has asked us to try again later") has prevented the request from succeeding (post pods)
occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:80
				
from junit_skew08.xml



Kubernetes e2e suite [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] 12s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[k8s\.io\]\sContainer\sRuntime\sblackbox\stest\son\sterminated\scontainer\sshould\sreport\stermination\smessage\s\[LinuxOnly\]\sif\sTerminationMessagePath\sis\sset\sas\snon\-root\suser\sand\sat\sa\snon\-default\spath\s\[NodeConformance\]\s\[Conformance\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
Unexpected error:
    <*errors.withStack | 0xc001964b00>: {
        error: {
            cause: {
                ErrStatus: {
                    TypeMeta: {Kind: "", APIVersion: ""},
                    ListMeta: {
                        SelfLink: "",
                        ResourceVersion: "",
                        Continue: "",
                        RemainingItemCount: nil,
                    },
                    Status: "Failure",
                    Message: "rolebindings.rbac.authorization.k8s.io \"container-runtime-4757--e2e-test-privileged-psp\" is forbidden: user \"pr-kubekins@kubernetes-jenkins-pull.iam.gserviceaccount.com\" (groups=[\"system:authenticated\"]) is attempting to grant RBAC permissions not currently held:\n{APIGroups:[\"extensions\"], Resources:[\"podsecuritypolicies\"], ResourceNames:[\"e2e-test-privileged-psp\"], Verbs:[\"use\"]}",
                    Reason: "Forbidden",
                    Details: {
                        Name: "container-runtime-4757--e2e-test-privileged-psp",
                        Group: "rbac.authorization.k8s.io",
                        Kind: "rolebindings",
                        UID: "",
                        Causes: nil,
                        RetryAfterSeconds: 0,
                    },
                    Code: 403,
                },
            },
            msg: "binding ClusterRole/e2e-test-privileged-psp into \"container-runtime-4757\" for [{ServiceAccount  default container-runtime-4757}]",
        },
        stack: [0x15a53ee, 0x15eeda1, 0x15eed28, 0x15c416d, 0x15c297b, 0x7ac6dc, 0x7ac34f, 0x7ac774, 0x7b2441, 0x7b2064, 0x7b7acf, 0x7b75e4, 0x7b6e27, 0x7b948e, 0x7bbfb7, 0x7bbcfd, 0x36fd267, 0x370025b, 0x507960, 0x4607c1],
    }
    binding ClusterRole/e2e-test-privileged-psp into "container-runtime-4757" for [{ServiceAccount  default container-runtime-4757}]: rolebindings.rbac.authorization.k8s.io "container-runtime-4757--e2e-test-privileged-psp" is forbidden: user "pr-kubekins@kubernetes-jenkins-pull.iam.gserviceaccount.com" (groups=["system:authenticated"]) is attempting to grant RBAC permissions not currently held:
    {APIGroups:["extensions"], Resources:["podsecuritypolicies"], ResourceNames:["e2e-test-privileged-psp"], Verbs:["use"]}
occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/psp_util.go:151
				
from junit_skew06.xml



Kubernetes e2e suite [k8s.io] Container Runtime blackbox test when running a container with a new image should be able to pull from private registry with secret [LinuxOnly] [NodeConformance] 31s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[k8s\.io\]\sContainer\sRuntime\sblackbox\stest\swhen\srunning\sa\scontainer\swith\sa\snew\simage\sshould\sbe\sable\sto\spull\sfrom\sprivate\sregistry\swith\ssecret\s\[LinuxOnly\]\s\[NodeConformance\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
Unexpected error:
    <*errors.StatusError | 0xc001c723c0>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {
                SelfLink: "",
                ResourceVersion: "",
                Continue: "",
                RemainingItemCount: nil,
            },
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/namespaces/container-runtime-9596/serviceaccounts?fieldSelector=metadata.name%3Ddefault&amp;watch=true\\\": the server has received too many requests and has asked us to try again later\") has prevented the request from succeeding (get serviceaccounts)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceaccounts",
                UID: "",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/namespaces/container-runtime-9596/serviceaccounts?fieldSelector=metadata.name%3Ddefault&amp;watch=true\": the server has received too many requests and has asked us to try again later",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/namespaces/container-runtime-9596/serviceaccounts?fieldSelector=metadata.name%3Ddefault&watch=true\": the server has received too many requests and has asked us to try again later") has prevented the request from succeeding (get serviceaccounts)
occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:217
				
from junit_skew15.xml



Kubernetes e2e suite [k8s.io] Container Runtime blackbox test when running a container with a new image should be able to pull image from docker hub [LinuxOnly] [NodeConformance] 23s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[k8s\.io\]\sContainer\sRuntime\sblackbox\stest\swhen\srunning\sa\scontainer\swith\sa\snew\simage\sshould\sbe\sable\sto\spull\simage\sfrom\sdocker\shub\s\[LinuxOnly\]\s\[NodeConformance\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
Unexpected error:
    <*errors.StatusError | 0xc001685900>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {
                SelfLink: "",
                ResourceVersion: "",
                Continue: "",
                RemainingItemCount: nil,
            },
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/namespaces/container-runtime-6515/serviceaccounts?fieldSelector=metadata.name%3Ddefault&amp;watch=true\\\": the server has received too many requests and has asked us to try again later\") has prevented the request from succeeding (get serviceaccounts)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceaccounts",
                UID: "",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/namespaces/container-runtime-6515/serviceaccounts?fieldSelector=metadata.name%3Ddefault&amp;watch=true\": the server has received too many requests and has asked us to try again later",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/namespaces/container-runtime-6515/serviceaccounts?fieldSelector=metadata.name%3Ddefault&watch=true\": the server has received too many requests and has asked us to try again later") has prevented the request from succeeding (get serviceaccounts)
occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:217
				
from junit_skew03.xml



Kubernetes e2e suite [k8s.io] Container Runtime blackbox test when running a container with a new image should be able to pull image from gcr.io [LinuxOnly] [NodeConformance] 26s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[k8s\.io\]\sContainer\sRuntime\sblackbox\stest\swhen\srunning\sa\scontainer\swith\sa\snew\simage\sshould\sbe\sable\sto\spull\simage\sfrom\sgcr\.io\s\[LinuxOnly\]\s\[NodeConformance\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
Unexpected error:
    <*errors.StatusError | 0xc0003997c0>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {
                SelfLink: "",
                ResourceVersion: "",
                Continue: "",
                RemainingItemCount: nil,
            },
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/namespaces/container-runtime-8985/serviceaccounts?fieldSelector=metadata.name%3Ddefault&amp;watch=true\\\": the server has received too many requests and has asked us to try again later\") has prevented the request from succeeding (get serviceaccounts)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceaccounts",
                UID: "",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/namespaces/container-runtime-8985/serviceaccounts?fieldSelector=metadata.name%3Ddefault&amp;watch=true\": the server has received too many requests and has asked us to try again later",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/namespaces/container-runtime-8985/serviceaccounts?fieldSelector=metadata.name%3Ddefault&watch=true\": the server has received too many requests and has asked us to try again later") has prevented the request from succeeding (get serviceaccounts)
occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:217
				
from junit_skew08.xml



Kubernetes e2e suite [k8s.io] Container Runtime blackbox test when running a container with a new image should not be able to pull from private registry without secret [NodeConformance] 23s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[k8s\.io\]\sContainer\sRuntime\sblackbox\stest\swhen\srunning\sa\scontainer\swith\sa\snew\simage\sshould\snot\sbe\sable\sto\spull\sfrom\sprivate\sregistry\swithout\ssecret\s\[NodeConformance\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
Unexpected error:
    <*errors.StatusError | 0xc001ccd4a0>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {
                SelfLink: "",
                ResourceVersion: "",
                Continue: "",
                RemainingItemCount: nil,
            },
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/namespaces/container-runtime-6739/serviceaccounts?fieldSelector=metadata.name%3Ddefault&amp;watch=true\\\": the server has received too many requests and has asked us to try again later\") has prevented the request from succeeding (get serviceaccounts)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceaccounts",
                UID: "",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/namespaces/container-runtime-6739/serviceaccounts?fieldSelector=metadata.name%3Ddefault&amp;watch=true\": the server has received too many requests and has asked us to try again later",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/namespaces/container-runtime-6739/serviceaccounts?fieldSelector=metadata.name%3Ddefault&watch=true\": the server has received too many requests and has asked us to try again later") has prevented the request from succeeding (get serviceaccounts)
occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:217
				
from junit_skew04.xml



Kubernetes e2e suite [k8s.io] Container Runtime blackbox test when running a container with a new image should not be able to pull image from invalid registry [NodeConformance] 59s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[k8s\.io\]\sContainer\sRuntime\sblackbox\stest\swhen\srunning\sa\scontainer\swith\sa\snew\simage\sshould\snot\sbe\sable\sto\spull\simage\sfrom\sinvalid\sregistry\s\[NodeConformance\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  5 20:45:10.323: Couldn't delete ns: "container-runtime-5592": an error on the server ("Internal Server Error: \"/apis/networking.k8s.io/v1/namespaces/container-runtime-5592/networkpolicies\": the server has received too many requests and has asked us to try again later") has prevented the request from succeeding (&errors.StatusError{ErrStatus:v1.Status{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ListMeta:v1.ListMeta{SelfLink:"", ResourceVersion:"", Continue:"", RemainingItemCount:(*int64)(nil)}, Status:"Failure", Message:"an error on the server (\"Internal Server Error: \\\"/apis/networking.k8s.io/v1/namespaces/container-runtime-5592/networkpolicies\\\": the server has received too many requests and has asked us to try again later\") has prevented the request from succeeding", Reason:"InternalError", Details:(*v1.StatusDetails)(0xc002e03440), Code:500}})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:335
				
from junit_skew21.xml



Kubernetes e2e suite [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance] 1m2s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[k8s\.io\]\sContainer\sRuntime\sblackbox\stest\swhen\sstarting\sa\scontainer\sthat\sexits\sshould\srun\swith\sthe\sexpected\sstatus\s\[NodeConformance\]\s\[Conformance\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Expected success, but got an error:
    <*errors.StatusError | 0xc00022c140>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {
                SelfLink: "",
                ResourceVersion: "",
                Continue: "",
                RemainingItemCount: nil,
            },
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/namespaces/container-runtime-397/pods/terminate-cmd-rpa30bfd944-2340-4f68-bc05-f92b486fc141\\\": the server has received too many requests and has asked us to try again later\") has prevented the request from succeeding (delete pods terminate-cmd-rpa30bfd944-2340-4f68-bc05-f92b486fc141)",
            Reason: "InternalError",
            Details: {
                Name: "terminate-cmd-rpa30bfd944-2340-4f68-bc05-f92b486fc141",
                Group: "",
                Kind: "pods",
                UID: "",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/namespaces/container-runtime-397/pods/terminate-cmd-rpa30bfd944-2340-4f68-bc05-f92b486fc141\": the server has received too many requests and has asked us to try again later",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/namespaces/container-runtime-397/pods/terminate-cmd-rpa30bfd944-2340-4f68-bc05-f92b486fc141\": the server has received too many requests and has asked us to try again later") has prevented the request from succeeding (delete pods terminate-cmd-rpa30bfd944-2340-4f68-bc05-f92b486fc141)
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:123
				
from junit_skew10.xml



Kubernetes e2e suite [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] 9.02s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[k8s\.io\]\sDocker\sContainers\sshould\sbe\sable\sto\soverride\sthe\simage\'s\sdefault\scommand\s\(docker\sentrypoint\)\s\[NodeConformance\]\s\[Conformance\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
Unexpected error:
    <*errors.StatusError | 0xc0031acc80>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {
                SelfLink: "",
                ResourceVersion: "",
                Continue: "",
                RemainingItemCount: nil,
            },
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/namespaces/containers-5692/serviceaccounts?fieldSelector=metadata.name%3Ddefault&amp;watch=true\\\": the server has received too many requests and has asked us to try again later\") has prevented the request from succeeding (get serviceaccounts)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceaccounts",
                UID: "",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/namespaces/containers-5692/serviceaccounts?fieldSelector=metadata.name%3Ddefault&amp;watch=true\": the server has received too many requests and has asked us to try again later",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/namespaces/containers-5692/serviceaccounts?fieldSelector=metadata.name%3Ddefault&watch=true\": the server has received too many requests and has asked us to try again later") has prevented the request from succeeding (get serviceaccounts)
occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:217
				
from junit_skew16.xml



Kubernetes e2e suite [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance] 20s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[k8s\.io\]\sDocker\sContainers\sshould\sbe\sable\sto\soverride\sthe\simage\'s\sdefault\scommand\sand\sarguments\s\[NodeConformance\]\s\[Conformance\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  5 20:51:11.911: Couldn't delete ns: "containers-3435": an error on the server ("Internal Server Error: \"/apis/rbac.authorization.k8s.io/v1/namespaces/containers-3435/roles\": the server has received too many requests and has asked us to try again later") has prevented the request from succeeding (&errors.StatusError{ErrStatus:v1.Status{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ListMeta:v1.ListMeta{SelfLink:"", ResourceVersion:"", Continue:"", RemainingItemCount:(*int64)(nil)}, Status:"Failure", Message:"an error on the server (\"Internal Server Error: \\\"/apis/rbac.authorization.k8s.io/v1/namespaces/containers-3435/roles\\\": the server has received too many requests and has asked us to try again later\") has prevented the request from succeeding", Reason:"InternalError", Details:(*v1.StatusDetails)(0xc002182960), Code:500}})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:335
				
from junit_skew12.xml



Kubernetes e2e suite [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance] 10s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[k8s\.io\]\sDocker\sContainers\sshould\suse\sthe\simage\sdefaults\sif\scommand\sand\sargs\sare\sblank\s\[NodeConformance\]\s\[Conformance\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
Unexpected error:
    <*errors.StatusError | 0xc001bee820>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {
                SelfLink: "",
                ResourceVersion: "",
                Continue: "",
                RemainingItemCount: nil,
            },
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/namespaces/containers-4429/serviceaccounts?fieldSelector=metadata.name%3Ddefault&amp;watch=true\\\": the server has received too many requests and has asked us to try again later\") has prevented the request from succeeding (get serviceaccounts)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceaccounts",
                UID: "",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/namespaces/containers-4429/serviceaccounts?fieldSelector=metadata.name%3Ddefault&amp;watch=true\": the server has received too many requests and has asked us to try again later",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/namespaces/containers-4429/serviceaccounts?fieldSelector=metadata.name%3Ddefault&watch=true\": the server has received too many requests and has asked us to try again later") has prevented the request from succeeding (get serviceaccounts)
occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:217
				
from junit_skew22.xml



Kubernetes e2e suite [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance] 13s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[k8s\.io\]\sInitContainer\s\[NodeConformance\]\sshould\sinvoke\sinit\scontainers\son\sa\sRestartAlways\spod\s\[Conformance\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  5 20:57:15.996: Couldn't delete ns: "init-container-9909": an error on the server ("Internal Server Error: \"/api/v1/namespaces/init-container-9909\": the server has received too many requests and has asked us to try again later") has prevented the request from succeeding (delete namespaces init-container-9909) (&errors.StatusError{ErrStatus:v1.Status{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ListMeta:v1.ListMeta{SelfLink:"", ResourceVersion:"", Continue:"", RemainingItemCount:(*int64)(nil)}, Status:"Failure", Message:"an error on the server (\"Internal Server Error: \\\"/api/v1/namespaces/init-container-9909\\\": the server has received too many requests and has asked us to try again later\") has prevented the request from succeeding (delete namespaces init-container-9909)", Reason:"InternalError", Details:(*v1.StatusDetails)(0xc000e53380), Code:500}})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:335
				
from junit_skew05.xml



Kubernetes e2e suite [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] 16s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[k8s\.io\]\sInitContainer\s\[NodeConformance\]\sshould\snot\sstart\sapp\scontainers\sand\sfail\sthe\spod\sif\sinit\scontainers\sfail\son\sa\sRestartNever\spod\s\[Conformance\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
Unexpected error:
    <*errors.StatusError | 0xc002d64000>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {
                SelfLink: "",
                ResourceVersion: "",
                Continue: "",
                RemainingItemCount: nil,
            },
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/namespaces/init-container-4390/serviceaccounts?fieldSelector=metadata.name%3Ddefault&amp;watch=true\\\": the server has received too many requests and has asked us to try again later\") has prevented the request from succeeding (get serviceaccounts)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceaccounts",
                UID: "",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/namespaces/init-container-4390/serviceaccounts?fieldSelector=metadata.name%3Ddefault&amp;watch=true\": the server has received too many requests and has asked us to try again later",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/namespaces/init-container-4390/serviceaccounts?fieldSelector=metadata.name%3Ddefault&watch=true\": the server has received too many requests and has asked us to try again later") has prevented the request from succeeding (get serviceaccounts)
occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:217
				
from junit_skew11.xml



Kubernetes e2e suite [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance] 1m50s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[k8s\.io\]\sInitContainer\s\[NodeConformance\]\sshould\snot\sstart\sapp\scontainers\sif\sinit\scontainers\sfail\son\sa\sRestartAlways\spod\s\[Conformance\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  5 21:00:17.463: Couldn't delete ns: "init-container-4334": an error on the server ("Internal Server Error: \"/api/v1/namespaces/init-container-4334/events\": the server has received too many requests and has asked us to try again later") has prevented the request from succeeding (&errors.StatusError{ErrStatus:v1.Status{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ListMeta:v1.ListMeta{SelfLink:"", ResourceVersion:"", Continue:"", RemainingItemCount:(*int64)(nil)}, Status:"Failure", Message:"an error on the server (\"Internal Server Error: \\\"/api/v1/namespaces/init-container-4334/events\\\": the server has received too many requests and has asked us to try again later\") has prevented the request from succeeding", Reason:"InternalError", Details:(*v1.StatusDetails)(0xc0018bb080), Code:500}})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:335
				
from junit_skew01.xml



Kubernetes e2e suite [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] 1m10s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[k8s\.io\]\sKubelet\swhen\sscheduling\sa\sbusybox\sPod\swith\shostAliases\sshould\swrite\sentries\sto\s\/etc\/hosts\s\[LinuxOnly\]\s\[NodeConformance\]\s\[Conformance\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  5 20:46:45.968: Couldn't delete ns: "kubelet-test-7108": an error on the server ("Internal Server Error: \"/api/v1/namespaces/kubelet-test-7108/pods\": the server has received too many requests and has asked us to try again later") has prevented the request from succeeding (&errors.StatusError{ErrStatus:v1.Status{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ListMeta:v1.ListMeta{SelfLink:"", ResourceVersion:"", Continue:"", RemainingItemCount:(*int64)(nil)}, Status:"Failure", Message:"an error on the server (\"Internal Server Error: \\\"/api/v1/namespaces/kubelet-test-7108/pods\\\": the server has received too many requests and has asked us to try again later\") has prevented the request from succeeding", Reason:"InternalError", Details:(*v1.StatusDetails)(0xc00234a540), Code:500}})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:335
				
from junit_skew14.xml



Kubernetes e2e suite [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance] 16s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[k8s\.io\]\sKubelet\swhen\sscheduling\sa\sbusybox\scommand\sin\sa\spod\sshould\sprint\sthe\soutput\sto\slogs\s\[NodeConformance\]\s\[Conformance\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
Unexpected error:
    <*errors.StatusError | 0xc000dd5ae0>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {
                SelfLink: "",
                ResourceVersion: "",
                Continue: "",
                RemainingItemCount: nil,
            },
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/namespaces/kubelet-test-2231/serviceaccounts?fieldSelector=metadata.name%3Ddefault&amp;watch=true\\\": the server has received too many requests and has asked us to try again later\") has prevented the request from succeeding (get serviceaccounts)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceaccounts",
                UID: "",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/namespaces/kubelet-test-2231/serviceaccounts?fieldSelector=metadata.name%3Ddefault&amp;watch=true\": the server has received too many requests and has asked us to try again later",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/namespaces/kubelet-test-2231/serviceaccounts?fieldSelector=metadata.name%3Ddefault&amp;watch=true\": the server has received too many requests and has asked us to try again later") has prevented the request from succeeding (get serviceaccounts)
occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:217
				
stdout/stderr from junit_skew07.xml
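
This is the first failure in this list that surfaces the 500 during test setup rather than teardown: the framework's watch on the namespace's default service account was refused. Every dump in this run has the same shape: Code: 500, Reason: "InternalError", with the text "the server has received too many requests and has asked us to try again later". For context only, a minimal client-go sketch of detecting and retrying this class of error; the clientset cs and the function are illustrative, not the framework's own code, and the context-free signatures match the 1.16-era client-go vendored here:

    import (
        corev1 "k8s.io/api/core/v1"
        apierrors "k8s.io/apimachinery/pkg/api/errors"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/util/retry"
    )

    // getServiceAccountWithRetry retries only the 500/InternalError
    // responses seen in the dumps above; anything else fails fast.
    func getServiceAccountWithRetry(cs *kubernetes.Clientset, ns, name string) (*corev1.ServiceAccount, error) {
        var sa *corev1.ServiceAccount
        err := retry.OnError(retry.DefaultBackoff,
            apierrors.IsInternalError, // matches Reason: "InternalError", Code: 500
            func() error {
                var getErr error
                sa, getErr = cs.CoreV1().ServiceAccounts(ns).Get(name, metav1.GetOptions{})
                return getErr
            })
        return sa, err
    }

Nothing in these dumps suggests the framework retried at this call site, which is how one throttled apiserver fans out into hundreds of individual test failures.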


Kubernetes e2e suite [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance] 17s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[k8s\.io\]\sKubelet\swhen\sscheduling\sa\sbusybox\scommand\sthat\salways\sfails\sin\sa\spod\sshould\sbe\spossible\sto\sdelete\s\[NodeConformance\]\s\[Conformance\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
Unexpected error:
    <*errors.StatusError | 0xc001be1b80>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {
                SelfLink: "",
                ResourceVersion: "",
                Continue: "",
                RemainingItemCount: nil,
            },
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/namespaces/kubelet-test-7203/serviceaccounts?fieldSelector=metadata.name%3Ddefault&amp;watch=true\\\": the server has received too many requests and has asked us to try again later\") has prevented the request from succeeding (get serviceaccounts)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceaccounts",
                UID: "",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/namespaces/kubelet-test-7203/serviceaccounts?fieldSelector=metadata.name%3Ddefault&amp;watch=true\": the server has received too many requests and has asked us to try again later",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/namespaces/kubelet-test-7203/serviceaccounts?fieldSelector=metadata.name%3Ddefault&amp;watch=true\": the server has received too many requests and has asked us to try again later") has prevented the request from succeeding (get serviceaccounts)
occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:217
				
stdout/stderr from junit_skew09.xml


Kubernetes e2e suite [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance] 45s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[k8s\.io\]\sKubelet\swhen\sscheduling\sa\sbusybox\scommand\sthat\salways\sfails\sin\sa\spod\sshould\shave\san\sterminated\sreason\s\[NodeConformance\]\s\[Conformance\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  5 20:55:40.717: Couldn't delete ns: "kubelet-test-6350": an error on the server ("Internal Server Error: \"/api/v1/namespaces/kubelet-test-6350\": the server has received too many requests and has asked us to try again later") has prevented the request from succeeding (delete namespaces kubelet-test-6350) (&errors.StatusError{ErrStatus:v1.Status{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ListMeta:v1.ListMeta{SelfLink:"", ResourceVersion:"", Continue:"", RemainingItemCount:(*int64)(nil)}, Status:"Failure", Message:"an error on the server (\"Internal Server Error: \\\"/api/v1/namespaces/kubelet-test-6350\\\": the server has received too many requests and has asked us to try again later\") has prevented the request from succeeding (delete namespaces kubelet-test-6350)", Reason:"InternalError", Details:(*v1.StatusDetails)(0xc002ad2120), Code:500}})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:335
				
stdout/stderr from junit_skew12.xml


Kubernetes e2e suite [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] 21s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[k8s\.io\]\sKubelet\swhen\sscheduling\sa\sread\sonly\sbusybox\scontainer\sshould\snot\swrite\sto\sroot\sfilesystem\s\[LinuxOnly\]\s\[NodeConformance\]\s\[Conformance\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
Unexpected error:
    <*errors.StatusError | 0xc00714ac80>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {
                SelfLink: "",
                ResourceVersion: "",
                Continue: "",
                RemainingItemCount: nil,
            },
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/namespaces/kubelet-test-9836/serviceaccounts?fieldSelector=metadata.name%3Ddefault&amp;watch=true\\\": the server has received too many requests and has asked us to try again later\") has prevented the request from succeeding (get serviceaccounts)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceaccounts",
                UID: "",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/namespaces/kubelet-test-9836/serviceaccounts?fieldSelector=metadata.name%3Ddefault&amp;watch=true\": the server has received too many requests and has asked us to try again later",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/namespaces/kubelet-test-9836/serviceaccounts?fieldSelector=metadata.name%3Ddefault&amp;watch=true\": the server has received too many requests and has asked us to try again later") has prevented the request from succeeding (get serviceaccounts)
occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:217
				
stdout/stderr from junit_skew16.xml


Kubernetes e2e suite [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] 11s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[k8s\.io\]\sKubeletManagedEtcHosts\sshould\stest\skubelet\smanaged\s\/etc\/hosts\sfile\s\[LinuxOnly\]\s\[NodeConformance\]\s\[Conformance\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
Unexpected error:
    <*errors.StatusError | 0xc0021694a0>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {
                SelfLink: "",
                ResourceVersion: "",
                Continue: "",
                RemainingItemCount: nil,
            },
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/namespaces/e2e-kubelet-etc-hosts-3788/serviceaccounts?fieldSelector=metadata.name%3Ddefault&amp;watch=true\\\": the server has received too many requests and has asked us to try again later\") has prevented the request from succeeding (get serviceaccounts)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceaccounts",
                UID: "",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/namespaces/e2e-kubelet-etc-hosts-3788/serviceaccounts?fieldSelector=metadata.name%3Ddefault&amp;watch=true\": the server has received too many requests and has asked us to try again later",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/namespaces/e2e-kubelet-etc-hosts-3788/serviceaccounts?fieldSelector=metadata.name%3Ddefault&amp;watch=true\": the server has received too many requests and has asked us to try again later") has prevented the request from succeeding (get serviceaccounts)
occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:217
				
stdout/stderr from junit_skew17.xml


Kubernetes e2e suite [k8s.io] NodeLease when the NodeLease feature is enabled the kubelet should create and update a lease in the kube-node-lease namespace 18s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[k8s\.io\]\sNodeLease\swhen\sthe\sNodeLease\sfeature\sis\senabled\sthe\skubelet\sshould\screate\sand\supdate\sa\slease\sin\sthe\skube\-node\-lease\snamespace$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  5 21:02:05.705: Couldn't delete ns: "node-lease-test-3036": an error on the server ("Internal Server Error: \"/apis/rbac.authorization.k8s.io/v1/namespaces/node-lease-test-3036/rolebindings\": the server has received too many requests and has asked us to try again later") has prevented the request from succeeding (&errors.StatusError{ErrStatus:v1.Status{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ListMeta:v1.ListMeta{SelfLink:"", ResourceVersion:"", Continue:"", RemainingItemCount:(*int64)(nil)}, Status:"Failure", Message:"an error on the server (\"Internal Server Error: \\\"/apis/rbac.authorization.k8s.io/v1/namespaces/node-lease-test-3036/rolebindings\\\": the server has received too many requests and has asked us to try again later\") has prevented the request from succeeding", Reason:"InternalError", Details:(*v1.StatusDetails)(0xc0032aa7e0), Code:500}})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:335
				
stdout/stderr from junit_skew09.xml
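
Independent of the throttling, what this test checks: every node owns a coordination.k8s.io/v1 Lease object in the kube-node-lease namespace, and the kubelet keeps its spec.renewTime fresh. A sketch of the read (cs and nodeName are assumed; 1.16-era, context-free signatures):

    lease, err := cs.CoordinationV1().Leases("kube-node-lease").Get(nodeName, metav1.GetOptions{})
    if err != nil {
        return err // on this run, the same 500 would surface here
    }
    // RenewTime (*metav1.MicroTime) is bumped on every kubelet heartbeat.
    fmt.Printf("lease %s last renewed %v\n", lease.Name, lease.Spec.RenewTime)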


Kubernetes e2e suite [k8s.io] NodeLease when the NodeLease feature is enabled the kubelet should report node status infrequently 9.14s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[k8s\.io\]\sNodeLease\swhen\sthe\sNodeLease\sfeature\sis\senabled\sthe\skubelet\sshould\sreport\snode\sstatus\sinfrequently$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
Unexpected error:
    <*errors.StatusError | 0xc00248c1e0>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {
                SelfLink: "",
                ResourceVersion: "",
                Continue: "",
                RemainingItemCount: nil,
            },
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/namespaces/node-lease-test-2098/serviceaccounts?fieldSelector=metadata.name%3Ddefault&amp;watch=true\\\": the server has received too many requests and has asked us to try again later\") has prevented the request from succeeding (get serviceaccounts)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceaccounts",
                UID: "",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/namespaces/node-lease-test-2098/serviceaccounts?fieldSelector=metadata.name%3Ddefault&amp;watch=true\": the server has received too many requests and has asked us to try again later",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/namespaces/node-lease-test-2098/serviceaccounts?fieldSelector=metadata.name%3Ddefault&amp;watch=true\": the server has received too many requests and has asked us to try again later") has prevented the request from succeeding (get serviceaccounts)
occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:217
				
stdout/stderr from junit_skew09.xml


Kubernetes e2e suite [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] 47s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[k8s\.io\]\sPods\sshould\sallow\sactiveDeadlineSeconds\sto\sbe\supdated\s\[NodeConformance\]\s\[Conformance\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  5 20:49:14.739: Couldn't delete ns: "pods-2868": an error on the server ("Internal Server Error: \"/api/v1/namespaces/pods-2868/resourcequotas\": the server has received too many requests and has asked us to try again later") has prevented the request from succeeding (&errors.StatusError{ErrStatus:v1.Status{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ListMeta:v1.ListMeta{SelfLink:"", ResourceVersion:"", Continue:"", RemainingItemCount:(*int64)(nil)}, Status:"Failure", Message:"an error on the server (\"Internal Server Error: \\\"/api/v1/namespaces/pods-2868/resourcequotas\\\": the server has received too many requests and has asked us to try again later\") has prevented the request from succeeding", Reason:"InternalError", Details:(*v1.StatusDetails)(0xc002755020), Code:500}})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:335
				
stdout/stderr from junit_skew03.xml
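
For reference, activeDeadlineSeconds is one of the few pod-spec fields that may be mutated on a live pod: it can be set, or lowered, but never raised or cleared. The update this test performs looks roughly like the following (cs, ns, and name are assumed; 1.16-era signatures):

    pod, err := cs.CoreV1().Pods(ns).Get(name, metav1.GetOptions{})
    if err != nil {
        return err
    }
    deadline := int64(30)
    pod.Spec.ActiveDeadlineSeconds = &deadline // set or lower only; the API rejects increases
    _, err = cs.CoreV1().Pods(ns).Update(pod)

The test body itself had finished; as in most of this run, the namespace cleanup (here, listing resourcequotas) is what hit the 500.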


Kubernetes e2e suite [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance] 45s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[k8s\.io\]\sPods\sshould\sbe\ssubmitted\sand\sremoved\s\[NodeConformance\]\s\[Conformance\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
Unexpected error:
    <*errors.withStack | 0xc0028a84e0>: {
        error: {
            cause: {
                ErrStatus: {
                    TypeMeta: {Kind: "", APIVersion: ""},
                    ListMeta: {
                        SelfLink: "",
                        ResourceVersion: "",
                        Continue: "",
                        RemainingItemCount: nil,
                    },
                    Status: "Failure",
                    Message: "an error on the server (\"Internal Server Error: \\\"/apis/rbac.authorization.k8s.io/v1beta1/namespaces/pods-7796/rolebindings\\\": the server has received too many requests and has asked us to try again later\") has prevented the request from succeeding (post rolebindings.rbac.authorization.k8s.io)",
                    Reason: "InternalError",
                    Details: {
                        Name: "",
                        Group: "rbac.authorization.k8s.io",
                        Kind: "rolebindings",
                        UID: "",
                        Causes: [
                            {
                                Type: "UnexpectedServerResponse",
                                Message: "Internal Server Error: \"/apis/rbac.authorization.k8s.io/v1beta1/namespaces/pods-7796/rolebindings\": the server has received too many requests and has asked us to try again later",
                                Field: "",
                            },
                        ],
                        RetryAfterSeconds: 0,
                    },
                    Code: 500,
                },
            },
            msg: "binding ClusterRole/e2e-test-privileged-psp into \"pods-7796\" for [{ServiceAccount  default pods-7796}]",
        },
        stack: [0x15a53ee, 0x15eeda1, 0x15eed28, 0x15c416d, 0x15c297b, 0x7ac6dc, 0x7ac34f, 0x7ac774, 0x7b2441, 0x7b2064, 0x7b7acf, 0x7b75e4, 0x7b6e27, 0x7b948e, 0x7bbfb7, 0x7bbcfd, 0x36fd267, 0x370025b, 0x507960, 0x4607c1],
    }
    binding ClusterRole/e2e-test-privileged-psp into "pods-7796" for [{ServiceAccount  default pods-7796}]: an error on the server ("Internal Server Error: \"/apis/rbac.authorization.k8s.io/v1beta1/namespaces/pods-7796/rolebindings\": the server has received too many requests and has asked us to try again later") has prevented the request from succeeding (post rolebindings.rbac.authorization.k8s.io)
occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/psp_util.go:151
				
stdout/stderr from junit_skew25.xml
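
Unlike most failures in this run, this one happened in per-namespace setup: the psp_util.go frame is the framework posting a RoleBinding that grants the namespace's default ServiceAccount the e2e-test-privileged-psp ClusterRole (the failing request went to rbac v1beta1). A hedged equivalent using rbac/v1 types (cs and ns are assumed; the binding's object name here is illustrative):

    rb := &rbacv1.RoleBinding{
        ObjectMeta: metav1.ObjectMeta{Name: "e2e-test-privileged-psp", Namespace: ns},
        RoleRef: rbacv1.RoleRef{
            APIGroup: "rbac.authorization.k8s.io",
            Kind:     "ClusterRole",
            Name:     "e2e-test-privileged-psp",
        },
        Subjects: []rbacv1.Subject{
            {Kind: "ServiceAccount", Name: "default", Namespace: ns},
        },
    }
    _, err := cs.RbacV1().RoleBindings(ns).Create(rb)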


Kubernetes e2e suite [k8s.io] Pods should be updated [NodeConformance] [Conformance] 1m9s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[k8s\.io\]\sPods\sshould\sbe\supdated\s\[NodeConformance\]\s\[Conformance\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  5 20:50:40.905: Couldn't delete ns: "pods-3457": an error on the server ("Internal Server Error: \"/apis/apps/v1/namespaces/pods-3457/statefulsets\": the server has received too many requests and has asked us to try again later") has prevented the request from succeeding (&errors.StatusError{ErrStatus:v1.Status{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ListMeta:v1.ListMeta{SelfLink:"", ResourceVersion:"", Continue:"", RemainingItemCount:(*int64)(nil)}, Status:"Failure", Message:"an error on the server (\"Internal Server Error: \\\"/apis/apps/v1/namespaces/pods-3457/statefulsets\\\": the server has received too many requests and has asked us to try again later\") has prevented the request from succeeding", Reason:"InternalError", Details:(*v1.StatusDetails)(0xc001fc3080), Code:500}})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:335
				
stdout/stderr from junit_skew17.xml


Kubernetes e2e suite [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance] 2m9s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[k8s\.io\]\sPods\sshould\scontain\senvironment\svariables\sfor\sservices\s\[NodeConformance\]\s\[Conformance\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Unexpected error:
    <*errors.StatusError | 0xc0009b60a0>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {
                SelfLink: "",
                ResourceVersion: "",
                Continue: "",
                RemainingItemCount: nil,
            },
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/namespaces/pods-9646/pods/server-envvars-369e07ed-c944-4c90-8f4b-1d96238849e1\\\": the server has received too many requests and has asked us to try again later\") has prevented the request from succeeding (get pods server-envvars-369e07ed-c944-4c90-8f4b-1d96238849e1)",
            Reason: "InternalError",
            Details: {
                Name: "server-envvars-369e07ed-c944-4c90-8f4b-1d96238849e1",
                Group: "",
                Kind: "pods",
                UID: "",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/namespaces/pods-9646/pods/server-envvars-369e07ed-c944-4c90-8f4b-1d96238849e1\": the server has received too many requests and has asked us to try again later",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/namespaces/pods-9646/pods/server-envvars-369e07ed-c944-4c90-8f4b-1d96238849e1\": the server has received too many requests and has asked us to try again later") has prevented the request from succeeding (get pods server-envvars-369e07ed-c944-4c90-8f4b-1d96238849e1)
occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:113
				
stdout/stderr from junit_skew11.xml


Kubernetes e2e suite [k8s.io] Pods should get a host IP [NodeConformance] [Conformance] 16s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[k8s\.io\]\sPods\sshould\sget\sa\shost\sIP\s\[NodeConformance\]\s\[Conformance\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
Unexpected error:
    <*errors.StatusError | 0xc001718c80>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {
                SelfLink: "",
                ResourceVersion: "",
                Continue: "",
                RemainingItemCount: nil,
            },
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/namespaces/pods-5312/serviceaccounts?fieldSelector=metadata.name%3Ddefault&amp;watch=true\\\": the server has received too many requests and has asked us to try again later\") has prevented the request from succeeding (get serviceaccounts)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceaccounts",
                UID: "",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/namespaces/pods-5312/serviceaccounts?fieldSelector=metadata.name%3Ddefault&amp;watch=true\": the server has received too many requests and has asked us to try again later",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/namespaces/pods-5312/serviceaccounts?fieldSelector=metadata.name%3Ddefault&amp;watch=true\": the server has received too many requests and has asked us to try again later") has prevented the request from succeeding (get serviceaccounts)
occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:217
				
stdout/stderr from junit_skew09.xml


Kubernetes e2e suite [k8s.io] Pods should support pod readiness gates [NodeFeature:PodReadinessGate] 45s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[k8s\.io\]\sPods\sshould\ssupport\spod\sreadiness\sgates\s\[NodeFeature\:PodReadinessGate\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  5 20:56:15.031: Couldn't delete ns: "pods-3881": an error on the server ("Internal Server Error: \"/api/v1/namespaces/pods-3881\": the server has received too many requests and has asked us to try again later") has prevented the request from succeeding (delete namespaces pods-3881) (&errors.StatusError{ErrStatus:v1.Status{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ListMeta:v1.ListMeta{SelfLink:"", ResourceVersion:"", Continue:"", RemainingItemCount:(*int64)(nil)}, Status:"Failure", Message:"an error on the server (\"Internal Server Error: \\\"/api/v1/namespaces/pods-3881\\\": the server has received too many requests and has asked us to try again later\") has prevented the request from succeeding (delete namespaces pods-3881)", Reason:"InternalError", Details:(*v1.StatusDetails)(0xc0020ba900), Code:500}})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:335
				
stdout/stderr from junit_skew06.xml
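
Background for this case: readiness gates let a pod declare extra conditions that must be True before the pod counts as Ready; the test plays the controller that patches the condition. The spec field, sketched (the condition type is a placeholder):

    spec.ReadinessGates = []corev1.PodReadinessGate{
        {ConditionType: "example.com/feature-ready"},
    }
    // The pod stays NotReady until something sets the matching
    // status condition to "True".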


Kubernetes e2e suite [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance] 1m4s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[k8s\.io\]\sPods\sshould\ssupport\sremote\scommand\sexecution\sover\swebsockets\s\[NodeConformance\]\s\[Conformance\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Feb  5 21:05:09.635: Failed to open websocket to wss://104.197.5.96/api/v1/namespaces/pods-2525/pods/pod-exec-websocket-15286a7c-0248-4bc7-b852-064f74fe1578/exec?command=echo&command=remote+execution+test&container=main&stderr=1&stdout=1: websocket.Dial wss://104.197.5.96/api/v1/namespaces/pods-2525/pods/pod-exec-websocket-15286a7c-0248-4bc7-b852-064f74fe1578/exec?command=echo&command=remote+execution+test&container=main&stderr=1&stdout=1: bad status
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:577
				
stdout/stderr from junit_skew02.xml
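
"bad status" is golang.org/x/net/websocket's error for a handshake that did not come back 101 Switching Protocols, i.e. the throttling 500 arriving on the exec subresource instead. A rough sketch of this kind of dial (the apiserver host, token, and echo command are placeholders; the real framework derives URL and credentials from its rest.Config):

    cfg, err := websocket.NewConfig(
        "wss://<apiserver>/api/v1/namespaces/"+ns+"/pods/"+podName+
            "/exec?command=echo&command=hello&container=main&stderr=1&stdout=1",
        "http://localhost") // origin; required by the package
    if err != nil {
        return err
    }
    cfg.Header = http.Header{"Authorization": {"Bearer " + token}}
    cfg.TlsConfig = &tls.Config{InsecureSkipVerify: true} // e2e-style; not for production
    conn, err := websocket.DialConfig(cfg) // returns "bad status" on any non-101 response
    if err != nil {
        return err
    }
    defer conn.Close()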


Kubernetes e2e suite [k8s.io] PrivilegedPod [NodeConformance] should enable privileged commands [LinuxOnly] 27s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[k8s\.io\]\sPrivilegedPod\s\[NodeConformance\]\sshould\senable\sprivileged\scommands\s\[LinuxOnly\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
Unexpected error:
    <*errors.StatusError | 0xc000fac000>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {
                SelfLink: "",
                ResourceVersion: "",
                Continue: "",
                RemainingItemCount: nil,
            },
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/namespaces/e2e-privileged-pod-6365/serviceaccounts?fieldSelector=metadata.name%3Ddefault&amp;watch=true\\\": the server has received too many requests and has asked us to try again later\") has prevented the request from succeeding (get serviceaccounts)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceaccounts",
                UID: "",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/namespaces/e2e-privileged-pod-6365/serviceaccounts?fieldSelector=metadata.name%3Ddefault&amp;watch=true\": the server has received too many requests and has asked us to try again later",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/namespaces/e2e-privileged-pod-6365/serviceaccounts?fieldSelector=metadata.name%3Ddefault&amp;watch=true\": the server has received too many requests and has asked us to try again later") has prevented the request from succeeding (get serviceaccounts)
occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:217
				
stdout/stderr from junit_skew23.xml


Kubernetes e2e suite [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] 21s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[k8s\.io\]\sProbing\scontainer\sshould\s\*not\*\sbe\srestarted\swith\sa\s\/healthz\shttp\sliveness\sprobe\s\[NodeConformance\]\s\[Conformance\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
Unexpected error:
    <*errors.StatusError | 0xc001d635e0>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {
                SelfLink: "",
                ResourceVersion: "",
                Continue: "",
                RemainingItemCount: nil,
            },
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/namespaces/container-probe-5787/serviceaccounts?fieldSelector=metadata.name%3Ddefault&amp;watch=true\\\": the server has received too many requests and has asked us to try again later\") has prevented the request from succeeding (get serviceaccounts)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceaccounts",
                UID: "",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/namespaces/container-probe-5787/serviceaccounts?fieldSelector=metadata.name%3Ddefault&amp;watch=true\": the server has received too many requests and has asked us to try again later",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/namespaces/container-probe-5787/serviceaccounts?fieldSelector=metadata.name%3Ddefault&amp;watch=true\": the server has received too many requests and has asked us to try again later") has prevented the request from succeeding (get serviceaccounts)
occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:217
				
stdout/stderr from junit_skew03.xml


Kubernetes e2e suite [k8s.io] Probing container should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] 5m0s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[k8s\.io\]\sProbing\scontainer\sshould\s\*not\*\sbe\srestarted\swith\sa\sexec\s\"cat\s\/tmp\/health\"\sliveness\sprobe\s\[NodeConformance\]\s\[Conformance\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
getting pod 
Unexpected error:
    <*errors.StatusError | 0xc0022586e0>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {
                SelfLink: "",
                ResourceVersion: "",
                Continue: "",
                RemainingItemCount: nil,
            },
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/namespaces/container-probe-1678/pods/busybox-ba7b7344-b6f4-43c3-b2b3-09a2c74418fe\\\": the server has received too many requests and has asked us to try again later\") has prevented the request from succeeding (get pods busybox-ba7b7344-b6f4-43c3-b2b3-09a2c74418fe)",
            Reason: "InternalError",
            Details: {
                Name: "busybox-ba7b7344-b6f4-43c3-b2b3-09a2c74418fe",
                Group: "",
                Kind: "pods",
                UID: "",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/namespaces/container-probe-1678/pods/busybox-ba7b7344-b6f4-43c3-b2b3-09a2c74418fe\": the server has received too many requests and has asked us to try again later",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/namespaces/container-probe-1678/pods/busybox-ba7b7344-b6f4-43c3-b2b3-09a2c74418fe\": the server has received too many requests and has asked us to try again later") has prevented the request from succeeding (get pods busybox-ba7b7344-b6f4-43c3-b2b3-09a2c74418fe)
occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:439
				
stdout/stderr from junit_skew11.xml


Kubernetes e2e suite [k8s.io] Probing container should *not* be restarted with a non-local redirect http liveness probe 2m0s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[k8s\.io\]\sProbing\scontainer\sshould\s\*not\*\sbe\srestarted\swith\sa\snon\-local\sredirect\shttp\sliveness\sprobe$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:246
getting pod 
Unexpected error:
    <*errors.StatusError | 0xc0001d0f00>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {
                SelfLink: "",
                ResourceVersion: "",
                Continue: "",
                RemainingItemCount: nil,
            },
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/namespaces/container-probe-4368/pods/liveness-175b5ca2-041c-45eb-b62c-f6271e35cce2\\\": the server has received too many requests and has asked us to try again later\") has prevented the request from succeeding (get pods liveness-175b5ca2-041c-45eb-b62c-f6271e35cce2)",
            Reason: "InternalError",
            Details: {
                Name: "liveness-175b5ca2-041c-45eb-b62c-f6271e35cce2",
                Group: "",
                Kind: "pods",
                UID: "",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/namespaces/container-probe-4368/pods/liveness-175b5ca2-041c-45eb-b62c-f6271e35cce2\": the server has received too many requests and has asked us to try again later",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/namespaces/container-probe-4368/pods/liveness-175b5ca2-041c-45eb-b62c-f6271e35cce2\": the server has received too many requests and has asked us to try again later") has prevented the request from succeeding (get pods liveness-175b5ca2-041c-45eb-b62c-f6271e35cce2)
occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:439
				
stdout/stderr from junit_skew16.xml


Kubernetes e2e suite [k8s.io] Probing container should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] 16s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[k8s\.io\]\sProbing\scontainer\sshould\s\*not\*\sbe\srestarted\swith\sa\stcp\:8080\sliveness\sprobe\s\[NodeConformance\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
Unexpected error:
    <*errors.StatusError | 0xc0008c5860>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {
                SelfLink: "",
                ResourceVersion: "",
                Continue: "",
                RemainingItemCount: nil,
            },
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/namespaces/container-probe-6583/serviceaccounts?fieldSelector=metadata.name%3Ddefault&amp;watch=true\\\": the server has received too many requests and has asked us to try again later\") has prevented the request from succeeding (get serviceaccounts)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceaccounts",
                UID: "",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/namespaces/container-probe-6583/serviceaccounts?fieldSelector=metadata.name%3Ddefault&amp;watch=true\": the server has received too many requests and has asked us to try again later",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/namespaces/container-probe-6583/serviceaccounts?fieldSelector=metadata.name%3Ddefault&amp;watch=true\": the server has received too many requests and has asked us to try again later") has prevented the request from succeeding (get serviceaccounts)
occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:217
				
stdout/stderr from junit_skew16.xml


Kubernetes e2e suite [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] 12s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[k8s\.io\]\sProbing\scontainer\sshould\sbe\srestarted\swith\sa\s\/healthz\shttp\sliveness\sprobe\s\[NodeConformance\]\s\[Conformance\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
Unexpected error:
    <*errors.withStack | 0xc0028724a0>: {
        error: {
            cause: {
                ErrStatus: {
                    TypeMeta: {Kind: "", APIVersion: ""},
                    ListMeta: {
                        SelfLink: "",
                        ResourceVersion: "",
                        Continue: "",
                        RemainingItemCount: nil,
                    },
                    Status: "Failure",
                    Message: "an error on the server (\"Internal Server Error: \\\"/apis/rbac.authorization.k8s.io/v1beta1/namespaces/container-probe-3149/rolebindings\\\": the server has received too many requests and has asked us to try again later\") has prevented the request from succeeding (post rolebindings.rbac.authorization.k8s.io)",
                    Reason: "InternalError",
                    Details: {
                        Name: "",
                        Group: "rbac.authorization.k8s.io",
                        Kind: "rolebindings",
                        UID: "",
                        Causes: [
                            {
                                Type: "UnexpectedServerResponse",
                                Message: "Internal Server Error: \"/apis/rbac.authorization.k8s.io/v1beta1/namespaces/container-probe-3149/rolebindings\": the server has received too many requests and has asked us to try again later",
                                Field: "",
                            },
                        ],
                        RetryAfterSeconds: 0,
                    },
                    Code: 500,
                },
            },
            msg: "binding ClusterRole/e2e-test-privileged-psp into \"container-probe-3149\" for [{ServiceAccount  default container-probe-3149}]",
        },
        stack: [0x15a53ee, 0x15eeda1, 0x15eed28, 0x15c416d, 0x15c297b, 0x7ac6dc, 0x7ac34f, 0x7ac774, 0x7b2441, 0x7b2064, 0x7b7acf, 0x7b75e4, 0x7b6e27, 0x7b948e, 0x7bbfb7, 0x7bbcfd, 0x36fd267, 0x370025b, 0x507960, 0x4607c1],
    }
    binding ClusterRole/e2e-test-privileged-psp into "container-probe-3149" for [{ServiceAccount  default container-probe-3149}]: an error on the server ("Internal Server Error: \"/apis/rbac.authorization.k8s.io/v1beta1/namespaces/container-probe-3149/rolebindings\": the server has received too many requests and has asked us to try again later") has prevented the request from succeeding (post rolebindings.rbac.authorization.k8s.io)
occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/psp_util.go:151
				
stdout/stderr from junit_skew07.xml
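
For orientation, the shape all of these probe cases build: a container exposing /healthz that starts failing after a while, plus a liveness probe that must trigger a restart. The probe definition, sketched with 1.16-era corev1 (where the embedded field is still named Handler; port and thresholds are placeholders):

    liveness := &corev1.Probe{
        Handler: corev1.Handler{
            HTTPGet: &corev1.HTTPGetAction{Path: "/healthz", Port: intstr.FromInt(8080)},
        },
        InitialDelaySeconds: 15,
        FailureThreshold:    1, // restart on the first failed probe
    }

In this instance the test never got that far: the PSP RoleBinding in namespace setup was already refused with the 500.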


Kubernetes e2e suite [k8s.io] Probing container should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] 1m11s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[k8s\.io\]\sProbing\scontainer\sshould\sbe\srestarted\swith\sa\sexec\s\"cat\s\/tmp\/health\"\sliveness\sprobe\s\[NodeConformance\]\s\[Conformance\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  5 20:52:13.148: Couldn't delete ns: "container-probe-2901": an error on the server ("Internal Server Error: \"/apis/networking.k8s.io/v1beta1/namespaces/container-probe-2901/ingresses\": the server has received too many requests and has asked us to try again later") has prevented the request from succeeding (&errors.StatusError{ErrStatus:v1.Status{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ListMeta:v1.ListMeta{SelfLink:"", ResourceVersion:"", Continue:"", RemainingItemCount:(*int64)(nil)}, Status:"Failure", Message:"an error on the server (\"Internal Server Error: \\\"/apis/networking.k8s.io/v1beta1/namespaces/container-probe-2901/ingresses\\\": the server has received too many requests and has asked us to try again later\") has prevented the request from succeeding", Reason:"InternalError", Details:(*v1.StatusDetails)(0xc00319c240), Code:500}})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:335
				
stdout/stderr from junit_skew17.xml


Kubernetes e2e suite [k8s.io] Probing container should be restarted with a local redirect http liveness probe 23s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[k8s\.io\]\sProbing\scontainer\sshould\sbe\srestarted\swith\sa\slocal\sredirect\shttp\sliveness\sprobe$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
Unexpected error:
    <*errors.withStack | 0xc0027b7560>: {
        error: {
            cause: {
                ErrStatus: {
                    TypeMeta: {Kind: "", APIVersion: ""},
                    ListMeta: {
                        SelfLink: "",
                        ResourceVersion: "",
                        Continue: "",
                        RemainingItemCount: nil,
                    },
                    Status: "Failure",
                    Message: "an error on the server (\"Internal Server Error: \\\"/apis/rbac.authorization.k8s.io/v1beta1/namespaces/container-probe-6704/rolebindings\\\": the server has received too many requests and has asked us to try again later\") has prevented the request from succeeding (post rolebindings.rbac.authorization.k8s.io)",
                    Reason: "InternalError",
                    Details: {
                        Name: "",
                        Group: "rbac.authorization.k8s.io",
                        Kind: "rolebindings",
                        UID: "",
                        Causes: [
                            {
                                Type: "UnexpectedServerResponse",
                                Message: "Internal Server Error: \"/apis/rbac.authorization.k8s.io/v1beta1/namespaces/container-probe-6704/rolebindings\": the server has received too many requests and has asked us to try again later",
                                Field: "",
                            },
                        ],
                        RetryAfterSeconds: 0,
                    },
                    Code: 500,
                },
            },
            msg: "binding ClusterRole/e2e-test-privileged-psp into \"container-probe-6704\" for [{ServiceAccount  default container-probe-6704}]",
        },
        stack: [0x15a53ee, 0x15eeda1, 0x15eed28, 0x15c416d, 0x15c297b, 0x7ac6dc, 0x7ac34f, 0x7ac774, 0x7b2441, 0x7b2064, 0x7b7acf, 0x7b75e4, 0x7b6e27, 0x7b948e, 0x7bbfb7, 0x7bbcfd, 0x36fd267, 0x370025b, 0x507960, 0x4607c1],
    }
    binding ClusterRole/e2e-test-privileged-psp into "container-probe-6704" for [{ServiceAccount  default container-probe-6704}]: an error on the server ("Internal Server Error: \"/apis/rbac.authorization.k8s.io/v1beta1/namespaces/container-probe-6704/rolebindings\": the server has received too many requests and has asked us to try again later") has prevented the request from succeeding (post rolebindings.rbac.authorization.k8s.io)
occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/psp_util.go:151
				
stdout/stderr from junit_skew23.xml


Kubernetes e2e suite [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance] 2m54s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[k8s\.io\]\sProbing\scontainer\sshould\shave\smonotonically\sincreasing\srestart\scount\s\[NodeConformance\]\s\[Conformance\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  5 20:55:29.795: Couldn't delete ns: "container-probe-8896": an error on the server ("Internal Server Error: \"/api/v1/namespaces/container-probe-8896\": the server has received too many requests and has asked us to try again later") has prevented the request from succeeding (delete namespaces container-probe-8896) (&errors.StatusError{ErrStatus:v1.Status{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ListMeta:v1.ListMeta{SelfLink:"", ResourceVersion:"", Continue:"", RemainingItemCount:(*int64)(nil)}, Status:"Failure", Message:"an error on the server (\"Internal Server Error: \\\"/api/v1/namespaces/container-probe-8896\\\": the server has received too many requests and has asked us to try again later\") has prevented the request from succeeding (delete namespaces container-probe-8896)", Reason:"InternalError", Details:(*v1.StatusDetails)(0xc002270a20), Code:500}})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:335
				
stdout/stderr from junit_skew21.xml


Kubernetes e2e suite [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] 39s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[k8s\.io\]\sProbing\scontainer\swith\sreadiness\sprobe\sshould\snot\sbe\sready\sbefore\sinitial\sdelay\sand\snever\srestart\s\[NodeConformance\]\s\[Conformance\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  5 20:48:19.351: Couldn't delete ns: "container-probe-4119": an error on the server ("Internal Server Error: \"/api/v1/namespaces/container-probe-4119\": the server has received too many requests and has asked us to try again later") has prevented the request from succeeding (delete namespaces container-probe-4119) (&errors.StatusError{ErrStatus:v1.Status{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ListMeta:v1.ListMeta{SelfLink:"", ResourceVersion:"", Continue:"", RemainingItemCount:(*int64)(nil)}, Status:"Failure", Message:"an error on the server (\"Internal Server Error: \\\"/api/v1/namespaces/container-probe-4119\\\": the server has received too many requests and has asked us to try again later\") has prevented the request from succeeding (delete namespaces container-probe-4119)", Reason:"InternalError", Details:(*v1.StatusDetails)(0xc00241f9e0), Code:500}})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:335
				
stdout/stderr from junit_skew17.xml


Kubernetes e2e suite [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] 52s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[k8s\.io\]\sProbing\scontainer\swith\sreadiness\sprobe\sthat\sfails\sshould\snever\sbe\sready\sand\snever\srestart\s\[NodeConformance\]\s\[Conformance\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Failed after 19.549s.
pod should not be ready
Error: Unexpected non-nil/non-zero extra argument at index 1:
	<*errors.StatusError>: &errors.StatusError{ErrStatus:v1.Status{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ListMeta:v1.ListMeta{SelfLink:"", ResourceVersion:"", Continue:"", RemainingItemCount:(*int64)(nil)}, Status:"Failure", Message:"an error on the server (\"Internal Server Error: \\\"/api/v1/namespaces/container-probe-2730/pods/test-webserver-e1783253-1da3-4056-b9cb-e11a49ca6621\\\": the server has received too many requests and has asked us to try again later\") has prevented the request from succeeding (get pods test-webserver-e1783253-1da3-4056-b9cb-e11a49ca6621)", Reason:"InternalError", Details:(*v1.StatusDetails)(0xc0030ee840), Code:500}}
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:102
				
stdout/stderr from junit_skew21.xml


Kubernetes e2e suite [k8s.io] Security Context When creating a container with runAsNonRoot should not run with an explicit root user ID 46s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[k8s\.io\]\sSecurity\sContext\sWhen\screating\sa\scontainer\swith\srunAsNonRoot\sshould\snot\srun\swith\san\sexplicit\sroot\suser\sID$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  5 20:44:26.412: Couldn't delete ns: "security-context-test-1748": an error on the server ("Internal Server Error: \"/api/v1/namespaces/security-context-test-1748/secrets\": the server has received too many requests and has asked us to try again later") has prevented the request from succeeding (&errors.StatusError{ErrStatus:v1.Status{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ListMeta:v1.ListMeta{SelfLink:"", ResourceVersion:"", Continue:"", RemainingItemCount:(*int64)(nil)}, Status:"Failure", Message:"an error on the server (\"Internal Server Error: \\\"/api/v1/namespaces/security-context-test-1748/secrets\\\": the server has received too many requests and has asked us to try again later\") has prevented the request from succeeding", Reason:"InternalError", Details:(*v1.StatusDetails)(0xc001a38120), Code:500}})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:335
				
stdout/stderr from junit_skew10.xml
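
These runAsNonRoot cases all pivot on one kubelet check: with RunAsNonRoot set, a container may only start if its effective UID is provably non-zero, either from RunAsUser or from a numeric USER in the image. The container-level security context, sketched with corev1 types (the UID is a placeholder):

    runAsNonRoot := true
    uid := int64(1000)
    sc := &corev1.SecurityContext{
        RunAsNonRoot: &runAsNonRoot, // kubelet refuses anything that resolves to UID 0
        RunAsUser:    &uid,          // omit to fall back to the image's numeric USER
    }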


Kubernetes e2e suite [k8s.io] Security Context When creating a container with runAsNonRoot should not run without a specified user ID 19s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[k8s\.io\]\sSecurity\sContext\sWhen\screating\sa\scontainer\swith\srunAsNonRoot\sshould\snot\srun\swithout\sa\sspecified\suser\sID$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
Unexpected error:
    <*errors.StatusError | 0xc0021b2000>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {
                SelfLink: "",
                ResourceVersion: "",
                Continue: "",
                RemainingItemCount: nil,
            },
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/namespaces/security-context-test-6848/serviceaccounts?fieldSelector=metadata.name%3Ddefault&amp;watch=true\\\": the server has received too many requests and has asked us to try again later\") has prevented the request from succeeding (get serviceaccounts)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceaccounts",
                UID: "",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/namespaces/security-context-test-6848/serviceaccounts?fieldSelector=metadata.name%3Ddefault&amp;watch=true\": the server has received too many requests and has asked us to try again later",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/namespaces/security-context-test-6848/serviceaccounts?fieldSelector=metadata.name%3Ddefault&amp;watch=true\": the server has received too many requests and has asked us to try again later") has prevented the request from succeeding (get serviceaccounts)
occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:217
				
stdout/stderr from junit_skew14.xml


Kubernetes e2e suite [k8s.io] Security Context When creating a container with runAsNonRoot should run with an explicit non-root user ID 34s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[k8s\.io\]\sSecurity\sContext\sWhen\screating\sa\scontainer\swith\srunAsNonRoot\sshould\srun\swith\san\sexplicit\snon\-root\suser\sID$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  5 20:57:26.833: Couldn't delete ns: "security-context-test-2730": an error on the server ("Internal Server Error: \"/apis/crd-publish-openapi-test-waldo.k8s.io/v1beta1/namespaces/security-context-test-2730/e2e-test-crd-publish-openapi-5586-crds\": the server has received too many requests and has asked us to try again later") has prevented the request from succeeding (&errors.StatusError{ErrStatus:v1.Status{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ListMeta:v1.ListMeta{SelfLink:"", ResourceVersion:"", Continue:"", RemainingItemCount:(*int64)(nil)}, Status:"Failure", Message:"an error on the server (\"Internal Server Error: \\\"/apis/crd-publish-openapi-test-waldo.k8s.io/v1beta1/namespaces/security-context-test-2730/e2e-test-crd-publish-openapi-5586-crds\\\": the server has received too many requests and has asked us to try again later\") has prevented the request from succeeding", Reason:"InternalError", Details:(*v1.StatusDetails)(0xc0026abce0), Code:500}})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:335
				
Click to see stdout/stderr from junit_skew16.xml

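The other recurring shape is teardown: "Couldn't delete ns" from framework.go:335. Note that the failing URL here targets a crd-publish-openapi CRD path inside a security-context namespace; the teardown path ends up touching every registered resource type, which is how CRDs left behind by other suites show up in these errors. A sketch, under stated assumptions (intervals are arbitrary; signatures are the context-free 1.16-era client-go ones; the helper name is mine), of a deletion wrapper that polls through the overload instead of failing the test:

package e2esketch

import (
	"time"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// deleteNamespaceWithRetry retries the DELETE while the apiserver sheds load,
// instead of failing on the first 500/429.
func deleteNamespaceWithRetry(c kubernetes.Interface, ns string) error {
	return wait.PollImmediate(2*time.Second, 2*time.Minute, func() (bool, error) {
		err := c.CoreV1().Namespaces().Delete(ns, &metav1.DeleteOptions{})
		switch {
		case err == nil, apierrors.IsNotFound(err):
			return true, nil // deleted, or already gone
		case apierrors.IsInternalError(err), apierrors.IsTooManyRequests(err):
			return false, nil // overloaded; try again
		default:
			return false, err
		}
	})
}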


Kubernetes e2e suite [k8s.io] Security Context When creating a container with runAsNonRoot should run with an image specified user ID 16s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[k8s\.io\]\sSecurity\sContext\sWhen\screating\sa\scontainer\swith\srunAsNonRoot\sshould\srun\swith\san\simage\sspecified\suser\sID$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
Unexpected error:
    <*errors.StatusError | 0xc000ef6140>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {
                SelfLink: "",
                ResourceVersion: "",
                Continue: "",
                RemainingItemCount: nil,
            },
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/namespaces/security-context-test-2801/serviceaccounts?fieldSelector=metadata.name%3Ddefault&amp;watch=true\\\": the server has received too many requests and has asked us to try again later\") has prevented the request from succeeding (get serviceaccounts)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceaccounts",
                UID: "",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/namespaces/security-context-test-2801/serviceaccounts?fieldSelector=metadata.name%3Ddefault&amp;watch=true\": the server has received too many requests and has asked us to try again later",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/namespaces/security-context-test-2801/serviceaccounts?fieldSelector=metadata.name%3Ddefault&watch=true\": the server has received too many requests and has asked us to try again later") has prevented the request from succeeding (get serviceaccounts)
occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:217
				
Click to see stdout/stderr from junit_skew15.xml

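Most of the setup-phase failures above share this shape: before a test body runs, the framework waits for the namespace's "default" ServiceAccount to exist, and it does so with a field-selector watch; that watch request is the GET that keeps drawing the 500. A sketch of the equivalent call, assuming the context-free 1.16-era client-go signatures (the helper name is mine):

package e2esketch

import (
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/watch"
	"k8s.io/client-go/kubernetes"
)

// waitForDefaultServiceAccount approximates the framework's namespace setup:
// watch serviceaccounts filtered to metadata.name=default until one appears.
// The Watch call below issues exactly the request failing above.
func waitForDefaultServiceAccount(c kubernetes.Interface, ns string) error {
	w, err := c.CoreV1().ServiceAccounts(ns).Watch(metav1.ListOptions{
		FieldSelector: "metadata.name=default",
	})
	if err != nil {
		return err // the 500s in this run are returned here
	}
	defer w.Stop()
	for ev := range w.ResultChan() {
		if ev.Type == watch.Added || ev.Type == watch.Modified {
			return nil
		}
	}
	return fmt.Errorf("watch closed before the default serviceaccount appeared")
}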


Kubernetes e2e suite [k8s.io] Security Context When creating a container with runAsUser should run the container with uid 0 [LinuxOnly] [NodeConformance] 28s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[k8s\.io\]\sSecurity\sContext\sWhen\screating\sa\scontainer\swith\srunAsUser\sshould\srun\sthe\scontainer\swith\suid\s0\s\[LinuxOnly\]\s\[NodeConformance\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  5 20:59:14.020: Couldn't delete ns: "security-context-test-9030": an error on the server ("Internal Server Error: \"/apis/crd-publish-openapi-test-empty.k8s.io/v1/namespaces/security-context-test-9030/e2e-test-crd-publish-openapi-5132-crds\": the server has received too many requests and has asked us to try again later") has prevented the request from succeeding (&errors.StatusError{ErrStatus:v1.Status{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ListMeta:v1.ListMeta{SelfLink:"", ResourceVersion:"", Continue:"", RemainingItemCount:(*int64)(nil)}, Status:"Failure", Message:"an error on the server (\"Internal Server Error: \\\"/apis/crd-publish-openapi-test-empty.k8s.io/v1/namespaces/security-context-test-9030/e2e-test-crd-publish-openapi-5132-crds\\\": the server has received too many requests and has asked us to try again later\") has prevented the request from succeeding", Reason:"InternalError", Details:(*v1.StatusDetails)(0xc0026e7620), Code:500}})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:335
				
Click to see stdout/stderr from junit_skew10.xml



Kubernetes e2e suite [k8s.io] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] 42s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[k8s\.io\]\sSecurity\sContext\sWhen\screating\sa\scontainer\swith\srunAsUser\sshould\srun\sthe\scontainer\swith\suid\s65534\s\[LinuxOnly\]\s\[NodeConformance\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  5 20:52:16.113: Couldn't delete ns: "security-context-test-168": an error on the server ("Internal Server Error: \"/apis/apps/v1/namespaces/security-context-test-168/controllerrevisions\": the server has received too many requests and has asked us to try again later") has prevented the request from succeeding (&errors.StatusError{ErrStatus:v1.Status{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ListMeta:v1.ListMeta{SelfLink:"", ResourceVersion:"", Continue:"", RemainingItemCount:(*int64)(nil)}, Status:"Failure", Message:"an error on the server (\"Internal Server Error: \\\"/apis/apps/v1/namespaces/security-context-test-168/controllerrevisions\\\": the server has received too many requests and has asked us to try again later\") has prevented the request from succeeding", Reason:"InternalError", Details:(*v1.StatusDetails)(0xc001e84cc0), Code:500}})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:335
				
Click to see stdout/stderr from junit_skew12.xml



Kubernetes e2e suite [k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with readonly rootfs when readOnlyRootFilesystem=true [LinuxOnly] [NodeConformance] 32s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[k8s\.io\]\sSecurity\sContext\sWhen\screating\sa\spod\swith\sreadOnlyRootFilesystem\sshould\srun\sthe\scontainer\swith\sreadonly\srootfs\swhen\sreadOnlyRootFilesystem\=true\s\[LinuxOnly\]\s\[NodeConformance\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  5 20:53:13.288: Couldn't delete ns: "security-context-test-5995": an error on the server ("Internal Server Error: \"/apis/crd-publish-openapi-test-waldo.k8s.io/v1beta1/namespaces/security-context-test-5995/e2e-test-crd-publish-openapi-5586-crds\": the server has received too many requests and has asked us to try again later") has prevented the request from succeeding (&errors.StatusError{ErrStatus:v1.Status{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ListMeta:v1.ListMeta{SelfLink:"", ResourceVersion:"", Continue:"", RemainingItemCount:(*int64)(nil)}, Status:"Failure", Message:"an error on the server (\"Internal Server Error: \\\"/apis/crd-publish-openapi-test-waldo.k8s.io/v1beta1/namespaces/security-context-test-5995/e2e-test-crd-publish-openapi-5586-crds\\\": the server has received too many requests and has asked us to try again later\") has prevented the request from succeeding", Reason:"InternalError", Details:(*v1.StatusDetails)(0xc0020cdda0), Code:500}})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:335
				
Click to see stdout/stderr from junit_skew07.xml



Kubernetes e2e suite [k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] 15s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[k8s\.io\]\sSecurity\sContext\sWhen\screating\sa\spod\swith\sreadOnlyRootFilesystem\sshould\srun\sthe\scontainer\swith\swritable\srootfs\swhen\sreadOnlyRootFilesystem\=false\s\[NodeConformance\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
Unexpected error:
    <*errors.StatusError | 0xc002bf21e0>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {
                SelfLink: "",
                ResourceVersion: "",
                Continue: "",
                RemainingItemCount: nil,
            },
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/namespaces/security-context-test-3036/serviceaccounts?fieldSelector=metadata.name%3Ddefault&amp;watch=true\\\": the server has received too many requests and has asked us to try again later\") has prevented the request from succeeding (get serviceaccounts)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceaccounts",
                UID: "",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/namespaces/security-context-test-3036/serviceaccounts?fieldSelector=metadata.name%3Ddefault&amp;watch=true\": the server has received too many requests and has asked us to try again later",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/namespaces/security-context-test-3036/serviceaccounts?fieldSelector=metadata.name%3Ddefault&watch=true\": the server has received too many requests and has asked us to try again later") has prevented the request from succeeding (get serviceaccounts)
occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:217
				
Click to see stdout/stderr from junit_skew19.xml



Kubernetes e2e suite [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should allow privilege escalation when not explicitly set and uid != 0 [LinuxOnly] [NodeConformance] 29s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[k8s\.io\]\sSecurity\sContext\swhen\screating\scontainers\swith\sAllowPrivilegeEscalation\sshould\sallow\sprivilege\sescalation\swhen\snot\sexplicitly\sset\sand\suid\s\!\=\s0\s\[LinuxOnly\]\s\[NodeConformance\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  5 20:51:07.456: Couldn't delete ns: "security-context-test-8116": an error on the server ("Internal Server Error: \"/apis/extensions/v1beta1/namespaces/security-context-test-8116/ingresses\": the server has received too many requests and has asked us to try again later") has prevented the request from succeeding (&errors.StatusError{ErrStatus:v1.Status{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ListMeta:v1.ListMeta{SelfLink:"", ResourceVersion:"", Continue:"", RemainingItemCount:(*int64)(nil)}, Status:"Failure", Message:"an error on the server (\"Internal Server Error: \\\"/apis/extensions/v1beta1/namespaces/security-context-test-8116/ingresses\\\": the server has received too many requests and has asked us to try again later\") has prevented the request from succeeding", Reason:"InternalError", Details:(*v1.StatusDetails)(0xc0021dc2a0), Code:500}})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:335
				
Click to see stdout/stderr from junit_skew22.xml



Kubernetes e2e suite [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should allow privilege escalation when true [LinuxOnly] [NodeConformance] 36s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[k8s\.io\]\sSecurity\sContext\swhen\screating\scontainers\swith\sAllowPrivilegeEscalation\sshould\sallow\sprivilege\sescalation\swhen\strue\s\[LinuxOnly\]\s\[NodeConformance\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
Unexpected error:
    <*errors.withStack | 0xc001e7a400>: {
        error: {
            cause: {
                ErrStatus: {
                    TypeMeta: {Kind: "", APIVersion: ""},
                    ListMeta: {
                        SelfLink: "",
                        ResourceVersion: "",
                        Continue: "",
                        RemainingItemCount: nil,
                    },
                    Status: "Failure",
                    Message: "an error on the server (\"Internal Server Error: \\\"/apis/rbac.authorization.k8s.io/v1beta1/namespaces/security-context-test-533/rolebindings\\\": the server has received too many requests and has asked us to try again later\") has prevented the request from succeeding (post rolebindings.rbac.authorization.k8s.io)",
                    Reason: "InternalError",
                    Details: {
                        Name: "",
                        Group: "rbac.authorization.k8s.io",
                        Kind: "rolebindings",
                        UID: "",
                        Causes: [
                            {
                                Type: "UnexpectedServerResponse",
                                Message: "Internal Server Error: \"/apis/rbac.authorization.k8s.io/v1beta1/namespaces/security-context-test-533/rolebindings\": the server has received too many requests and has asked us to try again later",
                                Field: "",
                            },
                        ],
                        RetryAfterSeconds: 0,
                    },
                    Code: 500,
                },
            },
            msg: "binding ClusterRole/e2e-test-privileged-psp into \"security-context-test-533\" for [{ServiceAccount  default security-context-test-533}]",
        },
        stack: [0x15a53ee, 0x15eeda1, 0x15eed28, 0x15c416d, 0x15c297b, 0x7ac6dc, 0x7ac34f, 0x7ac774, 0x7b2441, 0x7b2064, 0x7b7acf, 0x7b75e4, 0x7b6e27, 0x7b948e, 0x7bbfb7, 0x7bbcfd, 0x36fd267, 0x370025b, 0x507960, 0x4607c1],
    }
    binding ClusterRole/e2e-test-privileged-psp into "security-context-test-533" for [{ServiceAccount  default security-context-test-533}]: an error on the server ("Internal Server Error: \"/apis/rbac.authorization.k8s.io/v1beta1/namespaces/security-context-test-533/rolebindings\": the server has received too many requests and has asked us to try again later") has prevented the request from succeeding (post rolebindings.rbac.authorization.k8s.io)
occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/psp_util.go:151
				
Click to see stdout/stderr from junit_skew19.xml

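This case (and the Mount propagation one below) fails inside psp_util.go:151, where the framework grants the namespace's default ServiceAccount the e2e-test-privileged-psp ClusterRole before running privileged pods; the POST to the rbac v1beta1 rolebindings endpoint is what draws the 500. A sketch of an equivalent binding, with v1beta1 chosen only to match the failing URL and the context-free 1.16-era client-go signatures assumed:

package e2esketch

import (
	rbacv1beta1 "k8s.io/api/rbac/v1beta1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// bindPrivilegedPSP approximates the framework call failing above: bind the
// ClusterRole e2e-test-privileged-psp to the namespace's default
// ServiceAccount so test pods pass PodSecurityPolicy admission.
func bindPrivilegedPSP(c kubernetes.Interface, ns string) error {
	rb := &rbacv1beta1.RoleBinding{
		ObjectMeta: metav1.ObjectMeta{Name: "e2e-test-privileged-psp", Namespace: ns},
		RoleRef: rbacv1beta1.RoleRef{
			APIGroup: rbacv1beta1.GroupName,
			Kind:     "ClusterRole",
			Name:     "e2e-test-privileged-psp",
		},
		Subjects: []rbacv1beta1.Subject{{
			Kind:      rbacv1beta1.ServiceAccountKind,
			Name:      "default",
			Namespace: ns,
		}},
	}
	// POST /apis/rbac.authorization.k8s.io/v1beta1/namespaces/{ns}/rolebindings
	_, err := c.RbacV1beta1().RoleBindings(ns).Create(rb)
	return err
}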


Kubernetes e2e suite [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] 22s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[k8s\.io\]\sSecurity\sContext\swhen\screating\scontainers\swith\sAllowPrivilegeEscalation\sshould\snot\sallow\sprivilege\sescalation\swhen\sfalse\s\[LinuxOnly\]\s\[NodeConformance\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
Unexpected error:
    <*errors.StatusError | 0xc001d92000>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {
                SelfLink: "",
                ResourceVersion: "",
                Continue: "",
                RemainingItemCount: nil,
            },
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/namespaces/security-context-test-752/serviceaccounts?fieldSelector=metadata.name%3Ddefault&amp;watch=true\\\": the server has received too many requests and has asked us to try again later\") has prevented the request from succeeding (get serviceaccounts)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceaccounts",
                UID: "",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/namespaces/security-context-test-752/serviceaccounts?fieldSelector=metadata.name%3Ddefault&amp;watch=true\": the server has received too many requests and has asked us to try again later",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/namespaces/security-context-test-752/serviceaccounts?fieldSelector=metadata.name%3Ddefault&watch=true\": the server has received too many requests and has asked us to try again later") has prevented the request from succeeding (get serviceaccounts)
occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:217
				
Click to see stdout/stderr from junit_skew07.xml



Kubernetes e2e suite [k8s.io] Sysctls [NodeFeature:Sysctls] should not launch unsafe, but not explicitly enabled sysctls on the node 21s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[k8s\.io\]\sSysctls\s\[NodeFeature\:Sysctls\]\sshould\snot\slaunch\sunsafe\,\sbut\snot\sexplicitly\senabled\ssysctls\son\sthe\snode$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  5 21:01:09.378: Couldn't delete ns: "sysctl-1652": an error on the server ("Internal Server Error: \"/apis/extensions/v1beta1/namespaces/sysctl-1652/replicasets\": the server has received too many requests and has asked us to try again later") has prevented the request from succeeding (&errors.StatusError{ErrStatus:v1.Status{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ListMeta:v1.ListMeta{SelfLink:"", ResourceVersion:"", Continue:"", RemainingItemCount:(*int64)(nil)}, Status:"Failure", Message:"an error on the server (\"Internal Server Error: \\\"/apis/extensions/v1beta1/namespaces/sysctl-1652/replicasets\\\": the server has received too many requests and has asked us to try again later\") has prevented the request from succeeding", Reason:"InternalError", Details:(*v1.StatusDetails)(0xc0022d4960), Code:500}})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:335
				
Click to see stdout/stderr from junit_skew03.xml



Kubernetes e2e suite [k8s.io] Sysctls [NodeFeature:Sysctls] should reject invalid sysctls 9.50s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[k8s\.io\]\sSysctls\s\[NodeFeature\:Sysctls\]\sshould\sreject\sinvalid\ssysctls$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
Unexpected error:
    <*errors.StatusError | 0xc004dc0dc0>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {
                SelfLink: "",
                ResourceVersion: "",
                Continue: "",
                RemainingItemCount: nil,
            },
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/namespaces/sysctl-1491/serviceaccounts?fieldSelector=metadata.name%3Ddefault&amp;watch=true\\\": the server has received too many requests and has asked us to try again later\") has prevented the request from succeeding (get serviceaccounts)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceaccounts",
                UID: "",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/namespaces/sysctl-1491/serviceaccounts?fieldSelector=metadata.name%3Ddefault&amp;watch=true\": the server has received too many requests and has asked us to try again later",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/namespaces/sysctl-1491/serviceaccounts?fieldSelector=metadata.name%3Ddefault&watch=true\": the server has received too many requests and has asked us to try again later") has prevented the request from succeeding (get serviceaccounts)
occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:217
				
Click to see stdout/stderr from junit_skew13.xml



Kubernetes e2e suite [k8s.io] Sysctls [NodeFeature:Sysctls] should support unsafe sysctls which are actually whitelisted 28s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[k8s\.io\]\sSysctls\s\[NodeFeature\:Sysctls\]\sshould\ssupport\sunsafe\ssysctls\swhich\sare\sactually\swhitelisted$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/sysctl.go:103
Unexpected error:
    <*errors.StatusError | 0xc0018d86e0>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {
                SelfLink: "",
                ResourceVersion: "",
                Continue: "",
                RemainingItemCount: nil,
            },
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/namespaces/sysctl-162/pods/sysctl-90de3d1d-ce48-43fa-8cbb-4b8a542f4a3f/log?container=test-container&amp;previous=false\\\": the server has received too many requests and has asked us to try again later\") has prevented the request from succeeding (get pods sysctl-90de3d1d-ce48-43fa-8cbb-4b8a542f4a3f)",
            Reason: "InternalError",
            Details: {
                Name: "sysctl-90de3d1d-ce48-43fa-8cbb-4b8a542f4a3f",
                Group: "",
                Kind: "pods",
                UID: "",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/namespaces/sysctl-162/pods/sysctl-90de3d1d-ce48-43fa-8cbb-4b8a542f4a3f/log?container=test-container&amp;previous=false\": the server has received too many requests and has asked us to try again later",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/namespaces/sysctl-162/pods/sysctl-90de3d1d-ce48-43fa-8cbb-4b8a542f4a3f/log?container=test-container&previous=false\": the server has received too many requests and has asked us to try again later") has prevented the request from succeeding (get pods sysctl-90de3d1d-ce48-43fa-8cbb-4b8a542f4a3f)
occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/sysctl.go:140
				
Click to see stdout/stderr from junit_skew16.xml

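This sysctl case gets further than most: the pod runs, and the failure arrives while reading its container log to verify the result (the GET .../log?container=test-container&previous=false in the dump). A minimal sketch of the same request via client-go, assuming the context-free signatures that match this 1.16-era job; the helper name is mine:

package e2esketch

import (
	corev1 "k8s.io/api/core/v1"
	"k8s.io/client-go/kubernetes"
)

// containerLogs issues GET /api/v1/namespaces/{ns}/pods/{pod}/log
// ?container={ctr}&previous=false, the request failing above.
func containerLogs(c kubernetes.Interface, ns, pod, ctr string) (string, error) {
	raw, err := c.CoreV1().Pods(ns).GetLogs(pod, &corev1.PodLogOptions{
		Container: ctr,
		Previous:  false,
	}).DoRaw()
	return string(raw), err
}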


Kubernetes e2e suite [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance] 58s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[k8s\.io\]\sVariable\sExpansion\sshould\sallow\scomposing\senv\svars\sinto\snew\senv\svars\s\[NodeConformance\]\s\[Conformance\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  5 20:46:21.081: Couldn't delete ns: "var-expansion-1200": an error on the server ("Internal Server Error: \"/apis/coordination.k8s.io/v1/namespaces/var-expansion-1200/leases\": the server has received too many requests and has asked us to try again later") has prevented the request from succeeding (&errors.StatusError{ErrStatus:v1.Status{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ListMeta:v1.ListMeta{SelfLink:"", ResourceVersion:"", Continue:"", RemainingItemCount:(*int64)(nil)}, Status:"Failure", Message:"an error on the server (\"Internal Server Error: \\\"/apis/coordination.k8s.io/v1/namespaces/var-expansion-1200/leases\\\": the server has received too many requests and has asked us to try again later\") has prevented the request from succeeding", Reason:"InternalError", Details:(*v1.StatusDetails)(0xc0022d9620), Code:500}})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:335
				
Click to see stdout/stderr from junit_skew05.xml



Kubernetes e2e suite [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance] 57s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[k8s\.io\]\sVariable\sExpansion\sshould\sallow\ssubstituting\svalues\sin\sa\scontainer\'s\scommand\s\[NodeConformance\]\s\[Conformance\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Unexpected error:
    <*errors.errorString | 0xc0028456c0>: {
        s: "failed to get logs from var-expansion-0a7d3459-0304-456c-b127-47f8756c331d for dapi-container: an error on the server (\"Internal Server Error: \\\"/api/v1/namespaces/var-expansion-3684/pods/var-expansion-0a7d3459-0304-456c-b127-47f8756c331d/log?container=dapi-container&amp;previous=false\\\": the server has received too many requests and has asked us to try again later\") has prevented the request from succeeding (get pods var-expansion-0a7d3459-0304-456c-b127-47f8756c331d)",
    }
    failed to get logs from var-expansion-0a7d3459-0304-456c-b127-47f8756c331d for dapi-container: an error on the server ("Internal Server Error: \"/api/v1/namespaces/var-expansion-3684/pods/var-expansion-0a7d3459-0304-456c-b127-47f8756c331d/log?container=dapi-container&previous=false\": the server has received too many requests and has asked us to try again later") has prevented the request from succeeding (get pods var-expansion-0a7d3459-0304-456c-b127-47f8756c331d)
occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2342
				
Click to see stdout/stderr from junit_skew20.xml



Kubernetes e2e suite [k8s.io] Variable Expansion should allow substituting values in a volume subpath [sig-storage][NodeFeature:VolumeSubpathEnvExpansion] 10s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[k8s\.io\]\sVariable\sExpansion\sshould\sallow\ssubstituting\svalues\sin\sa\svolume\ssubpath\s\[sig\-storage\]\[NodeFeature\:VolumeSubpathEnvExpansion\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  5 21:04:18.132: Couldn't delete ns: "var-expansion-7417": an error on the server ("Internal Server Error: \"/api/v1/namespaces/var-expansion-7417\": the server has received too many requests and has asked us to try again later") has prevented the request from succeeding (delete namespaces var-expansion-7417) (&errors.StatusError{ErrStatus:v1.Status{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ListMeta:v1.ListMeta{SelfLink:"", ResourceVersion:"", Continue:"", RemainingItemCount:(*int64)(nil)}, Status:"Failure", Message:"an error on the server (\"Internal Server Error: \\\"/api/v1/namespaces/var-expansion-7417\\\": the server has received too many requests and has asked us to try again later\") has prevented the request from succeeding (delete namespaces var-expansion-7417)", Reason:"InternalError", Details:(*v1.StatusDetails)(0xc0020cd260), Code:500}})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:335
				
Click to see stdout/stderr from junit_skew07.xml



Kubernetes e2e suite [k8s.io] [sig-node] AppArmor load AppArmor profiles can disable an AppArmor profile, using unconfined 28s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[k8s\.io\]\s\[sig\-node\]\sAppArmor\sload\sAppArmor\sprofiles\scan\sdisable\san\sAppArmor\sprofile\,\susing\sunconfined$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/apparmor.go:31
Failed to run apparmor-loader Pod
Unexpected error:
    <*errors.StatusError | 0xc0021761e0>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {
                SelfLink: "",
                ResourceVersion: "",
                Continue: "",
                RemainingItemCount: nil,
            },
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/namespaces/apparmor-2808/pods/apparmor-loader-mvz6f\\\": the server has received too many requests and has asked us to try again later\") has prevented the request from succeeding (get pods apparmor-loader-mvz6f)",
            Reason: "InternalError",
            Details: {
                Name: "apparmor-loader-mvz6f",
                Group: "",
                Kind: "pods",
                UID: "",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/namespaces/apparmor-2808/pods/apparmor-loader-mvz6f\": the server has received too many requests and has asked us to try again later",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/namespaces/apparmor-2808/pods/apparmor-loader-mvz6f\": the server has received too many requests and has asked us to try again later") has prevented the request from succeeding (get pods apparmor-loader-mvz6f)
occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/apparmor.go:248
				
Click to see stdout/stderr from junit_skew03.xml



Kubernetes e2e suite [k8s.io] [sig-node] AppArmor load AppArmor profiles should enforce an AppArmor profile 30s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[k8s\.io\]\s\[sig\-node\]\sAppArmor\sload\sAppArmor\sprofiles\sshould\senforce\san\sAppArmor\sprofile$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/apparmor.go:42
Error creating Pod
Unexpected error:
    <*errors.StatusError | 0xc000d488c0>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {
                SelfLink: "",
                ResourceVersion: "",
                Continue: "",
                RemainingItemCount: nil,
            },
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/namespaces/apparmor-5670/pods\\\": the server has received too many requests and has asked us to try again later\") has prevented the request from succeeding (post pods)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "pods",
                UID: "",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/namespaces/apparmor-5670/pods\": the server has received too many requests and has asked us to try again later",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/namespaces/apparmor-5670/pods\": the server has received too many requests and has asked us to try again later") has prevented the request from succeeding (post pods)
occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:80
				
Click to see stdout/stderr from junit_skew11.xml

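Both AppArmor cases fail one step earlier still, on the POST /pods that creates the test pod itself. For reference, a hedged sketch of a minimal create with 1.16-era client-go; the pod spec here (name, image, command) is illustrative, not the test's actual loader pod:

package e2esketch

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// createTestPod issues POST /api/v1/namespaces/{ns}/pods, the request that
// returns the 500 above. All spec fields are placeholders.
func createTestPod(c kubernetes.Interface, ns string) (*corev1.Pod, error) {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{GenerateName: "e2e-test-", Namespace: ns},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "test-container",
				Image:   "busybox",
				Command: []string{"sh", "-c", "sleep 60"},
			}},
		},
	}
	return c.CoreV1().Pods(ns).Create(pod)
}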


Kubernetes e2e suite [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] 38s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[k8s\.io\]\s\[sig\-node\]\sEvents\sshould\sbe\ssent\sby\skubelets\sand\sthe\sscheduler\sabout\spods\sscheduling\sand\srunning\s\s\[Conformance\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Feb  5 20:59:15.311: Failed to create pod: an error on the server ("Internal Server Error: \"/api/v1/namespaces/events-7542/pods\": the server has received too many requests and has asked us to try again later") has prevented the request from succeeding (post pods)
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/events.go:76
				
Click to see stdout/stderr from junit_skew16.xml



Kubernetes e2e suite [k8s.io] [sig-node] Mount propagation should propagate mounts to the host 33s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[k8s\.io\]\s\[sig\-node\]\sMount\spropagation\sshould\spropagate\smounts\sto\sthe\shost$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
Unexpected error:
    <*errors.withStack | 0xc006a47da0>: {
        error: {
            cause: {
                ErrStatus: {
                    TypeMeta: {Kind: "", APIVersion: ""},
                    ListMeta: {
                        SelfLink: "",
                        ResourceVersion: "",
                        Continue: "",
                        RemainingItemCount: nil,
                    },
                    Status: "Failure",
                    Message: "an error on the server (\"Internal Server Error: \\\"/apis/rbac.authorization.k8s.io/v1beta1/namespaces/mount-propagation-1742/rolebindings\\\": the server has received too many requests and has asked us to try again later\") has prevented the request from succeeding (post rolebindings.rbac.authorization.k8s.io)",
                    Reason: "InternalError",
                    Details: {
                        Name: "",
                        Group: "rbac.authorization.k8s.io",
                        Kind: "rolebindings",
                        UID: "",
                        Causes: [
                            {
                                Type: "UnexpectedServerResponse",
                                Message: "Internal Server Error: \"/apis/rbac.authorization.k8s.io/v1beta1/namespaces/mount-propagation-1742/rolebindings\": the server has received too many requests and has asked us to try again later",
                                Field: "",
                            },
                        ],
                        RetryAfterSeconds: 0,
                    },
                    Code: 500,
                },
            },
            msg: "binding ClusterRole/e2e-test-privileged-psp into \"mount-propagation-1742\" for [{ServiceAccount  default mount-propagation-1742}]",
        },
        stack: [0x15a53ee, 0x15eeda1, 0x15eed28, 0x15c416d, 0x15c297b, 0x7ac6dc, 0x7ac34f, 0x7ac774, 0x7b2441, 0x7b2064, 0x7b7acf, 0x7b75e4, 0x7b6e27, 0x7b948e, 0x7bbfb7, 0x7bbcfd, 0x36fd267, 0x370025b, 0x507960, 0x4607c1],
    }
    binding ClusterRole/e2e-test-privileged-psp into "mount-propagation-1742" for [{ServiceAccount  default mount-propagation-1742}]: an error on the server ("Internal Server Error: \"/apis/rbac.authorization.k8s.io/v1beta1/namespaces/mount-propagation-1742/rolebindings\": the server has received too many requests and has asked us to try again later") has prevented the request from succeeding (post rolebindings.rbac.authorization.k8s.io)
occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/psp_util.go:151
				
Click to see stdout/stderr from junit_skew14.xml



Kubernetes e2e suite [k8s.io] [sig-node] NodeProblemDetector [DisabledForLargeClusters] should run without error 21s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[k8s\.io\]\s\[sig\-node\]\sNodeProblemDetector\s\[DisabledForLargeClusters\]\sshould\srun\swithout\serror$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
Unexpected error:
    <*errors.StatusError | 0xc0022ed2c0>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {
                SelfLink: "",
                ResourceVersion: "",
                Continue: "",
                RemainingItemCount: nil,
            },
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/namespaces/node-problem-detector-5707/serviceaccounts?fieldSelector=metadata.name%3Ddefault&amp;watch=true\\\": the server has received too many requests and has asked us to try again later\") has prevented the request from succeeding (get serviceaccounts)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceaccounts",
                UID: "",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/namespaces/node-problem-detector-5707/serviceaccounts?fieldSelector=metadata.name%3Ddefault&amp;watch=true\": the server has received too many requests and has asked us to try again later",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/namespaces/node-problem-detector-5707/serviceaccounts?fieldSelector=metadata.name%3Ddefault&watch=true\": the server has received too many requests and has asked us to try again later") has prevented the request from succeeding (get serviceaccounts)
occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:217
				
Click to see stdout/stderr from junit_skew12.xml



Kubernetes e2e suite [k8s.io] [sig-node] Pods Extended [k8s.io] Delete Grace Period should be submitted and removed [Conformance] 34s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[k8s\.io\]\s\[sig\-node\]\sPods\sExtended\s\[k8s\.io\]\sDelete\sGrace\sPeriod\sshould\sbe\ssubmitted\sand\sremoved\s\[Conformance\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  5 20:45:16.563: Couldn't delete ns: "pods-1579": an error on the server ("Internal Server Error: \"/api/v1/namespaces/pods-1579/configmaps\": the server has received too many requests and has asked us to try again later") has prevented the request from succeeding (&errors.StatusError{ErrStatus:v1.Status{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ListMeta:v1.ListMeta{SelfLink:"", ResourceVersion:"", Continue:"", RemainingItemCount:(*int64)(nil)}, Status:"Failure", Message:"an error on the server (\"Internal Server Error: \\\"/api/v1/namespaces/pods-1579/configmaps\\\": the server has received too many requests and has asked us to try again later\") has prevented the request from succeeding", Reason:"InternalError", Details:(*v1.StatusDetails)(0xc0014aeb40), Code:500}})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:335
				
Click to see stdout/stderr from junit_skew19.xml



Kubernetes e2e suite [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be submitted and removed [Conformance] 16s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[k8s\.io\]\s\[sig\-node\]\sPods\sExtended\s\[k8s\.io\]\sPods\sSet\sQOS\sClass\sshould\sbe\ssubmitted\sand\sremoved\s\s\[Conformance\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
Unexpected error:
    <*errors.StatusError | 0xc002275cc0>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {
                SelfLink: "",
                ResourceVersion: "",
                Continue: "",
                RemainingItemCount: nil,
            },
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/namespaces/pods-2431/serviceaccounts?fieldSelector=metadata.name%3Ddefault&amp;watch=true\\\": the server has received too many requests and has asked us to try again later\") has prevented the request from succeeding (get serviceaccounts)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceaccounts",
                UID: "",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/namespaces/pods-2431/serviceaccounts?fieldSelector=metadata.name%3Ddefault&amp;watch=true\": the server has received too many requests and has asked us to try again later",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/namespaces/pods-2431/serviceaccounts?fieldSelector=metadata.name%3Ddefault&watch=true\": the server has received too many requests and has asked us to try again later") has prevented the request from succeeding (get serviceaccounts)
occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:217
				
Click to see stdout/stderr from junit_skew01.xml



Kubernetes e2e suite [k8s.io] [sig-node] PreStop graceful pod terminated should wait until preStop hook completes the process 1m48s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[k8s\.io\]\s\[sig\-node\]\sPreStop\sgraceful\spod\sterminated\sshould\swait\suntil\spreStop\shook\scompletes\sthe\sprocess$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  5 21:06:46.354: Couldn't delete ns: "prestop-5068": an error on the server ("Internal Server Error: \"/api/v1/namespaces/prestop-5068\": the server has received too many requests and has asked us to try again later") has prevented the request from succeeding (delete namespaces prestop-5068) (&errors.StatusError{ErrStatus:v1.Status{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ListMeta:v1.ListMeta{SelfLink:"", ResourceVersion:"", Continue:"", RemainingItemCount:(*int64)(nil)}, Status:"Failure", Message:"an error on the server (\"Internal Server Error: \\\"/api/v1/namespaces/prestop-5068\\\": the server has received too many requests and has asked us to try again later\") has prevented the request from succeeding (delete namespaces prestop-5068)", Reason:"InternalError", Details:(*v1.StatusDetails)(0xc002df07e0), Code:500}})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:335
				
Click to see stdout/stderr from junit_skew01.xml



Kubernetes e2e suite [k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance] 38s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[k8s\.io\]\s\[sig\-node\]\sPreStop\sshould\scall\sprestop\swhen\skilling\sa\spod\s\s\[Conformance\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  5 20:51:29.949: Couldn't delete ns: "prestop-831": an error on the server ("Internal Server Error: \"/api/v1/namespaces/prestop-831\": the server has received too many requests and has asked us to try again later") has prevented the request from succeeding (delete namespaces prestop-831) (&errors.StatusError{ErrStatus:v1.Status{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ListMeta:v1.ListMeta{SelfLink:"", ResourceVersion:"", Continue:"", RemainingItemCount:(*int64)(nil)}, Status:"Failure", Message:"an error on the server (\"Internal Server Error: \\\"/api/v1/namespaces/prestop-831\\\": the server has received too many requests and has asked us to try again later\") has prevented the request from succeeding (delete namespaces prestop-831)", Reason:"InternalError", Details:(*v1.StatusDetails)(0xc00296f200), Code:500}})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:335
				
Click to see stdout/stderr from junit_skew05.xml



Kubernetes e2e suite [k8s.io] [sig-node] Security Context should support container.SecurityContext.RunAsUser [LinuxOnly] 23s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[k8s\.io\]\s\[sig\-node\]\sSecurity\sContext\sshould\ssupport\scontainer\.SecurityContext\.RunAsUser\s\[LinuxOnly\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
Unexpected error:
    <*errors.StatusError | 0xc00285c460>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {
                SelfLink: "",
                ResourceVersion: "",
                Continue: "",
                RemainingItemCount: nil,
            },
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/namespaces/security-context-2560/serviceaccounts?fieldSelector=metadata.name%3Ddefault&amp;watch=true\\\": the server has received too many requests and has asked us to try again later\") has prevented the request from succeeding (get serviceaccounts)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceaccounts",
                UID: "",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/namespaces/security-context-2560/serviceaccounts?fieldSelector=metadata.name%3Ddefault&amp;watch=true\": the server has received too many requests and has asked us to try again later",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/namespaces/security-context-2560/serviceaccounts?fieldSelector=metadata.name%3Ddefault&watch=true\": the server has received too many requests and has asked us to try again later") has prevented the request from succeeding (get serviceaccounts)
occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:217
				
Click to see stdout/stderr from junit_skew15.xml



Kubernetes e2e suite [k8s.io] [sig-node] Security Context should support pod.Spec.SecurityContext.RunAsUser And pod.Spec.SecurityContext.RunAsGroup [LinuxOnly] 24s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[k8s\.io\]\s\[sig\-node\]\sSecurity\sContext\sshould\ssupport\spod\.Spec\.SecurityContext\.RunAsUser\sAnd\spod\.Spec\.SecurityContext\.RunAsGroup\s\[LinuxOnly\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:86
Feb  5 20:59:21.945: Failed to delete pod "security-context-293d7e45-c553-44f4-8a31-b4d6c50b4fa8": an error on the server ("Internal Server Error: \"/api/v1/namespaces/security-context-4371/pods/security-context-293d7e45-c553-44f4-8a31-b4d6c50b4fa8\": the server has received too many requests and has asked us to try again later") has prevented the request from succeeding (delete pods security-context-293d7e45-c553-44f4-8a31-b4d6c50b4fa8)
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:175
				
Click to see stdout/stderr from junit_skew06.xml



Kubernetes e2e suite [k8s.io] [sig-node] Security Context should support pod.Spec.SecurityContext.RunAsUser [LinuxOnly] 14s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[k8s\.io\]\s\[sig\-node\]\sSecurity\sContext\sshould\ssupport\spod\.Spec\.SecurityContext\.RunAsUser\s\[LinuxOnly\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  5 20:44:14.507: Couldn't delete ns: "security-context-8913": an error on the server ("Internal Server Error: \"/apis/batch/v1/namespaces/security-context-8913/jobs\": the server has received too many requests and has asked us to try again later") has prevented the request from succeeding (&errors.StatusError{ErrStatus:v1.Status{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ListMeta:v1.ListMeta{SelfLink:"", ResourceVersion:"", Continue:"", RemainingItemCount:(*int64)(nil)}, Status:"Failure", Message:"an error on the server (\"Internal Server Error: \\\"/apis/batch/v1/namespaces/security-context-8913/jobs\\\": the server has received too many requests and has asked us to try again later\") has prevented the request from succeeding", Reason:"InternalError", Details:(*v1.StatusDetails)(0xc00212a840), Code:500}})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:335
				
Click to see stdout/stderr from junit_skew07.xml



Kubernetes e2e suite [k8s.io] [sig-node] Security Context should support pod.Spec.SecurityContext.SupplementalGroups [LinuxOnly] 19s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[k8s\.io\]\s\[sig\-node\]\sSecurity\sContext\sshould\ssupport\spod\.Spec\.SecurityContext\.SupplementalGroups\s\[LinuxOnly\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
Unexpected error:
    <*errors.StatusError | 0xc001e52d20>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {
                SelfLink: "",
                ResourceVersion: "",
                Continue: "",
                RemainingItemCount: nil,
            },
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/namespaces/security-context-478/serviceaccounts?fieldSelector=metadata.name%3Ddefault&amp;watch=true\\\": the server has received too many requests and has asked us to try again later\") has prevented the request from succeeding (get serviceaccounts)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceaccounts",
                UID: "",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/namespaces/security-context-478/serviceaccounts?fieldSelector=metadata.name%3Ddefault&amp;watch=true\": the server has received too many requests and has asked us to try again later",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/namespaces/security-context-478/serviceaccounts?fieldSelector=metadata.name%3Ddefault&watch=true\": the server has received too many requests and has asked us to try again later") has prevented the request from succeeding (get serviceaccounts)
occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:217
				
Click to see stdout/stderr from junit_skew18.xml



Kubernetes e2e suite [k8s.io] [sig-node] kubelet [k8s.io] [sig-node] Clean up pods on node kubelet should be able to delete 10 pods per node in 1m0s. 2m10s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[k8s\.io\]\s\[sig\-node\]\skubelet\s\[k8s\.io\]\s\[sig\-node\]\sClean\sup\spods\son\snode\skubelet\sshould\sbe\sable\sto\sdelete\s10\spods\sper\snode\sin\s1m0s\.$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/kubelet.go:300
Unexpected error:
    <*errors.StatusError | 0xc00219a0a0>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {
                SelfLink: "",
                ResourceVersion: "",
                Continue: "",
                RemainingItemCount: nil,
            },
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/nodes/gke-bootstrap-e2e-default-pool-f9126390-hfq0\\\": the server has received too many requests and has asked us to try again later\") has prevented the request from succeeding (put nodes gke-bootstrap-e2e-default-pool-f9126390-hfq0)",
            Reason: "InternalError",
            Details: {
                Name: "gke-bootstrap-e2e-default-pool-f9126390-hfq0",
                Group: "",
                Kind: "nodes",
                UID: "",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/nodes/gke-bootstrap-e2e-default-pool-f9126390-hfq0\": the server has received too many requests and has asked us to try again later",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/nodes/gke-bootstrap-e2e-default-pool-f9126390-hfq0\": the server has received too many requests and has asked us to try again later") has prevented the request from succeeding (put nodes gke-bootstrap-e2e-default-pool-f9126390-hfq0)
occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:2781
				
Click to see stdout/stderr from junit_skew03.xml

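This kubelet cleanup test is the only failure on a PUT: it appears to die while updating the Node object itself (util.go:2781, most likely a node label/taint helper). Node writes are normally wrapped in a conflict-retry loop; a sketch with client-go's retry.RetryOnConflict and the context-free 1.16-era signatures. Note that the 500s in this run are not Conflict errors, so they would still need the overload backoff sketched earlier:

package e2esketch

import (
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/util/retry"
)

// labelNode shows the usual get-modify-update cycle behind a
// "put nodes {name}" request, retried on write conflicts.
func labelNode(c kubernetes.Interface, name, key, value string) error {
	return retry.RetryOnConflict(retry.DefaultRetry, func() error {
		node, err := c.CoreV1().Nodes().Get(name, metav1.GetOptions{})
		if err != nil {
			return err
		}
		if node.Labels == nil {
			node.Labels = map[string]string{}
		}
		node.Labels[key] = value
		_, err = c.CoreV1().Nodes().Update(node)
		return err
	})
}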


Kubernetes e2e suite [sig-api-machinery] AdmissionWebhook Should be able to deny attaching pod 1m14s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-api\-machinery\]\sAdmissionWebhook\sShould\sbe\sable\sto\sdeny\sattaching\spod$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:133
failed to create pod to-be-attached-pod in namespace: webhook-4343
Unexpected error:
    <*errors.StatusError | 0xc0004f17c0>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {
                SelfLink: "",
                ResourceVersion: "",
                Continue: "",
                RemainingItemCount: nil,
            },
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/namespaces/webhook-4343/pods\\\": the server has received too many requests and has asked us to try again later\") has prevented the request from succeeding (post pods)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "pods",
                UID: "",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/namespaces/webhook-4343/pods\": the server has received too many requests and has asked us to try again later",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/namespaces/webhook-4343/pods\": the server has received too many requests and has asked us to try again later") has prevented the request from succeeding (post pods)
occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:820
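Here the POST creating to-be-attached-pod was shed by the limiter before the admission webhook under test was ever exercised. A sketch of a create wrapper that polls through transient overload; the name createPodRetrying and the 2s/1m timings are assumptions, not framework code:

package retryutil

import (
	"context"
	"time"

	v1 "k8s.io/api/core/v1"
	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// createPodRetrying creates a pod, treating apiserver overload as a
// reason to poll again and anything else as a hard failure.
func createPodRetrying(ctx context.Context, c kubernetes.Interface, ns string, pod *v1.Pod) (*v1.Pod, error) {
	var created *v1.Pod
	err := wait.PollImmediate(2*time.Second, time.Minute, func() (bool, error) {
		p, err := c.CoreV1().Pods(ns).Create(ctx, pod, metav1.CreateOptions{})
		switch {
		case err == nil:
			created = p
			return true, nil
		case apierrors.IsInternalError(err) || apierrors.IsTooManyRequests(err):
			return false, nil // overloaded: try again next tick
		default:
			return false, err // real rejection (e.g. by the webhook): stop
		}
	})
	return created, err
}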
				


Kubernetes e2e suite [sig-api-machinery] AdmissionWebhook Should be able to deny custom resource creation and deletion 1m3s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-api\-machinery\]\sAdmissionWebhook\sShould\sbe\sable\sto\sdeny\scustom\sresource\screation\sand\sdeletion$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:100
waiting for service webhook-3808/e2e-test-webhook to have 1 endpoint
Unexpected error:
    <*errors.StatusError | 0xc001161cc0>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {
                SelfLink: "",
                ResourceVersion: "",
                Continue: "",
                RemainingItemCount: nil,
            },
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/namespaces/webhook-3808/endpoints\\\": the server has received too many requests and has asked us to try again later\") has prevented the request from succeeding (get endpoints)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "endpoints",
                UID: "",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/namespaces/webhook-3808/endpoints\": the server has received too many requests and has asked us to try again later",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/namespaces/webhook-3808/endpoints\": the server has received too many requests and has asked us to try again later") has prevented the request from succeeding (get endpoints)
occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:411
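The wait for webhook-3808/e2e-test-webhook's endpoint failed on the GET itself, not on the endpoint count. A sketch of an endpoint-count poll that keeps going through overload and not-yet-created responses; waitForEndpointCount and its timings are ours, not the framework helper this test actually used:

package retryutil

import (
	"context"
	"time"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitForEndpointCount polls until the named service has the expected
// number of ready endpoint addresses.
func waitForEndpointCount(ctx context.Context, c kubernetes.Interface, ns, name string, want int) error {
	return wait.PollImmediate(2*time.Second, 5*time.Minute, func() (bool, error) {
		ep, err := c.CoreV1().Endpoints(ns).Get(ctx, name, metav1.GetOptions{})
		switch {
		case apierrors.IsInternalError(err) || apierrors.IsTooManyRequests(err) || apierrors.IsNotFound(err):
			return false, nil // overloaded or not yet created: keep polling
		case err != nil:
			return false, err
		}
		got := 0
		for _, subset := range ep.Subsets {
			got += len(subset.Addresses)
		}
		return got == want, nil
	})
}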