Result: FAILURE
Tests: 722 failed / 72 succeeded
Started: 2020-02-06 07:44
Elapsed: 1h33m
Builder: gke-prow-default-pool-cf4891d4-wlz7
resultstore: https://source.cloud.google.com/results/invocations/fdbcf025-e030-45f0-ad68-7c0c838ea84f/targets/test
pod: 6d53f2f9-48b4-11ea-830c-2e3252add0ae
infra-commit: 155cb3ab8
job-version: v1.16.7-beta.0.23+0a70c2fa6d4642
node_os_image: cos-77-12371-89-0
revision: v1.16.7-beta.0.23+0a70c2fa6d4642

Test Failures


Cluster downgrade hpa-upgrade 15m45s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Cluster\sdowngrade\shpa\-upgrade$'
timeout waiting 15m0s for 3 replicas
Unexpected error:
    <*errors.errorString | 0xc000306890>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred

k8s.io/kubernetes/test/e2e/upgrades.(*HPAUpgradeTest).test(0x7ea44e0)
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/upgrades/horizontal_pod_autoscalers.go:90 +0x3e3
k8s.io/kubernetes/test/e2e/upgrades.(*HPAUpgradeTest).Setup(0x7ea44e0, 0xc000bc8b40)
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/upgrades/horizontal_pod_autoscalers.go:60 +0x1f7
k8s.io/kubernetes/test/e2e/lifecycle.(*chaosMonkeyAdapter).Test(0xc00115de80, 0xc0020c9da0)
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/lifecycle/cluster_upgrade.go:395 +0x2c1
k8s.io/kubernetes/test/e2e/chaosmonkey.(*Chaosmonkey).Do.func1(0xc0020c9da0, 0xc00038bfe0)
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/chaosmonkey/chaosmonkey.go:90 +0x76
created by k8s.io/kubernetes/test/e2e/chaosmonkey.(*Chaosmonkey).Do
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/chaosmonkey/chaosmonkey.go:87 +0xa7
from junit_upgradeupgrades.xml



Kubernetes e2e suite [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance] 17s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[k8s\.io\]\sContainer\sLifecycle\sHook\swhen\screate\sa\spod\swith\slifecycle\shook\sshould\sexecute\spoststart\shttp\shook\sproperly\s\[NodeConformance\]\s\[Conformance\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Unexpected error:
    <*errors.StatusError | 0xc001716dc0>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {
                SelfLink: "",
                ResourceVersion: "",
                Continue: "",
                RemainingItemCount: nil,
            },
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/namespaces/container-lifecycle-hook-599/pods/pod-with-poststart-http-hook\\\": the server has received too many requests and has asked us to try again later\") has prevented the request from succeeding (get pods pod-with-poststart-http-hook)",
            Reason: "InternalError",
            Details: {
                Name: "pod-with-poststart-http-hook",
                Group: "",
                Kind: "pods",
                UID: "",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/namespaces/container-lifecycle-hook-599/pods/pod-with-poststart-http-hook\": the server has received too many requests and has asked us to try again later",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/namespaces/container-lifecycle-hook-599/pods/pod-with-poststart-http-hook\": the server has received too many requests and has asked us to try again later") has prevented the request from succeeding (get pods pod-with-poststart-http-hook)
occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:113
				
from junit_skew15.xml



Kubernetes e2e suite [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance] 58s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[k8s\.io\]\sContainer\sLifecycle\sHook\swhen\screate\sa\spod\swith\slifecycle\shook\sshould\sexecute\sprestop\sexec\shook\sproperly\s\[NodeConformance\]\s\[Conformance\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63
Unexpected error:
    <*errors.StatusError | 0xc001bfe0a0>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {
                SelfLink: "",
                ResourceVersion: "",
                Continue: "",
                RemainingItemCount: nil,
            },
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/namespaces/container-lifecycle-hook-934/pods/pod-handle-http-request\\\": the server has received too many requests and has asked us to try again later\") has prevented the request from succeeding (get pods pod-handle-http-request)",
            Reason: "InternalError",
            Details: {
                Name: "pod-handle-http-request",
                Group: "",
                Kind: "pods",
                UID: "",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/namespaces/container-lifecycle-hook-934/pods/pod-handle-http-request\": the server has received too many requests and has asked us to try again later",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/namespaces/container-lifecycle-hook-934/pods/pod-handle-http-request\": the server has received too many requests and has asked us to try again later") has prevented the request from succeeding (get pods pod-handle-http-request)
occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:113
				
from junit_skew09.xml



Kubernetes e2e suite [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance] 44s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[k8s\.io\]\sContainer\sLifecycle\sHook\swhen\screate\sa\spod\swith\slifecycle\shook\sshould\sexecute\sprestop\shttp\shook\sproperly\s\[NodeConformance\]\s\[Conformance\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Feb  6 08:46:21.341: Failed to delete pod "pod-with-prestop-http-hook": an error on the server ("Internal Server Error: \"/api/v1/namespaces/container-lifecycle-hook-1548/pods/pod-with-prestop-http-hook\": the server has received too many requests and has asked us to try again later") has prevented the request from succeeding (delete pods pod-with-prestop-http-hook)
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:175
				
from junit_skew14.xml



Kubernetes e2e suite [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] 22s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[k8s\.io\]\sContainer\sRuntime\sblackbox\stest\son\sterminated\scontainer\sshould\sreport\stermination\smessage\s\[LinuxOnly\]\sas\sempty\swhen\spod\ssucceeds\sand\sTerminationMessagePolicy\sFallbackToLogsOnError\sis\sset\s\[NodeConformance\]\s\[Conformance\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
Unexpected error:
    <*errors.StatusError | 0xc0017799a0>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {
                SelfLink: "",
                ResourceVersion: "",
                Continue: "",
                RemainingItemCount: nil,
            },
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/namespaces/container-runtime-9994/serviceaccounts?fieldSelector=metadata.name%3Ddefault&amp;watch=true\\\": the server has received too many requests and has asked us to try again later\") has prevented the request from succeeding (get serviceaccounts)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceaccounts",
                UID: "",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/namespaces/container-runtime-9994/serviceaccounts?fieldSelector=metadata.name%3Ddefault&amp;watch=true\": the server has received too many requests and has asked us to try again later",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/namespaces/container-runtime-9994/serviceaccounts?fieldSelector=metadata.name%3Ddefault&amp;watch=true\": the server has received too many requests and has asked us to try again later") has prevented the request from succeeding (get serviceaccounts)
occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:217
				
from junit_skew12.xml



Kubernetes e2e suite [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] 33s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[k8s\.io\]\sContainer\sRuntime\sblackbox\stest\son\sterminated\scontainer\sshould\sreport\stermination\smessage\s\[LinuxOnly\]\sfrom\sfile\swhen\spod\ssucceeds\sand\sTerminationMessagePolicy\sFallbackToLogsOnError\sis\sset\s\[NodeConformance\]\s\[Conformance\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Expected success, but got an error:
    <*errors.StatusError | 0xc00188fa40>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {
                SelfLink: "",
                ResourceVersion: "",
                Continue: "",
                RemainingItemCount: nil,
            },
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/namespaces/container-runtime-868/pods/termination-message-containerabb71a71-3d6a-45e3-ac3f-6e4f33984773\\\": the server has received too many requests and has asked us to try again later\") has prevented the request from succeeding (delete pods termination-message-containerabb71a71-3d6a-45e3-ac3f-6e4f33984773)",
            Reason: "InternalError",
            Details: {
                Name: "termination-message-containerabb71a71-3d6a-45e3-ac3f-6e4f33984773",
                Group: "",
                Kind: "pods",
                UID: "",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/namespaces/container-runtime-868/pods/termination-message-containerabb71a71-3d6a-45e3-ac3f-6e4f33984773\": the server has received too many requests and has asked us to try again later",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/namespaces/container-runtime-868/pods/termination-message-containerabb71a71-3d6a-45e3-ac3f-6e4f33984773\": the server has received too many requests and has asked us to try again later") has prevented the request from succeeding (delete pods termination-message-containerabb71a71-3d6a-45e3-ac3f-6e4f33984773)
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:161
				
from junit_skew11.xml



Kubernetes e2e suite [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] 1m32s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[k8s\.io\]\sContainer\sRuntime\sblackbox\stest\son\sterminated\scontainer\sshould\sreport\stermination\smessage\s\[LinuxOnly\]\sfrom\slog\soutput\sif\sTerminationMessagePolicy\sFallbackToLogsOnError\sis\sset\s\[NodeConformance\]\s\[Conformance\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Error creating Pod
Unexpected error:
    <*errors.StatusError | 0xc002286be0>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {
                SelfLink: "",
                ResourceVersion: "",
                Continue: "",
                RemainingItemCount: nil,
            },
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/namespaces/container-runtime-7897/pods\\\": the server has received too many requests and has asked us to try again later\") has prevented the request from succeeding (post pods)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "pods",
                UID: "",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/namespaces/container-runtime-7897/pods\": the server has received too many requests and has asked us to try again later",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/namespaces/container-runtime-7897/pods\": the server has received too many requests and has asked us to try again later") has prevented the request from succeeding (post pods)
occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:80
				
from junit_skew20.xml



Kubernetes e2e suite [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] 35s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[k8s\.io\]\sContainer\sRuntime\sblackbox\stest\son\sterminated\scontainer\sshould\sreport\stermination\smessage\s\[LinuxOnly\]\sif\sTerminationMessagePath\sis\sset\sas\snon\-root\suser\sand\sat\sa\snon\-default\spath\s\[NodeConformance\]\s\[Conformance\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  6 08:53:19.972: Couldn't delete ns: "container-runtime-8681": an error on the server ("Internal Server Error: \"/api/v1/namespaces/container-runtime-8681/endpoints\": the server has received too many requests and has asked us to try again later") has prevented the request from succeeding (&errors.StatusError{ErrStatus:v1.Status{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ListMeta:v1.ListMeta{SelfLink:"", ResourceVersion:"", Continue:"", RemainingItemCount:(*int64)(nil)}, Status:"Failure", Message:"an error on the server (\"Internal Server Error: \\\"/api/v1/namespaces/container-runtime-8681/endpoints\\\": the server has received too many requests and has asked us to try again later\") has prevented the request from succeeding", Reason:"InternalError", Details:(*v1.StatusDetails)(0xc00207c6c0), Code:500}})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:335
				
from junit_skew18.xml



Kubernetes e2e suite [k8s.io] Container Runtime blackbox test when running a container with a new image should be able to pull from private registry with secret [LinuxOnly] [NodeConformance] 16s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[k8s\.io\]\sContainer\sRuntime\sblackbox\stest\swhen\srunning\sa\scontainer\swith\sa\snew\simage\sshould\sbe\sable\sto\spull\sfrom\sprivate\sregistry\swith\ssecret\s\[LinuxOnly\]\s\[NodeConformance\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
Unexpected error:
    <*errors.withStack | 0xc0019953a0>: {
        error: {
            cause: {
                ErrStatus: {
                    TypeMeta: {Kind: "", APIVersion: ""},
                    ListMeta: {
                        SelfLink: "",
                        ResourceVersion: "",
                        Continue: "",
                        RemainingItemCount: nil,
                    },
                    Status: "Failure",
                    Message: "an error on the server (\"Internal Server Error: \\\"/apis/rbac.authorization.k8s.io/v1beta1/namespaces/container-runtime-6685/rolebindings\\\": the server has received too many requests and has asked us to try again later\") has prevented the request from succeeding (post rolebindings.rbac.authorization.k8s.io)",
                    Reason: "InternalError",
                    Details: {
                        Name: "",
                        Group: "rbac.authorization.k8s.io",
                        Kind: "rolebindings",
                        UID: "",
                        Causes: [
                            {
                                Type: "UnexpectedServerResponse",
                                Message: "Internal Server Error: \"/apis/rbac.authorization.k8s.io/v1beta1/namespaces/container-runtime-6685/rolebindings\": the server has received too many requests and has asked us to try again later",
                                Field: "",
                            },
                        ],
                        RetryAfterSeconds: 0,
                    },
                    Code: 500,
                },
            },
            msg: "binding ClusterRole/e2e-test-privileged-psp into \"container-runtime-6685\" for [{ServiceAccount  default container-runtime-6685}]",
        },
        stack: [0x15a53ee, 0x15eeda1, 0x15eed28, 0x15c416d, 0x15c297b, 0x7ac6dc, 0x7ac34f, 0x7ac774, 0x7b2441, 0x7b2064, 0x7b7acf, 0x7b75e4, 0x7b6e27, 0x7b948e, 0x7bbfb7, 0x7bbcfd, 0x36fd267, 0x370025b, 0x507960, 0x4607c1],
    }
    binding ClusterRole/e2e-test-privileged-psp into "container-runtime-6685" for [{ServiceAccount  default container-runtime-6685}]: an error on the server ("Internal Server Error: \"/apis/rbac.authorization.k8s.io/v1beta1/namespaces/container-runtime-6685/rolebindings\": the server has received too many requests and has asked us to try again later") has prevented the request from succeeding (post rolebindings.rbac.authorization.k8s.io)
occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/psp_util.go:151
				
from junit_skew06.xml



Kubernetes e2e suite [k8s.io] Container Runtime blackbox test when running a container with a new image should be able to pull image from docker hub [LinuxOnly] [NodeConformance] 49s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[k8s\.io\]\sContainer\sRuntime\sblackbox\stest\swhen\srunning\sa\scontainer\swith\sa\snew\simage\sshould\sbe\sable\sto\spull\simage\sfrom\sdocker\shub\s\[LinuxOnly\]\s\[NodeConformance\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  6 08:47:35.843: Couldn't delete ns: "container-runtime-3798": an error on the server ("Internal Server Error: \"/api/v1/namespaces/container-runtime-3798\": the server has received too many requests and has asked us to try again later") has prevented the request from succeeding (delete namespaces container-runtime-3798) (&errors.StatusError{ErrStatus:v1.Status{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ListMeta:v1.ListMeta{SelfLink:"", ResourceVersion:"", Continue:"", RemainingItemCount:(*int64)(nil)}, Status:"Failure", Message:"an error on the server (\"Internal Server Error: \\\"/api/v1/namespaces/container-runtime-3798\\\": the server has received too many requests and has asked us to try again later\") has prevented the request from succeeding (delete namespaces container-runtime-3798)", Reason:"InternalError", Details:(*v1.StatusDetails)(0xc001619e60), Code:500}})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:335
				
from junit_skew09.xml



Kubernetes e2e suite [k8s.io] Container Runtime blackbox test when running a container with a new image should be able to pull image from gcr.io [LinuxOnly] [NodeConformance] 1m51s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[k8s\.io\]\sContainer\sRuntime\sblackbox\stest\swhen\srunning\sa\scontainer\swith\sa\snew\simage\sshould\sbe\sable\sto\spull\simage\sfrom\sgcr\.io\s\[LinuxOnly\]\s\[NodeConformance\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  6 08:45:11.935: Couldn't delete ns: "container-runtime-1129": an error on the server ("Internal Server Error: \"/api/v1/namespaces/container-runtime-1129/configmaps\": the server has received too many requests and has asked us to try again later") has prevented the request from succeeding (&errors.StatusError{ErrStatus:v1.Status{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ListMeta:v1.ListMeta{SelfLink:"", ResourceVersion:"", Continue:"", RemainingItemCount:(*int64)(nil)}, Status:"Failure", Message:"an error on the server (\"Internal Server Error: \\\"/api/v1/namespaces/container-runtime-1129/configmaps\\\": the server has received too many requests and has asked us to try again later\") has prevented the request from succeeding", Reason:"InternalError", Details:(*v1.StatusDetails)(0xc003967f80), Code:500}})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:335
				
from junit_skew16.xml



Kubernetes e2e suite [k8s.io] Container Runtime blackbox test when running a container with a new image should not be able to pull from private registry without secret [NodeConformance] 16s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[k8s\.io\]\sContainer\sRuntime\sblackbox\stest\swhen\srunning\sa\scontainer\swith\sa\snew\simage\sshould\snot\sbe\sable\sto\spull\sfrom\sprivate\sregistry\swithout\ssecret\s\[NodeConformance\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  6 08:40:14.367: Couldn't delete ns: "container-runtime-519": an error on the server ("Internal Server Error: \"/api/v1/namespaces/container-runtime-519/resourcequotas\": the server has received too many requests and has asked us to try again later") has prevented the request from succeeding (&errors.StatusError{ErrStatus:v1.Status{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ListMeta:v1.ListMeta{SelfLink:"", ResourceVersion:"", Continue:"", RemainingItemCount:(*int64)(nil)}, Status:"Failure", Message:"an error on the server (\"Internal Server Error: \\\"/api/v1/namespaces/container-runtime-519/resourcequotas\\\": the server has received too many requests and has asked us to try again later\") has prevented the request from succeeding", Reason:"InternalError", Details:(*v1.StatusDetails)(0xc00218e060), Code:500}})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:335
				
from junit_skew18.xml



Kubernetes e2e suite [k8s.io] Container Runtime blackbox test when running a container with a new image should not be able to pull image from invalid registry [NodeConformance] 35s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[k8s\.io\]\sContainer\sRuntime\sblackbox\stest\swhen\srunning\sa\scontainer\swith\sa\snew\simage\sshould\snot\sbe\sable\sto\spull\simage\sfrom\sinvalid\sregistry\s\[NodeConformance\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
Unexpected error:
    <*errors.StatusError | 0xc0026180a0>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {
                SelfLink: "",
                ResourceVersion: "",
                Continue: "",
                RemainingItemCount: nil,
            },
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/namespaces/container-runtime-4732/serviceaccounts?fieldSelector=metadata.name%3Ddefault&amp;watch=true\\\": the server has received too many requests and has asked us to try again later\") has prevented the request from succeeding (get serviceaccounts)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceaccounts",
                UID: "",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/namespaces/container-runtime-4732/serviceaccounts?fieldSelector=metadata.name%3Ddefault&amp;watch=true\": the server has received too many requests and has asked us to try again later",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/namespaces/container-runtime-4732/serviceaccounts?fieldSelector=metadata.name%3Ddefault&amp;watch=true\": the server has received too many requests and has asked us to try again later") has prevented the request from succeeding (get serviceaccounts)
occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:217
				
from junit_skew19.xml



Kubernetes e2e suite [k8s.io] Container Runtime blackbox test when running a container with a new image should not be able to pull non-existing image from gcr.io [NodeConformance] 48s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[k8s\.io\]\sContainer\sRuntime\sblackbox\stest\swhen\srunning\sa\scontainer\swith\sa\snew\simage\sshould\snot\sbe\sable\sto\spull\snon\-existing\simage\sfrom\sgcr\.io\s\[NodeConformance\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  6 08:59:18.946: Couldn't delete ns: "container-runtime-115": an error on the server ("Internal Server Error: \"/apis/extensions/v1beta1/namespaces/container-runtime-115/networkpolicies\": the server has received too many requests and has asked us to try again later") has prevented the request from succeeding (&errors.StatusError{ErrStatus:v1.Status{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ListMeta:v1.ListMeta{SelfLink:"", ResourceVersion:"", Continue:"", RemainingItemCount:(*int64)(nil)}, Status:"Failure", Message:"an error on the server (\"Internal Server Error: \\\"/apis/extensions/v1beta1/namespaces/container-runtime-115/networkpolicies\\\": the server has received too many requests and has asked us to try again later\") has prevented the request from succeeding", Reason:"InternalError", Details:(*v1.StatusDetails)(0xc002960fc0), Code:500}})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:335
				
from junit_skew16.xml



Kubernetes e2e suite [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance] 24s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[k8s\.io\]\sContainer\sRuntime\sblackbox\stest\swhen\sstarting\sa\scontainer\sthat\sexits\sshould\srun\swith\sthe\sexpected\sstatus\s\[NodeConformance\]\s\[Conformance\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
Unexpected error:
    <*errors.StatusError | 0xc000e09cc0>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {
                SelfLink: "",
                ResourceVersion: "",
                Continue: "",
                RemainingItemCount: nil,
            },
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/namespaces/container-runtime-8517/serviceaccounts?fieldSelector=metadata.name%3Ddefault&amp;watch=true\\\": the server has received too many requests and has asked us to try again later\") has prevented the request from succeeding (get serviceaccounts)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceaccounts",
                UID: "",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/namespaces/container-runtime-8517/serviceaccounts?fieldSelector=metadata.name%3Ddefault&amp;watch=true\": the server has received too many requests and has asked us to try again later",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/namespaces/container-runtime-8517/serviceaccounts?fieldSelector=metadata.name%3Ddefault&amp;watch=true\": the server has received too many requests and has asked us to try again later") has prevented the request from succeeding (get serviceaccounts)
occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:217
				
stdout/stderr: junit_skew01.xml


Kubernetes e2e suite [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] 27s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[k8s\.io\]\sDocker\sContainers\sshould\sbe\sable\sto\soverride\sthe\simage\'s\sdefault\sarguments\s\(docker\scmd\)\s\[NodeConformance\]\s\[Conformance\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
Unexpected error:
    <*errors.withStack | 0xc0027f2da0>: {
        error: {
            cause: {
                ErrStatus: {
                    TypeMeta: {Kind: "", APIVersion: ""},
                    ListMeta: {
                        SelfLink: "",
                        ResourceVersion: "",
                        Continue: "",
                        RemainingItemCount: nil,
                    },
                    Status: "Failure",
                    Message: "an error on the server (\"Internal Server Error: \\\"/apis/rbac.authorization.k8s.io/v1beta1/namespaces/containers-7887/rolebindings\\\": the server has received too many requests and has asked us to try again later\") has prevented the request from succeeding (post rolebindings.rbac.authorization.k8s.io)",
                    Reason: "InternalError",
                    Details: {
                        Name: "",
                        Group: "rbac.authorization.k8s.io",
                        Kind: "rolebindings",
                        UID: "",
                        Causes: [
                            {
                                Type: "UnexpectedServerResponse",
                                Message: "Internal Server Error: \"/apis/rbac.authorization.k8s.io/v1beta1/namespaces/containers-7887/rolebindings\": the server has received too many requests and has asked us to try again later",
                                Field: "",
                            },
                        ],
                        RetryAfterSeconds: 0,
                    },
                    Code: 500,
                },
            },
            msg: "binding ClusterRole/e2e-test-privileged-psp into \"containers-7887\" for [{ServiceAccount  default containers-7887}]",
        },
        stack: [0x15a53ee, 0x15eeda1, 0x15eed28, 0x15c416d, 0x15c297b, 0x7ac6dc, 0x7ac34f, 0x7ac774, 0x7b2441, 0x7b2064, 0x7b7acf, 0x7b75e4, 0x7b6e27, 0x7b948e, 0x7bbfb7, 0x7bbcfd, 0x36fd267, 0x370025b, 0x507960, 0x4607c1],
    }
    binding ClusterRole/e2e-test-privileged-psp into "containers-7887" for [{ServiceAccount  default containers-7887}]: an error on the server ("Internal Server Error: \"/apis/rbac.authorization.k8s.io/v1beta1/namespaces/containers-7887/rolebindings\": the server has received too many requests and has asked us to try again later") has prevented the request from succeeding (post rolebindings.rbac.authorization.k8s.io)
occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/psp_util.go:151
				
stdout/stderr: junit_skew02.xml


Kubernetes e2e suite [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] 10s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[k8s\.io\]\sDocker\sContainers\sshould\sbe\sable\sto\soverride\sthe\simage\'s\sdefault\scommand\s\(docker\sentrypoint\)\s\[NodeConformance\]\s\[Conformance\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
Unexpected error:
    <*errors.StatusError | 0xc002079d60>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {
                SelfLink: "",
                ResourceVersion: "",
                Continue: "",
                RemainingItemCount: nil,
            },
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/namespaces/containers-3032/serviceaccounts?fieldSelector=metadata.name%3Ddefault&watch=true\\\": the server has received too many requests and has asked us to try again later\") has prevented the request from succeeding (get serviceaccounts)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceaccounts",
                UID: "",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/namespaces/containers-3032/serviceaccounts?fieldSelector=metadata.name%3Ddefault&watch=true\": the server has received too many requests and has asked us to try again later",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/namespaces/containers-3032/serviceaccounts?fieldSelector=metadata.name%3Ddefault&watch=true\": the server has received too many requests and has asked us to try again later") has prevented the request from succeeding (get serviceaccounts)
occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:217
				
stdout/stderr: junit_skew20.xml


Kubernetes e2e suite [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance] 1m7s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[k8s\.io\]\sDocker\sContainers\sshould\sbe\sable\sto\soverride\sthe\simage\'s\sdefault\scommand\sand\sarguments\s\[NodeConformance\]\s\[Conformance\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Feb  6 09:01:24.840: Failed to delete pod "client-containers-edd49af1-41b6-45ab-82b2-fec4a0f9d89b": an error on the server ("Internal Server Error: \"/api/v1/namespaces/containers-2692/pods/client-containers-edd49af1-41b6-45ab-82b2-fec4a0f9d89b\": the server has received too many requests and has asked us to try again later") has prevented the request from succeeding (delete pods client-containers-edd49af1-41b6-45ab-82b2-fec4a0f9d89b)
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:175
				
stdout/stderr: junit_skew05.xml


Kubernetes e2e suite [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance] 32s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[k8s\.io\]\sDocker\sContainers\sshould\suse\sthe\simage\sdefaults\sif\scommand\sand\sargs\sare\sblank\s\[NodeConformance\]\s\[Conformance\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  6 08:46:23.719: Couldn't delete ns: "containers-7003": an error on the server ("Internal Server Error: \"/apis/batch/v1beta1/namespaces/containers-7003/cronjobs\": the server has received too many requests and has asked us to try again later") has prevented the request from succeeding (&errors.StatusError{ErrStatus:v1.Status{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ListMeta:v1.ListMeta{SelfLink:"", ResourceVersion:"", Continue:"", RemainingItemCount:(*int64)(nil)}, Status:"Failure", Message:"an error on the server (\"Internal Server Error: \\\"/apis/batch/v1beta1/namespaces/containers-7003/cronjobs\\\": the server has received too many requests and has asked us to try again later\") has prevented the request from succeeding", Reason:"InternalError", Details:(*v1.StatusDetails)(0xc0026b6120), Code:500}})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:335
				
stdout/stderr: junit_skew10.xml


Kubernetes e2e suite [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance] 10s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[k8s\.io\]\sInitContainer\s\[NodeConformance\]\sshould\sinvoke\sinit\scontainers\son\sa\sRestartAlways\spod\s\[Conformance\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
Unexpected error:
    <*errors.StatusError | 0xc001b52a00>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {
                SelfLink: "",
                ResourceVersion: "",
                Continue: "",
                RemainingItemCount: nil,
            },
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/namespaces/init-container-7341/serviceaccounts?fieldSelector=metadata.name%3Ddefault&watch=true\\\": the server has received too many requests and has asked us to try again later\") has prevented the request from succeeding (get serviceaccounts)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceaccounts",
                UID: "",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/namespaces/init-container-7341/serviceaccounts?fieldSelector=metadata.name%3Ddefault&watch=true\": the server has received too many requests and has asked us to try again later",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/namespaces/init-container-7341/serviceaccounts?fieldSelector=metadata.name%3Ddefault&watch=true\": the server has received too many requests and has asked us to try again later") has prevented the request from succeeding (get serviceaccounts)
occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:217
				
stdout/stderr: junit_skew15.xml


Kubernetes e2e suite [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance] 33s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[k8s\.io\]\sInitContainer\s\[NodeConformance\]\sshould\sinvoke\sinit\scontainers\son\sa\sRestartNever\spod\s\[Conformance\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  6 08:46:18.304: Couldn't delete ns: "init-container-1437": an error on the server ("Internal Server Error: \"/api/v1/namespaces/init-container-1437\": the server has received too many requests and has asked us to try again later") has prevented the request from succeeding (delete namespaces init-container-1437) (&errors.StatusError{ErrStatus:v1.Status{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ListMeta:v1.ListMeta{SelfLink:"", ResourceVersion:"", Continue:"", RemainingItemCount:(*int64)(nil)}, Status:"Failure", Message:"an error on the server (\"Internal Server Error: \\\"/api/v1/namespaces/init-container-1437\\\": the server has received too many requests and has asked us to try again later\") has prevented the request from succeeding (delete namespaces init-container-1437)", Reason:"InternalError", Details:(*v1.StatusDetails)(0xc00173d0e0), Code:500}})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:335
				
stdout/stderr: junit_skew12.xml


Kubernetes e2e suite [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] 18s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[k8s\.io\]\sInitContainer\s\[NodeConformance\]\sshould\snot\sstart\sapp\scontainers\sand\sfail\sthe\spod\sif\sinit\scontainers\sfail\son\sa\sRestartNever\spod\s\[Conformance\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
Unexpected error:
    <*errors.StatusError | 0xc0035583c0>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {
                SelfLink: "",
                ResourceVersion: "",
                Continue: "",
                RemainingItemCount: nil,
            },
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/namespaces/init-container-9525/serviceaccounts?fieldSelector=metadata.name%3Ddefault&watch=true\\\": the server has received too many requests and has asked us to try again later\") has prevented the request from succeeding (get serviceaccounts)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceaccounts",
                UID: "",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/namespaces/init-container-9525/serviceaccounts?fieldSelector=metadata.name%3Ddefault&watch=true\": the server has received too many requests and has asked us to try again later",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/namespaces/init-container-9525/serviceaccounts?fieldSelector=metadata.name%3Ddefault&watch=true\": the server has received too many requests and has asked us to try again later") has prevented the request from succeeding (get serviceaccounts)
occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:217
				
stdout/stderr: junit_skew14.xml


Kubernetes e2e suite [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance] 48s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[k8s\.io\]\sInitContainer\s\[NodeConformance\]\sshould\snot\sstart\sapp\scontainers\sif\sinit\scontainers\sfail\son\sa\sRestartAlways\spod\s\[Conformance\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  6 08:52:56.652: Couldn't delete ns: "init-container-3095": an error on the server ("Internal Server Error: \"/api/v1/namespaces/init-container-3095\": the server has received too many requests and has asked us to try again later") has prevented the request from succeeding (delete namespaces init-container-3095) (&errors.StatusError{ErrStatus:v1.Status{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ListMeta:v1.ListMeta{SelfLink:"", ResourceVersion:"", Continue:"", RemainingItemCount:(*int64)(nil)}, Status:"Failure", Message:"an error on the server (\"Internal Server Error: \\\"/api/v1/namespaces/init-container-3095\\\": the server has received too many requests and has asked us to try again later\") has prevented the request from succeeding (delete namespaces init-container-3095)", Reason:"InternalError", Details:(*v1.StatusDetails)(0xc00244f800), Code:500}})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:335
				
stdout/stderr: junit_skew02.xml


Kubernetes e2e suite [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance] 48s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[k8s\.io\]\sKubelet\swhen\sscheduling\sa\sbusybox\scommand\sthat\salways\sfails\sin\sa\spod\sshould\shave\san\sterminated\sreason\s\[NodeConformance\]\s\[Conformance\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
Unexpected error:
    <*errors.StatusError | 0xc001871220>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {
                SelfLink: "",
                ResourceVersion: "",
                Continue: "",
                RemainingItemCount: nil,
            },
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/namespaces/kubelet-test-2902/serviceaccounts?fieldSelector=metadata.name%3Ddefault&watch=true\\\": the server has received too many requests and has asked us to try again later\") has prevented the request from succeeding (get serviceaccounts)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceaccounts",
                UID: "",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/namespaces/kubelet-test-2902/serviceaccounts?fieldSelector=metadata.name%3Ddefault&watch=true\": the server has received too many requests and has asked us to try again later",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/namespaces/kubelet-test-2902/serviceaccounts?fieldSelector=metadata.name%3Ddefault&watch=true\": the server has received too many requests and has asked us to try again later") has prevented the request from succeeding (get serviceaccounts)
occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:217
				
stdout/stderr: junit_skew17.xml


Kubernetes e2e suite [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] 21s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[k8s\.io\]\sKubelet\swhen\sscheduling\sa\sread\sonly\sbusybox\scontainer\sshould\snot\swrite\sto\sroot\sfilesystem\s\[LinuxOnly\]\s\[NodeConformance\]\s\[Conformance\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
Unexpected error:
    <*errors.StatusError | 0xc0014feaa0>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {
                SelfLink: "",
                ResourceVersion: "",
                Continue: "",
                RemainingItemCount: nil,
            },
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/namespaces/kubelet-test-97/serviceaccounts?fieldSelector=metadata.name%3Ddefault&watch=true\\\": the server has received too many requests and has asked us to try again later\") has prevented the request from succeeding (get serviceaccounts)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceaccounts",
                UID: "",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/namespaces/kubelet-test-97/serviceaccounts?fieldSelector=metadata.name%3Ddefault&watch=true\": the server has received too many requests and has asked us to try again later",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/namespaces/kubelet-test-97/serviceaccounts?fieldSelector=metadata.name%3Ddefault&watch=true\": the server has received too many requests and has asked us to try again later") has prevented the request from succeeding (get serviceaccounts)
occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:217
				
stdout/stderr: junit_skew08.xml


Kubernetes e2e suite [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] 1m27s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[k8s\.io\]\sKubeletManagedEtcHosts\sshould\stest\skubelet\smanaged\s\/etc\/hosts\sfile\s\[LinuxOnly\]\s\[NodeConformance\]\s\[Conformance\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
failed to execute command in pod test-pod, container busybox-1: unable to upgrade connection: Internal Server Error: "/api/v1/namespaces/e2e-kubelet-etc-hosts-6802/pods/test-pod/exec?command=cat&command=%2Fetc%2Fhosts&container=busybox-1&container=busybox-1&stderr=true&stdout=true": the server has received too many requests and has asked us to try again later
Unexpected error:
    <*errors.errorString | 0xc002821cc0>: {
        s: "unable to upgrade connection: Internal Server Error: \"/api/v1/namespaces/e2e-kubelet-etc-hosts-6802/pods/test-pod/exec?command=cat&command=%2Fetc%2Fhosts&container=busybox-1&container=busybox-1&stderr=true&stdout=true\": the server has received too many requests and has asked us to try again later",
    }
    unable to upgrade connection: Internal Server Error: "/api/v1/namespaces/e2e-kubelet-etc-hosts-6802/pods/test-pod/exec?command=cat&command=%2Fetc%2Fhosts&container=busybox-1&container=busybox-1&stderr=true&stdout=true": the server has received too many requests and has asked us to try again later
occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/exec_util.go:102
				
stdout/stderr: junit_skew02.xml


Kubernetes e2e suite [k8s.io] NodeLease when the NodeLease feature is enabled the kubelet should create and update a lease in the kube-node-lease namespace 12s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[k8s\.io\]\sNodeLease\swhen\sthe\sNodeLease\sfeature\sis\senabled\sthe\skubelet\sshould\screate\sand\supdate\sa\slease\sin\sthe\skube\-node\-lease\snamespace$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
Unexpected error:
    <*errors.withStack | 0xc002308ae0>: {
        error: {
            cause: {
                ErrStatus: {
                    TypeMeta: {Kind: "", APIVersion: ""},
                    ListMeta: {
                        SelfLink: "",
                        ResourceVersion: "",
                        Continue: "",
                        RemainingItemCount: nil,
                    },
                    Status: "Failure",
                    Message: "an error on the server (\"Internal Server Error: \\\"/apis/rbac.authorization.k8s.io/v1beta1/namespaces/node-lease-test-6268/rolebindings\\\": the server has received too many requests and has asked us to try again later\") has prevented the request from succeeding (post rolebindings.rbac.authorization.k8s.io)",
                    Reason: "InternalError",
                    Details: {
                        Name: "",
                        Group: "rbac.authorization.k8s.io",
                        Kind: "rolebindings",
                        UID: "",
                        Causes: [
                            {
                                Type: "UnexpectedServerResponse",
                                Message: "Internal Server Error: \"/apis/rbac.authorization.k8s.io/v1beta1/namespaces/node-lease-test-6268/rolebindings\": the server has received too many requests and has asked us to try again later",
                                Field: "",
                            },
                        ],
                        RetryAfterSeconds: 0,
                    },
                    Code: 500,
                },
            },
            msg: "binding ClusterRole/e2e-test-privileged-psp into \"node-lease-test-6268\" for [{ServiceAccount  default node-lease-test-6268}]",
        },
        stack: [0x15a53ee, 0x15eeda1, 0x15eed28, 0x15c416d, 0x15c297b, 0x7ac6dc, 0x7ac34f, 0x7ac774, 0x7b2441, 0x7b2064, 0x7b7acf, 0x7b75e4, 0x7b6e27, 0x7b948e, 0x7bbfb7, 0x7bbcfd, 0x36fd267, 0x370025b, 0x507960, 0x4607c1],
    }
    binding ClusterRole/e2e-test-privileged-psp into "node-lease-test-6268" for [{ServiceAccount  default node-lease-test-6268}]: an error on the server ("Internal Server Error: \"/apis/rbac.authorization.k8s.io/v1beta1/namespaces/node-lease-test-6268/rolebindings\": the server has received too many requests and has asked us to try again later") has prevented the request from succeeding (post rolebindings.rbac.authorization.k8s.io)
occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/psp_util.go:151
				
stdout/stderr: junit_skew14.xml


Kubernetes e2e suite [k8s.io] NodeLease when the NodeLease feature is enabled the kubelet should report node status infrequently 18s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[k8s\.io\]\sNodeLease\swhen\sthe\sNodeLease\sfeature\sis\senabled\sthe\skubelet\sshould\sreport\snode\sstatus\sinfrequently$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  6 08:42:26.068: Couldn't delete ns: "node-lease-test-207": an error on the server ("Internal Server Error: \"/api/v1/namespaces/node-lease-test-207\": the server has received too many requests and has asked us to try again later") has prevented the request from succeeding (delete namespaces node-lease-test-207) (&errors.StatusError{ErrStatus:v1.Status{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ListMeta:v1.ListMeta{SelfLink:"", ResourceVersion:"", Continue:"", RemainingItemCount:(*int64)(nil)}, Status:"Failure", Message:"an error on the server (\"Internal Server Error: \\\"/api/v1/namespaces/node-lease-test-207\\\": the server has received too many requests and has asked us to try again later\") has prevented the request from succeeding (delete namespaces node-lease-test-207)", Reason:"InternalError", Details:(*v1.StatusDetails)(0xc001f99c20), Code:500}})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:335
				
stdout/stderr: junit_skew24.xml


Kubernetes e2e suite [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] 27s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[k8s\.io\]\sPods\sshould\sallow\sactiveDeadlineSeconds\sto\sbe\supdated\s\[NodeConformance\]\s\[Conformance\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
Unexpected error:
    <*errors.StatusError | 0xc001f8e280>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {
                SelfLink: "",
                ResourceVersion: "",
                Continue: "",
                RemainingItemCount: nil,
            },
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/namespaces/pods-6427/serviceaccounts?fieldSelector=metadata.name%3Ddefault&watch=true\\\": the server has received too many requests and has asked us to try again later\") has prevented the request from succeeding (get serviceaccounts)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceaccounts",
                UID: "",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/namespaces/pods-6427/serviceaccounts?fieldSelector=metadata.name%3Ddefault&watch=true\": the server has received too many requests and has asked us to try again later",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/namespaces/pods-6427/serviceaccounts?fieldSelector=metadata.name%3Ddefault&watch=true\": the server has received too many requests and has asked us to try again later") has prevented the request from succeeding (get serviceaccounts)
occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:217
				
stdout/stderr: junit_skew18.xml


Kubernetes e2e suite [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance] 1m20s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[k8s\.io\]\sPods\sshould\sbe\ssubmitted\sand\sremoved\s\[NodeConformance\]\s\[Conformance\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
failed to query for pods
Unexpected error:
    <*errors.StatusError | 0xc0028e80a0>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {
                SelfLink: "",
                ResourceVersion: "",
                Continue: "",
                RemainingItemCount: nil,
            },
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/namespaces/pods-6005/pods?labelSelector=time%3D721681829\\\": the server has received too many requests and has asked us to try again later\") has prevented the request from succeeding (get pods)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "pods",
                UID: "",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/namespaces/pods-6005/pods?labelSelector=time%3D721681829\": the server has received too many requests and has asked us to try again later",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/namespaces/pods-6005/pods?labelSelector=time%3D721681829\": the server has received too many requests and has asked us to try again later") has prevented the request from succeeding (get pods)
occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:338
				
				Click to see stdout/stderr from junit_skew16.xml

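Nearly every failure on this page is the same symptom: the apiserver answered HTTP 500 (`Reason: "InternalError"`) wrapping the text "the server has received too many requests and has asked us to try again later", i.e. apiserver overload rather than a test-specific bug. A minimal stdlib-only sketch of how such a response could be classified as retryable; the helper name `isRetryableOverload` is hypothetical, not a client-go API:

```go
package main

import (
	"fmt"
	"strings"
)

// isRetryableOverload is a hypothetical classifier: it treats HTTP 429,
// and the HTTP 500 "too many requests" wrapping seen throughout this run,
// as transient overload that is safe to retry.
func isRetryableOverload(code int, message string) bool {
	if code == 429 { // 429 Too Many Requests is retryable by definition
		return true
	}
	return code == 500 &&
		strings.Contains(message, "the server has received too many requests")
}

func main() {
	overloaded := `an error on the server ("Internal Server Error: \"/api/v1/namespaces/pods-6005/pods\": the server has received too many requests and has asked us to try again later") has prevented the request from succeeding (get pods)`
	fmt.Println(isRetryableOverload(500, overloaded)) // true
	fmt.Println(isRetryableOverload(500, "etcdserver: request timed out")) // false
}
```

In practice client-go callers would inspect the typed `*errors.StatusError` shown in these dumps rather than the rendered string, but the classification decision is the same.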


Kubernetes e2e suite [k8s.io] Pods should be updated [NodeConformance] [Conformance] 1m9s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[k8s\.io\]\sPods\sshould\sbe\supdated\s\[NodeConformance\]\s\[Conformance\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Unexpected error:
    <*errors.errorString | 0xc004ddc400>: {
        s: "failed to update pod \"pod-update-04fde7d4-6e27-4144-9b73-b84162621645\": an error on the server (\"Internal Server Error: \\\"/api/v1/namespaces/pods-1757/pods/pod-update-04fde7d4-6e27-4144-9b73-b84162621645\\\": the server has received too many requests and has asked us to try again later\") has prevented the request from succeeding (put pods pod-update-04fde7d4-6e27-4144-9b73-b84162621645)",
    }
    failed to update pod "pod-update-04fde7d4-6e27-4144-9b73-b84162621645": an error on the server ("Internal Server Error: \"/api/v1/namespaces/pods-1757/pods/pod-update-04fde7d4-6e27-4144-9b73-b84162621645\": the server has received too many requests and has asked us to try again later") has prevented the request from succeeding (put pods pod-update-04fde7d4-6e27-4144-9b73-b84162621645)
occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:145
				
				Click to see stdout/stderr from junit_skew03.xml



Kubernetes e2e suite [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance] 23s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[k8s\.io\]\sPods\sshould\scontain\senvironment\svariables\sfor\sservices\s\[NodeConformance\]\s\[Conformance\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
Unexpected error:
    <*errors.StatusError | 0xc000369a40>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {
                SelfLink: "",
                ResourceVersion: "",
                Continue: "",
                RemainingItemCount: nil,
            },
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/namespaces/pods-492/serviceaccounts?fieldSelector=metadata.name%3Ddefault&watch=true\\\": the server has received too many requests and has asked us to try again later\") has prevented the request from succeeding (get serviceaccounts)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceaccounts",
                UID: "",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/namespaces/pods-492/serviceaccounts?fieldSelector=metadata.name%3Ddefault&watch=true\": the server has received too many requests and has asked us to try again later",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/namespaces/pods-492/serviceaccounts?fieldSelector=metadata.name%3Ddefault&watch=true\": the server has received too many requests and has asked us to try again later") has prevented the request from succeeding (get serviceaccounts)
occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:217
				
				Click to see stdout/stderr from junit_skew04.xml



Kubernetes e2e suite [k8s.io] Pods should get a host IP [NodeConformance] [Conformance] 20s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[k8s\.io\]\sPods\sshould\sget\sa\shost\sIP\s\[NodeConformance\]\s\[Conformance\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  6 08:56:21.321: Couldn't delete ns: "pods-9327": an error on the server ("Internal Server Error: \"/api/v1/namespaces/pods-9327\": the server has received too many requests and has asked us to try again later") has prevented the request from succeeding (delete namespaces pods-9327) (&errors.StatusError{ErrStatus:v1.Status{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ListMeta:v1.ListMeta{SelfLink:"", ResourceVersion:"", Continue:"", RemainingItemCount:(*int64)(nil)}, Status:"Failure", Message:"an error on the server (\"Internal Server Error: \\\"/api/v1/namespaces/pods-9327\\\": the server has received too many requests and has asked us to try again later\") has prevented the request from succeeding (delete namespaces pods-9327)", Reason:"InternalError", Details:(*v1.StatusDetails)(0xc002d6e180), Code:500}})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:335
				
				Click to see stdout/stderr from junit_skew07.xml



Kubernetes e2e suite [k8s.io] Pods should support pod readiness gates [NodeFeature:PodReadinessGate] 17s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[k8s\.io\]\sPods\sshould\ssupport\spod\sreadiness\sgates\s\[NodeFeature\:PodReadinessGate\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
Unexpected error:
    <*errors.StatusError | 0xc003488320>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {
                SelfLink: "",
                ResourceVersion: "",
                Continue: "",
                RemainingItemCount: nil,
            },
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/namespaces/pods-6174/serviceaccounts?fieldSelector=metadata.name%3Ddefault&watch=true\\\": the server has received too many requests and has asked us to try again later\") has prevented the request from succeeding (get serviceaccounts)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceaccounts",
                UID: "",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/namespaces/pods-6174/serviceaccounts?fieldSelector=metadata.name%3Ddefault&watch=true\": the server has received too many requests and has asked us to try again later",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/namespaces/pods-6174/serviceaccounts?fieldSelector=metadata.name%3Ddefault&watch=true\": the server has received too many requests and has asked us to try again later") has prevented the request from succeeding (get serviceaccounts)
occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:217
				
				Click to see stdout/stderr from junit_skew01.xml



Kubernetes e2e suite [k8s.io] PrivilegedPod [NodeConformance] should enable privileged commands [LinuxOnly] 11s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[k8s\.io\]\sPrivilegedPod\s\[NodeConformance\]\sshould\senable\sprivileged\scommands\s\[LinuxOnly\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
Unexpected error:
    <*errors.StatusError | 0xc002b60960>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {
                SelfLink: "",
                ResourceVersion: "",
                Continue: "",
                RemainingItemCount: nil,
            },
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/namespaces/e2e-privileged-pod-2936/serviceaccounts?fieldSelector=metadata.name%3Ddefault&watch=true\\\": the server has received too many requests and has asked us to try again later\") has prevented the request from succeeding (get serviceaccounts)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceaccounts",
                UID: "",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/namespaces/e2e-privileged-pod-2936/serviceaccounts?fieldSelector=metadata.name%3Ddefault&watch=true\": the server has received too many requests and has asked us to try again later",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/namespaces/e2e-privileged-pod-2936/serviceaccounts?fieldSelector=metadata.name%3Ddefault&watch=true\": the server has received too many requests and has asked us to try again later") has prevented the request from succeeding (get serviceaccounts)
occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:217
				
				Click to see stdout/stderr from junit_skew01.xml



Kubernetes e2e suite [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] 3m1s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[k8s\.io\]\sProbing\scontainer\sshould\s\*not\*\sbe\srestarted\swith\sa\s\/healthz\shttp\sliveness\sprobe\s\[NodeConformance\]\s\[Conformance\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
getting pod 
Unexpected error:
    <*errors.StatusError | 0xc0009d4460>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {
                SelfLink: "",
                ResourceVersion: "",
                Continue: "",
                RemainingItemCount: nil,
            },
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/namespaces/container-probe-9002/pods/test-webserver-145297b3-3324-4f25-85fe-9acb80ef4ee6\\\": the server has received too many requests and has asked us to try again later\") has prevented the request from succeeding (get pods test-webserver-145297b3-3324-4f25-85fe-9acb80ef4ee6)",
            Reason: "InternalError",
            Details: {
                Name: "test-webserver-145297b3-3324-4f25-85fe-9acb80ef4ee6",
                Group: "",
                Kind: "pods",
                UID: "",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/namespaces/container-probe-9002/pods/test-webserver-145297b3-3324-4f25-85fe-9acb80ef4ee6\": the server has received too many requests and has asked us to try again later",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/namespaces/container-probe-9002/pods/test-webserver-145297b3-3324-4f25-85fe-9acb80ef4ee6\": the server has received too many requests and has asked us to try again later") has prevented the request from succeeding (get pods test-webserver-145297b3-3324-4f25-85fe-9acb80ef4ee6)
occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:439
				
				Click to see stdout/stderr from junit_skew08.xml



Kubernetes e2e suite [k8s.io] Probing container should *not* be restarted with a non-local redirect http liveness probe 25s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[k8s\.io\]\sProbing\scontainer\sshould\s\*not\*\sbe\srestarted\swith\sa\snon\-local\sredirect\shttp\sliveness\sprobe$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:246
starting pod liveness-b2fbf3fe-fb63-4c54-a35a-ba3367c9c2e2 in namespace container-probe-5020
Unexpected error:
    <*errors.StatusError | 0xc001c450e0>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {
                SelfLink: "",
                ResourceVersion: "",
                Continue: "",
                RemainingItemCount: nil,
            },
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/namespaces/container-probe-5020/pods/liveness-b2fbf3fe-fb63-4c54-a35a-ba3367c9c2e2\\\": the server has received too many requests and has asked us to try again later\") has prevented the request from succeeding (get pods liveness-b2fbf3fe-fb63-4c54-a35a-ba3367c9c2e2)",
            Reason: "InternalError",
            Details: {
                Name: "liveness-b2fbf3fe-fb63-4c54-a35a-ba3367c9c2e2",
                Group: "",
                Kind: "pods",
                UID: "",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/namespaces/container-probe-5020/pods/liveness-b2fbf3fe-fb63-4c54-a35a-ba3367c9c2e2\": the server has received too many requests and has asked us to try again later",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/namespaces/container-probe-5020/pods/liveness-b2fbf3fe-fb63-4c54-a35a-ba3367c9c2e2\": the server has received too many requests and has asked us to try again later") has prevented the request from succeeding (get pods liveness-b2fbf3fe-fb63-4c54-a35a-ba3367c9c2e2)
occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:422
				
				Click to see stdout/stderr from junit_skew21.xml



Kubernetes e2e suite [k8s.io] Probing container should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] 30s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[k8s\.io\]\sProbing\scontainer\sshould\s\*not\*\sbe\srestarted\swith\sa\stcp\:8080\sliveness\sprobe\s\[NodeConformance\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
Unexpected error:
    <*errors.StatusError | 0xc0020b8320>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {
                SelfLink: "",
                ResourceVersion: "",
                Continue: "",
                RemainingItemCount: nil,
            },
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/namespaces/container-probe-3304/serviceaccounts?fieldSelector=metadata.name%3Ddefault&watch=true\\\": the server has received too many requests and has asked us to try again later\") has prevented the request from succeeding (get serviceaccounts)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceaccounts",
                UID: "",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/namespaces/container-probe-3304/serviceaccounts?fieldSelector=metadata.name%3Ddefault&watch=true\": the server has received too many requests and has asked us to try again later",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/namespaces/container-probe-3304/serviceaccounts?fieldSelector=metadata.name%3Ddefault&watch=true\": the server has received too many requests and has asked us to try again later") has prevented the request from succeeding (get serviceaccounts)
occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:217
				
				Click to see stdout/stderr from junit_skew05.xml



Kubernetes e2e suite [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] 22s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[k8s\.io\]\sProbing\scontainer\sshould\sbe\srestarted\swith\sa\s\/healthz\shttp\sliveness\sprobe\s\[NodeConformance\]\s\[Conformance\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
Unexpected error:
    <*errors.StatusError | 0xc001397f40>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {
                SelfLink: "",
                ResourceVersion: "",
                Continue: "",
                RemainingItemCount: nil,
            },
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/namespaces/container-probe-3863/serviceaccounts?fieldSelector=metadata.name%3Ddefault&watch=true\\\": the server has received too many requests and has asked us to try again later\") has prevented the request from succeeding (get serviceaccounts)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceaccounts",
                UID: "",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/namespaces/container-probe-3863/serviceaccounts?fieldSelector=metadata.name%3Ddefault&watch=true\": the server has received too many requests and has asked us to try again later",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/namespaces/container-probe-3863/serviceaccounts?fieldSelector=metadata.name%3Ddefault&watch=true\": the server has received too many requests and has asked us to try again later") has prevented the request from succeeding (get serviceaccounts)
occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:217
				
				Click to see stdout/stderr from junit_skew10.xml



Kubernetes e2e suite [k8s.io] Probing container should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] 11s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[k8s\.io\]\sProbing\scontainer\sshould\sbe\srestarted\swith\sa\sexec\s\"cat\s\/tmp\/health\"\sliveness\sprobe\s\[NodeConformance\]\s\[Conformance\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
Unexpected error:
    <*errors.withStack | 0xc00171efc0>: {
        error: {
            cause: {
                ErrStatus: {
                    TypeMeta: {Kind: "", APIVersion: ""},
                    ListMeta: {
                        SelfLink: "",
                        ResourceVersion: "",
                        Continue: "",
                        RemainingItemCount: nil,
                    },
                    Status: "Failure",
                    Message: "an error on the server (\"Internal Server Error: \\\"/apis/rbac.authorization.k8s.io/v1beta1/namespaces/container-probe-4883/rolebindings\\\": the server has received too many requests and has asked us to try again later\") has prevented the request from succeeding (post rolebindings.rbac.authorization.k8s.io)",
                    Reason: "InternalError",
                    Details: {
                        Name: "",
                        Group: "rbac.authorization.k8s.io",
                        Kind: "rolebindings",
                        UID: "",
                        Causes: [
                            {
                                Type: "UnexpectedServerResponse",
                                Message: "Internal Server Error: \"/apis/rbac.authorization.k8s.io/v1beta1/namespaces/container-probe-4883/rolebindings\": the server has received too many requests and has asked us to try again later",
                                Field: "",
                            },
                        ],
                        RetryAfterSeconds: 0,
                    },
                    Code: 500,
                },
            },
            msg: "binding ClusterRole/e2e-test-privileged-psp into \"container-probe-4883\" for [{ServiceAccount  default container-probe-4883}]",
        },
        stack: [0x15a53ee, 0x15eeda1, 0x15eed28, 0x15c416d, 0x15c297b, 0x7ac6dc, 0x7ac34f, 0x7ac774, 0x7b2441, 0x7b2064, 0x7b7acf, 0x7b75e4, 0x7b6e27, 0x7b948e, 0x7bbfb7, 0x7bbcfd, 0x36fd267, 0x370025b, 0x507960, 0x4607c1],
    }
    binding ClusterRole/e2e-test-privileged-psp into "container-probe-4883" for [{ServiceAccount  default container-probe-4883}]: an error on the server ("Internal Server Error: \"/apis/rbac.authorization.k8s.io/v1beta1/namespaces/container-probe-4883/rolebindings\": the server has received too many requests and has asked us to try again later") has prevented the request from succeeding (post rolebindings.rbac.authorization.k8s.io)
occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/psp_util.go:151
				
				Click to see stdout/stderr from junit_skew06.xml



Kubernetes e2e suite [k8s.io] Probing container should be restarted with a local redirect http liveness probe 40s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[k8s\.io\]\sProbing\scontainer\sshould\sbe\srestarted\swith\sa\slocal\sredirect\shttp\sliveness\sprobe$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  6 08:56:29.999: Couldn't delete ns: "container-probe-4154": an error on the server ("Internal Server Error: \"/api/v1/namespaces/container-probe-4154\": the server has received too many requests and has asked us to try again later") has prevented the request from succeeding (delete namespaces container-probe-4154) (&errors.StatusError{ErrStatus:v1.Status{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ListMeta:v1.ListMeta{SelfLink:"", ResourceVersion:"", Continue:"", RemainingItemCount:(*int64)(nil)}, Status:"Failure", Message:"an error on the server (\"Internal Server Error: \\\"/api/v1/namespaces/container-probe-4154\\\": the server has received too many requests and has asked us to try again later\") has prevented the request from succeeding (delete namespaces container-probe-4154)", Reason:"InternalError", Details:(*v1.StatusDetails)(0xc001e9ea20), Code:500}})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:335
				
				Click to see stdout/stderr from junit_skew22.xml



Kubernetes e2e suite [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance] 53s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[k8s\.io\]\sProbing\scontainer\sshould\shave\smonotonically\sincreasing\srestart\scount\s\[NodeConformance\]\s\[Conformance\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
getting pod 
Unexpected error:
    <*errors.StatusError | 0xc001be3220>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {
                SelfLink: "",
                ResourceVersion: "",
                Continue: "",
                RemainingItemCount: nil,
            },
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/namespaces/container-probe-1350/pods/liveness-aeaa9e76-07ef-490b-9f87-a37dd8a1f900\\\": the server has received too many requests and has asked us to try again later\") has prevented the request from succeeding (get pods liveness-aeaa9e76-07ef-490b-9f87-a37dd8a1f900)",
            Reason: "InternalError",
            Details: {
                Name: "liveness-aeaa9e76-07ef-490b-9f87-a37dd8a1f900",
                Group: "",
                Kind: "pods",
                UID: "",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/namespaces/container-probe-1350/pods/liveness-aeaa9e76-07ef-490b-9f87-a37dd8a1f900\": the server has received too many requests and has asked us to try again later",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/namespaces/container-probe-1350/pods/liveness-aeaa9e76-07ef-490b-9f87-a37dd8a1f900\": the server has received too many requests and has asked us to try again later") has prevented the request from succeeding (get pods liveness-aeaa9e76-07ef-490b-9f87-a37dd8a1f900)
occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:439
				
				Click to see stdout/stderr from junit_skew24.xml



Kubernetes e2e suite [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] 30s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[k8s\.io\]\sProbing\scontainer\swith\sreadiness\sprobe\sshould\snot\sbe\sready\sbefore\sinitial\sdelay\sand\snever\srestart\s\[NodeConformance\]\s\[Conformance\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
Unexpected error:
    <*errors.StatusError | 0xc003651c20>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {
                SelfLink: "",
                ResourceVersion: "",
                Continue: "",
                RemainingItemCount: nil,
            },
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/namespaces/container-probe-7940/serviceaccounts?fieldSelector=metadata.name%3Ddefault&watch=true\\\": the server has received too many requests and has asked us to try again later\") has prevented the request from succeeding (get serviceaccounts)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceaccounts",
                UID: "",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/namespaces/container-probe-7940/serviceaccounts?fieldSelector=metadata.name%3Ddefault&watch=true\": the server has received too many requests and has asked us to try again later",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/namespaces/container-probe-7940/serviceaccounts?fieldSelector=metadata.name%3Ddefault&watch=true\": the server has received too many requests and has asked us to try again later") has prevented the request from succeeding (get serviceaccounts)
occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:217
				
				Click to see stdout/stderr from junit_skew20.xml



Kubernetes e2e suite [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] 1m29s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[k8s\.io\]\sProbing\scontainer\swith\sreadiness\sprobe\sthat\sfails\sshould\snever\sbe\sready\sand\snever\srestart\s\[NodeConformance\]\s\[Conformance\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  6 08:49:35.474: Couldn't delete ns: "container-probe-3157": an error on the server ("Internal Server Error: \"/apis/cloud.google.com/v1beta1/namespaces/container-probe-3157/backendconfigs\": the server has received too many requests and has asked us to try again later") has prevented the request from succeeding (&errors.StatusError{ErrStatus:v1.Status{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ListMeta:v1.ListMeta{SelfLink:"", ResourceVersion:"", Continue:"", RemainingItemCount:(*int64)(nil)}, Status:"Failure", Message:"an error on the server (\"Internal Server Error: \\\"/apis/cloud.google.com/v1beta1/namespaces/container-probe-3157/backendconfigs\\\": the server has received too many requests and has asked us to try again later\") has prevented the request from succeeding", Reason:"InternalError", Details:(*v1.StatusDetails)(0xc0026f48a0), Code:500}})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:335
				
				Click to see stdout/stderr from junit_skew18.xml



Kubernetes e2e suite [k8s.io] Security Context When creating a container with runAsNonRoot should not run with an explicit root user ID 1m4s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[k8s\.io\]\sSecurity\sContext\sWhen\screating\sa\scontainer\swith\srunAsNonRoot\sshould\snot\srun\swith\san\sexplicit\sroot\suser\sID$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  6 09:01:37.168: Couldn't delete ns: "security-context-test-7623": an error on the server ("Internal Server Error: \"/apis/apps/v1/namespaces/security-context-test-7623/deployments\": the server has received too many requests and has asked us to try again later") has prevented the request from succeeding (&errors.StatusError{ErrStatus:v1.Status{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ListMeta:v1.ListMeta{SelfLink:"", ResourceVersion:"", Continue:"", RemainingItemCount:(*int64)(nil)}, Status:"Failure", Message:"an error on the server (\"Internal Server Error: \\\"/apis/apps/v1/namespaces/security-context-test-7623/deployments\\\": the server has received too many requests and has asked us to try again later\") has prevented the request from succeeding", Reason:"InternalError", Details:(*v1.StatusDetails)(0xc002062120), Code:500}})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:335
				
Click to see stdout/stderr from junit_skew18.xml



Kubernetes e2e suite [k8s.io] Security Context When creating a container with runAsNonRoot should not run without a specified user ID 42s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[k8s\.io\]\sSecurity\sContext\sWhen\screating\sa\scontainer\swith\srunAsNonRoot\sshould\snot\srun\swithout\sa\sspecified\suser\sID$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  6 08:45:29.823: Couldn't delete ns: "security-context-test-6318": an error on the server ("Internal Server Error: \"/apis/extensions/v1beta1/namespaces/security-context-test-6318/ingresses\": the server has received too many requests and has asked us to try again later") has prevented the request from succeeding (&errors.StatusError{ErrStatus:v1.Status{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ListMeta:v1.ListMeta{SelfLink:"", ResourceVersion:"", Continue:"", RemainingItemCount:(*int64)(nil)}, Status:"Failure", Message:"an error on the server (\"Internal Server Error: \\\"/apis/extensions/v1beta1/namespaces/security-context-test-6318/ingresses\\\": the server has received too many requests and has asked us to try again later\") has prevented the request from succeeding", Reason:"InternalError", Details:(*v1.StatusDetails)(0xc002537260), Code:500}})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:335
				
Click to see stdout/stderr from junit_skew18.xml



Kubernetes e2e suite [k8s.io] Security Context When creating a container with runAsNonRoot should run with an explicit non-root user ID 35s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[k8s\.io\]\sSecurity\sContext\sWhen\screating\sa\scontainer\swith\srunAsNonRoot\sshould\srun\swith\san\sexplicit\snon\-root\suser\sID$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  6 08:43:20.791: Couldn't delete ns: "security-context-test-1480": an error on the server ("Internal Server Error: \"/apis/apps/v1/namespaces/security-context-test-1480/replicasets\": the server has received too many requests and has asked us to try again later") has prevented the request from succeeding (&errors.StatusError{ErrStatus:v1.Status{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ListMeta:v1.ListMeta{SelfLink:"", ResourceVersion:"", Continue:"", RemainingItemCount:(*int64)(nil)}, Status:"Failure", Message:"an error on the server (\"Internal Server Error: \\\"/apis/apps/v1/namespaces/security-context-test-1480/replicasets\\\": the server has received too many requests and has asked us to try again later\") has prevented the request from succeeding", Reason:"InternalError", Details:(*v1.StatusDetails)(0xc0033da7e0), Code:500}})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:335
				
Click to see stdout/stderr from junit_skew16.xml



Kubernetes e2e suite [k8s.io] Security Context When creating a container with runAsNonRoot should run with an image specified user ID 33s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[k8s\.io\]\sSecurity\sContext\sWhen\screating\sa\scontainer\swith\srunAsNonRoot\sshould\srun\swith\san\simage\sspecified\suser\sID$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  6 08:58:27.594: Couldn't delete ns: "security-context-test-7235": an error on the server ("Internal Server Error: \"/api/v1/namespaces/security-context-test-7235/replicationcontrollers\": the server has received too many requests and has asked us to try again later") has prevented the request from succeeding (&errors.StatusError{ErrStatus:v1.Status{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ListMeta:v1.ListMeta{SelfLink:"", ResourceVersion:"", Continue:"", RemainingItemCount:(*int64)(nil)}, Status:"Failure", Message:"an error on the server (\"Internal Server Error: \\\"/api/v1/namespaces/security-context-test-7235/replicationcontrollers\\\": the server has received too many requests and has asked us to try again later\") has prevented the request from succeeding", Reason:"InternalError", Details:(*v1.StatusDetails)(0xc00294b2c0), Code:500}})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:335
				
Click to see stdout/stderr from junit_skew14.xml



Kubernetes e2e suite [k8s.io] Security Context When creating a container with runAsUser should run the container with uid 0 [LinuxOnly] [NodeConformance] 16s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[k8s\.io\]\sSecurity\sContext\sWhen\screating\sa\scontainer\swith\srunAsUser\sshould\srun\sthe\scontainer\swith\suid\s0\s\[LinuxOnly\]\s\[NodeConformance\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  6 08:45:15.454: Couldn't delete ns: "security-context-test-3241": an error on the server ("Internal Server Error: \"/apis/policy/v1beta1/namespaces/security-context-test-3241/poddisruptionbudgets\": the server has received too many requests and has asked us to try again later") has prevented the request from succeeding (&errors.StatusError{ErrStatus:v1.Status{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ListMeta:v1.ListMeta{SelfLink:"", ResourceVersion:"", Continue:"", RemainingItemCount:(*int64)(nil)}, Status:"Failure", Message:"an error on the server (\"Internal Server Error: \\\"/apis/policy/v1beta1/namespaces/security-context-test-3241/poddisruptionbudgets\\\": the server has received too many requests and has asked us to try again later\") has prevented the request from succeeding", Reason:"InternalError", Details:(*v1.StatusDetails)(0xc0019c66c0), Code:500}})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:335
				
Click to see stdout/stderr from junit_skew21.xml



Kubernetes e2e suite [k8s.io] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] 16s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[k8s\.io\]\sSecurity\sContext\sWhen\screating\sa\scontainer\swith\srunAsUser\sshould\srun\sthe\scontainer\swith\suid\s65534\s\[LinuxOnly\]\s\[NodeConformance\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
Unexpected error:
    <*errors.withStack | 0xc00154ba40>: {
        error: {
            cause: {
                ErrStatus: {
                    TypeMeta: {Kind: "", APIVersion: ""},
                    ListMeta: {
                        SelfLink: "",
                        ResourceVersion: "",
                        Continue: "",
                        RemainingItemCount: nil,
                    },
                    Status: "Failure",
                    Message: "an error on the server (\"Internal Server Error: \\\"/apis/rbac.authorization.k8s.io/v1beta1/namespaces/security-context-test-7459/rolebindings\\\": the server has received too many requests and has asked us to try again later\") has prevented the request from succeeding (post rolebindings.rbac.authorization.k8s.io)",
                    Reason: "InternalError",
                    Details: {
                        Name: "",
                        Group: "rbac.authorization.k8s.io",
                        Kind: "rolebindings",
                        UID: "",
                        Causes: [
                            {
                                Type: "UnexpectedServerResponse",
                                Message: "Internal Server Error: \"/apis/rbac.authorization.k8s.io/v1beta1/namespaces/security-context-test-7459/rolebindings\": the server has received too many requests and has asked us to try again later",
                                Field: "",
                            },
                        ],
                        RetryAfterSeconds: 0,
                    },
                    Code: 500,
                },
            },
            msg: "binding ClusterRole/e2e-test-privileged-psp into \"security-context-test-7459\" for [{ServiceAccount  default security-context-test-7459}]",
        },
        stack: [0x15a53ee, 0x15eeda1, 0x15eed28, 0x15c416d, 0x15c297b, 0x7ac6dc, 0x7ac34f, 0x7ac774, 0x7b2441, 0x7b2064, 0x7b7acf, 0x7b75e4, 0x7b6e27, 0x7b948e, 0x7bbfb7, 0x7bbcfd, 0x36fd267, 0x370025b, 0x507960, 0x4607c1],
    }
    binding ClusterRole/e2e-test-privileged-psp into "security-context-test-7459" for [{ServiceAccount  default security-context-test-7459}]: an error on the server ("Internal Server Error: \"/apis/rbac.authorization.k8s.io/v1beta1/namespaces/security-context-test-7459/rolebindings\": the server has received too many requests and has asked us to try again later") has prevented the request from succeeding (post rolebindings.rbac.authorization.k8s.io)
occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/psp_util.go:151
				
Click to see stdout/stderr from junit_skew24.xml



Kubernetes e2e suite [k8s.io] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] 19s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[k8s\.io\]\sSecurity\sContext\sWhen\screating\sa\spod\swith\sprivileged\sshould\srun\sthe\scontainer\sas\sunprivileged\swhen\sfalse\s\[LinuxOnly\]\s\[NodeConformance\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  6 08:39:45.499: Couldn't delete ns: "security-context-test-874": an error on the server ("Internal Server Error: \"/apis/apps/v1/namespaces/security-context-test-874/replicasets\": the server has received too many requests and has asked us to try again later") has prevented the request from succeeding (&errors.StatusError{ErrStatus:v1.Status{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ListMeta:v1.ListMeta{SelfLink:"", ResourceVersion:"", Continue:"", RemainingItemCount:(*int64)(nil)}, Status:"Failure", Message:"an error on the server (\"Internal Server Error: \\\"/apis/apps/v1/namespaces/security-context-test-874/replicasets\\\": the server has received too many requests and has asked us to try again later\") has prevented the request from succeeding", Reason:"InternalError", Details:(*v1.StatusDetails)(0xc001ee4de0), Code:500}})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:335
				
Click to see stdout/stderr from junit_skew24.xml



Kubernetes e2e suite [k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with readonly rootfs when readOnlyRootFilesystem=true [LinuxOnly] [NodeConformance] 34s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[k8s\.io\]\sSecurity\sContext\sWhen\screating\sa\spod\swith\sreadOnlyRootFilesystem\sshould\srun\sthe\scontainer\swith\sreadonly\srootfs\swhen\sreadOnlyRootFilesystem\=true\s\[LinuxOnly\]\s\[NodeConformance\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
Unexpected error:
    <*errors.StatusError | 0xc000a48e60>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {
                SelfLink: "",
                ResourceVersion: "",
                Continue: "",
                RemainingItemCount: nil,
            },
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/namespaces/security-context-test-121/serviceaccounts?fieldSelector=metadata.name%3Ddefault&watch=true\\\": the server has received too many requests and has asked us to try again later\") has prevented the request from succeeding (get serviceaccounts)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceaccounts",
                UID: "",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/namespaces/security-context-test-121/serviceaccounts?fieldSelector=metadata.name%3Ddefault&watch=true\": the server has received too many requests and has asked us to try again later",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/namespaces/security-context-test-121/serviceaccounts?fieldSelector=metadata.name%3Ddefault&watch=true\": the server has received too many requests and has asked us to try again later") has prevented the request from succeeding (get serviceaccounts)
occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:217
				
Click to see stdout/stderr from junit_skew10.xml



Kubernetes e2e suite [k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] 35s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[k8s\.io\]\sSecurity\sContext\sWhen\screating\sa\spod\swith\sreadOnlyRootFilesystem\sshould\srun\sthe\scontainer\swith\swritable\srootfs\swhen\sreadOnlyRootFilesystem\=false\s\[NodeConformance\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  6 08:50:14.394: Couldn't delete ns: "security-context-test-8531": an error on the server ("Internal Server Error: \"/api/v1/namespaces/security-context-test-8531/limitranges\": the server has received too many requests and has asked us to try again later") has prevented the request from succeeding (&errors.StatusError{ErrStatus:v1.Status{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ListMeta:v1.ListMeta{SelfLink:"", ResourceVersion:"", Continue:"", RemainingItemCount:(*int64)(nil)}, Status:"Failure", Message:"an error on the server (\"Internal Server Error: \\\"/api/v1/namespaces/security-context-test-8531/limitranges\\\": the server has received too many requests and has asked us to try again later\") has prevented the request from succeeding", Reason:"InternalError", Details:(*v1.StatusDetails)(0xc001cb68a0), Code:500}})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:335
				
Click to see stdout/stderr from junit_skew06.xml



Kubernetes e2e suite [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should allow privilege escalation when not explicitly set and uid != 0 [LinuxOnly] [NodeConformance] 25s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[k8s\.io\]\sSecurity\sContext\swhen\screating\scontainers\swith\sAllowPrivilegeEscalation\sshould\sallow\sprivilege\sescalation\swhen\snot\sexplicitly\sset\sand\suid\s\!\=\s0\s\[LinuxOnly\]\s\[NodeConformance\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
Unexpected error:
    <*errors.StatusError | 0xc00269ad20>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {
                SelfLink: "",
                ResourceVersion: "",
                Continue: "",
                RemainingItemCount: nil,
            },
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/namespaces/security-context-test-8706/serviceaccounts?fieldSelector=metadata.name%3Ddefault&watch=true\\\": the server has received too many requests and has asked us to try again later\") has prevented the request from succeeding (get serviceaccounts)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceaccounts",
                UID: "",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/namespaces/security-context-test-8706/serviceaccounts?fieldSelector=metadata.name%3Ddefault&watch=true\": the server has received too many requests and has asked us to try again later",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/namespaces/security-context-test-8706/serviceaccounts?fieldSelector=metadata.name%3Ddefault&watch=true\": the server has received too many requests and has asked us to try again later") has prevented the request from succeeding (get serviceaccounts)
occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:217
				
Click to see stdout/stderr from junit_skew11.xml



Kubernetes e2e suite [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should allow privilege escalation when true [LinuxOnly] [NodeConformance] 9.81s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[k8s\.io\]\sSecurity\sContext\swhen\screating\scontainers\swith\sAllowPrivilegeEscalation\sshould\sallow\sprivilege\sescalation\swhen\strue\s\[LinuxOnly\]\s\[NodeConformance\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
Unexpected error:
    <*errors.withStack | 0xc003290780>: {
        error: {
            cause: {
                ErrStatus: {
                    TypeMeta: {Kind: "", APIVersion: ""},
                    ListMeta: {
                        SelfLink: "",
                        ResourceVersion: "",
                        Continue: "",
                        RemainingItemCount: nil,
                    },
                    Status: "Failure",
                    Message: "Unauthorized",
                    Reason: "Unauthorized",
                    Details: nil,
                    Code: 401,
                },
            },
            msg: "binding ClusterRole/e2e-test-privileged-psp into \"security-context-test-2400\" for [{ServiceAccount  default security-context-test-2400}]",
        },
        stack: [0x15a53ee, 0x15eeda1, 0x15eed28, 0x15c416d, 0x15c297b, 0x7ac6dc, 0x7ac34f, 0x7ac774, 0x7b2441, 0x7b2064, 0x7b7acf, 0x7b75e4, 0x7b6e27, 0x7b948e, 0x7bbfb7, 0x7bbcfd, 0x36fd267, 0x370025b, 0x507960, 0x4607c1],
    }
    binding ClusterRole/e2e-test-privileged-psp into "security-context-test-2400" for [{ServiceAccount  default security-context-test-2400}]: Unauthorized
occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/psp_util.go:151
				
Click to see stdout/stderr from junit_skew24.xml



Kubernetes e2e suite [k8s.io] Sysctls [NodeFeature:Sysctls] should not launch unsafe, but not explicitly enabled sysctls on the node 27s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[k8s\.io\]\sSysctls\s\[NodeFeature\:Sysctls\]\sshould\snot\slaunch\sunsafe\,\sbut\snot\sexplicitly\senabled\ssysctls\son\sthe\snode$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  6 08:45:17.459: Couldn't delete ns: "sysctl-1028": an error on the server ("Internal Server Error: \"/apis/extensions/v1beta1/namespaces/sysctl-1028/deployments\": the server has received too many requests and has asked us to try again later") has prevented the request from succeeding (&errors.StatusError{ErrStatus:v1.Status{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ListMeta:v1.ListMeta{SelfLink:"", ResourceVersion:"", Continue:"", RemainingItemCount:(*int64)(nil)}, Status:"Failure", Message:"an error on the server (\"Internal Server Error: \\\"/apis/extensions/v1beta1/namespaces/sysctl-1028/deployments\\\": the server has received too many requests and has asked us to try again later\") has prevented the request from succeeding", Reason:"InternalError", Details:(*v1.StatusDetails)(0xc0033bdd40), Code:500}})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:335
				
Click to see stdout/stderr from junit_skew01.xml



Kubernetes e2e suite [k8s.io] Sysctls [NodeFeature:Sysctls] should reject invalid sysctls 22s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[k8s\.io\]\sSysctls\s\[NodeFeature\:Sysctls\]\sshould\sreject\sinvalid\ssysctls$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  6 08:57:17.048: Couldn't delete ns: "sysctl-7470": an error on the server ("Internal Server Error: \"/apis/policy/v1beta1/namespaces/sysctl-7470/poddisruptionbudgets\": the server has received too many requests and has asked us to try again later") has prevented the request from succeeding (&errors.StatusError{ErrStatus:v1.Status{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ListMeta:v1.ListMeta{SelfLink:"", ResourceVersion:"", Continue:"", RemainingItemCount:(*int64)(nil)}, Status:"Failure", Message:"an error on the server (\"Internal Server Error: \\\"/apis/policy/v1beta1/namespaces/sysctl-7470/poddisruptionbudgets\\\": the server has received too many requests and has asked us to try again later\") has prevented the request from succeeding", Reason:"InternalError", Details:(*v1.StatusDetails)(0xc002737ec0), Code:500}})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:335
				
Click to see stdout/stderr from junit_skew18.xml



Kubernetes e2e suite [k8s.io] Sysctls [NodeFeature:Sysctls] should support sysctls 16s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[k8s\.io\]\sSysctls\s\[NodeFeature\:Sysctls\]\sshould\ssupport\ssysctls$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  6 08:42:16.636: Couldn't delete ns: "sysctl-265": an error on the server ("Internal Server Error: \"/apis/networking.gke.io/v1beta1/namespaces/sysctl-265/managedcertificates\": the server has received too many requests and has asked us to try again later") has prevented the request from succeeding (&errors.StatusError{ErrStatus:v1.Status{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ListMeta:v1.ListMeta{SelfLink:"", ResourceVersion:"", Continue:"", RemainingItemCount:(*int64)(nil)}, Status:"Failure", Message:"an error on the server (\"Internal Server Error: \\\"/apis/networking.gke.io/v1beta1/namespaces/sysctl-265/managedcertificates\\\": the server has received too many requests and has asked us to try again later\") has prevented the request from succeeding", Reason:"InternalError", Details:(*v1.StatusDetails)(0xc001e6c0c0), Code:500}})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:335
				
Click to see stdout/stderr from junit_skew01.xml



Kubernetes e2e suite [k8s.io] Sysctls [NodeFeature:Sysctls] should support unsafe sysctls which are actually whitelisted 44s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[k8s\.io\]\sSysctls\s\[NodeFeature\:Sysctls\]\sshould\ssupport\sunsafe\ssysctls\swhich\sare\sactually\swhitelisted$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  6 09:03:24.523: Couldn't delete ns: "sysctl-903": an error on the server ("Internal Server Error: \"/apis/networking.k8s.io/v1beta1/namespaces/sysctl-903/ingresses\": the server has received too many requests and has asked us to try again later") has prevented the request from succeeding (&errors.StatusError{ErrStatus:v1.Status{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ListMeta:v1.ListMeta{SelfLink:"", ResourceVersion:"", Continue:"", RemainingItemCount:(*int64)(nil)}, Status:"Failure", Message:"an error on the server (\"Internal Server Error: \\\"/apis/networking.k8s.io/v1beta1/namespaces/sysctl-903/ingresses\\\": the server has received too many requests and has asked us to try again later\") has prevented the request from succeeding", Reason:"InternalError", Details:(*v1.StatusDetails)(0xc00259c6c0), Code:500}})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:335
				
Click to see stdout/stderr from junit_skew12.xml



Kubernetes e2e suite [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance] 16s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[k8s\.io\]\sVariable\sExpansion\sshould\sallow\ssubstituting\svalues\sin\sa\scontainer\'s\sargs\s\[NodeConformance\]\s\[Conformance\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
Unexpected error:
    <*errors.withStack | 0xc003394e40>: {
        error: {
            cause: {
                ErrStatus: {
                    TypeMeta: {Kind: "", APIVersion: ""},
                    ListMeta: {
                        SelfLink: "",
                        ResourceVersion: "",
                        Continue: "",
                        RemainingItemCount: nil,
                    },
                    Status: "Failure",
                    Message: "an error on the server (\"Internal Server Error: \\\"/apis/rbac.authorization.k8s.io/v1beta1/namespaces/var-expansion-5926/rolebindings\\\": the server has received too many requests and has asked us to try again later\") has prevented the request from succeeding (post rolebindings.rbac.authorization.k8s.io)",
                    Reason: "InternalError",
                    Details: {
                        Name: "",
                        Group: "rbac.authorization.k8s.io",
                        Kind: "rolebindings",
                        UID: "",
                        Causes: [
                            {
                                Type: "UnexpectedServerResponse",
                                Message: "Internal Server Error: \"/apis/rbac.authorization.k8s.io/v1beta1/namespaces/var-expansion-5926/rolebindings\": the server has received too many requests and has asked us to try again later",
                                Field: "",
                            },
                        ],
                        RetryAfterSeconds: 0,
                    },
                    Code: 500,
                },
            },
            msg: "binding ClusterRole/e2e-test-privileged-psp into \"var-expansion-5926\" for [{ServiceAccount  default var-expansion-5926}]",
        },
        stack: [0x15a53ee, 0x15eeda1, 0x15eed28, 0x15c416d, 0x15c297b, 0x7ac6dc, 0x7ac34f, 0x7ac774, 0x7b2441, 0x7b2064, 0x7b7acf, 0x7b75e4, 0x7b6e27, 0x7b948e, 0x7bbfb7, 0x7bbcfd, 0x36fd267, 0x370025b, 0x507960, 0x4607c1],
    }
    binding ClusterRole/e2e-test-privileged-psp into "var-expansion-5926" for [{ServiceAccount  default var-expansion-5926}]: an error on the server ("Internal Server Error: \"/apis/rbac.authorization.k8s.io/v1beta1/namespaces/var-expansion-5926/rolebindings\": the server has received too many requests and has asked us to try again later") has prevented the request from succeeding (post rolebindings.rbac.authorization.k8s.io)
occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/psp_util.go:151
				
Click to see stdout/stderr from junit_skew01.xml



Kubernetes e2e suite [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance] 37s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[k8s\.io\]\sVariable\sExpansion\sshould\sallow\ssubstituting\svalues\sin\sa\scontainer\'s\scommand\s\[NodeConformance\]\s\[Conformance\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  6 08:58:25.672: Couldn't delete ns: "var-expansion-8464": an error on the server ("Internal Server Error: \"/apis/networking.k8s.io/v1beta1/namespaces/var-expansion-8464/ingresses\": the server has received too many requests and has asked us to try again later") has prevented the request from succeeding (&errors.StatusError{ErrStatus:v1.Status{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ListMeta:v1.ListMeta{SelfLink:"", ResourceVersion:"", Continue:"", RemainingItemCount:(*int64)(nil)}, Status:"Failure", Message:"an error on the server (\"Internal Server Error: \\\"/apis/networking.k8s.io/v1beta1/namespaces/var-expansion-8464/ingresses\\\": the server has received too many requests and has asked us to try again later\") has prevented the request from succeeding", Reason:"InternalError", Details:(*v1.StatusDetails)(0xc002b7aba0), Code:500}})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:335
				
Click to see stdout/stderr from junit_skew25.xml



Kubernetes e2e suite [k8s.io] Variable Expansion should allow substituting values in a volume subpath [sig-storage][NodeFeature:VolumeSubpathEnvExpansion] 19s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[k8s\.io\]\sVariable\sExpansion\sshould\sallow\ssubstituting\svalues\sin\sa\svolume\ssubpath\s\[sig\-storage\]\[NodeFeature\:VolumeSubpathEnvExpansion\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  6 08:39:44.845: Couldn't delete ns: "var-expansion-2422": an error on the server ("Internal Server Error: \"/apis/kubectl-crd-test.k8s.io/v1/namespaces/var-expansion-2422/e2e-test-kubectl-3070-crds\": the server has received too many requests and has asked us to try again later") has prevented the request from succeeding (&errors.StatusError{ErrStatus:v1.Status{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ListMeta:v1.ListMeta{SelfLink:"", ResourceVersion:"", Continue:"", RemainingItemCount:(*int64)(nil)}, Status:"Failure", Message:"an error on the server (\"Internal Server Error: \\\"/apis/kubectl-crd-test.k8s.io/v1/namespaces/var-expansion-2422/e2e-test-kubectl-3070-crds\\\": the server has received too many requests and has asked us to try again later\") has prevented the request from succeeding", Reason:"InternalError", Details:(*v1.StatusDetails)(0xc003101440), Code:500}})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:335
				
stdout/stderr: junit_skew03.xml


Kubernetes e2e suite [k8s.io] [sig-node] AppArmor load AppArmor profiles can disable an AppArmor profile, using unconfined 16s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[k8s\.io\]\s\[sig\-node\]\sAppArmor\sload\sAppArmor\sprofiles\scan\sdisable\san\sAppArmor\sprofile\,\susing\sunconfined$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
Unexpected error:
    <*errors.StatusError | 0xc00113e140>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {
                SelfLink: "",
                ResourceVersion: "",
                Continue: "",
                RemainingItemCount: nil,
            },
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/namespaces/apparmor-9125/serviceaccounts?fieldSelector=metadata.name%3Ddefault&watch=true\\\": the server has received too many requests and has asked us to try again later\") has prevented the request from succeeding (get serviceaccounts)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceaccounts",
                UID: "",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/namespaces/apparmor-9125/serviceaccounts?fieldSelector=metadata.name%3Ddefault&watch=true\": the server has received too many requests and has asked us to try again later",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/namespaces/apparmor-9125/serviceaccounts?fieldSelector=metadata.name%3Ddefault&watch=true\": the server has received too many requests and has asked us to try again later") has prevented the request from succeeding (get serviceaccounts)
occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:217
				
stdout/stderr: junit_skew08.xml


Kubernetes e2e suite [k8s.io] [sig-node] AppArmor load AppArmor profiles should enforce an AppArmor profile 47s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[k8s\.io\]\s\[sig\-node\]\sAppArmor\sload\sAppArmor\sprofiles\sshould\senforce\san\sAppArmor\sprofile$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  6 09:00:30.836: Couldn't delete ns: "apparmor-8449": an error on the server ("Internal Server Error: \"/api/v1/namespaces/apparmor-8449\": the server has received too many requests and has asked us to try again later") has prevented the request from succeeding (delete namespaces apparmor-8449) (&errors.StatusError{ErrStatus:v1.Status{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ListMeta:v1.ListMeta{SelfLink:"", ResourceVersion:"", Continue:"", RemainingItemCount:(*int64)(nil)}, Status:"Failure", Message:"an error on the server (\"Internal Server Error: \\\"/api/v1/namespaces/apparmor-8449\\\": the server has received too many requests and has asked us to try again later\") has prevented the request from succeeding (delete namespaces apparmor-8449)", Reason:"InternalError", Details:(*v1.StatusDetails)(0xc002199440), Code:500}})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:335
				
stdout/stderr: junit_skew06.xml


Kubernetes e2e suite [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] 28s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[k8s\.io\]\s\[sig\-node\]\sEvents\sshould\sbe\ssent\sby\skubelets\sand\sthe\sscheduler\sabout\spods\sscheduling\sand\srunning\s\s\[Conformance\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
Unexpected error:
    <*errors.StatusError | 0xc0017b0460>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {
                SelfLink: "",
                ResourceVersion: "",
                Continue: "",
                RemainingItemCount: nil,
            },
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/namespaces/events-677/serviceaccounts?fieldSelector=metadata.name%3Ddefault&watch=true\\\": the server has received too many requests and has asked us to try again later\") has prevented the request from succeeding (get serviceaccounts)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceaccounts",
                UID: "",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/namespaces/events-677/serviceaccounts?fieldSelector=metadata.name%3Ddefault&watch=true\": the server has received too many requests and has asked us to try again later",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/namespaces/events-677/serviceaccounts?fieldSelector=metadata.name%3Ddefault&watch=true\": the server has received too many requests and has asked us to try again later") has prevented the request from succeeding (get serviceaccounts)
occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:217
				
stdout/stderr: junit_skew01.xml


Kubernetes e2e suite [k8s.io] [sig-node] Mount propagation should propagate mounts to the host 19s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[k8s\.io\]\s\[sig\-node\]\sMount\spropagation\sshould\spropagate\smounts\sto\sthe\shost$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
Unexpected error:
    <*errors.StatusError | 0xc001d30280>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {
                SelfLink: "",
                ResourceVersion: "",
                Continue: "",
                RemainingItemCount: nil,
            },
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/namespaces/mount-propagation-529/serviceaccounts?fieldSelector=metadata.name%3Ddefault&watch=true\\\": the server has received too many requests and has asked us to try again later\") has prevented the request from succeeding (get serviceaccounts)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceaccounts",
                UID: "",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/namespaces/mount-propagation-529/serviceaccounts?fieldSelector=metadata.name%3Ddefault&watch=true\": the server has received too many requests and has asked us to try again later",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/namespaces/mount-propagation-529/serviceaccounts?fieldSelector=metadata.name%3Ddefault&watch=true\": the server has received too many requests and has asked us to try again later") has prevented the request from succeeding (get serviceaccounts)
occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:217
				
stdout/stderr: junit_skew25.xml


Kubernetes e2e suite [k8s.io] [sig-node] NodeProblemDetector [DisabledForLargeClusters] should run without error 2m36s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[k8s\.io\]\s\[sig\-node\]\sNodeProblemDetector\s\[DisabledForLargeClusters\]\sshould\srun\swithout\serror$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  6 08:46:28.605: Couldn't delete ns: "node-problem-detector-4294": an error on the server ("Internal Server Error: \"/api/v1/namespaces/node-problem-detector-4294\": the server has received too many requests and has asked us to try again later") has prevented the request from succeeding (delete namespaces node-problem-detector-4294) (&errors.StatusError{ErrStatus:v1.Status{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ListMeta:v1.ListMeta{SelfLink:"", ResourceVersion:"", Continue:"", RemainingItemCount:(*int64)(nil)}, Status:"Failure", Message:"an error on the server (\"Internal Server Error: \\\"/api/v1/namespaces/node-problem-detector-4294\\\": the server has received too many requests and has asked us to try again later\") has prevented the request from succeeding (delete namespaces node-problem-detector-4294)", Reason:"InternalError", Details:(*v1.StatusDetails)(0xc0032f81e0), Code:500}})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:335
				
stdout/stderr: junit_skew20.xml


Kubernetes e2e suite [k8s.io] [sig-node] Pods Extended [k8s.io] Delete Grace Period should be submitted and removed [Conformance] 32s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[k8s\.io\]\s\[sig\-node\]\sPods\sExtended\s\[k8s\.io\]\sDelete\sGrace\sPeriod\sshould\sbe\ssubmitted\sand\sremoved\s\[Conformance\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  6 08:41:30.819: Couldn't delete ns: "pods-5868": an error on the server ("Internal Server Error: \"/apis/apps/v1/namespaces/pods-5868/deployments\": the server has received too many requests and has asked us to try again later") has prevented the request from succeeding (&errors.StatusError{ErrStatus:v1.Status{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ListMeta:v1.ListMeta{SelfLink:"", ResourceVersion:"", Continue:"", RemainingItemCount:(*int64)(nil)}, Status:"Failure", Message:"an error on the server (\"Internal Server Error: \\\"/apis/apps/v1/namespaces/pods-5868/deployments\\\": the server has received too many requests and has asked us to try again later\") has prevented the request from succeeding", Reason:"InternalError", Details:(*v1.StatusDetails)(0xc0033bc1e0), Code:500}})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:335
				
stdout/stderr: junit_skew01.xml


Kubernetes e2e suite [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be submitted and removed [Conformance] 34s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[k8s\.io\]\s\[sig\-node\]\sPods\sExtended\s\[k8s\.io\]\sPods\sSet\sQOS\sClass\sshould\sbe\ssubmitted\sand\sremoved\s\s\[Conformance\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  6 08:50:26.181: Couldn't delete ns: "pods-7916": an error on the server ("Internal Server Error: \"/api/v1/namespaces/pods-7916/pods\": the server has received too many requests and has asked us to try again later") has prevented the request from succeeding (&errors.StatusError{ErrStatus:v1.Status{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ListMeta:v1.ListMeta{SelfLink:"", ResourceVersion:"", Continue:"", RemainingItemCount:(*int64)(nil)}, Status:"Failure", Message:"an error on the server (\"Internal Server Error: \\\"/api/v1/namespaces/pods-7916/pods\\\": the server has received too many requests and has asked us to try again later\") has prevented the request from succeeding", Reason:"InternalError", Details:(*v1.StatusDetails)(0xc0018d37a0), Code:500}})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:335
				
stdout/stderr: junit_skew01.xml


Kubernetes e2e suite [k8s.io] [sig-node] PreStop graceful pod terminated should wait until preStop hook completes the process 1m7s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[k8s\.io\]\s\[sig\-node\]\sPreStop\sgraceful\spod\sterminated\sshould\swait\suntil\spreStop\shook\scompletes\sthe\sprocess$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  6 08:52:44.231: Couldn't delete ns: "prestop-151": an error on the server ("Internal Server Error: \"/api/v1/namespaces/prestop-151\": the server has received too many requests and has asked us to try again later") has prevented the request from succeeding (delete namespaces prestop-151) (&errors.StatusError{ErrStatus:v1.Status{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ListMeta:v1.ListMeta{SelfLink:"", ResourceVersion:"", Continue:"", RemainingItemCount:(*int64)(nil)}, Status:"Failure", Message:"an error on the server (\"Internal Server Error: \\\"/api/v1/namespaces/prestop-151\\\": the server has received too many requests and has asked us to try again later\") has prevented the request from succeeding (delete namespaces prestop-151)", Reason:"InternalError", Details:(*v1.StatusDetails)(0xc0025b9ce0), Code:500}})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:335
				
stdout/stderr: junit_skew20.xml


Kubernetes e2e suite [k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance] 25s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[k8s\.io\]\s\[sig\-node\]\sPreStop\sshould\scall\sprestop\swhen\skilling\sa\spod\s\s\[Conformance\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
Unexpected error:
    <*errors.StatusError | 0xc001a24460>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {
                SelfLink: "",
                ResourceVersion: "",
                Continue: "",
                RemainingItemCount: nil,
            },
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/namespaces/prestop-3241/serviceaccounts?fieldSelector=metadata.name%3Ddefault&watch=true\\\": the server has received too many requests and has asked us to try again later\") has prevented the request from succeeding (get serviceaccounts)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceaccounts",
                UID: "",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/namespaces/prestop-3241/serviceaccounts?fieldSelector=metadata.name%3Ddefault&watch=true\": the server has received too many requests and has asked us to try again later",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/namespaces/prestop-3241/serviceaccounts?fieldSelector=metadata.name%3Ddefault&watch=true\": the server has received too many requests and has asked us to try again later") has prevented the request from succeeding (get serviceaccounts)
occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:217
				
stdout/stderr: junit_skew12.xml


Kubernetes e2e suite [k8s.io] [sig-node] SSH should SSH to all nodes and run commands 27s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[k8s\.io\]\s\[sig\-node\]\sSSH\sshould\sSSH\sto\sall\snodes\sand\srun\scommands$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  6 09:01:15.924: Couldn't delete ns: "ssh-5258": an error on the server ("Internal Server Error: \"/apis/networking.gke.io/v1beta1/namespaces/ssh-5258/managedcertificates\": the server has received too many requests and has asked us to try again later") has prevented the request from succeeding (&errors.StatusError{ErrStatus:v1.Status{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ListMeta:v1.ListMeta{SelfLink:"", ResourceVersion:"", Continue:"", RemainingItemCount:(*int64)(nil)}, Status:"Failure", Message:"an error on the server (\"Internal Server Error: \\\"/apis/networking.gke.io/v1beta1/namespaces/ssh-5258/managedcertificates\\\": the server has received too many requests and has asked us to try again later\") has prevented the request from succeeding", Reason:"InternalError", Details:(*v1.StatusDetails)(0xc0018885a0), Code:500}})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:335
				
stdout/stderr: junit_skew21.xml


Kubernetes e2e suite [k8s.io] [sig-node] Security Context should support container.SecurityContext.RunAsUser [LinuxOnly] 32s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[k8s\.io\]\s\[sig\-node\]\sSecurity\sContext\sshould\ssupport\scontainer\.SecurityContext\.RunAsUser\s\[LinuxOnly\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:100
Feb  6 08:49:29.844: Failed to delete pod "security-context-e31f8532-4503-48cf-a021-5cec7f7ffe66": an error on the server ("Internal Server Error: \"/api/v1/namespaces/security-context-531/pods/security-context-e31f8532-4503-48cf-a021-5cec7f7ffe66\": the server has received too many requests and has asked us to try again later") has prevented the request from succeeding (delete pods security-context-e31f8532-4503-48cf-a021-5cec7f7ffe66)
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:175
				
stdout/stderr: junit_skew13.xml


Kubernetes e2e suite [k8s.io] [sig-node] Security Context should support pod.Spec.SecurityContext.RunAsUser And pod.Spec.SecurityContext.RunAsGroup [LinuxOnly] 23s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[k8s\.io\]\s\[sig\-node\]\sSecurity\sContext\sshould\ssupport\spod\.Spec\.SecurityContext\.RunAsUser\sAnd\spod\.Spec\.SecurityContext\.RunAsGroup\s\[LinuxOnly\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:86
Feb  6 08:45:14.171: Failed to delete pod "security-context-122d7458-87f9-4750-9dc3-353e7ed1dd2f": an error on the server ("Internal Server Error: \"/api/v1/namespaces/security-context-9122/pods/security-context-122d7458-87f9-4750-9dc3-353e7ed1dd2f\": the server has received too many requests and has asked us to try again later") has prevented the request from succeeding (delete pods security-context-122d7458-87f9-4750-9dc3-353e7ed1dd2f)
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:175
				
stdout/stderr: junit_skew22.xml


Kubernetes e2e suite [k8s.io] [sig-node] Security Context should support pod.Spec.SecurityContext.RunAsUser [LinuxOnly] 23s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[k8s\.io\]\s\[sig\-node\]\sSecurity\sContext\sshould\ssupport\spod\.Spec\.SecurityContext\.RunAsUser\s\[LinuxOnly\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
Unexpected error:
    <*errors.withStack | 0xc0033cfba0>: {
        error: {
            cause: {
                ErrStatus: {
                    TypeMeta: {Kind: "", APIVersion: ""},
                    ListMeta: {
                        SelfLink: "",
                        ResourceVersion: "",
                        Continue: "",
                        RemainingItemCount: nil,
                    },
                    Status: "Failure",
                    Message: "an error on the server (\"Internal Server Error: \\\"/apis/rbac.authorization.k8s.io/v1beta1/namespaces/security-context-7848/rolebindings\\\": the server has received too many requests and has asked us to try again later\") has prevented the request from succeeding (post rolebindings.rbac.authorization.k8s.io)",
                    Reason: "InternalError",
                    Details: {
                        Name: "",
                        Group: "rbac.authorization.k8s.io",
                        Kind: "rolebindings",
                        UID: "",
                        Causes: [
                            {
                                Type: "UnexpectedServerResponse",
                                Message: "Internal Server Error: \"/apis/rbac.authorization.k8s.io/v1beta1/namespaces/security-context-7848/rolebindings\": the server has received too many requests and has asked us to try again later",
                                Field: "",
                            },
                        ],
                        RetryAfterSeconds: 0,
                    },
                    Code: 500,
                },
            },
            msg: "binding ClusterRole/e2e-test-privileged-psp into \"security-context-7848\" for [{ServiceAccount  default security-context-7848}]",
        },
        stack: [0x15a53ee, 0x15eeda1, 0x15eed28, 0x15c416d, 0x15c297b, 0x7ac6dc, 0x7ac34f, 0x7ac774, 0x7b2441, 0x7b2064, 0x7b7acf, 0x7b75e4, 0x7b6e27, 0x7b948e, 0x7bbfb7, 0x7bbcfd, 0x36fd267, 0x370025b, 0x507960, 0x4607c1],
    }
    binding ClusterRole/e2e-test-privileged-psp into "security-context-7848" for [{ServiceAccount  default security-context-7848}]: an error on the server ("Internal Server Error: \"/apis/rbac.authorization.k8s.io/v1beta1/namespaces/security-context-7848/rolebindings\": the server has received too many requests and has asked us to try again later") has prevented the request from succeeding (post rolebindings.rbac.authorization.k8s.io)
occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/psp_util.go:151
				
stdout/stderr: junit_skew01.xml


Kubernetes e2e suite [k8s.io] [sig-node] Security Context should support pod.Spec.SecurityContext.SupplementalGroups [LinuxOnly] 33s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[k8s\.io\]\s\[sig\-node\]\sSecurity\sContext\sshould\ssupport\spod\.Spec\.SecurityContext\.SupplementalGroups\s\[LinuxOnly\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  6 08:43:17.954: Couldn't delete ns: "security-context-1220": an error on the server ("Internal Server Error: \"/api/v1/namespaces/security-context-1220/configmaps\": the server has received too many requests and has asked us to try again later") has prevented the request from succeeding (&errors.StatusError{ErrStatus:v1.Status{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ListMeta:v1.ListMeta{SelfLink:"", ResourceVersion:"", Continue:"", RemainingItemCount:(*int64)(nil)}, Status:"Failure", Message:"an error on the server (\"Internal Server Error: \\\"/api/v1/namespaces/security-context-1220/configmaps\\\": the server has received too many requests and has asked us to try again later\") has prevented the request from succeeding", Reason:"InternalError", Details:(*v1.StatusDetails)(0xc002030e40), Code:500}})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:335
				
stdout/stderr: junit_skew18.xml


Kubernetes e2e suite [k8s.io] [sig-node] crictl should be able to run crictl on the node 43s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[k8s\.io\]\s\[sig\-node\]\scrictl\sshould\sbe\sable\sto\srun\scrictl\son\sthe\snode$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  6 08:54:22.355: Couldn't delete ns: "crictl-3550": an error on the server ("Internal Server Error: \"/api/v1/namespaces/crictl-3550/events\": the server has received too many requests and has asked us to try again later") has prevented the request from succeeding (&errors.StatusError{ErrStatus:v1.Status{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ListMeta:v1.ListMeta{SelfLink:"", ResourceVersion:"", Continue:"", RemainingItemCount:(*int64)(nil)}, Status:"Failure", Message:"an error on the server (\"Internal Server Error: \\\"/api/v1/namespaces/crictl-3550/events\\\": the server has received too many requests and has asked us to try again later\") has prevented the request from succeeding", Reason:"InternalError", Details:(*v1.StatusDetails)(0xc001bd5380), Code:500}})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:335
				
stdout/stderr: junit_skew10.xml


Kubernetes e2e suite [k8s.io] [sig-node] kubelet [k8s.io] [sig-node] Clean up pods on node kubelet should be able to delete 10 pods per node in 1m0s. 26s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[k8s\.io\]\s\[sig\-node\]\skubelet\s\[k8s\.io\]\s\[sig\-node\]\sClean\sup\spods\son\snode\skubelet\sshould\sbe\sable\sto\sdelete\s10\spods\sper\snode\sin\s1m0s\.$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
Unexpected error:
    <*errors.StatusError | 0xc00037b4a0>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {
                SelfLink: "",
                ResourceVersion: "",
                Continue: "",
                RemainingItemCount: nil,
            },
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/namespaces/kubelet-4391/serviceaccounts?fieldSelector=metadata.name%3Ddefault&watch=true\\\": the server has received too many requests and has asked us to try again later\") has prevented the request from succeeding (get serviceaccounts)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceaccounts",
                UID: "",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/namespaces/kubelet-4391/serviceaccounts?fieldSelector=metadata.name%3Ddefault&watch=true\": the server has received too many requests and has asked us to try again later",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/namespaces/kubelet-4391/serviceaccounts?fieldSelector=metadata.name%3Ddefault&watch=true\": the server has received too many requests and has asked us to try again later") has prevented the request from succeeding (get serviceaccounts)
occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:217
				
stdout/stderr: junit_skew06.xml


Kubernetes e2e suite [sig-api-machinery] AdmissionWebhook Should be able to deny attaching pod 1m18s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-api\-machinery\]\sAdmissionWebhook\sShould\sbe\sable\sto\sdeny\sattaching\spod$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:133
registering webhook config e2e-test-webhook-config-attaching-pod with namespace webhook-637
Unexpected error:
    <*errors.StatusError | 0xc001eda140>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {
                SelfLink: "",
                ResourceVersion: "",
                Continue: "",
                RemainingItemCount: nil,
            },
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/apis/admissionregistration.k8s.io/v1beta1/validatingwebhookconfigurations\\\": the server has received too many requests and has asked us to try again later\") has prevented the request from succeeding (post validatingwebhookconfigurations.admissionregistration.k8s.io)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "admissionregistration.k8s.io",
                Kind: "validatingwebhookconfigurations",
                UID: "",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/apis/admissionregistration.k8s.io/v1beta1/validatingwebhookconfigurations\": the server has received too many requests and has asked us to try again later",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/apis/admissionregistration.k8s.io/v1beta1/validatingwebhookconfigurations\": the server has received too many requests and has asked us to try again later") has prevented the request from succeeding (post validatingwebhookconfigurations.admissionregistration.k8s.io)
occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:535
				
stdout/stderr: junit_skew21.xml


Kubernetes e2e suite [sig-api-machinery] AdmissionWebhook Should be able to deny custom resource creation and deletion 1m28s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-api\-machinery\]\sAdmissionWebhook\sShould\sbe\sable\sto\sdeny\scustom\sresource\screation\sand\sdeletion$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:100
creating service e2e-test-webhook in namespace webhook-9037
Unexpected error:
    <*errors.StatusError | 0xc0022ac6e0>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {
                SelfLink: "",
                ResourceVersion: "",
                Continue: "",
                RemainingItemCount: nil,
            },
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/namespaces/webhook-9037/services\\\": the server has received too many requests and has asked us to try again later\") has prevented the request from succeeding (post services)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "services",
                UID: "",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/namespaces/webhook-9037/services\": the server has received too many requests and has asked us to try again later",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/namespaces/webhook-9037/services\": the server has received too many requests and has asked us to try again later") has prevented the request from succeeding (post services)
occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:407
				
stdout/stderr from junit_skew14.xml



Kubernetes e2e suite [sig-api-machinery] AdmissionWebhook Should be able to deny pod and configmap creation 30s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-api\-machinery\]\sAdmissionWebhook\sShould\sbe\sable\sto\sdeny\spod\sand\sconfigmap\screation$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
Unexpected error:
    <*errors.withStack | 0xc001b255e0>: {
        error: {
            cause: {
                ErrStatus: {
                    TypeMeta: {Kind: "", APIVersion: ""},
                    ListMeta: {
                        SelfLink: "",
                        ResourceVersion: "",
                        Continue: "",
                        RemainingItemCount: nil,
                    },
                    Status: "Failure",
                    Message: "an error on the server (\"Internal Server Error: \\\"/apis/rbac.authorization.k8s.io/v1beta1/namespaces/webhook-2520/rolebindings\\\": the server has received too many requests and has asked us to try again later\") has prevented the request from succeeding (post rolebindings.rbac.authorization.k8s.io)",
                    Reason: "InternalError",
                    Details: {
                        Name: "",
                        Group: "rbac.authorization.k8s.io",
                        Kind: "rolebindings",
                        UID: "",
                        Causes: [
                            {
                                Type: "UnexpectedServerResponse",
                                Message: "Internal Server Error: \"/apis/rbac.authorization.k8s.io/v1beta1/namespaces/webhook-2520/rolebindings\": the server has received too many requests and has asked us to try again later",
                                Field: "",
                            },
                        ],
                        RetryAfterSeconds: 0,
                    },
                    Code: 500,
                },
            },
            msg: "binding ClusterRole/e2e-test-privileged-psp into \"webhook-2520\" for [{ServiceAccount  default webhook-2520}]",
        },
        stack: [0x15a53ee, 0x15eeda1, 0x15eed28, 0x15c416d, 0x15c297b, 0x7ac6dc, 0x7ac34f, 0x7ac774, 0x7b2441, 0x7b2064, 0x7b7acf, 0x7b75e4, 0x7b6e27, 0x7b948e, 0x7bbfb7, 0x7bbcfd, 0x36fd267, 0x370025b, 0x507960, 0x4607c1],
    }
    binding ClusterRole/e2e-test-privileged-psp into "webhook-2520" for [{ServiceAccount  default webhook-2520}]: an error on the server ("Internal Server Error: \"/apis/rbac.authorization.k8s.io/v1beta1/namespaces/webhook-2520/rolebindings\": the server has received too many requests and has asked us to try again later") has prevented the request from succeeding (post rolebindings.rbac.authorization.k8s.io)
occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/psp_util.go:151
				
stdout/stderr from junit_skew15.xml



Kubernetes e2e suite [sig-api-machinery] AdmissionWebhook Should deny crd creation 31s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-api\-machinery\]\sAdmissionWebhook\sShould\sdeny\scrd\screation$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:100
waiting for the deployment status valid%!(EXTRA string=gcr.io/kubernetes-e2e-test-images/webhook:1.15v1, string=sample-webhook-deployment, string=webhook-2526)
Unexpected error:
    <*errors.errorString | 0xc00154a490>: {
        s: "error waiting for deployment \"sample-webhook-deployment\" status to match expectation: Unauthorized",
    }
    error waiting for deployment "sample-webhook-deployment" status to match expectation: Unauthorized
occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:384
				
stdout/stderr from junit_skew08.xml



Kubernetes e2e suite [sig-api-machinery] AdmissionWebhook Should honor timeout 1m16s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-api\-machinery\]\sAdmissionWebhook\sShould\shonor\stimeout$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:100
waiting for service webhook-2798/e2e-test-webhook have 1 endpoint
Unexpected error:
    <*errors.StatusError | 0xc001d2e140>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {
                SelfLink: "",
                ResourceVersion: "",
                Continue: "",
                RemainingItemCount: nil,
            },
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/namespaces/webhook-2798/endpoints\\\": the server has received too many requests and has asked us to try again later\") has prevented the request from succeeding (get endpoints)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "endpoints",
                UID: "",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/namespaces/webhook-2798/endpoints\": the server has received too many requests and has asked us to try again later",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/namespaces/webhook-2798/endpoints\": the server has received too many requests and has asked us to try again later") has prevented the request from succeeding (get endpoints)
occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:411
				
stdout/stderr from junit_skew18.xml



Kubernetes e2e suite [sig-api-machinery] AdmissionWebhook Should mutate configmap 1m0s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-api\-machinery\]\sAdmissionWebhook\sShould\smutate\sconfigmap$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  6 08:41:35.400: Couldn't delete ns: "webhook-7952": an error on the server ("Internal Server Error: \"/api/v1/namespaces/webhook-7952/persistentvolumeclaims\": the server has received too many requests and has asked us to try again later") has prevented the request from succeeding (&errors.StatusError{ErrStatus:v1.Status{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ListMeta:v1.ListMeta{SelfLink:"", ResourceVersion:"", Continue:"", RemainingItemCount:(*int64)(nil)}, Status:"Failure", Message:"an error on the server (\"Internal Server Error: \\\"/api/v1/namespaces/webhook-7952/persistentvolumeclaims\\\": the server has received too many requests and has asked us to try again later\") has prevented the request from succeeding", Reason:"InternalError", Details:(*v1.StatusDetails)(0xc001619140), Code:500}})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:335
				
stdout/stderr from junit_skew03.xml



Kubernetes e2e suite [sig-api-machinery] AdmissionWebhook Should mutate custom resource 1m16s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-api\-machinery\]\sAdmissionWebhook\sShould\smutate\scustom\sresource$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:177
registering custom resource webhook config webhook-5001 with namespace webhook-5001
Unexpected error:
    <*errors.StatusError | 0xc00201a000>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {
                SelfLink: "",
                ResourceVersion: "",
                Continue: "",
                RemainingItemCount: nil,
            },
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/apis/admissionregistration.k8s.io/v1beta1/mutatingwebhookconfigurations\\\": the server has received too many requests and has asked us to try again later\") has prevented the request from succeeding (post mutatingwebhookconfigurations.admissionregistration.k8s.io)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "admissionregistration.k8s.io",
                Kind: "mutatingwebhookconfigurations",
                UID: "",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/apis/admissionregistration.k8s.io/v1beta1/mutatingwebhookconfigurations\": the server has received too many requests and has asked us to try again later",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/apis/admissionregistration.k8s.io/v1beta1/mutatingwebhookconfigurations\": the server has received too many requests and has asked us to try again later") has prevented the request from succeeding (post mutatingwebhookconfigurations.admissionregistration.k8s.io)
occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:1386
				
stdout/stderr from junit_skew15.xml



Kubernetes e2e suite [sig-api-machinery] AdmissionWebhook Should mutate custom resource with different stored version 24s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-api\-machinery\]\sAdmissionWebhook\sShould\smutate\scustom\sresource\swith\sdifferent\sstored\sversion$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
Unexpected error:
    <*errors.StatusError | 0xc001a86a00>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {
                SelfLink: "",
                ResourceVersion: "",
                Continue: "",
                RemainingItemCount: nil,
            },
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/namespaces/webhook-5208/serviceaccounts?fieldSelector=metadata.name%3Ddefault&watch=true\\\": the server has received too many requests and has asked us to try again later\") has prevented the request from succeeding (get serviceaccounts)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceaccounts",
                UID: "",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/namespaces/webhook-5208/serviceaccounts?fieldSelector=metadata.name%3Ddefault&watch=true\": the server has received too many requests and has asked us to try again later",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/namespaces/webhook-5208/serviceaccounts?fieldSelector=metadata.name%3Ddefault&watch=true\": the server has received too many requests and has asked us to try again later") has prevented the request from succeeding (get serviceaccounts)
occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:217
				
stdout/stderr from junit_skew12.xml



Kubernetes e2e suite [sig-api-machinery] AdmissionWebhook Should mutate custom resource with pruning 24s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-api\-machinery\]\sAdmissionWebhook\sShould\smutate\scustom\sresource\swith\spruning$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
Unexpected error:
    <*errors.StatusError | 0xc00199c6e0>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {
                SelfLink: "",
                ResourceVersion: "",
                Continue: "",
                RemainingItemCount: nil,
            },
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/namespaces/webhook-4178/serviceaccounts?fieldSelector=metadata.name%3Ddefault&watch=true\\\": the server has received too many requests and has asked us to try again later\") has prevented the request from succeeding (get serviceaccounts)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceaccounts",
                UID: "",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/namespaces/webhook-4178/serviceaccounts?fieldSelector=metadata.name%3Ddefault&watch=true\": the server has received too many requests and has asked us to try again later",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/namespaces/webhook-4178/serviceaccounts?fieldSelector=metadata.name%3Ddefault&watch=true\": the server has received too many requests and has asked us to try again later") has prevented the request from succeeding (get serviceaccounts)
occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:217
				
stdout/stderr from junit_skew09.xml



Kubernetes e2e suite [sig-api-machinery] AdmissionWebhook Should mutate pod and apply defaults after mutation 29s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-api\-machinery\]\sAdmissionWebhook\sShould\smutate\spod\sand\sapply\sdefaults\safter\smutation$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
Unexpected error:
    <*errors.StatusError | 0xc00201bf40>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {
                SelfLink: "",
                ResourceVersion: "",
                Continue: "",
                RemainingItemCount: nil,
            },
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/namespaces/webhook-2688/serviceaccounts?fieldSelector=metadata.name%3Ddefault&watch=true\\\": the server has received too many requests and has asked us to try again later\") has prevented the request from succeeding (get serviceaccounts)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceaccounts",
                UID: "",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/namespaces/webhook-2688/serviceaccounts?fieldSelector=metadata.name%3Ddefault&watch=true\": the server has received too many requests and has asked us to try again later",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/namespaces/webhook-2688/serviceaccounts?fieldSelector=metadata.name%3Ddefault&watch=true\": the server has received too many requests and has asked us to try again later") has prevented the request from succeeding (get serviceaccounts)
occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:217
				
stdout/stderr from junit_skew15.xml



Kubernetes e2e suite [sig-api-machinery] AdmissionWebhook Should not be able to mutate or prevent deletion of webhook configuration objects 17s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-api\-machinery\]\sAdmissionWebhook\sShould\snot\sbe\sable\sto\smutate\sor\sprevent\sdeletion\sof\swebhook\sconfiguration\sobjects$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
Unexpected error:
    <*errors.StatusError | 0xc00249b040>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {
                SelfLink: "",
                ResourceVersion: "",
                Continue: "",
                RemainingItemCount: nil,
            },
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/namespaces/webhook-983/serviceaccounts?fieldSelector=metadata.name%3Ddefault&watch=true\\\": the server has received too many requests and has asked us to try again later\") has prevented the request from succeeding (get serviceaccounts)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceaccounts",
                UID: "",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/namespaces/webhook-983/serviceaccounts?fieldSelector=metadata.name%3Ddefault&watch=true\": the server has received too many requests and has asked us to try again later",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/namespaces/webhook-983/serviceaccounts?fieldSelector=metadata.name%3Ddefault&watch=true\": the server has received too many requests and has asked us to try again later") has prevented the request from succeeding (get serviceaccounts)
occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:217
				
stdout/stderr from junit_skew17.xml



Kubernetes e2e suite [sig-api-machinery] AdmissionWebhook Should unconditionally reject operations on fail closed webhook 36s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-api\-machinery\]\sAdmissionWebhook\sShould\sunconditionally\sreject\soperations\son\sfail\sclosed\swebhook$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
Unexpected error:
    <*errors.StatusError | 0xc001c2ef00>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {
                SelfLink: "",
                ResourceVersion: "",
                Continue: "",
                RemainingItemCount: nil,
            },
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/namespaces/webhook-5769/serviceaccounts?fieldSelector=metadata.name%3Ddefault&watch=true\\\": the server has received too many requests and has asked us to try again later\") has prevented the request from succeeding (get serviceaccounts)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceaccounts",
                UID: "",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/namespaces/webhook-5769/serviceaccounts?fieldSelector=metadata.name%3Ddefault&watch=true\": the server has received too many requests and has asked us to try again later",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/namespaces/webhook-5769/serviceaccounts?fieldSelector=metadata.name%3Ddefault&watch=true\": the server has received too many requests and has asked us to try again later") has prevented the request from succeeding (get serviceaccounts)
occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:217
				
stdout/stderr from junit_skew14.xml



Kubernetes e2e suite [sig-api-machinery] Aggregator Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance] 1m21s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-api\-machinery\]\sAggregator\sShould\sbe\sable\sto\ssupport\sthe\s1\.10\sSample\sAPI\sServer\susing\sthe\scurrent\sAggregator\s\[Conformance\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
creating apiservice v1alpha1.wardle.k8s.io with namespace aggregator-601
Unexpected error:
    <*errors.StatusError | 0xc0014fe320>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {
                SelfLink: "",
                ResourceVersion: "",
                Continue: "",
                RemainingItemCount: nil,
            },
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/apis/apiregistration.k8s.io/v1beta1/apiservices\\\": the server has received too many requests and has asked us to try again later\") has prevented the request from succeeding (post apiservices.apiregistration.k8s.io)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "apiregistration.k8s.io",
                Kind: "apiservices",
                UID: "",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/apis/apiregistration.k8s.io/v1beta1/apiservices\": the server has received too many requests and has asked us to try again later",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/apis/apiregistration.k8s.io/v1beta1/apiservices\": the server has received too many requests and has asked us to try again later") has prevented the request from succeeding (post apiservices.apiregistration.k8s.io)
occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:345
				
stdout/stderr from junit_skew08.xml



Kubernetes e2e suite [sig-api-machinery] CustomResourceConversionWebhook Should be able to convert a non homogeneous list of CRs 56s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-api\-machinery\]\sCustomResourceConversionWebhook\sShould\sbe\sable\sto\sconvert\sa\snon\shomogeneous\slist\sof\sCRs$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:126
creating service e2e-test-crd-conversion-webhook in namespace crd-webhook-5219
Unexpected error:
    <*errors.StatusError | 0xc0020cdae0>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {
                SelfLink: "",
                ResourceVersion: "",
                Continue: "",
                RemainingItemCount: nil,
            },
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/namespaces/crd-webhook-5219/services\\\": the server has received too many requests and has asked us to try again later\") has prevented the request from succeeding (post services)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "services",
                UID: "",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/namespaces/crd-webhook-5219/services\": the server has received too many requests and has asked us to try again later",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/namespaces/crd-webhook-5219/services\": the server has received too many requests and has asked us to try again later") has prevented the request from succeeding (post services)
occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:335