Result: FAILURE
Tests: 696 failed / 99 succeeded
Started: 2020-02-04 19:42
Elapsed: 1h49m
Builder: gke-prow-default-pool-cf4891d4-z214
pod: 5c399dff-4786-11ea-b8d7-32e01c04da64
resultstore: https://source.cloud.google.com/results/invocations/67124c3b-a31a-415e-8c96-5216a15f0d85/targets/test
infra-commit: 7e8cd997a
job-version: v1.16.7-beta.0.23+0a70c2fa6d4642
node_os_image: cos-77-12371-89-0
revision: v1.16.7-beta.0.23+0a70c2fa6d4642

Test Failures


Cluster downgrade hpa-upgrade 15m51s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Cluster\sdowngrade\shpa\-upgrade$'
timeout waiting 15m0s for 3 replicas
Unexpected error:
    <*errors.errorString | 0xc00027b8c0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred

k8s.io/kubernetes/test/e2e/upgrades.(*HPAUpgradeTest).test(0x7ea44e0)
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/upgrades/horizontal_pod_autoscalers.go:90 +0x3e3
k8s.io/kubernetes/test/e2e/upgrades.(*HPAUpgradeTest).Setup(0x7ea44e0, 0xc000822280)
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/upgrades/horizontal_pod_autoscalers.go:60 +0x1f7
k8s.io/kubernetes/test/e2e/lifecycle.(*chaosMonkeyAdapter).Test(0xc001577d00, 0xc0034ab9e0)
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/lifecycle/cluster_upgrade.go:395 +0x2c1
k8s.io/kubernetes/test/e2e/chaosmonkey.(*Chaosmonkey).Do.func1(0xc0034ab9e0, 0xc0030a5c70)
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/chaosmonkey/chaosmonkey.go:90 +0x76
created by k8s.io/kubernetes/test/e2e/chaosmonkey.(*Chaosmonkey).Do
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/chaosmonkey/chaosmonkey.go:87 +0xa7
from junit_upgradeupgrades.xml
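
The timeout above comes from a replica-count wait: the HPA upgrade test keeps polling until the autoscaler has settled the workload at 3 replicas, and gives up after 15 minutes. Below is a minimal client-go sketch of such a wait loop; it is illustrative only (the function name and the Deployment target are assumptions, the real test drives a ResourceConsumer), but note that wait.Poll is what produces the exact "timed out waiting for the condition" error string seen in the log.

package e2esketch

import (
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitForReplicas polls until the Deployment reports the desired number of
// ready replicas. On timeout, wait.Poll returns wait.ErrWaitTimeout, whose
// message is "timed out waiting for the condition" -- the error above.
func waitForReplicas(c kubernetes.Interface, ns, name string, want int32) error {
	return wait.Poll(10*time.Second, 15*time.Minute, func() (bool, error) {
		d, err := c.AppsV1().Deployments(ns).Get(name, metav1.GetOptions{})
		if err != nil {
			return false, err // a hard API error aborts the wait early
		}
		fmt.Printf("%s/%s: %d/%d replicas ready\n", ns, name, d.Status.ReadyReplicas, want)
		return d.Status.ReadyReplicas >= want, nil
	})
}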


Kubernetes e2e suite [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance] 1m2s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[k8s\.io\]\sContainer\sLifecycle\sHook\swhen\screate\sa\spod\swith\slifecycle\shook\sshould\sexecute\spoststart\sexec\shook\sproperly\s\[NodeConformance\]\s\[Conformance\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  4 20:45:45.122: Couldn't delete ns: "container-lifecycle-hook-4719": an error on the server ("Internal Server Error: \"/api/v1/namespaces/container-lifecycle-hook-4719\": the server has received too many requests and has asked us to try again later") has prevented the request from succeeding (delete namespaces container-lifecycle-hook-4719) (&errors.StatusError{ErrStatus:v1.Status{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ListMeta:v1.ListMeta{SelfLink:"", ResourceVersion:"", Continue:"", RemainingItemCount:(*int64)(nil)}, Status:"Failure", Message:"an error on the server (\"Internal Server Error: \\\"/api/v1/namespaces/container-lifecycle-hook-4719\\\": the server has received too many requests and has asked us to try again later\") has prevented the request from succeeding (delete namespaces container-lifecycle-hook-4719)", Reason:"InternalError", Details:(*v1.StatusDetails)(0xc001ea3200), Code:500}})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:335
				
from junit_skew23.xml
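
This failure, and most of the skew failures below, hit the same wall: during the downgrade the apiserver answers with HTTP 500 "Internal Server Error ... the server has received too many requests", and the framework's namespace cleanup gives up on the first rejection. A hedged sketch of tolerating such transient rejections with client-go's retry helper follows; it assumes a client-go release that provides retry.OnError and uses the pre-context method signatures matching this job's v1.16 vintage.

package e2esketch

import (
	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/util/retry"
)

// deleteNamespaceWithRetry retries the delete with backoff while the
// apiserver is shedding load, instead of failing the test on one 500.
func deleteNamespaceWithRetry(c kubernetes.Interface, ns string) error {
	retriable := func(err error) bool {
		// The run above shows overload surfaced as InternalError (500);
		// a well-behaved server may also send 429 Too Many Requests.
		return apierrors.IsInternalError(err) || apierrors.IsTooManyRequests(err)
	}
	return retry.OnError(retry.DefaultBackoff, retriable, func() error {
		err := c.CoreV1().Namespaces().Delete(ns, &metav1.DeleteOptions{})
		if apierrors.IsNotFound(err) {
			return nil // namespace already gone counts as success
		}
		return err
	})
}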


Kubernetes e2e suite [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance] 1m23s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[k8s\.io\]\sContainer\sLifecycle\sHook\swhen\screate\sa\spod\swith\slifecycle\shook\sshould\sexecute\spoststart\shttp\shook\sproperly\s\[NodeConformance\]\s\[Conformance\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  4 20:56:18.096: Couldn't delete ns: "container-lifecycle-hook-7149": an error on the server ("Internal Server Error: \"/api/v1/namespaces/container-lifecycle-hook-7149/endpoints\": the server has received too many requests and has asked us to try again later") has prevented the request from succeeding (&errors.StatusError{ErrStatus:v1.Status{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ListMeta:v1.ListMeta{SelfLink:"", ResourceVersion:"", Continue:"", RemainingItemCount:(*int64)(nil)}, Status:"Failure", Message:"an error on the server (\"Internal Server Error: \\\"/api/v1/namespaces/container-lifecycle-hook-7149/endpoints\\\": the server has received too many requests and has asked us to try again later\") has prevented the request from succeeding", Reason:"InternalError", Details:(*v1.StatusDetails)(0xc003276600), Code:500}})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:335
				
from junit_skew01.xml


Kubernetes e2e suite [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance] 39s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[k8s\.io\]\sContainer\sLifecycle\sHook\swhen\screate\sa\spod\swith\slifecycle\shook\sshould\sexecute\sprestop\shttp\shook\sproperly\s\[NodeConformance\]\s\[Conformance\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Feb  4 20:43:14.939: Failed to delete pod "pod-with-prestop-http-hook": an error on the server ("Internal Server Error: \"/api/v1/namespaces/container-lifecycle-hook-9435/pods/pod-with-prestop-http-hook\": the server has received too many requests and has asked us to try again later") has prevented the request from succeeding (delete pods pod-with-prestop-http-hook)
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:175
				
from junit_skew03.xml


Kubernetes e2e suite [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] 18s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[k8s\.io\]\sContainer\sRuntime\sblackbox\stest\son\sterminated\scontainer\sshould\sreport\stermination\smessage\s\[LinuxOnly\]\sas\sempty\swhen\spod\ssucceeds\sand\sTerminationMessagePolicy\sFallbackToLogsOnError\sis\sset\s\[NodeConformance\]\s\[Conformance\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
Unexpected error:
    <*errors.StatusError | 0xc001c49ae0>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {
                SelfLink: "",
                ResourceVersion: "",
                Continue: "",
                RemainingItemCount: nil,
            },
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/namespaces/container-runtime-7498/serviceaccounts?fieldSelector=metadata.name%3Ddefault&amp;watch=true\\\": the server has received too many requests and has asked us to try again later\") has prevented the request from succeeding (get serviceaccounts)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceaccounts",
                UID: "",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/namespaces/container-runtime-7498/serviceaccounts?fieldSelector=metadata.name%3Ddefault&amp;watch=true\": the server has received too many requests and has asked us to try again later",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/namespaces/container-runtime-7498/serviceaccounts?fieldSelector=metadata.name%3Ddefault&watch=true\": the server has received too many requests and has asked us to try again later") has prevented the request from succeeding (get serviceaccounts)
occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:217
				
from junit_skew17.xml
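
The failing URL in this dump is informative on its own: before running a test body, the framework watches the new namespace's "default" service account (a watch with fieldSelector=metadata.name=default) until it exists, and here that watch request itself was rejected. A sketch of the equivalent client-go call, again with pre-context signatures; the function name is an assumption.

package e2esketch

import (
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/watch"
	"k8s.io/client-go/kubernetes"
)

// watchDefaultServiceAccount opens the same watch as the failing request:
// GET /api/v1/namespaces/<ns>/serviceaccounts?fieldSelector=metadata.name%3Ddefault&watch=true
func watchDefaultServiceAccount(c kubernetes.Interface, ns string) error {
	w, err := c.CoreV1().ServiceAccounts(ns).Watch(metav1.ListOptions{
		FieldSelector: "metadata.name=default",
	})
	if err != nil {
		return err // this is the call the apiserver rejected above
	}
	defer w.Stop()
	for ev := range w.ResultChan() {
		if ev.Type == watch.Added || ev.Type == watch.Modified {
			return nil // default service account is present
		}
	}
	return fmt.Errorf("watch closed before default service account appeared")
}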


Kubernetes e2e suite [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] 50s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[k8s\.io\]\sContainer\sRuntime\sblackbox\stest\son\sterminated\scontainer\sshould\sreport\stermination\smessage\s\[LinuxOnly\]\sfrom\sfile\swhen\spod\ssucceeds\sand\sTerminationMessagePolicy\sFallbackToLogsOnError\sis\sset\s\[NodeConformance\]\s\[Conformance\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  4 21:03:19.451: Couldn't delete ns: "container-runtime-7810": an error on the server ("Internal Server Error: \"/api/v1/namespaces/container-runtime-7810\": the server has received too many requests and has asked us to try again later") has prevented the request from succeeding (delete namespaces container-runtime-7810) (&errors.StatusError{ErrStatus:v1.Status{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ListMeta:v1.ListMeta{SelfLink:"", ResourceVersion:"", Continue:"", RemainingItemCount:(*int64)(nil)}, Status:"Failure", Message:"an error on the server (\"Internal Server Error: \\\"/api/v1/namespaces/container-runtime-7810\\\": the server has received too many requests and has asked us to try again later\") has prevented the request from succeeding (delete namespaces container-runtime-7810)", Reason:"InternalError", Details:(*v1.StatusDetails)(0xc0026db020), Code:500}})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:335
				
from junit_skew21.xml


Kubernetes e2e suite [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] 18s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[k8s\.io\]\sContainer\sRuntime\sblackbox\stest\son\sterminated\scontainer\sshould\sreport\stermination\smessage\s\[LinuxOnly\]\sfrom\slog\soutput\sif\sTerminationMessagePolicy\sFallbackToLogsOnError\sis\sset\s\[NodeConformance\]\s\[Conformance\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  4 21:03:22.352: Couldn't delete ns: "container-runtime-6461": an error on the server ("Internal Server Error: \"/api/v1/namespaces/container-runtime-6461\": the server has received too many requests and has asked us to try again later") has prevented the request from succeeding (delete namespaces container-runtime-6461) (&errors.StatusError{ErrStatus:v1.Status{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ListMeta:v1.ListMeta{SelfLink:"", ResourceVersion:"", Continue:"", RemainingItemCount:(*int64)(nil)}, Status:"Failure", Message:"an error on the server (\"Internal Server Error: \\\"/api/v1/namespaces/container-runtime-6461\\\": the server has received too many requests and has asked us to try again later\") has prevented the request from succeeding (delete namespaces container-runtime-6461)", Reason:"InternalError", Details:(*v1.StatusDetails)(0xc0021a5920), Code:500}})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:335
				
from junit_skew09.xml


Kubernetes e2e suite [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set [NodeConformance] 10s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[k8s\.io\]\sContainer\sRuntime\sblackbox\stest\son\sterminated\scontainer\sshould\sreport\stermination\smessage\s\[LinuxOnly\]\sif\sTerminationMessagePath\sis\sset\s\[NodeConformance\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
Unexpected error:
    <*errors.withStack | 0xc00327d1c0>: {
        error: {
            cause: {
                ErrStatus: {
                    TypeMeta: {Kind: "", APIVersion: ""},
                    ListMeta: {
                        SelfLink: "",
                        ResourceVersion: "",
                        Continue: "",
                        RemainingItemCount: nil,
                    },
                    Status: "Failure",
                    Message: "rolebindings.rbac.authorization.k8s.io \"container-runtime-4876--e2e-test-privileged-psp\" is forbidden: user \"pr-kubekins@kubernetes-jenkins-pull.iam.gserviceaccount.com\" (groups=[\"system:authenticated\"]) is attempting to grant RBAC permissions not currently held:\n{APIGroups:[\"extensions\"], Resources:[\"podsecuritypolicies\"], ResourceNames:[\"e2e-test-privileged-psp\"], Verbs:[\"use\"]}",
                    Reason: "Forbidden",
                    Details: {
                        Name: "container-runtime-4876--e2e-test-privileged-psp",
                        Group: "rbac.authorization.k8s.io",
                        Kind: "rolebindings",
                        UID: "",
                        Causes: nil,
                        RetryAfterSeconds: 0,
                    },
                    Code: 403,
                },
            },
            msg: "binding ClusterRole/e2e-test-privileged-psp into \"container-runtime-4876\" for [{ServiceAccount  default container-runtime-4876}]",
        },
        stack: [0x15a53ee, 0x15eeda1, 0x15eed28, 0x15c416d, 0x15c297b, 0x7ac6dc, 0x7ac34f, 0x7ac774, 0x7b2441, 0x7b2064, 0x7b7acf, 0x7b75e4, 0x7b6e27, 0x7b948e, 0x7bbfb7, 0x7bbcfd, 0x36fd267, 0x370025b, 0x507960, 0x4607c1],
    }
    binding ClusterRole/e2e-test-privileged-psp into "container-runtime-4876" for [{ServiceAccount  default container-runtime-4876}]: rolebindings.rbac.authorization.k8s.io "container-runtime-4876--e2e-test-privileged-psp" is forbidden: user "pr-kubekins@kubernetes-jenkins-pull.iam.gserviceaccount.com" (groups=["system:authenticated"]) is attempting to grant RBAC permissions not currently held:
    {APIGroups:["extensions"], Resources:["podsecuritypolicies"], ResourceNames:["e2e-test-privileged-psp"], Verbs:["use"]}
occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/psp_util.go:151
				
from junit_skew01.xml
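
This failure is different from the overload errors: it is an RBAC escalation check. The CI user tried to create a RoleBinding granting use of the e2e-test-privileged-psp PodSecurityPolicy without holding that permission itself, so the apiserver refused with 403. Reconstructed from the error message, the rule the binder is missing looks like the following ClusterRole, expressed here as a client-go literal (the object name is an assumption):

package e2esketch

import (
	rbacv1 "k8s.io/api/rbac/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// pspUseRule is the permission the binding user must already hold before it
// may grant it onward: verb "use" on the named PodSecurityPolicy, which the
// 403 above shows is exposed here under the "extensions" API group.
var pspUseRule = rbacv1.ClusterRole{
	ObjectMeta: metav1.ObjectMeta{Name: "e2e-test-privileged-psp-user"}, // assumed name
	Rules: []rbacv1.PolicyRule{{
		APIGroups:     []string{"extensions"},
		Resources:     []string{"podsecuritypolicies"},
		ResourceNames: []string{"e2e-test-privileged-psp"},
		Verbs:         []string{"use"},
	}},
}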


Kubernetes e2e suite [k8s.io] Container Runtime blackbox test when running a container with a new image should be able to pull from private registry with secret [LinuxOnly] [NodeConformance] 49s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[k8s\.io\]\sContainer\sRuntime\sblackbox\stest\swhen\srunning\sa\scontainer\swith\sa\snew\simage\sshould\sbe\sable\sto\spull\sfrom\sprivate\sregistry\swith\ssecret\s\[LinuxOnly\]\s\[NodeConformance\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  4 21:07:12.258: Couldn't delete ns: "container-runtime-8702": an error on the server ("Internal Server Error: \"/api/v1/namespaces/container-runtime-8702/configmaps\": the server has received too many requests and has asked us to try again later") has prevented the request from succeeding (&errors.StatusError{ErrStatus:v1.Status{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ListMeta:v1.ListMeta{SelfLink:"", ResourceVersion:"", Continue:"", RemainingItemCount:(*int64)(nil)}, Status:"Failure", Message:"an error on the server (\"Internal Server Error: \\\"/api/v1/namespaces/container-runtime-8702/configmaps\\\": the server has received too many requests and has asked us to try again later\") has prevented the request from succeeding", Reason:"InternalError", Details:(*v1.StatusDetails)(0xc002241aa0), Code:500}})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:335
				
from junit_skew22.xml


Kubernetes e2e suite [k8s.io] Container Runtime blackbox test when running a container with a new image should be able to pull image from docker hub [LinuxOnly] [NodeConformance] 9.24s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[k8s\.io\]\sContainer\sRuntime\sblackbox\stest\swhen\srunning\sa\scontainer\swith\sa\snew\simage\sshould\sbe\sable\sto\spull\simage\sfrom\sdocker\shub\s\[LinuxOnly\]\s\[NodeConformance\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
Unexpected error:
    <*errors.StatusError | 0xc002253540>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {
                SelfLink: "",
                ResourceVersion: "",
                Continue: "",
                RemainingItemCount: nil,
            },
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/namespaces/container-runtime-6612/serviceaccounts?fieldSelector=metadata.name%3Ddefault&amp;watch=true\\\": the server has received too many requests and has asked us to try again later\") has prevented the request from succeeding (get serviceaccounts)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceaccounts",
                UID: "",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/namespaces/container-runtime-6612/serviceaccounts?fieldSelector=metadata.name%3Ddefault&amp;watch=true\": the server has received too many requests and has asked us to try again later",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/namespaces/container-runtime-6612/serviceaccounts?fieldSelector=metadata.name%3Ddefault&watch=true\": the server has received too many requests and has asked us to try again later") has prevented the request from succeeding (get serviceaccounts)
occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:217
				
from junit_skew05.xml


Kubernetes e2e suite [k8s.io] Container Runtime blackbox test when running a container with a new image should be able to pull image from gcr.io [LinuxOnly] [NodeConformance] 44s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[k8s\.io\]\sContainer\sRuntime\sblackbox\stest\swhen\srunning\sa\scontainer\swith\sa\snew\simage\sshould\sbe\sable\sto\spull\simage\sfrom\sgcr\.io\s\[LinuxOnly\]\s\[NodeConformance\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  4 21:03:14.026: Couldn't delete ns: "container-runtime-7167": an error on the server ("Internal Server Error: \"/apis/extensions/v1beta1/namespaces/container-runtime-7167/ingresses\": the server has received too many requests and has asked us to try again later") has prevented the request from succeeding (&errors.StatusError{ErrStatus:v1.Status{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ListMeta:v1.ListMeta{SelfLink:"", ResourceVersion:"", Continue:"", RemainingItemCount:(*int64)(nil)}, Status:"Failure", Message:"an error on the server (\"Internal Server Error: \\\"/apis/extensions/v1beta1/namespaces/container-runtime-7167/ingresses\\\": the server has received too many requests and has asked us to try again later\") has prevented the request from succeeding", Reason:"InternalError", Details:(*v1.StatusDetails)(0xc0017bf2c0), Code:500}})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:335
				
from junit_skew19.xml


Kubernetes e2e suite [k8s.io] Container Runtime blackbox test when running a container with a new image should not be able to pull from private registry without secret [NodeConformance] 9.38s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[k8s\.io\]\sContainer\sRuntime\sblackbox\stest\swhen\srunning\sa\scontainer\swith\sa\snew\simage\sshould\snot\sbe\sable\sto\spull\sfrom\sprivate\sregistry\swithout\ssecret\s\[NodeConformance\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
Unexpected error:
    <*errors.StatusError | 0xc0017a6460>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {
                SelfLink: "",
                ResourceVersion: "",
                Continue: "",
                RemainingItemCount: nil,
            },
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/namespaces/container-runtime-145/serviceaccounts?fieldSelector=metadata.name%3Ddefault&amp;watch=true\\\": the server has received too many requests and has asked us to try again later\") has prevented the request from succeeding (get serviceaccounts)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceaccounts",
                UID: "",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/namespaces/container-runtime-145/serviceaccounts?fieldSelector=metadata.name%3Ddefault&amp;watch=true\": the server has received too many requests and has asked us to try again later",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/namespaces/container-runtime-145/serviceaccounts?fieldSelector=metadata.name%3Ddefault&watch=true\": the server has received too many requests and has asked us to try again later") has prevented the request from succeeding (get serviceaccounts)
occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:217
				
from junit_skew12.xml


Kubernetes e2e suite [k8s.io] Container Runtime blackbox test when running a container with a new image should not be able to pull image from invalid registry [NodeConformance] 20s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[k8s\.io\]\sContainer\sRuntime\sblackbox\stest\swhen\srunning\sa\scontainer\swith\sa\snew\simage\sshould\snot\sbe\sable\sto\spull\simage\sfrom\sinvalid\sregistry\s\[NodeConformance\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
Unexpected error:
    <*errors.StatusError | 0xc001821cc0>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {
                SelfLink: "",
                ResourceVersion: "",
                Continue: "",
                RemainingItemCount: nil,
            },
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/namespaces/container-runtime-8842/serviceaccounts?fieldSelector=metadata.name%3Ddefault&amp;watch=true\\\": the server has received too many requests and has asked us to try again later\") has prevented the request from succeeding (get serviceaccounts)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceaccounts",
                UID: "",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/namespaces/container-runtime-8842/serviceaccounts?fieldSelector=metadata.name%3Ddefault&amp;watch=true\": the server has received too many requests and has asked us to try again later",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/namespaces/container-runtime-8842/serviceaccounts?fieldSelector=metadata.name%3Ddefault&watch=true\": the server has received too many requests and has asked us to try again later") has prevented the request from succeeding (get serviceaccounts)
occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:217
				
from junit_skew07.xml


Kubernetes e2e suite [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance] 53s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[k8s\.io\]\sContainer\sRuntime\sblackbox\stest\swhen\sstarting\sa\scontainer\sthat\sexits\sshould\srun\swith\sthe\sexpected\sstatus\s\[NodeConformance\]\s\[Conformance\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Error creating Pod
Unexpected error:
    <*errors.StatusError | 0xc001953d60>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {
                SelfLink: "",
                ResourceVersion: "",
                Continue: "",
                RemainingItemCount: nil,
            },
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/namespaces/container-runtime-8535/pods\\\": the server has received too many requests and has asked us to try again later\") has prevented the request from succeeding (post pods)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "pods",
                UID: "",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/namespaces/container-runtime-8535/pods\": the server has received too many requests and has asked us to try again later",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/namespaces/container-runtime-8535/pods\": the server has received too many requests and has asked us to try again later") has prevented the request from succeeding (post pods)
occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:80
				
from junit_skew03.xml


Kubernetes e2e suite [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] 15s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[k8s\.io\]\sDocker\sContainers\sshould\sbe\sable\sto\soverride\sthe\simage\'s\sdefault\sarguments\s\(docker\scmd\)\s\[NodeConformance\]\s\[Conformance\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
Unexpected error:
    <*errors.StatusError | 0xc001c06960>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {
                SelfLink: "",
                ResourceVersion: "",
                Continue: "",
                RemainingItemCount: nil,
            },
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/namespaces/containers-2364/serviceaccounts?fieldSelector=metadata.name%3Ddefault&amp;watch=true\\\": the server has received too many requests and has asked us to try again later\") has prevented the request from succeeding (get serviceaccounts)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceaccounts",
                UID: "",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/namespaces/containers-2364/serviceaccounts?fieldSelector=metadata.name%3Ddefault&amp;watch=true\": the server has received too many requests and has asked us to try again later",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/namespaces/containers-2364/serviceaccounts?fieldSelector=metadata.name%3Ddefault&watch=true\": the server has received too many requests and has asked us to try again later") has prevented the request from succeeding (get serviceaccounts)
occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:217
				
from junit_skew13.xml


Kubernetes e2e suite [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] 50s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[k8s\.io\]\sDocker\sContainers\sshould\sbe\sable\sto\soverride\sthe\simage\'s\sdefault\scommand\s\(docker\sentrypoint\)\s\[NodeConformance\]\s\[Conformance\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  4 20:45:27.361: Couldn't delete ns: "containers-6134": an error on the server ("Internal Server Error: \"/api/v1/namespaces/containers-6134/services\": the server has received too many requests and has asked us to try again later") has prevented the request from succeeding (&errors.StatusError{ErrStatus:v1.Status{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ListMeta:v1.ListMeta{SelfLink:"", ResourceVersion:"", Continue:"", RemainingItemCount:(*int64)(nil)}, Status:"Failure", Message:"an error on the server (\"Internal Server Error: \\\"/api/v1/namespaces/containers-6134/services\\\": the server has received too many requests and has asked us to try again later\") has prevented the request from succeeding", Reason:"InternalError", Details:(*v1.StatusDetails)(0xc00273aa80), Code:500}})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:335
				
from junit_skew20.xml


Kubernetes e2e suite [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance] 54s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[k8s\.io\]\sDocker\sContainers\sshould\sbe\sable\sto\soverride\sthe\simage\'s\sdefault\scommand\sand\sarguments\s\[NodeConformance\]\s\[Conformance\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Feb  4 20:54:10.879: Failed to delete pod "client-containers-e32cd475-003a-4c1f-ad40-b9680e752ce2": an error on the server ("Internal Server Error: \"/api/v1/namespaces/containers-8959/pods/client-containers-e32cd475-003a-4c1f-ad40-b9680e752ce2\": the server has received too many requests and has asked us to try again later") has prevented the request from succeeding (delete pods client-containers-e32cd475-003a-4c1f-ad40-b9680e752ce2)
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:175
				
from junit_skew19.xml


Kubernetes e2e suite [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance] 33s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[k8s\.io\]\sInitContainer\s\[NodeConformance\]\sshould\sinvoke\sinit\scontainers\son\sa\sRestartAlways\spod\s\[Conformance\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
Unexpected error:
    <*errors.withStack | 0xc001ef71e0>: {
        error: {
            cause: {
                ErrStatus: {
                    TypeMeta: {Kind: "", APIVersion: ""},
                    ListMeta: {
                        SelfLink: "",
                        ResourceVersion: "",
                        Continue: "",
                        RemainingItemCount: nil,
                    },
                    Status: "Failure",
                    Message: "rolebindings.rbac.authorization.k8s.io \"init-container-8398--e2e-test-privileged-psp\" is forbidden: user \"pr-kubekins@kubernetes-jenkins-pull.iam.gserviceaccount.com\" (groups=[\"system:authenticated\"]) is attempting to grant RBAC permissions not currently held:\n{APIGroups:[\"extensions\"], Resources:[\"podsecuritypolicies\"], ResourceNames:[\"e2e-test-privileged-psp\"], Verbs:[\"use\"]}",
                    Reason: "Forbidden",
                    Details: {
                        Name: "init-container-8398--e2e-test-privileged-psp",
                        Group: "rbac.authorization.k8s.io",
                        Kind: "rolebindings",
                        UID: "",
                        Causes: nil,
                        RetryAfterSeconds: 0,
                    },
                    Code: 403,
                },
            },
            msg: "binding ClusterRole/e2e-test-privileged-psp into \"init-container-8398\" for [{ServiceAccount  default init-container-8398}]",
        },
        stack: [0x15a53ee, 0x15eeda1, 0x15eed28, 0x15c416d, 0x15c297b, 0x7ac6dc, 0x7ac34f, 0x7ac774, 0x7b2441, 0x7b2064, 0x7b7acf, 0x7b75e4, 0x7b6e27, 0x7b948e, 0x7bbfb7, 0x7bbcfd, 0x36fd267, 0x370025b, 0x507960, 0x4607c1],
    }
    binding ClusterRole/e2e-test-privileged-psp into "init-container-8398" for [{ServiceAccount  default init-container-8398}]: rolebindings.rbac.authorization.k8s.io "init-container-8398--e2e-test-privileged-psp" is forbidden: user "pr-kubekins@kubernetes-jenkins-pull.iam.gserviceaccount.com" (groups=["system:authenticated"]) is attempting to grant RBAC permissions not currently held:
    {APIGroups:["extensions"], Resources:["podsecuritypolicies"], ResourceNames:["e2e-test-privileged-psp"], Verbs:["use"]}
occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/psp_util.go:151
				
from junit_skew17.xml


Kubernetes e2e suite [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance] 48s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[k8s\.io\]\sInitContainer\s\[NodeConformance\]\sshould\sinvoke\sinit\scontainers\son\sa\sRestartNever\spod\s\[Conformance\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  4 20:56:14.873: Couldn't delete ns: "init-container-4690": an error on the server ("Internal Server Error: \"/apis/autoscaling/v1/namespaces/init-container-4690/horizontalpodautoscalers\": the server has received too many requests and has asked us to try again later") has prevented the request from succeeding (&errors.StatusError{ErrStatus:v1.Status{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ListMeta:v1.ListMeta{SelfLink:"", ResourceVersion:"", Continue:"", RemainingItemCount:(*int64)(nil)}, Status:"Failure", Message:"an error on the server (\"Internal Server Error: \\\"/apis/autoscaling/v1/namespaces/init-container-4690/horizontalpodautoscalers\\\": the server has received too many requests and has asked us to try again later\") has prevented the request from succeeding", Reason:"InternalError", Details:(*v1.StatusDetails)(0xc002b914a0), Code:500}})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:335
				
from junit_skew03.xml


Kubernetes e2e suite [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] 41s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[k8s\.io\]\sInitContainer\s\[NodeConformance\]\sshould\snot\sstart\sapp\scontainers\sand\sfail\sthe\spod\sif\sinit\scontainers\sfail\son\sa\sRestartNever\spod\s\[Conformance\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  4 20:54:16.288: Couldn't delete ns: "init-container-8472": an error on the server ("Internal Server Error: \"/apis/nodemanagement.gke.io/v1alpha1/namespaces/init-container-8472/updateinfos\": the server has received too many requests and has asked us to try again later") has prevented the request from succeeding (&errors.StatusError{ErrStatus:v1.Status{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ListMeta:v1.ListMeta{SelfLink:"", ResourceVersion:"", Continue:"", RemainingItemCount:(*int64)(nil)}, Status:"Failure", Message:"an error on the server (\"Internal Server Error: \\\"/apis/nodemanagement.gke.io/v1alpha1/namespaces/init-container-8472/updateinfos\\\": the server has received too many requests and has asked us to try again later\") has prevented the request from succeeding", Reason:"InternalError", Details:(*v1.StatusDetails)(0xc0011ae840), Code:500}})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:335
				
from junit_skew17.xml


Kubernetes e2e suite [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance] 20s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[k8s\.io\]\sInitContainer\s\[NodeConformance\]\sshould\snot\sstart\sapp\scontainers\sif\sinit\scontainers\sfail\son\sa\sRestartAlways\spod\s\[Conformance\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
Unexpected error:
    <*errors.withStack | 0xc002748820>: {
        error: {
            cause: {
                ErrStatus: {
                    TypeMeta: {Kind: "", APIVersion: ""},
                    ListMeta: {
                        SelfLink: "",
                        ResourceVersion: "",
                        Continue: "",
                        RemainingItemCount: nil,
                    },
                    Status: "Failure",
                    Message: "an error on the server (\"Internal Server Error: \\\"/apis/rbac.authorization.k8s.io/v1beta1/namespaces/init-container-5915/rolebindings\\\": the server has received too many requests and has asked us to try again later\") has prevented the request from succeeding (post rolebindings.rbac.authorization.k8s.io)",
                    Reason: "InternalError",
                    Details: {
                        Name: "",
                        Group: "rbac.authorization.k8s.io",
                        Kind: "rolebindings",
                        UID: "",
                        Causes: [
                            {
                                Type: "UnexpectedServerResponse",
                                Message: "Internal Server Error: \"/apis/rbac.authorization.k8s.io/v1beta1/namespaces/init-container-5915/rolebindings\": the server has received too many requests and has asked us to try again later",
                                Field: "",
                            },
                        ],
                        RetryAfterSeconds: 0,
                    },
                    Code: 500,
                },
            },
            msg: "binding ClusterRole/e2e-test-privileged-psp into \"init-container-5915\" for [{ServiceAccount  default init-container-5915}]",
        },
        stack: [0x15a53ee, 0x15eeda1, 0x15eed28, 0x15c416d, 0x15c297b, 0x7ac6dc, 0x7ac34f, 0x7ac774, 0x7b2441, 0x7b2064, 0x7b7acf, 0x7b75e4, 0x7b6e27, 0x7b948e, 0x7bbfb7, 0x7bbcfd, 0x36fd267, 0x370025b, 0x507960, 0x4607c1],
    }
    binding ClusterRole/e2e-test-privileged-psp into "init-container-5915" for [{ServiceAccount  default init-container-5915}]: an error on the server ("Internal Server Error: \"/apis/rbac.authorization.k8s.io/v1beta1/namespaces/init-container-5915/rolebindings\": the server has received too many requests and has asked us to try again later") has prevented the request from succeeding (post rolebindings.rbac.authorization.k8s.io)
occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/psp_util.go:151
				
from junit_skew11.xml


Kubernetes e2e suite [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] 9.34s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[k8s\.io\]\sKubelet\swhen\sscheduling\sa\sbusybox\sPod\swith\shostAliases\sshould\swrite\sentries\sto\s\/etc\/hosts\s\[LinuxOnly\]\s\[NodeConformance\]\s\[Conformance\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
Unexpected error:
    <*errors.StatusError | 0xc00178c280>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {
                SelfLink: "",
                ResourceVersion: "",
                Continue: "",
                RemainingItemCount: nil,
            },
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/namespaces/kubelet-test-2842/serviceaccounts?fieldSelector=metadata.name%3Ddefault&amp;watch=true\\\": the server has received too many requests and has asked us to try again later\") has prevented the request from succeeding (get serviceaccounts)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceaccounts",
                UID: "",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/namespaces/kubelet-test-2842/serviceaccounts?fieldSelector=metadata.name%3Ddefault&amp;watch=true\": the server has received too many requests and has asked us to try again later",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/namespaces/kubelet-test-2842/serviceaccounts?fieldSelector=metadata.name%3Ddefault&watch=true\": the server has received too many requests and has asked us to try again later") has prevented the request from succeeding (get serviceaccounts)
occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:217
				
from junit_skew08.xml


Kubernetes e2e suite [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance] 1m31s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[k8s\.io\]\sKubelet\swhen\sscheduling\sa\sbusybox\scommand\sin\sa\spod\sshould\sprint\sthe\soutput\sto\slogs\s\[NodeConformance\]\s\[Conformance\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  4 20:51:08.406: Couldn't delete ns: "kubelet-test-970": an error on the server ("Internal Server Error: \"/api/v1/namespaces/kubelet-test-970/secrets\": the server has received too many requests and has asked us to try again later") has prevented the request from succeeding (&errors.StatusError{ErrStatus:v1.Status{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ListMeta:v1.ListMeta{SelfLink:"", ResourceVersion:"", Continue:"", RemainingItemCount:(*int64)(nil)}, Status:"Failure", Message:"an error on the server (\"Internal Server Error: \\\"/api/v1/namespaces/kubelet-test-970/secrets\\\": the server has received too many requests and has asked us to try again later\") has prevented the request from succeeding", Reason:"InternalError", Details:(*v1.StatusDetails)(0xc003edab40), Code:500}})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:335
				
from junit_skew09.xml


Kubernetes e2e suite [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance] 44s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[k8s\.io\]\sKubelet\swhen\sscheduling\sa\sbusybox\scommand\sthat\salways\sfails\sin\sa\spod\sshould\sbe\spossible\sto\sdelete\s\[NodeConformance\]\s\[Conformance\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  4 20:56:17.748: Couldn't delete ns: "kubelet-test-2791": an error on the server ("Internal Server Error: \"/api/v1/namespaces/kubelet-test-2791/limitranges\": the server has received too many requests and has asked us to try again later") has prevented the request from succeeding (&errors.StatusError{ErrStatus:v1.Status{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ListMeta:v1.ListMeta{SelfLink:"", ResourceVersion:"", Continue:"", RemainingItemCount:(*int64)(nil)}, Status:"Failure", Message:"an error on the server (\"Internal Server Error: \\\"/api/v1/namespaces/kubelet-test-2791/limitranges\\\": the server has received too many requests and has asked us to try again later\") has prevented the request from succeeding", Reason:"InternalError", Details:(*v1.StatusDetails)(0xc001d085a0), Code:500}})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:335
				
from junit_skew13.xml


Kubernetes e2e suite [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance] 18s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[k8s\.io\]\sKubelet\swhen\sscheduling\sa\sbusybox\scommand\sthat\salways\sfails\sin\sa\spod\sshould\shave\san\sterminated\sreason\s\[NodeConformance\]\s\[Conformance\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
Unexpected error:
    <*errors.withStack | 0xc0035192c0>: {
        error: {
            cause: {
                ErrStatus: {
                    TypeMeta: {Kind: "", APIVersion: ""},
                    ListMeta: {
                        SelfLink: "",
                        ResourceVersion: "",
                        Continue: "",
                        RemainingItemCount: nil,
                    },
                    Status: "Failure",
                    Message: "an error on the server (\"Internal Server Error: \\\"/apis/rbac.authorization.k8s.io/v1beta1/namespaces/kubelet-test-196/rolebindings\\\": the server has received too many requests and has asked us to try again later\") has prevented the request from succeeding (post rolebindings.rbac.authorization.k8s.io)",
                    Reason: "InternalError",
                    Details: {
                        Name: "",
                        Group: "rbac.authorization.k8s.io",
                        Kind: "rolebindings",
                        UID: "",
                        Causes: [
                            {
                                Type: "UnexpectedServerResponse",
                                Message: "Internal Server Error: \"/apis/rbac.authorization.k8s.io/v1beta1/namespaces/kubelet-test-196/rolebindings\": the server has received too many requests and has asked us to try again later",
                                Field: "",
                            },
                        ],
                        RetryAfterSeconds: 0,
                    },
                    Code: 500,
                },
            },
            msg: "binding ClusterRole/e2e-test-privileged-psp into \"kubelet-test-196\" for [{ServiceAccount  default kubelet-test-196}]",
        },
        stack: [0x15a53ee, 0x15eeda1, 0x15eed28, 0x15c416d, 0x15c297b, 0x7ac6dc, 0x7ac34f, 0x7ac774, 0x7b2441, 0x7b2064, 0x7b7acf, 0x7b75e4, 0x7b6e27, 0x7b948e, 0x7bbfb7, 0x7bbcfd, 0x36fd267, 0x370025b, 0x507960, 0x4607c1],
    }
    binding ClusterRole/e2e-test-privileged-psp into "kubelet-test-196" for [{ServiceAccount  default kubelet-test-196}]: an error on the server ("Internal Server Error: \"/apis/rbac.authorization.k8s.io/v1beta1/namespaces/kubelet-test-196/rolebindings\": the server has received too many requests and has asked us to try again later") has prevented the request from succeeding (post rolebindings.rbac.authorization.k8s.io)
occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/psp_util.go:151
				
from junit_skew01.xml


Kubernetes e2e suite [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] 9.14s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[k8s\.io\]\sKubelet\swhen\sscheduling\sa\sread\sonly\sbusybox\scontainer\sshould\snot\swrite\sto\sroot\sfilesystem\s\[LinuxOnly\]\s\[NodeConformance\]\s\[Conformance\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
Unexpected error:
    <*errors.StatusError | 0xc001b8f4a0>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {
                SelfLink: "",
                ResourceVersion: "",
                Continue: "",
                RemainingItemCount: nil,
            },
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/namespaces/kubelet-test-2142/serviceaccounts?fieldSelector=metadata.name%3Ddefault&amp;watch=true\\\": the server has received too many requests and has asked us to try again later\") has prevented the request from succeeding (get serviceaccounts)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceaccounts",
                UID: "",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/namespaces/kubelet-test-2142/serviceaccounts?fieldSelector=metadata.name%3Ddefault&amp;watch=true\": the server has received too many requests and has asked us to try again later",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/namespaces/kubelet-test-2142/serviceaccounts?fieldSelector=metadata.name%3Ddefault&watch=true\": the server has received too many requests and has asked us to try again later") has prevented the request from succeeding (get serviceaccounts)
occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:217
				
from junit_skew15.xml


Kubernetes e2e suite [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] 1m14s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[k8s\.io\]\sKubeletManagedEtcHosts\sshould\stest\skubelet\smanaged\s\/etc\/hosts\sfile\s\[LinuxOnly\]\s\[NodeConformance\]\s\[Conformance\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
failed to execute command in pod test-host-network-pod, container busybox-1: unable to upgrade connection: Internal Server Error: "/api/v1/namespaces/e2e-kubelet-etc-hosts-7871/pods/test-host-network-pod/exec?command=cat&command=%2Fetc%2Fhosts&container=busybox-1&container=busybox-1&stderr=true&stdout=true": the server has received too many requests and has asked us to try again later
Unexpected error:
    <*errors.errorString | 0xc0022d0400>: {
        s: "unable to upgrade connection: Internal Server Error: \"/api/v1/namespaces/e2e-kubelet-etc-hosts-7871/pods/test-host-network-pod/exec?command=cat&amp;command=%2Fetc%2Fhosts&amp;container=busybox-1&amp;container=busybox-1&amp;stderr=true&amp;stdout=true\": the server has received too many requests and has asked us to try again later",
    }
    unable to upgrade connection: Internal Server Error: "/api/v1/namespaces/e2e-kubelet-etc-hosts-7871/pods/test-host-network-pod/exec?command=cat&command=%2Fetc%2Fhosts&container=busybox-1&container=busybox-1&stderr=true&stdout=true": the server has received too many requests and has asked us to try again later
occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/exec_util.go:102
				
from junit_skew13.xml
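
The "unable to upgrade connection" wording is specific to exec: streaming through pods/exec requires upgrading the HTTP connection (SPDY), and here the upgrade handshake itself got the overload response. A hedged sketch of issuing the same request with client-go's remotecommand package follows; the pod and container names come from the log, the helper name and everything else is illustrative.

package e2esketch

import (
	"bytes"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/kubernetes/scheme"
	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/remotecommand"
)

// catEtcHosts reproduces the failing request: run `cat /etc/hosts` in
// container busybox-1 of pod test-host-network-pod over an upgraded stream.
func catEtcHosts(c kubernetes.Interface, cfg *rest.Config, ns string) (string, error) {
	req := c.CoreV1().RESTClient().Post().
		Resource("pods").Namespace(ns).Name("test-host-network-pod").
		SubResource("exec").
		VersionedParams(&corev1.PodExecOptions{
			Container: "busybox-1",
			Command:   []string{"cat", "/etc/hosts"},
			Stdout:    true,
			Stderr:    true,
		}, scheme.ParameterCodec)

	// NewSPDYExecutor performs the connection upgrade that was refused above.
	exec, err := remotecommand.NewSPDYExecutor(cfg, "POST", req.URL())
	if err != nil {
		return "", err
	}
	var stdout, stderr bytes.Buffer
	err = exec.Stream(remotecommand.StreamOptions{Stdout: &stdout, Stderr: &stderr})
	return stdout.String(), err
}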


Kubernetes e2e suite [k8s.io] NodeLease when the NodeLease feature is enabled the kubelet should create and update a lease in the kube-node-lease namespace 13s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[k8s\.io\]\sNodeLease\swhen\sthe\sNodeLease\sfeature\sis\senabled\sthe\skubelet\sshould\screate\sand\supdate\sa\slease\sin\sthe\skube\-node\-lease\snamespace$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  4 20:54:16.406: Couldn't delete ns: "node-lease-test-7724": an error on the server ("Internal Server Error: \"/api/v1/namespaces/node-lease-test-7724/podtemplates\": the server has received too many requests and has asked us to try again later") has prevented the request from succeeding (&errors.StatusError{ErrStatus:v1.Status{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ListMeta:v1.ListMeta{SelfLink:"", ResourceVersion:"", Continue:"", RemainingItemCount:(*int64)(nil)}, Status:"Failure", Message:"an error on the server (\"Internal Server Error: \\\"/api/v1/namespaces/node-lease-test-7724/podtemplates\\\": the server has received too many requests and has asked us to try again later\") has prevented the request from succeeding", Reason:"InternalError", Details:(*v1.StatusDetails)(0xc00256e0c0), Code:500}})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:335
				
				from junit_skew06.xml

Filter through log files | View test history on testgrid
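
The namespace-deletion failures like the one above are transient by the server's own admission ("asked us to try again later"), so a retry wrapper is the natural client-side response. A hedged sketch assuming client-go's retry.OnError helper; the backoff parameters are invented, and Delete uses the pre-1.17 signature matching this job's v1.16.x client (newer clients also take a context).

    // A hedged sketch, not the e2e framework's own cleanup path.
    package main

    import (
        "time"

        apierrors "k8s.io/apimachinery/pkg/api/errors"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/util/retry"
    )

    func deleteNamespaceWithRetry(clientset kubernetes.Interface, ns string) error {
        backoff := wait.Backoff{Steps: 5, Duration: 2 * time.Second, Factor: 2.0}
        // retry.OnError re-runs the last func while the predicate says the
        // error is retriable, sleeping per backoff between attempts.
        return retry.OnError(backoff,
            func(err error) bool {
                // Retry only the overload responses seen in this run.
                return apierrors.IsInternalError(err) || apierrors.IsTooManyRequests(err)
            },
            func() error {
                return clientset.CoreV1().Namespaces().Delete(ns, nil)
            })
    }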


Kubernetes e2e suite [k8s.io] NodeLease when the NodeLease feature is enabled the kubelet should report node status infrequently 54s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[k8s\.io\]\sNodeLease\swhen\sthe\sNodeLease\sfeature\sis\senabled\sthe\skubelet\sshould\sreport\snode\sstatus\sinfrequently$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  4 20:53:33.861: Couldn't delete ns: "node-lease-test-7290": an error on the server ("Internal Server Error: \"/api/v1/namespaces/node-lease-test-7290\": the server has received too many requests and has asked us to try again later") has prevented the request from succeeding (delete namespaces node-lease-test-7290) (&errors.StatusError{ErrStatus:v1.Status{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ListMeta:v1.ListMeta{SelfLink:"", ResourceVersion:"", Continue:"", RemainingItemCount:(*int64)(nil)}, Status:"Failure", Message:"an error on the server (\"Internal Server Error: \\\"/api/v1/namespaces/node-lease-test-7290\\\": the server has received too many requests and has asked us to try again later\") has prevented the request from succeeding (delete namespaces node-lease-test-7290)", Reason:"InternalError", Details:(*v1.StatusDetails)(0xc002c4e780), Code:500}})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:335
				
				from junit_skew19.xml

Filter through log files | View test history on testgrid


Kubernetes e2e suite [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] 23s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[k8s\.io\]\sPods\sshould\sallow\sactiveDeadlineSeconds\sto\sbe\supdated\s\[NodeConformance\]\s\[Conformance\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  4 20:40:15.974: Couldn't delete ns: "pods-8989": an error on the server ("Internal Server Error: \"/apis/coordination.k8s.io/v1/namespaces/pods-8989/leases\": the server has received too many requests and has asked us to try again later") has prevented the request from succeeding (&errors.StatusError{ErrStatus:v1.Status{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ListMeta:v1.ListMeta{SelfLink:"", ResourceVersion:"", Continue:"", RemainingItemCount:(*int64)(nil)}, Status:"Failure", Message:"an error on the server (\"Internal Server Error: \\\"/apis/coordination.k8s.io/v1/namespaces/pods-8989/leases\\\": the server has received too many requests and has asked us to try again later\") has prevented the request from succeeding", Reason:"InternalError", Details:(*v1.StatusDetails)(0xc002b114a0), Code:500}})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:335
				
				from junit_skew10.xml

Filter through log files | View test history on testgrid


Kubernetes e2e suite [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance] 1m13s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[k8s\.io\]\sPods\sshould\sbe\ssubmitted\sand\sremoved\s\[NodeConformance\]\s\[Conformance\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  4 20:43:42.660: Couldn't delete ns: "pods-4314": an error on the server ("Internal Server Error: \"/api/v1/namespaces/pods-4314/pods\": the server has received too many requests and has asked us to try again later") has prevented the request from succeeding (&errors.StatusError{ErrStatus:v1.Status{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ListMeta:v1.ListMeta{SelfLink:"", ResourceVersion:"", Continue:"", RemainingItemCount:(*int64)(nil)}, Status:"Failure", Message:"an error on the server (\"Internal Server Error: \\\"/api/v1/namespaces/pods-4314/pods\\\": the server has received too many requests and has asked us to try again later\") has prevented the request from succeeding", Reason:"InternalError", Details:(*v1.StatusDetails)(0xc00187a000), Code:500}})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:335
				
				from junit_skew05.xml

Filter through log files | View test history on testgrid


Kubernetes e2e suite [k8s.io] Pods should be updated [NodeConformance] [Conformance] 21s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[k8s\.io\]\sPods\sshould\sbe\supdated\s\[NodeConformance\]\s\[Conformance\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
Unexpected error:
    <*errors.withStack | 0xc001b2d860>: {
        error: {
            cause: {
                ErrStatus: {
                    TypeMeta: {Kind: "", APIVersion: ""},
                    ListMeta: {
                        SelfLink: "",
                        ResourceVersion: "",
                        Continue: "",
                        RemainingItemCount: nil,
                    },
                    Status: "Failure",
                    Message: "an error on the server (\"Internal Server Error: \\\"/apis/rbac.authorization.k8s.io/v1beta1/namespaces/pods-1110/rolebindings\\\": the server has received too many requests and has asked us to try again later\") has prevented the request from succeeding (post rolebindings.rbac.authorization.k8s.io)",
                    Reason: "InternalError",
                    Details: {
                        Name: "",
                        Group: "rbac.authorization.k8s.io",
                        Kind: "rolebindings",
                        UID: "",
                        Causes: [
                            {
                                Type: "UnexpectedServerResponse",
                                Message: "Internal Server Error: \"/apis/rbac.authorization.k8s.io/v1beta1/namespaces/pods-1110/rolebindings\": the server has received too many requests and has asked us to try again later",
                                Field: "",
                            },
                        ],
                        RetryAfterSeconds: 0,
                    },
                    Code: 500,
                },
            },
            msg: "binding ClusterRole/e2e-test-privileged-psp into \"pods-1110\" for [{ServiceAccount  default pods-1110}]",
        },
        stack: [0x15a53ee, 0x15eeda1, 0x15eed28, 0x15c416d, 0x15c297b, 0x7ac6dc, 0x7ac34f, 0x7ac774, 0x7b2441, 0x7b2064, 0x7b7acf, 0x7b75e4, 0x7b6e27, 0x7b948e, 0x7bbfb7, 0x7bbcfd, 0x36fd267, 0x370025b, 0x507960, 0x4607c1],
    }
    binding ClusterRole/e2e-test-privileged-psp into "pods-1110" for [{ServiceAccount  default pods-1110}]: an error on the server ("Internal Server Error: \"/apis/rbac.authorization.k8s.io/v1beta1/namespaces/pods-1110/rolebindings\": the server has received too many requests and has asked us to try again later") has prevented the request from succeeding (post rolebindings.rbac.authorization.k8s.io)
occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/psp_util.go:151
				
				from junit_skew25.xml

Filter through log files | View test history on testgrid


Kubernetes e2e suite [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance] 31s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[k8s\.io\]\sPods\sshould\scontain\senvironment\svariables\sfor\sservices\s\[NodeConformance\]\s\[Conformance\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  4 20:47:28.739: Couldn't delete ns: "pods-1330": an error on the server ("Internal Server Error: \"/api/v1/namespaces/pods-1330\": the server has received too many requests and has asked us to try again later") has prevented the request from succeeding (delete namespaces pods-1330) (&errors.StatusError{ErrStatus:v1.Status{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ListMeta:v1.ListMeta{SelfLink:"", ResourceVersion:"", Continue:"", RemainingItemCount:(*int64)(nil)}, Status:"Failure", Message:"an error on the server (\"Internal Server Error: \\\"/api/v1/namespaces/pods-1330\\\": the server has received too many requests and has asked us to try again later\") has prevented the request from succeeding (delete namespaces pods-1330)", Reason:"InternalError", Details:(*v1.StatusDetails)(0xc00350bec0), Code:500}})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:335
				
				from junit_skew11.xml

Filter through log files | View test history on testgrid


Kubernetes e2e suite [k8s.io] Pods should get a host IP [NodeConformance] [Conformance] 33s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[k8s\.io\]\sPods\sshould\sget\sa\shost\sIP\s\[NodeConformance\]\s\[Conformance\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  4 20:54:33.324: Couldn't delete ns: "pods-4813": an error on the server ("Internal Server Error: \"/api/v1/namespaces/pods-4813/configmaps\": the server has received too many requests and has asked us to try again later") has prevented the request from succeeding (&errors.StatusError{ErrStatus:v1.Status{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ListMeta:v1.ListMeta{SelfLink:"", ResourceVersion:"", Continue:"", RemainingItemCount:(*int64)(nil)}, Status:"Failure", Message:"an error on the server (\"Internal Server Error: \\\"/api/v1/namespaces/pods-4813/configmaps\\\": the server has received too many requests and has asked us to try again later\") has prevented the request from succeeding", Reason:"InternalError", Details:(*v1.StatusDetails)(0xc002ba97a0), Code:500}})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:335
				
				from junit_skew03.xml

Filter through log files | View test history on testgrid


Kubernetes e2e suite [k8s.io] Pods should support pod readiness gates [NodeFeature:PodReadinessGate] 25s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[k8s\.io\]\sPods\sshould\ssupport\spod\sreadiness\sgates\s\[NodeFeature\:PodReadinessGate\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:775
Unexpected error:
    <*errors.StatusError | 0xc0011fae60>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {
                SelfLink: "",
                ResourceVersion: "",
                Continue: "",
                RemainingItemCount: nil,
            },
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/namespaces/pods-1478/pods/pod-ready/status\\\": the server has received too many requests and has asked us to try again later\") has prevented the request from succeeding (patch pods pod-ready)",
            Reason: "InternalError",
            Details: {
                Name: "pod-ready",
                Group: "",
                Kind: "pods",
                UID: "",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/namespaces/pods-1478/pods/pod-ready/status\": the server has received too many requests and has asked us to try again later",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/namespaces/pods-1478/pods/pod-ready/status\": the server has received too many requests and has asked us to try again later") has prevented the request from succeeding (patch pods pod-ready)
occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:818
				
				from junit_skew13.xml

Filter through log files | View test history on testgrid
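
The failed request in this entry is a PATCH against the status subresource of pods/pod-ready, which is how a readiness-gate condition gets flipped. A sketch of that call under assumed names (clientset, namespace, condition type); strategic merge works here because pod conditions merge on their "type" key.

    // A sketch of the kind of request that failed above; all names are
    // placeholders, not taken from the test source.
    package main

    import (
        "fmt"

        "k8s.io/apimachinery/pkg/types"
        "k8s.io/client-go/kubernetes"
    )

    func setReadinessGate(clientset kubernetes.Interface, ns, pod, condType string, ready bool) error {
        status := "False"
        if ready {
            status = "True"
        }
        // Strategic merge touches only the named condition. Pre-1.17 Patch
        // signature; the trailing "status" argument targets the subresource.
        patch := []byte(fmt.Sprintf(`{"status":{"conditions":[{"type":%q,"status":%q}]}}`, condType, status))
        _, err := clientset.CoreV1().Pods(ns).Patch(pod, types.StrategicMergePatchType, patch, "status")
        return err
    }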


Kubernetes e2e suite [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance] 59s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[k8s\.io\]\sPods\sshould\ssupport\sremote\scommand\sexecution\sover\swebsockets\s\[NodeConformance\]\s\[Conformance\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Feb  4 21:03:24.637: Failed to open websocket to wss://35.224.215.13/api/v1/namespaces/pods-8937/pods/pod-exec-websocket-65219d7f-b862-42fa-bcdb-91b1973cfae1/exec?command=echo&command=remote+execution+test&container=main&stderr=1&stdout=1: websocket.Dial wss://35.224.215.13/api/v1/namespaces/pods-8937/pods/pod-exec-websocket-65219d7f-b862-42fa-bcdb-91b1973cfae1/exec?command=echo&command=remote+execution+test&container=main&stderr=1&stdout=1: bad status
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:577
				
				from junit_skew04.xml

Filter through log files | View test history on testgrid
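
This test dials the exec subresource over a raw websocket, which is why the failure surfaces as "websocket.Dial ... bad status" rather than a StatusError. For comparison, a sketch of the more common client-go route to the same endpoint via SPDY; config, namespace, and pod name are placeholders, while the container and command mirror the failing URL above.

    // A comparison sketch, not the websocket path the test itself uses.
    package main

    import (
        "os"

        corev1 "k8s.io/api/core/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/kubernetes/scheme"
        "k8s.io/client-go/rest"
        "k8s.io/client-go/tools/remotecommand"
    )

    func execInPod(config *rest.Config, clientset kubernetes.Interface, ns, pod string) error {
        // Build the same /exec subresource URL the test dialed directly.
        req := clientset.CoreV1().RESTClient().Post().
            Resource("pods").Namespace(ns).Name(pod).SubResource("exec").
            VersionedParams(&corev1.PodExecOptions{
                Container: "main",
                Command:   []string{"echo", "remote execution test"},
                Stdout:    true,
                Stderr:    true,
            }, scheme.ParameterCodec)

        exec, err := remotecommand.NewSPDYExecutor(config, "POST", req.URL())
        if err != nil {
            return err
        }
        return exec.Stream(remotecommand.StreamOptions{Stdout: os.Stdout, Stderr: os.Stderr})
    }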


Kubernetes e2e suite [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance] 59s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[k8s\.io\]\sPods\sshould\ssupport\sretrieving\slogs\sfrom\sthe\scontainer\sover\swebsockets\s\[NodeConformance\]\s\[Conformance\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  4 21:00:27.402: Couldn't delete ns: "pods-6790": an error on the server ("Internal Server Error: \"/api/v1/namespaces/pods-6790/podtemplates\": the server has received too many requests and has asked us to try again later") has prevented the request from succeeding (&errors.StatusError{ErrStatus:v1.Status{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ListMeta:v1.ListMeta{SelfLink:"", ResourceVersion:"", Continue:"", RemainingItemCount:(*int64)(nil)}, Status:"Failure", Message:"an error on the server (\"Internal Server Error: \\\"/api/v1/namespaces/pods-6790/podtemplates\\\": the server has received too many requests and has asked us to try again later\") has prevented the request from succeeding", Reason:"InternalError", Details:(*v1.StatusDetails)(0xc002d88960), Code:500}})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:335
				
				from junit_skew10.xml

Filter through log files | View test history on testgrid


Kubernetes e2e suite [k8s.io] PrivilegedPod [NodeConformance] should enable privileged commands [LinuxOnly] 11s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[k8s\.io\]\sPrivilegedPod\s\[NodeConformance\]\sshould\senable\sprivileged\scommands\s\[LinuxOnly\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
Unexpected error:
    <*errors.StatusError | 0xc001eb3f40>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {
                SelfLink: "",
                ResourceVersion: "",
                Continue: "",
                RemainingItemCount: nil,
            },
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/namespaces/e2e-privileged-pod-6461/serviceaccounts?fieldSelector=metadata.name%3Ddefault&amp;watch=true\\\": the server has received too many requests and has asked us to try again later\") has prevented the request from succeeding (get serviceaccounts)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceaccounts",
                UID: "",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/namespaces/e2e-privileged-pod-6461/serviceaccounts?fieldSelector=metadata.name%3Ddefault&amp;watch=true\": the server has received too many requests and has asked us to try again later",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/namespaces/e2e-privileged-pod-6461/serviceaccounts?fieldSelector=metadata.name%3Ddefault&watch=true\": the server has received too many requests and has asked us to try again later") has prevented the request from succeeding (get serviceaccounts)
occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:217
				
				from junit_skew08.xml

Filter through log files | View test history on testgrid


Kubernetes e2e suite [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] 4m13s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[k8s\.io\]\sProbing\scontainer\sshould\s\*not\*\sbe\srestarted\swith\sa\s\/healthz\shttp\sliveness\sprobe\s\[NodeConformance\]\s\[Conformance\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  4 20:51:15.499: Couldn't delete ns: "container-probe-2014": an error on the server ("Internal Server Error: \"/api/v1/namespaces/container-probe-2014\": the server has received too many requests and has asked us to try again later") has prevented the request from succeeding (delete namespaces container-probe-2014) (&errors.StatusError{ErrStatus:v1.Status{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ListMeta:v1.ListMeta{SelfLink:"", ResourceVersion:"", Continue:"", RemainingItemCount:(*int64)(nil)}, Status:"Failure", Message:"an error on the server (\"Internal Server Error: \\\"/api/v1/namespaces/container-probe-2014\\\": the server has received too many requests and has asked us to try again later\") has prevented the request from succeeding (delete namespaces container-probe-2014)", Reason:"InternalError", Details:(*v1.StatusDetails)(0xc0019ee120), Code:500}})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:335
				
				from junit_skew13.xml

Filter through log files | View test history on testgrid


Kubernetes e2e suite [k8s.io] Probing container should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] 2m47s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[k8s\.io\]\sProbing\scontainer\sshould\s\*not\*\sbe\srestarted\swith\sa\sexec\s\"cat\s\/tmp\/health\"\sliveness\sprobe\s\[NodeConformance\]\s\[Conformance\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
getting pod 
Unexpected error:
    <*errors.StatusError | 0xc00259de00>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {
                SelfLink: "",
                ResourceVersion: "",
                Continue: "",
                RemainingItemCount: nil,
            },
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/namespaces/container-probe-1853/pods/busybox-559ac1dc-f3d5-4287-a521-a81730a6781a\\\": the server has received too many requests and has asked us to try again later\") has prevented the request from succeeding (get pods busybox-559ac1dc-f3d5-4287-a521-a81730a6781a)",
            Reason: "InternalError",
            Details: {
                Name: "busybox-559ac1dc-f3d5-4287-a521-a81730a6781a",
                Group: "",
                Kind: "pods",
                UID: "",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/namespaces/container-probe-1853/pods/busybox-559ac1dc-f3d5-4287-a521-a81730a6781a\": the server has received too many requests and has asked us to try again later",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/namespaces/container-probe-1853/pods/busybox-559ac1dc-f3d5-4287-a521-a81730a6781a\": the server has received too many requests and has asked us to try again later") has prevented the request from succeeding (get pods busybox-559ac1dc-f3d5-4287-a521-a81730a6781a)
occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:439
				
				from junit_skew06.xml

Filter through log files | View test history on testgrid


Kubernetes e2e suite [k8s.io] Probing container should *not* be restarted with a non-local redirect http liveness probe 26s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[k8s\.io\]\sProbing\scontainer\sshould\s\*not\*\sbe\srestarted\swith\sa\snon\-local\sredirect\shttp\sliveness\sprobe$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
Unexpected error:
    <*errors.StatusError | 0xc00215b360>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {
                SelfLink: "",
                ResourceVersion: "",
                Continue: "",
                RemainingItemCount: nil,
            },
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/namespaces/container-probe-9981/serviceaccounts?fieldSelector=metadata.name%3Ddefault&amp;watch=true\\\": the server has received too many requests and has asked us to try again later\") has prevented the request from succeeding (get serviceaccounts)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceaccounts",
                UID: "",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/namespaces/container-probe-9981/serviceaccounts?fieldSelector=metadata.name%3Ddefault&amp;watch=true\": the server has received too many requests and has asked us to try again later",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/namespaces/container-probe-9981/serviceaccounts?fieldSelector=metadata.name%3Ddefault&watch=true\": the server has received too many requests and has asked us to try again later") has prevented the request from succeeding (get serviceaccounts)
occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:217
				
				from junit_skew16.xml

Filter through log files | View test history on testgrid


Kubernetes e2e suite [k8s.io] Probing container should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] 4m24s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[k8s\.io\]\sProbing\scontainer\sshould\s\*not\*\sbe\srestarted\swith\sa\stcp\:8080\sliveness\sprobe\s\[NodeConformance\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  4 20:56:12.300: Couldn't delete ns: "container-probe-4526": an error on the server ("Internal Server Error: \"/api/v1/namespaces/container-probe-4526/serviceaccounts\": the server has received too many requests and has asked us to try again later") has prevented the request from succeeding (&errors.StatusError{ErrStatus:v1.Status{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ListMeta:v1.ListMeta{SelfLink:"", ResourceVersion:"", Continue:"", RemainingItemCount:(*int64)(nil)}, Status:"Failure", Message:"an error on the server (\"Internal Server Error: \\\"/api/v1/namespaces/container-probe-4526/serviceaccounts\\\": the server has received too many requests and has asked us to try again later\") has prevented the request from succeeding", Reason:"InternalError", Details:(*v1.StatusDetails)(0xc003fb6f00), Code:500}})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:335
				
				from junit_skew09.xml

Filter through log files | View test history on testgrid


Kubernetes e2e suite [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] 38s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[k8s\.io\]\sProbing\scontainer\sshould\sbe\srestarted\swith\sa\s\/healthz\shttp\sliveness\sprobe\s\[NodeConformance\]\s\[Conformance\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  4 20:40:30.712: Couldn't delete ns: "container-probe-2222": an error on the server ("Internal Server Error: \"/api/v1/namespaces/container-probe-2222\": the server has received too many requests and has asked us to try again later") has prevented the request from succeeding (delete namespaces container-probe-2222) (&errors.StatusError{ErrStatus:v1.Status{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ListMeta:v1.ListMeta{SelfLink:"", ResourceVersion:"", Continue:"", RemainingItemCount:(*int64)(nil)}, Status:"Failure", Message:"an error on the server (\"Internal Server Error: \\\"/api/v1/namespaces/container-probe-2222\\\": the server has received too many requests and has asked us to try again later\") has prevented the request from succeeding (delete namespaces container-probe-2222)", Reason:"InternalError", Details:(*v1.StatusDetails)(0xc001fa2480), Code:500}})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:335
				
				from junit_skew12.xml

Filter through log files | View test history on testgrid


Kubernetes e2e suite [k8s.io] Probing container should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] 20s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[k8s\.io\]\sProbing\scontainer\sshould\sbe\srestarted\swith\sa\sexec\s\"cat\s\/tmp\/health\"\sliveness\sprobe\s\[NodeConformance\]\s\[Conformance\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
Unexpected error:
    <*errors.withStack | 0xc001fd6440>: {
        error: {
            cause: {
                ErrStatus: {
                    TypeMeta: {Kind: "", APIVersion: ""},
                    ListMeta: {
                        SelfLink: "",
                        ResourceVersion: "",
                        Continue: "",
                        RemainingItemCount: nil,
                    },
                    Status: "Failure",
                    Message: "rolebindings.rbac.authorization.k8s.io \"container-probe-2981--e2e-test-privileged-psp\" is forbidden: user \"pr-kubekins@kubernetes-jenkins-pull.iam.gserviceaccount.com\" (groups=[\"system:authenticated\"]) is attempting to grant RBAC permissions not currently held:\n{APIGroups:[\"extensions\"], Resources:[\"podsecuritypolicies\"], ResourceNames:[\"e2e-test-privileged-psp\"], Verbs:[\"use\"]}",
                    Reason: "Forbidden",
                    Details: {
                        Name: "container-probe-2981--e2e-test-privileged-psp",
                        Group: "rbac.authorization.k8s.io",
                        Kind: "rolebindings",
                        UID: "",
                        Causes: nil,
                        RetryAfterSeconds: 0,
                    },
                    Code: 403,
                },
            },
            msg: "binding ClusterRole/e2e-test-privileged-psp into \"container-probe-2981\" for [{ServiceAccount  default container-probe-2981}]",
        },
        stack: [0x15a53ee, 0x15eeda1, 0x15eed28, 0x15c416d, 0x15c297b, 0x7ac6dc, 0x7ac34f, 0x7ac774, 0x7b2441, 0x7b2064, 0x7b7acf, 0x7b75e4, 0x7b6e27, 0x7b948e, 0x7bbfb7, 0x7bbcfd, 0x36fd267, 0x370025b, 0x507960, 0x4607c1],
    }
    binding ClusterRole/e2e-test-privileged-psp into "container-probe-2981" for [{ServiceAccount  default container-probe-2981}]: rolebindings.rbac.authorization.k8s.io "container-probe-2981--e2e-test-privileged-psp" is forbidden: user "pr-kubekins@kubernetes-jenkins-pull.iam.gserviceaccount.com" (groups=["system:authenticated"]) is attempting to grant RBAC permissions not currently held:
    {APIGroups:["extensions"], Resources:["podsecuritypolicies"], ResourceNames:["e2e-test-privileged-psp"], Verbs:["use"]}
occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/psp_util.go:151
				
				from junit_skew23.xml

Filter through log files | View test history on testgrid
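
Unlike its neighbors, this failure is a 403, not a 500: RBAC escalation prevention refuses to let the CI service account grant "use" on the e2e-test-privileged-psp PodSecurityPolicy, because (as the message says) it does not hold that permission itself. A sketch of telling the two failure modes on this page apart, assuming err comes from the RoleBinding create call quoted in the stack above.

    // A classification sketch; err is assumed to come from a
    // RoleBindings(...).Create(...) call like the one in this entry.
    package main

    import (
        "fmt"

        apierrors "k8s.io/apimachinery/pkg/api/errors"
    )

    func classifyBindingError(err error) string {
        switch {
        case err == nil:
            return "bound"
        case apierrors.IsForbidden(err):
            // RBAC escalation prevention: a caller may only grant permissions
            // it already holds (or it must hold the bind verb on the role).
            // Retrying cannot help; the binder's own RBAC must change.
            return "escalation prevented; fix the binder's permissions"
        case apierrors.IsInternalError(err) || apierrors.IsTooManyRequests(err):
            return "apiserver overloaded; retry with backoff"
        default:
            return fmt.Sprintf("unexpected: %v", err)
        }
    }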


Kubernetes e2e suite [k8s.io] Probing container should be restarted with a local redirect http liveness probe 48s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[k8s\.io\]\sProbing\scontainer\sshould\sbe\srestarted\swith\sa\slocal\sredirect\shttp\sliveness\sprobe$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  4 21:02:29.514: Couldn't delete ns: "container-probe-1500": an error on the server ("Internal Server Error: \"/api/v1/namespaces/container-probe-1500\": the server has received too many requests and has asked us to try again later") has prevented the request from succeeding (delete namespaces container-probe-1500) (&errors.StatusError{ErrStatus:v1.Status{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ListMeta:v1.ListMeta{SelfLink:"", ResourceVersion:"", Continue:"", RemainingItemCount:(*int64)(nil)}, Status:"Failure", Message:"an error on the server (\"Internal Server Error: \\\"/api/v1/namespaces/container-probe-1500\\\": the server has received too many requests and has asked us to try again later\") has prevented the request from succeeding (delete namespaces container-probe-1500)", Reason:"InternalError", Details:(*v1.StatusDetails)(0xc0017ee720), Code:500}})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:335
				
				from junit_skew19.xml

Filter through log files | View test history on testgrid


Kubernetes e2e suite [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] 39s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[k8s\.io\]\sProbing\scontainer\swith\sreadiness\sprobe\sshould\snot\sbe\sready\sbefore\sinitial\sdelay\sand\snever\srestart\s\[NodeConformance\]\s\[Conformance\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
Unexpected error:
    <*errors.errorString | 0xc00027b8c0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:217
				
				from junit_skew09.xml

Filter through log files | View test history on testgrid
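
"timed out waiting for the condition" is the fixed message of wait.ErrWaitTimeout from k8s.io/apimachinery/pkg/util/wait, which the e2e polling helpers return when a condition never turns true within its deadline. A minimal reproduction; the interval and timeout values are arbitrary.

    // Reproducing the exact error string above with the wait package.
    package main

    import (
        "fmt"
        "time"

        "k8s.io/apimachinery/pkg/util/wait"
    )

    func main() {
        err := wait.PollImmediate(10*time.Millisecond, 50*time.Millisecond,
            func() (bool, error) {
                return false, nil // the condition never becomes true
            })
        fmt.Println(err == wait.ErrWaitTimeout, err)
        // prints: true timed out waiting for the condition
    }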


Kubernetes e2e suite [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] 1m37s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[k8s\.io\]\sProbing\scontainer\swith\sreadiness\sprobe\sthat\sfails\sshould\snever\sbe\sready\sand\snever\srestart\s\[NodeConformance\]\s\[Conformance\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  4 20:50:22.888: Couldn't delete ns: "container-probe-3051": an error on the server ("Internal Server Error: \"/apis/rbac.authorization.k8s.io/v1/namespaces/container-probe-3051/rolebindings\": the server has received too many requests and has asked us to try again later") has prevented the request from succeeding (&errors.StatusError{ErrStatus:v1.Status{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ListMeta:v1.ListMeta{SelfLink:"", ResourceVersion:"", Continue:"", RemainingItemCount:(*int64)(nil)}, Status:"Failure", Message:"an error on the server (\"Internal Server Error: \\\"/apis/rbac.authorization.k8s.io/v1/namespaces/container-probe-3051/rolebindings\\\": the server has received too many requests and has asked us to try again later\") has prevented the request from succeeding", Reason:"InternalError", Details:(*v1.StatusDetails)(0xc002fe69c0), Code:500}})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:335
				
				from junit_skew01.xml

Filter through log files | View test history on testgrid


Kubernetes e2e suite [k8s.io] Security Context When creating a container with runAsNonRoot should not run with an explicit root user ID 37s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[k8s\.io\]\sSecurity\sContext\sWhen\screating\sa\scontainer\swith\srunAsNonRoot\sshould\snot\srun\swith\san\sexplicit\sroot\suser\sID$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  4 20:49:28.369: Couldn't delete ns: "security-context-test-9275": an error on the server ("Internal Server Error: \"/apis/cloud.google.com/v1beta1/namespaces/security-context-test-9275/backendconfigs\": the server has received too many requests and has asked us to try again later") has prevented the request from succeeding (&errors.StatusError{ErrStatus:v1.Status{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ListMeta:v1.ListMeta{SelfLink:"", ResourceVersion:"", Continue:"", RemainingItemCount:(*int64)(nil)}, Status:"Failure", Message:"an error on the server (\"Internal Server Error: \\\"/apis/cloud.google.com/v1beta1/namespaces/security-context-test-9275/backendconfigs\\\": the server has received too many requests and has asked us to try again later\") has prevented the request from succeeding", Reason:"InternalError", Details:(*v1.StatusDetails)(0xc002e79500), Code:500}})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:335
				
				from junit_skew22.xml

Filter through log files | View test history on testgrid


Kubernetes e2e suite [k8s.io] Security Context When creating a container with runAsNonRoot should not run without a specified user ID 51s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[k8s\.io\]\sSecurity\sContext\sWhen\screating\sa\scontainer\swith\srunAsNonRoot\sshould\snot\srun\swithout\sa\sspecified\suser\sID$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  4 20:52:28.847: Couldn't delete ns: "security-context-test-3266": an error on the server ("Internal Server Error: \"/api/v1/namespaces/security-context-test-3266/serviceaccounts\": the server has received too many requests and has asked us to try again later") has prevented the request from succeeding (&errors.StatusError{ErrStatus:v1.Status{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ListMeta:v1.ListMeta{SelfLink:"", ResourceVersion:"", Continue:"", RemainingItemCount:(*int64)(nil)}, Status:"Failure", Message:"an error on the server (\"Internal Server Error: \\\"/api/v1/namespaces/security-context-test-3266/serviceaccounts\\\": the server has received too many requests and has asked us to try again later\") has prevented the request from succeeding", Reason:"InternalError", Details:(*v1.StatusDetails)(0xc0023dc8a0), Code:500}})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:335
				
				from junit_skew06.xml

Filter through log files | View test history on testgrid


Kubernetes e2e suite [k8s.io] Security Context When creating a container with runAsNonRoot should run with an explicit non-root user ID 1m10s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[k8s\.io\]\sSecurity\sContext\sWhen\screating\sa\scontainer\swith\srunAsNonRoot\sshould\srun\swith\san\sexplicit\snon\-root\suser\sID$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:123
Unexpected error:
    <*errors.errorString | 0xc0025784c0>: {
        s: "failed to get output for container \"explicit-nonroot-uid\" of pod \"explicit-nonroot-uid\"",
    }
    failed to get output for container "explicit-nonroot-uid" of pod "explicit-nonroot-uid"
occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:129
				
				from junit_skew12.xml

Find explicit-nonroot-uid mentions in log files | View test history on testgrid


Kubernetes e2e suite [k8s.io] Security Context When creating a container with runAsNonRoot should run with an image specified user ID 43s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[k8s\.io\]\sSecurity\sContext\sWhen\screating\sa\scontainer\swith\srunAsNonRoot\sshould\srun\swith\san\simage\sspecified\suser\sID$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:141
Unexpected error:
    <*errors.errorString | 0xc002ac9c00>: {
        s: "failed to get output for container \"implicit-nonroot-uid\" of pod \"implicit-nonroot-uid\"",
    }
    failed to get output for container "implicit-nonroot-uid" of pod "implicit-nonroot-uid"
occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:147
				
				from junit_skew25.xml

Find implicit-nonroot-uid mentions in log files | View test history on testgrid


Kubernetes e2e suite [k8s.io] Security Context When creating a container with runAsUser should run the container with uid 0 [LinuxOnly] [NodeConformance] 41s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[k8s\.io\]\sSecurity\sContext\sWhen\screating\sa\scontainer\swith\srunAsUser\sshould\srun\sthe\scontainer\swith\suid\s0\s\[LinuxOnly\]\s\[NodeConformance\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  4 20:45:23.322: Couldn't delete ns: "security-context-test-6699": an error on the server ("Internal Server Error: \"/api/v1/namespaces/security-context-test-6699/secrets\": the server has received too many requests and has asked us to try again later") has prevented the request from succeeding (&errors.StatusError{ErrStatus:v1.Status{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ListMeta:v1.ListMeta{SelfLink:"", ResourceVersion:"", Continue:"", RemainingItemCount:(*int64)(nil)}, Status:"Failure", Message:"an error on the server (\"Internal Server Error: \\\"/api/v1/namespaces/security-context-test-6699/secrets\\\": the server has received too many requests and has asked us to try again later\") has prevented the request from succeeding", Reason:"InternalError", Details:(*v1.StatusDetails)(0xc002093d40), Code:500}})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:335
				
				from junit_skew09.xml

Filter through log files | View test history on testgrid


Kubernetes e2e suite [k8s.io] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] 51s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[k8s\.io\]\sSecurity\sContext\sWhen\screating\sa\spod\swith\sprivileged\sshould\srun\sthe\scontainer\sas\sunprivileged\swhen\sfalse\s\[LinuxOnly\]\s\[NodeConformance\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  4 20:52:22.944: Couldn't delete ns: "security-context-test-3746": an error on the server ("Internal Server Error: \"/apis/extensions/v1beta1/namespaces/security-context-test-3746/networkpolicies\": the server has received too many requests and has asked us to try again later") has prevented the request from succeeding (&errors.StatusError{ErrStatus:v1.Status{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ListMeta:v1.ListMeta{SelfLink:"", ResourceVersion:"", Continue:"", RemainingItemCount:(*int64)(nil)}, Status:"Failure", Message:"an error on the server (\"Internal Server Error: \\\"/apis/extensions/v1beta1/namespaces/security-context-test-3746/networkpolicies\\\": the server has received too many requests and has asked us to try again later\") has prevented the request from succeeding", Reason:"InternalError", Details:(*v1.StatusDetails)(0xc001b3b2c0), Code:500}})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:335
				
				from junit_skew13.xml

Filter through log files | View test history on testgrid


Kubernetes e2e suite [k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with readonly rootfs when readOnlyRootFilesystem=true [LinuxOnly] [NodeConformance] 58s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[k8s\.io\]\sSecurity\sContext\sWhen\screating\sa\spod\swith\sreadOnlyRootFilesystem\sshould\srun\sthe\scontainer\swith\sreadonly\srootfs\swhen\sreadOnlyRootFilesystem\=true\s\[LinuxOnly\]\s\[NodeConformance\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  4 20:45:32.545: Couldn't delete ns: "security-context-test-7611": an error on the server ("Internal Server Error: \"/apis/apps/v1/namespaces/security-context-test-7611/controllerrevisions\": the server has received too many requests and has asked us to try again later") has prevented the request from succeeding (&errors.StatusError{ErrStatus:v1.Status{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ListMeta:v1.ListMeta{SelfLink:"", ResourceVersion:"", Continue:"", RemainingItemCount:(*int64)(nil)}, Status:"Failure", Message:"an error on the server (\"Internal Server Error: \\\"/apis/apps/v1/namespaces/security-context-test-7611/controllerrevisions\\\": the server has received too many requests and has asked us to try again later\") has prevented the request from succeeding", Reason:"InternalError", Details:(*v1.StatusDetails)(0xc0023b8f00), Code:500}})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:335
				
				from junit_skew22.xml

Filter through log files | View test history on testgrid


Kubernetes e2e suite [k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] 33s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[k8s\.io\]\sSecurity\sContext\sWhen\screating\sa\spod\swith\sreadOnlyRootFilesystem\sshould\srun\sthe\scontainer\swith\swritable\srootfs\swhen\sreadOnlyRootFilesystem\=false\s\[NodeConformance\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  4 20:42:16.465: Couldn't delete ns: "security-context-test-1781": an error on the server ("Internal Server Error: \"/apis/rbac.authorization.k8s.io/v1/namespaces/security-context-test-1781/roles\": the server has received too many requests and has asked us to try again later") has prevented the request from succeeding (&errors.StatusError{ErrStatus:v1.Status{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ListMeta:v1.ListMeta{SelfLink:"", ResourceVersion:"", Continue:"", RemainingItemCount:(*int64)(nil)}, Status:"Failure", Message:"an error on the server (\"Internal Server Error: \\\"/apis/rbac.authorization.k8s.io/v1/namespaces/security-context-test-1781/roles\\\": the server has received too many requests and has asked us to try again later\") has prevented the request from succeeding", Reason:"InternalError", Details:(*v1.StatusDetails)(0xc0031d0c00), Code:500}})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:335
				
				from junit_skew12.xml

Filter through log files | View test history on testgrid


Kubernetes e2e suite [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should allow privilege escalation when not explicitly set and uid != 0 [LinuxOnly] [NodeConformance] 26s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[k8s\.io\]\sSecurity\sContext\swhen\screating\scontainers\swith\sAllowPrivilegeEscalation\sshould\sallow\sprivilege\sescalation\swhen\snot\sexplicitly\sset\sand\suid\s\!\=\s0\s\[LinuxOnly\]\s\[NodeConformance\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
Unexpected error:
    <*errors.StatusError | 0xc000a599a0>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {
                SelfLink: "",
                ResourceVersion: "",
                Continue: "",
                RemainingItemCount: nil,
            },
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/namespaces/security-context-test-2105/serviceaccounts?fieldSelector=metadata.name%3Ddefault&amp;watch=true\\\": the server has received too many requests and has asked us to try again later\") has prevented the request from succeeding (get serviceaccounts)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceaccounts",
                UID: "",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/namespaces/security-context-test-2105/serviceaccounts?fieldSelector=metadata.name%3Ddefault&amp;watch=true\": the server has received too many requests and has asked us to try again later",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/namespaces/security-context-test-2105/serviceaccounts?fieldSelector=metadata.name%3Ddefault&watch=true\": the server has received too many requests and has asked us to try again later") has prevented the request from succeeding (get serviceaccounts)
occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:217
				
				from junit_skew08.xml

Filter through log files | View test history on testgrid


Kubernetes e2e suite [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should allow privilege escalation when true [LinuxOnly] [NodeConformance] 27s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[k8s\.io\]\sSecurity\sContext\swhen\screating\scontainers\swith\sAllowPrivilegeEscalation\sshould\sallow\sprivilege\sescalation\swhen\strue\s\[LinuxOnly\]\s\[NodeConformance\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
Unexpected error:
    <*errors.withStack | 0xc0026769e0>: {
        error: {
            cause: {
                ErrStatus: {
                    TypeMeta: {Kind: "", APIVersion: ""},
                    ListMeta: {
                        SelfLink: "",
                        ResourceVersion: "",
                        Continue: "",
                        RemainingItemCount: nil,
                    },
                    Status: "Failure",
                    Message: "an error on the server (\"Internal Server Error: \\\"/apis/rbac.authorization.k8s.io/v1beta1/namespaces/security-context-test-7092/rolebindings\\\": the server has received too many requests and has asked us to try again later\") has prevented the request from succeeding (post rolebindings.rbac.authorization.k8s.io)",
                    Reason: "InternalError",
                    Details: {
                        Name: "",
                        Group: "rbac.authorization.k8s.io",
                        Kind: "rolebindings",
                        UID: "",
                        Causes: [
                            {
                                Type: "UnexpectedServerResponse",
                                Message: "Internal Server Error: \"/apis/rbac.authorization.k8s.io/v1beta1/namespaces/security-context-test-7092/rolebindings\": the server has received too many requests and has asked us to try again later",
                                Field: "",
                            },
                        ],
                        RetryAfterSeconds: 0,
                    },
                    Code: 500,
                },
            },
            msg: "binding ClusterRole/e2e-test-privileged-psp into \"security-context-test-7092\" for [{ServiceAccount  default security-context-test-7092}]",
        },
        stack: [0x15a53ee, 0x15eeda1, 0x15eed28, 0x15c416d, 0x15c297b, 0x7ac6dc, 0x7ac34f, 0x7ac774, 0x7b2441, 0x7b2064, 0x7b7acf, 0x7b75e4, 0x7b6e27, 0x7b948e, 0x7bbfb7, 0x7bbcfd, 0x36fd267, 0x370025b, 0x507960, 0x4607c1],
    }
    binding ClusterRole/e2e-test-privileged-psp into "security-context-test-7092" for [{ServiceAccount  default security-context-test-7092}]: an error on the server ("Internal Server Error: \"/apis/rbac.authorization.k8s.io/v1beta1/namespaces/security-context-test-7092/rolebindings\": the server has received too many requests and has asked us to try again later") has prevented the request from succeeding (post rolebindings.rbac.authorization.k8s.io)
occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/psp_util.go:151
				
				from junit_skew12.xml

Filter through log files | View test history on testgrid


Kubernetes e2e suite [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] 30s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[k8s\.io\]\sSecurity\sContext\swhen\screating\scontainers\swith\sAllowPrivilegeEscalation\sshould\snot\sallow\sprivilege\sescalation\swhen\sfalse\s\[LinuxOnly\]\s\[NodeConformance\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
Unexpected error:
    <*errors.withStack | 0xc00240ca00>: {
        error: {
            cause: {
                ErrStatus: {
                    TypeMeta: {Kind: "", APIVersion: ""},
                    ListMeta: {
                        SelfLink: "",
                        ResourceVersion: "",
                        Continue: "",
                        RemainingItemCount: nil,
                    },
                    Status: "Failure",
                    Message: "an error on the server (\"Internal Server Error: \\\"/apis/rbac.authorization.k8s.io/v1beta1/namespaces/security-context-test-8984/rolebindings\\\": the server has received too many requests and has asked us to try again later\") has prevented the request from succeeding (post rolebindings.rbac.authorization.k8s.io)",
                    Reason: "InternalError",
                    Details: {
                        Name: "",
                        Group: "rbac.authorization.k8s.io",
                        Kind: "rolebindings",
                        UID: "",
                        Causes: [
                            {
                                Type: "UnexpectedServerResponse",
                                Message: "Internal Server Error: \"/apis/rbac.authorization.k8s.io/v1beta1/namespaces/security-context-test-8984/rolebindings\": the server has received too many requests and has asked us to try again later",
                                Field: "",
                            },
                        ],
                        RetryAfterSeconds: 0,
                    },
                    Code: 500,
                },
            },
            msg: "binding ClusterRole/e2e-test-privileged-psp into \"security-context-test-8984\" for [{ServiceAccount  default security-context-test-8984}]",
        },
        stack: [0x15a53ee, 0x15eeda1, 0x15eed28, 0x15c416d, 0x15c297b, 0x7ac6dc, 0x7ac34f, 0x7ac774, 0x7b2441, 0x7b2064, 0x7b7acf, 0x7b75e4, 0x7b6e27, 0x7b948e, 0x7bbfb7, 0x7bbcfd, 0x36fd267, 0x370025b, 0x507960, 0x4607c1],
    }
    binding ClusterRole/e2e-test-privileged-psp into "security-context-test-8984" for [{ServiceAccount  default security-context-test-8984}]: an error on the server ("Internal Server Error: \"/apis/rbac.authorization.k8s.io/v1beta1/namespaces/security-context-test-8984/rolebindings\": the server has received too many requests and has asked us to try again later") has prevented the request from succeeding (post rolebindings.rbac.authorization.k8s.io)
occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/psp_util.go:151
				
				from junit_skew17.xml

Filter through log files | View test history on testgrid


Kubernetes e2e suite [k8s.io] Sysctls [NodeFeature:Sysctls] should reject invalid sysctls 10s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[k8s\.io\]\sSysctls\s\[NodeFeature\:Sysctls\]\sshould\sreject\sinvalid\ssysctls$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
Unexpected error:
    <*errors.StatusError | 0xc002d4abe0>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {
                SelfLink: "",
                ResourceVersion: "",
                Continue: "",
                RemainingItemCount: nil,
            },
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/namespaces/sysctl-8844/serviceaccounts?fieldSelector=metadata.name%3Ddefault&amp;watch=true\\\": the server has received too many requests and has asked us to try again later\") has prevented the request from succeeding (get serviceaccounts)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceaccounts",
                UID: "",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/namespaces/sysctl-8844/serviceaccounts?fieldSelector=metadata.name%3Ddefault&amp;watch=true\": the server has received too many requests and has asked us to try again later",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/namespaces/sysctl-8844/serviceaccounts?fieldSelector=metadata.name%3Ddefault&watch=true\": the server has received too many requests and has asked us to try again later") has prevented the request from succeeding (get serviceaccounts)
occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:217
				
from junit_skew02.xml

Filter through log files | View test history on testgrid
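
The dump above carries Reason "InternalError", Code 500, and RetryAfterSeconds 0, the same shape as most failures here. A small sketch of what the standard apimachinery helpers report for such a StatusError (err is assumed to be the value dumped above):

    package main

    import (
        "fmt"

        apierrors "k8s.io/apimachinery/pkg/api/errors"
    )

    // classify prints what the standard helpers say about the StatusError
    // dumped above (Reason "InternalError", Code 500, RetryAfterSeconds 0).
    func classify(err error) {
        // true here: the status reason is InternalError
        fmt.Println("internal error:", apierrors.IsInternalError(err))

        // false here: RetryAfterSeconds is 0, so the server gives no hint
        // about how long to back off and the client must pick its own delay
        if seconds, ok := apierrors.SuggestsClientDelay(err); ok {
            fmt.Printf("server asked for a %ds delay\n", seconds)
        } else {
            fmt.Println("no Retry-After hint from the server")
        }
    }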


Kubernetes e2e suite [k8s.io] Sysctls [NodeFeature:Sysctls] should support unsafe sysctls which are actually whitelisted 31s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[k8s\.io\]\sSysctls\s\[NodeFeature\:Sysctls\]\sshould\ssupport\sunsafe\ssysctls\swhich\sare\sactually\swhitelisted$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  4 20:44:18.995: Couldn't delete ns: "sysctl-5469": an error on the server ("Internal Server Error: \"/apis/snapshot.storage.k8s.io/v1alpha1/namespaces/sysctl-5469/volumesnapshots\": the server has received too many requests and has asked us to try again later") has prevented the request from succeeding (&errors.StatusError{ErrStatus:v1.Status{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ListMeta:v1.ListMeta{SelfLink:"", ResourceVersion:"", Continue:"", RemainingItemCount:(*int64)(nil)}, Status:"Failure", Message:"an error on the server (\"Internal Server Error: \\\"/apis/snapshot.storage.k8s.io/v1alpha1/namespaces/sysctl-5469/volumesnapshots\\\": the server has received too many requests and has asked us to try again later\") has prevented the request from succeeding", Reason:"InternalError", Details:(*v1.StatusDetails)(0xc003e2c720), Code:500}})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:335
				
from junit_skew09.xml

Filter through log files | View test history on testgrid
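
Many of the remaining failures are this same teardown path: the framework deletes the test namespace, and a follow-up call against the throttled apiserver fails. A rough sketch of a deletion wait that tolerates the throttling, again assuming a 1.16-era clientset; this is a hypothetical helper, not the framework's actual teardown code:

    package main

    import (
        "time"

        apierrors "k8s.io/apimachinery/pkg/api/errors"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
    )

    // deleteNamespaceAndWait issues the delete, then polls until the
    // namespace is gone, treating apiserver overload as retriable.
    func deleteNamespaceAndWait(cs kubernetes.Interface, name string) error {
        if err := cs.CoreV1().Namespaces().Delete(name, &metav1.DeleteOptions{}); err != nil && !apierrors.IsNotFound(err) {
            return err
        }
        return wait.PollImmediate(2*time.Second, 5*time.Minute, func() (bool, error) {
            _, err := cs.CoreV1().Namespaces().Get(name, metav1.GetOptions{})
            switch {
            case apierrors.IsNotFound(err):
                return true, nil // fully deleted
            case apierrors.IsInternalError(err) || apierrors.IsTooManyRequests(err):
                return false, nil // throttled: keep polling
            case err != nil:
                return false, err
            }
            return false, nil // namespace still terminating
        })
    }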


Kubernetes e2e suite [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance] 32s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[k8s\.io\]\sVariable\sExpansion\sshould\sallow\scomposing\senv\svars\sinto\snew\senv\svars\s\[NodeConformance\]\s\[Conformance\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Error creating Pod
Unexpected error:
    <*errors.StatusError | 0xc001b64fa0>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {
                SelfLink: "",
                ResourceVersion: "",
                Continue: "",
                RemainingItemCount: nil,
            },
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/namespaces/var-expansion-3550/pods\\\": the server has received too many requests and has asked us to try again later\") has prevented the request from succeeding (post pods)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "pods",
                UID: "",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/namespaces/var-expansion-3550/pods\": the server has received too many requests and has asked us to try again later",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/namespaces/var-expansion-3550/pods\": the server has received too many requests and has asked us to try again later") has prevented the request from succeeding (post pods)
occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:80
				
from junit_skew08.xml

Filter through log files | View test history on testgrid


Kubernetes e2e suite [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance] 16s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[k8s\.io\]\sVariable\sExpansion\sshould\sallow\ssubstituting\svalues\sin\sa\scontainer\'s\sargs\s\[NodeConformance\]\s\[Conformance\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  4 20:42:07.731: Couldn't delete ns: "var-expansion-4573": an error on the server ("Internal Server Error: \"/api/v1/namespaces/var-expansion-4573/events\": the server has received too many requests and has asked us to try again later") has prevented the request from succeeding (&errors.StatusError{ErrStatus:v1.Status{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ListMeta:v1.ListMeta{SelfLink:"", ResourceVersion:"", Continue:"", RemainingItemCount:(*int64)(nil)}, Status:"Failure", Message:"an error on the server (\"Internal Server Error: \\\"/api/v1/namespaces/var-expansion-4573/events\\\": the server has received too many requests and has asked us to try again later\") has prevented the request from succeeding", Reason:"InternalError", Details:(*v1.StatusDetails)(0xc002226f60), Code:500}})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:335
				
from junit_skew17.xml

Filter through log files | View test history on testgrid


Kubernetes e2e suite [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance] 24s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[k8s\.io\]\sVariable\sExpansion\sshould\sallow\ssubstituting\svalues\sin\sa\scontainer\'s\scommand\s\[NodeConformance\]\s\[Conformance\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  4 21:08:10.877: Couldn't delete ns: "var-expansion-1618": an error on the server ("Internal Server Error: \"/apis/policy/v1beta1/namespaces/var-expansion-1618/poddisruptionbudgets\": the server has received too many requests and has asked us to try again later") has prevented the request from succeeding (&errors.StatusError{ErrStatus:v1.Status{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ListMeta:v1.ListMeta{SelfLink:"", ResourceVersion:"", Continue:"", RemainingItemCount:(*int64)(nil)}, Status:"Failure", Message:"an error on the server (\"Internal Server Error: \\\"/apis/policy/v1beta1/namespaces/var-expansion-1618/poddisruptionbudgets\\\": the server has received too many requests and has asked us to try again later\") has prevented the request from succeeding", Reason:"InternalError", Details:(*v1.StatusDetails)(0xc002076ba0), Code:500}})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:335
				
from junit_skew08.xml

Filter through log files | View test history on testgrid


Kubernetes e2e suite [k8s.io] [sig-node] AppArmor load AppArmor profiles can disable an AppArmor profile, using unconfined 42s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[k8s\.io\]\s\[sig\-node\]\sAppArmor\sload\sAppArmor\sprofiles\scan\sdisable\san\sAppArmor\sprofile\,\susing\sunconfined$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  4 20:52:11.561: Couldn't delete ns: "apparmor-1018": an error on the server ("Internal Server Error: \"/api/v1/namespaces/apparmor-1018\": the server has received too many requests and has asked us to try again later") has prevented the request from succeeding (delete namespaces apparmor-1018) (&errors.StatusError{ErrStatus:v1.Status{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ListMeta:v1.ListMeta{SelfLink:"", ResourceVersion:"", Continue:"", RemainingItemCount:(*int64)(nil)}, Status:"Failure", Message:"an error on the server (\"Internal Server Error: \\\"/api/v1/namespaces/apparmor-1018\\\": the server has received too many requests and has asked us to try again later\") has prevented the request from succeeding (delete namespaces apparmor-1018)", Reason:"InternalError", Details:(*v1.StatusDetails)(0xc00307b7a0), Code:500}})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:335
				
from junit_skew19.xml

Filter through log files | View test history on testgrid


Kubernetes e2e suite [k8s.io] [sig-node] AppArmor load AppArmor profiles should enforce an AppArmor profile 9.98s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[k8s\.io\]\s\[sig\-node\]\sAppArmor\sload\sAppArmor\sprofiles\sshould\senforce\san\sAppArmor\sprofile$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
Unexpected error:
    <*errors.withStack | 0xc001f80b00>: {
        error: {
            cause: {
                ErrStatus: {
                    TypeMeta: {Kind: "", APIVersion: ""},
                    ListMeta: {
                        SelfLink: "",
                        ResourceVersion: "",
                        Continue: "",
                        RemainingItemCount: nil,
                    },
                    Status: "Failure",
                    Message: "an error on the server (\"Internal Server Error: \\\"/apis/rbac.authorization.k8s.io/v1beta1/namespaces/apparmor-397/rolebindings\\\": the server has received too many requests and has asked us to try again later\") has prevented the request from succeeding (post rolebindings.rbac.authorization.k8s.io)",
                    Reason: "InternalError",
                    Details: {
                        Name: "",
                        Group: "rbac.authorization.k8s.io",
                        Kind: "rolebindings",
                        UID: "",
                        Causes: [
                            {
                                Type: "UnexpectedServerResponse",
                                Message: "Internal Server Error: \"/apis/rbac.authorization.k8s.io/v1beta1/namespaces/apparmor-397/rolebindings\": the server has received too many requests and has asked us to try again later",
                                Field: "",
                            },
                        ],
                        RetryAfterSeconds: 0,
                    },
                    Code: 500,
                },
            },
            msg: "binding ClusterRole/e2e-test-privileged-psp into \"apparmor-397\" for [{ServiceAccount  default apparmor-397}]",
        },
        stack: [0x15a53ee, 0x15eeda1, 0x15eed28, 0x15c416d, 0x15c297b, 0x7ac6dc, 0x7ac34f, 0x7ac774, 0x7b2441, 0x7b2064, 0x7b7acf, 0x7b75e4, 0x7b6e27, 0x7b948e, 0x7bbfb7, 0x7bbcfd, 0x36fd267, 0x370025b, 0x507960, 0x4607c1],
    }
    binding ClusterRole/e2e-test-privileged-psp into "apparmor-397" for [{ServiceAccount  default apparmor-397}]: an error on the server ("Internal Server Error: \"/apis/rbac.authorization.k8s.io/v1beta1/namespaces/apparmor-397/rolebindings\": the server has received too many requests and has asked us to try again later") has prevented the request from succeeding (post rolebindings.rbac.authorization.k8s.io)
occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/psp_util.go:151
				
from junit_skew04.xml

Filter through log files | View test history on testgrid


Kubernetes e2e suite [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] 26s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[k8s\.io\]\s\[sig\-node\]\sEvents\sshould\sbe\ssent\sby\skubelets\sand\sthe\sscheduler\sabout\spods\sscheduling\sand\srunning\s\s\[Conformance\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Unexpected error:
    <*errors.StatusError | 0xc001d67360>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {
                SelfLink: "",
                ResourceVersion: "",
                Continue: "",
                RemainingItemCount: nil,
            },
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/namespaces/events-6730/events?fieldSelector=involvedObject.uid%3D57bfa9fc-2b11-414b-8b03-cede940798f7%2CinvolvedObject.namespace%3Devents-6730%2CinvolvedObject.kind%3DPod%2Csource%3Dkubelet\\\": the server has received too many requests and has asked us to try again later\") has prevented the request from succeeding (get events)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "events",
                UID: "",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/namespaces/events-6730/events?fieldSelector=involvedObject.uid%3D57bfa9fc-2b11-414b-8b03-cede940798f7%2CinvolvedObject.namespace%3Devents-6730%2CinvolvedObject.kind%3DPod%2Csource%3Dkubelet\": the server has received too many requests and has asked us to try again later",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/namespaces/events-6730/events?fieldSelector=involvedObject.uid%3D57bfa9fc-2b11-414b-8b03-cede940798f7%2CinvolvedObject.namespace%3Devents-6730%2CinvolvedObject.kind%3DPod%2Csource%3Dkubelet\": the server has received too many requests and has asked us to try again later") has prevented the request from succeeding (get events)
occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/events.go:116
				
from junit_skew07.xml

Filter through log files | View test history on testgrid
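
The request that failed above lists events scoped by a field selector to one pod and the kubelet source. A sketch of how such a query is assembled (the clientset, namespace, and pod UID are placeholders, and the context-free List signature of the 1.16-era client is assumed):

    package main

    import (
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/fields"
        "k8s.io/apimachinery/pkg/types"
        "k8s.io/client-go/kubernetes"
    )

    // kubeletEventsForPod lists the events the kubelet emitted for one pod,
    // using the same kind of field selector as the URL in the failure above.
    func kubeletEventsForPod(cs kubernetes.Interface, ns string, uid types.UID) error {
        selector := fields.Set{
            "involvedObject.kind":      "Pod",
            "involvedObject.uid":       string(uid),
            "involvedObject.namespace": ns,
            "source":                   "kubelet",
        }.AsSelector().String()
        events, err := cs.CoreV1().Events(ns).List(metav1.ListOptions{FieldSelector: selector})
        if err != nil {
            return err // the run above died here with the throttling 500
        }
        for _, e := range events.Items {
            fmt.Printf("%s: %s\n", e.Reason, e.Message)
        }
        return nil
    }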


Kubernetes e2e suite [k8s.io] [sig-node] Mount propagation should propagate mounts to the host 1m23s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[k8s\.io\]\s\[sig\-node\]\sMount\spropagation\sshould\spropagate\smounts\sto\sthe\shost$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/mount_propagation.go:83
Unexpected error:
    <*errors.StatusError | 0xc0021c3860>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {
                SelfLink: "",
                ResourceVersion: "",
                Continue: "",
                RemainingItemCount: nil,
            },
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/namespaces/mount-propagation-1087/pods/private\\\": the server has received too many requests and has asked us to try again later\") has prevented the request from succeeding (get pods private)",
            Reason: "InternalError",
            Details: {
                Name: "private",
                Group: "",
                Kind: "pods",
                UID: "",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/namespaces/mount-propagation-1087/pods/private\": the server has received too many requests and has asked us to try again later",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/namespaces/mount-propagation-1087/pods/private\": the server has received too many requests and has asked us to try again later") has prevented the request from succeeding (get pods private)
occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:113
				
from junit_skew03.xml

Filter through log files | View test history on testgrid


Kubernetes e2e suite [k8s.io] [sig-node] NodeProblemDetector [DisabledForLargeClusters] should run without error 2m50s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[k8s\.io\]\s\[sig\-node\]\sNodeProblemDetector\s\[DisabledForLargeClusters\]\sshould\srun\swithout\serror$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  4 20:50:19.478: Couldn't delete ns: "node-problem-detector-7126": an error on the server ("Internal Server Error: \"/api/v1/namespaces/node-problem-detector-7126\": the server has received too many requests and has asked us to try again later") has prevented the request from succeeding (delete namespaces node-problem-detector-7126) (&errors.StatusError{ErrStatus:v1.Status{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ListMeta:v1.ListMeta{SelfLink:"", ResourceVersion:"", Continue:"", RemainingItemCount:(*int64)(nil)}, Status:"Failure", Message:"an error on the server (\"Internal Server Error: \\\"/api/v1/namespaces/node-problem-detector-7126\\\": the server has received too many requests and has asked us to try again later\") has prevented the request from succeeding (delete namespaces node-problem-detector-7126)", Reason:"InternalError", Details:(*v1.StatusDetails)(0xc003735560), Code:500}})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:335
				
from junit_skew14.xml

Filter through log files | View test history on testgrid


Kubernetes e2e suite [k8s.io] [sig-node] Pods Extended [k8s.io] Delete Grace Period should be submitted and removed [Conformance] 50s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[k8s\.io\]\s\[sig\-node\]\sPods\sExtended\s\[k8s\.io\]\sDelete\sGrace\sPeriod\sshould\sbe\ssubmitted\sand\sremoved\s\[Conformance\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  4 20:45:31.619: Couldn't delete ns: "pods-3974": an error on the server ("Internal Server Error: \"/api/v1/namespaces/pods-3974/endpoints\": the server has received too many requests and has asked us to try again later") has prevented the request from succeeding (&errors.StatusError{ErrStatus:v1.Status{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ListMeta:v1.ListMeta{SelfLink:"", ResourceVersion:"", Continue:"", RemainingItemCount:(*int64)(nil)}, Status:"Failure", Message:"an error on the server (\"Internal Server Error: \\\"/api/v1/namespaces/pods-3974/endpoints\\\": the server has received too many requests and has asked us to try again later\") has prevented the request from succeeding", Reason:"InternalError", Details:(*v1.StatusDetails)(0xc001a4e060), Code:500}})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:335
				
from junit_skew25.xml

Filter through log files | View test history on testgrid


Kubernetes e2e suite [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be submitted and removed [Conformance] 15s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[k8s\.io\]\s\[sig\-node\]\sPods\sExtended\s\[k8s\.io\]\sPods\sSet\sQOS\sClass\sshould\sbe\ssubmitted\sand\sremoved\s\s\[Conformance\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
Unexpected error:
    <*errors.StatusError | 0xc00038d5e0>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {
                SelfLink: "",
                ResourceVersion: "",
                Continue: "",
                RemainingItemCount: nil,
            },
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/namespaces/pods-6758/serviceaccounts?fieldSelector=metadata.name%3Ddefault&amp;watch=true\\\": the server has received too many requests and has asked us to try again later\") has prevented the request from succeeding (get serviceaccounts)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceaccounts",
                UID: "",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/namespaces/pods-6758/serviceaccounts?fieldSelector=metadata.name%3Ddefault&amp;watch=true\": the server has received too many requests and has asked us to try again later",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/namespaces/pods-6758/serviceaccounts?fieldSelector=metadata.name%3Ddefault&watch=true\": the server has received too many requests and has asked us to try again later") has prevented the request from succeeding (get serviceaccounts)
occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:217
				
from junit_skew08.xml

Filter through log files | View test history on testgrid


Kubernetes e2e suite [k8s.io] [sig-node] PreStop graceful pod terminated should wait until preStop hook completes the process 46s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[k8s\.io\]\s\[sig\-node\]\sPreStop\sgraceful\spod\sterminated\sshould\swait\suntil\spreStop\shook\scompletes\sthe\sprocess$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:180
Unexpected error:
    <*errors.StatusError | 0xc0017a3ae0>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {
                SelfLink: "",
                ResourceVersion: "",
                Continue: "",
                RemainingItemCount: nil,
            },
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/namespaces/prestop-8458/pods/pod-prestop-hook-300e3010-af8f-4717-923c-adce94c8fedd\\\": the server has received too many requests and has asked us to try again later\") has prevented the request from succeeding (get pods pod-prestop-hook-300e3010-af8f-4717-923c-adce94c8fedd)",
            Reason: "InternalError",
            Details: {
                Name: "pod-prestop-hook-300e3010-af8f-4717-923c-adce94c8fedd",
                Group: "",
                Kind: "pods",
                UID: "",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/namespaces/prestop-8458/pods/pod-prestop-hook-300e3010-af8f-4717-923c-adce94c8fedd\": the server has received too many requests and has asked us to try again later",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/namespaces/prestop-8458/pods/pod-prestop-hook-300e3010-af8f-4717-923c-adce94c8fedd\": the server has received too many requests and has asked us to try again later") has prevented the request from succeeding (get pods pod-prestop-hook-300e3010-af8f-4717-923c-adce94c8fedd)
occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:190
				
from junit_skew04.xml

Filter through log files | View test history on testgrid


Kubernetes e2e suite [k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance] 1m7s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[k8s\.io\]\s\[sig\-node\]\sPreStop\sshould\scall\sprestop\swhen\skilling\sa\spod\s\s\[Conformance\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  4 21:01:50.112: Couldn't delete ns: "prestop-2918": an error on the server ("Internal Server Error: \"/apis/crd-publish-openapi-test-common-group.k8s.io/v4/namespaces/prestop-2918/e2e-test-crd-publish-openapi-5966-crds\": the server has received too many requests and has asked us to try again later") has prevented the request from succeeding (&errors.StatusError{ErrStatus:v1.Status{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ListMeta:v1.ListMeta{SelfLink:"", ResourceVersion:"", Continue:"", RemainingItemCount:(*int64)(nil)}, Status:"Failure", Message:"an error on the server (\"Internal Server Error: \\\"/apis/crd-publish-openapi-test-common-group.k8s.io/v4/namespaces/prestop-2918/e2e-test-crd-publish-openapi-5966-crds\\\": the server has received too many requests and has asked us to try again later\") has prevented the request from succeeding", Reason:"InternalError", Details:(*v1.StatusDetails)(0xc001cd2f60), Code:500}})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:335
				
from junit_skew18.xml

Filter through log files | View test history on testgrid


Kubernetes e2e suite [k8s.io] [sig-node] SSH should SSH to all nodes and run commands 26s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[k8s\.io\]\s\[sig\-node\]\sSSH\sshould\sSSH\sto\sall\snodes\sand\srun\scommands$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  4 20:54:10.033: Couldn't delete ns: "ssh-5158": an error on the server ("Internal Server Error: \"/apis/extensions/v1beta1/namespaces/ssh-5158/networkpolicies\": the server has received too many requests and has asked us to try again later") has prevented the request from succeeding (&errors.StatusError{ErrStatus:v1.Status{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ListMeta:v1.ListMeta{SelfLink:"", ResourceVersion:"", Continue:"", RemainingItemCount:(*int64)(nil)}, Status:"Failure", Message:"an error on the server (\"Internal Server Error: \\\"/apis/extensions/v1beta1/namespaces/ssh-5158/networkpolicies\\\": the server has received too many requests and has asked us to try again later\") has prevented the request from succeeding", Reason:"InternalError", Details:(*v1.StatusDetails)(0xc002eb2ea0), Code:500}})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:335
				
from junit_skew15.xml

Filter through log files | View test history on testgrid


Kubernetes e2e suite [k8s.io] [sig-node] Security Context should support container.SecurityContext.RunAsUser [LinuxOnly] 13s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[k8s\.io\]\s\[sig\-node\]\sSecurity\sContext\sshould\ssupport\scontainer\.SecurityContext\.RunAsUser\s\[LinuxOnly\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
Unexpected error:
    <*errors.StatusError | 0xc0011fae60>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {
                SelfLink: "",
                ResourceVersion: "",
                Continue: "",
                RemainingItemCount: nil,
            },
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/namespaces/security-context-7131/serviceaccounts?fieldSelector=metadata.name%3Ddefault&amp;watch=true\\\": the server has received too many requests and has asked us to try again later\") has prevented the request from succeeding (get serviceaccounts)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceaccounts",
                UID: "",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/namespaces/security-context-7131/serviceaccounts?fieldSelector=metadata.name%3Ddefault&amp;watch=true\": the server has received too many requests and has asked us to try again later",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/namespaces/security-context-7131/serviceaccounts?fieldSelector=metadata.name%3Ddefault&watch=true\": the server has received too many requests and has asked us to try again later") has prevented the request from succeeding (get serviceaccounts)
occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:217
				
from junit_skew13.xml

Filter through log files | View test history on testgrid


Kubernetes e2e suite [k8s.io] [sig-node] Security Context should support pod.Spec.SecurityContext.RunAsUser And pod.Spec.SecurityContext.RunAsGroup [LinuxOnly] 25s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[k8s\.io\]\s\[sig\-node\]\sSecurity\sContext\sshould\ssupport\spod\.Spec\.SecurityContext\.RunAsUser\sAnd\spod\.Spec\.SecurityContext\.RunAsGroup\s\[LinuxOnly\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  4 20:53:08.047: Couldn't delete ns: "security-context-1637": an error on the server ("Internal Server Error: \"/apis/scalingpolicy.kope.io/v1alpha1/namespaces/security-context-1637/scalingpolicies\": the server has received too many requests and has asked us to try again later") has prevented the request from succeeding (&errors.StatusError{ErrStatus:v1.Status{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ListMeta:v1.ListMeta{SelfLink:"", ResourceVersion:"", Continue:"", RemainingItemCount:(*int64)(nil)}, Status:"Failure", Message:"an error on the server (\"Internal Server Error: \\\"/apis/scalingpolicy.kope.io/v1alpha1/namespaces/security-context-1637/scalingpolicies\\\": the server has received too many requests and has asked us to try again later\") has prevented the request from succeeding", Reason:"InternalError", Details:(*v1.StatusDetails)(0xc0014d1620), Code:500}})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:335
				
from junit_skew12.xml

Filter through log files | View test history on testgrid


Kubernetes e2e suite [k8s.io] [sig-node] Security Context should support pod.Spec.SecurityContext.RunAsUser [LinuxOnly] 16s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[k8s\.io\]\s\[sig\-node\]\sSecurity\sContext\sshould\ssupport\spod\.Spec\.SecurityContext\.RunAsUser\s\[LinuxOnly\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/security_context.go:74
Error creating Pod
Unexpected error:
    <*errors.StatusError | 0xc0027ef720>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {
                SelfLink: "",
                ResourceVersion: "",
                Continue: "",
                RemainingItemCount: nil,
            },
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/namespaces/security-context-7369/pods\\\": the server has received too many requests and has asked us to try again later\") has prevented the request from succeeding (post pods)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "pods",
                UID: "",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/namespaces/security-context-7369/pods\": the server has received too many requests and has asked us to try again later",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/namespaces/security-context-7369/pods\": the server has received too many requests and has asked us to try again later") has prevented the request from succeeding (post pods)
occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:80
				
from junit_skew21.xml

Filter through log files | View test history on testgrid


Kubernetes e2e suite [k8s.io] [sig-node] Security Context should support pod.Spec.SecurityContext.SupplementalGroups [LinuxOnly] 35s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[k8s\.io\]\s\[sig\-node\]\sSecurity\sContext\sshould\ssupport\spod\.Spec\.SecurityContext\.SupplementalGroups\s\[LinuxOnly\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  4 21:05:21.003: Couldn't delete ns: "security-context-4104": an error on the server ("Internal Server Error: \"/apis/kubectl-crd-test.k8s.io/v1/namespaces/security-context-4104/e2e-test-kubectl-9471-crds\": the server has received too many requests and has asked us to try again later") has prevented the request from succeeding (&errors.StatusError{ErrStatus:v1.Status{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ListMeta:v1.ListMeta{SelfLink:"", ResourceVersion:"", Continue:"", RemainingItemCount:(*int64)(nil)}, Status:"Failure", Message:"an error on the server (\"Internal Server Error: \\\"/apis/kubectl-crd-test.k8s.io/v1/namespaces/security-context-4104/e2e-test-kubectl-9471-crds\\\": the server has received too many requests and has asked us to try again later\") has prevented the request from succeeding", Reason:"InternalError", Details:(*v1.StatusDetails)(0xc003eda1e0), Code:500}})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:335
				
from junit_skew09.xml

Filter through log files | View test history on testgrid


Kubernetes e2e suite [k8s.io] [sig-node] crictl should be able to run crictl on the node 52s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[k8s\.io\]\s\[sig\-node\]\scrictl\sshould\sbe\sable\sto\srun\scrictl\son\sthe\snode$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  4 20:51:08.268: Couldn't delete ns: "crictl-9830": an error on the server ("Internal Server Error: \"/apis/kubectl-crd-test.k8s.io/v1/namespaces/crictl-9830/e2e-test-kubectl-9471-crds\": the server has received too many requests and has asked us to try again later") has prevented the request from succeeding (&errors.StatusError{ErrStatus:v1.Status{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ListMeta:v1.ListMeta{SelfLink:"", ResourceVersion:"", Continue:"", RemainingItemCount:(*int64)(nil)}, Status:"Failure", Message:"an error on the server (\"Internal Server Error: \\\"/apis/kubectl-crd-test.k8s.io/v1/namespaces/crictl-9830/e2e-test-kubectl-9471-crds\\\": the server has received too many requests and has asked us to try again later\") has prevented the request from succeeding", Reason:"InternalError", Details:(*v1.StatusDetails)(0xc001defda0), Code:500}})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:335
				
from junit_skew24.xml

Filter through log files | View test history on testgrid


Kubernetes e2e suite [k8s.io] [sig-node] kubelet [k8s.io] [sig-node] Clean up pods on node kubelet should be able to delete 10 pods per node in 1m0s. 2m47s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[k8s\.io\]\s\[sig\-node\]\skubelet\s\[k8s\.io\]\s\[sig\-node\]\sClean\sup\spods\son\snode\skubelet\sshould\sbe\sable\sto\sdelete\s10\spods\sper\snode\sin\s1m0s\.$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/kubelet.go:315
Unexpected error:
    <*errors.errorString | 0xc0002a38b0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/kubelet.go:350
				
from junit_skew14.xml

Filter through log files | View test history on testgrid
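
Unlike the throttling failures, this one surfaces only as "timed out waiting for the condition". That string is the fixed message of wait.ErrWaitTimeout, which the apimachinery polling helpers return when a condition never becomes true within its deadline; a minimal reproduction:

    package main

    import (
        "fmt"
        "time"

        "k8s.io/apimachinery/pkg/util/wait"
    )

    func main() {
        // A condition that never succeeds forces Poll to time out.
        err := wait.Poll(100*time.Millisecond, 500*time.Millisecond, func() (bool, error) {
            return false, nil
        })
        fmt.Println(err == wait.ErrWaitTimeout) // true
        fmt.Println(err)                        // timed out waiting for the condition
    }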


Kubernetes e2e suite [sig-api-machinery] AdmissionWebhook Should be able to deny attaching pod 32s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-api\-machinery\]\sAdmissionWebhook\sShould\sbe\sable\sto\sdeny\sattaching\spod$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:100
waiting for service webhook-73/e2e-test-webhook have 1 endpoint
Unexpected error:
    <*errors.StatusError | 0xc001c14e60>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {
                SelfLink: "",
                ResourceVersion: "",
                Continue: "",
                RemainingItemCount: nil,
            },
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/namespaces/webhook-73/endpoints\\\": the server has received too many requests and has asked us to try again later\") has prevented the request from succeeding (get endpoints)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "endpoints",
                UID: "",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/namespaces/webhook-73/endpoints\": the server has received too many requests and has asked us to try again later",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/namespaces/webhook-73/endpoints\": the server has received too many requests and has asked us to try again later") has prevented the request from succeeding (get endpoints)
occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:411
				
from junit_skew11.xml

Filter through log files | View test history on testgrid
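
The step that failed here is the webhook suite waiting for its test service to gain an endpoint before registering the webhook. A sketch of that kind of readiness poll, written to tolerate the throttled apiserver (the clientset, names, and timeouts are placeholders, not the suite's actual values):

    package main

    import (
        "time"

        apierrors "k8s.io/apimachinery/pkg/api/errors"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
    )

    // waitForEndpoint polls until the service's Endpoints object carries at
    // least one address.
    func waitForEndpoint(cs kubernetes.Interface, ns, svc string) error {
        return wait.PollImmediate(time.Second, 2*time.Minute, func() (bool, error) {
            ep, err := cs.CoreV1().Endpoints(ns).Get(svc, metav1.GetOptions{})
            if apierrors.IsInternalError(err) || apierrors.IsTooManyRequests(err) {
                return false, nil // throttled get: retry instead of failing
            }
            if err != nil {
                return false, err
            }
            for _, subset := range ep.Subsets {
                if len(subset.Addresses) > 0 {
                    return true, nil
                }
            }
            return false, nil
        })
    }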


Kubernetes e2e suite [sig-api-machinery] AdmissionWebhook Should be able to deny custom resource creation and deletion 1m0s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-api\-machinery\]\sAdmissionWebhook\sShould\sbe\sable\sto\sdeny\scustom\sresource\screation\sand\sdeletion$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:139
Feb  4 20:53:14.508: failed to create CustomResourceDefinition: an error on the server ("Internal Server Error: \"/apis/apiextensions.k8s.io/v1beta1/customresourcedefinitions\": the server has received too many requests and has asked us to try again later") has prevented the request from succeeding (post customresourcedefinitions.apiextensions.k8s.io)
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/utils/crd/crd_util.go:92
				
from junit_skew21.xml

Filter through log files | View test history on testgrid


Kubernetes e2e suite [sig-api-machinery] AdmissionWebhook Should be able to deny pod and configmap creation 1m20s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-api\-machinery\]\sAdmissionWebhook\sShould\sbe\sable\sto\sdeny\spod\sand\sconfigmap\screation$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:127
Feb  4 21:00:39.475: expect error contains "the configmap contains unwanted key and value", got "an error on the server (\"Internal Server Error: \\\"/api/v1/namespaces/webhook-6207/configmaps\\\": the server has received too many requests and has asked us to try again later\") has prevented the request from succeeding (post configmaps)"
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:731
				
from junit_skew24.xml

Filter through log files | View test history on testgrid


Kubernetes e2e suite [sig-api-machinery] AdmissionWebhook Should deny crd creation 47s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-api\-machinery\]\sAdmissionWebhook\sShould\sdeny\scrd\screation$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:100
creating service e2e-test-webhook in namespace webhook-1874
Unexpected error:
    <*errors.StatusError | 0xc002142960>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {
                SelfLink: "",
                ResourceVersion: "",
                Continue: "",
                RemainingItemCount: nil,
            },
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/namespaces/webhook-1874/services\\\": the server has received too many requests and has asked us to try again later\") has prevented the request from succeeding (post services)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "services",
                UID: "",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/namespaces/webhook-1874/services\": the server has received too many requests and has asked us to try again later",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/namespaces/webhook-1874/services\": the server has received too many requests and has asked us to try again later") has prevented the request from succeeding (post services)
occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:407
				
from junit_skew22.xml

Filter through log files | View test history on testgrid


Kubernetes e2e suite [sig-api-machinery] AdmissionWebhook Should honor timeout 9.42s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-api\-machinery\]\sAdmissionWebhook\sShould\shonor\stimeout$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
Unexpected error:
    <*errors.StatusError | 0xc0024e66e0>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {
                SelfLink: "",
                ResourceVersion: "",
                Continue: "",
                RemainingItemCount: nil,
            },
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/namespaces/webhook-8535/serviceaccounts?fieldSelector=metadata.name%3Ddefault&amp;watch=true\\\": the server has received too many requests and has asked us to try again later\") has prevented the request from succeeding (get serviceaccounts)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceaccounts",
                UID: "",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/namespaces/webhook-8535/serviceaccounts?fieldSelector=metadata.name%3Ddefault&amp;watch=true\": the server has received too many requests and has asked us to try again later",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/namespaces/webhook-8535/serviceaccounts?fieldSelector=metadata.name%3Ddefault&watch=true\": the server has received too many requests and has asked us to try again later") has prevented the request from succeeding (get serviceaccounts)
occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:217
				
from junit_skew15.xml

Filter through log files | View test history on testgrid


Kubernetes e2e suite [sig-api-machinery] AdmissionWebhook Should mutate configmap 13s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-api\-machinery\]\sAdmissionWebhook\sShould\smutate\sconfigmap$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
Unexpected error:
    <*errors.withStack | 0xc0026a14e0>: {
        error: {
            cause: {
                ErrStatus: {
                    TypeMeta: {Kind: "", APIVersion: ""},
                    ListMeta: {
                        SelfLink: "",
                        ResourceVersion: "",
                        Continue: "",
                        RemainingItemCount: nil,
                    },
                    Status: "Failure",
                    Message: "an error on the server (\"Internal Server Error: \\\"/apis/rbac.authorization.k8s.io/v1beta1/namespaces/webhook-3647/rolebindings\\\": the server has received too many requests and has asked us to try again later\") has prevented the request from succeeding (post rolebindings.rbac.authorization.k8s.io)",
                    Reason: "InternalError",
                    Details: {
                        Name: "",
                        Group: "rbac.authorization.k8s.io",
                        Kind: "rolebindings",
                        UID: "",
                        Causes: [
                            {
                                Type: "UnexpectedServerResponse",
                                Message: "Internal Server Error: \"/apis/rbac.authorization.k8s.io/v1beta1/namespaces/webhook-3647/rolebindings\": the server has received too many requests and has asked us to try again later",
                                Field: "",
                            },
                        ],
                        RetryAfterSeconds: 0,
                    },
                    Code: 500,
                },
            },
            msg: "binding ClusterRole/e2e-test-privileged-psp into \"webhook-3647\" for [{ServiceAccount  default webhook-3647}]",
        },
        stack: [0x15a53ee, 0x15eeda1, 0x15eed28, 0x15c416d, 0x15c297b, 0x7ac6dc, 0x7ac34f, 0x7ac774, 0x7b2441, 0x7b2064, 0x7b7acf, 0x7b75e4, 0x7b6e27, 0x7b948e, 0x7bbfb7, 0x7bbcfd, 0x36fd267, 0x370025b, 0x507960, 0x4607c1],
    }
    binding ClusterRole/e2e-test-privileged-psp into "webhook-3647" for [{ServiceAccount  default webhook-3647}]: an error on the server ("Internal Server Error: \"/apis/rbac.authorization.k8s.io/v1beta1/namespaces/webhook-3647/rolebindings\": the server has received too many requests and has asked us to try again later") has prevented the request from succeeding (post rolebindings.rbac.authorization.k8s.io)
occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/psp_util.go:151
				
from junit_skew24.xml

Filter through log files | View test history on testgrid


Kubernetes e2e suite [sig-api-machinery] AdmissionWebhook Should mutate custom resource 41s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-api\-machinery\]\sAdmissionWebhook\sShould\smutate\scustom\sresource$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:100
waiting for the deployment of image gcr.io/kubernetes-e2e-test-images/webhook:1.15v1 in sample-webhook-deployment in webhook-8700 to complete
Unexpected error:
    <*errors.errorString | 0xc0031bbe60>: {
        s: "deployment \"sample-webhook-deployment\" failed to create new replica set",
    }
    deployment "sample-webhook-deployment" failed to create new replica set
occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:382
				
from junit_skew07.xml

Filter through log files | View test history on testgrid
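
This failure is different in kind: the webhook's backing Deployment never produced its new ReplicaSet. One way to dig into such a failure is to read the Deployment's conditions, where a ReplicaFailure condition, if present, records the reason; a sketch with the clientset and names assumed:

    package main

    import (
        "fmt"

        appsv1 "k8s.io/api/apps/v1"
        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // explainDeploymentFailure prints any ReplicaFailure condition on the
    // deployment, where replica set creation problems are recorded.
    func explainDeploymentFailure(cs kubernetes.Interface, ns, name string) error {
        d, err := cs.AppsV1().Deployments(ns).Get(name, metav1.GetOptions{})
        if err != nil {
            return err
        }
        for _, cond := range d.Status.Conditions {
            if cond.Type == appsv1.DeploymentReplicaFailure && cond.Status == corev1.ConditionTrue {
                fmt.Printf("replica set failure: %s: %s\n", cond.Reason, cond.Message)
            }
        }
        return nil
    }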


Kubernetes e2e suite [sig-api-machinery] AdmissionWebhook Should mutate custom resource with different stored version 22s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-api\-machinery\]\sAdmissionWebhook\sShould\smutate\scustom\sresource\swith\sdifferent\sstored\sversion$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
Unexpected error:
    <*errors.withStack | 0xc001f6dc00>: {
        error: {
            cause: {
                ErrStatus: {
                    TypeMeta: {Kind: "", APIVersion: ""},
                    ListMeta: {
                        SelfLink: "",
                        ResourceVersion: "",
                        Continue: "",
                        RemainingItemCount: nil,
                    },
                    Status: "Failure",
                    Message: "an error on the server (\"Internal Server Error: \\\"/apis/rbac.authorization.k8s.io/v1beta1/namespaces/webhook-9377/rolebindings\\\": the server has received too many requests and has asked us to try again later\") has prevented the request from succeeding (post rolebindings.rbac.authorization.k8s.io)",
                    Reason: "InternalError",
                    Details: {
                        Name: "",
                        Group: "rbac.authorization.k8s.io",
                        Kind: "rolebindings",
                        UID: "",
                        Causes: [
                            {
                                Type: "UnexpectedServerResponse",
                                Message: "Internal Server Error: \"/apis/rbac.authorization.k8s.io/v1beta1/namespaces/webhook-9377/rolebindings\": the server has received too many requests and has asked us to try again later",
                                Field: "",
                            },
                        ],
                        RetryAfterSeconds: 0,
                    },
                    Code: 500,
                },
            },
            msg: "binding ClusterRole/e2e-test-privileged-psp into \"webhook-9377\" for [{ServiceAccount  default webhook-9377}]",
        },
        stack: [0x15a53ee, 0x15eeda1, 0x15eed28, 0x15c416d, 0x15c297b, 0x7ac6dc, 0x7ac34f, 0x7ac774, 0x7b2441, 0x7b2064, 0x7b7acf, 0x7b75e4, 0x7b6e27, 0x7b948e, 0x7bbfb7, 0x7bbcfd, 0x36fd267, 0x370025b, 0x507960, 0x4607c1],
    }
    binding ClusterRole/e2e-test-privileged-psp into "webhook-9377" for [{ServiceAccount  default webhook-9377}]: an error on the server ("Internal Server Error: \"/apis/rbac.authorization.k8s.io/v1beta1/namespaces/webhook-9377/rolebindings\": the server has received too many requests and has asked us to try again later") has prevented the request from succeeding (post rolebindings.rbac.authorization.k8s.io)
occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/psp_util.go:151
				
from junit_skew10.xml

Filter through log files | View test history on testgrid


Kubernetes e2e suite [sig-api-machinery] AdmissionWebhook Should mutate custom resource with pruning 1m3s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-api\-machinery\]\sAdmissionWebhook\sShould\smutate\scustom\sresource\swith\spruning$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:100
waiting for service webhook-1683/e2e-test-webhook have 1 endpoint
Unexpected error:
    <*errors.StatusError | 0xc002101ae0>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {
                SelfLink: "",
                ResourceVersion: "",
                Continue: "",
                RemainingItemCount: nil,
            },
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/namespaces/webhook-1683/endpoints\\\": the server has received too many requests and has asked us to try again later\") has prevented the request from succeeding (get endpoints)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "endpoints",
                UID: "",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/namespaces/webhook-1683/endpoints\": the server has received too many requests and has asked us to try again later",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/namespaces/webhook-1683/endpoints\": the server has received too many requests and has asked us to try again later") has prevented the request from succeeding (get endpoints)
occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:411
				
from junit_skew02.xml

Filter through log files | View test history on testgrid


Kubernetes e2e suite [sig-api-machinery] AdmissionWebhook Should mutate pod and apply defaults after mutation 9.23s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-api\-machinery\]\sAdmissionWebhook\sShould\smutate\spod\sand\sapply\sdefaults\safter\smutation$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
Unexpected error:
    <*errors.withStack | 0xc004145320>: {
        error: {
            cause: {
                ErrStatus: {
                    TypeMeta: {Kind: "", APIVersion: ""},
                    ListMeta: {
                        SelfLink: "",
                        ResourceVersion: "",
                        Continue: "",
                        RemainingItemCount: nil,
                    },
                    Status: "Failure",
                    Message: "an error on the server (\"Internal Server Error: \\\"/apis/rbac.authorization.k8s.io/v1beta1/namespaces/webhook-6477/rolebindings\\\": the server has received too many requests and has asked us to try again later\") has prevented the request from succeeding (post rolebindings.rbac.authorization.k8s.io)",
                    Reason: "InternalError",
                    Details: {
                        Name: "",
                        Group: "rbac.authorization.k8s.io",
                        Kind: "rolebindings",
                        UID: "",
                        Causes: [
                            {
                                Type: "UnexpectedServerResponse",
                                Message: "Internal Server Error: \"/apis/rbac.authorization.k8s.io/v1beta1/namespaces/webhook-6477/rolebindings\": the server has received too many requests and has asked us to try again later",
                                Field: "",
                            },
                        ],
                        RetryAfterSeconds: 0,
                    },
                    Code: 500,
                },
            },
            msg: "binding ClusterRole/e2e-test-privileged-psp into \"webhook-6477\" for [{ServiceAccount  default webhook-6477}]",
        },
        stack: [0x15a53ee, 0x15eeda1, 0x15eed28, 0x15c416d, 0x15c297b, 0x7ac6dc, 0x7ac34f, 0x7ac774, 0x7b2441, 0x7b2064, 0x7b7acf, 0x7b75e4, 0x7b6e27, 0x7b948e, 0x7bbfb7, 0x7bbcfd, 0x36fd267, 0x370025b, 0x507960, 0x4607c1],
    }
    binding ClusterRole/e2e-test-privileged-psp into "webhook-6477" for [{ServiceAccount  default webhook-6477}]: an error on the server ("Internal Server Error: \"/apis/rbac.authorization.k8s.io/v1beta1/namespaces/webhook-6477/rolebindings\": the server has received too many requests and has asked us to try again later") has prevented the request from succeeding (post rolebindings.rbac.authorization.k8s.io)
occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/psp_util.go:151
				
				from junit_skew12.xml

Filter through log files | View test history on testgrid
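
The RoleBinding POST above failed on the same overload signal (the test hit the rbac v1beta1 endpoint; the sketch below uses rbac/v1). One standard mitigation is to wrap the write in client-go's retry helper and treat overload responses as retriable; overload is normally a 429, but as in the dump above it can surface as a 500 InternalError. A sketch under those assumptions (cs, ns, rb are placeholders):

package e2eutil

import (
	"context"

	rbacv1 "k8s.io/api/rbac/v1"
	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/util/retry"
)

// createRoleBindingWithRetry retries the create while the apiserver is
// shedding load, treating both 429 and the 500 wrapper seen above as
// transient. Any other error fails immediately.
func createRoleBindingWithRetry(cs kubernetes.Interface, ns string, rb *rbacv1.RoleBinding) error {
	return retry.OnError(retry.DefaultBackoff, func(err error) bool {
		return apierrors.IsTooManyRequests(err) || apierrors.IsInternalError(err)
	}, func() error {
		_, err := cs.RbacV1().RoleBindings(ns).Create(context.TODO(), rb, metav1.CreateOptions{})
		return err
	})
}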


Kubernetes e2e suite [sig-api-machinery] AdmissionWebhook Should not be able to mutate or prevent deletion of webhook configuration objects 17s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-api\-machinery\]\sAdmissionWebhook\sShould\snot\sbe\sable\sto\smutate\sor\sprevent\sdeletion\sof\swebhook\sconfiguration\sobjects$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
Unexpected error:
    <*errors.StatusError | 0xc000416460>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {
                SelfLink: "",
                ResourceVersion: "",
                Continue: "",
                RemainingItemCount: nil,
            },
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/api/v1/namespaces/webhook-6200/serviceaccounts?fieldSelector=metadata.name%3Ddefault&amp;watch=true\\\": the server has received too many requests and has asked us to try again later\") has prevented the request from succeeding (get serviceaccounts)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "serviceaccounts",
                UID: "",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/api/v1/namespaces/webhook-6200/serviceaccounts?fieldSelector=metadata.name%3Ddefault&amp;watch=true\": the server has received too many requests and has asked us to try again later",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/api/v1/namespaces/webhook-6200/serviceaccounts?fieldSelector=metadata.name%3Ddefault&watch=true\": the server has received too many requests and has asked us to try again later") has prevented the request from succeeding (get serviceaccounts)
occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:217
				
				from junit_skew25.xml

Filter through log files | View test history on testgrid
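
Here the 500 hit the framework's namespace-setup gate: before running the test body it watches for the namespace's default ServiceAccount to exist. The rejected request is just a field-selector watch; a minimal sketch of opening that watch, same client-go assumptions and illustrative names as above:

package e2eutil

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/fields"
	"k8s.io/apimachinery/pkg/watch"
	"k8s.io/client-go/kubernetes"
)

// watchDefaultServiceAccount opens the same kind of request the 500 above
// rejected: a watch on serviceaccounts in one namespace, filtered by name.
func watchDefaultServiceAccount(cs kubernetes.Interface, ns string) (watch.Interface, error) {
	return cs.CoreV1().ServiceAccounts(ns).Watch(context.TODO(), metav1.ListOptions{
		FieldSelector: fields.OneTermEqualSelector("metadata.name", "default").String(),
	})
}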


Kubernetes e2e suite [sig-api-machinery] AdmissionWebhook Should unconditionally reject operations on fail closed webhook 12s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-api\-machinery\]\sAdmissionWebhook\sShould\sunconditionally\sreject\soperations\son\sfail\sclosed\swebhook$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
Unexpected error:
    <*errors.withStack | 0xc001d82260>: {
        error: {
            cause: {
                ErrStatus: {
                    TypeMeta: {Kind: "", APIVersion: ""},
                    ListMeta: {
                        SelfLink: "",
                        ResourceVersion: "",
                        Continue: "",
                        RemainingItemCount: nil,
                    },
                    Status: "Failure",
                    Message: "rolebindings.rbac.authorization.k8s.io \"webhook-184--e2e-test-privileged-psp\" is forbidden: user \"pr-kubekins@kubernetes-jenkins-pull.iam.gserviceaccount.com\" (groups=[\"system:authenticated\"]) is attempting to grant RBAC permissions not currently held:\n{APIGroups:[\"extensions\"], Resources:[\"podsecuritypolicies\"], ResourceNames:[\"e2e-test-privileged-psp\"], Verbs:[\"use\"]}",
                    Reason: "Forbidden",
                    Details: {
                        Name: "webhook-184--e2e-test-privileged-psp",
                        Group: "rbac.authorization.k8s.io",
                        Kind: "rolebindings",
                        UID: "",
                        Causes: nil,
                        RetryAfterSeconds: 0,
                    },
                    Code: 403,
                },
            },
            msg: "binding ClusterRole/e2e-test-privileged-psp into \"webhook-184\" for [{ServiceAccount  default webhook-184}]",
        },
        stack: [0x15a53ee, 0x15eeda1, 0x15eed28, 0x15c416d, 0x15c297b, 0x7ac6dc, 0x7ac34f, 0x7ac774, 0x7b2441, 0x7b2064, 0x7b7acf, 0x7b75e4, 0x7b6e27, 0x7b948e, 0x7bbfb7, 0x7bbcfd, 0x36fd267, 0x370025b, 0x507960, 0x4607c1],
    }
    binding ClusterRole/e2e-test-privileged-psp into "webhook-184" for [{ServiceAccount  default webhook-184}]: rolebindings.rbac.authorization.k8s.io "webhook-184--e2e-test-privileged-psp" is forbidden: user "pr-kubekins@kubernetes-jenkins-pull.iam.gserviceaccount.com" (groups=["system:authenticated"]) is attempting to grant RBAC permissions not currently held:
    {APIGroups:["extensions"], Resources:["podsecuritypolicies"], ResourceNames:["e2e-test-privileged-psp"], Verbs:["use"]}
occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/psp_util.go:151
				
				from junit_skew01.xml

Filter through log files | View test history on testgrid
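
Unlike the other failures, this one is not apiserver overload: it is RBAC escalation prevention working as designed. A user may only create a RoleBinding granting permissions it already holds (or it needs the escalate/bind verbs), and the pr-kubekins service account evidently lacks "use" on that PodSecurityPolicy in the extensions group. A sketch of the two objects involved, with an illustrative helper name (buildPSPBinding); PSPs were served under both the extensions and policy groups in this release line:

package e2eutil

import (
	rbacv1 "k8s.io/api/rbac/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// buildPSPBinding returns a ClusterRole granting "use" on one named
// PodSecurityPolicy, plus a RoleBinding attaching it to the namespace's
// default ServiceAccount. Creating the binding is rejected with 403
// unless the creator itself holds this "use" permission (or may escalate).
func buildPSPBinding(ns string) (*rbacv1.ClusterRole, *rbacv1.RoleBinding) {
	role := &rbacv1.ClusterRole{
		ObjectMeta: metav1.ObjectMeta{Name: "e2e-test-privileged-psp"},
		Rules: []rbacv1.PolicyRule{{
			APIGroups:     []string{"policy", "extensions"},
			Resources:     []string{"podsecuritypolicies"},
			ResourceNames: []string{"e2e-test-privileged-psp"},
			Verbs:         []string{"use"},
		}},
	}
	binding := &rbacv1.RoleBinding{
		ObjectMeta: metav1.ObjectMeta{Name: ns + "--e2e-test-privileged-psp", Namespace: ns},
		RoleRef: rbacv1.RoleRef{
			APIGroup: rbacv1.GroupName,
			Kind:     "ClusterRole",
			Name:     "e2e-test-privileged-psp",
		},
		Subjects: []rbacv1.Subject{{
			Kind:      rbacv1.ServiceAccountKind,
			Name:      "default",
			Namespace: ns,
		}},
	}
	return role, binding
}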


Kubernetes e2e suite [sig-api-machinery] Aggregator Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance] 1m0s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-api\-machinery\]\sAggregator\sShould\sbe\sable\sto\ssupport\sthe\s1\.10\sSample\sAPI\sServer\susing\sthe\scurrent\sAggregator\s\[Conformance\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
creating apiservice v1alpha1.wardle.k8s.io with namespace aggregator-8632
Unexpected error:
    <*errors.StatusError | 0xc00065cbe0>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {
                SelfLink: "",
                ResourceVersion: "",
                Continue: "",
                RemainingItemCount: nil,
            },
            Status: "Failure",
            Message: "an error on the server (\"Internal Server Error: \\\"/apis/apiregistration.k8s.io/v1beta1/apiservices\\\": the server has received too many requests and has asked us to try again later\") has prevented the request from succeeding (post apiservices.apiregistration.k8s.io)",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "apiregistration.k8s.io",
                Kind: "apiservices",
                UID: "",
                Causes: [
                    {
                        Type: "UnexpectedServerResponse",
                        Message: "Internal Server Error: \"/apis/apiregistration.k8s.io/v1beta1/apiservices\": the server has received too many requests and has asked us to try again later",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    an error on the server ("Internal Server Error: \"/apis/apiregistration.k8s.io/v1beta1/apiservices\": the server has received too many requests and has asked us to try again later") has prevented the request from succeeding (post apiservices.apiregistration.k8s.io)
occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:345
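
The Aggregator test registers the sample server by POSTing an APIService object, which is the request the overloaded apiserver rejected here. A sketch of the object being created, assuming the kube-aggregator v1beta1 types of this release line; the backing Service name and the priority values are illustrative:

package e2eutil

import (
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	apiregistrationv1beta1 "k8s.io/kube-aggregator/pkg/apis/apiregistration/v1beta1"
)

// sampleAPIService declares v1alpha1.wardle.k8s.io as served by a Service
// in the test namespace. Once created (and the backend verifies against
// caBundle), the aggregator proxies /apis/wardle.k8s.io/v1alpha1 to it.
func sampleAPIService(ns string, caBundle []byte) *apiregistrationv1beta1.APIService {
	return &apiregistrationv1beta1.APIService{
		ObjectMeta: metav1.ObjectMeta{Name: "v1alpha1.wardle.k8s.io"},
		Spec: apiregistrationv1beta1.APIServiceSpec{
			Service: &apiregistrationv1beta1.ServiceReference{
				Namespace: ns,
				Name:      "sample-api-server", // illustrative name
			},
			Group:                "wardle.k8s.io",
			Version:              "v1alpha1",
			CABundle:             caBundle, // PEM CA for the backing service
			GroupPriorityMinimum: 2000,
			VersionPriority:      200,
		},
	}
}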