Result: FAILURE
Tests: 355 failed / 80 succeeded
Started: 2019-07-20 12:24
Elapsed: 11h31m
Revision:
Builder: gke-prow-ssd-pool-1a225945-sw6v
pod: 43aa2c8b-aae9-11e9-b82b-365474bd0c86
resultstore: https://source.cloud.google.com/results/invocations/ba09155f-be10-490c-a0ff-2ad6ef85fd75/targets/test
infra-commit: a7f2c5488
job-version: v1.14.5-beta.0.1+7936da50c68f42
master_os_image:
node_os_image: cos-u-73-11647-217-0
revision: v1.14.5-beta.0.1+7936da50c68f42

Test Failures


Cluster upgrade [sig-apps] daemonset-upgrade 29m57s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Cluster\supgrade\s\[sig\-apps\]\sdaemonset\-upgrade$'
Jul 20 13:03:19.180: expected DaemonSet pod to be running on all nodes, it was not

k8s.io/kubernetes/test/e2e/upgrades/apps.(*DaemonSetUpgradeTest).validateRunningDaemonSet(0x7cbc778, 0xc000b3d400)
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/upgrades/apps/daemonsets.go:117 +0x1eb
k8s.io/kubernetes/test/e2e/upgrades/apps.(*DaemonSetUpgradeTest).Test(0x7cbc778, 0xc000b3d400, 0xc002bb25a0, 0x2)
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/upgrades/apps/daemonsets.go:104 +0xa4
k8s.io/kubernetes/test/e2e/lifecycle.(*chaosMonkeyAdapter).Test(0xc001b5e100, 0xc001fe8ba0)
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/lifecycle/cluster_upgrade.go:397 +0x309
k8s.io/kubernetes/test/e2e/chaosmonkey.(*Chaosmonkey).Do.func1(0xc001fe8ba0, 0xc002a85e60)
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/chaosmonkey/chaosmonkey.go:90 +0x76
created by k8s.io/kubernetes/test/e2e/chaosmonkey.(*Chaosmonkey).Do
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/chaosmonkey/chaosmonkey.go:87 +0xa7
				from junit_upgradeupgrades.xml

Filter through log files | View test history on testgrid
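
For a manual look at the condition this test asserts (a DaemonSet pod Running on every schedulable node), something like the following would work, assuming kubectl access to the test cluster; the namespace and node name are placeholders, not the ones the e2e framework generates:

# Compare DESIRED vs READY for the DaemonSet, then find nodes without a Running pod.
kubectl get daemonset -n <test-namespace> -o wide
kubectl get pods -n <test-namespace> -o wide
# Taints or node readiness problems are common reasons a DaemonSet pod is missing from a node.
kubectl describe node <node-name> | grep -i -A3 taints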


Cluster upgrade [sig-apps] job-upgrade 29m57s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Cluster\supgrade\s\[sig\-apps\]\sjob\-upgrade$'
Unexpected error:
    <*errors.errorString | 0xc002e979e0>: {
        s: "job has 0 of 2 expected running pods: ",
    }
    job has 0 of 2 expected running pods: 
occurred

k8s.io/kubernetes/test/e2e/upgrades/apps.(*JobUpgradeTest).Test(0x7cc57c0, 0xc000b3c280, 0xc002bb25a0, 0x2)
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/upgrades/apps/job.go:58 +0xe4
k8s.io/kubernetes/test/e2e/lifecycle.(*chaosMonkeyAdapter).Test(0xc001b5e000, 0xc001fe8b20)
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/lifecycle/cluster_upgrade.go:397 +0x309
k8s.io/kubernetes/test/e2e/chaosmonkey.(*Chaosmonkey).Do.func1(0xc001fe8b20, 0xc002a85e20)
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/chaosmonkey/chaosmonkey.go:90 +0x76
created by k8s.io/kubernetes/test/e2e/chaosmonkey.(*Chaosmonkey).Do
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/chaosmonkey/chaosmonkey.go:87 +0xa7
				from junit_upgradeupgrades.xml

Filter through log files | View test history on testgrid
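
A rough way to see why the Job has none of its expected Running pods, assuming kubectl access; <test-namespace> and <job-name> are placeholders:

kubectl get job -n <test-namespace>
# The Job controller labels its pods with job-name=<job-name>.
kubectl get pods -n <test-namespace> -l job-name=<job-name> -o wide
# Pod events usually show whether the pods failed scheduling, image pull, or admission.
kubectl describe pods -n <test-namespace> -l job-name=<job-name>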


Cluster upgrade [sig-apps] replicaset-upgrade 30m57s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Cluster\supgrade\s\[sig\-apps\]\sreplicaset\-upgrade$'
Unexpected error:
    <*errors.errorString | 0xc002d10680>: {
        s: "replicaset \"rs\" never became ready",
    }
    replicaset "rs" never became ready
occurred

k8s.io/kubernetes/test/e2e/upgrades/apps.(*ReplicaSetUpgradeTest).Test(0x7cc0fe0, 0xc000a19400, 0xc002bb25a0, 0x2)
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/upgrades/apps/replicasets.go:88 +0x4df
k8s.io/kubernetes/test/e2e/lifecycle.(*chaosMonkeyAdapter).Test(0xc003165ec0, 0xc001fe8aa0)
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/lifecycle/cluster_upgrade.go:397 +0x309
k8s.io/kubernetes/test/e2e/chaosmonkey.(*Chaosmonkey).Do.func1(0xc001fe8aa0, 0xc002a85df0)
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/chaosmonkey/chaosmonkey.go:90 +0x76
created by k8s.io/kubernetes/test/e2e/chaosmonkey.(*Chaosmonkey).Do
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/chaosmonkey/chaosmonkey.go:87 +0xa7
				from junit_upgradeupgrades.xml

Filter through log files | View test history on testgrid
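
To inspect the ReplicaSet named "rs" from the error by hand, assuming kubectl access; the namespace is a placeholder:

kubectl get replicaset rs -n <test-namespace> -o wide    # DESIRED vs CURRENT vs READY
kubectl describe replicaset rs -n <test-namespace>       # pod creation failures show up as events
kubectl get pods -n <test-namespace> -o wide             # readiness of the pods it owns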


Cluster upgrade apparmor-upgrade 29m57s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Cluster\supgrade\sapparmor\-upgrade$'
Should be able to get pod
Unexpected error:
    <*errors.StatusError | 0xc00288a640>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {
                SelfLink: "",
                ResourceVersion: "",
                Continue: "",
                RemainingItemCount: nil,
            },
            Status: "Failure",
            Message: "pods \"test-apparmor-sp69r\" not found",
            Reason: "NotFound",
            Details: {
                Name: "test-apparmor-sp69r",
                Group: "",
                Kind: "pods",
                UID: "",
                Causes: nil,
                RetryAfterSeconds: 0,
            },
            Code: 404,
        },
    }
    pods "test-apparmor-sp69r" not found
occurred

k8s.io/kubernetes/test/e2e/upgrades.(*AppArmorUpgradeTest).verifyPodStillUp(0x7cbc780, 0xc000b3d7c0)
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/upgrades/apparmor.go:89 +0x156
k8s.io/kubernetes/test/e2e/upgrades.(*AppArmorUpgradeTest).Test(0x7cbc780, 0xc000b3d7c0, 0xc002bb25a0, 0x2)
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/upgrades/apparmor.go:73 +0x5a
k8s.io/kubernetes/test/e2e/lifecycle.(*chaosMonkeyAdapter).Test(0xc001b5e140, 0xc001fe8bc0)
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/lifecycle/cluster_upgrade.go:397 +0x309
k8s.io/kubernetes/test/e2e/chaosmonkey.(*Chaosmonkey).Do.func1(0xc001fe8bc0, 0xc002a85e70)
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/chaosmonkey/chaosmonkey.go:90 +0x76
created by k8s.io/kubernetes/test/e2e/chaosmonkey.(*Chaosmonkey).Do
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/chaosmonkey/chaosmonkey.go:87 +0xa7
				from junit_upgradeupgrades.xml

Filter through log files | View test history on testgrid
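
The NotFound error says the pod test-apparmor-sp69r the test expected to survive the upgrade no longer exists. A quick manual check, assuming kubectl access (events may already have been garbage-collected by the time anyone looks):

kubectl get pods --all-namespaces | grep test-apparmor
kubectl get events --all-namespaces | grep -i apparmor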


Cluster upgrade hpa-upgrade 45m21s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Cluster\supgrade\shpa\-upgrade$'
timeout waiting 15m0s for 1 replicas
Unexpected error:
    <*errors.errorString | 0xc00027dcd0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred

k8s.io/kubernetes/test/e2e/upgrades.(*HPAUpgradeTest).test(0x7cc1000)
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/upgrades/horizontal_pod_autoscalers.go:85 +0x213
k8s.io/kubernetes/test/e2e/upgrades.(*HPAUpgradeTest).Test(0x7cc1000, 0xc000b3ca00, 0xc002bb25a0, 0x2)
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/upgrades/horizontal_pod_autoscalers.go:68 +0x99
k8s.io/kubernetes/test/e2e/lifecycle.(*chaosMonkeyAdapter).Test(0xc001b5e080, 0xc001fe8b60)
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/lifecycle/cluster_upgrade.go:397 +0x309
k8s.io/kubernetes/test/e2e/chaosmonkey.(*Chaosmonkey).Do.func1(0xc001fe8b60, 0xc002a85e40)
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/chaosmonkey/chaosmonkey.go:90 +0x76
created by k8s.io/kubernetes/test/e2e/chaosmonkey.(*Chaosmonkey).Do
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/chaosmonkey/chaosmonkey.go:87 +0xa7
				from junit_upgradeupgrades.xml

Filter through log files | View test history on testgrid
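
The HPA never converged on 1 replica within 15 minutes. A sketch of a manual check, assuming kubectl access and a working metrics pipeline on this 1.14-era cluster; names are placeholders:

kubectl get hpa -n <test-namespace>                    # TARGETS shows <unknown> when metrics are missing
kubectl describe hpa <hpa-name> -n <test-namespace>    # conditions report errors such as FailedGetResourceMetric
kubectl top pods -n <test-namespace>                   # verifies the metrics API is serving pod metrics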


Kubernetes e2e suite [sig-cluster-lifecycle] Upgrade [Feature:Upgrade] cluster upgrade should maintain a functioning cluster [Feature:ClusterUpgrade] 1h18m

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-cluster\-lifecycle\]\sUpgrade\s\[Feature\:Upgrade\]\scluster\supgrade\sshould\smaintain\sa\sfunctioning\scluster\s\[Feature\:ClusterUpgrade\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/lifecycle/cluster_upgrade.go:136
Unexpected error:
    <*errors.errorString | 0xc002e979e0>: {
        s: "job has 0 of 2 expected running pods: ",
    }
    job has 0 of 2 expected running pods: 
occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/upgrades/apps/job.go:58
				
from junit_upgrade01.xml

Filter through log files | View test history on testgrid


Test 9h56m

error during ./hack/ginkgo-e2e.sh --ginkgo.focus=\[Slow\]|\[Serial\]|\[Disruptive\] --ginkgo.skip=\[Flaky\]|\[Feature:.+\] --kubectl-path=../../../../kubernetes_skew/cluster/kubectl.sh --minStartupPods=8 --num-nodes=3 --report-dir=/workspace/_artifacts --disable-log-dump=true: exit status 1
				from junit_runner.xml

Filter through log files | View test history on testgrid


UpgradeTest 1h18m

error during kubetest --test --test_args=--ginkgo.focus=\[Feature:ClusterUpgrade\] --upgrade-image=gci --upgrade-target=ci-cross/latest --num-nodes=3 --report-dir=/workspace/_artifacts --disable-log-dump=true --report-prefix=upgrade --check-version-skew=false: exit status 1
				from junit_runner.xml

Filter through log files | View test history on testgrid


[k8s.io] Downward API [Serial] [Disruptive] [NodeFeature:EphemeralStorage] Downward API tests for local ephemeral storage should provide container's limits.ephemeral-storage and requests.ephemeral-storage as env vars 7.42s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Downward\sAPI\s\[Serial\]\s\[Disruptive\]\s\[NodeFeature\:EphemeralStorage\]\sDownward\sAPI\stests\sfor\slocal\sephemeral\sstorage\sshould\sprovide\scontainer\'s\slimits\.ephemeral\-storage\sand\srequests\.ephemeral\-storage\sas\senv\svars$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:251
Error creating Pod
Unexpected error:
    <*errors.StatusError | 0xc0030ac120>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: "", Continue: ""},
            Status: "Failure",
            Message: "Internal error occurred: failed calling webhook \"gvisor.common-webhooks.networking.gke.io\": Post https://localhost:5443/webhook/gvisor?timeout=30s: dial tcp [::1]:5443: connect: connection refused",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "",
                UID: "",
                Causes: [
                    {
                        Type: "",
                        Message: "failed calling webhook \"gvisor.common-webhooks.networking.gke.io\": Post https://localhost:5443/webhook/gvisor?timeout=30s: dial tcp [::1]:5443: connect: connection refused",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    Internal error occurred: failed calling webhook "gvisor.common-webhooks.networking.gke.io": Post https://localhost:5443/webhook/gvisor?timeout=30s: dial tcp [::1]:5443: connect: connection refused
occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:77
				
from junit_01.xml

Filter through log files | View test history on testgrid
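
This failure (and the identical webhook errors in several entries below) is pod creation being blocked because the backend for the gvisor.common-webhooks.networking.gke.io admission webhook is unreachable. A sketch of how to inspect the webhook registration, assuming kubectl access; the error does not show whether it is registered as mutating or validating, so both lists are checked:

kubectl get mutatingwebhookconfigurations
kubectl get validatingwebhookconfigurations
# The clientConfig target and failurePolicy explain why an unreachable backend blocks pod creation.
kubectl get mutatingwebhookconfigurations -o yaml | grep -i -B2 -A10 gvisor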


[k8s.io] Downward API [Serial] [Disruptive] [NodeFeature:EphemeralStorage] Downward API tests for local ephemeral storage should provide default limits.ephemeral-storage from node allocatable 7.54s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Downward\sAPI\s\[Serial\]\s\[Disruptive\]\s\[NodeFeature\:EphemeralStorage\]\sDownward\sAPI\stests\sfor\slocal\sephemeral\sstorage\sshould\sprovide\sdefault\slimits\.ephemeral\-storage\sfrom\snode\sallocatable$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:279
Error creating Pod
Unexpected error:
    <*errors.StatusError | 0xc0034ec5a0>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: "", Continue: ""},
            Status: "Failure",
            Message: "Internal error occurred: failed calling webhook \"gvisor.common-webhooks.networking.gke.io\": Post https://localhost:5443/webhook/gvisor?timeout=30s: dial tcp [::1]:5443: connect: connection refused",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "",
                UID: "",
                Causes: [
                    {
                        Type: "",
                        Message: "failed calling webhook \"gvisor.common-webhooks.networking.gke.io\": Post https://localhost:5443/webhook/gvisor?timeout=30s: dial tcp [::1]:5443: connect: connection refused",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    Internal error occurred: failed calling webhook "gvisor.common-webhooks.networking.gke.io": Post https://localhost:5443/webhook/gvisor?timeout=30s: dial tcp [::1]:5443: connect: connection refused
occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:77
				
from junit_01.xml

Filter through log files | View test history on testgrid


[k8s.io] EquivalenceCache [Serial] validates GeneralPredicates is properly invalidated when a pod is scheduled [Slow] 1m9s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=EquivalenceCache\s\[Serial\]\svalidates\sGeneralPredicates\sis\sproperly\sinvalidated\swhen\sa\spod\sis\sscheduled\s\[Slow\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/equivalence_cache_predicates.go:85
Unexpected error:
    <*errors.StatusError | 0xc005d30480>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: "", Continue: ""},
            Status: "Failure",
            Message: "Internal error occurred: failed calling webhook \"gvisor.common-webhooks.networking.gke.io\": Post https://localhost:5443/webhook/gvisor?timeout=30s: dial tcp [::1]:5443: connect: connection refused",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "",
                UID: "",
                Causes: [
                    {
                        Type: "",
                        Message: "failed calling webhook \"gvisor.common-webhooks.networking.gke.io\": Post https://localhost:5443/webhook/gvisor?timeout=30s: dial tcp [::1]:5443: connect: connection refused",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    Internal error occurred: failed calling webhook "gvisor.common-webhooks.networking.gke.io": Post https://localhost:5443/webhook/gvisor?timeout=30s: dial tcp [::1]:5443: connect: connection refused
occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:647
				
from junit_01.xml

Filter through log files | View test history on testgrid


[k8s.io] EquivalenceCache [Serial] validates pod affinity works properly when new replica pod is scheduled 3m25s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=EquivalenceCache\s\[Serial\]\svalidates\spod\saffinity\sworks\sproperly\swhen\snew\sreplica\spod\sis\sscheduled$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/equivalence_cache_predicates.go:107
Unexpected error:
    <*errors.errorString | 0xc0002bb3e0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:714
				
from junit_01.xml

Filter through log files | View test history on testgrid


[k8s.io] EquivalenceCache [Serial] validates pod anti-affinity works properly when new replica pod is scheduled 4m8s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=EquivalenceCache\s\[Serial\]\svalidates\spod\santi\-affinity\sworks\sproperly\swhen\snew\sreplica\spod\sis\sscheduled$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/equivalence_cache_predicates.go:170
Unexpected error:
    <*errors.errorString | 0xc004abdda0>: {
        s: "Only 0 pods started out of 2",
    }
    Only 0 pods started out of 2
occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:801
				
from junit_01.xml

Filter through log files | View test history on testgrid


[k8s.io] Pods should cap back-off at MaxContainerBackOff [Slow][NodeConformance] 11s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Pods\sshould\scap\sback\-off\sat\sMaxContainerBackOff\s\[Slow\]\[NodeConformance\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:691
Error creating Pod
Unexpected error:
    <*errors.StatusError | 0xc003989170>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: "", Continue: ""},
            Status: "Failure",
            Message: "Internal error occurred: failed calling webhook \"gvisor.common-webhooks.networking.gke.io\": Post https://localhost:5443/webhook/gvisor?timeout=30s: dial tcp [::1]:5443: connect: connection refused",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "",
                UID: "",
                Causes: [
                    {
                        Type: "",
                        Message: "failed calling webhook \"gvisor.common-webhooks.networking.gke.io\": Post https://localhost:5443/webhook/gvisor?timeout=30s: dial tcp [::1]:5443: connect: connection refused",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    Internal error occurred: failed calling webhook "gvisor.common-webhooks.networking.gke.io": Post https://localhost:5443/webhook/gvisor?timeout=30s: dial tcp [::1]:5443: connect: connection refused
occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:77
				
from junit_01.xml

Filter through log files | View test history on testgrid


[k8s.io] Pods should have their auto-restart back-off timer reset on image update [Slow][NodeConformance] 7.52s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Pods\sshould\shave\stheir\sauto\-restart\sback\-off\stimer\sreset\son\simage\supdate\s\[Slow\]\[NodeConformance\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:650
Error creating Pod
Unexpected error:
    <*errors.StatusError | 0xc003a23d40>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {SelfLink: "", ResourceVersion: "", Continue: ""},
            Status: "Failure",
            Message: "Internal error occurred: failed calling webhook \"gvisor.common-webhooks.networking.gke.io\": Post https://localhost:5443/webhook/gvisor?timeout=30s: dial tcp [::1]:5443: connect: connection refused",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "",
                UID: "",
                Causes: [
                    {
                        Type: "",
                        Message: "failed calling webhook \"gvisor.common-webhooks.networking.gke.io\": Post https://localhost:5443/webhook/gvisor?timeout=30s: dial tcp [::1]:5443: connect: connection refused",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    Internal error occurred: failed calling webhook "gvisor.common-webhooks.networking.gke.io": Post https://localhost:5443/webhook/gvisor?timeout=30s: dial tcp [::1]:5443: connect: connection refused
occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:77
				
from junit_01.xml

Filter through log files | View test history on testgrid


[k8s.io] [sig-node] Kubelet [Serial] [Slow] [k8s.io] [sig-node] regular resource usage tracking resource tracking for 0 pods per node 20m9s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=\[sig\-node\]\sKubelet\s\[Serial\]\s\[Slow\]\s\[k8s\.io\]\s\[sig\-node\]\sregular\sresource\susage\stracking\sresource\stracking\sfor\s0\spods\sper\snode$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/kubelet_perf.go:263
Jul 20 18:15:56.251: Memory usage exceeding limits:
 node gke-bootstrap-e2e-default-pool-ebefc061-4zsv:
 container "runtime": expected RSS memory (MB) < 131072000; got 159129600
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/kubelet_perf.go:155
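
The log reports the container runtime's RSS on node gke-bootstrap-e2e-default-pool-ebefc061-4zsv exceeding the test's limit. One rough way to look at that node's memory by hand, assuming kubectl access; kubectl top needs a metrics backend, and the second command reads the kubelet's stats summary endpoint through the API server proxy:

kubectl top node gke-bootstrap-e2e-default-pool-ebefc061-4zsv
kubectl get --raw /api/v1/nodes/gke-bootstrap-e2e-default-pool-ebefc061-4zsv/proxy/stats/summary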