Result: FAILURE
Tests: 163 failed / 49 succeeded
Started: 2019-07-21 03:11
Elapsed: 6h55m
Revision:
Builder: gke-prow-ssd-pool-1a225945-gzk6
links: {u'resultstore': {u'url': u'https://source.cloud.google.com/results/invocations/c3738c69-b0f9-4072-a05a-08d5fd8e2f96/targets/test'}}
pod: 2cbe76d4-ab65-11e9-b82b-365474bd0c86
resultstore: https://source.cloud.google.com/results/invocations/c3738c69-b0f9-4072-a05a-08d5fd8e2f96/targets/test
infra-commit: d3a08c3fa
job-version: v1.14.5-beta.0.1+7936da50c68f42
master_os_image:
node_os_image: cos-u-73-11647-217-0
revision: v1.14.5-beta.0.1+7936da50c68f42

Test Failures


Cluster upgrade [sig-apps] deployment-upgrade (43m37s)

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Cluster\supgrade\s\[sig\-apps\]\sdeployment\-upgrade$'
Unexpected error:
    <*errors.errorString | 0xc0024e7a50>: {
        s: "error waiting for deployment \"dp\" status to match expectation: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:0, UpdatedReplicas:0, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:\"Progressing\", Status:\"True\", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63699275976, loc:(*time.Location)(0x7cca7a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63699275960, loc:(*time.Location)(0x7cca7a0)}}, Reason:\"NewReplicaSetAvailable\", Message:\"ReplicaSet \\\"dp-9fcb69c69\\\" has successfully progressed.\"}, v1.DeploymentCondition{Type:\"Available\", Status:\"False\", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63699276789, loc:(*time.Location)(0x7cca7a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63699276789, loc:(*time.Location)(0x7cca7a0)}}, Reason:\"MinimumReplicasUnavailable\", Message:\"Deployment does not have minimum availability.\"}, v1.DeploymentCondition{Type:\"ReplicaFailure\", Status:\"True\", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63699276789, loc:(*time.Location)(0x7cca7a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63699276789, loc:(*time.Location)(0x7cca7a0)}}, Reason:\"FailedCreate\", Message:\"Internal error occurred: failed calling webhook \\\"gvisor.common-webhooks.networking.gke.io\\\": Post https://localhost:5443/webhook/gvisor?timeout=30s: dial tcp [::1]:5443: connect: connection refused\"}}, CollisionCount:(*int32)(nil)}",
    }
    error waiting for deployment "dp" status to match expectation: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:0, UpdatedReplicas:0, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63699275976, loc:(*time.Location)(0x7cca7a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63699275960, loc:(*time.Location)(0x7cca7a0)}}, Reason:"NewReplicaSetAvailable", Message:"ReplicaSet \"dp-9fcb69c69\" has successfully progressed."}, v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63699276789, loc:(*time.Location)(0x7cca7a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63699276789, loc:(*time.Location)(0x7cca7a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"ReplicaFailure", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63699276789, loc:(*time.Location)(0x7cca7a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63699276789, loc:(*time.Location)(0x7cca7a0)}}, Reason:"FailedCreate", Message:"Internal error occurred: failed calling webhook \"gvisor.common-webhooks.networking.gke.io\": Post https://localhost:5443/webhook/gvisor?timeout=30s: dial tcp [::1]:5443: connect: connection refused"}}, CollisionCount:(*int32)(nil)}
occurred

k8s.io/kubernetes/test/e2e/upgrades/apps.(*DeploymentUpgradeTest).Test(0x7cc97a0, 0xc000714500, 0xc002aa44e0, 0x2)
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/upgrades/apps/deployments.go:157 +0x8ca
k8s.io/kubernetes/test/e2e/lifecycle.(*chaosMonkeyAdapter).Test(0xc00218f680, 0xc0023060e0)
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/lifecycle/cluster_upgrade.go:397 +0x309
k8s.io/kubernetes/test/e2e/chaosmonkey.(*Chaosmonkey).Do.func1(0xc0023060e0, 0xc002131df0)
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/chaosmonkey/chaosmonkey.go:90 +0x76
created by k8s.io/kubernetes/test/e2e/chaosmonkey.(*Chaosmonkey).Do
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/chaosmonkey/chaosmonkey.go:87 +0xa7
from junit_upgradeupgrades.xml
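The ReplicaFailure condition above traces back to the gvisor.common-webhooks.networking.gke.io admission webhook refusing connections, not to the deployment itself. A minimal diagnostic sketch, assuming kubectl access to the cluster under test; NAMESPACE is a placeholder for the namespace the e2e framework generated:

# List the admission webhook configurations registered in the cluster and locate the gvisor entry
kubectl get mutatingwebhookconfigurations
kubectl get validatingwebhookconfigurations

# Deployment conditions as recorded by the controller; compare against the status dump above
kubectl -n NAMESPACE get deployment dp -o yaml

# ReplicaSet creation failures (FailedCreate) are also recorded as events
kubectl -n NAMESPACE describe rs dp-9fcb69c69
kubectl -n NAMESPACE get events --sort-by=.lastTimestamp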


Cluster upgrade [sig-apps] job-upgrade (38m37s)

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Cluster\supgrade\s\[sig\-apps\]\sjob\-upgrade$'
Unexpected error:
    <*errors.errorString | 0xc0024e7cc0>: {
        s: "job has 0 of 2 expected running pods: ",
    }
    job has 0 of 2 expected running pods: 
occurred

k8s.io/kubernetes/test/e2e/upgrades/apps.(*JobUpgradeTest).Test(0x7cc57c0, 0xc000714a00, 0xc002aa44e0, 0x2)
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/upgrades/apps/job.go:58 +0xe4
k8s.io/kubernetes/test/e2e/lifecycle.(*chaosMonkeyAdapter).Test(0xc00218f700, 0xc002306100)
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/lifecycle/cluster_upgrade.go:397 +0x309
k8s.io/kubernetes/test/e2e/chaosmonkey.(*Chaosmonkey).Do.func1(0xc002306100, 0xc002131e00)
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/chaosmonkey/chaosmonkey.go:90 +0x76
created by k8s.io/kubernetes/test/e2e/chaosmonkey.(*Chaosmonkey).Do
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/chaosmonkey/chaosmonkey.go:87 +0xa7
from junit_upgradeupgrades.xml
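The error does not name the Job, so JOB_NAME and NAMESPACE below are placeholders; a sketch for checking why none of the expected pods were running, assuming kubectl access:

# Active/succeeded/failed counts and recent events for the Job
kubectl -n NAMESPACE describe job JOB_NAME

# Pods created by the Job carry a job-name label; pending or failed pods explain the 0-of-2 count
kubectl -n NAMESPACE get pods -l job-name=JOB_NAME -o wide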


Cluster upgrade [sig-apps] replicaset-upgrade (39m39s)

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Cluster\supgrade\s\[sig\-apps\]\sreplicaset\-upgrade$'
Unexpected error:
    <*errors.errorString | 0xc0025f5030>: {
        s: "replicaset \"rs\" never became ready",
    }
    replicaset "rs" never became ready
occurred

k8s.io/kubernetes/test/e2e/upgrades/apps.(*ReplicaSetUpgradeTest).Test(0x7cc0fe0, 0xc000741540, 0xc002aa44e0, 0x2)
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/upgrades/apps/replicasets.go:98 +0x744
k8s.io/kubernetes/test/e2e/lifecycle.(*chaosMonkeyAdapter).Test(0xc00218f5c0, 0xc002306080)
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/lifecycle/cluster_upgrade.go:397 +0x309
k8s.io/kubernetes/test/e2e/chaosmonkey.(*Chaosmonkey).Do.func1(0xc002306080, 0xc002131dd0)
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/chaosmonkey/chaosmonkey.go:90 +0x76
created by k8s.io/kubernetes/test/e2e/chaosmonkey.(*Chaosmonkey).Do
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/chaosmonkey/chaosmonkey.go:87 +0xa7
from junit_upgradeupgrades.xml
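A sketch for checking why replica set "rs" never reported ready replicas, assuming kubectl access; NAMESPACE is a placeholder for the generated test namespace:

# Desired vs. ready replicas
kubectl -n NAMESPACE get rs rs -o wide

# Pod creation failures, such as the webhook rejection seen in the other upgrade tests, appear as events here
kubectl -n NAMESPACE describe rs rs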


Cluster upgrade [sig-storage] [sig-api-machinery] configmap-upgrade (38m37s)

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Cluster\supgrade\s\[sig\-storage\]\s\[sig\-api\-machinery\]\sconfigmap\-upgrade$'
Error creating Pod
Unexpected error:
    <*errors.StatusError | 0xc001f3edc0>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {
                SelfLink: "",
                ResourceVersion: "",
                Continue: "",
                RemainingItemCount: nil,
            },
            Status: "Failure",
            Message: "Internal error occurred: failed calling webhook \"gvisor.common-webhooks.networking.gke.io\": Post https://localhost:5443/webhook/gvisor?timeout=30s: dial tcp [::1]:5443: connect: connection refused",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "",
                UID: "",
                Causes: [
                    {
                        Type: "",
                        Message: "failed calling webhook \"gvisor.common-webhooks.networking.gke.io\": Post https://localhost:5443/webhook/gvisor?timeout=30s: dial tcp [::1]:5443: connect: connection refused",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    Internal error occurred: failed calling webhook "gvisor.common-webhooks.networking.gke.io": Post https://localhost:5443/webhook/gvisor?timeout=30s: dial tcp [::1]:5443: connect: connection refused
occurred

k8s.io/kubernetes/test/e2e/framework.(*PodClient).Create(0xc001581800, 0xc002c24800, 0x34)
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:81 +0xe4
k8s.io/kubernetes/test/e2e/framework.(*Framework).MatchContainerOutput(0xc000714dc0, 0xc002c24800, 0x43dd5ab, 0x15, 0xc001dbfe58, 0x2, 0x2, 0x4557558, 0x0, 0x0)
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:1687 +0xb0
k8s.io/kubernetes/test/e2e/framework.(*Framework).testContainerOutputMatcher(0xc000714dc0, 0x43ef397, 0x18, 0xc002c24800, 0x0, 0xc000a80e58, 0x2, 0x2, 0x4557558)
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:1671 +0x1bb
k8s.io/kubernetes/test/e2e/framework.(*Framework).TestContainerOutput(...)
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:469
k8s.io/kubernetes/test/e2e/upgrades.(*ConfigMapUpgradeTest).testPod(0x7cbc770, 0xc000714dc0)
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/upgrades/configmaps.go:148 +0x9aa
k8s.io/kubernetes/test/e2e/upgrades.(*ConfigMapUpgradeTest).Test(0x7cbc770, 0xc000714dc0, 0xc002aa44e0, 0x2)
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/upgrades/configmaps.go:74 +0x76
k8s.io/kubernetes/test/e2e/lifecycle.(*chaosMonkeyAdapter).Test(0xc00218f740, 0xc002306120)
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/lifecycle/cluster_upgrade.go:397 +0x309
k8s.io/kubernetes/test/e2e/chaosmonkey.(*Chaosmonkey).Do.func1(0xc002306120, 0xc002131e10)
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/chaosmonkey/chaosmonkey.go:90 +0x76
created by k8s.io/kubernetes/test/e2e/chaosmonkey.(*Chaosmonkey).Do
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/chaosmonkey/chaosmonkey.go:87 +0xa7
from junit_upgradeupgrades.xml


Cluster upgrade [sig-storage] [sig-api-machinery] secret-upgrade (38m37s)

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Cluster\supgrade\s\[sig\-storage\]\s\[sig\-api\-machinery\]\ssecret\-upgrade$'
Error creating Pod
Unexpected error:
    <*errors.StatusError | 0xc002040c80>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {
                SelfLink: "",
                ResourceVersion: "",
                Continue: "",
                RemainingItemCount: nil,
            },
            Status: "Failure",
            Message: "Internal error occurred: failed calling webhook \"gvisor.common-webhooks.networking.gke.io\": Post https://localhost:5443/webhook/gvisor?timeout=30s: dial tcp [::1]:5443: connect: connection refused",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "",
                UID: "",
                Causes: [
                    {
                        Type: "",
                        Message: "failed calling webhook \"gvisor.common-webhooks.networking.gke.io\": Post https://localhost:5443/webhook/gvisor?timeout=30s: dial tcp [::1]:5443: connect: connection refused",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    Internal error occurred: failed calling webhook "gvisor.common-webhooks.networking.gke.io": Post https://localhost:5443/webhook/gvisor?timeout=30s: dial tcp [::1]:5443: connect: connection refused
occurred

k8s.io/kubernetes/test/e2e/framework.(*PodClient).Create(0xc001581be0, 0xc002c25400, 0x31)
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:81 +0xe4
k8s.io/kubernetes/test/e2e/framework.(*Framework).MatchContainerOutput(0xc000741180, 0xc002c25400, 0x43cf145, 0x12, 0xc003649e58, 0x2, 0x2, 0x4557558, 0x0, 0x0)
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:1687 +0xb0
k8s.io/kubernetes/test/e2e/framework.(*Framework).testContainerOutputMatcher(0xc000741180, 0x43e40d5, 0x16, 0xc002c25400, 0x0, 0xc00009ae58, 0x2, 0x2, 0x4557558)
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:1671 +0x1bb
k8s.io/kubernetes/test/e2e/framework.(*Framework).TestContainerOutput(...)
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:469
k8s.io/kubernetes/test/e2e/upgrades.(*SecretUpgradeTest).testPod(0x7cbc768, 0xc000741180)
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/upgrades/secrets.go:145 +0x9a7
k8s.io/kubernetes/test/e2e/upgrades.(*SecretUpgradeTest).Test(0x7cbc768, 0xc000741180, 0xc002aa44e0, 0x2)
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/upgrades/secrets.go:72 +0x76
k8s.io/kubernetes/test/e2e/lifecycle.(*chaosMonkeyAdapter).Test(0xc00218f580, 0xc002306060)
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/lifecycle/cluster_upgrade.go:397 +0x309
k8s.io/kubernetes/test/e2e/chaosmonkey.(*Chaosmonkey).Do.func1(0xc002306060, 0xc002131db0)
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/chaosmonkey/chaosmonkey.go:90 +0x76
created by k8s.io/kubernetes/test/e2e/chaosmonkey.(*Chaosmonkey).Do
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/chaosmonkey/chaosmonkey.go:87 +0xa7
from junit_upgradeupgrades.xml


Cluster upgrade [sig-storage] persistent-volume-upgrade (38m39s)

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Cluster\supgrade\s\[sig\-storage\]\spersistent\-volume\-upgrade$'
Error creating Pod
Unexpected error:
    <*errors.StatusError | 0xc001f3e8c0>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {
                SelfLink: "",
                ResourceVersion: "",
                Continue: "",
                RemainingItemCount: nil,
            },
            Status: "Failure",
            Message: "Internal error occurred: failed calling webhook \"gvisor.common-webhooks.networking.gke.io\": Post https://localhost:5443/webhook/gvisor?timeout=30s: dial tcp [::1]:5443: connect: connection refused",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "",
                UID: "",
                Causes: [
                    {
                        Type: "",
                        Message: "failed calling webhook \"gvisor.common-webhooks.networking.gke.io\": Post https://localhost:5443/webhook/gvisor?timeout=30s: dial tcp [::1]:5443: connect: connection refused",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    Internal error occurred: failed calling webhook "gvisor.common-webhooks.networking.gke.io": Post https://localhost:5443/webhook/gvisor?timeout=30s: dial tcp [::1]:5443: connect: connection refused
occurred

k8s.io/kubernetes/test/e2e/framework.(*PodClient).Create(0xc001531280, 0xc0031d1000, 0x2a)
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:81 +0xe4
k8s.io/kubernetes/test/e2e/framework.(*Framework).MatchContainerOutput(0xc000715900, 0xc0031d1000, 0x43ad674, 0x9, 0xc001d4be98, 0x1, 0x1, 0x4557558, 0x0, 0x0)
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:1687 +0xb0
k8s.io/kubernetes/test/e2e/framework.(*Framework).testContainerOutputMatcher(0xc000715900, 0x43c1308, 0xf, 0xc0031d1000, 0x0, 0xc000a81e98, 0x1, 0x1, 0x4557558)
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:1671 +0x1bb
k8s.io/kubernetes/test/e2e/framework.(*Framework).TestContainerOutput(...)
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:469
k8s.io/kubernetes/test/e2e/upgrades/storage.(*PersistentVolumeUpgradeTest).testPod(0x7cc57e0, 0xc000715900, 0x441cd93, 0x20)
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/upgrades/storage/persistent_volumes.go:102 +0x118
k8s.io/kubernetes/test/e2e/upgrades/storage.(*PersistentVolumeUpgradeTest).Test(0x7cc57e0, 0xc000715900, 0xc002aa44e0, 0x2)
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/upgrades/storage/persistent_volumes.go:84 +0x8f
k8s.io/kubernetes/test/e2e/lifecycle.(*chaosMonkeyAdapter).Test(0xc00218f7c0, 0xc002306160)
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/lifecycle/cluster_upgrade.go:397 +0x309
k8s.io/kubernetes/test/e2e/chaosmonkey.(*Chaosmonkey).Do.func1(0xc002306160, 0xc002131e30)
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/chaosmonkey/chaosmonkey.go:90 +0x76
created by k8s.io/kubernetes/test/e2e/chaosmonkey.(*Chaosmonkey).Do
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/chaosmonkey/chaosmonkey.go:87 +0xa7
from junit_upgradeupgrades.xml


Cluster upgrade apparmor-upgrade (38m37s)

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Cluster\supgrade\sapparmor\-upgrade$'
Error creating Pod
Unexpected error:
    <*errors.StatusError | 0xc001f3f720>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {
                SelfLink: "",
                ResourceVersion: "",
                Continue: "",
                RemainingItemCount: nil,
            },
            Status: "Failure",
            Message: "Internal error occurred: failed calling webhook \"gvisor.common-webhooks.networking.gke.io\": Post https://localhost:5443/webhook/gvisor?timeout=30s: dial tcp [::1]:5443: connect: connection refused",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "",
                UID: "",
                Causes: [
                    {
                        Type: "",
                        Message: "failed calling webhook \"gvisor.common-webhooks.networking.gke.io\": Post https://localhost:5443/webhook/gvisor?timeout=30s: dial tcp [::1]:5443: connect: connection refused",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    Internal error occurred: failed calling webhook "gvisor.common-webhooks.networking.gke.io": Post https://localhost:5443/webhook/gvisor?timeout=30s: dial tcp [::1]:5443: connect: connection refused
occurred

k8s.io/kubernetes/test/e2e/framework.(*PodClient).Create(0xc002484b20, 0xc002c25c00, 0x43a2977)
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:81 +0xe4
k8s.io/kubernetes/test/e2e/common.CreateAppArmorTestPod(0xc0004be3c0, 0x100, 0x0)
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/apparmor.go:127 +0x7d7
k8s.io/kubernetes/test/e2e/upgrades.(*AppArmorUpgradeTest).verifyNewPodSucceeds(0x7cbc780, 0xc0004be3c0)
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/upgrades/apparmor.go:97 +0x5c
k8s.io/kubernetes/test/e2e/upgrades.(*AppArmorUpgradeTest).Test(0x7cbc780, 0xc0004be3c0, 0xc002aa44e0, 0x2)
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/upgrades/apparmor.go:76 +0x72
k8s.io/kubernetes/test/e2e/lifecycle.(*chaosMonkeyAdapter).Test(0xc00218f840, 0xc0023061a0)
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/lifecycle/cluster_upgrade.go:397 +0x309
k8s.io/kubernetes/test/e2e/chaosmonkey.(*Chaosmonkey).Do.func1(0xc0023061a0, 0xc002131e50)
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/chaosmonkey/chaosmonkey.go:90 +0x76
created by k8s.io/kubernetes/test/e2e/chaosmonkey.(*Chaosmonkey).Do
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/chaosmonkey/chaosmonkey.go:87 +0xa7
from junit_upgradeupgrades.xml


Cluster upgrade hpa-upgrade (54m27s)

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Cluster\supgrade\shpa\-upgrade$'
timeout waiting 15m0s for 3 replicas
Unexpected error:
    <*errors.errorString | 0xc0002b1cc0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred

k8s.io/kubernetes/test/e2e/upgrades.(*HPAUpgradeTest).test(0x7cc1000)
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/upgrades/horizontal_pod_autoscalers.go:90 +0x3e3
k8s.io/kubernetes/test/e2e/upgrades.(*HPAUpgradeTest).Test(0x7cc1000, 0xc0007152c0, 0xc002aa44e0, 0x2)
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/upgrades/horizontal_pod_autoscalers.go:68 +0x99
k8s.io/kubernetes/test/e2e/lifecycle.(*chaosMonkeyAdapter).Test(0xc00218f780, 0xc002306140)
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/lifecycle/cluster_upgrade.go:397 +0x309
k8s.io/kubernetes/test/e2e/chaosmonkey.(*Chaosmonkey).Do.func1(0xc002306140, 0xc002131e20)
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/chaosmonkey/chaosmonkey.go:90 +0x76
created by k8s.io/kubernetes/test/e2e/chaosmonkey.(*Chaosmonkey).Do
	/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/chaosmonkey/chaosmonkey.go:87 +0xa7
from junit_upgradeupgrades.xml
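The HPA and its target are created by the test, so HPA_NAME and NAMESPACE below are placeholders; a sketch for checking why the autoscaler never reached 3 replicas, assuming kubectl access:

# Current vs. desired replicas and the metrics the autoscaler last observed
kubectl -n NAMESPACE describe hpa HPA_NAME

# If the metrics read "unknown", verify that the resource metrics API is being served
kubectl get apiservices v1beta1.metrics.k8s.io
kubectl top pods -n NAMESPACE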


Kubernetes e2e suite [k8s.io] Downward API [Serial] [Disruptive] [NodeFeature:EphemeralStorage] Downward API tests for local ephemeral storage should provide container's limits.ephemeral-storage and requests.ephemeral-storage as env vars (7.62s)

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[k8s\.io\]\sDownward\sAPI\s\[Serial\]\s\[Disruptive\]\s\[NodeFeature\:EphemeralStorage\]\sDownward\sAPI\stests\sfor\slocal\sephemeral\sstorage\sshould\sprovide\scontainer\'s\slimits\.ephemeral\-storage\sand\srequests\.ephemeral\-storage\sas\senv\svars$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:243
Error creating Pod
Unexpected error:
    <*errors.StatusError | 0xc002883360>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {
                SelfLink: "",
                ResourceVersion: "",
                Continue: "",
                RemainingItemCount: nil,
            },
            Status: "Failure",
            Message: "Internal error occurred: failed calling webhook \"gvisor.common-webhooks.networking.gke.io\": Post https://localhost:5443/webhook/gvisor?timeout=30s: dial tcp [::1]:5443: connect: connection refused",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "",
                UID: "",
                Causes: [
                    {
                        Type: "",
                        Message: "failed calling webhook \"gvisor.common-webhooks.networking.gke.io\": Post https://localhost:5443/webhook/gvisor?timeout=30s: dial tcp [::1]:5443: connect: connection refused",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    Internal error occurred: failed calling webhook "gvisor.common-webhooks.networking.gke.io": Post https://localhost:5443/webhook/gvisor?timeout=30s: dial tcp [::1]:5443: connect: connection refused
occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:81
from junit_skew01.xml


Kubernetes e2e suite [k8s.io] Downward API [Serial] [Disruptive] [NodeFeature:EphemeralStorage] Downward API tests for local ephemeral storage should provide default limits.ephemeral-storage from node allocatable (7.80s)

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[k8s\.io\]\sDownward\sAPI\s\[Serial\]\s\[Disruptive\]\s\[NodeFeature\:EphemeralStorage\]\sDownward\sAPI\stests\sfor\slocal\sephemeral\sstorage\sshould\sprovide\sdefault\slimits\.ephemeral\-storage\sfrom\snode\sallocatable$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:271
Error creating Pod
Unexpected error:
    <*errors.StatusError | 0xc00042b680>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {
                SelfLink: "",
                ResourceVersion: "",
                Continue: "",
                RemainingItemCount: nil,
            },
            Status: "Failure",
            Message: "Internal error occurred: failed calling webhook \"gvisor.common-webhooks.networking.gke.io\": Post https://localhost:5443/webhook/gvisor?timeout=30s: dial tcp [::1]:5443: connect: connection refused",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "",
                UID: "",
                Causes: [
                    {
                        Type: "",
                        Message: "failed calling webhook \"gvisor.common-webhooks.networking.gke.io\": Post https://localhost:5443/webhook/gvisor?timeout=30s: dial tcp [::1]:5443: connect: connection refused",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    Internal error occurred: failed calling webhook "gvisor.common-webhooks.networking.gke.io": Post https://localhost:5443/webhook/gvisor?timeout=30s: dial tcp [::1]:5443: connect: connection refused
occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:81
from junit_skew01.xml


Kubernetes e2e suite [k8s.io] EquivalenceCache [Serial] validates GeneralPredicates is properly invalidated when a pod is scheduled [Slow] (1m7s)

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[k8s\.io\]\sEquivalenceCache\s\[Serial\]\svalidates\sGeneralPredicates\sis\sproperly\sinvalidated\swhen\sa\spod\sis\sscheduled\s\[Slow\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/equivalence_cache_predicates.go:94
Unexpected error:
    <*errors.StatusError | 0xc005a63a40>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {
                SelfLink: "",
                ResourceVersion: "",
                Continue: "",
                RemainingItemCount: nil,
            },
            Status: "Failure",
            Message: "Internal error occurred: failed calling webhook \"gvisor.common-webhooks.networking.gke.io\": Post https://localhost:5443/webhook/gvisor?timeout=30s: dial tcp [::1]:5443: connect: connection refused",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "",
                UID: "",
                Causes: [
                    {
                        Type: "",
                        Message: "failed calling webhook \"gvisor.common-webhooks.networking.gke.io\": Post https://localhost:5443/webhook/gvisor?timeout=30s: dial tcp [::1]:5443: connect: connection refused",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    Internal error occurred: failed calling webhook "gvisor.common-webhooks.networking.gke.io": Post https://localhost:5443/webhook/gvisor?timeout=30s: dial tcp [::1]:5443: connect: connection refused
occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:669
from junit_skew01.xml


Kubernetes e2e suite [k8s.io] EquivalenceCache [Serial] validates pod affinity works properly when new replica pod is scheduled (1m7s)

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[k8s\.io\]\sEquivalenceCache\s\[Serial\]\svalidates\spod\saffinity\sworks\sproperly\swhen\snew\sreplica\spod\sis\sscheduled$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/equivalence_cache_predicates.go:116
Unexpected error:
    <*errors.StatusError | 0xc00459e280>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {
                SelfLink: "",
                ResourceVersion: "",
                Continue: "",
                RemainingItemCount: nil,
            },
            Status: "Failure",
            Message: "Internal error occurred: failed calling webhook \"gvisor.common-webhooks.networking.gke.io\": Post https://localhost:5443/webhook/gvisor?timeout=30s: dial tcp [::1]:5443: connect: connection refused",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "",
                UID: "",
                Causes: [
                    {
                        Type: "",
                        Message: "failed calling webhook \"gvisor.common-webhooks.networking.gke.io\": Post https://localhost:5443/webhook/gvisor?timeout=30s: dial tcp [::1]:5443: connect: connection refused",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    Internal error occurred: failed calling webhook "gvisor.common-webhooks.networking.gke.io": Post https://localhost:5443/webhook/gvisor?timeout=30s: dial tcp [::1]:5443: connect: connection refused
occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:669
from junit_skew01.xml


Kubernetes e2e suite [k8s.io] EquivalenceCache [Serial] validates pod anti-affinity works properly when new replica pod is scheduled (3m35s)

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[k8s\.io\]\sEquivalenceCache\s\[Serial\]\svalidates\spod\santi\-affinity\sworks\sproperly\swhen\snew\sreplica\spod\sis\sscheduled$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/equivalence_cache_predicates.go:179
Unexpected error:
    <*errors.errorString | 0xc00027fcd0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:736
from junit_skew01.xml
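For the anti-affinity case the replica pods most likely stayed Pending rather than failing outright; POD_NAME and NAMESPACE below are placeholders. A sketch for inspecting the scheduler's decision, assuming kubectl access:

# Which replicas are Pending and where the running ones were placed
kubectl -n NAMESPACE get pods -o wide

# FailedScheduling events record why a Pending pod could not be placed
kubectl -n NAMESPACE describe pod POD_NAME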


Kubernetes e2e suite [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance] (14s)

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-api\-machinery\]\sNamespaces\s\[Serial\]\sshould\sensure\sthat\sall\spods\sare\sremoved\swhen\sa\snamespace\sis\sdeleted\s\[Conformance\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
failed to create pod test-pod in namespace: nsdeletetest-632
Unexpected error:
    <*errors.StatusError | 0xc00315ab40>: {
        ErrStatus: {
            TypeMeta: {Kind: "", APIVersion: ""},
            ListMeta: {
                SelfLink: "",
                ResourceVersion: "",
                Continue: "",
                RemainingItemCount: nil,
            },
            Status: "Failure",
            Message: "Internal error occurred: failed calling webhook \"gvisor.common-webhooks.networking.gke.io\": Post https://localhost:5443/webhook/gvisor?timeout=30s: dial tcp [::1]:5443: connect: connection refused",
            Reason: "InternalError",
            Details: {
                Name: "",
                Group: "",
                Kind: "",
                UID: "",
                Causes: [
                    {
                        Type: "",
                        Message: "failed calling webhook \"gvisor.common-webhooks.networking.gke.io\": Post https://localhost:5443/webhook/gvisor?timeout=30s: dial tcp [::1]:5443: connect: connection refused",
                        Field: "",
                    },
                ],
                RetryAfterSeconds: 0,
            },
            Code: 500,
        },
    }
    Internal error occurred: failed calling webhook "gvisor.common-webhooks.networking.gke.io": Post https://localhost:5443/webhook/gvisor?timeout=30s: dial tcp [::1]:5443: connect: connection refused
occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/namespace.go:111
from junit_skew01.xml


Kubernetes e2e suite [sig-apps] Daemon set [Serial] should not update pod when spec was updated and update strategy is OnDelete (5m8s)

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-apps\]\sDaemon\sset\s\[Serial\]\sshould\snot\supdate\spod\swhen\sspec\swas\supdated\sand\supdate\sstrategy\sis\sOnDelete$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:278
error waiting for daemon pod to start
Unexpected error:
    <*errors.errorString | 0xc00027fcd0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:289
from junit_skew01.xml
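The same "error waiting for daemon pod to start" timeout appears in this and the two DaemonSet tests below; DS_NAME and NAMESPACE are placeholders for the names the tests generate. A sketch for checking daemon pod rollout, assuming kubectl access:

# Desired vs. ready daemon pods as seen by the controller
kubectl -n NAMESPACE get ds -o wide

# Events on the DaemonSet and its pods show why daemon pods did not start
kubectl -n NAMESPACE describe ds DS_NAME
kubectl -n NAMESPACE get pods -o wide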


Kubernetes e2e suite [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance] (5m8s)

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-apps\]\sDaemon\sset\s\[Serial\]\sshould\srun\sand\sstop\ssimple\sdaemon\s\[Conformance\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
error waiting for daemon pod to start
Unexpected error:
    <*errors.errorString | 0xc00027fcd0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:131
from junit_skew01.xml


Kubernetes e2e suite [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] (5m10s)

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-apps\]\sDaemon\sset\s\[Serial\]\sshould\supdate\spod\swhen\sspec\swas\supdated\sand\supdate\sstrategy\sis\sRollingUpdate\s\[Conformance\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
error waiting for daemon pod to start
Unexpected error:
    <*errors.errorString | 0xc00027fcd0>: {
        s: "timed out waiting for the condition",
    }
    timed out waiting for the condition
occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:338
from junit_skew01.xml


Kubernetes e2e suite [sig-apps] Network Partition [Disruptive] [Slow] [k8s.io] [Job] should create new pods when node is partitioned (2m8s)

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-apps\]\sNetwork\sPartition\s\[Disruptive\]\s\[Slow\]\s\[k8s\.io\]\s\[Job\]\sshould\screate\snew\spods\swhen\snode\sis\spartitioned$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/network_partition.go:426
Unexpected error:
    <*errors.errorString | 0xc00262c380>: {
        s: "Pod name network-partition: Gave up waiting 2m0s for 2 pods to come up",
    }
    Pod name network-partition: Gave up waiting 2m0s for 2 pods to come up
occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/network_partition.go:439
from junit_skew01.xml


Kubernetes e2e suite [sig-apps] Network Partition [Disruptive] [Slow] [k8s.io] [ReplicationController] should eagerly create replacement pod during network partition when termination grace is non-zero (2m8s)

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-apps\]\sNetwork\sPartition\s\[Disruptive\]\s\[Slow\]\s\[k8s\.io\]\s\[ReplicationController\]\sshould\seagerly\screate\sreplacement\spod\sduring\snetwork\spartition\swhen\stermination\sgrace\sis\snon\-zero$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/network_partition.go:305
Each pod should start running and responding
Unexpected error:
    <*errors.errorString | 0xc00248f0e0>: {
        s: "Pod name my-hostname-net: Gave up waiting 2m0s for 3 pods to come up",
    }
    Pod name my-hostname-net: Gave up waiting 2m0s for 3 pods to come up
occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/network_partition.go:317
from junit_skew01.xml


Kubernetes e2e suite [sig-apps] Network Partition [Disruptive] [Slow] [k8s.io] [ReplicationController] should recreate pods scheduled on the unreachable node AND allow scheduling of pods on a node after it rejoins the cluster (15m13s)

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-apps\]\sNetwork\sPartition\s\[Disruptive\]\s\[Slow\]\s\[k8s\.io\]\s\[ReplicationController\]\sshould\srecreate\spods\sscheduled\son\sthe\sunreachable\snode\sAND\sallow\sscheduling\sof\spods\son\sa\snode\safter\sit\srejoins\sthe\scluster$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/network_partition.go:238
Each pod should start running and responding
Unexpected error:
    <*errors.errorString | 0xc002eb2210>: {
        s: "failed to wait for pods responding: timed out waiting for the condition",
    }
    failed to wait for pods responding: timed out waiting for the condition
occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/network_partition.go:250
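The partition tests expect the ReplicationController's pods to be recreated and respond once the node rejoins; the controller and namespace names are generated by the test, so NAMESPACE below is a placeholder. A sketch for checking node and pod state after the partition, assuming kubectl access:

# A node still NotReady after the partition is lifted keeps pods from being rescheduled or responding
kubectl get nodes

# Where the controller's pods ended up and whether any are still Pending or not Ready
kubectl -n NAMESPACE get pods -o wide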