Result: FAILURE
Tests: 41 failed / 448 succeeded
Started: 2020-02-05 12:44
Elapsed: 15h16m
Builder: gke-prow-default-pool-cf4891d4-0178
Pod: 1e60d58f-4815-11ea-996d-0a03f2419e8d
ResultStore: https://source.cloud.google.com/results/invocations/e3edcb03-27f8-49f2-acbc-36b285848f0f/targets/test
infra-commit: 656133e91
job-version: v1.15.10-beta.0.17+2e0c2c47211680
master_os_image: cos-73-11647-163-0
node_os_image: cos-73-11647-163-0
revision: v1.15.10-beta.0.17+2e0c2c47211680

Test Failures


Kubernetes e2e suite AfterSuite 0.00s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\sAfterSuite$'
_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/e2e.go:164
Feb  6 03:54:42.003: Couldn't delete ns: "sched-priority-8010": namespace sched-priority-8010 was not deleted with limit: timed out waiting for the condition, pods remaining: 1 (&errors.errorString{s:"namespace sched-priority-8010 was not deleted with limit: timed out waiting for the condition, pods remaining: 1"})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:335
from junit_01.xml
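The "timed out waiting for the condition" message above comes from the e2e framework polling for namespace teardown until a deadline passes, then reporting how many pods are still stuck. A minimal shell sketch of that poll-until-deadline pattern (not the framework's actual Go code; the 3-second deadline and the one stuck pod are illustrative):

```shell
#!/usr/bin/env bash
# Sketch of the poll-until-deadline pattern behind the message
# "timed out waiting for the condition": retry a check until it
# succeeds or a deadline passes, then report what is still blocking.
wait_for_ns_empty() {
  local deadline=$((SECONDS + 3))  # illustrative 3s deadline
  local remaining=1                # pretend one pod never terminates
  while (( SECONDS < deadline )); do
    (( remaining == 0 )) && return 0
    sleep 1
  done
  echo "namespace was not deleted with limit: timed out waiting for the condition, pods remaining: ${remaining}" >&2
  return 1
}

wait_for_ns_empty || echo "teardown failed with exit $?"
```

The real framework polls the namespace's pod list instead of a counter, but the shape is the same: a bounded retry loop whose timeout message carries the leftover-pod count seen here (1 pod for `sched-priority-8010`, 91 for `kubelet-perf-82` below).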



Kubernetes e2e suite [k8s.io] [sig-node] Kubelet [Serial] [Slow] [k8s.io] [sig-node] regular resource usage tracking resource tracking for 100 pods per node 57m24s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[k8s\.io\]\s\[sig\-node\]\sKubelet\s\[Serial\]\s\[Slow\]\s\[k8s\.io\]\s\[sig\-node\]\sregular\sresource\susage\stracking\sresource\stracking\sfor\s100\spods\sper\snode$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb  6 00:49:06.457: Couldn't delete ns: "kubelet-perf-82": namespace kubelet-perf-82 was not deleted with limit: timed out waiting for the condition, pods remaining: 91 (&errors.errorString{s:"namespace kubelet-perf-82 was not deleted with limit: timed out waiting for the condition, pods remaining: 91"})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:335
from junit_01.xml



Kubernetes e2e suite [sig-apps] Daemon set [Serial] should not update pod when spec was updated and update strategy is OnDelete 31m44s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-apps\]\sDaemon\sset\s\[Serial\]\sshould\snot\supdate\spod\swhen\sspec\swas\supdated\sand\supdate\sstrategy\sis\sOnDelete$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:278
Unexpected error:
    <*errors.errorString | 0xc0027732c0>: {
        s: "number of unavailable pods: 1 is greater than maxUnavailable: 0",
    }
    number of unavailable pods: 1 is greater than maxUnavailable: 0
occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:307
from junit_01.xml
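The failing assertion checks that, under an OnDelete update strategy, no pod becomes unavailable when the DaemonSet spec is updated: OnDelete pods are only replaced after they are deleted manually, so an unavailable count of 1 against a maxUnavailable of 0 means a pod restarted when it should not have. A minimal DaemonSet fragment showing the strategy under test (the name, labels, and image are illustrative, not taken from this job):

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: example-daemon        # illustrative name
spec:
  selector:
    matchLabels:
      app: example-daemon
  updateStrategy:
    type: OnDelete            # pods are replaced only when deleted manually,
                              # never rolled automatically on a spec update
  template:
    metadata:
      labels:
        app: example-daemon
    spec:
      containers:
      - name: pause
        image: k8s.gcr.io/pause:3.1   # illustrative container
```

With `type: RollingUpdate` the controller would restart pods on a spec change; with `OnDelete`, as here, the test updates the spec and then asserts that every existing pod stays available.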



Kubernetes e2e suite [sig-cluster-lifecycle] Upgrade [Feature:Upgrade] master upgrade should maintain a functioning cluster [Feature:MasterUpgrade] 22m4s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-cluster\-lifecycle\]\sUpgrade\s\[Feature\:Upgrade\]\smaster\supgrade\sshould\smaintain\sa\sfunctioning\scluster\s\[Feature\:MasterUpgrade\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/lifecycle/cluster_upgrade.go:91
Feb  5 13:02:20.776: Unexpected error:
    <*errors.errorString | 0xc0030d9590>
    error running /workspace/kubernetes_skew/cluster/gce/upgrade.sh [-M v1.16.7-beta.0.23+0a70c2fa6d4642]; got error exit status 1

stdout:
Fetching the previously installed CoreDNS version

***WARNING***
Upgrading Kubernetes with this script might result in an upgrade to a new etcd version.
Some etcd version upgrades, such as 3.0.x to 3.1.x, DO NOT offer a downgrade path.
To pin the etcd version to your current one (e.g. v3.0.17), set the following variables
before running this script:

# example: pin to etcd v3.0.17
export ETCD_IMAGE=3.0.17
export ETCD_VERSION=3.0.17

Alternatively, if you choose to allow an etcd upgrade that doesn't support downgrade,
you might still be able to downgrade Kubernetes by pinning to the newer etcd version.
In all cases, it is strongly recommended to have an etcd backup before upgrading.

== Pre-Upgrade Node OS and Kubelet Versions ==
name: "bootstrap-e2e-master", osImage: "Container-Optimized OS from Google", kubeletVersion: "v1.15.10-beta.0.17+2e0c2c47211680"
name: "bootstrap-e2e-minion-group-37zm", osImage: "Container-Optimized OS from Google", kubeletVersion: "v1.15.10-beta.0.17+2e0c2c47211680"
name: "bootstrap-e2e-minion-group-9v9x", osImage: "Container-Optimized OS from Google", kubeletVersion: "v1.15.10-beta.0.17+2e0c2c47211680"
name: "bootstrap-e2e-minion-group-b795", osImage: "Container-Optimized OS from Google", kubeletVersion: "v1.15.10-beta.0.17+2e0c2c47211680"
Found subnet for region us-west1 in network bootstrap-e2e: bootstrap-e2e
== Upgrading master to 'https://storage.googleapis.com/kubernetes-release-dev/ci/v1.16.7-beta.0.23+0a70c2fa6d4642/kubernetes-server-linux-amd64.tar.gz'. Do not interrupt, deleting master instance. ==
== Upgrading master environment variables. ==
== Waiting for new master to respond to API requests ==
......................== Done ==
Waiting for CoreDNS to update
Fetching the latest installed CoreDNS version
== Downloading the CoreDNS migration tool ==
== Upgrading the CoreDNS ConfigMap ==
configmap/coredns configured
== The CoreDNS Config has been updated ==
== Validating cluster post-upgrade ==
Validating gce cluster, MULTIZONE=
Found 4 node(s).
NAME                              STATUS                     ROLES    AGE   VERSION
bootstrap-e2e-master              Ready,SchedulingDisabled   <none>   10m   v1.16.7-beta.0.23+0a70c2fa6d4642
bootstrap-e2e-minion-group-37zm   Ready                      <none>   10m   v1.15.10-beta.0.17+2e0c2c47211680
bootstrap-e2e-minion-group-9v9x   Ready                      <none>   10m   v1.15.10-beta.0.17+2e0c2c47211680
bootstrap-e2e-minion-group-b795   Ready                      <none>   10m   v1.15.10-beta.0.17+2e0c2c47211680
Validate output:
NAME                 AGE
etcd-0               <unknown>
scheduler            <unknown>
controller-manager   <unknown>
etcd-1               <unknown>
Cluster validation succeeded
== Post-Upgrade Node OS and Kubelet Versions ==
name: "bootstrap-e2e-master", osImage: "Container-Optimized OS from Google", kubeletVersion: "v1.16.7-beta.0.23+0a70c2fa6d4642"
name: "bootstrap-e2e-minion-group-37zm", osImage: "Container-Optimized OS from Google", kubeletVersion: "v1.15.10-beta.0.17+2e0c2c47211680"
name: "bootstrap-e2e-minion-group-9v9x", osImage: "Container-Optimized OS from Google", kubeletVersion: "v1.15.10-beta.0.17+2e0c2c47211680"
name: "bootstrap-e2e-minion-group-b795", osImage: "Container-Optimized OS from Google", kubeletVersion: "v1.15.10-beta.0.17+2e0c2c47211680"

stderr:
Project: e2e-gce-gci-ci-slow-1-5
Network Project: e2e-gce-gci-ci-slow-1-5
Zone: us-west1-b
INSTANCE_GROUPS=bootstrap-e2e-minion-group
NODE_NAMES=bootstrap-e2e-minion-group-37zm bootstrap-e2e-minion-group-9v9x bootstrap-e2e-minion-group-b795
Trying to find master named 'bootstrap-e2e-master'
Looking for address 'bootstrap-e2e-master-ip'
Using master: bootstrap-e2e-master (external IP: 35.197.26.204; internal IP: (not set))
Deleted [https://www.googleapis.com/compute/v1/projects/e2e-gce-gci-ci-slow-1-5/zones/us-west1-b/instances/bootstrap-e2e-master].
WARNING: You have selected a disk size of under [200GB]. This may result in poor I/O performance. For more information, see: https://developers.google.com/compute/docs/disks#performance.
Created [https://www.googleapis.com/compute/v1/projects/e2e-gce-gci-ci-slow-1-5/zones/us-west1-b/instances/bootstrap-e2e-master].
WARNING: Some requests generated warnings:
 - Disk size: '20 GB' is larger than image size: '10 GB'. You might need to resize the root repartition manually if the operating system does not support automatic resizing. See https://cloud.google.com/compute/docs/disks/add-persistent-disk#resize_pd for details.
 - The resource 'projects/cos-cloud/global/images/cos-73-11647-163-0' is deprecated. A suggested replacement is 'projects/cos-cloud/global/images/cos-73-11647-182-0'.

NAME                  ZONE        MACHINE_TYPE   PREEMPTIBLE  INTERNAL_IP  EXTERNAL_IP    STATUS
bootstrap-e2e-master  us-west1-b  n1-standard-1               10.138.0.6   35.197.26.204  RUNNING
Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
Project: e2e-gce-gci-ci-slow-1-5
Network Project: e2e-gce-gci-ci-slow-1-5
Zone: us-west1-b
/workspace/kubernetes_skew/cluster/gce/upgrade.sh: line 452: download_dir: unbound variable
occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/lifecycle/cluster_upgrade.go:106
from junit_upgrade01.xml
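The actual failure sits on the last stderr line: `upgrade.sh` runs under bash's `set -u` (nounset), so expanding `download_dir` before it is assigned aborts the script with "unbound variable" and a nonzero exit, even though cluster validation had already succeeded. A minimal reproduction of that failure class (the variable name mirrors the log; the fallback path is illustrative):

```shell
#!/usr/bin/env bash
set -u  # abort on expansion of unset variables, as upgrade.sh does

use_unset() {
  # download_dir is never assigned, mirroring the
  # "line 452: download_dir: unbound variable" error in the log
  echo "staging to ${download_dir}"
}

# Expanding an unset variable under `set -u` kills the (sub)shell:
if ! ( use_unset ) 2>/dev/null; then
  echo "unset expansion aborted with a nonzero status"
fi

# A ${var:-default} guard avoids the abort by supplying a fallback:
echo "staging to ${download_dir:-/tmp/kube-upgrade}"
```

The guard (or an explicit assignment before first use) is the usual fix for this class of bug in scripts that enable nounset.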



Kubernetes e2e suite [sig-network] DNS configMap nameserver [IPv4] Change stubDomain should be able to change stubDomain configuration [Slow][Serial] 2m58s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-network\]\sDNS\sconfigMap\snameserver\s\[IPv4\]\sChange\sstubDomain\sshould\sbe\sable\sto\schange\sstubDomain\sconfiguration\s\[Slow\]\[Serial\]$'
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns_configmap.go:488
Feb  5 14:54:02.818: dig result did not match: []string{";; connection timed out; no servers could be reached"} after 2m0s
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/dns_common.go:103