Result: FAILURE
Tests: 1 failed / 26 succeeded
Started: 2020-02-15 16:11
Elapsed: 34m31s
Revision:
Builder: gke-prow-default-pool-cf4891d4-w60c
links: {u'resultstore': {u'url': u'https://source.cloud.google.com/results/invocations/01cbe16d-1650-408f-834e-e68ca3139a27/targets/test'}}
pod: ba9ef8f4-500d-11ea-9bea-16a0f55e352c
resultstore: https://source.cloud.google.com/results/invocations/01cbe16d-1650-408f-834e-e68ca3139a27/targets/test
infra-commit: f5dd3ee0e
repo: sigs.k8s.io/gcp-compute-persistent-disk-csi-driver
repo-commit: cbb21d5a05a614e3299ac96c23cf1019271e0024
repos: {u'sigs.k8s.io/gcp-compute-persistent-disk-csi-driver': u'release-0.4.0'}

Test Failures


[sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Feature: gcePD-external] [Testpattern: Dynamic PV (ext3)] volumes should be mountable (6m16s)

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=\[sig\-storage\]\sCSI\sVolumes\s\[Driver\:\spd\.csi\.storage\.gke\.io\]\[Feature\:\sgcePD\-external\]\s\[Testpattern\:\sDynamic\sPV\s\(ext3\)\]\svolumes\sshould\sbe\smountable$'
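The `--ginkgo.focus` argument above is just the full test name with regex metacharacters backslash-escaped, spaces rewritten as `\s`, and the pattern anchored at the end. A rough sketch of how such a pattern can be built (the helper name `focus_pattern` is mine, and ginkgo's own escaping may differ in minor details):

```python
import re

def focus_pattern(test_name):
    r"""Build a --ginkgo.focus style regex from a literal test name.

    re.escape backslash-escapes metacharacters (including spaces);
    the escaped spaces are then rewritten as \s and the pattern is
    anchored at the end, matching the shape of the command above.
    """
    return re.escape(test_name).replace("\\ ", r"\s") + "$"
```

For example, `focus_pattern("[Testpattern: Dynamic PV (ext3)] volumes should be mountable")` yields a pattern that matches exactly that test name and nothing with trailing text.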
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:149
Persistent Volume pvc-a14b2dab-5010-11ea-b739-42010a800002 not deleted by dynamic provisioner
Expected error:
    <*errors.errorString | 0xc0008c2f40>: {
        s: "PersistentVolume pvc-a14b2dab-5010-11ea-b739-42010a800002 still exists within 5m0s",
    }
    PersistentVolume pvc-a14b2dab-5010-11ea-b739-42010a800002 still exists within 5m0s
not to have occurred
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:213
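The failure above is the framework's standard poll-until-deleted wait giving up after its 5m0s window: the external provisioner never reclaimed the PV. The underlying pattern, sketched generically (the `exists` callback and the function name are illustrative, not the framework's actual API):

```python
import time

def wait_for_deleted(exists, timeout=300.0, interval=5.0):
    """Poll exists() until it returns False or the timeout elapses.

    Returns True once the resource is gone; returns False on timeout,
    which is what produces a "still exists within 5m0s" style failure.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if not exists():
            return True
        time.sleep(interval)
    return False
```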
Full stdout/stderr is in junit_18.xml.



Passed: 26 tests (list collapsed)

Skipped: 2134 tests (list collapsed)

Error lines from build-log.txt

... skipping 237 lines ...
W0215 16:28:35.441] Trying to find master named 'e2e-test-prow-master'
W0215 16:28:35.441] Looking for address 'e2e-test-prow-master-ip'
W0215 16:28:36.378] Using master: e2e-test-prow-master (external IP: 34.69.11.29)
I0215 16:28:36.478] Waiting up to 300 seconds for cluster initialization.
I0215 16:28:36.479] 
I0215 16:28:36.479]   This will continually check to see if the API for kubernetes is reachable.
I0215 16:28:36.479]   This may time out if there was some uncaught error during start up.
I0215 16:28:36.479] 
I0215 16:29:47.835] ................Kubernetes cluster created.
I0215 16:29:47.997] Cluster "k8s-boskos-gce-project-01_e2e-test-prow" set.
I0215 16:29:48.147] User "k8s-boskos-gce-project-01_e2e-test-prow" set.
I0215 16:29:48.292] Context "k8s-boskos-gce-project-01_e2e-test-prow" created.
I0215 16:29:48.445] Switched to context "k8s-boskos-gce-project-01_e2e-test-prow".
... skipping 20 lines ...
I0215 16:30:37.529] NAME                              STATUS                     ROLES    AGE   VERSION
I0215 16:30:37.530] e2e-test-prow-master              Ready,SchedulingDisabled   <none>   16s   v1.13.5
I0215 16:30:37.530] e2e-test-prow-minion-group-4qgb   Ready                      <none>   13s   v1.13.5
I0215 16:30:37.530] e2e-test-prow-minion-group-8xzw   Ready                      <none>   12s   v1.13.5
I0215 16:30:37.530] e2e-test-prow-minion-group-kdsn   Ready                      <none>   12s   v1.13.5
I0215 16:30:37.735] Validate output:
I0215 16:30:37.925] NAME                 STATUS    MESSAGE              ERROR
I0215 16:30:37.926] scheduler            Healthy   ok                   
I0215 16:30:37.926] controller-manager   Healthy   ok                   
I0215 16:30:37.926] etcd-0               Healthy   {"health": "true"}   
I0215 16:30:37.926] etcd-1               Healthy   {"health": "true"}   
I0215 16:30:37.933] Cluster validation succeeded
W0215 16:30:38.034] Done, listing cluster services:
... skipping 116 lines ...
W0215 16:30:54.491]   "details": {
W0215 16:30:54.491]     "name": "cloud-sa",
W0215 16:30:54.491]     "kind": "secrets"
W0215 16:30:54.492]   },
W0215 16:30:54.492]   "code": 404
W0215 16:30:54.492] }]
W0215 16:30:54.492] F0215 16:30:54.489050   64548 helpers.go:116] Error from server (NotFound): secrets "cloud-sa" not found
W0215 16:30:54.497] + /go/src/sigs.k8s.io/gcp-compute-persistent-disk-csi-driver/test/k8s-integration/src/k8s.io/kubernetes/cluster/kubectl.sh create secret generic cloud-sa -v=9 --from-file=/tmp/88b9f09f-5010-11ea-845b-86be8f9932e5/cloud-sa.json -n default
W0215 16:30:54.642] I0215 16:30:54.642253   64591 loader.go:359] Config loaded from file /root/.kube/config
W0215 16:30:54.643] I0215 16:30:54.643231   64591 loader.go:359] Config loaded from file /root/.kube/config
W0215 16:30:54.644] I0215 16:30:54.644613   64591 loader.go:359] Config loaded from file /root/.kube/config
W0215 16:30:54.653] I0215 16:30:54.650789   64591 request.go:942] Request Body: {"apiVersion":"v1","data":{"cloud-sa.json":"ewogICJ0eXBlIjogInNlcnZpY2VfYWNjb3VudCIsCiAgInByb2plY3RfaWQiOiAia3ViZXJuZXRlcy1qZW5raW5zLXB1bGwiLAogICJwcml2YXRlX2tleV9pZCI6ICI5ZGMxNDU3YzBiMjQ5ZWZlMTRmOTkzMzM3OTRkMDA2MTNiNzRkMTE2IiwKICAicHJpdmF0ZV9rZXkiOiAiLS0tLS1CRUdJTiBQUklWQVRFIEtFWS0tLS0tXG5NSUlFdlFJQkFEQU5CZ2txaGtpRzl3MEJBUUVGQUFTQ0JLY3dnZ1NqQWdFQUFvSUJBUUNrcEx2WmdOdC9BME1XXG5RdTR4UENkeWgreUg3Z09xdXNEQjMxMGNmL2xwN2sxNmJkL0U4aDdVTmp0S2xpaC90UVZCdjdiejh3c0pIUmpvXG5LZ1JDbjd3NmczVFB5Q0Q1YmtUUzFNT1RMMjRiU3F3YW9BUFhGd0k3eEt5NlNDTnpDckVPazJ2b3F6NE5LT3BhXG5Ob0UrV0lBblg0TVQvS0xWbERVMlRmaTE5R2NCbHBiMlFaOGlFWmtUQTZWNHFXM0oxSFNlemhtczhsTHZ1akJ4XG5namdlWlY1SDBJaHZjMFFDTG9rejhIelh6TnBzaXRaQWVSL0hIWVY2TElsUkhLTjRMb0p0RkY2NHBnQjREZkI3XG5OU1ZwdXQ3M3k5OE1lUkhUU1l5MmRqVlFHTlYzYVdqdmhVZzVyUE5DUjBKRWxKeENTaXZUMW4xUTJRU0d2eDZtXG4rVS9pTjhyUEFnTUJBQUVDZ2dFQUJNVk5PQ2syc3dCNk1pdndVUGJvUWJaVWtUL0JWWEloNGJ4Q0szQTkzdWJyXG4wYzliTzQ2MGUvdXlUaC81dm5HVlhHbGN1TWJqRy9sKzNrTlJha2pNb0hsc3FoU25kL0xOQmJycWk5cEUrMzkzXG5heklWeW1hOVJ0UVZHVFM3NmpwMkZQRUtvR3hOMlAzRmhLV3Rxd3ZQY3RTZXJMYVdWVS85RnJ5OXB6QVV5QVAwXG5KZE4rUnVYcGRVMkJ6WTRGbm1MWnk2WXdlZmF2ekFtRHBjYTFVOWpJSHExdkIwSnFZWGFXU200VkIwL1Y0dzZFXG5RWXlMYjFKa29hd1p3VUZkeTBlaDBGTzNLVG9KOUtjNHVaQm1sWVBScy9NWlBNM21HNk9JVmVQeDY5ZEUxemNtXG5BZnBXcm4xV3VITWZNK1J5WFBIRHMzRW1yQXJiYzhtQ0ttaGVRaFB3S1FLQmdRRFZSY0VrYXVFYmM1c3dyQnRxXG5OUG1jR0dMeVlvS2x6TGIvTGgzVFhiOWozYnRnWFFzbjNzMmd2MEFLYlpERVpKRzJxU3hNQ1FvRHNLRWcvTXhhXG5wUTZQUEg4MlZ2bzUxTHBUTkhoZDJyWU5xVnEwblJTdzNCYWdoTUFvREVnd1I3cnA0emJ2WWVvTkwxd1FmTTl0XG5rbmxBL2szVE40YTBOaTRrYkQzS0F3UjN5d0tCZ1FERm9PbWRrL3FuaE5zSFU2ZmZxdVpxTDFXenZXZkJRcVo4XG50Mm5vSFlRbFd5M2Rzd21vZlV5NFU3N3h3VmJBVHRQZjBXM2JjY3hWc2tKSXZJRHhoRmJ3WFVSNkZUVkJXc2F3XG5jcXIzQmJycDMyTWUydERHckV0VGlPVGJBcTFlV015cHc5NFpHaFdCOExMa1E1SXd5bjJuWndMcmRHRlljM3lnXG51blZWUit0d2pRS0JnUURFSVJMZWw3K09URUZpZ0ZQeDF1ZlpxZitNb2hVOXZrWm40NVFnSmUxMVBKc3Rva001XG5Ga2VKMVhkaVg2SXJTY1ZLOHNhKzRlMG91WTFrVmJia2dUZ2pkMklScWVrUnQ5VGw0Ykh6dGNlSVJyOFlyQUEwXG5LU0pSMmVNUXNVTE55NWxUcjlWOFRWSTliOWs5ZkZvUmpRc2ZQdjU1Q0c1RFdqUVM0TXZ0ODlwdkF3S0JnQ1BuXG5JY25RMGw2UWtrSDRmcXp4blQ1NUx2YWlXREVSN0xFSG1TUGtDTmZJZ3NLZC9na3dDam9LYk12NnRjbm1DRFBIXG5GbEFoa2wrWExOZU45MWJwRmVwQTUybk9aSWhrOTJlaXpWQm5LQmZJejNjUmhaQXlTTlQ2TDZsaUNTRDB2T215XG5mYXpOd2s2QzdkenVNYWJQZ3BoZzdIVGVCYmxOOGJ3NVk0dUVyMDhaQW9HQUpBeURiZmthN0hWNlJwWkkvTEcxXG5Zakd0OU5QNm42UklxRUUxVWFiMEFaY2pJWDVlK0RTUnV6RVRUSXRiN0hNYUZVOE1sWmdDTWNmWDk1ckN1RmhhXG45bkFQeGZNWEd6WVdXRlVMWmNPbWdBTWQ2RkNDUlFNSjVEUm9JYU5iNGIwdVhUYWk1Z1BaUDdGQ3hNTVdrbzJQXG56NFV0WFBCVXY1am8zZUxYUHFWVVNCQT1cbi0tLS0tRU5EIFBSSVZBVEUgS0VZLS0tLS1cbiIsCiAgImNsaWVudF9lbWFpbCI6ICJwci1rdWJla2luc0BrdWJlcm5ldGVzLWplbmtpbnMtcHVsbC5pYW0uZ3NlcnZpY2VhY2NvdW50LmNvbSIsCiAgImNsaWVudF9pZCI6ICIxMTM2MDA2MzMyODUwMDYyODMxMzAiLAogICJhdXRoX3VyaSI6ICJodHRwczovL2FjY291bnRzLmdvb2dsZS5jb20vby9vYXV0aDIvYXV0aCIsCiAgInRva2VuX3VyaSI6ICJodHRwczovL29hdXRoMi5nb29nbGVhcGlzLmNvbS90b2tlbiIsCiAgImF1dGhfcHJvdmlkZXJfeDUwOV9jZXJ0X3VybCI6ICJodHRwczovL3d3dy5nb29nbGVhcGlzLmNvbS9vYXV0aDIvdjEvY2VydHMiLAogICJjbGllbnRfeDUwOV9jZXJ0X3VybCI6ICJodHRwczovL3d3dy5nb29nbGVhcGlzLmNvbS9yb2JvdC92MS9tZXRhZGF0YS94NTA5L3ByLWt1YmVraW5zJTQwa3ViZXJuZXRlcy1qZW5raW5zLXB1bGwuaWFtLmdzZXJ2aWNlYWNjb3VudC5jb20iCn0K"},"kind":"Secret","metadata":{"creationTimestamp":null,"name":"cloud-sa"}}
W0215 16:30:54.654] I0215 16:30:54.650881   64591 round_trippers.go:419] curl -k -v -XPOST  -H "Accept: application/json" -H "User-Agent: kubectl/v1.13.5 (linux/amd64) kubernetes/2166946" -H "Authorization: Bearer ZjRN9LQDu1hS1cwMefanPbVadfaVUwpZ" 'https://34.69.11.29/api/v1/namespaces/default/secrets'
... skipping 29 lines ...
W0215 16:30:54.880]     "name": "cluster-admin-binding",
W0215 16:30:54.881]     "group": "rbac.authorization.k8s.io",
W0215 16:30:54.881]     "kind": "clusterrolebindings"
W0215 16:30:54.881]   },
W0215 16:30:54.881]   "code": 404
W0215 16:30:54.881] }]
W0215 16:30:54.881] F0215 16:30:54.848423   64633 helpers.go:116] Error from server (NotFound): clusterrolebindings.rbac.authorization.k8s.io "cluster-admin-binding" not found
W0215 16:30:54.882] ++ gcloud config get-value account
W0215 16:30:55.356] + /go/src/sigs.k8s.io/gcp-compute-persistent-disk-csi-driver/test/k8s-integration/src/k8s.io/kubernetes/cluster/kubectl.sh create clusterrolebinding cluster-admin-binding -v=9 --clusterrole cluster-admin --user pr-kubekins@kubernetes-jenkins-pull.iam.gserviceaccount.com
W0215 16:30:55.509] I0215 16:30:55.508771   64689 loader.go:359] Config loaded from file /root/.kube/config
W0215 16:30:55.510] I0215 16:30:55.509998   64689 loader.go:359] Config loaded from file /root/.kube/config
W0215 16:30:55.511] I0215 16:30:55.511624   64689 loader.go:359] Config loaded from file /root/.kube/config
W0215 16:30:55.517] I0215 16:30:55.517150   64689 request.go:942] Request Body: {"apiVersion":"rbac.authorization.k8s.io/v1beta1","kind":"ClusterRoleBinding","metadata":{"creationTimestamp":null,"name":"cluster-admin-binding"},"roleRef":{"apiGroup":"rbac.authorization.k8s.io","kind":"ClusterRole","name":"cluster-admin"},"subjects":[{"apiGroup":"rbac.authorization.k8s.io","kind":"User","name":"pr-kubekins@kubernetes-jenkins-pull.iam.gserviceaccount.com"}]}
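The two NotFound errors above are expected: before recreating the `cloud-sa` secret and the `cluster-admin-binding`, the harness appears to run a cleanup step, and a 404 on that step simply means there was nothing left over to remove. This is the usual delete-then-create idempotency pattern, sketched here against a stand-in dictionary store rather than the real API (the `recreate` helper is illustrative, not the harness's actual code):

```python
def recreate(store, name, obj):
    """Delete any existing object, ignoring NotFound, then create it.

    This makes setup idempotent: it succeeds whether or not a stale
    copy was left behind by a previous run.
    """
    try:
        del store[name]
    except KeyError:  # plays the role of the server's 404 NotFound
        pass
    store[name] = obj
    return store[name]
```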
... skipping 562 lines ...
W0215 16:30:56.254] 00000670  74 65 73 29 20 6f 72 20  61 20 63 6f 6e 66 69 67  |tes) or a config|
W0215 16:30:56.255] 00000680  75 72 61 74 69 6f 6e 20  63 68 61 6e 67 65 20 6f  |uration change o|
W0215 16:30:56.255] 00000690  6e 20 74 68 65 20 73 65  72 76 65 72 2c 20 74 68  |n the server, th|
W0215 16:30:56.255] 000006a0  65 20 73 65 72 76 65 72  20 77 69 6c 6c 20 72 65  |e server will re|
W0215 16:30:56.255] 000006b0  73 70 6f 6e 64 20 77 69  74 68 20 61 20 34 31 30  |spond with a 410|
W0215 16:30:56.256] 000006c0  20 52 65 73 6f 75 72 63  65 45 78 70 69 72 65 64  | ResourceExpired|
W0215 16:30:56.256] 000006d0  20 65 72 72 6f 72 20 74  6f 67 65 74 68 65 72 20  | error together |
W0215 16:30:56.256] 000006e0  77 69 74 68 20 61 20 63  6f 6e 74 69 6e 75 65 20  |with a continue |
W0215 16:30:56.256] 000006f0  74 6f 6b 65 6e 2e 20 49  66 20 74 68 65 20 63 6c  |token. If the cl|
W0215 16:30:56.256] 00000700  69 65 6e 74 20 6e 65 65  64 73 20 61 20 63 6f 6e  |ient needs a con|
W0215 16:30:56.256] 00000710  73 69 73 74 65 6e 74 20  6c 69 73 74 2c 20 69 74  |sistent list, it|
W0215 16:30:56.256] 00000720  20 6d 75 73 74 20 72 65  73 74 61 72 74 20 74 68  | must restart th|
W0215 16:30:56.257] 00000730  65 69 72 20 6c 69 73 74  20 77 69 74 68 6f 75 74  |eir list without|
... skipping 799 lines ...
I0215 16:31:18.224] 
I0215 16:31:32.118] Feb 15 16:31:23.458: INFO: cluster-master-image: cos-stable-65-10323-64-0
I0215 16:31:32.118] Feb 15 16:31:23.458: INFO: cluster-node-image: cos-stable-65-10323-64-0
I0215 16:31:32.118] Feb 15 16:31:23.458: INFO: >>> kubeConfig: /root/.kube/config
I0215 16:31:32.118] Feb 15 16:31:23.462: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
I0215 16:31:32.119] Feb 15 16:31:23.481: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
I0215 16:31:32.119] Feb 15 16:31:23.523: INFO: The status of Pod coredns-fff89c9b9-mdrpb is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
I0215 16:31:32.119] Feb 15 16:31:23.523: INFO: The status of Pod fluentd-gcp-v3.2.0-fvz84 is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
I0215 16:31:32.120] Feb 15 16:31:23.523: INFO: The status of Pod heapster-v1.6.0-beta.1-59667b5b94-rd58j is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
I0215 16:31:32.120] Feb 15 16:31:23.523: INFO: The status of Pod heapster-v1.6.0-beta.1-5f77ccd9dd-rdlb4 is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
I0215 16:31:32.120] Feb 15 16:31:23.523: INFO: The status of Pod metrics-server-v0.3.1-58d65f8d6-684rz is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
I0215 16:31:32.121] Feb 15 16:31:23.523: INFO: The status of Pod metrics-server-v0.3.1-78d65bc5fd-d7lw4 is Running (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
I0215 16:31:32.121] Feb 15 16:31:23.523: INFO: 21 / 27 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
I0215 16:31:32.121] Feb 15 16:31:23.523: INFO: expected 9 pod replicas in namespace 'kube-system', 6 are Running and Ready.
I0215 16:31:32.122] Feb 15 16:31:23.524: INFO: POD                                      NODE                             PHASE    GRACE  CONDITIONS
I0215 16:31:32.122] Feb 15 16:31:23.524: INFO: coredns-fff89c9b9-mdrpb                  e2e-test-prow-minion-group-8xzw  Pending         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-15 16:31:07 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-15 16:31:07 +0000 UTC ContainersNotReady containers with unready status: [coredns]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-15 16:31:07 +0000 UTC ContainersNotReady containers with unready status: [coredns]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-15 16:31:05 +0000 UTC  }]
I0215 16:31:32.124] Feb 15 16:31:23.524: INFO: fluentd-gcp-v3.2.0-fvz84                 e2e-test-prow-minion-group-8xzw  Pending  60s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-15 16:30:25 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-15 16:30:25 +0000 UTC ContainersNotReady containers with unready status: [fluentd-gcp prometheus-to-sd-exporter]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-15 16:30:25 +0000 UTC ContainersNotReady containers with unready status: [fluentd-gcp prometheus-to-sd-exporter]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-15 16:30:25 +0000 UTC  }]
I0215 16:31:32.125] Feb 15 16:31:23.524: INFO: heapster-v1.6.0-beta.1-59667b5b94-rd58j  e2e-test-prow-minion-group-4qgb  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-15 16:30:24 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-15 16:30:24 +0000 UTC ContainersNotReady containers with unready status: [heapster heapster-nanny]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-15 16:30:24 +0000 UTC ContainersNotReady containers with unready status: [heapster heapster-nanny]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-15 16:30:24 +0000 UTC  }]
I0215 16:31:32.126] Feb 15 16:31:23.524: INFO: heapster-v1.6.0-beta.1-5f77ccd9dd-rdlb4  e2e-test-prow-minion-group-8xzw  Pending         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-15 16:31:07 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-15 16:31:07 +0000 UTC ContainersNotReady containers with unready status: [heapster heapster-nanny]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-15 16:31:07 +0000 UTC ContainersNotReady containers with unready status: [heapster heapster-nanny]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-15 16:30:43 +0000 UTC  }]
I0215 16:31:32.126] Feb 15 16:31:23.524: INFO: metrics-server-v0.3.1-58d65f8d6-684rz    e2e-test-prow-minion-group-8xzw  Pending         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-15 16:31:07 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-15 16:31:07 +0000 UTC ContainersNotReady containers with unready status: [metrics-server metrics-server-nanny]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-15 16:31:07 +0000 UTC ContainersNotReady containers with unready status: [metrics-server metrics-server-nanny]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-15 16:30:43 +0000 UTC  }]
I0215 16:31:32.128] Feb 15 16:31:23.524: INFO: metrics-server-v0.3.1-78d65bc5fd-d7lw4   e2e-test-prow-minion-group-4qgb  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-15 16:30:24 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-15 16:30:24 +0000 UTC ContainersNotReady containers with unready status: [metrics-server metrics-server-nanny]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-15 16:30:24 +0000 UTC ContainersNotReady containers with unready status: [metrics-server metrics-server-nanny]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-15 16:30:24 +0000 UTC  }]
I0215 16:31:32.128] Feb 15 16:31:23.524: INFO: 
I0215 16:31:32.128] Feb 15 16:31:25.969: INFO: The status of Pod coredns-fff89c9b9-mdrpb is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
I0215 16:31:32.129] Feb 15 16:31:25.969: INFO: The status of Pod fluentd-gcp-v3.2.0-bz8w5 is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
I0215 16:31:32.129] Feb 15 16:31:25.969: INFO: The status of Pod heapster-v1.6.0-beta.1-5f77ccd9dd-rdlb4 is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
I0215 16:31:32.129] Feb 15 16:31:25.969: INFO: 22 / 25 pods in namespace 'kube-system' are running and ready (2 seconds elapsed)
I0215 16:31:32.130] Feb 15 16:31:25.969: INFO: expected 9 pod replicas in namespace 'kube-system', 7 are Running and Ready.
I0215 16:31:32.130] Feb 15 16:31:25.969: INFO: POD                                      NODE                             PHASE    GRACE  CONDITIONS
I0215 16:31:32.131] Feb 15 16:31:25.969: INFO: coredns-fff89c9b9-mdrpb                  e2e-test-prow-minion-group-8xzw  Pending         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-15 16:31:07 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-15 16:31:07 +0000 UTC ContainersNotReady containers with unready status: [coredns]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-15 16:31:07 +0000 UTC ContainersNotReady containers with unready status: [coredns]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-15 16:31:05 +0000 UTC  }]
I0215 16:31:32.131] Feb 15 16:31:25.969: INFO: fluentd-gcp-v3.2.0-bz8w5                                                  Pending         []
I0215 16:31:32.131] Feb 15 16:31:25.969: INFO: heapster-v1.6.0-beta.1-5f77ccd9dd-rdlb4  e2e-test-prow-minion-group-8xzw  Pending         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-15 16:31:07 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-15 16:31:07 +0000 UTC ContainersNotReady containers with unready status: [heapster heapster-nanny]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-15 16:31:07 +0000 UTC ContainersNotReady containers with unready status: [heapster heapster-nanny]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-15 16:30:43 +0000 UTC  }]
I0215 16:31:32.132] Feb 15 16:31:25.969: INFO: 
I0215 16:31:32.132] Feb 15 16:31:27.882: INFO: The status of Pod fluentd-gcp-v3.2.0-bz8w5 is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
I0215 16:31:32.133] Feb 15 16:31:27.882: INFO: The status of Pod fluentd-gcp-v3.2.0-n9pvp is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
I0215 16:31:32.133] Feb 15 16:31:27.882: INFO: The status of Pod fluentd-gcp-v3.2.0-xj2qc is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
I0215 16:31:32.133] Feb 15 16:31:27.882: INFO: The status of Pod heapster-v1.6.0-beta.1-5f77ccd9dd-rdlb4 is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
I0215 16:31:32.134] Feb 15 16:31:27.882: INFO: 24 / 28 pods in namespace 'kube-system' are running and ready (4 seconds elapsed)
I0215 16:31:32.134] Feb 15 16:31:27.882: INFO: expected 9 pod replicas in namespace 'kube-system', 8 are Running and Ready.
I0215 16:31:32.134] Feb 15 16:31:27.882: INFO: POD                                      NODE                             PHASE    GRACE  CONDITIONS
I0215 16:31:32.134] Feb 15 16:31:27.882: INFO: fluentd-gcp-v3.2.0-bz8w5                 e2e-test-prow-minion-group-8xzw  Pending         [{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-15 16:31:25 +0000 UTC  }]
I0215 16:31:32.135] Feb 15 16:31:27.882: INFO: fluentd-gcp-v3.2.0-n9pvp                 e2e-test-prow-minion-group-4qgb  Pending         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-15 16:31:26 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-15 16:31:26 +0000 UTC ContainersNotReady containers with unready status: [fluentd-gcp prometheus-to-sd-exporter]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-15 16:31:26 +0000 UTC ContainersNotReady containers with unready status: [fluentd-gcp prometheus-to-sd-exporter]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-15 16:31:26 +0000 UTC  }]
I0215 16:31:32.136] Feb 15 16:31:27.882: INFO: fluentd-gcp-v3.2.0-xj2qc                 e2e-test-prow-master             Pending         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-15 16:31:26 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-15 16:31:26 +0000 UTC ContainersNotReady containers with unready status: [fluentd-gcp prometheus-to-sd-exporter]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-15 16:31:26 +0000 UTC ContainersNotReady containers with unready status: [fluentd-gcp prometheus-to-sd-exporter]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-15 16:31:26 +0000 UTC  }]
I0215 16:31:32.137] Feb 15 16:31:27.882: INFO: heapster-v1.6.0-beta.1-5f77ccd9dd-rdlb4  e2e-test-prow-minion-group-8xzw  Pending         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-15 16:31:07 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-15 16:31:07 +0000 UTC ContainersNotReady containers with unready status: [heapster heapster-nanny]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-15 16:31:07 +0000 UTC ContainersNotReady containers with unready status: [heapster heapster-nanny]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-15 16:30:43 +0000 UTC  }]
I0215 16:31:32.137] Feb 15 16:31:27.882: INFO: 
I0215 16:31:32.137] Feb 15 16:31:29.926: INFO: The status of Pod fluentd-gcp-v3.2.0-bz8w5 is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
I0215 16:31:32.138] Feb 15 16:31:29.926: INFO: The status of Pod heapster-v1.6.0-beta.1-5f77ccd9dd-rdlb4 is Pending (Ready = false), waiting for it to be either Running (with Ready = true) or Failed
I0215 16:31:32.138] Feb 15 16:31:29.926: INFO: 26 / 28 pods in namespace 'kube-system' are running and ready (6 seconds elapsed)
I0215 16:31:32.138] Feb 15 16:31:29.926: INFO: expected 9 pod replicas in namespace 'kube-system', 8 are Running and Ready.
I0215 16:31:32.139] Feb 15 16:31:29.926: INFO: POD                                      NODE                             PHASE    GRACE  CONDITIONS
I0215 16:31:32.140] Feb 15 16:31:29.926: INFO: fluentd-gcp-v3.2.0-bz8w5                 e2e-test-prow-minion-group-8xzw  Pending         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-15 16:31:25 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-15 16:31:25 +0000 UTC ContainersNotReady containers with unready status: [fluentd-gcp prometheus-to-sd-exporter]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-15 16:31:25 +0000 UTC ContainersNotReady containers with unready status: [fluentd-gcp prometheus-to-sd-exporter]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-15 16:31:25 +0000 UTC  }]
I0215 16:31:32.140] Feb 15 16:31:29.926: INFO: heapster-v1.6.0-beta.1-5f77ccd9dd-rdlb4  e2e-test-prow-minion-group-8xzw  Pending         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-15 16:31:07 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-15 16:31:07 +0000 UTC ContainersNotReady containers with unready status: [heapster heapster-nanny]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-15 16:31:07 +0000 UTC ContainersNotReady containers with unready status: [heapster heapster-nanny]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-15 16:30:43 +0000 UTC  }]
I0215 16:31:32.141] Feb 15 16:31:29.926: INFO: 
... skipping 369 lines ...
I0215 16:32:38.586] STEP: creating a claim
I0215 16:32:38.586] Feb 15 16:31:34.045: INFO: Waiting up to 5m0s for PersistentVolumeClaims [pvc-cq7qf] to have phase Bound
I0215 16:32:38.586] Feb 15 16:31:34.088: INFO: PersistentVolumeClaim pvc-cq7qf found but phase is Pending instead of Bound.
I0215 16:32:38.587] Feb 15 16:31:36.102: INFO: PersistentVolumeClaim pvc-cq7qf found but phase is Pending instead of Bound.
I0215 16:32:38.587] Feb 15 16:31:38.117: INFO: PersistentVolumeClaim pvc-cq7qf found but phase is Pending instead of Bound.
I0215 16:32:38.587] Feb 15 16:31:40.184: INFO: PersistentVolumeClaim pvc-cq7qf found and phase=Bound (6.139603804s)
I0215 16:32:38.587] [It] should fail if subpath directory is outside the volume [Slow]
I0215 16:32:38.587]   /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:245
I0215 16:32:38.588] STEP: Creating pod pod-subpath-test-pd-csi-storage-gke-io-dynamicpv-4jzg
I0215 16:32:38.588] STEP: Checking for subpath error in container status
I0215 16:32:38.588] Feb 15 16:32:02.437: INFO: Deleting pod "pod-subpath-test-pd-csi-storage-gke-io-dynamicpv-4jzg" in namespace "e2e-tests-csi-volumes-rvc6c"
I0215 16:32:38.588] Feb 15 16:32:02.586: INFO: Wait up to 5m0s for pod "pod-subpath-test-pd-csi-storage-gke-io-dynamicpv-4jzg" to be fully deleted
I0215 16:32:38.588] WARNING: pod log: pod-subpath-test-pd-csi-storage-gke-io-dynamicpv-4jzg/test-container-subpath-pd-csi-storage-gke-io-dynamicpv-4jzg: pods "pod-subpath-test-pd-csi-storage-gke-io-dynamicpv-4jzg" not found
I0215 16:32:38.589] WARNING: pod log: pod-subpath-test-pd-csi-storage-gke-io-dynamicpv-4jzg/test-container-volume-pd-csi-storage-gke-io-dynamicpv-4jzg: pods "pod-subpath-test-pd-csi-storage-gke-io-dynamicpv-4jzg" not found
I0215 16:32:38.589] [AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
I0215 16:32:38.589]   /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:120
... skipping 26 lines ...
I0215 16:32:38.595] [sig-storage] CSI Volumes
I0215 16:32:38.595] /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
I0215 16:32:38.595]   [Driver: pd.csi.storage.gke.io][Feature: gcePD-external]
I0215 16:32:38.595]   /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:129
I0215 16:32:38.595]     [Testpattern: Dynamic PV (default fs)] subPath
I0215 16:32:38.596]     /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:99
I0215 16:32:38.596]       should fail if subpath directory is outside the volume [Slow]
I0215 16:32:38.596]       /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:245
I0215 16:32:38.596] ------------------------------
I0215 16:32:38.596] Feb 15 16:32:38.583: INFO: Running AfterSuite actions on all nodes
I0215 16:32:38.596] 
I0215 16:32:42.067] 
I0215 16:32:42.067] [BeforeEach] [sig-storage] CSI Volumes
... skipping 19 lines ...
I0215 16:32:42.071] Feb 15 16:31:38.766: INFO: PersistentVolumeClaim pvc-lnkl4 found but phase is Pending instead of Bound.
I0215 16:32:42.071] Feb 15 16:31:40.856: INFO: PersistentVolumeClaim pvc-lnkl4 found but phase is Pending instead of Bound.
I0215 16:32:42.072] Feb 15 16:31:43.028: INFO: PersistentVolumeClaim pvc-lnkl4 found but phase is Pending instead of Bound.
I0215 16:32:42.072] Feb 15 16:31:45.117: INFO: PersistentVolumeClaim pvc-lnkl4 found but phase is Pending instead of Bound.
I0215 16:32:42.072] Feb 15 16:31:47.222: INFO: PersistentVolumeClaim pvc-lnkl4 found but phase is Pending instead of Bound.
I0215 16:32:42.072] Feb 15 16:31:49.243: INFO: PersistentVolumeClaim pvc-lnkl4 found and phase=Bound (14.587427411s)
I0215 16:32:42.073] [It] should fail if subpath file is outside the volume [Slow]
I0215 16:32:42.073]   /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:253
I0215 16:32:42.073] STEP: Creating pod pod-subpath-test-pd-csi-storage-gke-io-dynamicpv-26gv
I0215 16:32:42.073] STEP: Checking for subpath error in container status
I0215 16:32:42.074] Feb 15 16:32:09.483: INFO: Deleting pod "pod-subpath-test-pd-csi-storage-gke-io-dynamicpv-26gv" in namespace "e2e-tests-csi-volumes-vlfr9"
I0215 16:32:42.074] Feb 15 16:32:09.661: INFO: Wait up to 5m0s for pod "pod-subpath-test-pd-csi-storage-gke-io-dynamicpv-26gv" to be fully deleted
I0215 16:32:42.074] [AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
I0215 16:32:42.074]   /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:120
I0215 16:32:42.074] STEP: Deleting pod
I0215 16:32:42.075] Feb 15 16:32:15.765: INFO: Deleting pod "pod-subpath-test-pd-csi-storage-gke-io-dynamicpv-26gv" in namespace "e2e-tests-csi-volumes-vlfr9"
... skipping 23 lines ...
I0215 16:32:42.080] [sig-storage] CSI Volumes
I0215 16:32:42.080] /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
I0215 16:32:42.080]   [Driver: pd.csi.storage.gke.io][Feature: gcePD-external]
I0215 16:32:42.080]   /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:129
I0215 16:32:42.081]     [Testpattern: Dynamic PV (default fs)] subPath
I0215 16:32:42.081]     /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:99
I0215 16:32:42.081]       should fail if subpath file is outside the volume [Slow]
I0215 16:32:42.081]       /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:253
I0215 16:32:42.081] ------------------------------
I0215 16:32:42.081] Feb 15 16:32:42.067: INFO: Running AfterSuite actions on all nodes
I0215 16:32:42.082] 
I0215 16:32:46.503] 
I0215 16:32:46.503] [BeforeEach] [sig-storage] CSI Volumes
... skipping 794 lines ...
I0215 16:33:08.150] Feb 15 16:31:45.834: INFO: PersistentVolumeClaim pvc-mgdj5 found but phase is Pending instead of Bound.
I0215 16:33:08.150] Feb 15 16:31:48.027: INFO: PersistentVolumeClaim pvc-mgdj5 found but phase is Pending instead of Bound.
I0215 16:33:08.151] Feb 15 16:31:50.032: INFO: PersistentVolumeClaim pvc-mgdj5 found but phase is Pending instead of Bound.
I0215 16:33:08.151] Feb 15 16:31:52.036: INFO: PersistentVolumeClaim pvc-mgdj5 found but phase is Pending instead of Bound.
I0215 16:33:08.151] Feb 15 16:31:54.040: INFO: PersistentVolumeClaim pvc-mgdj5 found but phase is Pending instead of Bound.
I0215 16:33:08.151] Feb 15 16:31:56.121: INFO: PersistentVolumeClaim pvc-mgdj5 found and phase=Bound (20.678782866s)
I0215 16:33:08.151] [It] should fail if non-existent subpath is outside the volume [Slow]
I0215 16:33:08.151]   /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:261
I0215 16:33:08.151] STEP: Creating pod pod-subpath-test-pd-csi-storage-gke-io-dynamicpv-8rd5
I0215 16:33:08.152] STEP: Checking for subpath error in container status
I0215 16:33:08.152] Feb 15 16:32:32.710: INFO: Deleting pod "pod-subpath-test-pd-csi-storage-gke-io-dynamicpv-8rd5" in namespace "e2e-tests-csi-volumes-zcng2"
I0215 16:33:08.152] Feb 15 16:32:32.719: INFO: Wait up to 5m0s for pod "pod-subpath-test-pd-csi-storage-gke-io-dynamicpv-8rd5" to be fully deleted
I0215 16:33:08.152] [AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
I0215 16:33:08.152]   /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:120
I0215 16:33:08.152] STEP: Deleting pod
I0215 16:33:08.152] Feb 15 16:32:36.767: INFO: Deleting pod "pod-subpath-test-pd-csi-storage-gke-io-dynamicpv-8rd5" in namespace "e2e-tests-csi-volumes-zcng2"
... skipping 24 lines ...
I0215 16:33:08.157] [sig-storage] CSI Volumes
I0215 16:33:08.157] /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
I0215 16:33:08.157]   [Driver: pd.csi.storage.gke.io][Feature: gcePD-external]
I0215 16:33:08.157]   /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:129
I0215 16:33:08.157]     [Testpattern: Dynamic PV (default fs)] subPath
I0215 16:33:08.158]     /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:99
I0215 16:33:08.158]       should fail if non-existent subpath is outside the volume [Slow]
I0215 16:33:08.158]       /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:261
I0215 16:33:08.158] ------------------------------
I0215 16:33:08.158] Feb 15 16:33:08.144: INFO: Running AfterSuite actions on all nodes
I0215 16:33:08.158] 
I0215 16:33:13.593] 
I0215 16:33:13.593] [BeforeEach] [sig-storage] CSI Volumes
... skipping 671 lines ...
I0215 16:33:41.352] Feb 15 16:31:46.376: INFO: PersistentVolumeClaim pvc-n6cbv found but phase is Pending instead of Bound.
I0215 16:33:41.352] Feb 15 16:31:48.449: INFO: PersistentVolumeClaim pvc-n6cbv found but phase is Pending instead of Bound.
I0215 16:33:41.352] Feb 15 16:31:50.456: INFO: PersistentVolumeClaim pvc-n6cbv found but phase is Pending instead of Bound.
I0215 16:33:41.353] Feb 15 16:31:52.469: INFO: PersistentVolumeClaim pvc-n6cbv found but phase is Pending instead of Bound.
I0215 16:33:41.353] Feb 15 16:31:54.514: INFO: PersistentVolumeClaim pvc-n6cbv found but phase is Pending instead of Bound.
I0215 16:33:41.353] Feb 15 16:31:56.602: INFO: PersistentVolumeClaim pvc-n6cbv found and phase=Bound (20.750745675s)
I0215 16:33:41.353] [It] should fail if subpath with backstepping is outside the volume [Slow]
I0215 16:33:41.353]   /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:269
I0215 16:33:41.353] STEP: Creating pod pod-subpath-test-pd-csi-storage-gke-io-dynamicpv-9cf9
I0215 16:33:41.354] STEP: Checking for subpath error in container status
I0215 16:33:41.354] Feb 15 16:33:05.064: INFO: Deleting pod "pod-subpath-test-pd-csi-storage-gke-io-dynamicpv-9cf9" in namespace "e2e-tests-csi-volumes-2j6km"
I0215 16:33:41.354] Feb 15 16:33:05.078: INFO: Wait up to 5m0s for pod "pod-subpath-test-pd-csi-storage-gke-io-dynamicpv-9cf9" to be fully deleted
I0215 16:33:41.354] [AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
I0215 16:33:41.354]   /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:120
I0215 16:33:41.354] STEP: Deleting pod
I0215 16:33:41.355] Feb 15 16:33:15.105: INFO: Deleting pod "pod-subpath-test-pd-csi-storage-gke-io-dynamicpv-9cf9" in namespace "e2e-tests-csi-volumes-2j6km"
... skipping 23 lines ...
I0215 16:33:41.359] [sig-storage] CSI Volumes
I0215 16:33:41.359] /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
I0215 16:33:41.359]   [Driver: pd.csi.storage.gke.io][Feature: gcePD-external]
I0215 16:33:41.360]   /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:129
I0215 16:33:41.360]     [Testpattern: Dynamic PV (default fs)] subPath
I0215 16:33:41.360]     /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:99
I0215 16:33:41.360]       should fail if subpath with backstepping is outside the volume [Slow]
I0215 16:33:41.360]       /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:269
I0215 16:33:41.361] ------------------------------
I0215 16:33:41.361] Feb 15 16:33:41.347: INFO: Running AfterSuite actions on all nodes
I0215 16:33:41.361] 
I0215 16:33:59.155] 
I0215 16:33:59.156] [BeforeEach] [sig-storage] CSI Volumes
... skipping 297 lines ...
I0215 16:36:50.092] [BeforeEach] [sig-storage] CSI Volumes
I0215 16:36:50.093]   /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:87
I0215 16:36:50.093] [BeforeEach] [Driver: pd.csi.storage.gke.io][Feature: gcePD-external]
I0215 16:36:50.093]   /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:132
I0215 16:36:50.093] [BeforeEach] [Testpattern: Dynamic PV (block volmode)] volumeMode
I0215 16:36:50.093]   /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumemode.go:119
I0215 16:36:50.093] [It] should fail in binding dynamic provisioned PV to PVC
I0215 16:36:50.094]   /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumemode.go:315
I0215 16:36:50.094] STEP: Creating sc
I0215 16:36:50.094] STEP: Creating pv and pvc
I0215 16:36:50.094] Feb 15 16:31:35.260: INFO: Waiting up to 5m0s for PersistentVolumeClaims [pvc-v54zg] to have phase Bound
I0215 16:36:50.094] Feb 15 16:31:35.311: INFO: PersistentVolumeClaim pvc-v54zg found but phase is Pending instead of Bound.
I0215 16:36:50.094] Feb 15 16:31:37.315: INFO: PersistentVolumeClaim pvc-v54zg found but phase is Pending instead of Bound.
... skipping 166 lines ...
I0215 16:36:50.124] [sig-storage] CSI Volumes
I0215 16:36:50.125] /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
I0215 16:36:50.125]   [Driver: pd.csi.storage.gke.io][Feature: gcePD-external]
I0215 16:36:50.125]   /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:129
I0215 16:36:50.125]     [Testpattern: Dynamic PV (block volmode)] volumeMode
I0215 16:36:50.125]     /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumemode.go:109
I0215 16:36:50.125]       should fail in binding dynamic provisioned PV to PVC
I0215 16:36:50.126]       /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumemode.go:315
I0215 16:36:50.126] ------------------------------
I0215 16:36:50.126] Feb 15 16:36:50.090: INFO: Running AfterSuite actions on all nodes
I0215 16:36:50.126] 
I0215 16:37:49.181] 
I0215 16:37:49.183] [BeforeEach] [sig-storage] CSI Volumes
... skipping 119 lines ...
I0215 16:37:49.206] Feb 15 16:37:07.657: INFO: PersistentVolume pvc-a14b2dab-5010-11ea-b739-42010a800002 found and phase=Released (4m30.344404248s)
I0215 16:37:49.207] Feb 15 16:37:12.661: INFO: PersistentVolume pvc-a14b2dab-5010-11ea-b739-42010a800002 found and phase=Released (4m35.348873846s)
I0215 16:37:49.207] Feb 15 16:37:17.738: INFO: PersistentVolume pvc-a14b2dab-5010-11ea-b739-42010a800002 found and phase=Released (4m40.425404449s)
I0215 16:37:49.207] Feb 15 16:37:22.788: INFO: PersistentVolume pvc-a14b2dab-5010-11ea-b739-42010a800002 found and phase=Released (4m45.476093996s)
I0215 16:37:49.207] Feb 15 16:37:27.887: INFO: PersistentVolume pvc-a14b2dab-5010-11ea-b739-42010a800002 found and phase=Released (4m50.574394653s)
I0215 16:37:49.207] Feb 15 16:37:32.939: INFO: PersistentVolume pvc-a14b2dab-5010-11ea-b739-42010a800002 found and phase=Released (4m55.626914618s)
I0215 16:37:49.208] Feb 15 16:37:37.939: INFO: Unexpected error occurred: PersistentVolume pvc-a14b2dab-5010-11ea-b739-42010a800002 still exists within 5m0s
I0215 16:37:49.208] [AfterEach] [Driver: pd.csi.storage.gke.io][Feature: gcePD-external]
I0215 16:37:49.208]   /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:138
I0215 16:37:49.208] [AfterEach] [sig-storage] CSI Volumes
I0215 16:37:49.208]   /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:154
I0215 16:37:49.209] STEP: Collecting events from namespace "e2e-tests-csi-volumes-x9sqp".
I0215 16:37:49.209] STEP: Found 15 events.
I0215 16:37:49.209] Feb 15 16:37:38.037: INFO: At 2020-02-15 16:31:34 +0000 UTC - event for pvc-676nb: {persistentvolume-controller } ExternalProvisioning: waiting for a volume to be created, either by external provisioner "pd.csi.storage.gke.io" or manually created by system administrator
I0215 16:37:49.210] Feb 15 16:37:38.038: INFO: At 2020-02-15 16:31:39 +0000 UTC - event for pvc-676nb: {pd.csi.storage.gke.io_csi-gce-pd-controller-0_96752fc7-5010-11ea-945f-3abebc7d2f5d } Provisioning: External provisioner is provisioning volume for claim "e2e-tests-csi-volumes-x9sqp/pvc-676nb"
I0215 16:37:49.210] Feb 15 16:37:38.038: INFO: At 2020-02-15 16:31:43 +0000 UTC - event for pvc-676nb: {pd.csi.storage.gke.io_csi-gce-pd-controller-0_96752fc7-5010-11ea-945f-3abebc7d2f5d } ProvisioningSucceeded: Successfully provisioned volume pvc-a14b2dab-5010-11ea-b739-42010a800002
I0215 16:37:49.210] Feb 15 16:37:38.038: INFO: At 2020-02-15 16:31:45 +0000 UTC - event for csi-injector-ltx6: {default-scheduler } Scheduled: Successfully assigned e2e-tests-csi-volumes-x9sqp/csi-injector-ltx6 to e2e-test-prow-minion-group-kdsn
I0215 16:37:49.211] Feb 15 16:37:38.038: INFO: At 2020-02-15 16:32:00 +0000 UTC - event for csi-injector-ltx6: {attachdetach-controller } FailedAttachVolume: AttachVolume.Attach failed for volume "pvc-a14b2dab-5010-11ea-b739-42010a800002" : attachment timeout for volume projects/k8s-boskos-gce-project-01/zones/us-central1-b/disks/pvc-a14b2dab-5010-11ea-b739-42010a800002
I0215 16:37:49.211] Feb 15 16:37:38.038: INFO: At 2020-02-15 16:32:01 +0000 UTC - event for csi-injector-ltx6: {attachdetach-controller } FailedAttachVolume: AttachVolume.Attach failed for volume "pvc-a14b2dab-5010-11ea-b739-42010a800002" : rpc error: code = DeadlineExceeded desc = context deadline exceeded
I0215 16:37:49.211] Feb 15 16:37:38.038: INFO: At 2020-02-15 16:32:17 +0000 UTC - event for csi-injector-ltx6: {attachdetach-controller } SuccessfulAttachVolume: AttachVolume.Attach succeeded for volume "pvc-a14b2dab-5010-11ea-b739-42010a800002" 
I0215 16:37:49.212] Feb 15 16:37:38.038: INFO: At 2020-02-15 16:32:19 +0000 UTC - event for csi-injector-ltx6: {kubelet e2e-test-prow-minion-group-kdsn} Pulled: Container image "docker.io/library/busybox:1.29" already present on machine
I0215 16:37:49.212] Feb 15 16:37:38.038: INFO: At 2020-02-15 16:32:19 +0000 UTC - event for csi-injector-ltx6: {kubelet e2e-test-prow-minion-group-kdsn} Created: Created container
I0215 16:37:49.212] Feb 15 16:37:38.038: INFO: At 2020-02-15 16:32:19 +0000 UTC - event for csi-injector-ltx6: {kubelet e2e-test-prow-minion-group-kdsn} Started: Started container
I0215 16:37:49.212] Feb 15 16:37:38.038: INFO: At 2020-02-15 16:32:20 +0000 UTC - event for csi-client: {default-scheduler } Scheduled: Successfully assigned e2e-tests-csi-volumes-x9sqp/csi-client to e2e-test-prow-minion-group-kdsn
I0215 16:37:49.213] Feb 15 16:37:38.038: INFO: At 2020-02-15 16:32:29 +0000 UTC - event for csi-client: {kubelet e2e-test-prow-minion-group-kdsn} Created: Created container
... skipping 34 lines ...
I0215 16:37:49.229] Feb 15 16:37:38.243: INFO: metadata-proxy-v0.1-ddl94                     e2e-test-prow-minion-group-4qgb  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-15 16:30:24 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-15 16:31:09 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-15 16:31:09 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-15 16:30:24 +0000 UTC  }]
I0215 16:37:49.230] Feb 15 16:37:38.243: INFO: metadata-proxy-v0.1-hhmq4                     e2e-test-prow-minion-group-kdsn  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-15 16:30:25 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-15 16:31:02 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-15 16:31:02 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-15 16:30:25 +0000 UTC  }]
I0215 16:37:49.230] Feb 15 16:37:38.243: INFO: metrics-server-v0.3.1-58d65f8d6-684rz         e2e-test-prow-minion-group-8xzw  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-15 16:31:07 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-15 16:31:24 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-15 16:31:24 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-15 16:30:43 +0000 UTC  }]
I0215 16:37:49.230] Feb 15 16:37:38.243: INFO: 
I0215 16:37:49.230] Feb 15 16:37:38.338: INFO: 
I0215 16:37:49.231] Logging node info for node e2e-test-prow-master
I0215 16:37:49.235] Feb 15 16:37:38.438: INFO: Node Info: &Node{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-test-prow-master,GenerateName:,Namespace:,SelfLink:/api/v1/nodes/e2e-test-prow-master,UID:75c278ea-5010-11ea-b739-42010a800002,ResourceVersion:2914,Generation:0,CreationTimestamp:2020-02-15 16:30:21 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{beta.kubernetes.io/arch: amd64,beta.kubernetes.io/fluentd-ds-ready: true,beta.kubernetes.io/instance-type: n1-standard-1,beta.kubernetes.io/metadata-proxy-ready: true,beta.kubernetes.io/os: linux,failure-domain.beta.kubernetes.io/region: us-central1,failure-domain.beta.kubernetes.io/zone: us-central1-b,kubernetes.io/hostname: e2e-test-prow-master,},Annotations:map[string]string{node.alpha.kubernetes.io/ttl: 0,volumes.kubernetes.io/controller-managed-attach-detach: true,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:NodeSpec{PodCIDR:10.64.0.0/24,DoNotUse_ExternalID:,ProviderID:gce://k8s-boskos-gce-project-01/us-central1-b/e2e-test-prow-master,Unschedulable:true,Taints:[{node.kubernetes.io/unschedulable  NoSchedule <nil>}],ConfigSource:nil,},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{32 0} {<nil>} 32 DecimalSI},cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{16684785664 0} {<nil>}  BinarySI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3885404160 0} {<nil>} 3794340Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{32 0} {<nil>} 32 DecimalSI},cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{15016307073 0} {<nil>} 15016307073 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3623260160 0} {<nil>} 3538340Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[{NetworkUnavailable False 2020-02-15 16:30:34 +0000 UTC 2020-02-15 16:30:34 +0000 UTC RouteCreated RouteController created a route} {MemoryPressure 
False 2020-02-15 16:37:33 +0000 UTC 2020-02-15 16:30:21 +0000 UTC KubeletHasSufficientMemory kubelet has sufficient memory available} {DiskPressure False 2020-02-15 16:37:33 +0000 UTC 2020-02-15 16:30:21 +0000 UTC KubeletHasNoDiskPressure kubelet has no disk pressure} {PIDPressure False 2020-02-15 16:37:33 +0000 UTC 2020-02-15 16:30:21 +0000 UTC KubeletHasSufficientPID kubelet has sufficient PID available} {Ready True 2020-02-15 16:37:33 +0000 UTC 2020-02-15 16:30:21 +0000 UTC KubeletReady kubelet is posting ready status. AppArmor enabled}],Addresses:[{InternalIP 10.128.0.2} {ExternalIP 34.69.11.29} {InternalDNS e2e-test-prow-master.c.k8s-boskos-gce-project-01.internal} {Hostname e2e-test-prow-master.c.k8s-boskos-gce-project-01.internal}],DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:8ff89ed6a8c311d12cb202d7c9587041,SystemUUID:8FF89ED6-A8C3-11D1-2CB2-02D7C9587041,BootID:635883ab-3e17-430b-9048-79e94a6c522d,KernelVersion:4.4.111+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:docker://17.3.2,KubeletVersion:v1.13.5,KubeProxyVersion:v1.13.5,OperatingSystem:linux,Architecture:amd64,},Images:[{[gcr.io/stackdriver-agents/stackdriver-logging-agent@sha256:f8d5231b67b9c53f60068b535a11811d29d1b3efd53d2b79f2a2591ea338e4f2 gcr.io/stackdriver-agents/stackdriver-logging-agent:0.6-1.6.0-1] 223242132} {[k8s.gcr.io/etcd@sha256:905d7ca17fd02bc24c0eba9a062753aba15db3e31422390bc3238eb762339b20 k8s.gcr.io/etcd:3.2.24-1] 219655340} {[k8s.gcr.io/kube-apiserver:v1.13.5] 180890537} {[k8s.gcr.io/kube-controller-manager:v1.13.5] 146171465} {[k8s.gcr.io/kube-addon-manager@sha256:2fd1daf3d3cf0e94a753f2263b60dbb0d42b107b5cde0c75ee3fc5c830e016e4 k8s.gcr.io/kube-addon-manager:v8.9] 99240637} {[k8s.gcr.io/prometheus-to-sd@sha256:6c0c742475363d537ff059136e5d5e4ab1f512ee0fd9b7ca42ea48bc309d1662 k8s.gcr.io/prometheus-to-sd:v0.3.1] 88077694} {[k8s.gcr.io/kube-scheduler:v1.13.5] 79542185} 
{[k8s.gcr.io/ingress-gce-glbc-amd64@sha256:14f14351a03038b238232e60850a9cfa0dffbed0590321ef84216a432accc1ca k8s.gcr.io/ingress-gce-glbc-amd64:v1.2.3] 71797285} {[k8s.gcr.io/etcd-empty-dir-cleanup@sha256:c9c0dea9dbdbdaca436e33947a7559dbe2a60ad2bfa4e77518cb7088e443e28a k8s.gcr.io/etcd-empty-dir-cleanup:3.2.24.0] 31724043} {[k8s.gcr.io/metadata-proxy@sha256:5be758058e67b578f7041498e2daca46ccd7426bc602d35ed0f50bd4a3659d50 k8s.gcr.io/metadata-proxy:v0.1.10] 8953717} {[k8s.gcr.io/pause@sha256:f78411e19d84a252e53bff71a4407a5686c46983a2c2eeed83929b888179acea k8s.gcr.io/pause:3.1] 742472}],VolumesInUse:[],VolumesAttached:[],Config:&NodeConfigStatus{Assigned:nil,Active:nil,LastKnownGood:nil,Error:,},},}
I0215 16:37:49.235] Feb 15 16:37:38.438: INFO: 
I0215 16:37:49.235] Logging kubelet events for node e2e-test-prow-master
I0215 16:37:49.235] Feb 15 16:37:38.543: INFO: 
I0215 16:37:49.236] Logging pods the kubelet thinks are on node e2e-test-prow-master
I0215 16:37:49.236] Feb 15 16:37:38.654: INFO: fluentd-gcp-v3.2.0-xj2qc started at 2020-02-15 16:31:26 +0000 UTC (0+2 container statuses recorded)
I0215 16:37:49.236] Feb 15 16:37:38.654: INFO: 	Container fluentd-gcp ready: true, restart count 0
... skipping 17 lines ...
I0215 16:37:49.240] Feb 15 16:37:38.960: INFO: {Operation:create Method:pod_worker_latency_microseconds Quantile:0.9 Latency:20.674561s}
I0215 16:37:49.240] Feb 15 16:37:38.960: INFO: {Operation:create Method:pod_worker_latency_microseconds Quantile:0.99 Latency:20.674561s}
I0215 16:37:49.240] Feb 15 16:37:38.960: INFO: {Operation:pull_image Method:docker_operations_latency_microseconds Quantile:0.99 Latency:18.44401s}
I0215 16:37:49.241] Feb 15 16:37:38.960: INFO: {Operation:pull_image Method:docker_operations_latency_microseconds Quantile:0.9 Latency:18.290512s}
I0215 16:37:49.241] Feb 15 16:37:38.960: INFO: 
I0215 16:37:49.242] Logging node info for node e2e-test-prow-minion-group-4qgb
I0215 16:37:49.247] Feb 15 16:37:39.038: INFO: Node Info: &Node{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-test-prow-minion-group-4qgb,GenerateName:,Namespace:,SelfLink:/api/v1/nodes/e2e-test-prow-minion-group-4qgb,UID:775a504c-5010-11ea-b739-42010a800002,ResourceVersion:2919,Generation:0,CreationTimestamp:2020-02-15 16:30:24 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{beta.kubernetes.io/arch: amd64,beta.kubernetes.io/fluentd-ds-ready: true,beta.kubernetes.io/instance-type: n1-standard-2,beta.kubernetes.io/metadata-proxy-ready: true,beta.kubernetes.io/os: linux,failure-domain.beta.kubernetes.io/region: us-central1,failure-domain.beta.kubernetes.io/zone: us-central1-b,kubernetes.io/hostname: e2e-test-prow-minion-group-4qgb,},Annotations:map[string]string{csi.volume.kubernetes.io/nodeid: {"pd.csi.storage.gke.io":"projects/k8s-boskos-gce-project-01/zones/us-central1-b/instances/e2e-test-prow-minion-group-4qgb"},node.alpha.kubernetes.io/ttl: 0,volumes.kubernetes.io/controller-managed-attach-detach: true,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:NodeSpec{PodCIDR:10.64.1.0/24,DoNotUse_ExternalID:,ProviderID:gce://k8s-boskos-gce-project-01/us-central1-b/e2e-test-prow-minion-group-4qgb,Unschedulable:false,Taints:[],ConfigSource:nil,},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{64 0} {<nil>} 64 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101241290752 0} {<nil>}  BinarySI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7848767488 0} {<nil>} 7664812Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{64 0} {<nil>} 64 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91117161526 0} {<nil>} 91117161526 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7586623488 0} {<nil>} 7408812Ki BinarySI},pods: {{110 0} {<nil>} 110 
DecimalSI},},Phase:,Conditions:[{FrequentContainerdRestart False 2020-02-15 16:37:18 +0000 UTC 2020-02-15 16:34:15 +0000 UTC FrequentContainerdRestart containerd is functioning properly} {CorruptDockerOverlay2 False 2020-02-15 16:37:18 +0000 UTC 2020-02-15 16:34:13 +0000 UTC CorruptDockerOverlay2 docker overlay2 is functioning properly} {KernelDeadlock False 2020-02-15 16:37:18 +0000 UTC 2020-02-15 16:29:11 +0000 UTC KernelHasNoDeadlock kernel has no deadlock} {ReadonlyFilesystem False 2020-02-15 16:37:18 +0000 UTC 2020-02-15 16:29:11 +0000 UTC FilesystemIsNotReadOnly Filesystem is not read-only} {FrequentUnregisterNetDevice False 2020-02-15 16:37:18 +0000 UTC 2020-02-15 16:34:12 +0000 UTC UnregisterNetDevice node is functioning properly} {FrequentKubeletRestart False 2020-02-15 16:37:18 +0000 UTC 2020-02-15 16:34:12 +0000 UTC FrequentKubeletRestart kubelet is functioning properly} {FrequentDockerRestart False 2020-02-15 16:37:18 +0000 UTC 2020-02-15 16:34:14 +0000 UTC FrequentDockerRestart docker is functioning properly} {NetworkUnavailable False 2020-02-15 16:30:36 +0000 UTC 2020-02-15 16:30:36 +0000 UTC RouteCreated RouteController created a route} {MemoryPressure False 2020-02-15 16:37:35 +0000 UTC 2020-02-15 16:30:24 +0000 UTC KubeletHasSufficientMemory kubelet has sufficient memory available} {DiskPressure False 2020-02-15 16:37:35 +0000 UTC 2020-02-15 16:30:24 +0000 UTC KubeletHasNoDiskPressure kubelet has no disk pressure} {PIDPressure False 2020-02-15 16:37:35 +0000 UTC 2020-02-15 16:30:24 +0000 UTC KubeletHasSufficientPID kubelet has sufficient PID available} {Ready True 2020-02-15 16:37:35 +0000 UTC 2020-02-15 16:30:24 +0000 UTC KubeletReady kubelet is posting ready status. 
AppArmor enabled}],Addresses:[{InternalIP 10.128.0.4} {ExternalIP 35.226.238.19} {InternalDNS e2e-test-prow-minion-group-4qgb.c.k8s-boskos-gce-project-01.internal} {Hostname e2e-test-prow-minion-group-4qgb.c.k8s-boskos-gce-project-01.internal}],DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:2e0487df2acce8b1f948611b0f8f36af,SystemUUID:2E0487DF-2ACC-E8B1-F948-611B0F8F36AF,BootID:1e0325d6-bd11-4b8c-90c4-56c6127fa7b2,KernelVersion:4.4.111+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:docker://17.3.2,KubeletVersion:v1.13.5,KubeProxyVersion:v1.13.5,OperatingSystem:linux,Architecture:amd64,},Images:[{[gcr.io/stackdriver-agents/stackdriver-logging-agent@sha256:f8d5231b67b9c53f60068b535a11811d29d1b3efd53d2b79f2a2591ea338e4f2 gcr.io/stackdriver-agents/stackdriver-logging-agent:0.6-1.6.0-1] 223242132} {[gcr.io/gke-release/gcp-compute-persistent-disk-csi-driver@sha256:65527562c25ba2ecb1a6e8f80bf9aa5ad100be32c3417c4b31119eeea09b6080 gcr.io/gke-release/gcp-compute-persistent-disk-csi-driver:v0.4.0-gke.0] 133411124} {[k8s.gcr.io/event-exporter@sha256:7f9cd7cb04d6959b0aa960727d04fa86759008048c785397b7b0d9dff0007516 k8s.gcr.io/event-exporter:v0.2.3] 94171943} {[k8s.gcr.io/prometheus-to-sd@sha256:6c0c742475363d537ff059136e5d5e4ab1f512ee0fd9b7ca42ea48bc309d1662 k8s.gcr.io/prometheus-to-sd:v0.3.1] 88077694} {[k8s.gcr.io/kube-proxy:v1.13.5] 80186087} {[k8s.gcr.io/heapster-amd64@sha256:9fae0af136ce0cf4f88393b3670f7139ffc464692060c374d2ae748e13144521 k8s.gcr.io/heapster-amd64:v1.6.0-beta.1] 76016169} {[gcr.io/gke-release/csi-node-driver-registrar@sha256:dc4e6c0fa39134e71c26972a7e2d2fb0c371bfc449b434e0fb2532a9410a4139 gcr.io/gke-release/csi-node-driver-registrar:v1.0.1-gke.0] 68977975} {[k8s.gcr.io/metrics-server-amd64@sha256:78938f933822856f443e6827fe5b37d6cc2f74ae888ac8b33d06fdbe5f8c658b k8s.gcr.io/metrics-server-amd64:v0.3.1] 40767713} 
{[k8s.gcr.io/coredns@sha256:81936728011c0df9404cb70b95c17bbc8af922ec9a70d0561a5d01fefa6ffa51 k8s.gcr.io/coredns:1.2.6] 40017418} {[k8s.gcr.io/addon-resizer@sha256:a31822f30e947885d038812f4a5a5675e72f92c06cef17b1989c80426aa89012 k8s.gcr.io/addon-resizer:1.8.4] 38349857} {[nginx@sha256:a3a0c4126587884f8d3090efca87f5af075d7e7ac8308cffc09a5a082d5f4760 nginx:1.14-alpine] 16032814} {[k8s.gcr.io/metadata-proxy@sha256:5be758058e67b578f7041498e2daca46ccd7426bc602d35ed0f50bd4a3659d50 k8s.gcr.io/metadata-proxy:v0.1.10] 8953717} {[k8s.gcr.io/defaultbackend-amd64@sha256:4dc5e07c8ca4e23bddb3153737d7b8c556e5fb2f29c4558b7cd6e6df99c512c7 k8s.gcr.io/defaultbackend-amd64:1.5] 5132544} {[gcr.io/kubernetes-e2e-test-images/mounttest@sha256:c0bd6f0755f42af09a68c9a47fb993136588a76b3200ec305796b60d629d85d2 gcr.io/kubernetes-e2e-test-images/mounttest:1.0] 1563521} {[busybox@sha256:e004c2cc521c95383aebb1fb5893719aa7a8eae2e7a71f316a4410784edb00a9 busybox:1.29] 1154361} {[k8s.gcr.io/pause@sha256:f78411e19d84a252e53bff71a4407a5686c46983a2c2eeed83929b888179acea k8s.gcr.io/pause:3.1] 742472}],VolumesInUse:[],VolumesAttached:[],Config:&NodeConfigStatus{Assigned:nil,Active:nil,LastKnownGood:nil,Error:,},},}
I0215 16:37:49.247] Feb 15 16:37:39.038: INFO: 
I0215 16:37:49.248] Logging kubelet events for node e2e-test-prow-minion-group-4qgb
I0215 16:37:49.248] Feb 15 16:37:39.143: INFO: 
I0215 16:37:49.248] Logging pods the kubelet thinks are on node e2e-test-prow-minion-group-4qgb
I0215 16:37:49.248] Feb 15 16:37:39.243: INFO: metadata-proxy-v0.1-ddl94 started at 2020-02-15 16:30:24 +0000 UTC (0+2 container statuses recorded)
I0215 16:37:49.248] Feb 15 16:37:39.243: INFO: 	Container metadata-proxy ready: true, restart count 0
... skipping 17 lines ...
I0215 16:37:49.252] Feb 15 16:37:39.596: INFO: {Operation: Method:pod_start_latency_microseconds Quantile:0.99 Latency:45.865156s}
I0215 16:37:49.252] Feb 15 16:37:39.596: INFO: {Operation: Method:pod_start_latency_microseconds Quantile:0.9 Latency:45.057049s}
I0215 16:37:49.252] Feb 15 16:37:39.596: INFO: {Operation:start_container Method:docker_operations_latency_microseconds Quantile:0.99 Latency:28.473522s}
I0215 16:37:49.252] Feb 15 16:37:39.596: INFO: {Operation:pull_image Method:docker_operations_latency_microseconds Quantile:0.99 Latency:12.699983s}
I0215 16:37:49.252] Feb 15 16:37:39.596: INFO: 
I0215 16:37:49.253] Logging node info for node e2e-test-prow-minion-group-8xzw
I0215 16:37:49.258] Feb 15 16:37:39.687: INFO: Node Info: &Node{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-test-prow-minion-group-8xzw,GenerateName:,Namespace:,SelfLink:/api/v1/nodes/e2e-test-prow-minion-group-8xzw,UID:7813b0a5-5010-11ea-b739-42010a800002,ResourceVersion:2924,Generation:0,CreationTimestamp:2020-02-15 16:30:25 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{beta.kubernetes.io/arch: amd64,beta.kubernetes.io/fluentd-ds-ready: true,beta.kubernetes.io/instance-type: n1-standard-2,beta.kubernetes.io/metadata-proxy-ready: true,beta.kubernetes.io/os: linux,failure-domain.beta.kubernetes.io/region: us-central1,failure-domain.beta.kubernetes.io/zone: us-central1-b,kubernetes.io/hostname: e2e-test-prow-minion-group-8xzw,},Annotations:map[string]string{csi.volume.kubernetes.io/nodeid: {"pd.csi.storage.gke.io":"projects/k8s-boskos-gce-project-01/zones/us-central1-b/instances/e2e-test-prow-minion-group-8xzw"},node.alpha.kubernetes.io/ttl: 0,volumes.kubernetes.io/controller-managed-attach-detach: true,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:NodeSpec{PodCIDR:10.64.3.0/24,DoNotUse_ExternalID:,ProviderID:gce://k8s-boskos-gce-project-01/us-central1-b/e2e-test-prow-minion-group-8xzw,Unschedulable:false,Taints:[],ConfigSource:nil,},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{64 0} {<nil>} 64 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101241290752 0} {<nil>}  BinarySI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7848767488 0} {<nil>} 7664812Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{64 0} {<nil>} 64 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91117161526 0} {<nil>} 91117161526 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7586623488 0} {<nil>} 7408812Ki BinarySI},pods: {{110 0} {<nil>} 110 
DecimalSI},},Phase:,Conditions:[{KernelDeadlock False 2020-02-15 16:37:15 +0000 UTC 2020-02-15 16:29:10 +0000 UTC KernelHasNoDeadlock kernel has no deadlock} {ReadonlyFilesystem False 2020-02-15 16:37:15 +0000 UTC 2020-02-15 16:29:10 +0000 UTC FilesystemIsNotReadOnly Filesystem is not read-only} {FrequentUnregisterNetDevice False 2020-02-15 16:37:15 +0000 UTC 2020-02-15 16:34:11 +0000 UTC UnregisterNetDevice node is functioning properly} {FrequentKubeletRestart False 2020-02-15 16:37:15 +0000 UTC 2020-02-15 16:34:11 +0000 UTC FrequentKubeletRestart kubelet is functioning properly} {FrequentDockerRestart False 2020-02-15 16:37:15 +0000 UTC 2020-02-15 16:34:13 +0000 UTC FrequentDockerRestart docker is functioning properly} {FrequentContainerdRestart False 2020-02-15 16:37:15 +0000 UTC 2020-02-15 16:34:14 +0000 UTC FrequentContainerdRestart containerd is functioning properly} {CorruptDockerOverlay2 False 2020-02-15 16:37:15 +0000 UTC 2020-02-15 16:34:12 +0000 UTC CorruptDockerOverlay2 docker overlay2 is functioning properly} {NetworkUnavailable False 2020-02-15 16:30:36 +0000 UTC 2020-02-15 16:30:36 +0000 UTC RouteCreated RouteController created a route} {MemoryPressure False 2020-02-15 16:37:37 +0000 UTC 2020-02-15 16:30:25 +0000 UTC KubeletHasSufficientMemory kubelet has sufficient memory available} {DiskPressure False 2020-02-15 16:37:37 +0000 UTC 2020-02-15 16:30:25 +0000 UTC KubeletHasNoDiskPressure kubelet has no disk pressure} {PIDPressure False 2020-02-15 16:37:37 +0000 UTC 2020-02-15 16:30:25 +0000 UTC KubeletHasSufficientPID kubelet has sufficient PID available} {Ready True 2020-02-15 16:37:37 +0000 UTC 2020-02-15 16:30:25 +0000 UTC KubeletReady kubelet is posting ready status. 
AppArmor enabled}],Addresses:[{InternalIP 10.128.0.3} {ExternalIP 35.223.191.190} {InternalDNS e2e-test-prow-minion-group-8xzw.c.k8s-boskos-gce-project-01.internal} {Hostname e2e-test-prow-minion-group-8xzw.c.k8s-boskos-gce-project-01.internal}],DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:7e5f13d28e0c0b35a90834936fa1bd6d,SystemUUID:7E5F13D2-8E0C-0B35-A908-34936FA1BD6D,BootID:2b459277-28a7-441b-a6f7-de2c41396d13,KernelVersion:4.4.111+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:docker://17.3.2,KubeletVersion:v1.13.5,KubeProxyVersion:v1.13.5,OperatingSystem:linux,Architecture:amd64,},Images:[{[gcr.io/stackdriver-agents/stackdriver-logging-agent@sha256:f8d5231b67b9c53f60068b535a11811d29d1b3efd53d2b79f2a2591ea338e4f2 gcr.io/stackdriver-agents/stackdriver-logging-agent:0.6-1.6.0-1] 223242132} {[gcr.io/gke-release/gcp-compute-persistent-disk-csi-driver@sha256:65527562c25ba2ecb1a6e8f80bf9aa5ad100be32c3417c4b31119eeea09b6080 gcr.io/gke-release/gcp-compute-persistent-disk-csi-driver:v0.4.0-gke.0] 133411124} {[k8s.gcr.io/prometheus-to-sd@sha256:6c0c742475363d537ff059136e5d5e4ab1f512ee0fd9b7ca42ea48bc309d1662 k8s.gcr.io/prometheus-to-sd:v0.3.1] 88077694} {[k8s.gcr.io/kube-proxy:v1.13.5] 80186087} {[k8s.gcr.io/heapster-amd64@sha256:9fae0af136ce0cf4f88393b3670f7139ffc464692060c374d2ae748e13144521 k8s.gcr.io/heapster-amd64:v1.6.0-beta.1] 76016169} {[gcr.io/gke-release/csi-node-driver-registrar@sha256:dc4e6c0fa39134e71c26972a7e2d2fb0c371bfc449b434e0fb2532a9410a4139 gcr.io/gke-release/csi-node-driver-registrar:v1.0.1-gke.0] 68977975} {[k8s.gcr.io/metrics-server-amd64@sha256:78938f933822856f443e6827fe5b37d6cc2f74ae888ac8b33d06fdbe5f8c658b k8s.gcr.io/metrics-server-amd64:v0.3.1] 40767713} {[k8s.gcr.io/coredns@sha256:81936728011c0df9404cb70b95c17bbc8af922ec9a70d0561a5d01fefa6ffa51 k8s.gcr.io/coredns:1.2.6] 40017418} 
{[k8s.gcr.io/addon-resizer@sha256:a31822f30e947885d038812f4a5a5675e72f92c06cef17b1989c80426aa89012 k8s.gcr.io/addon-resizer:1.8.4] 38349857} {[nginx@sha256:a3a0c4126587884f8d3090efca87f5af075d7e7ac8308cffc09a5a082d5f4760 nginx:1.14-alpine] 16032814} {[k8s.gcr.io/metadata-proxy@sha256:5be758058e67b578f7041498e2daca46ccd7426bc602d35ed0f50bd4a3659d50 k8s.gcr.io/metadata-proxy:v0.1.10] 8953717} {[gcr.io/kubernetes-e2e-test-images/mounttest@sha256:c0bd6f0755f42af09a68c9a47fb993136588a76b3200ec305796b60d629d85d2 gcr.io/kubernetes-e2e-test-images/mounttest:1.0] 1563521} {[busybox@sha256:e004c2cc521c95383aebb1fb5893719aa7a8eae2e7a71f316a4410784edb00a9 busybox:1.29] 1154361} {[k8s.gcr.io/pause@sha256:f78411e19d84a252e53bff71a4407a5686c46983a2c2eeed83929b888179acea k8s.gcr.io/pause:3.1] 742472}],VolumesInUse:[],VolumesAttached:[],Config:&NodeConfigStatus{Assigned:nil,Active:nil,LastKnownGood:nil,Error:,},},}
I0215 16:37:49.258] Feb 15 16:37:39.688: INFO: 
I0215 16:37:49.258] Logging kubelet events for node e2e-test-prow-minion-group-8xzw
I0215 16:37:49.259] Feb 15 16:37:39.792: INFO: 
I0215 16:37:49.259] Logging pods the kubelet thinks are on node e2e-test-prow-minion-group-8xzw
I0215 16:37:49.259] Feb 15 16:37:39.902: INFO: metadata-proxy-v0.1-b5sws started at 2020-02-15 16:30:25 +0000 UTC (0+2 container statuses recorded)
I0215 16:37:49.259] Feb 15 16:37:39.902: INFO: 	Container metadata-proxy ready: true, restart count 0
... skipping 19 lines ...
I0215 16:37:49.262] Feb 15 16:37:40.208: INFO: {Operation: Method:pod_start_latency_microseconds Quantile:0.9 Latency:34.404197s}
I0215 16:37:49.263] Feb 15 16:37:40.208: INFO: {Operation:start_container Method:docker_operations_latency_microseconds Quantile:0.99 Latency:30.191561s}
I0215 16:37:49.263] Feb 15 16:37:40.208: INFO: {Operation: Method:pod_worker_start_latency_microseconds Quantile:0.99 Latency:23.636771s}
I0215 16:37:49.263] Feb 15 16:37:40.208: INFO: {Operation: Method:pod_worker_start_latency_microseconds Quantile:0.9 Latency:10.829197s}
I0215 16:37:49.263] Feb 15 16:37:40.208: INFO: 
I0215 16:37:49.263] Logging node info for node e2e-test-prow-minion-group-kdsn
I0215 16:37:49.270] Feb 15 16:37:40.287: INFO: Node Info: &Node{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-test-prow-minion-group-kdsn,GenerateName:,Namespace:,SelfLink:/api/v1/nodes/e2e-test-prow-minion-group-kdsn,UID:7804826b-5010-11ea-b739-42010a800002,ResourceVersion:2922,Generation:0,CreationTimestamp:2020-02-15 16:30:25 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{beta.kubernetes.io/arch: amd64,beta.kubernetes.io/fluentd-ds-ready: true,beta.kubernetes.io/instance-type: n1-standard-2,beta.kubernetes.io/metadata-proxy-ready: true,beta.kubernetes.io/os: linux,failure-domain.beta.kubernetes.io/region: us-central1,failure-domain.beta.kubernetes.io/zone: us-central1-b,kubernetes.io/hostname: e2e-test-prow-minion-group-kdsn,},Annotations:map[string]string{csi.volume.kubernetes.io/nodeid: {"pd.csi.storage.gke.io":"projects/k8s-boskos-gce-project-01/zones/us-central1-b/instances/e2e-test-prow-minion-group-kdsn"},node.alpha.kubernetes.io/ttl: 0,volumes.kubernetes.io/controller-managed-attach-detach: true,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,},Spec:NodeSpec{PodCIDR:10.64.2.0/24,DoNotUse_ExternalID:,ProviderID:gce://k8s-boskos-gce-project-01/us-central1-b/e2e-test-prow-minion-group-kdsn,Unschedulable:false,Taints:[],ConfigSource:nil,},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{64 0} {<nil>} 64 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101241290752 0} {<nil>}  BinarySI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7848767488 0} {<nil>} 7664812Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{64 0} {<nil>} 64 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91117161526 0} {<nil>} 91117161526 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7586623488 0} {<nil>} 7408812Ki BinarySI},pods: {{110 0} {<nil>} 110 
DecimalSI},},Phase:,Conditions:[{FrequentKubeletRestart False 2020-02-15 16:37:20 +0000 UTC 2020-02-15 16:34:15 +0000 UTC FrequentKubeletRestart kubelet is functioning properly} {FrequentDockerRestart False 2020-02-15 16:37:20 +0000 UTC 2020-02-15 16:34:16 +0000 UTC FrequentDockerRestart docker is functioning properly} {FrequentContainerdRestart False 2020-02-15 16:37:20 +0000 UTC 2020-02-15 16:34:18 +0000 UTC FrequentContainerdRestart containerd is functioning properly} {CorruptDockerOverlay2 False 2020-02-15 16:37:20 +0000 UTC 2020-02-15 16:34:16 +0000 UTC CorruptDockerOverlay2 docker overlay2 is functioning properly} {KernelDeadlock False 2020-02-15 16:37:20 +0000 UTC 2020-02-15 16:29:14 +0000 UTC KernelHasNoDeadlock kernel has no deadlock} {ReadonlyFilesystem False 2020-02-15 16:37:20 +0000 UTC 2020-02-15 16:29:14 +0000 UTC FilesystemIsNotReadOnly Filesystem is not read-only} {FrequentUnregisterNetDevice False 2020-02-15 16:37:20 +0000 UTC 2020-02-15 16:34:15 +0000 UTC UnregisterNetDevice node is functioning properly} {NetworkUnavailable False 2020-02-15 16:30:36 +0000 UTC 2020-02-15 16:30:36 +0000 UTC RouteCreated RouteController created a route} {MemoryPressure False 2020-02-15 16:37:37 +0000 UTC 2020-02-15 16:30:25 +0000 UTC KubeletHasSufficientMemory kubelet has sufficient memory available} {DiskPressure False 2020-02-15 16:37:37 +0000 UTC 2020-02-15 16:30:25 +0000 UTC KubeletHasNoDiskPressure kubelet has no disk pressure} {PIDPressure False 2020-02-15 16:37:37 +0000 UTC 2020-02-15 16:30:25 +0000 UTC KubeletHasSufficientPID kubelet has sufficient PID available} {Ready True 2020-02-15 16:37:37 +0000 UTC 2020-02-15 16:30:25 +0000 UTC KubeletReady kubelet is posting ready status. 
AppArmor enabled}],Addresses:[{InternalIP 10.128.0.5} {ExternalIP 35.222.47.245} {InternalDNS e2e-test-prow-minion-group-kdsn.c.k8s-boskos-gce-project-01.internal} {Hostname e2e-test-prow-minion-group-kdsn.c.k8s-boskos-gce-project-01.internal}],DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:c59e45212a02d2ea65fa00bfc4d76037,SystemUUID:C59E4521-2A02-D2EA-65FA-00BFC4D76037,BootID:45308c23-eed8-445a-a2a0-98eeda026dd6,KernelVersion:4.4.111+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:docker://17.3.2,KubeletVersion:v1.13.5,KubeProxyVersion:v1.13.5,OperatingSystem:linux,Architecture:amd64,},Images:[{[gcr.io/stackdriver-agents/stackdriver-logging-agent@sha256:f8d5231b67b9c53f60068b535a11811d29d1b3efd53d2b79f2a2591ea338e4f2 gcr.io/stackdriver-agents/stackdriver-logging-agent:0.6-1.6.0-1] 223242132} {[gcr.io/gke-release/gcp-compute-persistent-disk-csi-driver@sha256:65527562c25ba2ecb1a6e8f80bf9aa5ad100be32c3417c4b31119eeea09b6080 gcr.io/gke-release/gcp-compute-persistent-disk-csi-driver:v0.4.0-gke.0] 133411124} {[k8s.gcr.io/kubernetes-dashboard-amd64@sha256:0ae6b69432e78069c5ce2bcde0fe409c5c4d6f0f4d9cd50a17974fea38898747 k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.1] 121711221} {[k8s.gcr.io/fluentd-gcp-scaler@sha256:457a13df66534b94bab627c4c2dc2df0ee5153a5d0f0afd27502bd46bd8da81d k8s.gcr.io/fluentd-gcp-scaler:0.5] 103488147} {[k8s.gcr.io/prometheus-to-sd@sha256:6c0c742475363d537ff059136e5d5e4ab1f512ee0fd9b7ca42ea48bc309d1662 k8s.gcr.io/prometheus-to-sd:v0.3.1] 88077694} {[gcr.io/gke-release/csi-provisioner@sha256:947ea3f13799a4759d811ff088611e9d20b5c12451ba139537b5f154d6db359c gcr.io/gke-release/csi-provisioner:v1.0.1-gke.0] 80410503} {[k8s.gcr.io/kube-proxy:v1.13.5] 80186087} {[gcr.io/gke-release/csi-node-driver-registrar@sha256:dc4e6c0fa39134e71c26972a7e2d2fb0c371bfc449b434e0fb2532a9410a4139 gcr.io/gke-release/csi-node-driver-registrar:v1.0.1-gke.0] 68977975} 
{[gcr.io/gke-release/csi-attacher@sha256:ffb2694c1228c9da3f5516cced5bf54c35b0bc1debd74890fa10fdbe5977dbea gcr.io/gke-release/csi-attacher:v1.0.1-gke.0] 63581374} {[k8s.gcr.io/cluster-proportional-autoscaler-amd64@sha256:4fd37c5b29a38b02c408c56254bd1a3a76f3e236610bc7a8382500bbf9ecfc76 k8s.gcr.io/cluster-proportional-autoscaler-amd64:1.3.0] 45844959} {[nginx@sha256:a3a0c4126587884f8d3090efca87f5af075d7e7ac8308cffc09a5a082d5f4760 nginx:1.14-alpine] 16032814} {[k8s.gcr.io/metadata-proxy@sha256:5be758058e67b578f7041498e2daca46ccd7426bc602d35ed0f50bd4a3659d50 k8s.gcr.io/metadata-proxy:v0.1.10] 8953717} {[gcr.io/kubernetes-e2e-test-images/mounttest@sha256:c0bd6f0755f42af09a68c9a47fb993136588a76b3200ec305796b60d629d85d2 gcr.io/kubernetes-e2e-test-images/mounttest:1.0] 1563521} {[busybox@sha256:e004c2cc521c95383aebb1fb5893719aa7a8eae2e7a71f316a4410784edb00a9 busybox:1.29] 1154361} {[k8s.gcr.io/pause@sha256:f78411e19d84a252e53bff71a4407a5686c46983a2c2eeed83929b888179acea k8s.gcr.io/pause:3.1] 742472}],VolumesInUse:[],VolumesAttached:[],Config:&NodeConfigStatus{Assigned:nil,Active:nil,LastKnownGood:nil,Error:,},},}
I0215 16:37:49.270] Feb 15 16:37:40.287: INFO: 
I0215 16:37:49.271] Logging kubelet events for node e2e-test-prow-minion-group-kdsn
I0215 16:37:49.271] Feb 15 16:37:40.391: INFO: 
I0215 16:37:49.271] Logging pods the kubelet thinks are on node e2e-test-prow-minion-group-kdsn
I0215 16:37:49.271] Feb 15 16:37:40.492: INFO: metadata-proxy-v0.1-hhmq4 started at 2020-02-15 16:30:25 +0000 UTC (0+2 container statuses recorded)
I0215 16:37:49.271] Feb 15 16:37:40.492: INFO: 	Container metadata-proxy ready: true, restart count 0
... skipping 36 lines ...
I0215 16:37:49.278]     [Testpattern: Dynamic PV (ext3)] volumes
I0215 16:37:49.278]     /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:128
I0215 16:37:49.278]       should be mountable [AfterEach]
I0215 16:37:49.279]       /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/volumes.go:169
I0215 16:37:49.279] 
I0215 16:37:49.279]       Persistent Volume pvc-a14b2dab-5010-11ea-b739-42010a800002 not deleted by dynamic provisioner
I0215 16:37:49.279]       Expected error:
I0215 16:37:49.279]           <*errors.errorString | 0xc0008c2f40>: {
I0215 16:37:49.280]               s: "PersistentVolume pvc-a14b2dab-5010-11ea-b739-42010a800002 still exists within 5m0s",
I0215 16:37:49.280]           }
I0215 16:37:49.280]           PersistentVolume pvc-a14b2dab-5010-11ea-b739-42010a800002 still exists within 5m0s
I0215 16:37:49.280]       not to have occurred
I0215 16:37:49.280] 
... skipping 20 lines ...
I0215 16:39:06.004] 
I0215 16:39:06.004] Specify --start=43560 in the next get-serial-port-output invocation to get only the new output starting from here.
I0215 16:39:06.004] scp: /var/log/cluster-autoscaler.log*: No such file or directory
I0215 16:39:06.004] scp: /var/log/fluentd.log*: No such file or directory
I0215 16:39:06.004] scp: /var/log/kubelet.cov*: No such file or directory
I0215 16:39:06.004] scp: /var/log/startupscript.log*: No such file or directory
I0215 16:39:06.004] ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
I0215 16:39:06.005] Dumping logs from nodes locally to '/workspace/_artifacts'
I0215 16:39:06.005] Detecting nodes in the cluster
I0215 16:39:06.005] Changing logfiles to be world-readable for download
I0215 16:39:06.005] Changing logfiles to be world-readable for download
I0215 16:39:06.005] Changing logfiles to be world-readable for download
I0215 16:39:06.005] Copying 'kube-proxy.log fluentd.log node-problem-detector.log kubelet.cov startupscript.log' from e2e-test-prow-minion-group-4qgb
... skipping 6 lines ...
I0215 16:39:06.006] 
I0215 16:39:06.006] Specify --start=56917 in the next get-serial-port-output invocation to get only the new output starting from here.
I0215 16:39:06.007] scp: /var/log/fluentd.log*: No such file or directory
I0215 16:39:06.007] scp: /var/log/node-problem-detector.log*: No such file or directory
I0215 16:39:06.007] scp: /var/log/kubelet.cov*: No such file or directory
I0215 16:39:06.007] scp: /var/log/startupscript.log*: No such file or directory
I0215 16:39:06.007] ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
I0215 16:39:06.007] scp: /var/log/fluentd.log*: No such file or directory
I0215 16:39:06.007] scp: /var/log/node-problem-detector.log*: No such file or directory
I0215 16:39:06.008] scp: /var/log/kubelet.cov*: No such file or directory
I0215 16:39:06.008] scp: /var/log/startupscript.log*: No such file or directory
I0215 16:39:06.008] ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
I0215 16:39:06.008] scp: /var/log/fluentd.log*: No such file or directory
I0215 16:39:06.008] scp: /var/log/node-problem-detector.log*: No such file or directory
I0215 16:39:06.008] scp: /var/log/kubelet.cov*: No such file or directory
I0215 16:39:06.008] scp: /var/log/startupscript.log*: No such file or directory
I0215 16:39:06.009] ERROR: (gcloud.compute.scp) [/usr/bin/scp] exited with return code [1].
I0215 16:39:06.009] 
I0215 16:39:06.009] 
I0215 16:39:06.009] 
I0215 16:39:06.009] Summarizing 1 Failure:
I0215 16:39:06.009] 
I0215 16:39:06.010] [Fail] [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Feature: gcePD-external] [Testpattern: Dynamic PV (ext3)] volumes [AfterEach] should be mountable 
I0215 16:39:06.010] /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:213
I0215 16:39:06.010] 
I0215 16:39:06.010] Ran 27 of 2161 Specs in 471.059 seconds
I0215 16:39:06.017] FAIL! -- 26 Passed | 1 Failed | 0 Pending | 2134 Skipped 
I0215 16:39:06.018] 
I0215 16:39:06.018] Ginkgo ran 1 suite in 7m56.055816216s
I0215 16:39:06.018] Test Suite Failed
I0215 16:39:06.033] Deleting driver
I0215 16:39:06.038] PKGDIR is /go/src/sigs.k8s.io/gcp-compute-persistent-disk-csi-driver
W0215 16:39:06.138] !!! Error in ./hack/ginkgo-e2e.sh:143
W0215 16:39:06.140]   Error in ./hack/ginkgo-e2e.sh:143. '"${ginkgo}" "${ginkgo_args[@]:+${ginkgo_args[@]}}" "${e2e_test}" -- "${auth_config[@]:+${auth_config[@]}}" --ginkgo.flakeAttempts="${FLAKE_ATTEMPTS}" --host="${KUBE_MASTER_URL}" --provider="${KUBERNETES_PROVIDER}" --gce-project="${PROJECT:-}" --gce-zone="${ZONE:-}" --gce-region="${REGION:-}" --gce-multizone="${MULTIZONE:-false}" --gke-cluster="${CLUSTER_NAME:-}" --kube-master="${KUBE_MASTER:-}" --cluster-tag="${CLUSTER_ID:-}" --cloud-config-file="${CLOUD_CONFIG:-}" --repo-root="${KUBE_ROOT}" --node-instance-group="${NODE_INSTANCE_GROUP:-}" --prefix="${KUBE_GCE_INSTANCE_PREFIX:-e2e}" --network="${KUBE_GCE_NETWORK:-${KUBE_GKE_NETWORK:-e2e}}" --node-tag="${NODE_TAG:-}" --master-tag="${MASTER_TAG:-}" --cluster-monitoring-mode="${KUBE_ENABLE_CLUSTER_MONITORING:-standalone}" --prometheus-monitoring="${KUBE_ENABLE_PROMETHEUS_MONITORING:-false}" --dns-domain="${KUBE_DNS_DOMAIN:-cluster.local}" ${KUBE_CONTAINER_RUNTIME:+"--container-runtime=${KUBE_CONTAINER_RUNTIME}"} ${MASTER_OS_DISTRIBUTION:+"--master-os-distro=${MASTER_OS_DISTRIBUTION}"} ${NODE_OS_DISTRIBUTION:+"--node-os-distro=${NODE_OS_DISTRIBUTION}"} ${NUM_NODES:+"--num-nodes=${NUM_NODES}"} ${E2E_REPORT_DIR:+"--report-dir=${E2E_REPORT_DIR}"} ${E2E_REPORT_PREFIX:+"--report-prefix=${E2E_REPORT_PREFIX}"} "${@:-}"' exited with status 1
W0215 16:39:06.140] Call stack:
W0215 16:39:06.140]   1: ./hack/ginkgo-e2e.sh:143 main(...)
W0215 16:39:06.140] Exiting with status 1
W0215 16:39:06.141] 2020/02/15 16:39:06 process.go:155: Step './hack/ginkgo-e2e.sh --ginkgo.focus=CSI.*gcePD-external --ginkgo.skip=\[Disruptive\]|\[Serial\]|kubelet.*down --report-dir=/workspace/_artifacts' finished in 7m57.155380385s
W0215 16:39:06.141] 2020/02/15 16:39:06 main.go:316: Something went wrong: encountered 1 errors: [error during ./hack/ginkgo-e2e.sh --ginkgo.focus=CSI.*gcePD-external --ginkgo.skip=\[Disruptive\]|\[Serial\]|kubelet.*down --report-dir=/workspace/_artifacts: exit status 1]
W0215 16:39:06.238] I0215 16:39:06.238017   67415 loader.go:359] Config loaded from file /root/.kube/config
W0215 16:39:06.239] I0215 16:39:06.239105   67415 loader.go:359] Config loaded from file /root/.kube/config
W0215 16:39:06.240] I0215 16:39:06.240592   67415 loader.go:359] Config loaded from file /root/.kube/config
W0215 16:39:06.240] I0215 16:39:06.240868   67415 decoder.go:224] decoding stream as YAML
W0215 16:39:06.241] I0215 16:39:06.241822   67415 loader.go:359] Config loaded from file /root/.kube/config
W0215 16:39:06.247] I0215 16:39:06.247479   67415 loader.go:359] Config loaded from file /root/.kube/config
... skipping 266 lines ...
W0215 16:39:19.578] NODE_NAMES=e2e-test-prow-minion-group-4qgb e2e-test-prow-minion-group-8xzw e2e-test-prow-minion-group-kdsn
I0215 16:39:19.679] Bringing down cluster
W0215 16:39:22.614] Deleting Managed Instance Group...
W0215 16:42:09.026] ....................................Deleted [https://www.googleapis.com/compute/v1/projects/k8s-boskos-gce-project-01/zones/us-central1-b/instanceGroupManagers/e2e-test-prow-minion-group].
W0215 16:42:09.027] done.
W0215 16:42:13.034] Deleted [https://www.googleapis.com/compute/v1/projects/k8s-boskos-gce-project-01/global/instanceTemplates/e2e-test-prow-minion-template].
I0215 16:42:17.593] {"message":"Internal Server Error"}Removing etcd replica, name: e2e-test-prow-master, port: 2379, result: 0
I0215 16:42:18.827] {"message":"Internal Server Error"}Removing etcd replica, name: e2e-test-prow-master, port: 4002, result: 0
W0215 16:42:24.048] Updated [https://www.googleapis.com/compute/v1/projects/k8s-boskos-gce-project-01/zones/us-central1-b/instances/e2e-test-prow-master].
W0215 16:44:39.515] Deleted [https://www.googleapis.com/compute/v1/projects/k8s-boskos-gce-project-01/zones/us-central1-b/instances/e2e-test-prow-master].
W0215 16:44:51.763] Deleted [https://www.googleapis.com/compute/v1/projects/k8s-boskos-gce-project-01/global/firewalls/e2e-test-prow-master-https].
W0215 16:44:52.757] Deleted [https://www.googleapis.com/compute/v1/projects/k8s-boskos-gce-project-01/global/firewalls/e2e-test-prow-master-etcd].
W0215 16:44:54.007] Deleted [https://www.googleapis.com/compute/v1/projects/k8s-boskos-gce-project-01/global/firewalls/e2e-test-prow-minion-all].
W0215 16:44:59.939] Deleted [https://www.googleapis.com/compute/v1/projects/k8s-boskos-gce-project-01/regions/us-central1/addresses/e2e-test-prow-master-ip].
... skipping 8 lines ...
I0215 16:46:03.584] Property "clusters.k8s-boskos-gce-project-01_e2e-test-prow" unset.
I0215 16:46:03.738] Property "users.k8s-boskos-gce-project-01_e2e-test-prow" unset.
I0215 16:46:03.882] Property "users.k8s-boskos-gce-project-01_e2e-test-prow-basic-auth" unset.
I0215 16:46:04.026] Property "contexts.k8s-boskos-gce-project-01_e2e-test-prow" unset.
I0215 16:46:04.032] Cleared config for k8s-boskos-gce-project-01_e2e-test-prow from /root/.kube/config
I0215 16:46:04.032] Done
W0215 16:46:04.696] F0215 16:46:04.696193    1183 main.go:68] Failed to run integration test: failed to run tests: failed to run tests on e2e cluster: exit status 1
W0215 16:46:04.700] Traceback (most recent call last):
W0215 16:46:04.700]   File "/workspace/./test-infra/jenkins/../scenarios/execute.py", line 50, in <module>
W0215 16:46:04.700]     main(ARGS.env, ARGS.cmd + ARGS.args)
W0215 16:46:04.700]   File "/workspace/./test-infra/jenkins/../scenarios/execute.py", line 41, in main
W0215 16:46:04.700]     check(*cmd)
W0215 16:46:04.701]   File "/workspace/./test-infra/jenkins/../scenarios/execute.py", line 30, in check
W0215 16:46:04.701]     subprocess.check_call(cmd)
W0215 16:46:04.701]   File "/usr/lib/python2.7/subprocess.py", line 190, in check_call
W0215 16:46:04.701]     raise CalledProcessError(retcode, cmd)
W0215 16:46:04.701] subprocess.CalledProcessError: Command '('test/run-k8s-integration.sh',)' returned non-zero exit status 255
E0215 16:46:04.707] Command failed
I0215 16:46:04.707] process 434 exited with code 1 after 34.2m
E0215 16:46:04.707] FAIL: ci-gcp-compute-persistent-disk-csi-driver-release-0-4-k8s-1-13-5-integration
I0215 16:46:04.707] Call:  gcloud auth activate-service-account --key-file=/etc/service-account/service-account.json
W0215 16:46:05.261] Activated service account credentials for: [pr-kubekins@kubernetes-jenkins-pull.iam.gserviceaccount.com]
I0215 16:46:05.323] process 68367 exited with code 0 after 0.0m
I0215 16:46:05.323] Call:  gcloud config get-value account
I0215 16:46:05.766] process 68380 exited with code 0 after 0.0m
I0215 16:46:05.766] Will upload results to gs://kubernetes-jenkins/logs using pr-kubekins@kubernetes-jenkins-pull.iam.gserviceaccount.com
I0215 16:46:05.766] Upload result and artifacts...
I0215 16:46:05.766] Gubernator results at https://gubernator.k8s.io/build/kubernetes-jenkins/logs/ci-gcp-compute-persistent-disk-csi-driver-release-0-4-k8s-1-13-5-integration/1228713438126018560
I0215 16:46:05.767] Call:  gsutil ls gs://kubernetes-jenkins/logs/ci-gcp-compute-persistent-disk-csi-driver-release-0-4-k8s-1-13-5-integration/1228713438126018560/artifacts
W0215 16:46:06.862] CommandException: One or more URLs matched no objects.
E0215 16:46:07.005] Command failed
I0215 16:46:07.005] process 68393 exited with code 1 after 0.0m
W0215 16:46:07.005] Remote dir gs://kubernetes-jenkins/logs/ci-gcp-compute-persistent-disk-csi-driver-release-0-4-k8s-1-13-5-integration/1228713438126018560/artifacts not exist yet
I0215 16:46:07.006] Call:  gsutil -m -q -o GSUtil:use_magicfile=True cp -r -c -z log,txt,xml /workspace/_artifacts gs://kubernetes-jenkins/logs/ci-gcp-compute-persistent-disk-csi-driver-release-0-4-k8s-1-13-5-integration/1228713438126018560/artifacts
I0215 16:46:10.979] process 68538 exited with code 0 after 0.1m
W0215 16:46:10.980] metadata path /workspace/_artifacts/metadata.json does not exist
W0215 16:46:10.980] metadata not found or invalid, init with empty metadata
... skipping 15 lines ...