Result: FAILURE
Tests: 1 failed / 39 succeeded
Started: 2020-03-29 13:22
Elapsed: 2h0m
Revision: master
Resultstore: https://source.cloud.google.com/results/invocations/702c7e5d-e0fc-4a5a-9e04-79f59969e7b5/targets/test

Test Failures


Kubernetes e2e suite [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] 6.65s

go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-api\-machinery\]\sGarbage\scollector\sshould\skeep\sthe\src\saround\suntil\sall\sits\spods\sare\sdeleted\sif\sthe\sdeleteOptions\ssays\sso\s\[Conformance\]$'
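The focus regex in the command above is just the full test name with regex metacharacters escaped and spaces encoded as `\s`. A minimal sketch of that escaping (the helper name `ginkgo_focus` is illustrative, not part of the e2e tooling):

```python
import re

def ginkgo_focus(name: str) -> str:
    """Build a --ginkgo.focus style regex from a full test name.

    Escapes the metacharacters that appear in e2e test names
    ([, ], -, \\), encodes spaces as \\s, and anchors the end.
    """
    escaped = re.sub(r"([\[\]\-\\])", r"\\\1", name)
    return escaped.replace(" ", r"\s") + "$"

print(ginkgo_focus("Garbage collector [Conformance]"))
# → Garbage\scollector\s\[Conformance\]$
```

Applied to the full failed-test name, this reproduces the `--ginkgo.focus` argument shown above.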
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Mar 29 15:22:09.901: Couldn't delete ns: "gc-2471": Operation cannot be fulfilled on namespaces "gc-2471": The system is ensuring all content is removed from this namespace.  Upon completion, this namespace will automatically be purged by the system. (&errors.StatusError{ErrStatus:v1.Status{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ListMeta:v1.ListMeta{SelfLink:"", ResourceVersion:"", Continue:"", RemainingItemCount:(*int64)(nil)}, Status:"Failure", Message:"Operation cannot be fulfilled on namespaces \"gc-2471\": The system is ensuring all content is removed from this namespace.  Upon completion, this namespace will automatically be purged by the system.", Reason:"Conflict", Details:(*v1.StatusDetails)(0xc00306a720), Code:409}})
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:335
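The 409 Conflict above means the namespace was still being terminated when cleanup tried to delete it again. Transient conflicts like this are typically handled by retrying with a backoff until a deadline passes; a self-contained sketch of that pattern (the `Conflict` class and the fake `delete_namespace` are stand-ins, not the real client API):

```python
import time

class Conflict(Exception):
    """Stands in for a 409 Conflict response from the API server."""

def retry_on_conflict(op, timeout=60.0, interval=0.01):
    """Retry op() until it stops raising Conflict or the deadline passes."""
    deadline = time.monotonic() + timeout
    while True:
        try:
            return op()
        except Conflict:
            if time.monotonic() >= deadline:
                raise
            time.sleep(interval)

# Fake delete: conflicts twice while "terminating", then succeeds.
attempts = {"n": 0}
def delete_namespace():
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise Conflict("namespace is still terminating")
    return "deleted"

print(retry_on_conflict(delete_namespace))  # → deleted
```

The failure here suggests the framework's cleanup deadline expired before namespace finalization finished, so the last Conflict surfaced as the test error.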
				
stdout/stderr from junit_08.xml



Passed tests: 39 (collapsed)

Skipped tests: 930 (collapsed)

Error lines from build-log.txt

... skipping 344 lines ...
Trying to find master named 'e2e-56589a6d6a-b49e0-master'
Looking for address 'e2e-56589a6d6a-b49e0-master-ip'
Using master: e2e-56589a6d6a-b49e0-master (external IP: 34.82.124.78)
Waiting up to 300 seconds for cluster initialization.

  This will continually check to see if the API for kubernetes is reachable.
  This may time out if there was some uncaught error during start up.

..........Kubernetes cluster created.
Cluster "k8s-jkns-gci-gce-1-3_e2e-56589a6d6a-b49e0" set.
User "k8s-jkns-gci-gce-1-3_e2e-56589a6d6a-b49e0" set.
Context "k8s-jkns-gci-gce-1-3_e2e-56589a6d6a-b49e0" created.
Switched to context "k8s-jkns-gci-gce-1-3_e2e-56589a6d6a-b49e0".
... skipping 40 lines ...
e2e-56589a6d6a-b49e0-minion-group-gpdt         Ready                      <none>   5m7s   v1.15.12-beta.0.9+8de4013f5815f7
e2e-56589a6d6a-b49e0-minion-group-r0j4         Ready                      <none>   5m3s   v1.15.12-beta.0.9+8de4013f5815f7
e2e-56589a6d6a-b49e0-windows-node-group-159w   Ready                      <none>   54s    v1.15.12-beta.0.9+8de4013f5815f7
e2e-56589a6d6a-b49e0-windows-node-group-cg7s   Ready                      <none>   11s    v1.15.12-beta.0.9+8de4013f5815f7
e2e-56589a6d6a-b49e0-windows-node-group-h3pd   Ready                      <none>   73s    v1.15.12-beta.0.9+8de4013f5815f7
Validate output:
NAME                 STATUS    MESSAGE             ERROR
scheduler            Healthy   ok                  
controller-manager   Healthy   ok                  
etcd-1               Healthy   {"health":"true"}   
etcd-0               Healthy   {"health":"true"}   
Cluster validation succeeded
Done, listing cluster services:
... skipping 297 lines ...
[sig-storage] In-tree Volumes
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  [Driver: hostPathSymlink]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:66
    [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:91
      should fail if subpath directory is outside the volume [Slow] [BeforeEach]
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:216

      Driver hostPathSymlink doesn't support DynamicPV -- skipping

      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:142
------------------------------
... skipping 405 lines ...
[sig-storage] In-tree Volumes
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  [Driver: nfs]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:66
    [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:91
      should fail if subpath with backstepping is outside the volume [Slow] [BeforeEach]
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:254

      Driver nfs doesn't support ntfs -- skipping

      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:147
------------------------------
... skipping 399 lines ...
Mar 29 14:46:35.512: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Mar 29 14:46:35.512: INFO: Running '/home/prow/go/src/k8s.io/windows-testing/kubernetes/platforms/linux/amd64/kubectl --server=https://34.82.124.78 --kubeconfig=/workspace/.kube/config describe pod redis-master-jcf4v --namespace=kubectl-982'
Mar 29 14:46:35.811: INFO: stderr: ""
Mar 29 14:46:35.812: INFO: stdout: "Name:           redis-master-jcf4v\nNamespace:      kubectl-982\nPriority:       0\nNode:           e2e-56589a6d6a-b49e0-windows-node-group-cg7s/10.40.0.3\nStart Time:     Sun, 29 Mar 2020 14:46:16 +0000\nLabels:         app=redis\n                role=master\nAnnotations:    <none>\nStatus:         Running\nIP:             10.64.1.5\nControlled By:  ReplicationController/redis-master\nContainers:\n  redis-master:\n    Container ID:   docker://6174446209e96f0373e910a2cceacdfc30b2630b983cf17ae2977779274198f2\n    Image:          e2eteam/redis:1.0\n    Image ID:       docker-pullable://e2eteam/redis@sha256:8c9fd0656356dcad4ed60c16931ea928cc6dc97a4a100cdf7a26f7446fa5c9f1\n    Port:           6379/TCP\n    Host Port:      0/TCP\n    State:          Running\n      Started:      Sun, 29 Mar 2020 14:46:29 +0000\n    Ready:          True\n    Restart Count:  0\n    Environment:    <none>\n    Mounts:\n      /var/run/secrets/kubernetes.io/serviceaccount from default-token-zxspl (ro)\nConditions:\n  Type              Status\n  Initialized       True \n  Ready             True \n  ContainersReady   True \n  PodScheduled      True \nVolumes:\n  default-token-zxspl:\n    Type:        Secret (a volume populated by a Secret)\n    SecretName:  default-token-zxspl\n    Optional:    false\nQoS Class:       BestEffort\nNode-Selectors:  <none>\nTolerations:     node.kubernetes.io/not-ready:NoExecute for 300s\n                 node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n  Type    Reason     Age   From                                                   Message\n  ----    ------     ----  ----                                                   -------\n  Normal  Scheduled  19s   default-scheduler                                      Successfully assigned kubectl-982/redis-master-jcf4v to e2e-56589a6d6a-b49e0-windows-node-group-cg7s\n  Normal  Pulled     11s   kubelet, e2e-56589a6d6a-b49e0-windows-node-group-cg7s  Container image 
\"e2eteam/redis:1.0\" already present on machine\n  Normal  Created    11s   kubelet, e2e-56589a6d6a-b49e0-windows-node-group-cg7s  Created container redis-master\n  Normal  Started    6s    kubelet, e2e-56589a6d6a-b49e0-windows-node-group-cg7s  Started container redis-master\n"
Mar 29 14:46:35.812: INFO: Running '/home/prow/go/src/k8s.io/windows-testing/kubernetes/platforms/linux/amd64/kubectl --server=https://34.82.124.78 --kubeconfig=/workspace/.kube/config describe rc redis-master --namespace=kubectl-982'
Mar 29 14:46:36.168: INFO: stderr: ""
Mar 29 14:46:36.168: INFO: stdout: "Name:         redis-master\nNamespace:    kubectl-982\nSelector:     app=redis,role=master\nLabels:       app=redis\n              role=master\nAnnotations:  <none>\nReplicas:     1 current / 1 desired\nPods Status:  1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n  Labels:  app=redis\n           role=master\n  Containers:\n   redis-master:\n    Image:        e2eteam/redis:1.0\n    Port:         6379/TCP\n    Host Port:    0/TCP\n    Environment:  <none>\n    Mounts:       <none>\n  Volumes:        <none>\nEvents:\n  Type    Reason            Age   From                    Message\n  ----    ------            ----  ----                    -------\n  Normal  SuccessfulCreate  20s   replication-controller  Created pod: redis-master-jcf4v\n"
Mar 29 14:46:36.168: INFO: Running '/home/prow/go/src/k8s.io/windows-testing/kubernetes/platforms/linux/amd64/kubectl --server=https://34.82.124.78 --kubeconfig=/workspace/.kube/config describe service redis-master --namespace=kubectl-982'
Mar 29 14:46:36.507: INFO: stderr: ""
Mar 29 14:46:36.507: INFO: stdout: "Name:              redis-master\nNamespace:         kubectl-982\nLabels:            app=redis\n                   role=master\nAnnotations:       <none>\nSelector:          app=redis,role=master\nType:              ClusterIP\nIP:                10.0.119.128\nPort:              <unset>  6379/TCP\nTargetPort:        redis-server/TCP\nEndpoints:         10.64.1.5:6379\nSession Affinity:  None\nEvents:            <none>\n"
Mar 29 14:46:36.581: INFO: Running '/home/prow/go/src/k8s.io/windows-testing/kubernetes/platforms/linux/amd64/kubectl --server=https://34.82.124.78 --kubeconfig=/workspace/.kube/config describe node e2e-56589a6d6a-b49e0-master'
Mar 29 14:46:36.984: INFO: stderr: ""
Mar 29 14:46:36.984: INFO: stdout: "Name:               e2e-56589a6d6a-b49e0-master\nRoles:              <none>\nLabels:             beta.kubernetes.io/arch=amd64\n                    beta.kubernetes.io/instance-type=n1-standard-1\n                    beta.kubernetes.io/metadata-proxy-ready=true\n                    beta.kubernetes.io/os=linux\n                    cloud.google.com/metadata-proxy-ready=true\n                    failure-domain.beta.kubernetes.io/region=us-west1\n                    failure-domain.beta.kubernetes.io/zone=us-west1-b\n                    kubernetes.io/arch=amd64\n                    kubernetes.io/hostname=e2e-56589a6d6a-b49e0-master\n                    kubernetes.io/os=linux\nAnnotations:        node.alpha.kubernetes.io/ttl: 0\n                    volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp:  Sun, 29 Mar 2020 13:27:34 +0000\nTaints:             node-role.kubernetes.io/master:NoSchedule\n                    node-under-test=false:NoSchedule\n                    node.kubernetes.io/unschedulable:NoSchedule\nUnschedulable:      true\nConditions:\n  Type                 Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message\n  ----                 ------  -----------------                 ------------------                ------                       -------\n  NetworkUnavailable   False   Sun, 29 Mar 2020 13:27:36 +0000   Sun, 29 Mar 2020 13:27:36 +0000   RouteCreated                 NodeController create implicit route\n  MemoryPressure       False   Sun, 29 Mar 2020 14:46:30 +0000   Sun, 29 Mar 2020 13:27:34 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available\n  DiskPressure         False   Sun, 29 Mar 2020 14:46:30 +0000   Sun, 29 Mar 2020 13:27:34 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure\n  PIDPressure          False   Sun, 29 Mar 2020 14:46:30 +0000   Sun, 29 Mar 2020 13:27:34 +0000   
KubeletHasSufficientPID      kubelet has sufficient PID available\n  Ready                True    Sun, 29 Mar 2020 14:46:30 +0000   Sun, 29 Mar 2020 13:27:35 +0000   KubeletReady                 kubelet is posting ready status. AppArmor enabled\nAddresses:\n  InternalIP:   10.40.0.2\n  ExternalIP:   34.82.124.78\n  InternalDNS:  e2e-56589a6d6a-b49e0-master.c.k8s-jkns-gci-gce-1-3.internal\n  Hostname:     e2e-56589a6d6a-b49e0-master.c.k8s-jkns-gci-gce-1-3.internal\nCapacity:\n attachable-volumes-gce-pd:  127\n cpu:                        1\n ephemeral-storage:          16293736Ki\n hugepages-2Mi:              0\n memory:                     3787516Ki\n pods:                       110\nAllocatable:\n attachable-volumes-gce-pd:  127\n cpu:                        1\n ephemeral-storage:          15016307073\n hugepages-2Mi:              0\n memory:                     3531516Ki\n pods:                       110\nSystem Info:\n Machine ID:                 579325f333e650a5a1ff1e77ca2b682f\n System UUID:                579325F3-33E6-50A5-A1FF-1E77CA2B682F\n Boot ID:                    92101c33-fc6e-4c8c-914c-9ded50c759c8\n Kernel Version:             4.14.94+\n OS Image:                   Container-Optimized OS from Google\n Operating System:           linux\n Architecture:               amd64\n Container Runtime Version:  docker://18.9.3\n Kubelet Version:            v1.15.12-beta.0.9+8de4013f5815f7\n Kube-Proxy Version:         v1.15.12-beta.0.9+8de4013f5815f7\nPodCIDR:                     10.64.0.0/24\nProviderID:                  gce://k8s-jkns-gci-gce-1-3/us-west1-b/e2e-56589a6d6a-b49e0-master\nNon-terminated Pods:         (10 in total)\n  Namespace                  Name                                                   CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE\n  ---------                  ----                                                   ------------  ----------  ---------------  -------------  ---\n  kube-system                
etcd-empty-dir-cleanup-e2e-56589a6d6a-b49e0-master     0 (0%)        0 (0%)      0 (0%)           0 (0%)         78m\n  kube-system                etcd-server-e2e-56589a6d6a-b49e0-master                200m (20%)    0 (0%)      0 (0%)           0 (0%)         78m\n  kube-system                etcd-server-events-e2e-56589a6d6a-b49e0-master         100m (10%)    0 (0%)      0 (0%)           0 (0%)         78m\n  kube-system                fluentd-gcp-v3.2.0-jd7nl                               100m (10%)    1 (100%)    200Mi (5%)       500Mi (14%)    78m\n  kube-system                kube-addon-manager-e2e-56589a6d6a-b49e0-master         5m (0%)       0 (0%)      50Mi (1%)        0 (0%)         77m\n  kube-system                kube-apiserver-e2e-56589a6d6a-b49e0-master             250m (25%)    0 (0%)      0 (0%)           0 (0%)         78m\n  kube-system                kube-controller-manager-e2e-56589a6d6a-b49e0-master    200m (20%)    0 (0%)      0 (0%)           0 (0%)         78m\n  kube-system                kube-scheduler-e2e-56589a6d6a-b49e0-master             75m (7%)      0 (0%)      0 (0%)           0 (0%)         78m\n  kube-system                l7-lb-controller-v1.2.3-e2e-56589a6d6a-b49e0-master    10m (1%)      0 (0%)      50Mi (1%)        0 (0%)         78m\n  kube-system                metadata-proxy-v0.1-7vkdh                              32m (3%)      32m (3%)    45Mi (1%)        45Mi (1%)      79m\nAllocated resources:\n  (Total limits may be over 100 percent, i.e., overcommitted.)\n  Resource                   Requests     Limits\n  --------                   --------     ------\n  cpu                        972m (97%)   1032m (103%)\n  memory                     345Mi (10%)  545Mi (15%)\n  ephemeral-storage          0 (0%)       0 (0%)\n  attachable-volumes-gce-pd  0            0\nEvents:                      <none>\n"
... skipping 624 lines ...
Mar 29 14:49:55.902: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: checking connectivity of linux-container in pod-4c299433-6291-4aa4-b883-a0738ed98804
Mar 29 14:49:57.283: INFO: ExecWithOptions {Command:[/bin/sh -c nc -vz 8.8.8.8 53 -w 10] Namespace:hybrid-network-1112 PodName:pod-4c299433-6291-4aa4-b883-a0738ed98804 ContainerName:linux-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Mar 29 14:49:57.283: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: checking connectivity to www.google.com from Windows
STEP: checking connectivity of windows-container in pod-49d0441d-8967-4000-9e59-3d7eaea17855
Mar 29 14:49:57.667: INFO: ExecWithOptions {Command:[cmd /c curl.exe www.google.com --connect-timeout 10 --fail] Namespace:hybrid-network-1112 PodName:pod-49d0441d-8967-4000-9e59-3d7eaea17855 ContainerName:windows-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Mar 29 14:49:57.667: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: checking connectivity of windows-container in pod-49d0441d-8967-4000-9e59-3d7eaea17855
Mar 29 14:50:05.418: INFO: ExecWithOptions {Command:[cmd /c curl.exe www.google.com --connect-timeout 10 --fail] Namespace:hybrid-network-1112 PodName:pod-49d0441d-8967-4000-9e59-3d7eaea17855 ContainerName:windows-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Mar 29 14:50:05.418: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: checking connectivity of windows-container in pod-49d0441d-8967-4000-9e59-3d7eaea17855
Mar 29 14:50:07.532: INFO: ExecWithOptions {Command:[cmd /c curl.exe www.google.com --connect-timeout 10 --fail] Namespace:hybrid-network-1112 PodName:pod-49d0441d-8967-4000-9e59-3d7eaea17855 ContainerName:windows-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Mar 29 14:50:07.532: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: checking connectivity from Linux to Windows
STEP: checking connectivity of linux-container in pod-4c299433-6291-4aa4-b883-a0738ed98804
Mar 29 14:50:13.045: INFO: ExecWithOptions {Command:[/bin/sh -c nc -vz 10.64.1.24 80 -w 10] Namespace:hybrid-network-1112 PodName:pod-4c299433-6291-4aa4-b883-a0738ed98804 ContainerName:linux-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Mar 29 14:50:13.045: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: checking connectivity of linux-container in pod-4c299433-6291-4aa4-b883-a0738ed98804
... skipping 16 lines ...
Mar 29 14:50:21.476: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: checking connectivity of linux-container in pod-4c299433-6291-4aa4-b883-a0738ed98804
Mar 29 14:50:22.936: INFO: ExecWithOptions {Command:[/bin/sh -c nc -vz 10.64.1.24 80 -w 10] Namespace:hybrid-network-1112 PodName:pod-4c299433-6291-4aa4-b883-a0738ed98804 ContainerName:linux-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Mar 29 14:50:22.936: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: checking connectivity from Windows to Linux
STEP: checking connectivity of windows-container in pod-49d0441d-8967-4000-9e59-3d7eaea17855
Mar 29 14:50:23.312: INFO: ExecWithOptions {Command:[cmd /c curl.exe 10.64.5.9 --connect-timeout 10 --fail] Namespace:hybrid-network-1112 PodName:pod-49d0441d-8967-4000-9e59-3d7eaea17855 ContainerName:windows-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Mar 29 14:50:23.312: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: checking connectivity of windows-container in pod-49d0441d-8967-4000-9e59-3d7eaea17855
Mar 29 14:50:25.231: INFO: ExecWithOptions {Command:[cmd /c curl.exe 10.64.5.9 --connect-timeout 10 --fail] Namespace:hybrid-network-1112 PodName:pod-49d0441d-8967-4000-9e59-3d7eaea17855 ContainerName:windows-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Mar 29 14:50:25.231: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: checking connectivity of windows-container in pod-49d0441d-8967-4000-9e59-3d7eaea17855
Mar 29 14:50:26.598: INFO: ExecWithOptions {Command:[cmd /c curl.exe 10.64.5.9 --connect-timeout 10 --fail] Namespace:hybrid-network-1112 PodName:pod-49d0441d-8967-4000-9e59-3d7eaea17855 ContainerName:windows-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Mar 29 14:50:26.598: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: checking connectivity of windows-container in pod-49d0441d-8967-4000-9e59-3d7eaea17855
Mar 29 14:50:28.345: INFO: ExecWithOptions {Command:[cmd /c curl.exe 10.64.5.9 --connect-timeout 10 --fail] Namespace:hybrid-network-1112 PodName:pod-49d0441d-8967-4000-9e59-3d7eaea17855 ContainerName:windows-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Mar 29 14:50:28.345: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: checking connectivity of windows-container in pod-49d0441d-8967-4000-9e59-3d7eaea17855
Mar 29 14:50:30.102: INFO: ExecWithOptions {Command:[cmd /c curl.exe 10.64.5.9 --connect-timeout 10 --fail] Namespace:hybrid-network-1112 PodName:pod-49d0441d-8967-4000-9e59-3d7eaea17855 ContainerName:windows-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Mar 29 14:50:30.102: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: checking connectivity of windows-container in pod-49d0441d-8967-4000-9e59-3d7eaea17855
Mar 29 14:50:31.934: INFO: ExecWithOptions {Command:[cmd /c curl.exe 10.64.5.9 --connect-timeout 10 --fail] Namespace:hybrid-network-1112 PodName:pod-49d0441d-8967-4000-9e59-3d7eaea17855 ContainerName:windows-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Mar 29 14:50:31.934: INFO: >>> kubeConfig: /workspace/.kube/config
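Connectivity probes like the `nc -vz` calls above reduce to a timed TCP connect. A rough Python equivalent (illustrative only; the demo connects to a local listener rather than a pod IP):

```python
import socket

def can_connect(host, port, timeout=10.0):
    """Roughly what `nc -vz <host> <port> -w <timeout>` checks:
    can a TCP connection be opened within the timeout?"""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Demo against a local listener instead of a pod IP.
server = socket.socket()
server.bind(("127.0.0.1", 0))
server.listen(1)
port = server.getsockname()[1]
print(can_connect("127.0.0.1", port, timeout=1.0))  # → True
server.close()
```

The log shows the test repeating such probes until one succeeds or the suite's retry budget runs out.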
[AfterEach] [sig-windows] Hybrid cluster network
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Mar 29 14:50:33.313: INFO: Waiting up to 3m0s for all (but 3) nodes to be ready
STEP: Destroying namespace "hybrid-network-1112" for this suite.
Mar 29 14:51:17.466: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
... skipping 478 lines ...
[sig-storage] In-tree Volumes
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  [Driver: local][LocalVolumeType: tmpfs]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:66
    [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:91
      should fail if subpath directory is outside the volume [Slow] [BeforeEach]
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:216

      Driver local doesn't support DynamicPV -- skipping

      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:142
------------------------------
... skipping 11 lines ...
[sig-storage] In-tree Volumes
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  [Driver: emptydir]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:66
    [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:91
      should fail if subpath with backstepping is outside the volume [Slow] [BeforeEach]
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:254

      Driver emptydir doesn't support DynamicPV -- skipping

      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:142
------------------------------
... skipping 113 lines ...
[sig-storage] In-tree Volumes
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  [Driver: gluster]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:66
    [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:91
      should fail if subpath with backstepping is outside the volume [Slow] [BeforeEach]
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:254

      Driver gluster doesn't support DynamicPV -- skipping

      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:142
------------------------------
... skipping 862 lines ...
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Mar 29 14:46:43.349: INFO: Creating deployment "nginx-deployment"
Mar 29 14:46:43.391: INFO: Waiting for observed generation 1
Mar 29 14:46:45.471: INFO: Waiting for all required pods to come up
Mar 29 14:46:45.507: INFO: Pod name nginx: Found 10 pods out of 10
STEP: ensuring each pod is running
Mar 29 14:51:45.624: INFO: Unexpected error occurred: failed to wait for pods running: [timed out waiting for the condition timed out waiting for the condition]
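A "timed out waiting for the condition" error like this comes from a poll-until-deadline loop over pod phases. A self-contained sketch of that shape (the stubbed `pod_phases` replaces the real API query):

```python
import time

def wait_for_pods_running(pod_phases, count, timeout=300.0, interval=0.01):
    """Poll pod_phases() until `count` pods report Running, or time out."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        phases = pod_phases()
        if sum(1 for p in phases if p == "Running") >= count:
            return True
        time.sleep(interval)
    raise TimeoutError("timed out waiting for the condition")

# Stub: two of ten pods stay Pending for the first polls, then run.
calls = {"n": 0}
def pod_phases():
    calls["n"] += 1
    return ["Running"] * 10 if calls["n"] > 2 else ["Running"] * 8 + ["Pending"] * 2

print(wait_for_pods_running(pod_phases, 10))  # → True
```

In this run the two pods stuck behind the Windows-node sandbox timeouts (see the FailedCreatePodSandBox events below) never went Running inside the 5-minute window, so the wait raised.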
[AfterEach] [sig-apps] Deployment
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66
Mar 29 14:51:45.665: INFO: Deployment "nginx-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment,GenerateName:,Namespace:deployment-6794,SelfLink:/apis/apps/v1/namespaces/deployment-6794/deployments/nginx-deployment,UID:1c644017-e8ca-4d2b-9075-a949c45123b3,ResourceVersion:12403,Generation:1,CreationTimestamp:2020-03-29 14:46:43 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{deployment.kubernetes.io/revision: 1,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*10,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx e2eteam/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:10,UpdatedReplicas:10,AvailableReplicas:8,UnavailableReplicas:2,Conditions:[{Available True 2020-03-29 14:48:32 +0000 UTC 2020-03-29 14:48:32 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-03-29 14:48:32 +0000 UTC 2020-03-29 14:46:43 +0000 UTC ReplicaSetUpdated ReplicaSet "nginx-deployment-68b476495c" is progressing.}],ReadyReplicas:8,CollisionCount:nil,},}

Mar 29 14:51:45.702: INFO: New ReplicaSet "nginx-deployment-68b476495c" of Deployment "nginx-deployment":
... skipping 64 lines ...
Mar 29 14:51:45.786: INFO: At 2020-03-29 14:47:21 +0000 UTC - event for nginx-deployment-68b476495c-rcjrg: {kubelet e2e-56589a6d6a-b49e0-windows-node-group-cg7s} Started: Started container nginx
Mar 29 14:51:45.786: INFO: At 2020-03-29 14:47:21 +0000 UTC - event for nginx-deployment-68b476495c-t5ll2: {kubelet e2e-56589a6d6a-b49e0-windows-node-group-cg7s} Started: Started container nginx
Mar 29 14:51:45.786: INFO: At 2020-03-29 14:47:22 +0000 UTC - event for nginx-deployment-68b476495c-7fm8x: {kubelet e2e-56589a6d6a-b49e0-windows-node-group-cg7s} Started: Started container nginx
Mar 29 14:51:45.786: INFO: At 2020-03-29 14:47:23 +0000 UTC - event for nginx-deployment-68b476495c-5zvts: {kubelet e2e-56589a6d6a-b49e0-windows-node-group-cg7s} Started: Started container nginx
Mar 29 14:51:45.786: INFO: At 2020-03-29 14:47:24 +0000 UTC - event for nginx-deployment-68b476495c-sl2bj: {kubelet e2e-56589a6d6a-b49e0-windows-node-group-cg7s} Started: Started container nginx
Mar 29 14:51:45.786: INFO: At 2020-03-29 14:47:25 +0000 UTC - event for nginx-deployment-68b476495c-fr5jt: {kubelet e2e-56589a6d6a-b49e0-windows-node-group-cg7s} Started: Started container nginx
Mar 29 14:51:45.786: INFO: At 2020-03-29 14:48:59 +0000 UTC - event for nginx-deployment-68b476495c-fkvnn: {kubelet e2e-56589a6d6a-b49e0-windows-node-group-159w} FailedCreatePodSandBox: Failed create pod sandbox: rpc error: code = Unknown desc = failed to start sandbox container for pod "nginx-deployment-68b476495c-fkvnn": operation timeout: context deadline exceeded
Mar 29 14:51:45.786: INFO: At 2020-03-29 14:49:03 +0000 UTC - event for nginx-deployment-68b476495c-2wvct: {kubelet e2e-56589a6d6a-b49e0-windows-node-group-h3pd} FailedCreatePodSandBox: Failed create pod sandbox: rpc error: code = Unknown desc = failed to start sandbox container for pod "nginx-deployment-68b476495c-2wvct": operation timeout: context deadline exceeded
Mar 29 14:51:45.786: INFO: At 2020-03-29 14:49:09 +0000 UTC - event for nginx-deployment-68b476495c-2wvct: {kubelet e2e-56589a6d6a-b49e0-windows-node-group-h3pd} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Mar 29 14:51:45.786: INFO: At 2020-03-29 14:50:35 +0000 UTC - event for nginx-deployment-68b476495c-2wvct: {kubelet e2e-56589a6d6a-b49e0-windows-node-group-h3pd} Pulled: Container image "e2eteam/nginx:1.14-alpine" already present on machine
Mar 29 14:51:45.786: INFO: At 2020-03-29 14:50:40 +0000 UTC - event for nginx-deployment-68b476495c-2wvct: {kubelet e2e-56589a6d6a-b49e0-windows-node-group-h3pd} Created: Created container nginx
Mar 29 14:51:45.786: INFO: At 2020-03-29 14:51:03 +0000 UTC - event for nginx-deployment-68b476495c-fkvnn: {kubelet e2e-56589a6d6a-b49e0-windows-node-group-159w} SandboxChanged: Pod sandbox changed, it will be killed and re-created.
Mar 29 14:51:45.786: INFO: At 2020-03-29 14:51:43 +0000 UTC - event for nginx-deployment-68b476495c-2wvct: {kubelet e2e-56589a6d6a-b49e0-windows-node-group-h3pd} Started: Started container nginx
Mar 29 14:51:45.841: INFO: POD                                NODE                                          PHASE    GRACE  CONDITIONS
... skipping 7 lines ...
Mar 29 14:51:45.841: INFO: nginx-deployment-68b476495c-rcjrg  e2e-56589a6d6a-b49e0-windows-node-group-cg7s  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-29 14:46:43 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-03-29 14:47:30 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-03-29 14:47:30 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-29 14:46:43 +0000 UTC  }]
Mar 29 14:51:45.841: INFO: nginx-deployment-68b476495c-sl2bj  e2e-56589a6d6a-b49e0-windows-node-group-cg7s  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-29 14:46:43 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-03-29 14:47:34 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-03-29 14:47:34 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-29 14:46:43 +0000 UTC  }]
Mar 29 14:51:45.841: INFO: nginx-deployment-68b476495c-t5ll2  e2e-56589a6d6a-b49e0-windows-node-group-cg7s  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-29 14:46:43 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-03-29 14:48:32 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-03-29 14:48:32 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-29 14:46:43 +0000 UTC  }]
Mar 29 14:51:45.841: INFO: 
Mar 29 14:51:45.929: INFO: 
Logging node info for node e2e-56589a6d6a-b49e0-master
Mar 29 14:51:45.966: INFO: Node Info: &Node{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-56589a6d6a-b49e0-master,GenerateName:,Namespace:,SelfLink:/api/v1/nodes/e2e-56589a6d6a-b49e0-master,UID:d5ab5bb0-a768-4e53-9eb4-bf2e89b3d053,ResourceVersion:13613,Generation:0,CreationTimestamp:2020-03-29 13:27:34 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{beta.kubernetes.io/arch: amd64,beta.kubernetes.io/instance-type: n1-standard-1,beta.kubernetes.io/metadata-proxy-ready: true,beta.kubernetes.io/os: linux,cloud.google.com/metadata-proxy-ready: true,failure-domain.beta.kubernetes.io/region: us-west1,failure-domain.beta.kubernetes.io/zone: us-west1-b,kubernetes.io/arch: amd64,kubernetes.io/hostname: e2e-56589a6d6a-b49e0-master,kubernetes.io/os: linux,},Annotations:map[string]string{node.alpha.kubernetes.io/ttl: 0,volumes.kubernetes.io/controller-managed-attach-detach: true,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:NodeSpec{PodCIDR:10.64.0.0/24,DoNotUse_ExternalID:,ProviderID:gce://k8s-jkns-gci-gce-1-3/us-west1-b/e2e-56589a6d6a-b49e0-master,Unschedulable:true,Taints:[{node-under-test false NoSchedule <nil>} {node-role.kubernetes.io/master  NoSchedule <nil>} {node.kubernetes.io/unschedulable  NoSchedule <nil>}],ConfigSource:nil,},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{16684785664 0} {<nil>}  BinarySI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3878416384 0} {<nil>} 3787516Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{15016307073 0} {<nil>} 15016307073 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3616272384 0} {<nil>} 3531516Ki BinarySI},pods: {{110 0} {<nil>} 110 
DecimalSI},},Phase:,Conditions:[{NetworkUnavailable False 2020-03-29 13:27:36 +0000 UTC 2020-03-29 13:27:36 +0000 UTC RouteCreated NodeController create implicit route} {MemoryPressure False 2020-03-29 14:51:31 +0000 UTC 2020-03-29 13:27:34 +0000 UTC KubeletHasSufficientMemory kubelet has sufficient memory available} {DiskPressure False 2020-03-29 14:51:31 +0000 UTC 2020-03-29 13:27:34 +0000 UTC KubeletHasNoDiskPressure kubelet has no disk pressure} {PIDPressure False 2020-03-29 14:51:31 +0000 UTC 2020-03-29 13:27:34 +0000 UTC KubeletHasSufficientPID kubelet has sufficient PID available} {Ready True 2020-03-29 14:51:31 +0000 UTC 2020-03-29 13:27:35 +0000 UTC KubeletReady kubelet is posting ready status. AppArmor enabled}],Addresses:[{InternalIP 10.40.0.2} {ExternalIP 34.82.124.78} {InternalDNS e2e-56589a6d6a-b49e0-master.c.k8s-jkns-gci-gce-1-3.internal} {Hostname e2e-56589a6d6a-b49e0-master.c.k8s-jkns-gci-gce-1-3.internal}],DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:579325f333e650a5a1ff1e77ca2b682f,SystemUUID:579325F3-33E6-50A5-A1FF-1E77CA2B682F,BootID:92101c33-fc6e-4c8c-914c-9ded50c759c8,KernelVersion:4.14.94+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:docker://18.9.3,KubeletVersion:v1.15.12-beta.0.9+8de4013f5815f7,KubeProxyVersion:v1.15.12-beta.0.9+8de4013f5815f7,OperatingSystem:linux,Architecture:amd64,},Images:[{[k8s.gcr.io/etcd@sha256:02cd751eef4f7dcea7986e58d51903dab39baf4606f636b50891f30190abce2c k8s.gcr.io/etcd:3.3.10-1] 295923553} {[gcr.io/stackdriver-agents/stackdriver-logging-agent@sha256:6c8574a40816676cd908cfa89d16463002b56ca05fa76d0c912e116bc0ab867e gcr.io/stackdriver-agents/stackdriver-logging-agent:1.6.8] 264721247} {[k8s.gcr.io/kube-apiserver:v1.15.12-beta.0.9_8de4013f5815f7] 247635661} {[k8s.gcr.io/kube-controller-manager:v1.15.12-beta.0.9_8de4013f5815f7] 198439763} {[k8s.gcr.io/kube-scheduler:v1.15.12-beta.0.9_8de4013f5815f7] 95032786} 
{[k8s.gcr.io/kube-addon-manager@sha256:3e315022a842d782a28e729720f21091dde21f1efea28868d65ec595ad871616 k8s.gcr.io/kube-addon-manager:v9.0.2] 83076028} {[k8s.gcr.io/etcd-empty-dir-cleanup@sha256:13e18f320022be5cf7c1c38a6207d02c07d603b52c1dd47b7c69e9324bd3c641 k8s.gcr.io/etcd-empty-dir-cleanup:3.3.10.1] 73958468} {[k8s.gcr.io/ingress-gce-glbc-amd64@sha256:14f14351a03038b238232e60850a9cfa0dffbed0590321ef84216a432accc1ca k8s.gcr.io/ingress-gce-glbc-amd64:v1.2.3] 71797285} {[k8s.gcr.io/prometheus-to-sd@sha256:14666989f40bb7c896c3e775a93c6873e2b791d65bc65579f58a078b7f9a764e k8s.gcr.io/prometheus-to-sd:v0.5.0] 41861013} {[k8s.gcr.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a k8s.gcr.io/metadata-proxy:v0.1.12] 11337839} {[k8s.gcr.io/pause@sha256:f78411e19d84a252e53bff71a4407a5686c46983a2c2eeed83929b888179acea k8s.gcr.io/pause:3.1] 742472}],VolumesInUse:[],VolumesAttached:[],Config:&NodeConfigStatus{Assigned:nil,Active:nil,LastKnownGood:nil,Error:,},},}
Mar 29 14:51:45.966: INFO: 
Logging kubelet events for node e2e-56589a6d6a-b49e0-master
Mar 29 14:51:46.018: INFO: 
Logging pods the kubelet thinks are on node e2e-56589a6d6a-b49e0-master
Mar 29 14:51:46.073: INFO: l7-lb-controller-v1.2.3-e2e-56589a6d6a-b49e0-master started at 2020-03-29 13:27:10 +0000 UTC (0+1 container statuses recorded)
Mar 29 14:51:46.073: INFO: 	Container l7-lb-controller ready: true, restart count 0
... skipping 18 lines ...
Mar 29 14:51:46.073: INFO: etcd-empty-dir-cleanup-e2e-56589a6d6a-b49e0-master started at 2020-03-29 13:26:48 +0000 UTC (0+1 container statuses recorded)
Mar 29 14:51:46.073: INFO: 	Container etcd-empty-dir-cleanup ready: true, restart count 0
Mar 29 14:51:46.225: INFO: 
Latency metrics for node e2e-56589a6d6a-b49e0-master
Mar 29 14:51:46.225: INFO: 
Logging node info for node e2e-56589a6d6a-b49e0-minion-group-gpdt
Mar 29 14:51:46.260: INFO: Node Info: &Node{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-56589a6d6a-b49e0-minion-group-gpdt,GenerateName:,Namespace:,SelfLink:/api/v1/nodes/e2e-56589a6d6a-b49e0-minion-group-gpdt,UID:4336d0f3-4a2a-4987-9730-dae0e5f5ed7f,ResourceVersion:13382,Generation:0,CreationTimestamp:2020-03-29 13:27:34 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{beta.kubernetes.io/arch: amd64,beta.kubernetes.io/instance-type: n1-standard-2,beta.kubernetes.io/metadata-proxy-ready: true,beta.kubernetes.io/os: linux,cloud.google.com/metadata-proxy-ready: true,failure-domain.beta.kubernetes.io/region: us-west1,failure-domain.beta.kubernetes.io/zone: us-west1-b,kubernetes.io/arch: amd64,kubernetes.io/hostname: e2e-56589a6d6a-b49e0-minion-group-gpdt,kubernetes.io/os: linux,},Annotations:map[string]string{node.alpha.kubernetes.io/ttl: 0,volumes.kubernetes.io/controller-managed-attach-detach: true,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:NodeSpec{PodCIDR:10.64.5.0/24,DoNotUse_ExternalID:,ProviderID:gce://k8s-jkns-gci-gce-1-3/us-west1-b/e2e-56589a6d6a-b49e0-minion-group-gpdt,Unschedulable:false,Taints:[{node-under-test false NoSchedule <nil>}],ConfigSource:nil,},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101241290752 0} {<nil>}  BinarySI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7841861632 0} {<nil>} 7658068Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91117161526 0} {<nil>} 91117161526 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7579717632 0} {<nil>} 7402068Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[{FrequentUnregisterNetDevice False 2020-03-29 
14:51:10 +0000 UTC 2020-03-29 13:32:24 +0000 UTC UnregisterNetDevice node is functioning properly} {FrequentKubeletRestart False 2020-03-29 14:51:10 +0000 UTC 2020-03-29 13:32:23 +0000 UTC FrequentKubeletRestart kubelet is functioning properly} {FrequentDockerRestart False 2020-03-29 14:51:10 +0000 UTC 2020-03-29 13:32:24 +0000 UTC FrequentDockerRestart docker is functioning properly} {FrequentContainerdRestart False 2020-03-29 14:51:10 +0000 UTC 2020-03-29 13:32:26 +0000 UTC FrequentContainerdRestart containerd is functioning properly} {CorruptDockerOverlay2 False 2020-03-29 14:51:10 +0000 UTC 2020-03-29 13:32:23 +0000 UTC CorruptDockerOverlay2 docker overlay2 is functioning properly} {KernelDeadlock False 2020-03-29 14:51:10 +0000 UTC 2020-03-29 13:27:22 +0000 UTC KernelHasNoDeadlock kernel has no deadlock} {ReadonlyFilesystem False 2020-03-29 14:51:10 +0000 UTC 2020-03-29 13:27:22 +0000 UTC FilesystemIsNotReadOnly Filesystem is not read-only} {NetworkUnavailable False 2020-03-29 13:27:36 +0000 UTC 2020-03-29 13:27:36 +0000 UTC RouteCreated NodeController create implicit route} {MemoryPressure False 2020-03-29 14:51:11 +0000 UTC 2020-03-29 13:27:34 +0000 UTC KubeletHasSufficientMemory kubelet has sufficient memory available} {DiskPressure False 2020-03-29 14:51:11 +0000 UTC 2020-03-29 13:27:34 +0000 UTC KubeletHasNoDiskPressure kubelet has no disk pressure} {PIDPressure False 2020-03-29 14:51:11 +0000 UTC 2020-03-29 13:27:34 +0000 UTC KubeletHasSufficientPID kubelet has sufficient PID available} {Ready True 2020-03-29 14:51:11 +0000 UTC 2020-03-29 13:27:35 +0000 UTC KubeletReady kubelet is posting ready status. 
AppArmor enabled}],Addresses:[{InternalIP 10.40.0.7} {ExternalIP 35.199.151.49} {InternalDNS e2e-56589a6d6a-b49e0-minion-group-gpdt.c.k8s-jkns-gci-gce-1-3.internal} {Hostname e2e-56589a6d6a-b49e0-minion-group-gpdt.c.k8s-jkns-gci-gce-1-3.internal}],DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:a2b8e2cf3123f7be41c73b0b8063bda3,SystemUUID:A2B8E2CF-3123-F7BE-41C7-3B0B8063BDA3,BootID:d500ec22-7c72-4e95-b3a7-bea66228a5a0,KernelVersion:4.14.94+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:docker://18.9.3,KubeletVersion:v1.15.12-beta.0.9+8de4013f5815f7,KubeProxyVersion:v1.15.12-beta.0.9+8de4013f5815f7,OperatingSystem:linux,Architecture:amd64,},Images:[{[gcr.io/stackdriver-agents/stackdriver-logging-agent@sha256:6c8574a40816676cd908cfa89d16463002b56ca05fa76d0c912e116bc0ab867e gcr.io/stackdriver-agents/stackdriver-logging-agent:1.6.8] 264721247} {[k8s.gcr.io/kubernetes-dashboard-amd64@sha256:0ae6b69432e78069c5ce2bcde0fe409c5c4d6f0f4d9cd50a17974fea38898747 k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.1] 121711221} {[k8s.gcr.io/kube-proxy:v1.15.12-beta.0.9_8de4013f5815f7] 95529390} {[k8s.gcr.io/fluentd-gcp-scaler@sha256:4f28f10fb89506768910b858f7a18ffb996824a16d70d5ac895e49687df9ff58 k8s.gcr.io/fluentd-gcp-scaler:0.5.2] 90498960} {[k8s.gcr.io/heapster-amd64@sha256:9fae0af136ce0cf4f88393b3670f7139ffc464692060c374d2ae748e13144521 k8s.gcr.io/heapster-amd64:v1.6.0-beta.1] 76016169} {[k8s.gcr.io/cluster-proportional-autoscaler-amd64@sha256:0abeb6a79ad5aec10e920110446a97fb75180da8680094acb6715de62507f4b0 k8s.gcr.io/cluster-proportional-autoscaler-amd64:1.6.0] 47668785} {[k8s.gcr.io/event-exporter@sha256:06acf489ab092b4fb49273e426549a52c0fcd1dbcb67e03d5935b5ee1a899c3e k8s.gcr.io/event-exporter:v0.2.5] 47261019} {[k8s.gcr.io/prometheus-to-sd@sha256:14666989f40bb7c896c3e775a93c6873e2b791d65bc65579f58a078b7f9a764e k8s.gcr.io/prometheus-to-sd:v0.5.0] 41861013} 
{[k8s.gcr.io/metrics-server-amd64@sha256:4ca116565ff6a46e582bada50ba3550f95b368db1d2415829241a565a6c38e2a k8s.gcr.io/metrics-server-amd64:v0.3.3] 39933796} {[k8s.gcr.io/addon-resizer@sha256:8075ed6db9baad249d9cf2656c0ecaad8d87133baf20286b1953dfb3fb06e75d k8s.gcr.io/addon-resizer:1.8.5] 35110823} {[nginx@sha256:0fd68ec4b64b8dbb2bef1f1a5de9d47b658afd3635dc9c45bf0cbeac46e72101 nginx:1.15-alpine] 16087791} {[k8s.gcr.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a k8s.gcr.io/metadata-proxy:v0.1.12] 11337839} {[k8s.gcr.io/defaultbackend-amd64@sha256:4dc5e07c8ca4e23bddb3153737d7b8c556e5fb2f29c4558b7cd6e6df99c512c7 k8s.gcr.io/defaultbackend-amd64:1.5] 5132544} {[k8s.gcr.io/pause@sha256:f78411e19d84a252e53bff71a4407a5686c46983a2c2eeed83929b888179acea k8s.gcr.io/pause:3.1] 742472}],VolumesInUse:[],VolumesAttached:[],Config:&NodeConfigStatus{Assigned:nil,Active:nil,LastKnownGood:nil,Error:,},},}
Mar 29 14:51:46.260: INFO: 
Logging kubelet events for node e2e-56589a6d6a-b49e0-minion-group-gpdt
Mar 29 14:51:46.295: INFO: 
Logging pods the kubelet thinks are on node e2e-56589a6d6a-b49e0-minion-group-gpdt
Mar 29 14:51:46.337: INFO: fluentd-gcp-scaler-6848d689fb-zx4mz started at 2020-03-29 13:27:36 +0000 UTC (0+1 container statuses recorded)
Mar 29 14:51:46.337: INFO: 	Container fluentd-gcp-scaler ready: true, restart count 0
... skipping 15 lines ...
Mar 29 14:51:46.337: INFO: l7-default-backend-84c9fcfbb-4snp9 started at 2020-03-29 13:27:34 +0000 UTC (0+1 container statuses recorded)
Mar 29 14:51:46.337: INFO: 	Container default-http-backend ready: true, restart count 0
Mar 29 14:51:46.482: INFO: 
Latency metrics for node e2e-56589a6d6a-b49e0-minion-group-gpdt
Mar 29 14:51:46.482: INFO: 
Logging node info for node e2e-56589a6d6a-b49e0-minion-group-r0j4
Mar 29 14:51:46.517: INFO: Node Info: &Node{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-56589a6d6a-b49e0-minion-group-r0j4,GenerateName:,Namespace:,SelfLink:/api/v1/nodes/e2e-56589a6d6a-b49e0-minion-group-r0j4,UID:ee79214f-2b5e-4675-a36f-58d66a766ae3,ResourceVersion:13465,Generation:0,CreationTimestamp:2020-03-29 13:27:38 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{beta.kubernetes.io/arch: amd64,beta.kubernetes.io/instance-type: n1-standard-2,beta.kubernetes.io/metadata-proxy-ready: true,beta.kubernetes.io/os: linux,cloud.google.com/metadata-proxy-ready: true,failure-domain.beta.kubernetes.io/region: us-west1,failure-domain.beta.kubernetes.io/zone: us-west1-b,kubernetes.io/arch: amd64,kubernetes.io/hostname: e2e-56589a6d6a-b49e0-minion-group-r0j4,kubernetes.io/os: linux,},Annotations:map[string]string{node.alpha.kubernetes.io/ttl: 0,volumes.kubernetes.io/controller-managed-attach-detach: true,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:NodeSpec{PodCIDR:10.64.4.0/24,DoNotUse_ExternalID:,ProviderID:gce://k8s-jkns-gci-gce-1-3/us-west1-b/e2e-56589a6d6a-b49e0-minion-group-r0j4,Unschedulable:false,Taints:[{node-under-test false NoSchedule <nil>}],ConfigSource:nil,},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101241290752 0} {<nil>}  BinarySI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7841861632 0} {<nil>} 7658068Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91117161526 0} {<nil>} 91117161526 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7579717632 0} {<nil>} 7402068Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[{KernelDeadlock False 2020-03-29 14:51:16 
+0000 UTC 2020-03-29 13:27:26 +0000 UTC KernelHasNoDeadlock kernel has no deadlock} {ReadonlyFilesystem False 2020-03-29 14:51:16 +0000 UTC 2020-03-29 13:27:26 +0000 UTC FilesystemIsNotReadOnly Filesystem is not read-only} {FrequentKubeletRestart False 2020-03-29 14:51:16 +0000 UTC 2020-03-29 13:32:27 +0000 UTC FrequentKubeletRestart kubelet is functioning properly} {FrequentDockerRestart False 2020-03-29 14:51:16 +0000 UTC 2020-03-29 13:32:28 +0000 UTC FrequentDockerRestart docker is functioning properly} {FrequentContainerdRestart False 2020-03-29 14:51:16 +0000 UTC 2020-03-29 13:32:29 +0000 UTC FrequentContainerdRestart containerd is functioning properly} {FrequentUnregisterNetDevice False 2020-03-29 14:51:16 +0000 UTC 2020-03-29 13:32:27 +0000 UTC UnregisterNetDevice node is functioning properly} {CorruptDockerOverlay2 False 2020-03-29 14:51:16 +0000 UTC 2020-03-29 13:32:27 +0000 UTC CorruptDockerOverlay2 docker overlay2 is functioning properly} {NetworkUnavailable False 2020-03-29 13:27:38 +0000 UTC 2020-03-29 13:27:38 +0000 UTC RouteCreated NodeController create implicit route} {MemoryPressure False 2020-03-29 14:51:15 +0000 UTC 2020-03-29 13:27:38 +0000 UTC KubeletHasSufficientMemory kubelet has sufficient memory available} {DiskPressure False 2020-03-29 14:51:15 +0000 UTC 2020-03-29 13:27:38 +0000 UTC KubeletHasNoDiskPressure kubelet has no disk pressure} {PIDPressure False 2020-03-29 14:51:15 +0000 UTC 2020-03-29 13:27:38 +0000 UTC KubeletHasSufficientPID kubelet has sufficient PID available} {Ready True 2020-03-29 14:51:15 +0000 UTC 2020-03-29 13:27:38 +0000 UTC KubeletReady kubelet is posting ready status. 
AppArmor enabled}],Addresses:[{InternalIP 10.40.0.6} {ExternalIP 34.82.133.227} {InternalDNS e2e-56589a6d6a-b49e0-minion-group-r0j4.c.k8s-jkns-gci-gce-1-3.internal} {Hostname e2e-56589a6d6a-b49e0-minion-group-r0j4.c.k8s-jkns-gci-gce-1-3.internal}],DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:f3e8990f7574f970185a10b5e4a640ce,SystemUUID:F3E8990F-7574-F970-185A-10B5E4A640CE,BootID:a00e4867-5ea0-49d4-800f-3cb47e454179,KernelVersion:4.14.94+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:docker://18.9.3,KubeletVersion:v1.15.12-beta.0.9+8de4013f5815f7,KubeProxyVersion:v1.15.12-beta.0.9+8de4013f5815f7,OperatingSystem:linux,Architecture:amd64,},Images:[{[gcr.io/stackdriver-agents/stackdriver-logging-agent@sha256:6c8574a40816676cd908cfa89d16463002b56ca05fa76d0c912e116bc0ab867e gcr.io/stackdriver-agents/stackdriver-logging-agent:1.6.8] 264721247} {[k8s.gcr.io/kube-proxy:v1.15.12-beta.0.9_8de4013f5815f7] 95529390} {[k8s.gcr.io/heapster-amd64@sha256:9fae0af136ce0cf4f88393b3670f7139ffc464692060c374d2ae748e13144521 k8s.gcr.io/heapster-amd64:v1.6.0-beta.1] 76016169} {[k8s.gcr.io/prometheus-to-sd@sha256:14666989f40bb7c896c3e775a93c6873e2b791d65bc65579f58a078b7f9a764e k8s.gcr.io/prometheus-to-sd:v0.5.0] 41861013} {[k8s.gcr.io/coredns@sha256:02382353821b12c21b062c59184e227e001079bb13ebd01f9d3270ba0fcbf1e4 k8s.gcr.io/coredns:1.3.1] 40303560} {[k8s.gcr.io/metrics-server-amd64@sha256:4ca116565ff6a46e582bada50ba3550f95b368db1d2415829241a565a6c38e2a k8s.gcr.io/metrics-server-amd64:v0.3.3] 39933796} {[k8s.gcr.io/addon-resizer@sha256:8075ed6db9baad249d9cf2656c0ecaad8d87133baf20286b1953dfb3fb06e75d k8s.gcr.io/addon-resizer:1.8.5] 35110823} {[k8s.gcr.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a k8s.gcr.io/metadata-proxy:v0.1.12] 11337839} {[k8s.gcr.io/pause@sha256:f78411e19d84a252e53bff71a4407a5686c46983a2c2eeed83929b888179acea k8s.gcr.io/pause:3.1] 
742472}],VolumesInUse:[],VolumesAttached:[],Config:&NodeConfigStatus{Assigned:nil,Active:nil,LastKnownGood:nil,Error:,},},}
Mar 29 14:51:46.517: INFO: 
Logging kubelet events for node e2e-56589a6d6a-b49e0-minion-group-r0j4
Mar 29 14:51:46.555: INFO: 
Logging pods the kubelet thinks are on node e2e-56589a6d6a-b49e0-minion-group-r0j4
Mar 29 14:51:46.614: INFO: heapster-v1.6.0-beta.1-664d95949f-vkxpw started at 2020-03-29 13:27:58 +0000 UTC (0+2 container statuses recorded)
Mar 29 14:51:46.614: INFO: 	Container heapster ready: true, restart count 0
... skipping 156 lines ...
• Failure [464.617 seconds]
[sig-apps] Deployment
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should support proportional scaling [Conformance] [It]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697

  error in waiting for pods to come up: failed to wait for pods running: [timed out waiting for the condition timed out waiting for the condition]
  Unexpected error:
      <*errors.errorString | 0xc00159fa60>: {
          s: "failed to wait for pods running: [timed out waiting for the condition timed out waiting for the condition]",
      }
      failed to wait for pods running: [timed out waiting for the condition timed out waiting for the condition]
  occurred

  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:870
------------------------------
SSSSSSS
------------------------------
... skipping 668 lines ...
STEP: Looking for a node to schedule stateful set and pod
STEP: Creating pod with conflicting port in namespace statefulset-4382
STEP: Creating statefulset with conflicting port in namespace statefulset-4382
STEP: Waiting until pod test-pod starts running in namespace statefulset-4382
STEP: Waiting until stateful pod ss-0 is recreated and deleted at least once in namespace statefulset-4382
Mar 29 14:53:52.716: INFO: Observed stateful pod in namespace: statefulset-4382, name: ss-0, uid: 8f3582b0-0a88-4b19-943b-5e27fed4b66a, status phase: Pending. Waiting for statefulset controller to delete.
Mar 29 14:53:54.266: INFO: Observed stateful pod in namespace: statefulset-4382, name: ss-0, uid: 8f3582b0-0a88-4b19-943b-5e27fed4b66a, status phase: Failed. Waiting for statefulset controller to delete.
Mar 29 14:53:54.273: INFO: Observed stateful pod in namespace: statefulset-4382, name: ss-0, uid: 8f3582b0-0a88-4b19-943b-5e27fed4b66a, status phase: Failed. Waiting for statefulset controller to delete.
Mar 29 14:53:54.278: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-4382
STEP: Removing pod with conflicting port in namespace statefulset-4382
STEP: Waiting until stateful pod ss-0 is recreated in namespace statefulset-4382 and is in running state
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86
Mar 29 14:55:26.280: INFO: Deleting all statefulset in ns statefulset-4382
... skipping 139 lines ...
Mar 29 14:56:48.303: INFO: Unable to read jessie_udp@kubernetes.default.svc.cluster.local from pod dns-3676/dns-test-b8f7ec89-8bbf-487d-95b9-3861571524ba: the server could not find the requested resource (get pods dns-test-b8f7ec89-8bbf-487d-95b9-3861571524ba)
Mar 29 14:56:48.490: INFO: Unable to read jessie_tcp@kubernetes.default.svc.cluster.local from pod dns-3676/dns-test-b8f7ec89-8bbf-487d-95b9-3861571524ba: the server could not find the requested resource (get pods dns-test-b8f7ec89-8bbf-487d-95b9-3861571524ba)
Mar 29 14:56:48.554: INFO: Unable to read jessie_udp@google.com from pod dns-3676/dns-test-b8f7ec89-8bbf-487d-95b9-3861571524ba: the server could not find the requested resource (get pods dns-test-b8f7ec89-8bbf-487d-95b9-3861571524ba)
Mar 29 14:56:48.606: INFO: Unable to read jessie_tcp@google.com from pod dns-3676/dns-test-b8f7ec89-8bbf-487d-95b9-3861571524ba: the server could not find the requested resource (get pods dns-test-b8f7ec89-8bbf-487d-95b9-3861571524ba)
Mar 29 14:56:48.646: INFO: Unable to read jessie_udp@PodARecord from pod dns-3676/dns-test-b8f7ec89-8bbf-487d-95b9-3861571524ba: the server could not find the requested resource (get pods dns-test-b8f7ec89-8bbf-487d-95b9-3861571524ba)
Mar 29 14:56:48.686: INFO: Unable to read jessie_tcp@PodARecord from pod dns-3676/dns-test-b8f7ec89-8bbf-487d-95b9-3861571524ba: the server could not find the requested resource (get pods dns-test-b8f7ec89-8bbf-487d-95b9-3861571524ba)
Mar 29 14:56:48.686: INFO: Lookups using dns-3676/dns-test-b8f7ec89-8bbf-487d-95b9-3861571524ba failed for: [wheezy_udp@kubernetes.default.svc.cluster.local wheezy_tcp@kubernetes.default.svc.cluster.local wheezy_udp@google.com wheezy_tcp@google.com wheezy_udp@PodARecord wheezy_tcp@PodARecord jessie_udp@kubernetes.default.svc.cluster.local jessie_tcp@kubernetes.default.svc.cluster.local jessie_udp@google.com jessie_tcp@google.com jessie_udp@PodARecord jessie_tcp@PodARecord]

Mar 29 14:56:53.818: INFO: Unable to read wheezy_udp@kubernetes.default.svc.cluster.local from pod dns-3676/dns-test-b8f7ec89-8bbf-487d-95b9-3861571524ba: the server could not find the requested resource (get pods dns-test-b8f7ec89-8bbf-487d-95b9-3861571524ba)
Mar 29 14:56:53.860: INFO: Unable to read wheezy_tcp@kubernetes.default.svc.cluster.local from pod dns-3676/dns-test-b8f7ec89-8bbf-487d-95b9-3861571524ba: the server could not find the requested resource (get pods dns-test-b8f7ec89-8bbf-487d-95b9-3861571524ba)
Mar 29 14:56:53.917: INFO: Unable to read wheezy_udp@google.com from pod dns-3676/dns-test-b8f7ec89-8bbf-487d-95b9-3861571524ba: the server could not find the requested resource (get pods dns-test-b8f7ec89-8bbf-487d-95b9-3861571524ba)
Mar 29 14:56:53.956: INFO: Unable to read wheezy_tcp@google.com from pod dns-3676/dns-test-b8f7ec89-8bbf-487d-95b9-3861571524ba: the server could not find the requested resource (get pods dns-test-b8f7ec89-8bbf-487d-95b9-3861571524ba)
Mar 29 14:56:53.995: INFO: Unable to read wheezy_udp@PodARecord from pod dns-3676/dns-test-b8f7ec89-8bbf-487d-95b9-3861571524ba: the server could not find the requested resource (get pods dns-test-b8f7ec89-8bbf-487d-95b9-3861571524ba)
Mar 29 14:56:54.034: INFO: Unable to read wheezy_tcp@PodARecord from pod dns-3676/dns-test-b8f7ec89-8bbf-487d-95b9-3861571524ba: the server could not find the requested resource (get pods dns-test-b8f7ec89-8bbf-487d-95b9-3861571524ba)
Mar 29 14:56:54.447: INFO: Unable to read jessie_tcp@kubernetes.default.svc.cluster.local from pod dns-3676/dns-test-b8f7ec89-8bbf-487d-95b9-3861571524ba: the server could not find the requested resource (get pods dns-test-b8f7ec89-8bbf-487d-95b9-3861571524ba)
Mar 29 14:56:54.503: INFO: Unable to read jessie_udp@google.com from pod dns-3676/dns-test-b8f7ec89-8bbf-487d-95b9-3861571524ba: the server could not find the requested resource (get pods dns-test-b8f7ec89-8bbf-487d-95b9-3861571524ba)
Mar 29 14:56:54.543: INFO: Unable to read jessie_tcp@google.com from pod dns-3676/dns-test-b8f7ec89-8bbf-487d-95b9-3861571524ba: the server could not find the requested resource (get pods dns-test-b8f7ec89-8bbf-487d-95b9-3861571524ba)
Mar 29 14:56:54.581: INFO: Unable to read jessie_udp@PodARecord from pod dns-3676/dns-test-b8f7ec89-8bbf-487d-95b9-3861571524ba: the server could not find the requested resource (get pods dns-test-b8f7ec89-8bbf-487d-95b9-3861571524ba)
Mar 29 14:56:54.902: INFO: Unable to read jessie_tcp@PodARecord from pod dns-3676/dns-test-b8f7ec89-8bbf-487d-95b9-3861571524ba: the server could not find the requested resource (get pods dns-test-b8f7ec89-8bbf-487d-95b9-3861571524ba)
Mar 29 14:56:54.902: INFO: Lookups using dns-3676/dns-test-b8f7ec89-8bbf-487d-95b9-3861571524ba failed for: [wheezy_udp@kubernetes.default.svc.cluster.local wheezy_tcp@kubernetes.default.svc.cluster.local wheezy_udp@google.com wheezy_tcp@google.com wheezy_udp@PodARecord wheezy_tcp@PodARecord jessie_tcp@kubernetes.default.svc.cluster.local jessie_udp@google.com jessie_tcp@google.com jessie_udp@PodARecord jessie_tcp@PodARecord]

Mar 29 14:56:58.726: INFO: Unable to read wheezy_udp@kubernetes.default.svc.cluster.local from pod dns-3676/dns-test-b8f7ec89-8bbf-487d-95b9-3861571524ba: the server could not find the requested resource (get pods dns-test-b8f7ec89-8bbf-487d-95b9-3861571524ba)
Mar 29 14:56:59.201: INFO: Unable to read wheezy_tcp@PodARecord from pod dns-3676/dns-test-b8f7ec89-8bbf-487d-95b9-3861571524ba: the server could not find the requested resource (get pods dns-test-b8f7ec89-8bbf-487d-95b9-3861571524ba)
Mar 29 14:56:59.318: INFO: Unable to read jessie_udp@google.com from pod dns-3676/dns-test-b8f7ec89-8bbf-487d-95b9-3861571524ba: the server could not find the requested resource (get pods dns-test-b8f7ec89-8bbf-487d-95b9-3861571524ba)
Mar 29 14:56:59.356: INFO: Unable to read jessie_tcp@google.com from pod dns-3676/dns-test-b8f7ec89-8bbf-487d-95b9-3861571524ba: the server could not find the requested resource (get pods dns-test-b8f7ec89-8bbf-487d-95b9-3861571524ba)
Mar 29 14:56:59.396: INFO: Unable to read jessie_udp@PodARecord from pod dns-3676/dns-test-b8f7ec89-8bbf-487d-95b9-3861571524ba: the server could not find the requested resource (get pods dns-test-b8f7ec89-8bbf-487d-95b9-3861571524ba)
Mar 29 14:56:59.434: INFO: Unable to read jessie_tcp@PodARecord from pod dns-3676/dns-test-b8f7ec89-8bbf-487d-95b9-3861571524ba: the server could not find the requested resource (get pods dns-test-b8f7ec89-8bbf-487d-95b9-3861571524ba)
Mar 29 14:56:59.434: INFO: Lookups using dns-3676/dns-test-b8f7ec89-8bbf-487d-95b9-3861571524ba failed for: [wheezy_udp@kubernetes.default.svc.cluster.local wheezy_tcp@PodARecord jessie_udp@google.com jessie_tcp@google.com jessie_udp@PodARecord jessie_tcp@PodARecord]

Mar 29 14:57:04.537: INFO: Unable to read jessie_udp@PodARecord from pod dns-3676/dns-test-b8f7ec89-8bbf-487d-95b9-3861571524ba: the server could not find the requested resource (get pods dns-test-b8f7ec89-8bbf-487d-95b9-3861571524ba)
Mar 29 14:57:04.580: INFO: Unable to read jessie_tcp@PodARecord from pod dns-3676/dns-test-b8f7ec89-8bbf-487d-95b9-3861571524ba: the server could not find the requested resource (get pods dns-test-b8f7ec89-8bbf-487d-95b9-3861571524ba)
Mar 29 14:57:04.580: INFO: Lookups using dns-3676/dns-test-b8f7ec89-8bbf-487d-95b9-3861571524ba failed for: [jessie_udp@PodARecord jessie_tcp@PodARecord]

Mar 29 14:57:09.388: INFO: DNS probes using dns-3676/dns-test-b8f7ec89-8bbf-487d-95b9-3861571524ba succeeded

STEP: deleting the pod
[AfterEach] [sig-network] DNS
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
... skipping 596 lines ...
Mar 29 14:58:25.553: INFO: Pod "volume-prep-provisioning-4827": Phase="Succeeded", Reason="", readiness=false. Elapsed: 1m27.681264085s
STEP: Saw pod success
Mar 29 14:58:25.553: INFO: Pod "volume-prep-provisioning-4827" satisfied condition "success or failure"
Mar 29 14:58:25.553: INFO: Deleting pod "volume-prep-provisioning-4827" in namespace "provisioning-4827"
Mar 29 14:58:25.601: INFO: Wait up to 5m0s for pod "volume-prep-provisioning-4827" to be fully deleted
STEP: Creating pod pod-subpath-test-gcepd-dynamicpv-xfnj
STEP: Checking for subpath error in container status
Mar 29 14:59:25.764: INFO: Deleting pod "pod-subpath-test-gcepd-dynamicpv-xfnj" in namespace "provisioning-4827"
Mar 29 14:59:25.811: INFO: Wait up to 5m0s for pod "pod-subpath-test-gcepd-dynamicpv-xfnj" to be fully deleted
STEP: Deleting pod
Mar 29 14:59:25.849: INFO: Deleting pod "pod-subpath-test-gcepd-dynamicpv-xfnj" in namespace "provisioning-4827"
STEP: Deleting pvc
Mar 29 14:59:25.887: INFO: Deleting PersistentVolumeClaim "pvc-jlfvn"
... skipping 34 lines ...

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Mar 29 14:58:03.406: INFO: File jessie_udp@dns-test-service-3.dns-8444.svc.cluster.local from pod  dns-8444/dns-test-d1ffdfa7-92cf-4e47-87f8-145f1241f772 contains '' instead of 'foo.example.com.'
Mar 29 14:58:03.406: INFO: Lookups using dns-8444/dns-test-d1ffdfa7-92cf-4e47-87f8-145f1241f772 failed for: [jessie_udp@dns-test-service-3.dns-8444.svc.cluster.local]

Mar 29 14:58:08.444: INFO: File wheezy_udp@dns-test-service-3.dns-8444.svc.cluster.local from pod  dns-8444/dns-test-d1ffdfa7-92cf-4e47-87f8-145f1241f772 contains '' instead of 'foo.example.com.'
Mar 29 14:58:08.585: INFO: File jessie_udp@dns-test-service-3.dns-8444.svc.cluster.local from pod  dns-8444/dns-test-d1ffdfa7-92cf-4e47-87f8-145f1241f772 contains '' instead of 'foo.example.com.'
Mar 29 14:58:08.585: INFO: Lookups using dns-8444/dns-test-d1ffdfa7-92cf-4e47-87f8-145f1241f772 failed for: [wheezy_udp@dns-test-service-3.dns-8444.svc.cluster.local jessie_udp@dns-test-service-3.dns-8444.svc.cluster.local]

Mar 29 14:58:13.444: INFO: File wheezy_udp@dns-test-service-3.dns-8444.svc.cluster.local from pod  dns-8444/dns-test-d1ffdfa7-92cf-4e47-87f8-145f1241f772 contains '' instead of 'foo.example.com.'
Mar 29 14:58:13.480: INFO: Lookups using dns-8444/dns-test-d1ffdfa7-92cf-4e47-87f8-145f1241f772 failed for: [wheezy_udp@dns-test-service-3.dns-8444.svc.cluster.local]

Mar 29 14:58:18.482: INFO: DNS probes using dns-test-d1ffdfa7-92cf-4e47-87f8-145f1241f772 succeeded

STEP: deleting the pod
STEP: changing the externalName to bar.example.com
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-8444.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-8444.svc.cluster.local; sleep 1; done
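The probe pods run the shell loop shown in the STEP above, and the framework then repeatedly re-checks the recorded results until every expected name resolves (the "Lookups ... failed for: [...]" lines are tolerated retries, not hard failures). A minimal sketch of that poll-until pattern, assuming a hypothetical `lookup_ok` check (the real test reads result files written by `dig` inside the wheezy/jessie containers):

```python
import time

def poll_until(check, timeout=5.0, interval=0.1):
    """Return True as soon as check() succeeds, False if the timeout
    expires. Mirrors the e2e retry loop: lookup failures are tolerated
    and re-probed until the deadline, then reported."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if check():
            return True
        time.sleep(interval)
    return False

# Hypothetical lookup that only returns the expected CNAME after a few
# attempts, as when a changed externalName record is still propagating.
results = iter(["", "", "foo.example.com."])
def lookup_ok():
    return next(results, "foo.example.com.") == "foo.example.com."
```

This is why the log shows several failed lookup rounds followed by "DNS probes ... succeeded": the probe only fails the test if the deadline expires first.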
... skipping 14 lines ...

STEP: creating a third pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Mar 29 14:59:11.569: INFO: File wheezy_udp@dns-test-service-3.dns-8444.svc.cluster.local from pod  dns-8444/dns-test-ebf5a7df-8026-4cb7-8245-c72a0124b201 contains '' instead of '10.0.147.120'
Mar 29 14:59:11.640: INFO: Lookups using dns-8444/dns-test-ebf5a7df-8026-4cb7-8245-c72a0124b201 failed for: [wheezy_udp@dns-test-service-3.dns-8444.svc.cluster.local]

Mar 29 14:59:16.676: INFO: File wheezy_udp@dns-test-service-3.dns-8444.svc.cluster.local from pod  dns-8444/dns-test-ebf5a7df-8026-4cb7-8245-c72a0124b201 contains '' instead of '10.0.147.120'
Mar 29 14:59:16.713: INFO: File jessie_udp@dns-test-service-3.dns-8444.svc.cluster.local from pod  dns-8444/dns-test-ebf5a7df-8026-4cb7-8245-c72a0124b201 contains '' instead of '10.0.147.120'
Mar 29 14:59:16.713: INFO: Lookups using dns-8444/dns-test-ebf5a7df-8026-4cb7-8245-c72a0124b201 failed for: [wheezy_udp@dns-test-service-3.dns-8444.svc.cluster.local jessie_udp@dns-test-service-3.dns-8444.svc.cluster.local]

Mar 29 14:59:21.678: INFO: File wheezy_udp@dns-test-service-3.dns-8444.svc.cluster.local from pod  dns-8444/dns-test-ebf5a7df-8026-4cb7-8245-c72a0124b201 contains '' instead of '10.0.147.120'
Mar 29 14:59:21.716: INFO: Lookups using dns-8444/dns-test-ebf5a7df-8026-4cb7-8245-c72a0124b201 failed for: [wheezy_udp@dns-test-service-3.dns-8444.svc.cluster.local]

Mar 29 14:59:26.754: INFO: DNS probes using dns-test-ebf5a7df-8026-4cb7-8245-c72a0124b201 succeeded

STEP: deleting the pod
STEP: deleting the test externalName service
[AfterEach] [sig-network] DNS
... skipping 796 lines ...
Mar 29 15:02:25.174: INFO: Unable to read jessie_udp@_http._tcp.test-service-2.dns-8720.svc.cluster.local from pod dns-8720/dns-test-731963b9-5a71-4109-a781-6ed3d792f5e2: the server could not find the requested resource (get pods dns-test-731963b9-5a71-4109-a781-6ed3d792f5e2)
Mar 29 15:02:25.216: INFO: Unable to read jessie_tcp@_http._tcp.test-service-2.dns-8720.svc.cluster.local from pod dns-8720/dns-test-731963b9-5a71-4109-a781-6ed3d792f5e2: the server could not find the requested resource (get pods dns-test-731963b9-5a71-4109-a781-6ed3d792f5e2)
Mar 29 15:02:25.310: INFO: Unable to read jessie_udp@PodARecord from pod dns-8720/dns-test-731963b9-5a71-4109-a781-6ed3d792f5e2: the server could not find the requested resource (get pods dns-test-731963b9-5a71-4109-a781-6ed3d792f5e2)
Mar 29 15:02:25.441: INFO: Unable to read jessie_tcp@PodARecord from pod dns-8720/dns-test-731963b9-5a71-4109-a781-6ed3d792f5e2: the server could not find the requested resource (get pods dns-test-731963b9-5a71-4109-a781-6ed3d792f5e2)
Mar 29 15:02:25.518: INFO: Unable to read 10.0.51.109_udp@PTR from pod dns-8720/dns-test-731963b9-5a71-4109-a781-6ed3d792f5e2: the server could not find the requested resource (get pods dns-test-731963b9-5a71-4109-a781-6ed3d792f5e2)
Mar 29 15:02:25.573: INFO: Unable to read 10.0.51.109_tcp@PTR from pod dns-8720/dns-test-731963b9-5a71-4109-a781-6ed3d792f5e2: the server could not find the requested resource (get pods dns-test-731963b9-5a71-4109-a781-6ed3d792f5e2)
Mar 29 15:02:25.573: INFO: Lookups using dns-8720/dns-test-731963b9-5a71-4109-a781-6ed3d792f5e2 failed for: [wheezy_udp@dns-test-service.dns-8720.svc.cluster.local wheezy_tcp@dns-test-service.dns-8720.svc.cluster.local wheezy_udp@PodARecord wheezy_tcp@PodARecord 10.0.51.109_udp@PTR 10.0.51.109_tcp@PTR jessie_udp@dns-test-service.dns-8720.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-8720.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-8720.svc.cluster.local jessie_udp@_http._tcp.test-service-2.dns-8720.svc.cluster.local jessie_tcp@_http._tcp.test-service-2.dns-8720.svc.cluster.local jessie_udp@PodARecord jessie_tcp@PodARecord 10.0.51.109_udp@PTR 10.0.51.109_tcp@PTR]

Mar 29 15:02:30.643: INFO: Unable to read wheezy_udp@dns-test-service.dns-8720.svc.cluster.local from pod dns-8720/dns-test-731963b9-5a71-4109-a781-6ed3d792f5e2: the server could not find the requested resource (get pods dns-test-731963b9-5a71-4109-a781-6ed3d792f5e2)
Mar 29 15:02:30.699: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8720.svc.cluster.local from pod dns-8720/dns-test-731963b9-5a71-4109-a781-6ed3d792f5e2: the server could not find the requested resource (get pods dns-test-731963b9-5a71-4109-a781-6ed3d792f5e2)
Mar 29 15:02:31.408: INFO: Unable to read jessie_udp@dns-test-service.dns-8720.svc.cluster.local from pod dns-8720/dns-test-731963b9-5a71-4109-a781-6ed3d792f5e2: the server could not find the requested resource (get pods dns-test-731963b9-5a71-4109-a781-6ed3d792f5e2)
Mar 29 15:02:32.244: INFO: Lookups using dns-8720/dns-test-731963b9-5a71-4109-a781-6ed3d792f5e2 failed for: [wheezy_udp@dns-test-service.dns-8720.svc.cluster.local wheezy_tcp@dns-test-service.dns-8720.svc.cluster.local jessie_udp@dns-test-service.dns-8720.svc.cluster.local]

Mar 29 15:02:35.673: INFO: Unable to read wheezy_udp@dns-test-service.dns-8720.svc.cluster.local from pod dns-8720/dns-test-731963b9-5a71-4109-a781-6ed3d792f5e2: the server could not find the requested resource (get pods dns-test-731963b9-5a71-4109-a781-6ed3d792f5e2)
Mar 29 15:02:35.720: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8720.svc.cluster.local from pod dns-8720/dns-test-731963b9-5a71-4109-a781-6ed3d792f5e2: the server could not find the requested resource (get pods dns-test-731963b9-5a71-4109-a781-6ed3d792f5e2)
Mar 29 15:02:37.311: INFO: Lookups using dns-8720/dns-test-731963b9-5a71-4109-a781-6ed3d792f5e2 failed for: [wheezy_udp@dns-test-service.dns-8720.svc.cluster.local wheezy_tcp@dns-test-service.dns-8720.svc.cluster.local]

Mar 29 15:02:40.770: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8720.svc.cluster.local from pod dns-8720/dns-test-731963b9-5a71-4109-a781-6ed3d792f5e2: the server could not find the requested resource (get pods dns-test-731963b9-5a71-4109-a781-6ed3d792f5e2)
Mar 29 15:02:41.870: INFO: Lookups using dns-8720/dns-test-731963b9-5a71-4109-a781-6ed3d792f5e2 failed for: [wheezy_tcp@dns-test-service.dns-8720.svc.cluster.local]

Mar 29 15:02:45.733: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8720.svc.cluster.local from pod dns-8720/dns-test-731963b9-5a71-4109-a781-6ed3d792f5e2: the server could not find the requested resource (get pods dns-test-731963b9-5a71-4109-a781-6ed3d792f5e2)
Mar 29 15:02:47.361: INFO: Lookups using dns-8720/dns-test-731963b9-5a71-4109-a781-6ed3d792f5e2 failed for: [wheezy_tcp@dns-test-service.dns-8720.svc.cluster.local]

Mar 29 15:02:50.659: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8720.svc.cluster.local from pod dns-8720/dns-test-731963b9-5a71-4109-a781-6ed3d792f5e2: the server could not find the requested resource (get pods dns-test-731963b9-5a71-4109-a781-6ed3d792f5e2)
Mar 29 15:02:51.524: INFO: Lookups using dns-8720/dns-test-731963b9-5a71-4109-a781-6ed3d792f5e2 failed for: [wheezy_tcp@dns-test-service.dns-8720.svc.cluster.local]

Mar 29 15:02:56.471: INFO: DNS probes using dns-8720/dns-test-731963b9-5a71-4109-a781-6ed3d792f5e2 succeeded

STEP: deleting the pod
STEP: deleting the test service
STEP: deleting the test headless service
... skipping 640 lines ...
STEP: cleaning the environment after gcepd
Mar 29 15:03:32.831: INFO: Deleting pod "gcepd-client" in namespace "volume-105"
Mar 29 15:03:32.872: INFO: Wait up to 5m0s for pod "gcepd-client" to be fully deleted
STEP: Deleting pv and pvc
Mar 29 15:04:18.953: INFO: Deleting PersistentVolumeClaim "pvc-mjdtc"
Mar 29 15:04:18.995: INFO: Deleting PersistentVolume "gcepd-4d75z"
Mar 29 15:04:20.573: INFO: error deleting PD "e2e-56589a6d6a-b49e0-e151c5ee-939c-4aef-945a-d45d26a18cab": googleapi: Error 400: The disk resource 'projects/k8s-jkns-gci-gce-1-3/zones/us-west1-b/disks/e2e-56589a6d6a-b49e0-e151c5ee-939c-4aef-945a-d45d26a18cab' is already being used by 'projects/k8s-jkns-gci-gce-1-3/zones/us-west1-b/instances/e2e-56589a6d6a-b49e0-windows-node-group-159w', resourceInUseByAnotherResource
Mar 29 15:04:20.573: INFO: Couldn't delete PD "e2e-56589a6d6a-b49e0-e151c5ee-939c-4aef-945a-d45d26a18cab", sleeping 5s: googleapi: Error 400: The disk resource 'projects/k8s-jkns-gci-gce-1-3/zones/us-west1-b/disks/e2e-56589a6d6a-b49e0-e151c5ee-939c-4aef-945a-d45d26a18cab' is already being used by 'projects/k8s-jkns-gci-gce-1-3/zones/us-west1-b/instances/e2e-56589a6d6a-b49e0-windows-node-group-159w', resourceInUseByAnotherResource
Mar 29 15:04:27.866: INFO: Successfully deleted PD "e2e-56589a6d6a-b49e0-e151c5ee-939c-4aef-945a-d45d26a18cab".
Mar 29 15:04:27.866: INFO: In-tree plugin kubernetes.io/gce-pd is not migrated, not validating any metrics
[AfterEach] [Testpattern: Pre-provisioned PV (ntfs)][sig-windows] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Mar 29 15:04:27.866: INFO: Waiting up to 3m0s for all (but 3) nodes to be ready
STEP: Destroying namespace "volume-105" for this suite.
... skipping 198 lines ...
[sig-storage] In-tree Volumes
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  [Driver: vSphere]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:66
    [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:91
      should fail if subpath directory is outside the volume [Slow] [BeforeEach]
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:216

      Driver vSphere doesn't support ntfs -- skipping

      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:147
------------------------------
... skipping 480 lines ...
[sig-storage] In-tree Volumes
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  [Driver: local][LocalVolumeType: dir-link]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:66
    [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:91
      should fail if subpath with backstepping is outside the volume [Slow] [BeforeEach]
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:254

      Driver local doesn't support DynamicPV -- skipping

      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:142
------------------------------
... skipping 1031 lines ...
Mar 29 15:08:04.260: INFO: Trying to get logs from node e2e-56589a6d6a-b49e0-windows-node-group-h3pd pod exec-volume-test-gcepd-mnpw container exec-container-gcepd-mnpw: <nil>
STEP: delete the pod
Mar 29 15:08:04.428: INFO: Waiting for pod exec-volume-test-gcepd-mnpw to disappear
Mar 29 15:08:04.466: INFO: Pod exec-volume-test-gcepd-mnpw no longer exists
STEP: Deleting pod exec-volume-test-gcepd-mnpw
Mar 29 15:08:04.466: INFO: Deleting pod "exec-volume-test-gcepd-mnpw" in namespace "volume-2327"
Mar 29 15:08:05.739: INFO: error deleting PD "e2e-56589a6d6a-b49e0-29a64e4f-2f7c-46a9-8e75-b97e2157eb66": googleapi: Error 400: The disk resource 'projects/k8s-jkns-gci-gce-1-3/zones/us-west1-b/disks/e2e-56589a6d6a-b49e0-29a64e4f-2f7c-46a9-8e75-b97e2157eb66' is already being used by 'projects/k8s-jkns-gci-gce-1-3/zones/us-west1-b/instances/e2e-56589a6d6a-b49e0-windows-node-group-h3pd', resourceInUseByAnotherResource
Mar 29 15:08:05.739: INFO: Couldn't delete PD "e2e-56589a6d6a-b49e0-29a64e4f-2f7c-46a9-8e75-b97e2157eb66", sleeping 5s: googleapi: Error 400: The disk resource 'projects/k8s-jkns-gci-gce-1-3/zones/us-west1-b/disks/e2e-56589a6d6a-b49e0-29a64e4f-2f7c-46a9-8e75-b97e2157eb66' is already being used by 'projects/k8s-jkns-gci-gce-1-3/zones/us-west1-b/instances/e2e-56589a6d6a-b49e0-windows-node-group-h3pd', resourceInUseByAnotherResource
Mar 29 15:08:12.082: INFO: error deleting PD "e2e-56589a6d6a-b49e0-29a64e4f-2f7c-46a9-8e75-b97e2157eb66": googleapi: Error 400: The disk resource 'projects/k8s-jkns-gci-gce-1-3/zones/us-west1-b/disks/e2e-56589a6d6a-b49e0-29a64e4f-2f7c-46a9-8e75-b97e2157eb66' is already being used by 'projects/k8s-jkns-gci-gce-1-3/zones/us-west1-b/instances/e2e-56589a6d6a-b49e0-windows-node-group-h3pd', resourceInUseByAnotherResource
Mar 29 15:08:12.082: INFO: Couldn't delete PD "e2e-56589a6d6a-b49e0-29a64e4f-2f7c-46a9-8e75-b97e2157eb66", sleeping 5s: googleapi: Error 400: The disk resource 'projects/k8s-jkns-gci-gce-1-3/zones/us-west1-b/disks/e2e-56589a6d6a-b49e0-29a64e4f-2f7c-46a9-8e75-b97e2157eb66' is already being used by 'projects/k8s-jkns-gci-gce-1-3/zones/us-west1-b/instances/e2e-56589a6d6a-b49e0-windows-node-group-h3pd', resourceInUseByAnotherResource
Mar 29 15:08:18.646: INFO: error deleting PD "e2e-56589a6d6a-b49e0-29a64e4f-2f7c-46a9-8e75-b97e2157eb66": googleapi: Error 400: The disk resource 'projects/k8s-jkns-gci-gce-1-3/zones/us-west1-b/disks/e2e-56589a6d6a-b49e0-29a64e4f-2f7c-46a9-8e75-b97e2157eb66' is already being used by 'projects/k8s-jkns-gci-gce-1-3/zones/us-west1-b/instances/e2e-56589a6d6a-b49e0-windows-node-group-h3pd', resourceInUseByAnotherResource
Mar 29 15:08:18.646: INFO: Couldn't delete PD "e2e-56589a6d6a-b49e0-29a64e4f-2f7c-46a9-8e75-b97e2157eb66", sleeping 5s: googleapi: Error 400: The disk resource 'projects/k8s-jkns-gci-gce-1-3/zones/us-west1-b/disks/e2e-56589a6d6a-b49e0-29a64e4f-2f7c-46a9-8e75-b97e2157eb66' is already being used by 'projects/k8s-jkns-gci-gce-1-3/zones/us-west1-b/instances/e2e-56589a6d6a-b49e0-windows-node-group-h3pd', resourceInUseByAnotherResource
Mar 29 15:08:25.874: INFO: Successfully deleted PD "e2e-56589a6d6a-b49e0-29a64e4f-2f7c-46a9-8e75-b97e2157eb66".
Mar 29 15:08:25.874: INFO: In-tree plugin kubernetes.io/gce-pd is not migrated, not validating any metrics
[AfterEach] [Testpattern: Inline-volume (ntfs)][sig-windows] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Mar 29 15:08:25.874: INFO: Waiting up to 3m0s for all (but 3) nodes to be ready
STEP: Destroying namespace "volume-2327" for this suite.
... skipping 25 lines ...
[sig-storage] In-tree Volumes
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  [Driver: emptydir]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:66
    [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:91
      should fail if subpath directory is outside the volume [Slow] [BeforeEach]
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:216

      Driver emptydir doesn't support DynamicPV -- skipping

      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:142
------------------------------
... skipping 74 lines ...
[BeforeEach] [sig-api-machinery] Secrets
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Mar 29 15:08:33.536: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to create secret due to empty secret key [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name secret-emptykey-test-cdbbfc5e-5c8b-44b4-812b-9e2fa5405b93
[AfterEach] [sig-api-machinery] Secrets
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Mar 29 15:08:33.767: INFO: Waiting up to 3m0s for all (but 3) nodes to be ready
STEP: Destroying namespace "secrets-2286" for this suite.
Mar 29 15:08:39.920: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 29 15:08:41.355: INFO: namespace secrets-2286 deletion completed in 7.550165583s


• [SLOW TEST:7.820 seconds]
[sig-api-machinery] Secrets
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31
  should fail to create secret due to empty secret key [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSS
------------------------------
[BeforeEach] [sig-storage] Projected secret
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
... skipping 224 lines ...
[sig-storage] In-tree Volumes
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  [Driver: cinder]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:66
    [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:91
      should fail if subpath with backstepping is outside the volume [Slow] [BeforeEach]
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:254

      Driver cinder doesn't support ntfs -- skipping

      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:147
------------------------------
... skipping 344 lines ...
[sig-storage] In-tree Volumes
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  [Driver: cinder]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:66
    [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:91
      should fail if subpath directory is outside the volume [Slow] [BeforeEach]
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:216

      Driver cinder doesn't support ntfs -- skipping

      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:147
------------------------------
... skipping 118 lines ...
[sig-storage] In-tree Volumes
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  [Driver: local][LocalVolumeType: dir-link-bindmounted]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:66
    [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:91
      should fail if subpath directory is outside the volume [Slow] [BeforeEach]
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:216

      Driver local doesn't support DynamicPV -- skipping

      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:142
------------------------------
... skipping 161 lines ...
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Mar 29 15:08:20.846: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename provisioning
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail if subpath directory is outside the volume [Slow]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:216
Mar 29 15:08:21.152: INFO: Node name not specified for getVolumeOpCounts, falling back to listing nodes from API Server
Mar 29 15:08:38.559: INFO: Node name not specified for getVolumeOpCounts, falling back to listing nodes from API Server
Mar 29 15:08:54.500: INFO: Creating resource for dynamic PV
STEP: creating a StorageClass provisioning-8449-gcepd-sc5jxbv
STEP: creating a claim
STEP: Creating pod pod-subpath-test-gcepd-dynamicpv-lv76
STEP: Checking for subpath error in container status
Mar 29 15:10:18.759: INFO: Deleting pod "pod-subpath-test-gcepd-dynamicpv-lv76" in namespace "provisioning-8449"
Mar 29 15:10:18.805: INFO: Wait up to 5m0s for pod "pod-subpath-test-gcepd-dynamicpv-lv76" to be fully deleted
STEP: Deleting pod
Mar 29 15:11:08.887: INFO: Deleting pod "pod-subpath-test-gcepd-dynamicpv-lv76" in namespace "provisioning-8449"
STEP: Deleting pvc
Mar 29 15:11:08.928: INFO: Deleting PersistentVolumeClaim "pvc-lz8nc"
... skipping 12 lines ...
[sig-storage] In-tree Volumes
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  [Driver: gcepd]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:66
    [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:91
      should fail if subpath directory is outside the volume [Slow]
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:216
------------------------------
S
------------------------------
[BeforeEach] version v1
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
... skipping 510 lines ...
[sig-storage] In-tree Volumes
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  [Driver: local][LocalVolumeType: dir-link]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:66
    [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:91
      should fail if subpath directory is outside the volume [Slow] [BeforeEach]
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:216

      Driver local doesn't support DynamicPV -- skipping

      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:142
------------------------------
... skipping 9 lines ...
[sig-storage] In-tree Volumes
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  [Driver: local][LocalVolumeType: dir-bindmounted]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:66
    [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:91
      should fail if subpath directory is outside the volume [Slow] [BeforeEach]
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:216

      Driver local doesn't support DynamicPV -- skipping

      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:142
------------------------------
... skipping 286 lines ...
[sig-storage] In-tree Volumes
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  [Driver: hostPath]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:66
    [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:91
      should fail if subpath directory is outside the volume [Slow] [BeforeEach]
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:216

      Driver hostPath doesn't support DynamicPV -- skipping

      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:142
------------------------------
... skipping 292 lines ...
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Mar 29 15:13:24.481: INFO: Successfully updated pod "pod-update-activedeadlineseconds-b00f0520-b5d6-459f-92c8-9b2cc1621f20"
Mar 29 15:13:24.481: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-b00f0520-b5d6-459f-92c8-9b2cc1621f20" in namespace "pods-7721" to be "terminated due to deadline exceeded"
Mar 29 15:13:24.519: INFO: Pod "pod-update-activedeadlineseconds-b00f0520-b5d6-459f-92c8-9b2cc1621f20": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 37.581963ms
Mar 29 15:13:24.519: INFO: Pod "pod-update-activedeadlineseconds-b00f0520-b5d6-459f-92c8-9b2cc1621f20" satisfied condition "terminated due to deadline exceeded"
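The condition the test waits for here ("terminated due to deadline exceeded") is satisfied once the kubelet fails the pod after its `activeDeadlineSeconds` elapses. A trivial sketch of that terminal-state check, with phase and reason passed as plain strings for illustration:

```python
def terminated_due_to_deadline(phase, reason):
    """True once a pod has reached the state the log shows above:
    Phase="Failed" with Reason="DeadlineExceeded"."""
    return phase == "Failed" and reason == "DeadlineExceeded"
```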
[AfterEach] [k8s.io] Pods
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Mar 29 15:13:24.519: INFO: Waiting up to 3m0s for all (but 3) nodes to be ready
STEP: Destroying namespace "pods-7721" for this suite.
Mar 29 15:13:30.676: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
... skipping 341 lines ...
[sig-storage] In-tree Volumes
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  [Driver: local][LocalVolumeType: block]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:66
    [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:91
      should fail if subpath directory is outside the volume [Slow] [BeforeEach]
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:216

      Driver local doesn't support DynamicPV -- skipping

      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:142
------------------------------
... skipping 238 lines ...
[sig-storage] In-tree Volumes
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  [Driver: local][LocalVolumeType: dir]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:66
    [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:91
      should fail if subpath directory is outside the volume [Slow] [BeforeEach]
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:216

      Driver local doesn't support DynamicPV -- skipping

      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:142
------------------------------
... skipping 197 lines ...
[sig-storage] In-tree Volumes
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  [Driver: azure]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:66
    [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:91
      should fail if subpath with backstepping is outside the volume [Slow] [BeforeEach]
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:254

      Driver azure doesn't support ntfs -- skipping

      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:147
------------------------------
... skipping 11 lines ...
[sig-storage] CSI Volumes
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  [Driver: csi-hostpath-v0]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/csi_volumes.go:58
    [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:91
      should fail if subpath with backstepping is outside the volume [Slow] [BeforeEach]
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:254

      Driver csi-hostpath-v0 doesn't support ntfs -- skipping

      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:147
------------------------------
... skipping 466 lines ...
[sig-storage] In-tree Volumes
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  [Driver: azure]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:66
    [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:91
      should fail if subpath directory is outside the volume [Slow] [BeforeEach]
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:216

      Driver azure doesn't support ntfs -- skipping

      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:147
------------------------------
... skipping 1080 lines ...
Mar 29 15:04:39.678: INFO: Pod "client-envvars-e9f92c44-1803-4b93-86fc-f341e14813a0": Phase="Pending", Reason="", readiness=false. Elapsed: 4m49.578096447s
Mar 29 15:04:41.716: INFO: Pod "client-envvars-e9f92c44-1803-4b93-86fc-f341e14813a0": Phase="Pending", Reason="", readiness=false. Elapsed: 4m51.615895102s
Mar 29 15:04:43.754: INFO: Pod "client-envvars-e9f92c44-1803-4b93-86fc-f341e14813a0": Phase="Pending", Reason="", readiness=false. Elapsed: 4m53.654441098s
Mar 29 15:04:45.792: INFO: Pod "client-envvars-e9f92c44-1803-4b93-86fc-f341e14813a0": Phase="Pending", Reason="", readiness=false. Elapsed: 4m55.692122082s
Mar 29 15:04:47.830: INFO: Pod "client-envvars-e9f92c44-1803-4b93-86fc-f341e14813a0": Phase="Pending", Reason="", readiness=false. Elapsed: 4m57.729901997s
Mar 29 15:04:49.867: INFO: Pod "client-envvars-e9f92c44-1803-4b93-86fc-f341e14813a0": Phase="Pending", Reason="", readiness=false. Elapsed: 4m59.767710382s
Mar 29 15:04:51.949: INFO: Failed to get logs from node "e2e-56589a6d6a-b49e0-windows-node-group-cg7s" pod "client-envvars-e9f92c44-1803-4b93-86fc-f341e14813a0" container "env3cont": the server rejected our request for an unknown reason (get pods client-envvars-e9f92c44-1803-4b93-86fc-f341e14813a0)
STEP: delete the pod
Mar 29 15:04:51.992: INFO: Waiting for pod client-envvars-e9f92c44-1803-4b93-86fc-f341e14813a0 to disappear
Mar 29 15:04:52.031: INFO: Pod client-envvars-e9f92c44-1803-4b93-86fc-f341e14813a0 still exists
Mar 29 15:04:54.031: INFO: Waiting for pod client-envvars-e9f92c44-1803-4b93-86fc-f341e14813a0 to disappear
Mar 29 15:04:54.069: INFO: Pod client-envvars-e9f92c44-1803-4b93-86fc-f341e14813a0 still exists
Mar 29 15:04:56.031: INFO: Waiting for pod client-envvars-e9f92c44-1803-4b93-86fc-f341e14813a0 to disappear
... skipping 185 lines ...
Mar 29 15:07:52.146: INFO: At 2020-03-29 14:59:44 +0000 UTC - event for server-envvars-86abcabc-370d-4538-b14f-f1513948b18c: {kubelet e2e-56589a6d6a-b49e0-windows-node-group-cg7s} Created: Created container srv
Mar 29 15:07:52.146: INFO: At 2020-03-29 14:59:46 +0000 UTC - event for server-envvars-86abcabc-370d-4538-b14f-f1513948b18c: {kubelet e2e-56589a6d6a-b49e0-windows-node-group-cg7s} Started: Started container srv
Mar 29 15:07:52.146: INFO: At 2020-03-29 14:59:50 +0000 UTC - event for client-envvars-e9f92c44-1803-4b93-86fc-f341e14813a0: {default-scheduler } Scheduled: Successfully assigned pods-9633/client-envvars-e9f92c44-1803-4b93-86fc-f341e14813a0 to e2e-56589a6d6a-b49e0-windows-node-group-cg7s
Mar 29 15:07:52.146: INFO: At 2020-03-29 14:59:54 +0000 UTC - event for client-envvars-e9f92c44-1803-4b93-86fc-f341e14813a0: {kubelet e2e-56589a6d6a-b49e0-windows-node-group-cg7s} Pulled: Container image "e2eteam/busybox:1.29" already present on machine
Mar 29 15:07:52.146: INFO: At 2020-03-29 14:59:54 +0000 UTC - event for client-envvars-e9f92c44-1803-4b93-86fc-f341e14813a0: {kubelet e2e-56589a6d6a-b49e0-windows-node-group-cg7s} Created: Created container env3cont
Mar 29 15:07:52.146: INFO: At 2020-03-29 14:59:57 +0000 UTC - event for client-envvars-e9f92c44-1803-4b93-86fc-f341e14813a0: {kubelet e2e-56589a6d6a-b49e0-windows-node-group-cg7s} Started: Started container env3cont
Mar 29 15:07:52.146: INFO: At 2020-03-29 15:01:58 +0000 UTC - event for client-envvars-e9f92c44-1803-4b93-86fc-f341e14813a0: {kubelet e2e-56589a6d6a-b49e0-windows-node-group-cg7s} FailedSync: error determining status: rpc error: code = DeadlineExceeded desc = context deadline exceeded
Mar 29 15:07:52.146: INFO: At 2020-03-29 15:06:09 +0000 UTC - event for client-envvars-e9f92c44-1803-4b93-86fc-f341e14813a0: {taint-controller } TaintManagerEviction: Cancelling deletion of Pod pods-9633/client-envvars-e9f92c44-1803-4b93-86fc-f341e14813a0
Mar 29 15:07:52.146: INFO: At 2020-03-29 15:06:09 +0000 UTC - event for server-envvars-86abcabc-370d-4538-b14f-f1513948b18c: {taint-controller } TaintManagerEviction: Cancelling deletion of Pod pods-9633/server-envvars-86abcabc-370d-4538-b14f-f1513948b18c
Mar 29 15:07:52.183: INFO: POD                                                  NODE                                          PHASE    GRACE  CONDITIONS
Mar 29 15:07:52.183: INFO: client-envvars-e9f92c44-1803-4b93-86fc-f341e14813a0  e2e-56589a6d6a-b49e0-windows-node-group-cg7s  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-29 14:59:50 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-03-29 14:59:50 +0000 UTC ContainersNotReady containers with unready status: [env3cont]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-03-29 14:59:50 +0000 UTC ContainersNotReady containers with unready status: [env3cont]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-29 14:59:50 +0000 UTC  }]
Mar 29 15:07:52.183: INFO: server-envvars-86abcabc-370d-4538-b14f-f1513948b18c  e2e-56589a6d6a-b49e0-windows-node-group-cg7s  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-03-29 14:59:41 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-03-29 14:59:48 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-03-29 14:59:48 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-03-29 14:59:41 +0000 UTC  }]
Mar 29 15:07:52.183: INFO: 
Mar 29 15:07:52.259: INFO: 
Logging node info for node e2e-56589a6d6a-b49e0-master
Mar 29 15:07:52.297: INFO: Node Info: &Node{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-56589a6d6a-b49e0-master,GenerateName:,Namespace:,SelfLink:/api/v1/nodes/e2e-56589a6d6a-b49e0-master,UID:d5ab5bb0-a768-4e53-9eb4-bf2e89b3d053,ResourceVersion:19270,Generation:0,CreationTimestamp:2020-03-29 13:27:34 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{beta.kubernetes.io/arch: amd64,beta.kubernetes.io/instance-type: n1-standard-1,beta.kubernetes.io/metadata-proxy-ready: true,beta.kubernetes.io/os: linux,cloud.google.com/metadata-proxy-ready: true,failure-domain.beta.kubernetes.io/region: us-west1,failure-domain.beta.kubernetes.io/zone: us-west1-b,kubernetes.io/arch: amd64,kubernetes.io/hostname: e2e-56589a6d6a-b49e0-master,kubernetes.io/os: linux,},Annotations:map[string]string{node.alpha.kubernetes.io/ttl: 0,volumes.kubernetes.io/controller-managed-attach-detach: true,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:NodeSpec{PodCIDR:10.64.0.0/24,DoNotUse_ExternalID:,ProviderID:gce://k8s-jkns-gci-gce-1-3/us-west1-b/e2e-56589a6d6a-b49e0-master,Unschedulable:true,Taints:[{node-under-test false NoSchedule <nil>} {node-role.kubernetes.io/master  NoSchedule <nil>} {node.kubernetes.io/unschedulable  NoSchedule <nil>}],ConfigSource:nil,},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{16684785664 0} {<nil>}  BinarySI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3878416384 0} {<nil>} 3787516Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{15016307073 0} {<nil>} 15016307073 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3616272384 0} {<nil>} 3531516Ki BinarySI},pods: {{110 0} {<nil>} 110 
DecimalSI},},Phase:,Conditions:[{NetworkUnavailable False 2020-03-29 13:27:36 +0000 UTC 2020-03-29 13:27:36 +0000 UTC RouteCreated NodeController create implicit route} {MemoryPressure False 2020-03-29 15:07:32 +0000 UTC 2020-03-29 13:27:34 +0000 UTC KubeletHasSufficientMemory kubelet has sufficient memory available} {DiskPressure False 2020-03-29 15:07:32 +0000 UTC 2020-03-29 13:27:34 +0000 UTC KubeletHasNoDiskPressure kubelet has no disk pressure} {PIDPressure False 2020-03-29 15:07:32 +0000 UTC 2020-03-29 13:27:34 +0000 UTC KubeletHasSufficientPID kubelet has sufficient PID available} {Ready True 2020-03-29 15:07:32 +0000 UTC 2020-03-29 13:27:35 +0000 UTC KubeletReady kubelet is posting ready status. AppArmor enabled}],Addresses:[{InternalIP 10.40.0.2} {ExternalIP 34.82.124.78} {InternalDNS e2e-56589a6d6a-b49e0-master.c.k8s-jkns-gci-gce-1-3.internal} {Hostname e2e-56589a6d6a-b49e0-master.c.k8s-jkns-gci-gce-1-3.internal}],DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:579325f333e650a5a1ff1e77ca2b682f,SystemUUID:579325F3-33E6-50A5-A1FF-1E77CA2B682F,BootID:92101c33-fc6e-4c8c-914c-9ded50c759c8,KernelVersion:4.14.94+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:docker://18.9.3,KubeletVersion:v1.15.12-beta.0.9+8de4013f5815f7,KubeProxyVersion:v1.15.12-beta.0.9+8de4013f5815f7,OperatingSystem:linux,Architecture:amd64,},Images:[{[k8s.gcr.io/etcd@sha256:02cd751eef4f7dcea7986e58d51903dab39baf4606f636b50891f30190abce2c k8s.gcr.io/etcd:3.3.10-1] 295923553} {[gcr.io/stackdriver-agents/stackdriver-logging-agent@sha256:6c8574a40816676cd908cfa89d16463002b56ca05fa76d0c912e116bc0ab867e gcr.io/stackdriver-agents/stackdriver-logging-agent:1.6.8] 264721247} {[k8s.gcr.io/kube-apiserver:v1.15.12-beta.0.9_8de4013f5815f7] 247635661} {[k8s.gcr.io/kube-controller-manager:v1.15.12-beta.0.9_8de4013f5815f7] 198439763} {[k8s.gcr.io/kube-scheduler:v1.15.12-beta.0.9_8de4013f5815f7] 95032786} 
{[k8s.gcr.io/kube-addon-manager@sha256:3e315022a842d782a28e729720f21091dde21f1efea28868d65ec595ad871616 k8s.gcr.io/kube-addon-manager:v9.0.2] 83076028} {[k8s.gcr.io/etcd-empty-dir-cleanup@sha256:13e18f320022be5cf7c1c38a6207d02c07d603b52c1dd47b7c69e9324bd3c641 k8s.gcr.io/etcd-empty-dir-cleanup:3.3.10.1] 73958468} {[k8s.gcr.io/ingress-gce-glbc-amd64@sha256:14f14351a03038b238232e60850a9cfa0dffbed0590321ef84216a432accc1ca k8s.gcr.io/ingress-gce-glbc-amd64:v1.2.3] 71797285} {[k8s.gcr.io/prometheus-to-sd@sha256:14666989f40bb7c896c3e775a93c6873e2b791d65bc65579f58a078b7f9a764e k8s.gcr.io/prometheus-to-sd:v0.5.0] 41861013} {[k8s.gcr.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a k8s.gcr.io/metadata-proxy:v0.1.12] 11337839} {[k8s.gcr.io/pause@sha256:f78411e19d84a252e53bff71a4407a5686c46983a2c2eeed83929b888179acea k8s.gcr.io/pause:3.1] 742472}],VolumesInUse:[],VolumesAttached:[],Config:&NodeConfigStatus{Assigned:nil,Active:nil,LastKnownGood:nil,Error:,},},}
Mar 29 15:07:52.297: INFO: 
Logging kubelet events for node e2e-56589a6d6a-b49e0-master
Mar 29 15:07:52.335: INFO: 
Logging pods the kubelet thinks are on node e2e-56589a6d6a-b49e0-master
Mar 29 15:07:52.389: INFO: etcd-server-e2e-56589a6d6a-b49e0-master started at 2020-03-29 13:26:48 +0000 UTC (0+1 container statuses recorded)
Mar 29 15:07:52.389: INFO: 	Container etcd-container ready: true, restart count 0
... skipping 18 lines ...
Mar 29 15:07:52.389: INFO: 	Container fluentd-gcp ready: true, restart count 0
Mar 29 15:07:52.389: INFO: 	Container prometheus-to-sd-exporter ready: true, restart count 0
Mar 29 15:07:52.527: INFO: 
Latency metrics for node e2e-56589a6d6a-b49e0-master
Mar 29 15:07:52.527: INFO: 
Logging node info for node e2e-56589a6d6a-b49e0-minion-group-gpdt
Mar 29 15:07:52.565: INFO: Node Info: &Node{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-56589a6d6a-b49e0-minion-group-gpdt,GenerateName:,Namespace:,SelfLink:/api/v1/nodes/e2e-56589a6d6a-b49e0-minion-group-gpdt,UID:4336d0f3-4a2a-4987-9730-dae0e5f5ed7f,ResourceVersion:19218,Generation:0,CreationTimestamp:2020-03-29 13:27:34 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{beta.kubernetes.io/arch: amd64,beta.kubernetes.io/instance-type: n1-standard-2,beta.kubernetes.io/metadata-proxy-ready: true,beta.kubernetes.io/os: linux,cloud.google.com/metadata-proxy-ready: true,failure-domain.beta.kubernetes.io/region: us-west1,failure-domain.beta.kubernetes.io/zone: us-west1-b,kubernetes.io/arch: amd64,kubernetes.io/hostname: e2e-56589a6d6a-b49e0-minion-group-gpdt,kubernetes.io/os: linux,},Annotations:map[string]string{node.alpha.kubernetes.io/ttl: 0,volumes.kubernetes.io/controller-managed-attach-detach: true,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:NodeSpec{PodCIDR:10.64.5.0/24,DoNotUse_ExternalID:,ProviderID:gce://k8s-jkns-gci-gce-1-3/us-west1-b/e2e-56589a6d6a-b49e0-minion-group-gpdt,Unschedulable:false,Taints:[{node-under-test false NoSchedule <nil>}],ConfigSource:nil,},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101241290752 0} {<nil>}  BinarySI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7841861632 0} {<nil>} 7658068Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91117161526 0} {<nil>} 91117161526 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7579717632 0} {<nil>} 7402068Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[{FrequentKubeletRestart False 2020-03-29 
15:07:17 +0000 UTC 2020-03-29 13:32:23 +0000 UTC FrequentKubeletRestart kubelet is functioning properly} {FrequentDockerRestart False 2020-03-29 15:07:17 +0000 UTC 2020-03-29 13:32:24 +0000 UTC FrequentDockerRestart docker is functioning properly} {FrequentContainerdRestart False 2020-03-29 15:07:17 +0000 UTC 2020-03-29 13:32:26 +0000 UTC FrequentContainerdRestart containerd is functioning properly} {CorruptDockerOverlay2 False 2020-03-29 15:07:17 +0000 UTC 2020-03-29 13:32:23 +0000 UTC CorruptDockerOverlay2 docker overlay2 is functioning properly} {KernelDeadlock False 2020-03-29 15:07:17 +0000 UTC 2020-03-29 13:27:22 +0000 UTC KernelHasNoDeadlock kernel has no deadlock} {ReadonlyFilesystem False 2020-03-29 15:07:17 +0000 UTC 2020-03-29 13:27:22 +0000 UTC FilesystemIsNotReadOnly Filesystem is not read-only} {FrequentUnregisterNetDevice False 2020-03-29 15:07:17 +0000 UTC 2020-03-29 13:32:24 +0000 UTC UnregisterNetDevice node is functioning properly} {NetworkUnavailable False 2020-03-29 13:27:36 +0000 UTC 2020-03-29 13:27:36 +0000 UTC RouteCreated NodeController create implicit route} {MemoryPressure False 2020-03-29 15:07:12 +0000 UTC 2020-03-29 13:27:34 +0000 UTC KubeletHasSufficientMemory kubelet has sufficient memory available} {DiskPressure False 2020-03-29 15:07:12 +0000 UTC 2020-03-29 13:27:34 +0000 UTC KubeletHasNoDiskPressure kubelet has no disk pressure} {PIDPressure False 2020-03-29 15:07:12 +0000 UTC 2020-03-29 13:27:34 +0000 UTC KubeletHasSufficientPID kubelet has sufficient PID available} {Ready True 2020-03-29 15:07:12 +0000 UTC 2020-03-29 13:27:35 +0000 UTC KubeletReady kubelet is posting ready status. 
AppArmor enabled}],Addresses:[{InternalIP 10.40.0.7} {ExternalIP 35.199.151.49} {InternalDNS e2e-56589a6d6a-b49e0-minion-group-gpdt.c.k8s-jkns-gci-gce-1-3.internal} {Hostname e2e-56589a6d6a-b49e0-minion-group-gpdt.c.k8s-jkns-gci-gce-1-3.internal}],DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:a2b8e2cf3123f7be41c73b0b8063bda3,SystemUUID:A2B8E2CF-3123-F7BE-41C7-3B0B8063BDA3,BootID:d500ec22-7c72-4e95-b3a7-bea66228a5a0,KernelVersion:4.14.94+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:docker://18.9.3,KubeletVersion:v1.15.12-beta.0.9+8de4013f5815f7,KubeProxyVersion:v1.15.12-beta.0.9+8de4013f5815f7,OperatingSystem:linux,Architecture:amd64,},Images:[{[gcr.io/stackdriver-agents/stackdriver-logging-agent@sha256:6c8574a40816676cd908cfa89d16463002b56ca05fa76d0c912e116bc0ab867e gcr.io/stackdriver-agents/stackdriver-logging-agent:1.6.8] 264721247} {[k8s.gcr.io/kubernetes-dashboard-amd64@sha256:0ae6b69432e78069c5ce2bcde0fe409c5c4d6f0f4d9cd50a17974fea38898747 k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.1] 121711221} {[k8s.gcr.io/kube-proxy:v1.15.12-beta.0.9_8de4013f5815f7] 95529390} {[k8s.gcr.io/fluentd-gcp-scaler@sha256:4f28f10fb89506768910b858f7a18ffb996824a16d70d5ac895e49687df9ff58 k8s.gcr.io/fluentd-gcp-scaler:0.5.2] 90498960} {[k8s.gcr.io/heapster-amd64@sha256:9fae0af136ce0cf4f88393b3670f7139ffc464692060c374d2ae748e13144521 k8s.gcr.io/heapster-amd64:v1.6.0-beta.1] 76016169} {[k8s.gcr.io/cluster-proportional-autoscaler-amd64@sha256:0abeb6a79ad5aec10e920110446a97fb75180da8680094acb6715de62507f4b0 k8s.gcr.io/cluster-proportional-autoscaler-amd64:1.6.0] 47668785} {[k8s.gcr.io/event-exporter@sha256:06acf489ab092b4fb49273e426549a52c0fcd1dbcb67e03d5935b5ee1a899c3e k8s.gcr.io/event-exporter:v0.2.5] 47261019} {[k8s.gcr.io/prometheus-to-sd@sha256:14666989f40bb7c896c3e775a93c6873e2b791d65bc65579f58a078b7f9a764e k8s.gcr.io/prometheus-to-sd:v0.5.0] 41861013} 
{[k8s.gcr.io/metrics-server-amd64@sha256:4ca116565ff6a46e582bada50ba3550f95b368db1d2415829241a565a6c38e2a k8s.gcr.io/metrics-server-amd64:v0.3.3] 39933796} {[k8s.gcr.io/addon-resizer@sha256:8075ed6db9baad249d9cf2656c0ecaad8d87133baf20286b1953dfb3fb06e75d k8s.gcr.io/addon-resizer:1.8.5] 35110823} {[nginx@sha256:0fd68ec4b64b8dbb2bef1f1a5de9d47b658afd3635dc9c45bf0cbeac46e72101 nginx:1.15-alpine] 16087791} {[k8s.gcr.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a k8s.gcr.io/metadata-proxy:v0.1.12] 11337839} {[k8s.gcr.io/defaultbackend-amd64@sha256:4dc5e07c8ca4e23bddb3153737d7b8c556e5fb2f29c4558b7cd6e6df99c512c7 k8s.gcr.io/defaultbackend-amd64:1.5] 5132544} {[k8s.gcr.io/pause@sha256:f78411e19d84a252e53bff71a4407a5686c46983a2c2eeed83929b888179acea k8s.gcr.io/pause:3.1] 742472}],VolumesInUse:[],VolumesAttached:[],Config:&NodeConfigStatus{Assigned:nil,Active:nil,LastKnownGood:nil,Error:,},},}
Mar 29 15:07:52.565: INFO: 
Logging kubelet events for node e2e-56589a6d6a-b49e0-minion-group-gpdt
Mar 29 15:07:52.602: INFO: 
Logging pods the kubelet thinks are on node e2e-56589a6d6a-b49e0-minion-group-gpdt
Mar 29 15:07:52.649: INFO: fluentd-gcp-scaler-6848d689fb-zx4mz started at 2020-03-29 13:27:36 +0000 UTC (0+1 container statuses recorded)
Mar 29 15:07:52.649: INFO: 	Container fluentd-gcp-scaler ready: true, restart count 0
... skipping 15 lines ...
Mar 29 15:07:52.649: INFO: 	Container fluentd-gcp ready: true, restart count 0
Mar 29 15:07:52.649: INFO: 	Container prometheus-to-sd-exporter ready: true, restart count 0
Mar 29 15:07:52.791: INFO: 
Latency metrics for node e2e-56589a6d6a-b49e0-minion-group-gpdt
Mar 29 15:07:52.791: INFO: 
Logging node info for node e2e-56589a6d6a-b49e0-minion-group-r0j4
Mar 29 15:07:52.829: INFO: Node Info: &Node{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-56589a6d6a-b49e0-minion-group-r0j4,GenerateName:,Namespace:,SelfLink:/api/v1/nodes/e2e-56589a6d6a-b49e0-minion-group-r0j4,UID:ee79214f-2b5e-4675-a36f-58d66a766ae3,ResourceVersion:19246,Generation:0,CreationTimestamp:2020-03-29 13:27:38 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{beta.kubernetes.io/arch: amd64,beta.kubernetes.io/instance-type: n1-standard-2,beta.kubernetes.io/metadata-proxy-ready: true,beta.kubernetes.io/os: linux,cloud.google.com/metadata-proxy-ready: true,failure-domain.beta.kubernetes.io/region: us-west1,failure-domain.beta.kubernetes.io/zone: us-west1-b,kubernetes.io/arch: amd64,kubernetes.io/hostname: e2e-56589a6d6a-b49e0-minion-group-r0j4,kubernetes.io/os: linux,},Annotations:map[string]string{node.alpha.kubernetes.io/ttl: 0,volumes.kubernetes.io/controller-managed-attach-detach: true,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:NodeSpec{PodCIDR:10.64.4.0/24,DoNotUse_ExternalID:,ProviderID:gce://k8s-jkns-gci-gce-1-3/us-west1-b/e2e-56589a6d6a-b49e0-minion-group-r0j4,Unschedulable:false,Taints:[{node-under-test false NoSchedule <nil>}],ConfigSource:nil,},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101241290752 0} {<nil>}  BinarySI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7841861632 0} {<nil>} 7658068Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91117161526 0} {<nil>} 91117161526 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7579717632 0} {<nil>} 7402068Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[{FrequentDockerRestart False 2020-03-29 
15:07:25 +0000 UTC 2020-03-29 13:32:28 +0000 UTC FrequentDockerRestart docker is functioning properly} {FrequentContainerdRestart False 2020-03-29 15:07:25 +0000 UTC 2020-03-29 13:32:29 +0000 UTC FrequentContainerdRestart containerd is functioning properly} {FrequentUnregisterNetDevice False 2020-03-29 15:07:25 +0000 UTC 2020-03-29 13:32:27 +0000 UTC UnregisterNetDevice node is functioning properly} {CorruptDockerOverlay2 False 2020-03-29 15:07:25 +0000 UTC 2020-03-29 13:32:27 +0000 UTC CorruptDockerOverlay2 docker overlay2 is functioning properly} {KernelDeadlock False 2020-03-29 15:07:25 +0000 UTC 2020-03-29 13:27:26 +0000 UTC KernelHasNoDeadlock kernel has no deadlock} {ReadonlyFilesystem False 2020-03-29 15:07:25 +0000 UTC 2020-03-29 13:27:26 +0000 UTC FilesystemIsNotReadOnly Filesystem is not read-only} {FrequentKubeletRestart False 2020-03-29 15:07:25 +0000 UTC 2020-03-29 13:32:27 +0000 UTC FrequentKubeletRestart kubelet is functioning properly} {NetworkUnavailable False 2020-03-29 13:27:38 +0000 UTC 2020-03-29 13:27:38 +0000 UTC RouteCreated NodeController create implicit route} {MemoryPressure False 2020-03-29 15:07:16 +0000 UTC 2020-03-29 13:27:38 +0000 UTC KubeletHasSufficientMemory kubelet has sufficient memory available} {DiskPressure False 2020-03-29 15:07:16 +0000 UTC 2020-03-29 13:27:38 +0000 UTC KubeletHasNoDiskPressure kubelet has no disk pressure} {PIDPressure False 2020-03-29 15:07:16 +0000 UTC 2020-03-29 13:27:38 +0000 UTC KubeletHasSufficientPID kubelet has sufficient PID available} {Ready True 2020-03-29 15:07:16 +0000 UTC 2020-03-29 13:27:38 +0000 UTC KubeletReady kubelet is posting ready status. 
AppArmor enabled}],Addresses:[{InternalIP 10.40.0.6} {ExternalIP 34.82.133.227} {InternalDNS e2e-56589a6d6a-b49e0-minion-group-r0j4.c.k8s-jkns-gci-gce-1-3.internal} {Hostname e2e-56589a6d6a-b49e0-minion-group-r0j4.c.k8s-jkns-gci-gce-1-3.internal}],DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:f3e8990f7574f970185a10b5e4a640ce,SystemUUID:F3E8990F-7574-F970-185A-10B5E4A640CE,BootID:a00e4867-5ea0-49d4-800f-3cb47e454179,KernelVersion:4.14.94+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:docker://18.9.3,KubeletVersion:v1.15.12-beta.0.9+8de4013f5815f7,KubeProxyVersion:v1.15.12-beta.0.9+8de4013f5815f7,OperatingSystem:linux,Architecture:amd64,},Images:[{[gcr.io/stackdriver-agents/stackdriver-logging-agent@sha256:6c8574a40816676cd908cfa89d16463002b56ca05fa76d0c912e116bc0ab867e gcr.io/stackdriver-agents/stackdriver-logging-agent:1.6.8] 264721247} {[k8s.gcr.io/kube-proxy:v1.15.12-beta.0.9_8de4013f5815f7] 95529390} {[k8s.gcr.io/heapster-amd64@sha256:9fae0af136ce0cf4f88393b3670f7139ffc464692060c374d2ae748e13144521 k8s.gcr.io/heapster-amd64:v1.6.0-beta.1] 76016169} {[k8s.gcr.io/prometheus-to-sd@sha256:14666989f40bb7c896c3e775a93c6873e2b791d65bc65579f58a078b7f9a764e k8s.gcr.io/prometheus-to-sd:v0.5.0] 41861013} {[k8s.gcr.io/coredns@sha256:02382353821b12c21b062c59184e227e001079bb13ebd01f9d3270ba0fcbf1e4 k8s.gcr.io/coredns:1.3.1] 40303560} {[k8s.gcr.io/metrics-server-amd64@sha256:4ca116565ff6a46e582bada50ba3550f95b368db1d2415829241a565a6c38e2a k8s.gcr.io/metrics-server-amd64:v0.3.3] 39933796} {[k8s.gcr.io/addon-resizer@sha256:8075ed6db9baad249d9cf2656c0ecaad8d87133baf20286b1953dfb3fb06e75d k8s.gcr.io/addon-resizer:1.8.5] 35110823} {[k8s.gcr.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a k8s.gcr.io/metadata-proxy:v0.1.12] 11337839} {[k8s.gcr.io/pause@sha256:f78411e19d84a252e53bff71a4407a5686c46983a2c2eeed83929b888179acea k8s.gcr.io/pause:3.1] 
742472}],VolumesInUse:[],VolumesAttached:[],Config:&NodeConfigStatus{Assigned:nil,Active:nil,LastKnownGood:nil,Error:,},},}
Mar 29 15:07:52.829: INFO: 
Logging kubelet events for node e2e-56589a6d6a-b49e0-minion-group-r0j4
Mar 29 15:07:52.866: INFO: 
Logging pods the kubelet thinks are on node e2e-56589a6d6a-b49e0-minion-group-r0j4
Mar 29 15:07:52.912: INFO: coredns-557dcdc9f5-7t87f started at 2020-03-29 13:27:46 +0000 UTC (0+1 container statuses recorded)
Mar 29 15:07:52.912: INFO: 	Container coredns ready: true, restart count 0
... skipping 81 lines ...
[k8s.io] Pods
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should contain environment variables for services [NodeConformance] [Conformance] [It]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697

  wait for pod "client-envvars-e9f92c44-1803-4b93-86fc-f341e14813a0" to disappear
  Expected success, but got an error:
      <*errors.errorString | 0xc0002a18c0>: {
          s: "timed out waiting for the condition",
      }
      timed out waiting for the condition

  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/pods.go:178
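The "timed out waiting for the condition" failure above comes from the framework polling for the pod to disappear until a time budget is exhausted. The following is a minimal, self-contained sketch of that poll-until-gone pattern, not the actual e2e framework code; the `podLister` type, its poll counter, and all names here are hypothetical stand-ins for the real API-server lookup.

```go
package main

import "fmt"

// podLister stands in for the API server: the pod is reported as
// existing for a fixed number of polls, then as gone. (Hypothetical
// type for illustration only.)
type podLister struct{ pollsUntilGone int }

func (l *podLister) exists() bool {
	if l.pollsUntilGone == 0 {
		return false
	}
	l.pollsUntilGone--
	return true
}

// waitForDisappear polls until the pod is gone or the poll budget is
// exhausted, returning the same "timed out waiting for the condition"
// error text seen in the log when it gives up. The real framework
// sleeps between polls; the sleep is elided here for brevity.
func waitForDisappear(l *podLister, maxPolls int) error {
	for i := 0; i < maxPolls; i++ {
		if !l.exists() {
			fmt.Println("pod disappeared")
			return nil
		}
		fmt.Println("pod still exists")
	}
	return fmt.Errorf("timed out waiting for the condition")
}

func main() {
	// Succeeds: the pod disappears on the third poll.
	if err := waitForDisappear(&podLister{pollsUntilGone: 2}, 5); err != nil {
		fmt.Println(err)
	}
	// Fails as in the log: the pod outlives the poll budget.
	if err := waitForDisappear(&podLister{pollsUntilGone: 10}, 3); err != nil {
		fmt.Println(err)
	}
}
```

In the failed test above, the pod stayed Pending (its container never resynced after the `DeadlineExceeded` event), so every poll saw it still present and the budget ran out.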
... skipping 639 lines ...
Mar 29 15:18:04.540: INFO: Pod "nodeport-test-p7vb8": Phase="Running", Reason="", readiness=true. Elapsed: 8.205142055s
Mar 29 15:18:04.540: INFO: Pod "nodeport-test-p7vb8" satisfied condition "running and ready"
Mar 29 15:18:04.540: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [nodeport-test-p7vb8]
STEP: creating Windows testing Pod
STEP: checking connectivity Pod to curl http://104.198.12.65:31705
STEP: checking connectivity of windows-container in pod-991fb412-3eaf-4515-96dc-1c04eb8bd35b
Mar 29 15:18:10.707: INFO: ExecWithOptions {Command:[cmd /c curl.exe http://104.198.12.65:31705 --connect-timeout 10 --fail] Namespace:services-7726 PodName:pod-991fb412-3eaf-4515-96dc-1c04eb8bd35b ContainerName:windows-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Mar 29 15:18:10.708: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: checking connectivity of windows-container in pod-991fb412-3eaf-4515-96dc-1c04eb8bd35b
Mar 29 15:18:12.174: INFO: ExecWithOptions {Command:[cmd /c curl.exe http://104.198.12.65:31705 --connect-timeout 10 --fail] Namespace:services-7726 PodName:pod-991fb412-3eaf-4515-96dc-1c04eb8bd35b ContainerName:windows-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Mar 29 15:18:12.174: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: checking connectivity of windows-container in pod-991fb412-3eaf-4515-96dc-1c04eb8bd35b
Mar 29 15:18:13.516: INFO: ExecWithOptions {Command:[cmd /c curl.exe http://104.198.12.65:31705 --connect-timeout 10 --fail] Namespace:services-7726 PodName:pod-991fb412-3eaf-4515-96dc-1c04eb8bd35b ContainerName:windows-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Mar 29 15:18:13.516: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: checking connectivity of windows-container in pod-991fb412-3eaf-4515-96dc-1c04eb8bd35b
Mar 29 15:18:14.861: INFO: ExecWithOptions {Command:[cmd /c curl.exe http://104.198.12.65:31705 --connect-timeout 10 --fail] Namespace:services-7726 PodName:pod-991fb412-3eaf-4515-96dc-1c04eb8bd35b ContainerName:windows-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Mar 29 15:18:14.861: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: checking connectivity of windows-container in pod-991fb412-3eaf-4515-96dc-1c04eb8bd35b
Mar 29 15:18:16.224: INFO: ExecWithOptions {Command:[cmd /c curl.exe http://104.198.12.65:31705 --connect-timeout 10 --fail] Namespace:services-7726 PodName:pod-991fb412-3eaf-4515-96dc-1c04eb8bd35b ContainerName:windows-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Mar 29 15:18:16.224: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: checking connectivity of windows-container in pod-991fb412-3eaf-4515-96dc-1c04eb8bd35b
Mar 29 15:18:17.587: INFO: ExecWithOptions {Command:[cmd /c curl.exe http://104.198.12.65:31705 --connect-timeout 10 --fail] Namespace:services-7726 PodName:pod-991fb412-3eaf-4515-96dc-1c04eb8bd35b ContainerName:windows-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Mar 29 15:18:17.587: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: checking connectivity of windows-container in pod-991fb412-3eaf-4515-96dc-1c04eb8bd35b
Mar 29 15:18:18.919: INFO: ExecWithOptions {Command:[cmd /c curl.exe http://104.198.12.65:31705 --connect-timeout 10 --fail] Namespace:services-7726 PodName:pod-991fb412-3eaf-4515-96dc-1c04eb8bd35b ContainerName:windows-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Mar 29 15:18:18.919: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: checking connectivity of windows-container in pod-991fb412-3eaf-4515-96dc-1c04eb8bd35b
Mar 29 15:18:20.261: INFO: ExecWithOptions {Command:[cmd /c curl.exe http://104.198.12.65:31705 --connect-timeout 10 --fail] Namespace:services-7726 PodName:pod-991fb412-3eaf-4515-96dc-1c04eb8bd35b ContainerName:windows-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Mar 29 15:18:20.261: INFO: >>> kubeConfig: /workspace/.kube/config
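The repeated `ExecWithOptions` lines above are the [sig-windows] Services test re-running `curl.exe` inside the Windows pod until the NodePort answers. Below is a minimal sketch of that retry loop, assuming a fake in-pod exec; `fakeExec`, its failure counter, and the function names are hypothetical, not the framework's real API.

```go
package main

import "fmt"

// fakeExec simulates running `curl.exe` inside the pod: it fails for
// the first few attempts (e.g. while kube-proxy rules converge), then
// succeeds. (Hypothetical type for illustration only.)
type fakeExec struct{ failuresLeft int }

func (e *fakeExec) curl(url string) error {
	if e.failuresLeft > 0 {
		e.failuresLeft--
		return fmt.Errorf("curl: connection timed out")
	}
	return nil
}

// checkConnectivity retries the in-pod curl up to maxTries times,
// mirroring the repeated "checking connectivity" steps in the log.
func checkConnectivity(e *fakeExec, url string, maxTries int) bool {
	for i := 1; i <= maxTries; i++ {
		fmt.Printf("attempt %d: curl %s\n", i, url)
		if err := e.curl(url); err == nil {
			fmt.Println("connectivity OK")
			return true
		}
	}
	return false
}

func main() {
	e := &fakeExec{failuresLeft: 2}
	checkConnectivity(e, "http://104.198.12.65:31705", 5)
}
```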
[AfterEach] [sig-windows] Services
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Mar 29 15:18:20.708: INFO: Waiting up to 3m0s for all (but 3) nodes to be ready
STEP: Destroying namespace "services-7726" for this suite.
Mar 29 15:19:26.874: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
... skipping 65 lines ...
Mar 29 15:19:07.484: INFO: Pod exec-volume-test-gcepd-preprovisionedpv-h8bh no longer exists
STEP: Deleting pod exec-volume-test-gcepd-preprovisionedpv-h8bh
Mar 29 15:19:07.484: INFO: Deleting pod "exec-volume-test-gcepd-preprovisionedpv-h8bh" in namespace "volume-5116"
STEP: Deleting pv and pvc
Mar 29 15:19:07.522: INFO: Deleting PersistentVolumeClaim "pvc-s9llt"
Mar 29 15:19:07.571: INFO: Deleting PersistentVolume "gcepd-94mpm"
Mar 29 15:19:08.821: INFO: error deleting PD "e2e-56589a6d6a-b49e0-ad102781-39e5-4bd4-b285-92a123d03709": googleapi: Error 400: The disk resource 'projects/k8s-jkns-gci-gce-1-3/zones/us-west1-b/disks/e2e-56589a6d6a-b49e0-ad102781-39e5-4bd4-b285-92a123d03709' is already being used by 'projects/k8s-jkns-gci-gce-1-3/zones/us-west1-b/instances/e2e-56589a6d6a-b49e0-windows-node-group-159w', resourceInUseByAnotherResource
Mar 29 15:19:08.821: INFO: Couldn't delete PD "e2e-56589a6d6a-b49e0-ad102781-39e5-4bd4-b285-92a123d03709", sleeping 5s: googleapi: Error 400: The disk resource 'projects/k8s-jkns-gci-gce-1-3/zones/us-west1-b/disks/e2e-56589a6d6a-b49e0-ad102781-39e5-4bd4-b285-92a123d03709' is already being used by 'projects/k8s-jkns-gci-gce-1-3/zones/us-west1-b/instances/e2e-56589a6d6a-b49e0-windows-node-group-159w', resourceInUseByAnotherResource
Mar 29 15:19:15.087: INFO: error deleting PD "e2e-56589a6d6a-b49e0-ad102781-39e5-4bd4-b285-92a123d03709": googleapi: Error 400: The disk resource 'projects/k8s-jkns-gci-gce-1-3/zones/us-west1-b/disks/e2e-56589a6d6a-b49e0-ad102781-39e5-4bd4-b285-92a123d03709' is already being used by 'projects/k8s-jkns-gci-gce-1-3/zones/us-west1-b/instances/e2e-56589a6d6a-b49e0-windows-node-group-159w', resourceInUseByAnotherResource
Mar 29 15:19:15.087: INFO: Couldn't delete PD "e2e-56589a6d6a-b49e0-ad102781-39e5-4bd4-b285-92a123d03709", sleeping 5s: googleapi: Error 400: The disk resource 'projects/k8s-jkns-gci-gce-1-3/zones/us-west1-b/disks/e2e-56589a6d6a-b49e0-ad102781-39e5-4bd4-b285-92a123d03709' is already being used by 'projects/k8s-jkns-gci-gce-1-3/zones/us-west1-b/instances/e2e-56589a6d6a-b49e0-windows-node-group-159w', resourceInUseByAnotherResource
Mar 29 15:19:22.243: INFO: Successfully deleted PD "e2e-56589a6d6a-b49e0-ad102781-39e5-4bd4-b285-92a123d03709".
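[editor's note] The lines above show the teardown code deleting a GCE PD, sleeping 5s, and retrying while the API returns a 400 `resourceInUseByAnotherResource` error (the disk was still attached to a Windows node). A minimal sketch of that retry-until-detached pattern follows; `delete_fn` stands in for the real Compute API call and the names here are illustrative, not the e2e framework's actual helpers:

```python
import time

def delete_pd_with_retry(delete_fn, max_wait_s=120, sleep_s=5):
    """Call delete_fn() until it succeeds, backing off while the cloud
    provider still reports the disk as attached. Returns True on success,
    False if the disk never detached within max_wait_s."""
    deadline = time.monotonic() + max_wait_s
    while True:
        try:
            delete_fn()
            return True
        except RuntimeError as err:
            # Only retry the "still attached" case; re-raise anything else.
            if "resourceInUseByAnotherResource" not in str(err):
                raise
            if time.monotonic() >= deadline:
                return False
            time.sleep(sleep_s)
```

In the log above, the delete succeeded on the third attempt, about 14 seconds after the first error.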
Mar 29 15:19:22.243: INFO: In-tree plugin kubernetes.io/gce-pd is not migrated, not validating any metrics
[AfterEach] [Testpattern: Pre-provisioned PV (ntfs)][sig-windows] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Mar 29 15:19:22.243: INFO: Waiting up to 3m0s for all (but 3) nodes to be ready
Mar 29 15:19:22.282: INFO: Condition Ready of node e2e-56589a6d6a-b49e0-windows-node-group-cg7s is false, but Node is tainted by NodeController with [{node.kubernetes.io/not-ready  NoSchedule 2020-03-29 15:19:16 +0000 UTC} {node.kubernetes.io/not-ready  NoExecute 2020-03-29 15:19:19 +0000 UTC}]. Failure
... skipping 57 lines ...
[sig-storage] In-tree Volumes
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  [Driver: hostPathSymlink]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:66
    [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:91
      should fail if subpath with backstepping is outside the volume [Slow] [BeforeEach]
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:254

      Driver hostPathSymlink doesn't support DynamicPV -- skipping

      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:142
------------------------------
... skipping 254 lines ...
[sig-storage] In-tree Volumes
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  [Driver: local][LocalVolumeType: block]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:66
    [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:91
      should fail if subpath with backstepping is outside the volume [Slow] [BeforeEach]
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:254

      Driver local doesn't support DynamicPV -- skipping

      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:142
------------------------------
... skipping 186 lines ...
Mar 29 15:06:14.173: INFO: Waiting for StatefulSet statefulset-7859/ss2 to complete update
Mar 29 15:06:14.173: INFO: Waiting for Pod statefulset-7859/ss2-0 to have revision ss2-7cb5fc4875 update revision ss2-7ffc74449d
Mar 29 15:06:24.173: INFO: Waiting for StatefulSet statefulset-7859/ss2 to complete update
Mar 29 15:06:24.173: INFO: Waiting for Pod statefulset-7859/ss2-0 to have revision ss2-7cb5fc4875 update revision ss2-7ffc74449d
Mar 29 15:06:24.249: INFO: Waiting for StatefulSet statefulset-7859/ss2 to complete update
Mar 29 15:06:24.249: INFO: Waiting for Pod statefulset-7859/ss2-0 to have revision ss2-7cb5fc4875 update revision ss2-7ffc74449d
Mar 29 15:06:24.249: INFO: Failed waiting for state update: timed out waiting for the condition
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86
Mar 29 15:06:24.287: INFO: Running '/home/prow/go/src/k8s.io/windows-testing/kubernetes/platforms/linux/amd64/kubectl --server=https://34.82.124.78 --kubeconfig=/workspace/.kube/config describe po ss2-0 --namespace=statefulset-7859'
Mar 29 15:06:24.601: INFO: stderr: ""
Mar 29 15:06:24.601: INFO: stdout: "Name:                      ss2-0\nNamespace:                 statefulset-7859\nPriority:                  0\nNode:                      e2e-56589a6d6a-b49e0-windows-node-group-cg7s/10.40.0.3\nStart Time:                Sun, 29 Mar 2020 14:54:11 +0000\nLabels:                    baz=blah\n                           controller-revision-hash=ss2-7ffc74449d\n                           foo=bar\n                           statefulset.kubernetes.io/pod-name=ss2-0\nAnnotations:               <none>\nStatus:                    Terminating (lasts 5m37s)\nTermination Grace Period:  30s\nIP:                        10.64.1.45\nControlled By:             StatefulSet/ss2\nContainers:\n  nginx:\n    Container ID:   docker://bbc322b74ec6dfc4c0632de84ed389162e1da2bf44467b98e7c3755d637df3dc\n    Image:          e2eteam/nginx:1.14-alpine\n    Image ID:       docker-pullable://e2eteam/nginx@sha256:faa5de843eedbc74e17a0bbd618bc29fa306dc706ce230447aefea69d1aadb83\n    Port:           <none>\n    Host Port:      <none>\n    State:          Running\n      Started:      Sun, 29 Mar 2020 14:54:34 +0000\n    Ready:          False\n    Restart Count:  0\n    Readiness:      http-get http://:80/index.html delay=0s timeout=1s period=1s #success=1 #failure=1\n    Environment:    <none>\n    Mounts:\n      /var/run/secrets/kubernetes.io/serviceaccount from default-token-f787w (ro)\nConditions:\n  Type              Status\n  Initialized       True \n  Ready             False \n  ContainersReady   False \n  PodScheduled      True \nVolumes:\n  default-token-f787w:\n    Type:        Secret (a volume populated by a Secret)\n    SecretName:  default-token-f787w\n    Optional:    false\nQoS Class:       BestEffort\nNode-Selectors:  <none>\nTolerations:     node.kubernetes.io/not-ready:NoExecute for 300s\n                 node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n  Type     Reason     Age    From                                                   Message\n  ----     ------     ----   ----                                                   -------\n  Normal   Scheduled  12m    default-scheduler                                      Successfully assigned statefulset-7859/ss2-0 to e2e-56589a6d6a-b49e0-windows-node-group-cg7s\n  Normal   Pulled     11m    kubelet, e2e-56589a6d6a-b49e0-windows-node-group-cg7s  Container image \"e2eteam/nginx:1.14-alpine\" already present on machine\n  Normal   Created    11m    kubelet, e2e-56589a6d6a-b49e0-windows-node-group-cg7s  Created container nginx\n  Normal   Started    11m    kubelet, e2e-56589a6d6a-b49e0-windows-node-group-cg7s  Started container nginx\n  Warning  Unhealthy  6m51s  kubelet, e2e-56589a6d6a-b49e0-windows-node-group-cg7s  Readiness probe failed: Get http://10.64.1.45:80/index.html: net/http: request canceled (Client.Timeout exceeded while awaiting headers)\n  Normal   Killing    24s    kubelet, e2e-56589a6d6a-b49e0-windows-node-group-cg7s  Stopping container nginx\n"
Mar 29 15:06:24.601: INFO: 
Output of kubectl describe ss2-0:
Name:                      ss2-0
Namespace:                 statefulset-7859
Priority:                  0
Node:                      e2e-56589a6d6a-b49e0-windows-node-group-cg7s/10.40.0.3
... skipping 41 lines ...
  Type     Reason     Age    From                                                   Message
  ----     ------     ----   ----                                                   -------
  Normal   Scheduled  12m    default-scheduler                                      Successfully assigned statefulset-7859/ss2-0 to e2e-56589a6d6a-b49e0-windows-node-group-cg7s
  Normal   Pulled     11m    kubelet, e2e-56589a6d6a-b49e0-windows-node-group-cg7s  Container image "e2eteam/nginx:1.14-alpine" already present on machine
  Normal   Created    11m    kubelet, e2e-56589a6d6a-b49e0-windows-node-group-cg7s  Created container nginx
  Normal   Started    11m    kubelet, e2e-56589a6d6a-b49e0-windows-node-group-cg7s  Started container nginx
  Warning  Unhealthy  6m51s  kubelet, e2e-56589a6d6a-b49e0-windows-node-group-cg7s  Readiness probe failed: Get http://10.64.1.45:80/index.html: net/http: request canceled (Client.Timeout exceeded while awaiting headers)
  Normal   Killing    24s    kubelet, e2e-56589a6d6a-b49e0-windows-node-group-cg7s  Stopping container nginx

Mar 29 15:06:24.601: INFO: Running '/home/prow/go/src/k8s.io/windows-testing/kubernetes/platforms/linux/amd64/kubectl --server=https://34.82.124.78 --kubeconfig=/workspace/.kube/config logs ss2-0 --namespace=statefulset-7859 --tail=100'
Mar 29 15:06:24.887: INFO: stderr: ""
Mar 29 15:06:24.887: INFO: stdout: ""
Mar 29 15:06:24.887: INFO: 
Last 100 log lines of ss2-0:

Mar 29 15:06:24.887: INFO: Running '/home/prow/go/src/k8s.io/windows-testing/kubernetes/platforms/linux/amd64/kubectl --server=https://34.82.124.78 --kubeconfig=/workspace/.kube/config describe po ss2-1 --namespace=statefulset-7859'
Mar 29 15:06:25.197: INFO: stderr: ""
Mar 29 15:06:25.197: INFO: stdout: "Name:           ss2-1\nNamespace:      statefulset-7859\nPriority:       0\nNode:           e2e-56589a6d6a-b49e0-windows-node-group-h3pd/10.40.0.5\nStart Time:     Sun, 29 Mar 2020 14:58:16 +0000\nLabels:         baz=blah\n                controller-revision-hash=ss2-7cb5fc4875\n                foo=bar\n                statefulset.kubernetes.io/pod-name=ss2-1\nAnnotations:    <none>\nStatus:         Running\nIP:             10.64.3.11\nControlled By:  StatefulSet/ss2\nContainers:\n  nginx:\n    Container ID:   docker://564df00cfcb49b591f5af11cd326abea978b44ce8cd55a708c8023ada3804c1a\n    Image:          e2eteam/nginx:1.15-alpine\n    Image ID:       docker-pullable://e2eteam/nginx@sha256:78bf993b646d2f664053ef99dd837afa917a9aab1cc48979c3927ede59dcd9b7\n    Port:           <none>\n    Host Port:      <none>\n    State:          Running\n      Started:      Sun, 29 Mar 2020 14:59:39 +0000\n    Ready:          True\n    Restart Count:  0\n    Readiness:      http-get http://:80/index.html delay=0s timeout=1s period=1s #success=1 #failure=1\n    Environment:    <none>\n    Mounts:\n      /var/run/secrets/kubernetes.io/serviceaccount from default-token-f787w (ro)\nConditions:\n  Type              Status\n  Initialized       True \n  Ready             True \n  ContainersReady   True \n  PodScheduled      True \nVolumes:\n  default-token-f787w:\n    Type:        Secret (a volume populated by a Secret)\n    SecretName:  default-token-f787w\n    Optional:    false\nQoS Class:       BestEffort\nNode-Selectors:  <none>\nTolerations:     node.kubernetes.io/not-ready:NoExecute for 300s\n                 node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n  Type     Reason     Age    From                                                   Message\n  ----     ------     ----   ----                                                   -------\n  Normal   Scheduled  8m9s   default-scheduler                                      Successfully assigned statefulset-7859/ss2-1 to e2e-56589a6d6a-b49e0-windows-node-group-h3pd\n  Normal   Pulled     7m27s  kubelet, e2e-56589a6d6a-b49e0-windows-node-group-h3pd  Container image \"e2eteam/nginx:1.15-alpine\" already present on machine\n  Normal   Created    7m24s  kubelet, e2e-56589a6d6a-b49e0-windows-node-group-h3pd  Created container nginx\n  Normal   Started    6m44s  kubelet, e2e-56589a6d6a-b49e0-windows-node-group-h3pd  Started container nginx\n  Warning  Unhealthy  6m8s   kubelet, e2e-56589a6d6a-b49e0-windows-node-group-h3pd  Readiness probe failed: Get http://10.64.3.11:80/index.html: net/http: request canceled (Client.Timeout exceeded while awaiting headers)\n"
Mar 29 15:06:25.197: INFO: 
Output of kubectl describe ss2-1:
Name:           ss2-1
Namespace:      statefulset-7859
Priority:       0
Node:           e2e-56589a6d6a-b49e0-windows-node-group-h3pd/10.40.0.5
... skipping 40 lines ...
  Type     Reason     Age    From                                                   Message
  ----     ------     ----   ----                                                   -------
  Normal   Scheduled  8m9s   default-scheduler                                      Successfully assigned statefulset-7859/ss2-1 to e2e-56589a6d6a-b49e0-windows-node-group-h3pd
  Normal   Pulled     7m27s  kubelet, e2e-56589a6d6a-b49e0-windows-node-group-h3pd  Container image "e2eteam/nginx:1.15-alpine" already present on machine
  Normal   Created    7m24s  kubelet, e2e-56589a6d6a-b49e0-windows-node-group-h3pd  Created container nginx
  Normal   Started    6m44s  kubelet, e2e-56589a6d6a-b49e0-windows-node-group-h3pd  Started container nginx
  Warning  Unhealthy  6m8s   kubelet, e2e-56589a6d6a-b49e0-windows-node-group-h3pd  Readiness probe failed: Get http://10.64.3.11:80/index.html: net/http: request canceled (Client.Timeout exceeded while awaiting headers)

Mar 29 15:06:25.197: INFO: Running '/home/prow/go/src/k8s.io/windows-testing/kubernetes/platforms/linux/amd64/kubectl --server=https://34.82.124.78 --kubeconfig=/workspace/.kube/config logs ss2-1 --namespace=statefulset-7859 --tail=100'
Mar 29 15:06:25.483: INFO: stderr: ""
Mar 29 15:06:25.483: INFO: stdout: ""
Mar 29 15:06:25.483: INFO: 
Last 100 log lines of ss2-1:

Mar 29 15:06:25.483: INFO: Running '/home/prow/go/src/k8s.io/windows-testing/kubernetes/platforms/linux/amd64/kubectl --server=https://34.82.124.78 --kubeconfig=/workspace/.kube/config describe po ss2-2 --namespace=statefulset-7859'
Mar 29 15:06:25.792: INFO: stderr: ""
Mar 29 15:06:25.792: INFO: stdout: "Name:           ss2-2\nNamespace:      statefulset-7859\nPriority:       0\nNode:           e2e-56589a6d6a-b49e0-windows-node-group-cg7s/10.40.0.3\nStart Time:     Sun, 29 Mar 2020 14:57:15 +0000\nLabels:         baz=blah\n                controller-revision-hash=ss2-7cb5fc4875\n                foo=bar\n                statefulset.kubernetes.io/pod-name=ss2-2\nAnnotations:    <none>\nStatus:         Running\nIP:             10.64.1.59\nControlled By:  StatefulSet/ss2\nContainers:\n  nginx:\n    Container ID:   docker://06848154e34d007cb1994ce9cc4af95d66c6bea27981d6efd9e4002a416ee121\n    Image:          e2eteam/nginx:1.15-alpine\n    Image ID:       docker-pullable://e2eteam/nginx@sha256:78bf993b646d2f664053ef99dd837afa917a9aab1cc48979c3927ede59dcd9b7\n    Port:           <none>\n    Host Port:      <none>\n    State:          Running\n      Started:      Sun, 29 Mar 2020 14:57:34 +0000\n    Ready:          True\n    Restart Count:  0\n    Readiness:      http-get http://:80/index.html delay=0s timeout=1s period=1s #success=1 #failure=1\n    Environment:    <none>\n    Mounts:\n      /var/run/secrets/kubernetes.io/serviceaccount from default-token-f787w (ro)\nConditions:\n  Type              Status\n  Initialized       True \n  Ready             True \n  ContainersReady   True \n  PodScheduled      True \nVolumes:\n  default-token-f787w:\n    Type:        Secret (a volume populated by a Secret)\n    SecretName:  default-token-f787w\n    Optional:    false\nQoS Class:       BestEffort\nNode-Selectors:  <none>\nTolerations:     node.kubernetes.io/not-ready:NoExecute for 300s\n                 node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n  Type     Reason     Age    From                                                   Message\n  ----     ------     ----   ----                                                   -------\n  Normal   Scheduled  9m10s  default-scheduler                                      Successfully assigned statefulset-7859/ss2-2 to e2e-56589a6d6a-b49e0-windows-node-group-cg7s\n  Normal   Pulled     8m56s  kubelet, e2e-56589a6d6a-b49e0-windows-node-group-cg7s  Container image \"e2eteam/nginx:1.15-alpine\" already present on machine\n  Normal   Created    8m56s  kubelet, e2e-56589a6d6a-b49e0-windows-node-group-cg7s  Created container nginx\n  Normal   Started    8m51s  kubelet, e2e-56589a6d6a-b49e0-windows-node-group-cg7s  Started container nginx\n  Warning  Unhealthy  6m51s  kubelet, e2e-56589a6d6a-b49e0-windows-node-group-cg7s  Readiness probe failed: Get http://10.64.1.59:80/index.html: net/http: request canceled (Client.Timeout exceeded while awaiting headers)\n"
Mar 29 15:06:25.792: INFO: 
Output of kubectl describe ss2-2:
Name:           ss2-2
Namespace:      statefulset-7859
Priority:       0
Node:           e2e-56589a6d6a-b49e0-windows-node-group-cg7s/10.40.0.3
... skipping 40 lines ...
  Type     Reason     Age    From                                                   Message
  ----     ------     ----   ----                                                   -------
  Normal   Scheduled  9m10s  default-scheduler                                      Successfully assigned statefulset-7859/ss2-2 to e2e-56589a6d6a-b49e0-windows-node-group-cg7s
  Normal   Pulled     8m56s  kubelet, e2e-56589a6d6a-b49e0-windows-node-group-cg7s  Container image "e2eteam/nginx:1.15-alpine" already present on machine
  Normal   Created    8m56s  kubelet, e2e-56589a6d6a-b49e0-windows-node-group-cg7s  Created container nginx
  Normal   Started    8m51s  kubelet, e2e-56589a6d6a-b49e0-windows-node-group-cg7s  Started container nginx
  Warning  Unhealthy  6m51s  kubelet, e2e-56589a6d6a-b49e0-windows-node-group-cg7s  Readiness probe failed: Get http://10.64.1.59:80/index.html: net/http: request canceled (Client.Timeout exceeded while awaiting headers)

Mar 29 15:06:25.792: INFO: Running '/home/prow/go/src/k8s.io/windows-testing/kubernetes/platforms/linux/amd64/kubectl --server=https://34.82.124.78 --kubeconfig=/workspace/.kube/config logs ss2-2 --namespace=statefulset-7859 --tail=100'
Mar 29 15:06:26.063: INFO: stderr: ""
Mar 29 15:06:26.063: INFO: stdout: ""
Mar 29 15:06:26.063: INFO: 
Last 100 log lines of ss2-2:
... skipping 19 lines ...
Mar 29 15:18:56.369: INFO: Waiting for stateful set status.replicas to become 0, currently 1
Mar 29 15:19:06.369: INFO: Waiting for stateful set status.replicas to become 0, currently 1
Mar 29 15:19:16.369: INFO: Waiting for stateful set status.replicas to become 0, currently 1
Mar 29 15:19:26.372: INFO: Waiting for stateful set status.replicas to become 0, currently 1
Mar 29 15:19:36.368: INFO: Waiting for stateful set status.replicas to become 0, currently 1
Mar 29 15:19:46.369: INFO: Deleting statefulset ss2
Mar 29 15:19:46.485: INFO: Unexpected error occurred: Failed to scale statefulset to 0 in 10m0s. Remaining pods:
[ss2-2: deletion 2020-03-29 15:06:56 +0000 UTC, phase Running, readiness false]
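[editor's note] The failure above comes from the framework polling `status.replicas` every 10s until it reaches 0 or a 10m0s timeout expires (here ss2-2 was stuck in Terminating, so the timeout fired). A minimal sketch of that poll-with-timeout loop, with `get_replicas` as a hypothetical stand-in for the StatefulSet status read:

```python
import time

def wait_for_replicas(get_replicas, want=0, timeout_s=600, interval_s=10):
    """Poll a replica count until it equals `want`; return True on
    success, False if the timeout elapses first."""
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        if get_replicas() == want:
            return True
        time.sleep(interval_s)
    return False
```

The e2e suite treats a False result here as a scale-down failure and then dumps events and node info, as seen below.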
[AfterEach] [sig-apps] StatefulSet
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Collecting events from namespace "statefulset-7859".
STEP: Found 37 events.
Mar 29 15:19:46.524: INFO: At 2020-03-29 14:54:11 +0000 UTC - event for ss2: {statefulset-controller } SuccessfulCreate: create Pod ss2-0 in StatefulSet ss2 successful
... skipping 8 lines ...
Mar 29 15:19:46.525: INFO: At 2020-03-29 14:55:19 +0000 UTC - event for ss2-1: {kubelet e2e-56589a6d6a-b49e0-windows-node-group-cg7s} Started: Started container nginx
Mar 29 15:19:46.525: INFO: At 2020-03-29 14:55:22 +0000 UTC - event for ss2: {statefulset-controller } SuccessfulCreate: create Pod ss2-2 in StatefulSet ss2 successful
Mar 29 15:19:46.525: INFO: At 2020-03-29 14:55:22 +0000 UTC - event for ss2-2: {default-scheduler } Scheduled: Successfully assigned statefulset-7859/ss2-2 to e2e-56589a6d6a-b49e0-windows-node-group-cg7s
Mar 29 15:19:46.525: INFO: At 2020-03-29 14:55:41 +0000 UTC - event for ss2-2: {kubelet e2e-56589a6d6a-b49e0-windows-node-group-cg7s} Created: Created container nginx
Mar 29 15:19:46.525: INFO: At 2020-03-29 14:55:41 +0000 UTC - event for ss2-2: {kubelet e2e-56589a6d6a-b49e0-windows-node-group-cg7s} Pulled: Container image "e2eteam/nginx:1.14-alpine" already present on machine
Mar 29 15:19:46.525: INFO: At 2020-03-29 14:55:45 +0000 UTC - event for ss2-2: {kubelet e2e-56589a6d6a-b49e0-windows-node-group-cg7s} Started: Started container nginx
Mar 29 15:19:46.525: INFO: At 2020-03-29 14:56:02 +0000 UTC - event for ss2-1: {kubelet e2e-56589a6d6a-b49e0-windows-node-group-cg7s} Unhealthy: Readiness probe failed: HTTP probe failed with statuscode: 404
Mar 29 15:19:46.525: INFO: At 2020-03-29 14:56:13 +0000 UTC - event for ss2-2: {kubelet e2e-56589a6d6a-b49e0-windows-node-group-cg7s} Killing: Stopping container nginx
Mar 29 15:19:46.525: INFO: At 2020-03-29 14:56:14 +0000 UTC - event for ss2: {statefulset-controller } SuccessfulDelete: delete Pod ss2-2 in StatefulSet ss2 successful
Mar 29 15:19:46.525: INFO: At 2020-03-29 14:57:15 +0000 UTC - event for ss2-2: {default-scheduler } Scheduled: Successfully assigned statefulset-7859/ss2-2 to e2e-56589a6d6a-b49e0-windows-node-group-cg7s
Mar 29 15:19:46.525: INFO: At 2020-03-29 14:57:29 +0000 UTC - event for ss2-2: {kubelet e2e-56589a6d6a-b49e0-windows-node-group-cg7s} Created: Created container nginx
Mar 29 15:19:46.525: INFO: At 2020-03-29 14:57:29 +0000 UTC - event for ss2-2: {kubelet e2e-56589a6d6a-b49e0-windows-node-group-cg7s} Pulled: Container image "e2eteam/nginx:1.15-alpine" already present on machine
Mar 29 15:19:46.525: INFO: At 2020-03-29 14:57:34 +0000 UTC - event for ss2-2: {kubelet e2e-56589a6d6a-b49e0-windows-node-group-cg7s} Started: Started container nginx
Mar 29 15:19:46.525: INFO: At 2020-03-29 14:57:45 +0000 UTC - event for ss2: {statefulset-controller } SuccessfulDelete: delete Pod ss2-1 in StatefulSet ss2 successful
Mar 29 15:19:46.525: INFO: At 2020-03-29 14:57:45 +0000 UTC - event for ss2-1: {kubelet e2e-56589a6d6a-b49e0-windows-node-group-cg7s} Killing: Stopping container nginx
Mar 29 15:19:46.525: INFO: At 2020-03-29 14:58:16 +0000 UTC - event for ss2-1: {default-scheduler } Scheduled: Successfully assigned statefulset-7859/ss2-1 to e2e-56589a6d6a-b49e0-windows-node-group-h3pd
Mar 29 15:19:46.525: INFO: At 2020-03-29 14:58:58 +0000 UTC - event for ss2-1: {kubelet e2e-56589a6d6a-b49e0-windows-node-group-h3pd} Pulled: Container image "e2eteam/nginx:1.15-alpine" already present on machine
Mar 29 15:19:46.525: INFO: At 2020-03-29 14:59:01 +0000 UTC - event for ss2-1: {kubelet e2e-56589a6d6a-b49e0-windows-node-group-h3pd} Created: Created container nginx
Mar 29 15:19:46.525: INFO: At 2020-03-29 14:59:33 +0000 UTC - event for ss2-0: {kubelet e2e-56589a6d6a-b49e0-windows-node-group-cg7s} Unhealthy: Readiness probe failed: Get http://10.64.1.45:80/index.html: net/http: request canceled (Client.Timeout exceeded while awaiting headers)
Mar 29 15:19:46.525: INFO: At 2020-03-29 14:59:34 +0000 UTC - event for ss2-2: {kubelet e2e-56589a6d6a-b49e0-windows-node-group-cg7s} Unhealthy: Readiness probe failed: Get http://10.64.1.59:80/index.html: net/http: request canceled (Client.Timeout exceeded while awaiting headers)
Mar 29 15:19:46.525: INFO: At 2020-03-29 14:59:41 +0000 UTC - event for ss2-1: {kubelet e2e-56589a6d6a-b49e0-windows-node-group-h3pd} Started: Started container nginx
Mar 29 15:19:46.525: INFO: At 2020-03-29 15:00:17 +0000 UTC - event for ss2: {statefulset-controller } SuccessfulDelete: delete Pod ss2-0 in StatefulSet ss2 successful
Mar 29 15:19:46.525: INFO: At 2020-03-29 15:00:17 +0000 UTC - event for ss2-1: {kubelet e2e-56589a6d6a-b49e0-windows-node-group-h3pd} Unhealthy: Readiness probe failed: Get http://10.64.3.11:80/index.html: net/http: request canceled (Client.Timeout exceeded while awaiting headers)
Mar 29 15:19:46.525: INFO: At 2020-03-29 15:06:00 +0000 UTC - event for ss2-0: {kubelet e2e-56589a6d6a-b49e0-windows-node-group-cg7s} Killing: Stopping container nginx
Mar 29 15:19:46.525: INFO: At 2020-03-29 15:06:09 +0000 UTC - event for ss2-0: {taint-controller } TaintManagerEviction: Cancelling deletion of Pod statefulset-7859/ss2-0
Mar 29 15:19:46.525: INFO: At 2020-03-29 15:06:09 +0000 UTC - event for ss2-2: {taint-controller } TaintManagerEviction: Cancelling deletion of Pod statefulset-7859/ss2-2
Mar 29 15:19:46.525: INFO: At 2020-03-29 15:08:01 +0000 UTC - event for ss2-2: {kubelet e2e-56589a6d6a-b49e0-windows-node-group-cg7s} Killing: Stopping container nginx
Mar 29 15:19:46.525: INFO: At 2020-03-29 15:18:12 +0000 UTC - event for ss2-1: {kubelet e2e-56589a6d6a-b49e0-windows-node-group-h3pd} Killing: Stopping container nginx
Mar 29 15:19:46.564: INFO: POD  NODE  PHASE  GRACE  CONDITIONS
Mar 29 15:19:46.564: INFO: 
Mar 29 15:19:46.642: INFO: 
Logging node info for node e2e-56589a6d6a-b49e0-master
Mar 29 15:19:46.679: INFO: Node Info: &Node{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-56589a6d6a-b49e0-master,GenerateName:,Namespace:,SelfLink:/api/v1/nodes/e2e-56589a6d6a-b49e0-master,UID:d5ab5bb0-a768-4e53-9eb4-bf2e89b3d053,ResourceVersion:22316,Generation:0,CreationTimestamp:2020-03-29 13:27:34 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{beta.kubernetes.io/arch: amd64,beta.kubernetes.io/instance-type: n1-standard-1,beta.kubernetes.io/metadata-proxy-ready: true,beta.kubernetes.io/os: linux,cloud.google.com/metadata-proxy-ready: true,failure-domain.beta.kubernetes.io/region: us-west1,failure-domain.beta.kubernetes.io/zone: us-west1-b,kubernetes.io/arch: amd64,kubernetes.io/hostname: e2e-56589a6d6a-b49e0-master,kubernetes.io/os: linux,},Annotations:map[string]string{node.alpha.kubernetes.io/ttl: 0,volumes.kubernetes.io/controller-managed-attach-detach: true,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:NodeSpec{PodCIDR:10.64.0.0/24,DoNotUse_ExternalID:,ProviderID:gce://k8s-jkns-gci-gce-1-3/us-west1-b/e2e-56589a6d6a-b49e0-master,Unschedulable:true,Taints:[{node-under-test false NoSchedule <nil>} {node-role.kubernetes.io/master  NoSchedule <nil>} {node.kubernetes.io/unschedulable  NoSchedule <nil>}],ConfigSource:nil,},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{16684785664 0} {<nil>}  BinarySI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3878416384 0} {<nil>} 3787516Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{15016307073 0} {<nil>} 15016307073 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3616272384 0} {<nil>} 3531516Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[{NetworkUnavailable False 2020-03-29 13:27:36 +0000 UTC 2020-03-29 13:27:36 +0000 UTC RouteCreated NodeController create implicit route} {MemoryPressure False 2020-03-29 15:19:33 +0000 UTC 2020-03-29 13:27:34 +0000 UTC KubeletHasSufficientMemory kubelet has sufficient memory available} {DiskPressure False 2020-03-29 15:19:33 +0000 UTC 2020-03-29 13:27:34 +0000 UTC KubeletHasNoDiskPressure kubelet has no disk pressure} {PIDPressure False 2020-03-29 15:19:33 +0000 UTC 2020-03-29 13:27:34 +0000 UTC KubeletHasSufficientPID kubelet has sufficient PID available} {Ready True 2020-03-29 15:19:33 +0000 UTC 2020-03-29 13:27:35 +0000 UTC KubeletReady kubelet is posting ready status. AppArmor enabled}],Addresses:[{InternalIP 10.40.0.2} {ExternalIP 34.82.124.78} {InternalDNS e2e-56589a6d6a-b49e0-master.c.k8s-jkns-gci-gce-1-3.internal} {Hostname e2e-56589a6d6a-b49e0-master.c.k8s-jkns-gci-gce-1-3.internal}],DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:579325f333e650a5a1ff1e77ca2b682f,SystemUUID:579325F3-33E6-50A5-A1FF-1E77CA2B682F,BootID:92101c33-fc6e-4c8c-914c-9ded50c759c8,KernelVersion:4.14.94+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:docker://18.9.3,KubeletVersion:v1.15.12-beta.0.9+8de4013f5815f7,KubeProxyVersion:v1.15.12-beta.0.9+8de4013f5815f7,OperatingSystem:linux,Architecture:amd64,},Images:[{[k8s.gcr.io/etcd@sha256:02cd751eef4f7dcea7986e58d51903dab39baf4606f636b50891f30190abce2c k8s.gcr.io/etcd:3.3.10-1] 295923553} {[gcr.io/stackdriver-agents/stackdriver-logging-agent@sha256:6c8574a40816676cd908cfa89d16463002b56ca05fa76d0c912e116bc0ab867e gcr.io/stackdriver-agents/stackdriver-logging-agent:1.6.8] 264721247} {[k8s.gcr.io/kube-apiserver:v1.15.12-beta.0.9_8de4013f5815f7] 247635661} {[k8s.gcr.io/kube-controller-manager:v1.15.12-beta.0.9_8de4013f5815f7] 198439763} {[k8s.gcr.io/kube-scheduler:v1.15.12-beta.0.9_8de4013f5815f7] 95032786} {[k8s.gcr.io/kube-addon-manager@sha256:3e315022a842d782a28e729720f21091dde21f1efea28868d65ec595ad871616 k8s.gcr.io/kube-addon-manager:v9.0.2] 83076028} {[k8s.gcr.io/etcd-empty-dir-cleanup@sha256:13e18f320022be5cf7c1c38a6207d02c07d603b52c1dd47b7c69e9324bd3c641 k8s.gcr.io/etcd-empty-dir-cleanup:3.3.10.1] 73958468} {[k8s.gcr.io/ingress-gce-glbc-amd64@sha256:14f14351a03038b238232e60850a9cfa0dffbed0590321ef84216a432accc1ca k8s.gcr.io/ingress-gce-glbc-amd64:v1.2.3] 71797285} {[k8s.gcr.io/prometheus-to-sd@sha256:14666989f40bb7c896c3e775a93c6873e2b791d65bc65579f58a078b7f9a764e k8s.gcr.io/prometheus-to-sd:v0.5.0] 41861013} {[k8s.gcr.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a k8s.gcr.io/metadata-proxy:v0.1.12] 11337839} {[k8s.gcr.io/pause@sha256:f78411e19d84a252e53bff71a4407a5686c46983a2c2eeed83929b888179acea k8s.gcr.io/pause:3.1] 742472}],VolumesInUse:[],VolumesAttached:[],Config:&NodeConfigStatus{Assigned:nil,Active:nil,LastKnownGood:nil,Error:,},},}
Mar 29 15:19:46.680: INFO: 
Logging kubelet events for node e2e-56589a6d6a-b49e0-master
Mar 29 15:19:46.717: INFO: 
Logging pods the kubelet thinks is on node e2e-56589a6d6a-b49e0-master
Mar 29 15:19:46.806: INFO: l7-lb-controller-v1.2.3-e2e-56589a6d6a-b49e0-master started at 2020-03-29 13:27:10 +0000 UTC (0+1 container statuses recorded)
Mar 29 15:19:46.806: INFO: 	Container l7-lb-controller ready: true, restart count 0
... skipping 18 lines ...
Mar 29 15:19:46.806: INFO: etcd-empty-dir-cleanup-e2e-56589a6d6a-b49e0-master started at 2020-03-29 13:26:48 +0000 UTC (0+1 container statuses recorded)
Mar 29 15:19:46.806: INFO: 	Container etcd-empty-dir-cleanup ready: true, restart count 0
Mar 29 15:19:46.963: INFO: 
Latency metrics for node e2e-56589a6d6a-b49e0-master
Mar 29 15:19:46.963: INFO: 
Logging node info for node e2e-56589a6d6a-b49e0-minion-group-gpdt
Mar 29 15:19:47.001: INFO: Node Info: &Node{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-56589a6d6a-b49e0-minion-group-gpdt,GenerateName:,Namespace:,SelfLink:/api/v1/nodes/e2e-56589a6d6a-b49e0-minion-group-gpdt,UID:4336d0f3-4a2a-4987-9730-dae0e5f5ed7f,ResourceVersion:22269,Generation:0,CreationTimestamp:2020-03-29 13:27:34 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{beta.kubernetes.io/arch: amd64,beta.kubernetes.io/instance-type: n1-standard-2,beta.kubernetes.io/metadata-proxy-ready: true,beta.kubernetes.io/os: linux,cloud.google.com/metadata-proxy-ready: true,failure-domain.beta.kubernetes.io/region: us-west1,failure-domain.beta.kubernetes.io/zone: us-west1-b,kubernetes.io/arch: amd64,kubernetes.io/hostname: e2e-56589a6d6a-b49e0-minion-group-gpdt,kubernetes.io/os: linux,},Annotations:map[string]string{node.alpha.kubernetes.io/ttl: 0,volumes.kubernetes.io/controller-managed-attach-detach: true,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:NodeSpec{PodCIDR:10.64.5.0/24,DoNotUse_ExternalID:,ProviderID:gce://k8s-jkns-gci-gce-1-3/us-west1-b/e2e-56589a6d6a-b49e0-minion-group-gpdt,Unschedulable:false,Taints:[{node-under-test false NoSchedule <nil>}],ConfigSource:nil,},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101241290752 0} {<nil>}  BinarySI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7841861632 0} {<nil>} 7658068Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91117161526 0} {<nil>} 91117161526 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7579717632 0} {<nil>} 7402068Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[{KernelDeadlock False 2020-03-29 15:19:24 +0000 UTC 2020-03-29 13:27:22 +0000 UTC KernelHasNoDeadlock kernel has no deadlock} {ReadonlyFilesystem False 2020-03-29 15:19:24 +0000 UTC 2020-03-29 13:27:22 +0000 UTC FilesystemIsNotReadOnly Filesystem is not read-only} {FrequentUnregisterNetDevice False 2020-03-29 15:19:24 +0000 UTC 2020-03-29 13:32:24 +0000 UTC UnregisterNetDevice node is functioning properly} {FrequentKubeletRestart False 2020-03-29 15:19:24 +0000 UTC 2020-03-29 13:32:23 +0000 UTC FrequentKubeletRestart kubelet is functioning properly} {FrequentDockerRestart False 2020-03-29 15:19:24 +0000 UTC 2020-03-29 13:32:24 +0000 UTC FrequentDockerRestart docker is functioning properly} {FrequentContainerdRestart False 2020-03-29 15:19:24 +0000 UTC 2020-03-29 13:32:26 +0000 UTC FrequentContainerdRestart containerd is functioning properly} {CorruptDockerOverlay2 False 2020-03-29 15:19:24 +0000 UTC 2020-03-29 13:32:23 +0000 UTC CorruptDockerOverlay2 docker overlay2 is functioning properly} {NetworkUnavailable False 2020-03-29 13:27:36 +0000 UTC 2020-03-29 13:27:36 +0000 UTC RouteCreated NodeController create implicit route} {MemoryPressure False 2020-03-29 15:19:13 +0000 UTC 2020-03-29 13:27:34 +0000 UTC KubeletHasSufficientMemory kubelet has sufficient memory available} {DiskPressure False 2020-03-29 15:19:13 +0000 UTC 2020-03-29 13:27:34 +0000 UTC KubeletHasNoDiskPressure kubelet has no disk pressure} {PIDPressure False 2020-03-29 15:19:13 +0000 UTC 2020-03-29 13:27:34 +0000 UTC KubeletHasSufficientPID kubelet has sufficient PID available} {Ready True 2020-03-29 15:19:13 +0000 UTC 2020-03-29 13:27:35 +0000 UTC KubeletReady kubelet is posting ready status. AppArmor enabled}],Addresses:[{InternalIP 10.40.0.7} {ExternalIP 35.199.151.49} {InternalDNS e2e-56589a6d6a-b49e0-minion-group-gpdt.c.k8s-jkns-gci-gce-1-3.internal} {Hostname e2e-56589a6d6a-b49e0-minion-group-gpdt.c.k8s-jkns-gci-gce-1-3.internal}],DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:a2b8e2cf3123f7be41c73b0b8063bda3,SystemUUID:A2B8E2CF-3123-F7BE-41C7-3B0B8063BDA3,BootID:d500ec22-7c72-4e95-b3a7-bea66228a5a0,KernelVersion:4.14.94+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:docker://18.9.3,KubeletVersion:v1.15.12-beta.0.9+8de4013f5815f7,KubeProxyVersion:v1.15.12-beta.0.9+8de4013f5815f7,OperatingSystem:linux,Architecture:amd64,},Images:[{[gcr.io/stackdriver-agents/stackdriver-logging-agent@sha256:6c8574a40816676cd908cfa89d16463002b56ca05fa76d0c912e116bc0ab867e gcr.io/stackdriver-agents/stackdriver-logging-agent:1.6.8] 264721247} {[k8s.gcr.io/kubernetes-dashboard-amd64@sha256:0ae6b69432e78069c5ce2bcde0fe409c5c4d6f0f4d9cd50a17974fea38898747 k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.1] 121711221} {[k8s.gcr.io/kube-proxy:v1.15.12-beta.0.9_8de4013f5815f7] 95529390} {[k8s.gcr.io/fluentd-gcp-scaler@sha256:4f28f10fb89506768910b858f7a18ffb996824a16d70d5ac895e49687df9ff58 k8s.gcr.io/fluentd-gcp-scaler:0.5.2] 90498960} {[k8s.gcr.io/heapster-amd64@sha256:9fae0af136ce0cf4f88393b3670f7139ffc464692060c374d2ae748e13144521 k8s.gcr.io/heapster-amd64:v1.6.0-beta.1] 76016169} {[k8s.gcr.io/cluster-proportional-autoscaler-amd64@sha256:0abeb6a79ad5aec10e920110446a97fb75180da8680094acb6715de62507f4b0 k8s.gcr.io/cluster-proportional-autoscaler-amd64:1.6.0] 47668785} {[k8s.gcr.io/event-exporter@sha256:06acf489ab092b4fb49273e426549a52c0fcd1dbcb67e03d5935b5ee1a899c3e k8s.gcr.io/event-exporter:v0.2.5] 47261019} {[k8s.gcr.io/prometheus-to-sd@sha256:14666989f40bb7c896c3e775a93c6873e2b791d65bc65579f58a078b7f9a764e k8s.gcr.io/prometheus-to-sd:v0.5.0] 41861013} 
{[k8s.gcr.io/metrics-server-amd64@sha256:4ca116565ff6a46e582bada50ba3550f95b368db1d2415829241a565a6c38e2a k8s.gcr.io/metrics-server-amd64:v0.3.3] 39933796} {[k8s.gcr.io/addon-resizer@sha256:8075ed6db9baad249d9cf2656c0ecaad8d87133baf20286b1953dfb3fb06e75d k8s.gcr.io/addon-resizer:1.8.5] 35110823} {[nginx@sha256:0fd68ec4b64b8dbb2bef1f1a5de9d47b658afd3635dc9c45bf0cbeac46e72101 nginx:1.15-alpine] 16087791} {[k8s.gcr.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a k8s.gcr.io/metadata-proxy:v0.1.12] 11337839} {[k8s.gcr.io/defaultbackend-amd64@sha256:4dc5e07c8ca4e23bddb3153737d7b8c556e5fb2f29c4558b7cd6e6df99c512c7 k8s.gcr.io/defaultbackend-amd64:1.5] 5132544} {[k8s.gcr.io/pause@sha256:f78411e19d84a252e53bff71a4407a5686c46983a2c2eeed83929b888179acea k8s.gcr.io/pause:3.1] 742472}],VolumesInUse:[],VolumesAttached:[],Config:&NodeConfigStatus{Assigned:nil,Active:nil,LastKnownGood:nil,Error:,},},}
Mar 29 15:19:47.001: INFO: 
Logging kubelet events for node e2e-56589a6d6a-b49e0-minion-group-gpdt
Mar 29 15:19:47.039: INFO: 
Logging pods the kubelet thinks are on node e2e-56589a6d6a-b49e0-minion-group-gpdt
Mar 29 15:19:47.084: INFO: fluentd-gcp-scaler-6848d689fb-zx4mz started at 2020-03-29 13:27:36 +0000 UTC (0+1 container statuses recorded)
Mar 29 15:19:47.085: INFO: 	Container fluentd-gcp-scaler ready: true, restart count 0
... skipping 15 lines ...
Mar 29 15:19:47.085: INFO: kube-dns-autoscaler-6d6bc99fd8-gc6sv started at 2020-03-29 13:27:36 +0000 UTC (0+1 container statuses recorded)
Mar 29 15:19:47.085: INFO: 	Container autoscaler ready: true, restart count 0
Mar 29 15:19:47.223: INFO: 
Latency metrics for node e2e-56589a6d6a-b49e0-minion-group-gpdt
Mar 29 15:19:47.223: INFO: 
Logging node info for node e2e-56589a6d6a-b49e0-minion-group-r0j4
Mar 29 15:19:47.261: INFO: Node Info: &Node{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-56589a6d6a-b49e0-minion-group-r0j4,GenerateName:,Namespace:,SelfLink:/api/v1/nodes/e2e-56589a6d6a-b49e0-minion-group-r0j4,UID:ee79214f-2b5e-4675-a36f-58d66a766ae3,ResourceVersion:22313,Generation:0,CreationTimestamp:2020-03-29 13:27:38 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{beta.kubernetes.io/arch: amd64,beta.kubernetes.io/instance-type: n1-standard-2,beta.kubernetes.io/metadata-proxy-ready: true,beta.kubernetes.io/os: linux,cloud.google.com/metadata-proxy-ready: true,failure-domain.beta.kubernetes.io/region: us-west1,failure-domain.beta.kubernetes.io/zone: us-west1-b,kubernetes.io/arch: amd64,kubernetes.io/hostname: e2e-56589a6d6a-b49e0-minion-group-r0j4,kubernetes.io/os: linux,},Annotations:map[string]string{node.alpha.kubernetes.io/ttl: 0,volumes.kubernetes.io/controller-managed-attach-detach: true,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:NodeSpec{PodCIDR:10.64.4.0/24,DoNotUse_ExternalID:,ProviderID:gce://k8s-jkns-gci-gce-1-3/us-west1-b/e2e-56589a6d6a-b49e0-minion-group-r0j4,Unschedulable:false,Taints:[{node-under-test false NoSchedule <nil>}],ConfigSource:nil,},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101241290752 0} {<nil>}  BinarySI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7841861632 0} {<nil>} 7658068Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91117161526 0} {<nil>} 91117161526 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7579717632 0} {<nil>} 7402068Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[{FrequentContainerdRestart False 2020-03-29 
15:19:32 +0000 UTC 2020-03-29 13:32:29 +0000 UTC FrequentContainerdRestart containerd is functioning properly} {FrequentUnregisterNetDevice False 2020-03-29 15:19:32 +0000 UTC 2020-03-29 13:32:27 +0000 UTC UnregisterNetDevice node is functioning properly} {CorruptDockerOverlay2 False 2020-03-29 15:19:32 +0000 UTC 2020-03-29 13:32:27 +0000 UTC CorruptDockerOverlay2 docker overlay2 is functioning properly} {KernelDeadlock False 2020-03-29 15:19:32 +0000 UTC 2020-03-29 13:27:26 +0000 UTC KernelHasNoDeadlock kernel has no deadlock} {ReadonlyFilesystem False 2020-03-29 15:19:32 +0000 UTC 2020-03-29 13:27:26 +0000 UTC FilesystemIsNotReadOnly Filesystem is not read-only} {FrequentKubeletRestart False 2020-03-29 15:19:32 +0000 UTC 2020-03-29 13:32:27 +0000 UTC FrequentKubeletRestart kubelet is functioning properly} {FrequentDockerRestart False 2020-03-29 15:19:32 +0000 UTC 2020-03-29 13:32:28 +0000 UTC FrequentDockerRestart docker is functioning properly} {NetworkUnavailable False 2020-03-29 13:27:38 +0000 UTC 2020-03-29 13:27:38 +0000 UTC RouteCreated NodeController create implicit route} {MemoryPressure False 2020-03-29 15:19:17 +0000 UTC 2020-03-29 13:27:38 +0000 UTC KubeletHasSufficientMemory kubelet has sufficient memory available} {DiskPressure False 2020-03-29 15:19:17 +0000 UTC 2020-03-29 13:27:38 +0000 UTC KubeletHasNoDiskPressure kubelet has no disk pressure} {PIDPressure False 2020-03-29 15:19:17 +0000 UTC 2020-03-29 13:27:38 +0000 UTC KubeletHasSufficientPID kubelet has sufficient PID available} {Ready True 2020-03-29 15:19:17 +0000 UTC 2020-03-29 13:27:38 +0000 UTC KubeletReady kubelet is posting ready status. 
AppArmor enabled}],Addresses:[{InternalIP 10.40.0.6} {ExternalIP 34.82.133.227} {InternalDNS e2e-56589a6d6a-b49e0-minion-group-r0j4.c.k8s-jkns-gci-gce-1-3.internal} {Hostname e2e-56589a6d6a-b49e0-minion-group-r0j4.c.k8s-jkns-gci-gce-1-3.internal}],DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:f3e8990f7574f970185a10b5e4a640ce,SystemUUID:F3E8990F-7574-F970-185A-10B5E4A640CE,BootID:a00e4867-5ea0-49d4-800f-3cb47e454179,KernelVersion:4.14.94+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:docker://18.9.3,KubeletVersion:v1.15.12-beta.0.9+8de4013f5815f7,KubeProxyVersion:v1.15.12-beta.0.9+8de4013f5815f7,OperatingSystem:linux,Architecture:amd64,},Images:[{[gcr.io/stackdriver-agents/stackdriver-logging-agent@sha256:6c8574a40816676cd908cfa89d16463002b56ca05fa76d0c912e116bc0ab867e gcr.io/stackdriver-agents/stackdriver-logging-agent:1.6.8] 264721247} {[k8s.gcr.io/kube-proxy:v1.15.12-beta.0.9_8de4013f5815f7] 95529390} {[k8s.gcr.io/heapster-amd64@sha256:9fae0af136ce0cf4f88393b3670f7139ffc464692060c374d2ae748e13144521 k8s.gcr.io/heapster-amd64:v1.6.0-beta.1] 76016169} {[k8s.gcr.io/prometheus-to-sd@sha256:14666989f40bb7c896c3e775a93c6873e2b791d65bc65579f58a078b7f9a764e k8s.gcr.io/prometheus-to-sd:v0.5.0] 41861013} {[k8s.gcr.io/coredns@sha256:02382353821b12c21b062c59184e227e001079bb13ebd01f9d3270ba0fcbf1e4 k8s.gcr.io/coredns:1.3.1] 40303560} {[k8s.gcr.io/metrics-server-amd64@sha256:4ca116565ff6a46e582bada50ba3550f95b368db1d2415829241a565a6c38e2a k8s.gcr.io/metrics-server-amd64:v0.3.3] 39933796} {[k8s.gcr.io/addon-resizer@sha256:8075ed6db9baad249d9cf2656c0ecaad8d87133baf20286b1953dfb3fb06e75d k8s.gcr.io/addon-resizer:1.8.5] 35110823} {[k8s.gcr.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a k8s.gcr.io/metadata-proxy:v0.1.12] 11337839} {[k8s.gcr.io/pause@sha256:f78411e19d84a252e53bff71a4407a5686c46983a2c2eeed83929b888179acea k8s.gcr.io/pause:3.1] 
742472}],VolumesInUse:[],VolumesAttached:[],Config:&NodeConfigStatus{Assigned:nil,Active:nil,LastKnownGood:nil,Error:,},},}
Mar 29 15:19:47.261: INFO: 
Logging kubelet events for node e2e-56589a6d6a-b49e0-minion-group-r0j4
Mar 29 15:19:47.299: INFO: 
Logging pods the kubelet thinks are on node e2e-56589a6d6a-b49e0-minion-group-r0j4
Mar 29 15:19:47.345: INFO: coredns-557dcdc9f5-hh2mn started at 2020-03-29 13:27:38 +0000 UTC (0+1 container statuses recorded)
Mar 29 15:19:47.345: INFO: 	Container coredns ready: true, restart count 0
... skipping 80 lines ...
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should perform rolling updates and roll backs of template modifications [Conformance] [It]
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697

    Mar 29 15:06:24.249: Failed waiting for state update: timed out waiting for the condition

    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/statefulset_utils.go:338
------------------------------
SSSSSSSSSSSSS
------------------------------
[BeforeEach] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
... skipping 8 lines ...
[sig-storage] In-tree Volumes
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  [Driver: hostPath]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/in_tree_volumes.go:66
    [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath
    /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:91
      should fail if subpath with backstepping is outside the volume [Slow] [BeforeEach]
      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/subpath.go:254

      Driver hostPath doesn't support DynamicPV -- skipping

      /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:142
------------------------------
... skipping 26 lines ...
[BeforeEach] [sig-node] ConfigMap
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Mar 29 15:20:18.438: INFO: >>> kubeConfig: /workspace/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to create ConfigMap with empty key [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap that has name configmap-test-emptyKey-4953395e-f788-4174-b50e-05f391b7b833
[AfterEach] [sig-node] ConfigMap
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Mar 29 15:20:18.634: INFO: Waiting up to 3m0s for all (but 3) nodes to be ready
Mar 29 15:20:18.675: INFO: Condition Ready of node e2e-56589a6d6a-b49e0-windows-node-group-cg7s is true, but Node is tainted by NodeController with [{node.kubernetes.io/not-ready  NoExecute 2020-03-29 15:19:19 +0000 UTC}]. Failure
... skipping 2 lines ...
Mar 29 15:20:26.254: INFO: namespace configmap-3391 deletion completed in 7.579037263s


• [SLOW TEST:7.816 seconds]
[sig-node] ConfigMap
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
  should fail to create ConfigMap with empty key [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
[BeforeEach] [sig-apps] ReplicationController
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Mar 29 15:20:26.255: INFO: >>> kubeConfig: /workspace/.kube/config
... skipping 245 lines ...
• [SLOW TEST:104.918 seconds]
[sig-storage] Projected secret
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSS
{"component":"entrypoint","file":"prow/entrypoint/run.go:164","func":"k8s.io/test-infra/prow/entrypoint.Options.ExecuteProcess","level":"error","msg":"Process did not finish before 2h0m0s timeout","time":"2020-03-29T15:22:04Z"}
2020/03/29 15:22:05 process.go:199: Interrupt after 2h0m0s timeout during /home/prow/go/src/k8s.io/windows-testing/gce/run-e2e.sh --ginkgo.focus=\[Conformance\]|\[NodeConformance\]|\[sig-windows\] --ginkgo.skip=\[LinuxOnly\]|\[Serial\]|\[Feature:.+\] --minStartupPods=8 --node-os-distro=windows. Will terminate in another 15m

------------------------------
[BeforeEach] [Testpattern: Inline-volume (ntfs)][sig-windows] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/testsuites/base.go:92
[BeforeEach] [Testpattern: Inline-volume (ntfs)][sig-windows] volumes
... skipping 81 lines ...
Mar 29 15:21:33.491: INFO: Running '/home/prow/go/src/k8s.io/windows-testing/kubernetes/platforms/linux/amd64/kubectl --server=https://34.82.124.78 --kubeconfig=/workspace/.kube/config exec gcepd-client --namespace=volume-3234 -- powershell /c type /opt/0/index.html'
Mar 29 15:21:44.112: INFO: stderr: ""
Mar 29 15:21:44.112: INFO: stdout: "Hello from gcepd from namespace volume-3234\r\n"
STEP: cleaning the environment after gcepd
Mar 29 15:21:44.113: INFO: Deleting pod "gcepd-client" in namespace "volume-3234"
Mar 29 15:21:44.155: INFO: Wait up to 5m0s for pod "gcepd-client" to be fully deleted
Mar 29 15:21:49.538: INFO: error deleting PD "e2e-56589a6d6a-b49e0-a6d0b73a-fe82-4b37-b63c-96aeb41ef435": googleapi: Error 400: The disk resource 'projects/k8s-jkns-gci-gce-1-3/zones/us-west1-b/disks/e2e-56589a6d6a-b49e0-a6d0b73a-fe82-4b37-b63c-96aeb41ef435' is already being used by 'projects/k8s-jkns-gci-gce-1-3/zones/us-west1-b/instances/e2e-56589a6d6a-b49e0-windows-node-group-159w', resourceInUseByAnotherResource
Mar 29 15:21:49.538: INFO: Couldn't delete PD "e2e-56589a6d6a-b49e0-a6d0b73a-fe82-4b37-b63c-96aeb41ef435", sleeping 5s: googleapi: Error 400: The disk resource 'projects/k8s-jkns-gci-gce-1-3/zones/us-west1-b/disks/e2e-56589a6d6a-b49e0-a6d0b73a-fe82-4b37-b63c-96aeb41ef435' is already being used by 'projects/k8s-jkns-gci-gce-1-3/zones/us-west1-b/instances/e2e-56589a6d6a-b49e0-windows-node-group-159w', resourceInUseByAnotherResource
Mar 29 15:21:56.364: INFO: error deleting PD "e2e-56589a6d6a-b49e0-a6d0b73a-fe82-4b37-b63c-96aeb41ef435": googleapi: Error 400: The disk resource 'projects/k8s-jkns-gci-gce-1-3/zones/us-west1-b/disks/e2e-56589a6d6a-b49e0-a6d0b73a-fe82-4b37-b63c-96aeb41ef435' is already being used by 'projects/k8s-jkns-gci-gce-1-3/zones/us-west1-b/instances/e2e-56589a6d6a-b49e0-windows-node-group-159w', resourceInUseByAnotherResource
Mar 29 15:21:56.364: INFO: Couldn't delete PD "e2e-56589a6d6a-b49e0-a6d0b73a-fe82-4b37-b63c-96aeb41ef435", sleeping 5s: googleapi: Error 400: The disk resource 'projects/k8s-jkns-gci-gce-1-3/zones/us-west1-b/disks/e2e-56589a6d6a-b49e0-a6d0b73a-fe82-4b37-b63c-96aeb41ef435' is already being used by 'projects/k8s-jkns-gci-gce-1-3/zones/us-west1-b/instances/e2e-56589a6d6a-b49e0-windows-node-group-159w', resourceInUseByAnotherResource
Mar 29 15:22:03.542: INFO: Successfully deleted PD "e2e-56589a6d6a-b49e0-a6d0b73a-fe82-4b37-b63c-96aeb41ef435".
Mar 29 15:22:03.542: INFO: In-tree plugin kubernetes.io/gce-pd is not migrated, not validating any metrics
[AfterEach] [Testpattern: Inline-volume (ntfs)][sig-windows] volumes
  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Mar 29 15:22:03.543: INFO: Waiting up to 3m0s for all (but 3) nodes to be ready
STEP: Destroying namespace "volume-3234" for this suite.
... skipping 131 lines ...

  /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:335
------------------------------
Mar 29 15:22:13.749: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Mar 29 15:22:15.248: INFO: namespace gc-2471 deletion completed in 9.625478039s

{"component":"entrypoint","file":"prow/entrypoint/run.go:245","func":"k8s.io/test-infra/prow/entrypoint.gracefullyTerminate","level":"error","msg":"Process did not exit before 15s grace period","time":"2020-03-29T15:22:19Z"}