PR:           carlory: Create e2e daemonset list and deletecollection test
Result:       FAILURE
Tests:        0 failed / 0 succeeded
Started:      2021-03-30 05:36
Elapsed:      2h5m
Revision:
Builder:      d62ff83e-9119-11eb-8cc7-52a8353a7f08
infra-commit: c3460250b
repo:         k8s.io/test-infra
repo-commit:  c3460250be15b780dba7513506fc4b1fb706ed3f
repos:        {u'k8s.io/kubernetes': u'master:6572fe4d90173a1e97fda3e0cc28acc95e3f560a,100603:69430c0ffc02bf3640f26105188b77d7cc4cd241', u'k8s.io/test-infra': u'master'}

No Test Failures!


Error lines from build-log.txt

... skipping 204 lines ...
W0330 05:46:55.823] localAPIEndpoint:
W0330 05:46:55.823]   advertiseAddress: 172.18.0.2
W0330 05:46:55.823]   bindPort: 6443
W0330 05:46:55.823] nodeRegistration:
W0330 05:46:55.824]   criSocket: unix:///run/containerd/containerd.sock
W0330 05:46:55.830]   kubeletExtraArgs:
W0330 05:46:55.831]     fail-swap-on: "false"
W0330 05:46:55.831]     node-ip: 172.18.0.2
W0330 05:46:55.831]     node-labels: ""
W0330 05:46:55.831]     provider-id: kind://docker/kind/kind-worker2
W0330 05:46:55.831] ---
W0330 05:46:55.831] apiVersion: kubeadm.k8s.io/v1beta2
W0330 05:46:55.832] discovery:
... skipping 2 lines ...
W0330 05:46:55.832]     token: abcdef.0123456789abcdef
W0330 05:46:55.832]     unsafeSkipCAVerification: true
W0330 05:46:55.833] kind: JoinConfiguration
W0330 05:46:55.833] nodeRegistration:
W0330 05:46:55.833]   criSocket: unix:///run/containerd/containerd.sock
W0330 05:46:55.833]   kubeletExtraArgs:
W0330 05:46:55.833]     fail-swap-on: "false"
W0330 05:46:55.833]     node-ip: 172.18.0.2
W0330 05:46:55.833]     node-labels: ""
W0330 05:46:55.834]     provider-id: kind://docker/kind/kind-worker2
W0330 05:46:55.834] ---
W0330 05:46:55.834] apiVersion: kubelet.config.k8s.io/v1beta1
W0330 05:46:55.834] cgroupDriver: cgroupfs
... skipping 37 lines ...
W0330 05:46:55.838] localAPIEndpoint:
W0330 05:46:55.838]   advertiseAddress: 172.18.0.4
W0330 05:46:55.838]   bindPort: 6443
W0330 05:46:55.838] nodeRegistration:
W0330 05:46:55.838]   criSocket: unix:///run/containerd/containerd.sock
W0330 05:46:55.839]   kubeletExtraArgs:
W0330 05:46:55.839]     fail-swap-on: "false"
W0330 05:46:55.839]     node-ip: 172.18.0.4
W0330 05:46:55.839]     node-labels: ""
W0330 05:46:55.839]     provider-id: kind://docker/kind/kind-worker
W0330 05:46:55.839] ---
W0330 05:46:55.839] apiVersion: kubeadm.k8s.io/v1beta2
W0330 05:46:55.839] discovery:
... skipping 2 lines ...
W0330 05:46:55.840]     token: abcdef.0123456789abcdef
W0330 05:46:55.840]     unsafeSkipCAVerification: true
W0330 05:46:55.840] kind: JoinConfiguration
W0330 05:46:55.840] nodeRegistration:
W0330 05:46:55.840]   criSocket: unix:///run/containerd/containerd.sock
W0330 05:46:55.840]   kubeletExtraArgs:
W0330 05:46:55.840]     fail-swap-on: "false"
W0330 05:46:55.841]     node-ip: 172.18.0.4
W0330 05:46:55.841]     node-labels: ""
W0330 05:46:55.841]     provider-id: kind://docker/kind/kind-worker
W0330 05:46:55.841] ---
W0330 05:46:55.841] apiVersion: kubelet.config.k8s.io/v1beta1
W0330 05:46:55.841] cgroupDriver: cgroupfs
... skipping 37 lines ...
W0330 05:46:55.844] localAPIEndpoint:
W0330 05:46:55.844]   advertiseAddress: 172.18.0.3
W0330 05:46:55.844]   bindPort: 6443
W0330 05:46:55.844] nodeRegistration:
W0330 05:46:55.844]   criSocket: unix:///run/containerd/containerd.sock
W0330 05:46:55.844]   kubeletExtraArgs:
W0330 05:46:55.845]     fail-swap-on: "false"
W0330 05:46:55.845]     node-ip: 172.18.0.3
W0330 05:46:55.846]     node-labels: ""
W0330 05:46:55.846]     provider-id: kind://docker/kind/kind-control-plane
W0330 05:46:55.846] ---
W0330 05:46:55.846] apiVersion: kubeadm.k8s.io/v1beta2
W0330 05:46:55.846] controlPlane:
... skipping 6 lines ...
W0330 05:46:55.847]     token: abcdef.0123456789abcdef
W0330 05:46:55.847]     unsafeSkipCAVerification: true
W0330 05:46:55.847] kind: JoinConfiguration
W0330 05:46:55.847] nodeRegistration:
W0330 05:46:55.847]   criSocket: unix:///run/containerd/containerd.sock
W0330 05:46:55.847]   kubeletExtraArgs:
W0330 05:46:55.847]     fail-swap-on: "false"
W0330 05:46:55.847]     node-ip: 172.18.0.3
W0330 05:46:55.848]     node-labels: ""
W0330 05:46:55.848]     provider-id: kind://docker/kind/kind-control-plane
W0330 05:46:55.848] ---
W0330 05:46:55.848] apiVersion: kubelet.config.k8s.io/v1beta1
W0330 05:46:55.848] cgroupDriver: cgroupfs
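The kubeadm JoinConfiguration fragments above (for kind-control-plane, kind-worker, and kind-worker2) are generated by kind from its cluster config. The exact config this job used is not shown in the log; a minimal sketch that would produce the same three-node topology looks like:

```yaml
# Hypothetical kind config approximating the topology seen above:
# one control-plane node plus two workers. kind expands each node
# entry into the per-node kubeadm patches dumped in the log.
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
- role: worker
- role: worker
```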
... skipping 216 lines ...
W0330 05:48:19.139] I0330 05:48:09.085140     205 round_trippers.go:454] GET https://kind-control-plane:6443/healthz?timeout=10s  in 0 milliseconds
W0330 05:48:19.139] I0330 05:48:09.585343     205 round_trippers.go:454] GET https://kind-control-plane:6443/healthz?timeout=10s  in 0 milliseconds
W0330 05:48:19.139] I0330 05:48:10.085566     205 round_trippers.go:454] GET https://kind-control-plane:6443/healthz?timeout=10s  in 0 milliseconds
W0330 05:48:19.140] I0330 05:48:10.584792     205 round_trippers.go:454] GET https://kind-control-plane:6443/healthz?timeout=10s  in 0 milliseconds
W0330 05:48:19.140] I0330 05:48:11.084916     205 round_trippers.go:454] GET https://kind-control-plane:6443/healthz?timeout=10s  in 0 milliseconds
W0330 05:48:19.140] I0330 05:48:11.584695     205 round_trippers.go:454] GET https://kind-control-plane:6443/healthz?timeout=10s  in 0 milliseconds
W0330 05:48:19.140] I0330 05:48:15.962173     205 round_trippers.go:454] GET https://kind-control-plane:6443/healthz?timeout=10s 500 Internal Server Error in 3877 milliseconds
W0330 05:48:19.141] I0330 05:48:16.087278     205 round_trippers.go:454] GET https://kind-control-plane:6443/healthz?timeout=10s 500 Internal Server Error in 2 milliseconds
W0330 05:48:19.141] I0330 05:48:16.587042     205 round_trippers.go:454] GET https://kind-control-plane:6443/healthz?timeout=10s 500 Internal Server Error in 2 milliseconds
W0330 05:48:19.141] I0330 05:48:17.086716     205 round_trippers.go:454] GET https://kind-control-plane:6443/healthz?timeout=10s 500 Internal Server Error in 1 milliseconds
W0330 05:48:19.141] [apiclient] All control plane components are healthy after 75.508210 seconds
W0330 05:48:19.141] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
W0330 05:48:19.142] I0330 05:48:17.586180     205 round_trippers.go:454] GET https://kind-control-plane:6443/healthz?timeout=10s 200 OK in 1 milliseconds
W0330 05:48:19.142] I0330 05:48:17.586295     205 uploadconfig.go:108] [upload-config] Uploading the kubeadm ClusterConfiguration to a ConfigMap
W0330 05:48:19.142] I0330 05:48:17.590737     205 round_trippers.go:454] POST https://kind-control-plane:6443/api/v1/namespaces/kube-system/configmaps?timeout=10s 201 Created in 2 milliseconds
W0330 05:48:19.142] I0330 05:48:17.594828     205 round_trippers.go:454] POST https://kind-control-plane:6443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/roles?timeout=10s 201 Created in 3 milliseconds
... skipping 1627 lines ...
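The round_trippers lines above show kubeadm's apiclient waiter polling GET /healthz roughly every 500 ms, tolerating 500s until the control plane reports 200 OK ("All control plane components are healthy after 75.508210 seconds"). A minimal sketch of that wait pattern in shell, assuming a probe command such as `curl -sk https://kind-control-plane:6443/healthz` that prints "ok" when healthy (the function name and one-second interval are illustrative, not kubeadm's):

```shell
#!/bin/sh
# Sketch of a healthz wait loop similar in spirit to kubeadm's waiter.
# wait_for_healthz CMD TIMEOUT_SECONDS re-runs CMD until it prints "ok"
# or the timeout expires; returns 0 on healthy, 1 on timeout.
wait_for_healthz() {
    cmd=$1
    timeout=$2
    elapsed=0
    while [ "$elapsed" -lt "$timeout" ]; do
        # Treat any output other than "ok" (including curl errors,
        # e.g. a 500 body) as not-yet-healthy and retry.
        if [ "$($cmd 2>/dev/null)" = "ok" ]; then
            return 0
        fi
        sleep 1
        elapsed=$((elapsed + 1))
    done
    return 1
}
```

Usage: `wait_for_healthz "curl -sk https://kind-control-plane:6443/healthz" 120` would mirror the retry-until-200 behavior in the log.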
I0330 07:40:52.228] [07:40:52] Pod status is: Running
I0330 07:40:57.339] [07:40:57] Pod status is: Running
I0330 07:41:02.452] [07:41:02] Pod status is: Running
I0330 07:41:07.571] [07:41:07] Pod status is: Running
I0330 07:41:12.685] [07:41:12] Pod status is: Running
I0330 07:41:17.829] [07:41:17] Pod status is: Running
I0330 07:41:22.960] [07:41:22] Pod status is: Failed
I0330 07:41:22.960] [07:41:22] Failed.
I0330 07:41:23.118] Name:         e2e-conformance-test
I0330 07:41:23.118] Namespace:    conformance
I0330 07:41:23.118] Priority:     0
I0330 07:41:23.118] Node:         kind-worker/172.18.0.4
I0330 07:41:23.118] Start Time:   Tue, 30 Mar 2021 05:55:26 +0000
I0330 07:41:23.119] Labels:       <none>
I0330 07:41:23.119] Annotations:  <none>
I0330 07:41:23.119] Status:       Failed
I0330 07:41:23.119] IP:           10.244.1.2
I0330 07:41:23.119] IPs:
I0330 07:41:23.119]   IP:  10.244.1.2
I0330 07:41:23.119] Containers:
I0330 07:41:23.119]   conformance-container:
I0330 07:41:23.119]     Container ID:   containerd://896c9141bf83e7762093437780a8663f6f1e433c7ef648c62c0ed8ff885a1c21
I0330 07:41:23.120]     Image:          k8s.gcr.io/conformance-amd64:v1.22.0-alpha.0.18
I0330 07:41:23.120]     Image ID:       sha256:5014605d79ffe10c9866961346abdf555a224b471f709604e0a589a21c19a734
I0330 07:41:23.120]     Port:           <none>
I0330 07:41:23.120]     Host Port:      <none>
I0330 07:41:23.120]     State:          Terminated
I0330 07:41:23.120]       Reason:       Error
I0330 07:41:23.120]       Exit Code:    1
I0330 07:41:23.120]       Started:      Tue, 30 Mar 2021 05:55:30 +0000
I0330 07:41:23.120]       Finished:     Tue, 30 Mar 2021 07:41:17 +0000
I0330 07:41:23.120]     Ready:          False
I0330 07:41:23.120]     Restart Count:  0
I0330 07:41:23.121]     Environment:
... skipping 29 lines ...
I0330 07:41:23.123] Events:                      <none>
I0330 07:41:23.248] + /usr/local/bin/ginkgo '--focus=\[Conformance\]' --skip= --noColor=true /usr/local/bin/e2e.test -- --disable-log-dump --repo-root=/kubernetes --provider=skeleton --report-dir=/tmp/results --kubeconfig= -v=4
I0330 07:41:23.249] ++ tee /tmp/results/e2e.log
I0330 07:41:23.249] I0330 05:55:31.736777      19 test_context.go:440] Using a temporary kubeconfig file from in-cluster config : /tmp/kubeconfig-963336331
I0330 07:41:23.249] I0330 05:55:31.736819      19 test_context.go:461] Tolerating taints "node-role.kubernetes.io/master" when considering if nodes are ready
I0330 07:41:23.249] I0330 05:55:31.736960      19 e2e.go:129] Starting e2e run "960868dd-7445-4c5d-9f79-84cfc4032f64" on Ginkgo node 1
I0330 07:41:23.250] {"msg":"Test Suite starting","total":340,"completed":0,"skipped":0,"failed":0}
I0330 07:41:23.250] Running Suite: Kubernetes e2e suite
I0330 07:41:23.250] ===================================
I0330 07:41:23.250] Random Seed: 1617083730 - Will randomize all specs
I0330 07:41:23.251] Will run 340 of 5772 specs
I0330 07:41:23.251] 
I0330 07:41:23.251] Mar 30 05:55:31.752: INFO: >>> kubeConfig: /tmp/kubeconfig-963336331
... skipping 86 lines ...
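The `--focus='\[Conformance\]'` flag passed to ginkgo above selects specs by matching each spec's full name against the regex, which is why 340 of 5772 specs run. That selection can be approximated with grep; the spec names below are invented for illustration and are not taken from this run:

```shell
#!/bin/sh
# Approximating ginkgo's --focus selection: spec names are matched
# against the focus regex, and non-matching specs are skipped (the
# "S" characters in the log output). Sample names are hypothetical.
specs='[sig-cli] Kubectl client Kubectl logs should retrieve logs  [Conformance]
[sig-node] Some alpha feature test [Feature:Example]
[sig-storage] Downward API volume should provide cpu request [NodeConformance] [Conformance]'

# Keep only specs whose names contain the literal tag [Conformance].
focused=$(printf '%s\n' "$specs" | grep -E '\[Conformance\]')
printf '%s\n' "$focused"
```

Of the three sample names, only the two tagged `[Conformance]` survive the filter.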
I0330 07:41:23.268] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
I0330 07:41:23.269]   Kubectl logs
I0330 07:41:23.269]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1383
I0330 07:41:23.269]     should be able to retrieve and filter logs  [Conformance]
I0330 07:41:23.269]     /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:23.269] ------------------------------
I0330 07:41:23.270] {"msg":"PASSED [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]","total":340,"completed":1,"skipped":22,"failed":0}
I0330 07:41:23.270] SSSSSSSSSSS
I0330 07:41:23.270] ------------------------------
I0330 07:41:23.270] [sig-storage] Downward API volume 
I0330 07:41:23.270]   should provide container's cpu request [NodeConformance] [Conformance]
I0330 07:41:23.270]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:23.271] [BeforeEach] [sig-storage] Downward API volume
... skipping 10 lines ...
I0330 07:41:23.273] [BeforeEach] [sig-storage] Downward API volume
I0330 07:41:23.274]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41
I0330 07:41:23.274] I0330 05:55:53.558693      19 reflector.go:225] Stopping reflector *v1.ServiceAccount (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0330 07:41:23.274] [It] should provide container's cpu request [NodeConformance] [Conformance]
I0330 07:41:23.274]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:23.275] STEP: Creating a pod to test downward API volume plugin
I0330 07:41:23.275] Mar 30 05:55:53.564: INFO: Waiting up to 5m0s for pod "downwardapi-volume-dd33d5f7-b7ca-4940-ab27-91ef2d3be7da" in namespace "downward-api-5227" to be "Succeeded or Failed"
I0330 07:41:23.275] Mar 30 05:55:53.567: INFO: Pod "downwardapi-volume-dd33d5f7-b7ca-4940-ab27-91ef2d3be7da": Phase="Pending", Reason="", readiness=false. Elapsed: 2.358991ms
I0330 07:41:23.275] Mar 30 05:55:55.573: INFO: Pod "downwardapi-volume-dd33d5f7-b7ca-4940-ab27-91ef2d3be7da": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.008445494s
I0330 07:41:23.276] STEP: Saw pod success
I0330 07:41:23.276] Mar 30 05:55:55.573: INFO: Pod "downwardapi-volume-dd33d5f7-b7ca-4940-ab27-91ef2d3be7da" satisfied condition "Succeeded or Failed"
I0330 07:41:23.276] Mar 30 05:55:55.575: INFO: Trying to get logs from node kind-worker2 pod downwardapi-volume-dd33d5f7-b7ca-4940-ab27-91ef2d3be7da container client-container: <nil>
I0330 07:41:23.276] STEP: delete the pod
I0330 07:41:23.276] Mar 30 05:55:55.587: INFO: Waiting for pod downwardapi-volume-dd33d5f7-b7ca-4940-ab27-91ef2d3be7da to disappear
I0330 07:41:23.277] Mar 30 05:55:55.590: INFO: Pod downwardapi-volume-dd33d5f7-b7ca-4940-ab27-91ef2d3be7da no longer exists
I0330 07:41:23.277] [AfterEach] [sig-storage] Downward API volume
I0330 07:41:23.277]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
I0330 07:41:23.277] Mar 30 05:55:55.590: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0330 07:41:23.277] STEP: Destroying namespace "downward-api-5227" for this suite.
I0330 07:41:23.278] •{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance]","total":340,"completed":2,"skipped":33,"failed":0}
I0330 07:41:23.278] SSSSSSSSSSS
I0330 07:41:23.278] ------------------------------
I0330 07:41:23.278] [sig-node] InitContainer [NodeConformance] 
I0330 07:41:23.278]   should not start app containers if init containers fail on a RestartAlways pod [Conformance]
I0330 07:41:23.279]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:23.279] [BeforeEach] [sig-node] InitContainer [NodeConformance]
I0330 07:41:23.279]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
I0330 07:41:23.279] STEP: Creating a kubernetes client
I0330 07:41:23.279] Mar 30 05:55:55.596: INFO: >>> kubeConfig: /tmp/kubeconfig-963336331
I0330 07:41:23.279] STEP: Building a namespace api object, basename init-container
... skipping 3 lines ...
I0330 07:41:23.280] I0330 05:55:55.624000      19 reflector.go:225] Stopping reflector *v1.ServiceAccount (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0330 07:41:23.281] I0330 05:55:55.624177      19 reflector.go:219] Starting reflector *v1.ServiceAccount (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0330 07:41:23.281] I0330 05:55:55.624203      19 reflector.go:255] Listing and watching *v1.ServiceAccount from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0330 07:41:23.281] [BeforeEach] [sig-node] InitContainer [NodeConformance]
I0330 07:41:23.281]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/node/init_container.go:162
I0330 07:41:23.282] I0330 05:55:55.626914      19 reflector.go:225] Stopping reflector *v1.ServiceAccount (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0330 07:41:23.282] [It] should not start app containers if init containers fail on a RestartAlways pod [Conformance]
I0330 07:41:23.282]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:23.282] STEP: creating the pod
I0330 07:41:23.282] Mar 30 05:55:55.627: INFO: PodSpec: initContainers in spec.initContainers
I0330 07:41:23.283] I0330 05:55:55.633041      19 retrywatcher.go:247] Starting RetryWatcher.
I0330 07:41:23.292] Mar 30 05:56:42.659: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-853e7a67-87c7-4ce0-a83b-236b32b10e56", GenerateName:"", Namespace:"init-container-2699", SelfLink:"", UID:"d7d5d317-684c-41af-9e92-07b7174742f8", ResourceVersion:"1632", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63752680555, loc:(*time.Location)(0x99a8640)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"627001319"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc0033e7608), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0033e7620)}, v1.ManagedFieldsEntry{Manager:"kubelet", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc0033e7638), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0033e7650)}}}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"kube-api-access-zqklt", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), 
ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(0xc00362e340), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"k8s.gcr.io/e2e-test-images/busybox:1.29-1", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-api-access-zqklt", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"k8s.gcr.io/e2e-test-images/busybox:1.29-1", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-api-access-zqklt", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), 
SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.4.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-api-access-zqklt", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc003445510), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"kind-worker2", HostNetwork:false, HostPID:false, HostIPC:false, 
ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc000c5e930), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0034455a0)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0034455c0)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc0034455c8), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc0034455cc), PreemptionPolicy:(*v1.PreemptionPolicy)(0xc0034612f0), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63752680555, loc:(*time.Location)(0x99a8640)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63752680555, loc:(*time.Location)(0x99a8640)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63752680555, loc:(*time.Location)(0x99a8640)}}, Reason:"ContainersNotReady", Message:"containers with unready status: 
[run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63752680555, loc:(*time.Location)(0x99a8640)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.18.0.2", PodIP:"10.244.2.4", PodIPs:[]v1.PodIP{v1.PodIP{IP:"10.244.2.4"}}, StartTime:(*v1.Time)(0xc0033e7680), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc000c5ea10)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc000c5ea80)}, Ready:false, RestartCount:3, Image:"k8s.gcr.io/e2e-test-images/busybox:1.29-1", ImageID:"k8s.gcr.io/e2e-test-images/busybox@sha256:39e1e963e5310e9c313bad51523be012ede7b35bb9316517d19089a010356592", ContainerID:"containerd://81e2eebe44926911221c918c491a4e89119f001281ba57bc5fe794d0b8a749b4", Started:(*bool)(nil)}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc00362e3c0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/e2e-test-images/busybox:1.29-1", ImageID:"", ContainerID:"", Started:(*bool)(nil)}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc00362e3a0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), 
Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.4.1", ImageID:"", ContainerID:"", Started:(*bool)(0xc00344564f)}}, QOSClass:"Burstable", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}}
I0330 07:41:23.292] [AfterEach] [sig-node] InitContainer [NodeConformance]
I0330 07:41:23.292]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
I0330 07:41:23.293] Mar 30 05:56:42.660: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0330 07:41:23.293] I0330 05:56:42.660537      19 retrywatcher.go:147] "Stopping RetryWatcher."
I0330 07:41:23.293] I0330 05:56:42.660926      19 retrywatcher.go:275] Stopping RetryWatcher.
I0330 07:41:23.294] STEP: Destroying namespace "init-container-2699" for this suite.
I0330 07:41:23.294] 
I0330 07:41:23.294] • [SLOW TEST:47.076 seconds]
I0330 07:41:23.294] [sig-node] InitContainer [NodeConformance]
I0330 07:41:23.294] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
I0330 07:41:23.295]   should not start app containers if init containers fail on a RestartAlways pod [Conformance]
I0330 07:41:23.295]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:23.295] ------------------------------
I0330 07:41:23.296] {"msg":"PASSED [sig-node] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance]","total":340,"completed":3,"skipped":44,"failed":0}
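The init-container test above verifies that app containers never start while an init container keeps failing under restartPolicy Always; the PodSpec dump in the log shows init1 at RestartCount 3 with run1 still Waiting. A pod manifest with the same shape, reconstructed from the fields in that dump (the metadata name is illustrative):

```yaml
# Pod mirroring the failing-init-container shape from the test above:
# init1 exits non-zero, so init2 and run1 never start, and the kubelet
# keeps restarting init1 because restartPolicy is Always.
apiVersion: v1
kind: Pod
metadata:
  name: pod-init-example   # hypothetical name
spec:
  restartPolicy: Always
  initContainers:
  - name: init1
    image: k8s.gcr.io/e2e-test-images/busybox:1.29-1
    command: ["/bin/false"]
  - name: init2
    image: k8s.gcr.io/e2e-test-images/busybox:1.29-1
    command: ["/bin/true"]
  containers:
  - name: run1
    image: k8s.gcr.io/pause:3.4.1
```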
I0330 07:41:23.296] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
I0330 07:41:23.296]   should unconditionally reject operations on fail closed webhook [Conformance]
I0330 07:41:23.296]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:23.297] [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
I0330 07:41:23.297]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
I0330 07:41:23.297] STEP: Creating a kubernetes client
I0330 07:41:23.297] Mar 30 05:56:42.672: INFO: >>> kubeConfig: /tmp/kubeconfig-963336331
I0330 07:41:23.297] STEP: Building a namespace api object, basename webhook
... skipping 12 lines ...
I0330 07:41:23.300] STEP: Wait for the deployment to be ready
I0330 07:41:23.300] Mar 30 05:56:43.727: INFO: new replicaset for deployment "sample-webhook-deployment" is yet to be created
I0330 07:41:23.301] Mar 30 05:56:45.740: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63752680603, loc:(*time.Location)(0x99a8640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63752680603, loc:(*time.Location)(0x99a8640)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63752680603, loc:(*time.Location)(0x99a8640)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63752680603, loc:(*time.Location)(0x99a8640)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-76b6968d\" is progressing."}}, CollisionCount:(*int32)(nil)}
I0330 07:41:23.302] STEP: Deploying the webhook service
I0330 07:41:23.302] STEP: Verifying the service has paired with the endpoint
I0330 07:41:23.302] Mar 30 05:56:48.754: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
I0330 07:41:23.302] [It] should unconditionally reject operations on fail closed webhook [Conformance]
I0330 07:41:23.303]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:23.303] STEP: Registering a webhook that server cannot talk to, with fail closed policy, via the AdmissionRegistration API
I0330 07:41:23.303] STEP: create a namespace for the webhook
I0330 07:41:23.303] STEP: create a configmap should be unconditionally rejected by the webhook
I0330 07:41:23.303] [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
I0330 07:41:23.304]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
I0330 07:41:23.304] Mar 30 05:56:48.829: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0330 07:41:23.304] STEP: Destroying namespace "webhook-2472" for this suite.
I0330 07:41:23.304] STEP: Destroying namespace "webhook-2472-markers" for this suite.
I0330 07:41:23.304] [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
I0330 07:41:23.304]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
I0330 07:41:23.304] 
I0330 07:41:23.305] • [SLOW TEST:6.201 seconds]
I0330 07:41:23.305] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
I0330 07:41:23.305] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
I0330 07:41:23.305]   should unconditionally reject operations on fail closed webhook [Conformance]
I0330 07:41:23.305]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:23.305] ------------------------------
I0330 07:41:23.305] {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","total":340,"completed":4,"skipped":44,"failed":0}
I0330 07:41:23.305] SSSSSSSSSSSSSSSSSSSSSSS
I0330 07:41:23.306] ------------------------------
I0330 07:41:23.306] [sig-node] PodTemplates 
I0330 07:41:23.306]   should delete a collection of pod templates [Conformance]
I0330 07:41:23.306]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:23.306] [BeforeEach] [sig-node] PodTemplates
... skipping 20 lines ...
I0330 07:41:23.309] STEP: check that the list of pod templates matches the requested quantity
I0330 07:41:23.309] Mar 30 05:56:48.940: INFO: requesting list of pod templates to confirm quantity
I0330 07:41:23.310] [AfterEach] [sig-node] PodTemplates
I0330 07:41:23.310]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
I0330 07:41:23.310] Mar 30 05:56:48.943: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0330 07:41:23.310] STEP: Destroying namespace "podtemplate-3132" for this suite.
I0330 07:41:23.310] •{"msg":"PASSED [sig-node] PodTemplates should delete a collection of pod templates [Conformance]","total":340,"completed":5,"skipped":67,"failed":0}
I0330 07:41:23.310] SS
I0330 07:41:23.310] ------------------------------
I0330 07:41:23.310] [sig-node] Pods 
I0330 07:41:23.311]   should support remote command execution over websockets [NodeConformance] [Conformance]
I0330 07:41:23.311]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:23.311] [BeforeEach] [sig-node] Pods
... skipping 18 lines ...
I0330 07:41:23.316] Mar 30 05:56:48.987: INFO: The status of Pod pod-exec-websocket-a4bd9198-a267-4ea6-9191-b51ec709ffe5 is Pending, waiting for it to be Running (with Ready = true)
I0330 07:41:23.316] Mar 30 05:56:50.991: INFO: The status of Pod pod-exec-websocket-a4bd9198-a267-4ea6-9191-b51ec709ffe5 is Running (Ready = true)
I0330 07:41:23.316] [AfterEach] [sig-node] Pods
I0330 07:41:23.316]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
I0330 07:41:23.317] Mar 30 05:56:51.061: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0330 07:41:23.317] STEP: Destroying namespace "pods-7359" for this suite.
I0330 07:41:23.317] •{"msg":"PASSED [sig-node] Pods should support remote command execution over websockets [NodeConformance] [Conformance]","total":340,"completed":6,"skipped":69,"failed":0}
I0330 07:41:23.317] SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
I0330 07:41:23.317] ------------------------------
I0330 07:41:23.317] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
I0330 07:41:23.317]   patching/updating a validating webhook should work [Conformance]
I0330 07:41:23.317]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:23.318] [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
... skipping 30 lines ...
I0330 07:41:23.322]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
I0330 07:41:23.322] Mar 30 05:56:54.749: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0330 07:41:23.322] STEP: Destroying namespace "webhook-5013" for this suite.
I0330 07:41:23.322] STEP: Destroying namespace "webhook-5013-markers" for this suite.
I0330 07:41:23.323] [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
I0330 07:41:23.323]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
I0330 07:41:23.323] •{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","total":340,"completed":7,"skipped":142,"failed":0}
I0330 07:41:23.323] SSSSSSSSSSSSSSSSSSSSSS
I0330 07:41:23.323] ------------------------------
I0330 07:41:23.323] [sig-auth] ServiceAccounts 
I0330 07:41:23.323]   should allow opting out of API token automount  [Conformance]
I0330 07:41:23.323]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:23.324] [BeforeEach] [sig-auth] ServiceAccounts
... skipping 30 lines ...
I0330 07:41:23.328] Mar 30 05:56:55.410: INFO: created pod pod-service-account-nomountsa-nomountspec
I0330 07:41:23.328] Mar 30 05:56:55.410: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false
I0330 07:41:23.328] [AfterEach] [sig-auth] ServiceAccounts
I0330 07:41:23.328]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
I0330 07:41:23.328] Mar 30 05:56:55.411: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0330 07:41:23.328] STEP: Destroying namespace "svcaccounts-6745" for this suite.
I0330 07:41:23.329] •{"msg":"PASSED [sig-auth] ServiceAccounts should allow opting out of API token automount  [Conformance]","total":340,"completed":8,"skipped":164,"failed":0}
I0330 07:41:23.329] SSSSSSSSSS
I0330 07:41:23.329] ------------------------------
I0330 07:41:23.329] [sig-node] Variable Expansion 
I0330 07:41:23.329]   should fail substituting values in a volume subpath with backticks [Slow] [Conformance]
I0330 07:41:23.330]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:23.330] [BeforeEach] [sig-node] Variable Expansion
I0330 07:41:23.330]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
I0330 07:41:23.330] STEP: Creating a kubernetes client
I0330 07:41:23.330] Mar 30 05:56:55.428: INFO: >>> kubeConfig: /tmp/kubeconfig-963336331
I0330 07:41:23.330] STEP: Building a namespace api object, basename var-expansion
I0330 07:41:23.331] I0330 05:56:55.439853      19 reflector.go:219] Starting reflector *v1.ServiceAccount (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0330 07:41:23.331] I0330 05:56:55.439898      19 reflector.go:255] Listing and watching *v1.ServiceAccount from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0330 07:41:23.331] STEP: Waiting for a default service account to be provisioned in namespace
I0330 07:41:23.331] I0330 05:56:55.456193      19 reflector.go:225] Stopping reflector *v1.ServiceAccount (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0330 07:41:23.332] I0330 05:56:55.456451      19 reflector.go:219] Starting reflector *v1.ServiceAccount (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0330 07:41:23.332] I0330 05:56:55.456477      19 reflector.go:255] Listing and watching *v1.ServiceAccount from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0330 07:41:23.332] I0330 05:56:55.459470      19 reflector.go:225] Stopping reflector *v1.ServiceAccount (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0330 07:41:23.332] [It] should fail substituting values in a volume subpath with backticks [Slow] [Conformance]
I0330 07:41:23.332]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:23.333] Mar 30 05:57:01.475: INFO: Deleting pod "var-expansion-6c8f2d23-1907-41b2-82a5-a5a62fa6ef75" in namespace "var-expansion-7883"
I0330 07:41:23.333] Mar 30 05:57:01.479: INFO: Wait up to 5m0s for pod "var-expansion-6c8f2d23-1907-41b2-82a5-a5a62fa6ef75" to be fully deleted
I0330 07:41:23.333] [AfterEach] [sig-node] Variable Expansion
I0330 07:41:23.333]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
I0330 07:41:23.333] Mar 30 05:57:11.489: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0330 07:41:23.333] STEP: Destroying namespace "var-expansion-7883" for this suite.
I0330 07:41:23.333] 
I0330 07:41:23.333] • [SLOW TEST:16.069 seconds]
I0330 07:41:23.333] [sig-node] Variable Expansion
I0330 07:41:23.334] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
I0330 07:41:23.334]   should fail substituting values in a volume subpath with backticks [Slow] [Conformance]
I0330 07:41:23.334]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:23.334] ------------------------------
I0330 07:41:23.334] {"msg":"PASSED [sig-node] Variable Expansion should fail substituting values in a volume subpath with backticks [Slow] [Conformance]","total":340,"completed":9,"skipped":174,"failed":0}
I0330 07:41:23.334] SSSS
I0330 07:41:23.334] ------------------------------
I0330 07:41:23.335] [sig-node] Probing container 
I0330 07:41:23.335]   should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
I0330 07:41:23.335]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:23.335] [BeforeEach] [sig-node] Probing container
... skipping 26 lines ...
I0330 07:41:23.338] • [SLOW TEST:52.210 seconds]
I0330 07:41:23.339] [sig-node] Probing container
I0330 07:41:23.339] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
I0330 07:41:23.339]   should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
I0330 07:41:23.339]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:23.339] ------------------------------
I0330 07:41:23.339] {"msg":"PASSED [sig-node] Probing container should be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":340,"completed":10,"skipped":178,"failed":0}
I0330 07:41:23.339] SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
I0330 07:41:23.340] ------------------------------
I0330 07:41:23.340] [sig-network] EndpointSlice 
I0330 07:41:23.340]   should support creating EndpointSlice API operations [Conformance]
I0330 07:41:23.340]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:23.340] [BeforeEach] [sig-network] EndpointSlice
... skipping 30 lines ...
I0330 07:41:23.344] STEP: deleting
I0330 07:41:23.344] STEP: deleting a collection
I0330 07:41:23.344] [AfterEach] [sig-network] EndpointSlice
I0330 07:41:23.345]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
I0330 07:41:23.345] Mar 30 05:58:03.797: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0330 07:41:23.345] STEP: Destroying namespace "endpointslice-1246" for this suite.
I0330 07:41:23.345] •{"msg":"PASSED [sig-network] EndpointSlice should support creating EndpointSlice API operations [Conformance]","total":340,"completed":11,"skipped":218,"failed":0}
I0330 07:41:23.345] SSSS
I0330 07:41:23.345] ------------------------------
I0330 07:41:23.345] [sig-node] Lease 
I0330 07:41:23.345]   lease API should be available [Conformance]
I0330 07:41:23.345]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:23.345] [BeforeEach] [sig-node] Lease
... skipping 11 lines ...
I0330 07:41:23.348]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:23.348] I0330 05:58:03.828562      19 reflector.go:225] Stopping reflector *v1.ServiceAccount (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0330 07:41:23.348] [AfterEach] [sig-node] Lease
I0330 07:41:23.348]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
I0330 07:41:23.348] Mar 30 05:58:03.861: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0330 07:41:23.348] STEP: Destroying namespace "lease-test-8358" for this suite.
I0330 07:41:23.348] •{"msg":"PASSED [sig-node] Lease lease API should be available [Conformance]","total":340,"completed":12,"skipped":222,"failed":0}
I0330 07:41:23.348] SSSSSSSSSSSSSSSSS
I0330 07:41:23.348] ------------------------------
I0330 07:41:23.349] [sig-node] KubeletManagedEtcHosts 
I0330 07:41:23.349]   should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
I0330 07:41:23.349]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:23.349] [BeforeEach] [sig-node] KubeletManagedEtcHosts
... skipping 52 lines ...
I0330 07:41:23.358] Mar 30 05:58:08.509: INFO: >>> kubeConfig: /tmp/kubeconfig-963336331
I0330 07:41:23.358] Mar 30 05:58:08.570: INFO: Exec stderr: ""
I0330 07:41:23.358] [AfterEach] [sig-node] KubeletManagedEtcHosts
I0330 07:41:23.359]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
I0330 07:41:23.359] Mar 30 05:58:08.570: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0330 07:41:23.359] STEP: Destroying namespace "e2e-kubelet-etc-hosts-1262" for this suite.
I0330 07:41:23.359] •{"msg":"PASSED [sig-node] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]","total":340,"completed":13,"skipped":239,"failed":0}
I0330 07:41:23.359] SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
I0330 07:41:23.359] ------------------------------
I0330 07:41:23.359] [sig-api-machinery] Watchers 
I0330 07:41:23.360]   should observe add, update, and delete watch notifications on configmaps [Conformance]
I0330 07:41:23.360]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:23.360] [BeforeEach] [sig-api-machinery] Watchers
... skipping 39 lines ...
I0330 07:41:23.370] • [SLOW TEST:60.089 seconds]
I0330 07:41:23.370] [sig-api-machinery] Watchers
I0330 07:41:23.370] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
I0330 07:41:23.370]   should observe add, update, and delete watch notifications on configmaps [Conformance]
I0330 07:41:23.370]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:23.370] ------------------------------
I0330 07:41:23.370] {"msg":"PASSED [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance]","total":340,"completed":14,"skipped":272,"failed":0}
I0330 07:41:23.371] SSSSSSS
I0330 07:41:23.371] ------------------------------
I0330 07:41:23.371] [sig-apps] ReplicationController 
I0330 07:41:23.371]   should release no longer matching pods [Conformance]
I0330 07:41:23.371]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:23.371] [BeforeEach] [sig-apps] ReplicationController
... skipping 25 lines ...
I0330 07:41:23.374] • [SLOW TEST:6.060 seconds]
I0330 07:41:23.375] [sig-apps] ReplicationController
I0330 07:41:23.375] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
I0330 07:41:23.375]   should release no longer matching pods [Conformance]
I0330 07:41:23.375]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:23.375] ------------------------------
I0330 07:41:23.375] {"msg":"PASSED [sig-apps] ReplicationController should release no longer matching pods [Conformance]","total":340,"completed":15,"skipped":279,"failed":0}
I0330 07:41:23.375] SS
I0330 07:41:23.375] ------------------------------
I0330 07:41:23.375] [sig-network] Services 
I0330 07:41:23.376]   should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]
I0330 07:41:23.376]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:23.376] [BeforeEach] [sig-network] Services
... skipping 97 lines ...
I0330 07:41:23.401] • [SLOW TEST:18.588 seconds]
I0330 07:41:23.401] [sig-network] Services
I0330 07:41:23.401] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
I0330 07:41:23.402]   should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]
I0330 07:41:23.402]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:23.402] ------------------------------
I0330 07:41:23.402] {"msg":"PASSED [sig-network] Services should be able to switch session affinity for NodePort service [LinuxOnly] [Conformance]","total":340,"completed":16,"skipped":281,"failed":0}
I0330 07:41:23.402] SSS
I0330 07:41:23.402] ------------------------------
I0330 07:41:23.402] [sig-storage] Projected configMap 
I0330 07:41:23.403]   optional updates should be reflected in volume [NodeConformance] [Conformance]
I0330 07:41:23.403]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:23.403] [BeforeEach] [sig-storage] Projected configMap
... skipping 27 lines ...
I0330 07:41:23.407] • [SLOW TEST:6.134 seconds]
I0330 07:41:23.407] [sig-storage] Projected configMap
I0330 07:41:23.407] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
I0330 07:41:23.407]   optional updates should be reflected in volume [NodeConformance] [Conformance]
I0330 07:41:23.407]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:23.407] ------------------------------
I0330 07:41:23.408] {"msg":"PASSED [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":340,"completed":17,"skipped":284,"failed":0}
I0330 07:41:23.408] S
I0330 07:41:23.408] ------------------------------
I0330 07:41:23.408] [sig-storage] EmptyDir volumes 
I0330 07:41:23.408]   pod should support shared volumes between containers [Conformance]
I0330 07:41:23.408]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:23.408] [BeforeEach] [sig-storage] EmptyDir volumes
... skipping 16 lines ...
I0330 07:41:23.411] Mar 30 05:59:41.496: INFO: >>> kubeConfig: /tmp/kubeconfig-963336331
I0330 07:41:23.411] Mar 30 05:59:41.553: INFO: Exec stderr: ""
I0330 07:41:23.412] [AfterEach] [sig-storage] EmptyDir volumes
I0330 07:41:23.412]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
I0330 07:41:23.412] Mar 30 05:59:41.554: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0330 07:41:23.412] STEP: Destroying namespace "emptydir-4689" for this suite.
I0330 07:41:23.412] •{"msg":"PASSED [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance]","total":340,"completed":18,"skipped":285,"failed":0}
I0330 07:41:23.413] SSS
I0330 07:41:23.413] ------------------------------
I0330 07:41:23.413] [sig-network] Networking Granular Checks: Pods 
I0330 07:41:23.413]   should function for intra-pod communication: http [NodeConformance] [Conformance]
I0330 07:41:23.413]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:23.413] [BeforeEach] [sig-network] Networking
... skipping 50 lines ...
I0330 07:41:23.421] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/network/framework.go:23
I0330 07:41:23.421]   Granular Checks: Pods
I0330 07:41:23.421]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/network/networking.go:30
I0330 07:41:23.421]     should function for intra-pod communication: http [NodeConformance] [Conformance]
I0330 07:41:23.421]     /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:23.422] ------------------------------
I0330 07:41:23.422] {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","total":340,"completed":19,"skipped":288,"failed":0}
I0330 07:41:23.422] SSSSSSSSSS
I0330 07:41:23.422] ------------------------------
I0330 07:41:23.422] [sig-network] Ingress API 
I0330 07:41:23.422]   should support creating Ingress API operations [Conformance]
I0330 07:41:23.423]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:23.423] [BeforeEach] [sig-network] Ingress API
... skipping 31 lines ...
I0330 07:41:23.428] STEP: deleting
I0330 07:41:23.428] STEP: deleting a collection
I0330 07:41:23.428] [AfterEach] [sig-network] Ingress API
I0330 07:41:23.429]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
I0330 07:41:23.429] Mar 30 06:00:05.888: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0330 07:41:23.429] STEP: Destroying namespace "ingress-3773" for this suite.
I0330 07:41:23.429] •{"msg":"PASSED [sig-network] Ingress API should support creating Ingress API operations [Conformance]","total":340,"completed":20,"skipped":298,"failed":0}
I0330 07:41:23.429] SS
I0330 07:41:23.429] ------------------------------
I0330 07:41:23.430] [sig-instrumentation] Events API 
I0330 07:41:23.430]   should ensure that an event can be fetched, patched, deleted, and listed [Conformance]
I0330 07:41:23.430]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:23.430] [BeforeEach] [sig-instrumentation] Events API
... skipping 26 lines ...
I0330 07:41:23.436] STEP: listing events in all namespaces
I0330 07:41:23.436] STEP: listing events in test namespace
I0330 07:41:23.436] [AfterEach] [sig-instrumentation] Events API
I0330 07:41:23.436]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
I0330 07:41:23.436] Mar 30 06:00:05.959: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0330 07:41:23.436] STEP: Destroying namespace "events-6018" for this suite.
I0330 07:41:23.437] •{"msg":"PASSED [sig-instrumentation] Events API should ensure that an event can be fetched, patched, deleted, and listed [Conformance]","total":340,"completed":21,"skipped":300,"failed":0}
I0330 07:41:23.437] SSSSSSSSSSSSSSSSSSSS
I0330 07:41:23.437] ------------------------------
I0330 07:41:23.437] [sig-storage] EmptyDir volumes 
I0330 07:41:23.437]   should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
I0330 07:41:23.437]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:23.438] [BeforeEach] [sig-storage] EmptyDir volumes
... skipping 8 lines ...
I0330 07:41:23.440] I0330 06:00:05.987245      19 reflector.go:219] Starting reflector *v1.ServiceAccount (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0330 07:41:23.440] I0330 06:00:05.987272      19 reflector.go:255] Listing and watching *v1.ServiceAccount from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0330 07:41:23.441] [It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
I0330 07:41:23.441]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:23.441] I0330 06:00:05.989457      19 reflector.go:225] Stopping reflector *v1.ServiceAccount (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0330 07:41:23.441] STEP: Creating a pod to test emptydir 0644 on tmpfs
I0330 07:41:23.442] Mar 30 06:00:05.994: INFO: Waiting up to 5m0s for pod "pod-f0c153e4-43b2-41a8-a791-a60439ae0fd1" in namespace "emptydir-7066" to be "Succeeded or Failed"
I0330 07:41:23.442] Mar 30 06:00:06.002: INFO: Pod "pod-f0c153e4-43b2-41a8-a791-a60439ae0fd1": Phase="Pending", Reason="", readiness=false. Elapsed: 7.273576ms
I0330 07:41:23.442] Mar 30 06:00:08.006: INFO: Pod "pod-f0c153e4-43b2-41a8-a791-a60439ae0fd1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.011271407s
I0330 07:41:23.442] STEP: Saw pod success
I0330 07:41:23.442] Mar 30 06:00:08.006: INFO: Pod "pod-f0c153e4-43b2-41a8-a791-a60439ae0fd1" satisfied condition "Succeeded or Failed"
I0330 07:41:23.442] Mar 30 06:00:08.008: INFO: Trying to get logs from node kind-worker pod pod-f0c153e4-43b2-41a8-a791-a60439ae0fd1 container test-container: <nil>
I0330 07:41:23.442] STEP: delete the pod
I0330 07:41:23.443] Mar 30 06:00:08.033: INFO: Waiting for pod pod-f0c153e4-43b2-41a8-a791-a60439ae0fd1 to disappear
I0330 07:41:23.443] Mar 30 06:00:08.035: INFO: Pod pod-f0c153e4-43b2-41a8-a791-a60439ae0fd1 no longer exists
I0330 07:41:23.443] [AfterEach] [sig-storage] EmptyDir volumes
I0330 07:41:23.443]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
I0330 07:41:23.443] Mar 30 06:00:08.035: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0330 07:41:23.444] STEP: Destroying namespace "emptydir-7066" for this suite.
I0330 07:41:23.444] •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":340,"completed":22,"skipped":320,"failed":0}
I0330 07:41:23.444] SSS
I0330 07:41:23.444] ------------------------------
I0330 07:41:23.444] [sig-node] PreStop 
I0330 07:41:23.444]   should call prestop when killing a pod  [Conformance]
I0330 07:41:23.445]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:23.445] [BeforeEach] [sig-node] PreStop
... skipping 39 lines ...
I0330 07:41:23.453] • [SLOW TEST:11.087 seconds]
I0330 07:41:23.453] [sig-node] PreStop
I0330 07:41:23.453] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23
I0330 07:41:23.454]   should call prestop when killing a pod  [Conformance]
I0330 07:41:23.454]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:23.454] ------------------------------
I0330 07:41:23.454] {"msg":"PASSED [sig-node] PreStop should call prestop when killing a pod  [Conformance]","total":340,"completed":23,"skipped":323,"failed":0}
I0330 07:41:23.454] SSSS
I0330 07:41:23.455] ------------------------------
I0330 07:41:23.455] [sig-node] ConfigMap 
I0330 07:41:23.455]   should be consumable via environment variable [NodeConformance] [Conformance]
I0330 07:41:23.455]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:23.455] [BeforeEach] [sig-node] ConfigMap
... skipping 9 lines ...
I0330 07:41:23.457] I0330 06:00:19.152675      19 reflector.go:255] Listing and watching *v1.ServiceAccount from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0330 07:41:23.457] [It] should be consumable via environment variable [NodeConformance] [Conformance]
I0330 07:41:23.458]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:23.458] I0330 06:00:19.155075      19 reflector.go:225] Stopping reflector *v1.ServiceAccount (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0330 07:41:23.458] STEP: Creating configMap configmap-4729/configmap-test-b2c966ab-e5bb-4b63-960e-0608777ea82c
I0330 07:41:23.458] STEP: Creating a pod to test consume configMaps
I0330 07:41:23.458] Mar 30 06:00:19.164: INFO: Waiting up to 5m0s for pod "pod-configmaps-a7418fa9-568d-47b8-8ac4-46acd03207cc" in namespace "configmap-4729" to be "Succeeded or Failed"
I0330 07:41:23.459] Mar 30 06:00:19.167: INFO: Pod "pod-configmaps-a7418fa9-568d-47b8-8ac4-46acd03207cc": Phase="Pending", Reason="", readiness=false. Elapsed: 3.134609ms
I0330 07:41:23.459] Mar 30 06:00:21.172: INFO: Pod "pod-configmaps-a7418fa9-568d-47b8-8ac4-46acd03207cc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.008583203s
I0330 07:41:23.459] STEP: Saw pod success
I0330 07:41:23.459] Mar 30 06:00:21.172: INFO: Pod "pod-configmaps-a7418fa9-568d-47b8-8ac4-46acd03207cc" satisfied condition "Succeeded or Failed"
I0330 07:41:23.460] Mar 30 06:00:21.175: INFO: Trying to get logs from node kind-worker pod pod-configmaps-a7418fa9-568d-47b8-8ac4-46acd03207cc container env-test: <nil>
I0330 07:41:23.460] STEP: delete the pod
I0330 07:41:23.460] Mar 30 06:00:21.189: INFO: Waiting for pod pod-configmaps-a7418fa9-568d-47b8-8ac4-46acd03207cc to disappear
I0330 07:41:23.460] Mar 30 06:00:21.191: INFO: Pod pod-configmaps-a7418fa9-568d-47b8-8ac4-46acd03207cc no longer exists
I0330 07:41:23.460] [AfterEach] [sig-node] ConfigMap
I0330 07:41:23.461]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
I0330 07:41:23.461] Mar 30 06:00:21.191: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0330 07:41:23.461] STEP: Destroying namespace "configmap-4729" for this suite.
I0330 07:41:23.461] •{"msg":"PASSED [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance]","total":340,"completed":24,"skipped":327,"failed":0}
I0330 07:41:23.461] SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
I0330 07:41:23.462] ------------------------------
I0330 07:41:23.462] [sig-network] IngressClass API 
I0330 07:41:23.462]    should support creating IngressClass API operations [Conformance]
I0330 07:41:23.462]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:23.463] [BeforeEach] [sig-network] IngressClass API
... skipping 27 lines ...
I0330 07:41:23.468] STEP: deleting
I0330 07:41:23.468] STEP: deleting a collection
I0330 07:41:23.468] [AfterEach] [sig-network] IngressClass API
I0330 07:41:23.468]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
I0330 07:41:23.468] Mar 30 06:00:21.266: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0330 07:41:23.469] STEP: Destroying namespace "ingressclass-1244" for this suite.
I0330 07:41:23.469] •{"msg":"PASSED [sig-network] IngressClass API  should support creating IngressClass API operations [Conformance]","total":340,"completed":25,"skipped":382,"failed":0}
I0330 07:41:23.469] SSSSSSSSSSSSSSSSSS
I0330 07:41:23.469] ------------------------------
I0330 07:41:23.469] [sig-cli] Kubectl client Kubectl describe 
I0330 07:41:23.469]   should check if kubectl describe prints relevant information for rc and pods  [Conformance]
I0330 07:41:23.469]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:23.470] [BeforeEach] [sig-cli] Kubectl client
... skipping 28 lines ...
I0330 07:41:23.474] Mar 30 06:00:23.852: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
I0330 07:41:23.474] Mar 30 06:00:23.852: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-963336331 --namespace=kubectl-2664 describe pod agnhost-primary-9z9d2'
I0330 07:41:23.474] Mar 30 06:00:23.952: INFO: stderr: ""
I0330 07:41:23.476] Mar 30 06:00:23.952: INFO: stdout: "Name:         agnhost-primary-9z9d2\nNamespace:    kubectl-2664\nPriority:     0\nNode:         kind-worker/172.18.0.4\nStart Time:   Tue, 30 Mar 2021 06:00:21 +0000\nLabels:       app=agnhost\n              role=primary\nAnnotations:  <none>\nStatus:       Running\nIP:           10.244.1.18\nIPs:\n  IP:           10.244.1.18\nControlled By:  ReplicationController/agnhost-primary\nContainers:\n  agnhost-primary:\n    Container ID:   containerd://6b08d620c72e4a5799d88f334ddeebfda5a2b6cd8d2eb0f0ada28cfbf1375969\n    Image:          k8s.gcr.io/e2e-test-images/agnhost:2.30\n    Image ID:       k8s.gcr.io/e2e-test-images/agnhost@sha256:4f373cad92ff988ccba06667141ba47111d855ab4d5087296a2c60b3a3d0da53\n    Port:           6379/TCP\n    Host Port:      0/TCP\n    State:          Running\n      Started:      Tue, 30 Mar 2021 06:00:22 +0000\n    Ready:          True\n    Restart Count:  0\n    Environment:    <none>\n    Mounts:\n      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-j8q6h (ro)\nConditions:\n  Type              Status\n  Initialized       True \n  Ready             True \n  ContainersReady   True \n  PodScheduled      True \nVolumes:\n  kube-api-access-j8q6h:\n    Type:                    Projected (a volume that contains injected data from multiple sources)\n    TokenExpirationSeconds:  3607\n    ConfigMapName:           kube-root-ca.crt\n    ConfigMapOptional:       <nil>\n    DownwardAPI:             true\nQoS Class:                   BestEffort\nNode-Selectors:              <none>\nTolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s\n                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s\nEvents:\n  Type    Reason     Age   From               Message\n  ----    ------     ----  ----               -------\n  Normal  Scheduled  2s    default-scheduler  Successfully assigned kubectl-2664/agnhost-primary-9z9d2 to kind-worker\n  Normal  Pulled     1s    kubelet            Container image \"k8s.gcr.io/e2e-test-images/agnhost:2.30\" already present on machine\n  Normal  Created    1s    kubelet            Created container agnhost-primary\n  Normal  Started    1s    kubelet            Started container agnhost-primary\n"
I0330 07:41:23.476] Mar 30 06:00:23.952: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-963336331 --namespace=kubectl-2664 describe rc agnhost-primary'
I0330 07:41:23.476] Mar 30 06:00:24.066: INFO: stderr: ""
I0330 07:41:23.477] Mar 30 06:00:24.066: INFO: stdout: "Name:         agnhost-primary\nNamespace:    kubectl-2664\nSelector:     app=agnhost,role=primary\nLabels:       app=agnhost\n              role=primary\nAnnotations:  <none>\nReplicas:     1 current / 1 desired\nPods Status:  1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n  Labels:  app=agnhost\n           role=primary\n  Containers:\n   agnhost-primary:\n    Image:        k8s.gcr.io/e2e-test-images/agnhost:2.30\n    Port:         6379/TCP\n    Host Port:    0/TCP\n    Environment:  <none>\n    Mounts:       <none>\n  Volumes:        <none>\nEvents:\n  Type    Reason            Age   From                    Message\n  ----    ------            ----  ----                    -------\n  Normal  SuccessfulCreate  3s    replication-controller  Created pod: agnhost-primary-9z9d2\n"
I0330 07:41:23.477] Mar 30 06:00:24.066: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-963336331 --namespace=kubectl-2664 describe service agnhost-primary'
I0330 07:41:23.477] Mar 30 06:00:24.174: INFO: stderr: ""
I0330 07:41:23.478] Mar 30 06:00:24.174: INFO: stdout: "Name:              agnhost-primary\nNamespace:         kubectl-2664\nLabels:            app=agnhost\n                   role=primary\nAnnotations:       <none>\nSelector:          app=agnhost,role=primary\nType:              ClusterIP\nIP Family Policy:  SingleStack\nIP Families:       IPv4\nIP:                10.96.101.169\nIPs:               10.96.101.169\nPort:              <unset>  6379/TCP\nTargetPort:        agnhost-server/TCP\nEndpoints:         10.244.1.18:6379\nSession Affinity:  None\nEvents:            <none>\n"
I0330 07:41:23.478] Mar 30 06:00:24.177: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-963336331 --namespace=kubectl-2664 describe node kind-control-plane'
I0330 07:41:23.478] Mar 30 06:00:24.307: INFO: stderr: ""
I0330 07:41:23.484] Mar 30 06:00:24.307: INFO: stdout: "Name:               kind-control-plane\nRoles:              control-plane,master\nLabels:             beta.kubernetes.io/arch=amd64\n                    beta.kubernetes.io/os=linux\n                    kubernetes.io/arch=amd64\n                    kubernetes.io/hostname=kind-control-plane\n                    kubernetes.io/os=linux\n                    node-role.kubernetes.io/control-plane=\n                    node-role.kubernetes.io/master=\n                    node.kubernetes.io/exclude-from-external-load-balancers=\nAnnotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock\n                    node.alpha.kubernetes.io/ttl: 0\n                    volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp:  Tue, 30 Mar 2021 05:48:17 +0000\nTaints:             node-role.kubernetes.io/master:NoSchedule\nUnschedulable:      false\nLease:\n  HolderIdentity:  kind-control-plane\n  AcquireTime:     <unset>\n  RenewTime:       Tue, 30 Mar 2021 06:00:15 +0000\nConditions:\n  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message\n  ----             ------  -----------------                 ------------------                ------                       -------\n  MemoryPressure   False   Tue, 30 Mar 2021 05:55:25 +0000   Tue, 30 Mar 2021 05:48:17 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available\n  DiskPressure     False   Tue, 30 Mar 2021 05:55:25 +0000   Tue, 30 Mar 2021 05:48:17 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure\n  PIDPressure      False   Tue, 30 Mar 2021 05:55:25 +0000   Tue, 30 Mar 2021 05:48:17 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available\n  Ready            True    Tue, 30 Mar 2021 05:55:25 +0000   Tue, 30 Mar 2021 05:48:54 +0000   KubeletReady                 kubelet is posting ready 
status\nAddresses:\n  InternalIP:  172.18.0.3\n  Hostname:    kind-control-plane\nCapacity:\n  cpu:                8\n  ephemeral-storage:  253882800Ki\n  hugepages-1Gi:      0\n  hugepages-2Mi:      0\n  memory:             65966628Ki\n  pods:               110\nAllocatable:\n  cpu:                8\n  ephemeral-storage:  253882800Ki\n  hugepages-1Gi:      0\n  hugepages-2Mi:      0\n  memory:             65966628Ki\n  pods:               110\nSystem Info:\n  Machine ID:                 321dbe73732e419390a0143bde0810a5\n  System UUID:                a45c79df-b195-4cab-8065-28e502efecbc\n  Boot ID:                    043ee48f-30d1-4448-9cd3-ca139dcaa605\n  Kernel Version:             5.0.0-1047-gke\n  OS Image:                   Ubuntu 20.10\n  Operating System:           linux\n  Architecture:               amd64\n  Container Runtime Version:  containerd://1.5.0-beta.4-91-g1b05b605c\n  Kubelet Version:            v1.22.0-alpha.0.18+467457557005e0\n  Kube-Proxy Version:         v1.22.0-alpha.0.18+467457557005e0\nPodCIDR:                      10.244.0.0/24\nPodCIDRs:                     10.244.0.0/24\nProviderID:                   kind://docker/kind/kind-control-plane\nNon-terminated Pods:          (9 in total)\n  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age\n  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---\n  kube-system                 coredns-558bd4d5db-bd4vs                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     11m\n  kube-system                 coredns-558bd4d5db-nlm6g                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     11m\n  kube-system                 etcd-kind-control-plane                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         12m\n  kube-system                 kindnet-lp7xl                               
  100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      11m\n  kube-system                 kube-apiserver-kind-control-plane             250m (3%)     0 (0%)      0 (0%)           0 (0%)         12m\n  kube-system                 kube-controller-manager-kind-control-plane    200m (2%)     0 (0%)      0 (0%)           0 (0%)         12m\n  kube-system                 kube-proxy-5nhrf                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m\n  kube-system                 kube-scheduler-kind-control-plane             100m (1%)     0 (0%)      0 (0%)           0 (0%)         12m\n  local-path-storage          local-path-provisioner-78776bfc44-tsczm       0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m\nAllocated resources:\n  (Total limits may be over 100 percent, i.e., overcommitted.)\n  Resource           Requests    Limits\n  --------           --------    ------\n  cpu                950m (11%)  100m (1%)\n  memory             290Mi (0%)  390Mi (0%)\n  ephemeral-storage  100Mi (0%)  0 (0%)\n  hugepages-1Gi      0 (0%)      0 (0%)\n  hugepages-2Mi      0 (0%)      0 (0%)\nEvents:\n  Type     Reason                    Age                From        Message\n  ----     ------                    ----               ----        -------\n  Normal   NodeHasNoDiskPressure     13m (x4 over 13m)  kubelet     Node kind-control-plane status is now: NodeHasNoDiskPressure\n  Normal   NodeHasSufficientPID      13m (x4 over 13m)  kubelet     Node kind-control-plane status is now: NodeHasSufficientPID\n  Normal   NodeHasSufficientMemory   13m (x5 over 13m)  kubelet     Node kind-control-plane status is now: NodeHasSufficientMemory\n  Normal   Starting                  12m                kubelet     Starting kubelet.\n  Warning  CheckLimitsForResolvConf  12m                kubelet     Resolv.conf file '/etc/resolv.conf' contains search line consisting of more than 3 domains!\n  Normal   NodeHasSufficientMemory   12m                
kubelet     Node kind-control-plane status is now: NodeHasSufficientMemory\n  Normal   NodeHasNoDiskPressure     12m                kubelet     Node kind-control-plane status is now: NodeHasNoDiskPressure\n  Normal   NodeHasSufficientPID      12m                kubelet     Node kind-control-plane status is now: NodeHasSufficientPID\n  Normal   NodeAllocatableEnforced   12m                kubelet     Updated Node Allocatable limit across pods\n  Normal   Starting                  11m                kube-proxy  Starting kube-proxy.\n  Normal   NodeReady                 11m                kubelet     Node kind-control-plane status is now: NodeReady\n"
I0330 07:41:23.485] Mar 30 06:00:24.308: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-963336331 --namespace=kubectl-2664 describe namespace kubectl-2664'
I0330 07:41:23.485] Mar 30 06:00:24.406: INFO: stderr: ""
I0330 07:41:23.485] Mar 30 06:00:24.406: INFO: stdout: "Name:         kubectl-2664\nLabels:       e2e-framework=kubectl\n              e2e-run=960868dd-7445-4c5d-9f79-84cfc4032f64\n              kubernetes.io/metadata.name=kubectl-2664\nAnnotations:  <none>\nStatus:       Active\n\nNo resource quota.\n\nNo LimitRange resource.\n"
I0330 07:41:23.486] [AfterEach] [sig-cli] Kubectl client
I0330 07:41:23.486]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
I0330 07:41:23.486] Mar 30 06:00:24.406: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0330 07:41:23.487] STEP: Destroying namespace "kubectl-2664" for this suite.
I0330 07:41:23.487] •{"msg":"PASSED [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods  [Conformance]","total":340,"completed":26,"skipped":400,"failed":0}
I0330 07:41:23.487] SSSSSSSSS
I0330 07:41:23.488] ------------------------------
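Each `•{"msg":"PASSED …"}` line in this log is a machine-readable Ginkgo progress record emitted after a spec finishes. A minimal sketch of extracting its counters when post-processing a log like this one (a hypothetical helper, not part of the test suite; the sample line is copied from the record above):

```python
import json

# One Ginkgo progress record, as emitted after each completed spec above.
line = ('{"msg":"PASSED [sig-cli] Kubectl client Kubectl describe should check '
        'if kubectl describe prints relevant information for rc and pods  '
        '[Conformance]","total":340,"completed":26,"skipped":400,"failed":0}')

record = json.loads(line)
# "completed" counts specs finished so far in this run; "failed" stays 0
# for a green run, which is what the final "No Test Failures!" reflects.
print(record["failed"], record["completed"])
```

Tallying `failed` across all such records is enough to decide pass/fail without parsing the surrounding free-form log text.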
I0330 07:41:23.488] [sig-cli] Kubectl client Kubectl expose 
I0330 07:41:23.488]   should create services for rc  [Conformance]
I0330 07:41:23.488]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:23.488] [BeforeEach] [sig-cli] Kubectl client
... skipping 49 lines ...
I0330 07:41:23.497] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
I0330 07:41:23.497]   Kubectl expose
I0330 07:41:23.498]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1223
I0330 07:41:23.498]     should create services for rc  [Conformance]
I0330 07:41:23.498]     /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:23.498] ------------------------------
I0330 07:41:23.498] {"msg":"PASSED [sig-cli] Kubectl client Kubectl expose should create services for rc  [Conformance]","total":340,"completed":27,"skipped":409,"failed":0}
I0330 07:41:23.498] SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
I0330 07:41:23.498] ------------------------------
I0330 07:41:23.498] [sig-apps] CronJob 
I0330 07:41:23.499]   should replace jobs when ReplaceConcurrent [Conformance]
I0330 07:41:23.499]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:23.499] [BeforeEach] [sig-apps] CronJob
... skipping 27 lines ...
I0330 07:41:23.506] • [SLOW TEST:90.063 seconds]
I0330 07:41:23.506] [sig-apps] CronJob
I0330 07:41:23.506] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
I0330 07:41:23.506]   should replace jobs when ReplaceConcurrent [Conformance]
I0330 07:41:23.507]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:23.507] ------------------------------
I0330 07:41:23.507] {"msg":"PASSED [sig-apps] CronJob should replace jobs when ReplaceConcurrent [Conformance]","total":340,"completed":28,"skipped":442,"failed":0}
I0330 07:41:23.507] SS
I0330 07:41:23.507] ------------------------------
I0330 07:41:23.507] [sig-network] Services 
I0330 07:41:23.508]   should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]
I0330 07:41:23.508]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:23.508] [BeforeEach] [sig-network] Services
... skipping 91 lines ...
I0330 07:41:23.528] • [SLOW TEST:22.415 seconds]
I0330 07:41:23.529] [sig-network] Services
I0330 07:41:23.529] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
I0330 07:41:23.529]   should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]
I0330 07:41:23.529]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:23.529] ------------------------------
I0330 07:41:23.530] {"msg":"PASSED [sig-network] Services should be able to switch session affinity for service with type clusterIP [LinuxOnly] [Conformance]","total":340,"completed":29,"skipped":444,"failed":0}
I0330 07:41:23.530] SSSSSSSSSSSSSSSSSSS
I0330 07:41:23.530] ------------------------------
I0330 07:41:23.530] [sig-storage] EmptyDir volumes 
I0330 07:41:23.530]   should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
I0330 07:41:23.531]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:23.531] [BeforeEach] [sig-storage] EmptyDir volumes
... skipping 8 lines ...
I0330 07:41:23.532] I0330 06:02:23.563695      19 reflector.go:219] Starting reflector *v1.ServiceAccount (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0330 07:41:23.532] I0330 06:02:23.563719      19 reflector.go:255] Listing and watching *v1.ServiceAccount from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0330 07:41:23.533] [It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
I0330 07:41:23.533]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:23.533] I0330 06:02:23.566493      19 reflector.go:225] Stopping reflector *v1.ServiceAccount (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0330 07:41:23.533] STEP: Creating a pod to test emptydir 0666 on node default medium
I0330 07:41:23.534] Mar 30 06:02:23.573: INFO: Waiting up to 5m0s for pod "pod-541b69b0-5b5a-4a26-a919-386458e3316c" in namespace "emptydir-3565" to be "Succeeded or Failed"
I0330 07:41:23.534] Mar 30 06:02:23.577: INFO: Pod "pod-541b69b0-5b5a-4a26-a919-386458e3316c": Phase="Pending", Reason="", readiness=false. Elapsed: 3.833666ms
I0330 07:41:23.534] Mar 30 06:02:25.583: INFO: Pod "pod-541b69b0-5b5a-4a26-a919-386458e3316c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.009454637s
I0330 07:41:23.534] STEP: Saw pod success
I0330 07:41:23.534] Mar 30 06:02:25.583: INFO: Pod "pod-541b69b0-5b5a-4a26-a919-386458e3316c" satisfied condition "Succeeded or Failed"
I0330 07:41:23.535] Mar 30 06:02:25.585: INFO: Trying to get logs from node kind-worker pod pod-541b69b0-5b5a-4a26-a919-386458e3316c container test-container: <nil>
I0330 07:41:23.535] STEP: delete the pod
I0330 07:41:23.535] Mar 30 06:02:25.608: INFO: Waiting for pod pod-541b69b0-5b5a-4a26-a919-386458e3316c to disappear
I0330 07:41:23.535] Mar 30 06:02:25.610: INFO: Pod pod-541b69b0-5b5a-4a26-a919-386458e3316c no longer exists
I0330 07:41:23.535] [AfterEach] [sig-storage] EmptyDir volumes
I0330 07:41:23.535]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
I0330 07:41:23.536] Mar 30 06:02:25.610: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0330 07:41:23.536] STEP: Destroying namespace "emptydir-3565" for this suite.
I0330 07:41:23.536] •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":340,"completed":30,"skipped":463,"failed":0}
I0330 07:41:23.536] SSSSSSSSSSSSSSSSSS
I0330 07:41:23.536] ------------------------------
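The `Waiting up to 5m0s for pod …` / `Phase="…" … Elapsed: …s` lines the framework prints while polling follow a fixed shape. A small parser (hypothetical, for offline analysis of logs like this one; the sample line is copied from the EmptyDir spec above) can recover the pod name, phase, and wait duration:

```python
import re

# A pod-polling line as printed by the e2e framework above.
line = ('Mar 30 06:02:25.583: INFO: Pod "pod-541b69b0-5b5a-4a26-a919-386458e3316c": '
        'Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.009454637s')

pattern = r'Pod "(?P<name>[^"]+)": Phase="(?P<phase>[^"]+)".*Elapsed: (?P<elapsed>[\d.]+)s'
m = re.search(pattern, line)
# The spec treats reaching "Succeeded" (or "Failed") before the 5m0s
# deadline as satisfying its wait condition.
print(m.group("phase"), m.group("elapsed"))
```

Plotting the final `Elapsed` value per pod is a quick way to spot specs that are drifting toward their wait deadline.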
I0330 07:41:23.536] [sig-api-machinery] Discovery 
I0330 07:41:23.537]   should validate PreferredVersion for each APIGroup [Conformance]
I0330 07:41:23.537]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:23.537] [BeforeEach] [sig-api-machinery] Discovery
... skipping 94 lines ...
I0330 07:41:23.550] Mar 30 06:02:26.065: INFO: Versions found [{flowcontrol.apiserver.k8s.io/v1beta1 v1beta1}]
I0330 07:41:23.550] Mar 30 06:02:26.065: INFO: flowcontrol.apiserver.k8s.io/v1beta1 matches flowcontrol.apiserver.k8s.io/v1beta1
I0330 07:41:23.551] [AfterEach] [sig-api-machinery] Discovery
I0330 07:41:23.551]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
I0330 07:41:23.551] Mar 30 06:02:26.065: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0330 07:41:23.551] STEP: Destroying namespace "discovery-8531" for this suite.
I0330 07:41:23.551] •{"msg":"PASSED [sig-api-machinery] Discovery should validate PreferredVersion for each APIGroup [Conformance]","total":340,"completed":31,"skipped":481,"failed":0}
I0330 07:41:23.551] SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
I0330 07:41:23.551] ------------------------------
I0330 07:41:23.552] [sig-node] Pods 
I0330 07:41:23.552]   should be submitted and removed [NodeConformance] [Conformance]
I0330 07:41:23.552]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:23.552] [BeforeEach] [sig-node] Pods
... skipping 31 lines ...
I0330 07:41:23.557] • [SLOW TEST:17.170 seconds]
I0330 07:41:23.557] [sig-node] Pods
I0330 07:41:23.558] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
I0330 07:41:23.558]   should be submitted and removed [NodeConformance] [Conformance]
I0330 07:41:23.558]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:23.558] ------------------------------
I0330 07:41:23.558] {"msg":"PASSED [sig-node] Pods should be submitted and removed [NodeConformance] [Conformance]","total":340,"completed":32,"skipped":521,"failed":0}
I0330 07:41:23.559] SS
I0330 07:41:23.559] ------------------------------
I0330 07:41:23.559] [sig-node] Container Runtime blackbox test on terminated container 
I0330 07:41:23.559]   should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
I0330 07:41:23.560]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:23.560] [BeforeEach] [sig-node] Container Runtime
... skipping 8 lines ...
I0330 07:41:23.562] I0330 06:02:43.273362      19 reflector.go:219] Starting reflector *v1.ServiceAccount (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0330 07:41:23.562] I0330 06:02:43.273379      19 reflector.go:255] Listing and watching *v1.ServiceAccount from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0330 07:41:23.563] [It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
I0330 07:41:23.563]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:23.563] STEP: create the container
I0330 07:41:23.563] I0330 06:02:43.275904      19 reflector.go:225] Stopping reflector *v1.ServiceAccount (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0330 07:41:23.564] STEP: wait for the container to reach Failed
I0330 07:41:23.564] STEP: get the container status
I0330 07:41:23.564] STEP: the container should be terminated
I0330 07:41:23.564] STEP: the termination message should be set
I0330 07:41:23.564] Mar 30 06:02:45.297: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
I0330 07:41:23.564] STEP: delete the container
I0330 07:41:23.565] [AfterEach] [sig-node] Container Runtime
I0330 07:41:23.565]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
I0330 07:41:23.565] Mar 30 06:02:45.316: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0330 07:41:23.565] STEP: Destroying namespace "container-runtime-2543" for this suite.
I0330 07:41:23.566] •{"msg":"PASSED [sig-node] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":340,"completed":33,"skipped":523,"failed":0}
I0330 07:41:23.566] SSSSSSSSSSSSS
I0330 07:41:23.566] ------------------------------
I0330 07:41:23.566] [sig-node] Probing container 
I0330 07:41:23.566]   should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
I0330 07:41:23.567]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:23.567] [BeforeEach] [sig-node] Probing container
... skipping 25 lines ...
I0330 07:41:23.572] • [SLOW TEST:242.698 seconds]
I0330 07:41:23.572] [sig-node] Probing container
I0330 07:41:23.572] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
I0330 07:41:23.573]   should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
I0330 07:41:23.573]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:23.573] ------------------------------
I0330 07:41:23.573] {"msg":"PASSED [sig-node] Probing container should *not* be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":340,"completed":34,"skipped":536,"failed":0}
I0330 07:41:23.573] SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
I0330 07:41:23.573] ------------------------------
I0330 07:41:23.573] [sig-cli] Kubectl client Kubectl run pod 
I0330 07:41:23.574]   should create a pod from an image when restart is Never  [Conformance]
I0330 07:41:23.574]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:23.574] [BeforeEach] [sig-cli] Kubectl client
... skipping 34 lines ...
I0330 07:41:23.579] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
I0330 07:41:23.579]   Kubectl run pod
I0330 07:41:23.579]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1511
I0330 07:41:23.579]     should create a pod from an image when restart is Never  [Conformance]
I0330 07:41:23.580]     /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:23.580] ------------------------------
I0330 07:41:23.580] {"msg":"PASSED [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never  [Conformance]","total":340,"completed":35,"skipped":604,"failed":0}
I0330 07:41:23.580] SS
I0330 07:41:23.580] ------------------------------
I0330 07:41:23.580] [sig-auth] ServiceAccounts 
I0330 07:41:23.580]   should run through the lifecycle of a ServiceAccount [Conformance]
I0330 07:41:23.580]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:23.580] [BeforeEach] [sig-auth] ServiceAccounts
... skipping 16 lines ...
I0330 07:41:23.583] STEP: finding ServiceAccount in list of all ServiceAccounts (by LabelSelector)
I0330 07:41:23.583] STEP: deleting the ServiceAccount
I0330 07:41:23.583] [AfterEach] [sig-auth] ServiceAccounts
I0330 07:41:23.583]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
I0330 07:41:23.584] Mar 30 06:07:03.566: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0330 07:41:23.584] STEP: Destroying namespace "svcaccounts-8283" for this suite.
I0330 07:41:23.584] •{"msg":"PASSED [sig-auth] ServiceAccounts should run through the lifecycle of a ServiceAccount [Conformance]","total":340,"completed":36,"skipped":606,"failed":0}
I0330 07:41:23.584] SSSS
I0330 07:41:23.584] ------------------------------
I0330 07:41:23.584] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
I0330 07:41:23.585]   removes definition from spec when one version gets changed to not be served [Conformance]
I0330 07:41:23.585]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:23.585] [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
... skipping 23 lines ...
I0330 07:41:23.589] • [SLOW TEST:15.783 seconds]
I0330 07:41:23.590] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
I0330 07:41:23.590] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
I0330 07:41:23.590]   removes definition from spec when one version gets changed to not be served [Conformance]
I0330 07:41:23.590]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:23.590] ------------------------------
I0330 07:41:23.591] {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance]","total":340,"completed":37,"skipped":610,"failed":0}
I0330 07:41:23.591] SSSSS
I0330 07:41:23.591] ------------------------------
I0330 07:41:23.591] [sig-node] Variable Expansion 
I0330 07:41:23.591]   should verify that a failing subpath expansion can be modified during the lifecycle of a container [Slow] [Conformance]
I0330 07:41:23.592]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:23.592] [BeforeEach] [sig-node] Variable Expansion
... skipping 7 lines ...
I0330 07:41:23.593] I0330 06:07:19.379711      19 reflector.go:225] Stopping reflector *v1.ServiceAccount (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0330 07:41:23.593] I0330 06:07:19.379837      19 reflector.go:219] Starting reflector *v1.ServiceAccount (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0330 07:41:23.594] I0330 06:07:19.379856      19 reflector.go:255] Listing and watching *v1.ServiceAccount from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0330 07:41:23.594] [It] should verify that a failing subpath expansion can be modified during the lifecycle of a container [Slow] [Conformance]
I0330 07:41:23.594]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:23.594] I0330 06:07:19.381925      19 reflector.go:225] Stopping reflector *v1.ServiceAccount (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0330 07:41:23.595] STEP: creating the pod with failed condition
I0330 07:41:23.595] STEP: updating the pod
I0330 07:41:23.595] Mar 30 06:09:19.912: INFO: Successfully updated pod "var-expansion-96161b27-1413-45e3-b73e-96abf855df4e"
I0330 07:41:23.595] STEP: waiting for pod running
I0330 07:41:23.595] STEP: deleting the pod gracefully
I0330 07:41:23.596] Mar 30 06:09:21.922: INFO: Deleting pod "var-expansion-96161b27-1413-45e3-b73e-96abf855df4e" in namespace "var-expansion-3097"
I0330 07:41:23.596] Mar 30 06:09:21.929: INFO: Wait up to 5m0s for pod "var-expansion-96161b27-1413-45e3-b73e-96abf855df4e" to be fully deleted
... skipping 5 lines ...
I0330 07:41:23.597] • [SLOW TEST:164.590 seconds]
I0330 07:41:23.597] [sig-node] Variable Expansion
I0330 07:41:23.597] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
I0330 07:41:23.597]   should verify that a failing subpath expansion can be modified during the lifecycle of a container [Slow] [Conformance]
I0330 07:41:23.598]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:23.598] ------------------------------
I0330 07:41:23.598] {"msg":"PASSED [sig-node] Variable Expansion should verify that a failing subpath expansion can be modified during the lifecycle of a container [Slow] [Conformance]","total":340,"completed":38,"skipped":615,"failed":0}
I0330 07:41:23.598] SSSSSSSSSSSSSSS
I0330 07:41:23.598] ------------------------------
I0330 07:41:23.599] [sig-storage] Downward API volume 
I0330 07:41:23.599]   should provide podname only [NodeConformance] [Conformance]
I0330 07:41:23.599]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:23.599] [BeforeEach] [sig-storage] Downward API volume
... skipping 10 lines ...
I0330 07:41:23.602] [BeforeEach] [sig-storage] Downward API volume
I0330 07:41:23.603]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41
I0330 07:41:23.603] I0330 06:10:03.975799      19 reflector.go:225] Stopping reflector *v1.ServiceAccount (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0330 07:41:23.604] [It] should provide podname only [NodeConformance] [Conformance]
I0330 07:41:23.604]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:23.604] STEP: Creating a pod to test downward API volume plugin
I0330 07:41:23.604] Mar 30 06:10:03.981: INFO: Waiting up to 5m0s for pod "downwardapi-volume-0ad547a4-2960-4a85-97e3-3bf82a27d9cb" in namespace "downward-api-3530" to be "Succeeded or Failed"
I0330 07:41:23.605] Mar 30 06:10:03.983: INFO: Pod "downwardapi-volume-0ad547a4-2960-4a85-97e3-3bf82a27d9cb": Phase="Pending", Reason="", readiness=false. Elapsed: 1.948605ms
I0330 07:41:23.605] Mar 30 06:10:05.989: INFO: Pod "downwardapi-volume-0ad547a4-2960-4a85-97e3-3bf82a27d9cb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.00822269s
I0330 07:41:23.605] STEP: Saw pod success
I0330 07:41:23.605] Mar 30 06:10:05.989: INFO: Pod "downwardapi-volume-0ad547a4-2960-4a85-97e3-3bf82a27d9cb" satisfied condition "Succeeded or Failed"
I0330 07:41:23.605] Mar 30 06:10:05.992: INFO: Trying to get logs from node kind-worker2 pod downwardapi-volume-0ad547a4-2960-4a85-97e3-3bf82a27d9cb container client-container: <nil>
I0330 07:41:23.605] STEP: delete the pod
I0330 07:41:23.606] Mar 30 06:10:06.017: INFO: Waiting for pod downwardapi-volume-0ad547a4-2960-4a85-97e3-3bf82a27d9cb to disappear
I0330 07:41:23.606] Mar 30 06:10:06.020: INFO: Pod downwardapi-volume-0ad547a4-2960-4a85-97e3-3bf82a27d9cb no longer exists
I0330 07:41:23.606] [AfterEach] [sig-storage] Downward API volume
I0330 07:41:23.606]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
I0330 07:41:23.606] Mar 30 06:10:06.020: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0330 07:41:23.606] STEP: Destroying namespace "downward-api-3530" for this suite.
I0330 07:41:23.607] •{"msg":"PASSED [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]","total":340,"completed":39,"skipped":630,"failed":0}
I0330 07:41:23.607] S
I0330 07:41:23.607] ------------------------------
I0330 07:41:23.607] [sig-storage] Projected secret 
I0330 07:41:23.607]   should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
I0330 07:41:23.607]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:23.608] [BeforeEach] [sig-storage] Projected secret
... skipping 9 lines ...
I0330 07:41:23.610] I0330 06:10:06.052936      19 reflector.go:255] Listing and watching *v1.ServiceAccount from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0330 07:41:23.610] [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
I0330 07:41:23.610]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:23.611] I0330 06:10:06.055762      19 reflector.go:225] Stopping reflector *v1.ServiceAccount (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0330 07:41:23.611] STEP: Creating secret with name projected-secret-test-8c674094-0d68-4e68-96f0-fe03a6eb78c4
I0330 07:41:23.611] STEP: Creating a pod to test consume secrets
I0330 07:41:23.611] Mar 30 06:10:06.064: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-58da3c4d-6246-4e16-b412-ed59718187f6" in namespace "projected-1670" to be "Succeeded or Failed"
I0330 07:41:23.611] Mar 30 06:10:06.066: INFO: Pod "pod-projected-secrets-58da3c4d-6246-4e16-b412-ed59718187f6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.120323ms
I0330 07:41:23.612] Mar 30 06:10:08.072: INFO: Pod "pod-projected-secrets-58da3c4d-6246-4e16-b412-ed59718187f6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.008463682s
I0330 07:41:23.612] STEP: Saw pod success
I0330 07:41:23.612] Mar 30 06:10:08.073: INFO: Pod "pod-projected-secrets-58da3c4d-6246-4e16-b412-ed59718187f6" satisfied condition "Succeeded or Failed"
I0330 07:41:23.612] Mar 30 06:10:08.075: INFO: Trying to get logs from node kind-worker2 pod pod-projected-secrets-58da3c4d-6246-4e16-b412-ed59718187f6 container secret-volume-test: <nil>
I0330 07:41:23.612] STEP: delete the pod
I0330 07:41:23.613] Mar 30 06:10:08.090: INFO: Waiting for pod pod-projected-secrets-58da3c4d-6246-4e16-b412-ed59718187f6 to disappear
I0330 07:41:23.613] Mar 30 06:10:08.093: INFO: Pod pod-projected-secrets-58da3c4d-6246-4e16-b412-ed59718187f6 no longer exists
I0330 07:41:23.614] [AfterEach] [sig-storage] Projected secret
I0330 07:41:23.614]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
I0330 07:41:23.614] Mar 30 06:10:08.093: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0330 07:41:23.614] STEP: Destroying namespace "projected-1670" for this suite.
I0330 07:41:23.615] •{"msg":"PASSED [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":340,"completed":40,"skipped":631,"failed":0}
I0330 07:41:23.615] SSSSSS
I0330 07:41:23.615] ------------------------------
I0330 07:41:23.615] [sig-node] Secrets 
I0330 07:41:23.615]   should patch a secret [Conformance]
I0330 07:41:23.616]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:23.616] [BeforeEach] [sig-node] Secrets
... skipping 16 lines ...
I0330 07:41:23.620] STEP: deleting the secret using a LabelSelector
I0330 07:41:23.620] STEP: listing secrets in all namespaces, searching for label name and value in patch
I0330 07:41:23.620] [AfterEach] [sig-node] Secrets
I0330 07:41:23.621]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
I0330 07:41:23.621] Mar 30 06:10:08.144: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0330 07:41:23.621] STEP: Destroying namespace "secrets-7727" for this suite.
I0330 07:41:23.621] •{"msg":"PASSED [sig-node] Secrets should patch a secret [Conformance]","total":340,"completed":41,"skipped":637,"failed":0}
I0330 07:41:23.621] 
I0330 07:41:23.621] ------------------------------
I0330 07:41:23.622] [sig-api-machinery] Garbage collector 
I0330 07:41:23.622]   should not be blocked by dependency circle [Conformance]
I0330 07:41:23.622]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:23.622] [BeforeEach] [sig-api-machinery] Garbage collector
... skipping 21 lines ...
I0330 07:41:23.627] • [SLOW TEST:5.093 seconds]
I0330 07:41:23.627] [sig-api-machinery] Garbage collector
I0330 07:41:23.628] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
I0330 07:41:23.628]   should not be blocked by dependency circle [Conformance]
I0330 07:41:23.628]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:23.628] ------------------------------
I0330 07:41:23.628] {"msg":"PASSED [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance]","total":340,"completed":42,"skipped":637,"failed":0}
I0330 07:41:23.628] SSSSSSSSSSSSSSS
I0330 07:41:23.628] ------------------------------
I0330 07:41:23.628] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
I0330 07:41:23.629]   should mutate configmap [Conformance]
I0330 07:41:23.629]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:23.629] [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
... skipping 26 lines ...
I0330 07:41:23.633]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
I0330 07:41:23.634] Mar 30 06:10:16.961: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0330 07:41:23.634] STEP: Destroying namespace "webhook-670" for this suite.
I0330 07:41:23.634] STEP: Destroying namespace "webhook-670-markers" for this suite.
I0330 07:41:23.634] [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
I0330 07:41:23.635]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
I0330 07:41:23.635] •{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance]","total":340,"completed":43,"skipped":652,"failed":0}
I0330 07:41:23.635] SSS
I0330 07:41:23.635] ------------------------------
I0330 07:41:23.635] [sig-storage] ConfigMap 
I0330 07:41:23.636]   updates should be reflected in volume [NodeConformance] [Conformance]
I0330 07:41:23.636]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:23.636] [BeforeEach] [sig-storage] ConfigMap
... skipping 17 lines ...
I0330 07:41:23.640] STEP: Updating configmap configmap-test-upd-55c0afb5-0520-48e0-8d7f-4c64977fa50c
I0330 07:41:23.640] STEP: waiting to observe update in volume
I0330 07:41:23.641] [AfterEach] [sig-storage] ConfigMap
I0330 07:41:23.641]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
I0330 07:41:23.641] Mar 30 06:10:21.090: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0330 07:41:23.641] STEP: Destroying namespace "configmap-3337" for this suite.
I0330 07:41:23.641] •{"msg":"PASSED [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance]","total":340,"completed":44,"skipped":655,"failed":0}
I0330 07:41:23.641] SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
I0330 07:41:23.641] ------------------------------
I0330 07:41:23.641] [sig-node] Downward API 
I0330 07:41:23.642]   should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
I0330 07:41:23.642]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:23.642] [BeforeEach] [sig-node] Downward API
... skipping 8 lines ...
I0330 07:41:23.643] I0330 06:10:21.122216      19 reflector.go:219] Starting reflector *v1.ServiceAccount (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0330 07:41:23.643] I0330 06:10:21.122240      19 reflector.go:255] Listing and watching *v1.ServiceAccount from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0330 07:41:23.643] [It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
I0330 07:41:23.644]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:23.644] I0330 06:10:21.124563      19 reflector.go:225] Stopping reflector *v1.ServiceAccount (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0330 07:41:23.644] STEP: Creating a pod to test downward api env vars
I0330 07:41:23.644] Mar 30 06:10:21.130: INFO: Waiting up to 5m0s for pod "downward-api-8480bb91-9a0c-4722-9b01-918dd9e665b8" in namespace "downward-api-8200" to be "Succeeded or Failed"
I0330 07:41:23.644] Mar 30 06:10:21.132: INFO: Pod "downward-api-8480bb91-9a0c-4722-9b01-918dd9e665b8": Phase="Pending", Reason="", readiness=false. Elapsed: 1.989358ms
I0330 07:41:23.644] Mar 30 06:10:23.137: INFO: Pod "downward-api-8480bb91-9a0c-4722-9b01-918dd9e665b8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.006815007s
I0330 07:41:23.644] STEP: Saw pod success
I0330 07:41:23.645] Mar 30 06:10:23.137: INFO: Pod "downward-api-8480bb91-9a0c-4722-9b01-918dd9e665b8" satisfied condition "Succeeded or Failed"
I0330 07:41:23.645] Mar 30 06:10:23.139: INFO: Trying to get logs from node kind-worker2 pod downward-api-8480bb91-9a0c-4722-9b01-918dd9e665b8 container dapi-container: <nil>
I0330 07:41:23.645] STEP: delete the pod
I0330 07:41:23.645] Mar 30 06:10:23.152: INFO: Waiting for pod downward-api-8480bb91-9a0c-4722-9b01-918dd9e665b8 to disappear
I0330 07:41:23.645] Mar 30 06:10:23.154: INFO: Pod downward-api-8480bb91-9a0c-4722-9b01-918dd9e665b8 no longer exists
I0330 07:41:23.645] [AfterEach] [sig-node] Downward API
I0330 07:41:23.646]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
I0330 07:41:23.646] Mar 30 06:10:23.154: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0330 07:41:23.646] STEP: Destroying namespace "downward-api-8200" for this suite.
I0330 07:41:23.646] •{"msg":"PASSED [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]","total":340,"completed":45,"skipped":696,"failed":0}
I0330 07:41:23.646] SSSSSS
I0330 07:41:23.646] ------------------------------
I0330 07:41:23.646] [sig-storage] Projected downwardAPI 
I0330 07:41:23.646]   should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
I0330 07:41:23.647]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:23.647] [BeforeEach] [sig-storage] Projected downwardAPI
... skipping 10 lines ...
I0330 07:41:23.649] [BeforeEach] [sig-storage] Projected downwardAPI
I0330 07:41:23.649]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41
I0330 07:41:23.650] I0330 06:10:23.185955      19 reflector.go:225] Stopping reflector *v1.ServiceAccount (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0330 07:41:23.650] [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
I0330 07:41:23.650]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:23.650] STEP: Creating a pod to test downward API volume plugin
I0330 07:41:23.650] Mar 30 06:10:23.191: INFO: Waiting up to 5m0s for pod "downwardapi-volume-708a0cea-22a0-4bd3-bfb3-0dac34a95a32" in namespace "projected-2622" to be "Succeeded or Failed"
I0330 07:41:23.651] Mar 30 06:10:23.193: INFO: Pod "downwardapi-volume-708a0cea-22a0-4bd3-bfb3-0dac34a95a32": Phase="Pending", Reason="", readiness=false. Elapsed: 1.97267ms
I0330 07:41:23.651] Mar 30 06:10:25.199: INFO: Pod "downwardapi-volume-708a0cea-22a0-4bd3-bfb3-0dac34a95a32": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.007893603s
I0330 07:41:23.651] STEP: Saw pod success
I0330 07:41:23.651] Mar 30 06:10:25.199: INFO: Pod "downwardapi-volume-708a0cea-22a0-4bd3-bfb3-0dac34a95a32" satisfied condition "Succeeded or Failed"
I0330 07:41:23.651] Mar 30 06:10:25.202: INFO: Trying to get logs from node kind-worker2 pod downwardapi-volume-708a0cea-22a0-4bd3-bfb3-0dac34a95a32 container client-container: <nil>
I0330 07:41:23.651] STEP: delete the pod
I0330 07:41:23.652] Mar 30 06:10:25.216: INFO: Waiting for pod downwardapi-volume-708a0cea-22a0-4bd3-bfb3-0dac34a95a32 to disappear
I0330 07:41:23.652] Mar 30 06:10:25.218: INFO: Pod downwardapi-volume-708a0cea-22a0-4bd3-bfb3-0dac34a95a32 no longer exists
I0330 07:41:23.652] [AfterEach] [sig-storage] Projected downwardAPI
I0330 07:41:23.652]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
I0330 07:41:23.652] Mar 30 06:10:25.218: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0330 07:41:23.652] STEP: Destroying namespace "projected-2622" for this suite.
I0330 07:41:23.652] •{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":340,"completed":46,"skipped":702,"failed":0}
I0330 07:41:23.653] S
I0330 07:41:23.653] ------------------------------
I0330 07:41:23.653] [sig-storage] Projected downwardAPI 
I0330 07:41:23.653]   should update annotations on modification [NodeConformance] [Conformance]
I0330 07:41:23.653]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:23.653] [BeforeEach] [sig-storage] Projected downwardAPI
... skipping 24 lines ...
I0330 07:41:23.657] • [SLOW TEST:6.583 seconds]
I0330 07:41:23.657] [sig-storage] Projected downwardAPI
I0330 07:41:23.657] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
I0330 07:41:23.657]   should update annotations on modification [NodeConformance] [Conformance]
I0330 07:41:23.657]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:23.657] ------------------------------
I0330 07:41:23.657] {"msg":"PASSED [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance]","total":340,"completed":47,"skipped":703,"failed":0}
I0330 07:41:23.657] SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
I0330 07:41:23.658] ------------------------------
I0330 07:41:23.658] [sig-apps] ReplicationController 
I0330 07:41:23.658]   should adopt matching pods on creation [Conformance]
I0330 07:41:23.658]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:23.658] [BeforeEach] [sig-apps] ReplicationController
... skipping 27 lines ...
I0330 07:41:23.662] • [SLOW TEST:7.062 seconds]
I0330 07:41:23.662] [sig-apps] ReplicationController
I0330 07:41:23.662] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
I0330 07:41:23.663]   should adopt matching pods on creation [Conformance]
I0330 07:41:23.663]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:23.663] ------------------------------
I0330 07:41:23.663] {"msg":"PASSED [sig-apps] ReplicationController should adopt matching pods on creation [Conformance]","total":340,"completed":48,"skipped":734,"failed":0}
I0330 07:41:23.663] SSSSSSSSSSS
I0330 07:41:23.663] ------------------------------
I0330 07:41:23.663] [sig-apps] DisruptionController 
I0330 07:41:23.663]   should update/patch PodDisruptionBudget status [Conformance]
I0330 07:41:23.664]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:23.664] [BeforeEach] [sig-apps] DisruptionController
... skipping 21 lines ...
I0330 07:41:23.667] STEP: Patching PodDisruptionBudget status
I0330 07:41:23.668] STEP: Waiting for the pdb to be processed
I0330 07:41:23.668] [AfterEach] [sig-apps] DisruptionController
I0330 07:41:23.668]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
I0330 07:41:23.668] Mar 30 06:10:42.942: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0330 07:41:23.668] STEP: Destroying namespace "disruption-6909" for this suite.
I0330 07:41:23.668] •{"msg":"PASSED [sig-apps] DisruptionController should update/patch PodDisruptionBudget status [Conformance]","total":340,"completed":49,"skipped":745,"failed":0}
I0330 07:41:23.668] SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
I0330 07:41:23.668] ------------------------------
I0330 07:41:23.669] [sig-node] Docker Containers 
I0330 07:41:23.669]   should use the image defaults if command and args are blank [NodeConformance] [Conformance]
I0330 07:41:23.669]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:23.669] [BeforeEach] [sig-node] Docker Containers
... skipping 11 lines ...
I0330 07:41:23.671] [It] should use the image defaults if command and args are blank [NodeConformance] [Conformance]
I0330 07:41:23.671]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:23.671] [AfterEach] [sig-node] Docker Containers
I0330 07:41:23.671]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
I0330 07:41:23.671] Mar 30 06:10:44.989: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0330 07:41:23.671] STEP: Destroying namespace "containers-7506" for this suite.
I0330 07:41:23.672] •{"msg":"PASSED [sig-node] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance]","total":340,"completed":50,"skipped":774,"failed":0}
I0330 07:41:23.672] SSSSSSSSS
I0330 07:41:23.672] ------------------------------
I0330 07:41:23.672] [sig-network] DNS 
I0330 07:41:23.672]   should provide DNS for services  [Conformance]
I0330 07:41:23.672]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:23.672] [BeforeEach] [sig-network] DNS
... skipping 24 lines ...
I0330 07:41:23.680] Mar 30 06:10:53.069: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-204.svc.cluster.local from pod dns-204/dns-test-a4ecc033-749a-4d73-9aa5-d8cb89e8aed2: the server could not find the requested resource (get pods dns-test-a4ecc033-749a-4d73-9aa5-d8cb89e8aed2)
I0330 07:41:23.680] Mar 30 06:10:53.072: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-204.svc.cluster.local from pod dns-204/dns-test-a4ecc033-749a-4d73-9aa5-d8cb89e8aed2: the server could not find the requested resource (get pods dns-test-a4ecc033-749a-4d73-9aa5-d8cb89e8aed2)
I0330 07:41:23.681] Mar 30 06:10:53.086: INFO: Unable to read jessie_udp@dns-test-service.dns-204.svc.cluster.local from pod dns-204/dns-test-a4ecc033-749a-4d73-9aa5-d8cb89e8aed2: the server could not find the requested resource (get pods dns-test-a4ecc033-749a-4d73-9aa5-d8cb89e8aed2)
I0330 07:41:23.681] Mar 30 06:10:53.089: INFO: Unable to read jessie_tcp@dns-test-service.dns-204.svc.cluster.local from pod dns-204/dns-test-a4ecc033-749a-4d73-9aa5-d8cb89e8aed2: the server could not find the requested resource (get pods dns-test-a4ecc033-749a-4d73-9aa5-d8cb89e8aed2)
I0330 07:41:23.681] Mar 30 06:10:53.090: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-204.svc.cluster.local from pod dns-204/dns-test-a4ecc033-749a-4d73-9aa5-d8cb89e8aed2: the server could not find the requested resource (get pods dns-test-a4ecc033-749a-4d73-9aa5-d8cb89e8aed2)
I0330 07:41:23.682] Mar 30 06:10:53.092: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-204.svc.cluster.local from pod dns-204/dns-test-a4ecc033-749a-4d73-9aa5-d8cb89e8aed2: the server could not find the requested resource (get pods dns-test-a4ecc033-749a-4d73-9aa5-d8cb89e8aed2)
I0330 07:41:23.682] Mar 30 06:10:53.105: INFO: Lookups using dns-204/dns-test-a4ecc033-749a-4d73-9aa5-d8cb89e8aed2 failed for: [wheezy_udp@dns-test-service.dns-204.svc.cluster.local wheezy_tcp@dns-test-service.dns-204.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-204.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-204.svc.cluster.local jessie_udp@dns-test-service.dns-204.svc.cluster.local jessie_tcp@dns-test-service.dns-204.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-204.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-204.svc.cluster.local]
I0330 07:41:23.683] 
I0330 07:41:23.683] Mar 30 06:10:58.109: INFO: Unable to read wheezy_udp@dns-test-service.dns-204.svc.cluster.local from pod dns-204/dns-test-a4ecc033-749a-4d73-9aa5-d8cb89e8aed2: the server could not find the requested resource (get pods dns-test-a4ecc033-749a-4d73-9aa5-d8cb89e8aed2)
I0330 07:41:23.683] Mar 30 06:10:58.112: INFO: Unable to read wheezy_tcp@dns-test-service.dns-204.svc.cluster.local from pod dns-204/dns-test-a4ecc033-749a-4d73-9aa5-d8cb89e8aed2: the server could not find the requested resource (get pods dns-test-a4ecc033-749a-4d73-9aa5-d8cb89e8aed2)
I0330 07:41:23.684] Mar 30 06:10:58.115: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-204.svc.cluster.local from pod dns-204/dns-test-a4ecc033-749a-4d73-9aa5-d8cb89e8aed2: the server could not find the requested resource (get pods dns-test-a4ecc033-749a-4d73-9aa5-d8cb89e8aed2)
I0330 07:41:23.684] Mar 30 06:10:58.117: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-204.svc.cluster.local from pod dns-204/dns-test-a4ecc033-749a-4d73-9aa5-d8cb89e8aed2: the server could not find the requested resource (get pods dns-test-a4ecc033-749a-4d73-9aa5-d8cb89e8aed2)
I0330 07:41:23.684] Mar 30 06:10:58.133: INFO: Unable to read jessie_udp@dns-test-service.dns-204.svc.cluster.local from pod dns-204/dns-test-a4ecc033-749a-4d73-9aa5-d8cb89e8aed2: the server could not find the requested resource (get pods dns-test-a4ecc033-749a-4d73-9aa5-d8cb89e8aed2)
I0330 07:41:23.684] Mar 30 06:10:58.136: INFO: Unable to read jessie_tcp@dns-test-service.dns-204.svc.cluster.local from pod dns-204/dns-test-a4ecc033-749a-4d73-9aa5-d8cb89e8aed2: the server could not find the requested resource (get pods dns-test-a4ecc033-749a-4d73-9aa5-d8cb89e8aed2)
I0330 07:41:23.685] Mar 30 06:10:58.139: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-204.svc.cluster.local from pod dns-204/dns-test-a4ecc033-749a-4d73-9aa5-d8cb89e8aed2: the server could not find the requested resource (get pods dns-test-a4ecc033-749a-4d73-9aa5-d8cb89e8aed2)
I0330 07:41:23.685] Mar 30 06:10:58.141: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-204.svc.cluster.local from pod dns-204/dns-test-a4ecc033-749a-4d73-9aa5-d8cb89e8aed2: the server could not find the requested resource (get pods dns-test-a4ecc033-749a-4d73-9aa5-d8cb89e8aed2)
I0330 07:41:23.685] Mar 30 06:10:58.154: INFO: Lookups using dns-204/dns-test-a4ecc033-749a-4d73-9aa5-d8cb89e8aed2 failed for: [wheezy_udp@dns-test-service.dns-204.svc.cluster.local wheezy_tcp@dns-test-service.dns-204.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-204.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-204.svc.cluster.local jessie_udp@dns-test-service.dns-204.svc.cluster.local jessie_tcp@dns-test-service.dns-204.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-204.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-204.svc.cluster.local]
I0330 07:41:23.685] 
I0330 07:41:23.686] Mar 30 06:11:03.109: INFO: Unable to read wheezy_udp@dns-test-service.dns-204.svc.cluster.local from pod dns-204/dns-test-a4ecc033-749a-4d73-9aa5-d8cb89e8aed2: the server could not find the requested resource (get pods dns-test-a4ecc033-749a-4d73-9aa5-d8cb89e8aed2)
I0330 07:41:23.686] Mar 30 06:11:03.112: INFO: Unable to read wheezy_tcp@dns-test-service.dns-204.svc.cluster.local from pod dns-204/dns-test-a4ecc033-749a-4d73-9aa5-d8cb89e8aed2: the server could not find the requested resource (get pods dns-test-a4ecc033-749a-4d73-9aa5-d8cb89e8aed2)
I0330 07:41:23.686] Mar 30 06:11:03.114: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-204.svc.cluster.local from pod dns-204/dns-test-a4ecc033-749a-4d73-9aa5-d8cb89e8aed2: the server could not find the requested resource (get pods dns-test-a4ecc033-749a-4d73-9aa5-d8cb89e8aed2)
I0330 07:41:23.687] Mar 30 06:11:03.116: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-204.svc.cluster.local from pod dns-204/dns-test-a4ecc033-749a-4d73-9aa5-d8cb89e8aed2: the server could not find the requested resource (get pods dns-test-a4ecc033-749a-4d73-9aa5-d8cb89e8aed2)
I0330 07:41:23.687] Mar 30 06:11:03.130: INFO: Unable to read jessie_udp@dns-test-service.dns-204.svc.cluster.local from pod dns-204/dns-test-a4ecc033-749a-4d73-9aa5-d8cb89e8aed2: the server could not find the requested resource (get pods dns-test-a4ecc033-749a-4d73-9aa5-d8cb89e8aed2)
I0330 07:41:23.687] Mar 30 06:11:03.132: INFO: Unable to read jessie_tcp@dns-test-service.dns-204.svc.cluster.local from pod dns-204/dns-test-a4ecc033-749a-4d73-9aa5-d8cb89e8aed2: the server could not find the requested resource (get pods dns-test-a4ecc033-749a-4d73-9aa5-d8cb89e8aed2)
I0330 07:41:23.687] Mar 30 06:11:03.134: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-204.svc.cluster.local from pod dns-204/dns-test-a4ecc033-749a-4d73-9aa5-d8cb89e8aed2: the server could not find the requested resource (get pods dns-test-a4ecc033-749a-4d73-9aa5-d8cb89e8aed2)
I0330 07:41:23.688] Mar 30 06:11:03.136: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-204.svc.cluster.local from pod dns-204/dns-test-a4ecc033-749a-4d73-9aa5-d8cb89e8aed2: the server could not find the requested resource (get pods dns-test-a4ecc033-749a-4d73-9aa5-d8cb89e8aed2)
I0330 07:41:23.688] Mar 30 06:11:03.148: INFO: Lookups using dns-204/dns-test-a4ecc033-749a-4d73-9aa5-d8cb89e8aed2 failed for: [wheezy_udp@dns-test-service.dns-204.svc.cluster.local wheezy_tcp@dns-test-service.dns-204.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-204.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-204.svc.cluster.local jessie_udp@dns-test-service.dns-204.svc.cluster.local jessie_tcp@dns-test-service.dns-204.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-204.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-204.svc.cluster.local]
I0330 07:41:23.688] 
I0330 07:41:23.688] Mar 30 06:11:08.110: INFO: Unable to read wheezy_udp@dns-test-service.dns-204.svc.cluster.local from pod dns-204/dns-test-a4ecc033-749a-4d73-9aa5-d8cb89e8aed2: the server could not find the requested resource (get pods dns-test-a4ecc033-749a-4d73-9aa5-d8cb89e8aed2)
I0330 07:41:23.689] Mar 30 06:11:08.112: INFO: Unable to read wheezy_tcp@dns-test-service.dns-204.svc.cluster.local from pod dns-204/dns-test-a4ecc033-749a-4d73-9aa5-d8cb89e8aed2: the server could not find the requested resource (get pods dns-test-a4ecc033-749a-4d73-9aa5-d8cb89e8aed2)
I0330 07:41:23.689] Mar 30 06:11:08.115: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-204.svc.cluster.local from pod dns-204/dns-test-a4ecc033-749a-4d73-9aa5-d8cb89e8aed2: the server could not find the requested resource (get pods dns-test-a4ecc033-749a-4d73-9aa5-d8cb89e8aed2)
I0330 07:41:23.689] Mar 30 06:11:08.117: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-204.svc.cluster.local from pod dns-204/dns-test-a4ecc033-749a-4d73-9aa5-d8cb89e8aed2: the server could not find the requested resource (get pods dns-test-a4ecc033-749a-4d73-9aa5-d8cb89e8aed2)
I0330 07:41:23.690] Mar 30 06:11:08.134: INFO: Unable to read jessie_udp@dns-test-service.dns-204.svc.cluster.local from pod dns-204/dns-test-a4ecc033-749a-4d73-9aa5-d8cb89e8aed2: the server could not find the requested resource (get pods dns-test-a4ecc033-749a-4d73-9aa5-d8cb89e8aed2)
I0330 07:41:23.690] Mar 30 06:11:08.136: INFO: Unable to read jessie_tcp@dns-test-service.dns-204.svc.cluster.local from pod dns-204/dns-test-a4ecc033-749a-4d73-9aa5-d8cb89e8aed2: the server could not find the requested resource (get pods dns-test-a4ecc033-749a-4d73-9aa5-d8cb89e8aed2)
I0330 07:41:23.690] Mar 30 06:11:08.138: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-204.svc.cluster.local from pod dns-204/dns-test-a4ecc033-749a-4d73-9aa5-d8cb89e8aed2: the server could not find the requested resource (get pods dns-test-a4ecc033-749a-4d73-9aa5-d8cb89e8aed2)
I0330 07:41:23.690] Mar 30 06:11:08.141: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-204.svc.cluster.local from pod dns-204/dns-test-a4ecc033-749a-4d73-9aa5-d8cb89e8aed2: the server could not find the requested resource (get pods dns-test-a4ecc033-749a-4d73-9aa5-d8cb89e8aed2)
I0330 07:41:23.691] Mar 30 06:11:08.176: INFO: Lookups using dns-204/dns-test-a4ecc033-749a-4d73-9aa5-d8cb89e8aed2 failed for: [wheezy_udp@dns-test-service.dns-204.svc.cluster.local wheezy_tcp@dns-test-service.dns-204.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-204.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-204.svc.cluster.local jessie_udp@dns-test-service.dns-204.svc.cluster.local jessie_tcp@dns-test-service.dns-204.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-204.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-204.svc.cluster.local]
I0330 07:41:23.691] 
I0330 07:41:23.691] Mar 30 06:11:13.110: INFO: Unable to read wheezy_udp@dns-test-service.dns-204.svc.cluster.local from pod dns-204/dns-test-a4ecc033-749a-4d73-9aa5-d8cb89e8aed2: the server could not find the requested resource (get pods dns-test-a4ecc033-749a-4d73-9aa5-d8cb89e8aed2)
I0330 07:41:23.692] Mar 30 06:11:13.113: INFO: Unable to read wheezy_tcp@dns-test-service.dns-204.svc.cluster.local from pod dns-204/dns-test-a4ecc033-749a-4d73-9aa5-d8cb89e8aed2: the server could not find the requested resource (get pods dns-test-a4ecc033-749a-4d73-9aa5-d8cb89e8aed2)
I0330 07:41:23.692] Mar 30 06:11:13.116: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-204.svc.cluster.local from pod dns-204/dns-test-a4ecc033-749a-4d73-9aa5-d8cb89e8aed2: the server could not find the requested resource (get pods dns-test-a4ecc033-749a-4d73-9aa5-d8cb89e8aed2)
I0330 07:41:23.692] Mar 30 06:11:13.119: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-204.svc.cluster.local from pod dns-204/dns-test-a4ecc033-749a-4d73-9aa5-d8cb89e8aed2: the server could not find the requested resource (get pods dns-test-a4ecc033-749a-4d73-9aa5-d8cb89e8aed2)
I0330 07:41:23.692] Mar 30 06:11:13.135: INFO: Unable to read jessie_udp@dns-test-service.dns-204.svc.cluster.local from pod dns-204/dns-test-a4ecc033-749a-4d73-9aa5-d8cb89e8aed2: the server could not find the requested resource (get pods dns-test-a4ecc033-749a-4d73-9aa5-d8cb89e8aed2)
I0330 07:41:23.693] Mar 30 06:11:13.137: INFO: Unable to read jessie_tcp@dns-test-service.dns-204.svc.cluster.local from pod dns-204/dns-test-a4ecc033-749a-4d73-9aa5-d8cb89e8aed2: the server could not find the requested resource (get pods dns-test-a4ecc033-749a-4d73-9aa5-d8cb89e8aed2)
I0330 07:41:23.693] Mar 30 06:11:13.139: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-204.svc.cluster.local from pod dns-204/dns-test-a4ecc033-749a-4d73-9aa5-d8cb89e8aed2: the server could not find the requested resource (get pods dns-test-a4ecc033-749a-4d73-9aa5-d8cb89e8aed2)
I0330 07:41:23.693] Mar 30 06:11:13.141: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-204.svc.cluster.local from pod dns-204/dns-test-a4ecc033-749a-4d73-9aa5-d8cb89e8aed2: the server could not find the requested resource (get pods dns-test-a4ecc033-749a-4d73-9aa5-d8cb89e8aed2)
I0330 07:41:23.694] Mar 30 06:11:13.154: INFO: Lookups using dns-204/dns-test-a4ecc033-749a-4d73-9aa5-d8cb89e8aed2 failed for: [wheezy_udp@dns-test-service.dns-204.svc.cluster.local wheezy_tcp@dns-test-service.dns-204.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-204.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-204.svc.cluster.local jessie_udp@dns-test-service.dns-204.svc.cluster.local jessie_tcp@dns-test-service.dns-204.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-204.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-204.svc.cluster.local]
I0330 07:41:23.694] 
I0330 07:41:23.694] Mar 30 06:11:18.110: INFO: Unable to read wheezy_udp@dns-test-service.dns-204.svc.cluster.local from pod dns-204/dns-test-a4ecc033-749a-4d73-9aa5-d8cb89e8aed2: the server could not find the requested resource (get pods dns-test-a4ecc033-749a-4d73-9aa5-d8cb89e8aed2)
I0330 07:41:23.694] Mar 30 06:11:18.154: INFO: Lookups using dns-204/dns-test-a4ecc033-749a-4d73-9aa5-d8cb89e8aed2 failed for: [wheezy_udp@dns-test-service.dns-204.svc.cluster.local]
I0330 07:41:23.694] 
I0330 07:41:23.694] Mar 30 06:11:23.150: INFO: DNS probes using dns-204/dns-test-a4ecc033-749a-4d73-9aa5-d8cb89e8aed2 succeeded
I0330 07:41:23.694] 
I0330 07:41:23.694] STEP: deleting the pod
I0330 07:41:23.694] STEP: deleting the test service
I0330 07:41:23.695] STEP: deleting the test headless service
... skipping 5 lines ...
I0330 07:41:23.695] • [SLOW TEST:38.232 seconds]
I0330 07:41:23.695] [sig-network] DNS
I0330 07:41:23.695] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
I0330 07:41:23.696]   should provide DNS for services  [Conformance]
I0330 07:41:23.696]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:23.696] ------------------------------
I0330 07:41:23.696] {"msg":"PASSED [sig-network] DNS should provide DNS for services  [Conformance]","total":340,"completed":51,"skipped":783,"failed":0}
I0330 07:41:23.696] SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
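[Editor's note] The probe names in the DNS block above (e.g. `_http._tcp.dns-test-service.dns-204.svc.cluster.local`) follow the Kubernetes DNS-for-Services naming scheme. A minimal sketch of how those lookup names are formed — the service and namespace values come from the log, but the helper functions are illustrative, not part of the e2e framework:

```python
# Sketch of the DNS probe names seen in this log.
# Assumes the default kubeadm cluster domain "cluster.local".

CLUSTER_DOMAIN = "cluster.local"

def service_fqdn(service: str, namespace: str, domain: str = CLUSTER_DOMAIN) -> str:
    """A/AAAA record name for a Service: <svc>.<ns>.svc.<domain>."""
    return f"{service}.{namespace}.svc.{domain}"

def srv_fqdn(port: str, proto: str, service: str, namespace: str,
             domain: str = CLUSTER_DOMAIN) -> str:
    """SRV record name for a named port: _<port>._<proto>.<service fqdn>."""
    return f"_{port}._{proto}.{service_fqdn(service, namespace, domain)}"

print(service_fqdn("dns-test-service", "dns-204"))
# dns-test-service.dns-204.svc.cluster.local
print(srv_fqdn("http", "tcp", "dns-test-service", "dns-204"))
# _http._tcp.dns-test-service.dns-204.svc.cluster.local
```

The test queries each of these names over both UDP and TCP from two resolver images ("wheezy" and "jessie"), which is why every name appears twice per protocol in the failure list before the probes eventually succeed.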
I0330 07:41:23.696] ------------------------------
I0330 07:41:23.696] [sig-network] EndpointSlice 
I0330 07:41:23.696]   should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]
I0330 07:41:23.696]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:23.697] [BeforeEach] [sig-network] EndpointSlice
... skipping 25 lines ...
I0330 07:41:23.700] • [SLOW TEST:30.169 seconds]
I0330 07:41:23.700] [sig-network] EndpointSlice
I0330 07:41:23.700] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
I0330 07:41:23.700]   should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]
I0330 07:41:23.701]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:23.701] ------------------------------
I0330 07:41:23.701] {"msg":"PASSED [sig-network] EndpointSlice should create Endpoints and EndpointSlices for Pods matching a Service [Conformance]","total":340,"completed":52,"skipped":816,"failed":0}
I0330 07:41:23.701] SSSSSSSS
I0330 07:41:23.701] ------------------------------
I0330 07:41:23.701] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] 
I0330 07:41:23.701]   should reject invalid sysctls [MinimumKubeletVersion:1.21] [Conformance]
I0330 07:41:23.701]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:23.702] [BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls]
... skipping 16 lines ...
I0330 07:41:23.704]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:23.704] STEP: Creating a pod with one valid and two invalid sysctls
I0330 07:41:23.704] [AfterEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls]
I0330 07:41:23.704]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
I0330 07:41:23.704] Mar 30 06:11:53.429: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0330 07:41:23.704] STEP: Destroying namespace "sysctl-5508" for this suite.
I0330 07:41:23.705] •{"msg":"PASSED [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] should reject invalid sysctls [MinimumKubeletVersion:1.21] [Conformance]","total":340,"completed":53,"skipped":824,"failed":0}
I0330 07:41:23.705] SS
I0330 07:41:23.705] ------------------------------
I0330 07:41:23.705] [sig-node] Security Context When creating a pod with readOnlyRootFilesystem 
I0330 07:41:23.705]   should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]
I0330 07:41:23.705]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:23.706] [BeforeEach] [sig-node] Security Context
... skipping 9 lines ...
I0330 07:41:23.707] I0330 06:11:53.454069      19 reflector.go:255] Listing and watching *v1.ServiceAccount from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0330 07:41:23.707] I0330 06:11:53.456422      19 reflector.go:225] Stopping reflector *v1.ServiceAccount (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0330 07:41:23.707] [BeforeEach] [sig-node] Security Context
I0330 07:41:23.707]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46
I0330 07:41:23.708] [It] should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]
I0330 07:41:23.708]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:23.708] Mar 30 06:11:53.461: INFO: Waiting up to 5m0s for pod "busybox-readonly-false-84004cdb-f469-45e3-ad57-1c1fe19500ff" in namespace "security-context-test-4756" to be "Succeeded or Failed"
I0330 07:41:23.708] Mar 30 06:11:53.463: INFO: Pod "busybox-readonly-false-84004cdb-f469-45e3-ad57-1c1fe19500ff": Phase="Pending", Reason="", readiness=false. Elapsed: 2.317179ms
I0330 07:41:23.708] Mar 30 06:11:55.468: INFO: Pod "busybox-readonly-false-84004cdb-f469-45e3-ad57-1c1fe19500ff": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.007114525s
I0330 07:41:23.708] Mar 30 06:11:55.468: INFO: Pod "busybox-readonly-false-84004cdb-f469-45e3-ad57-1c1fe19500ff" satisfied condition "Succeeded or Failed"
I0330 07:41:23.708] [AfterEach] [sig-node] Security Context
I0330 07:41:23.709]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
I0330 07:41:23.709] Mar 30 06:11:55.468: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0330 07:41:23.709] STEP: Destroying namespace "security-context-test-4756" for this suite.
I0330 07:41:23.709] •{"msg":"PASSED [sig-node] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]","total":340,"completed":54,"skipped":826,"failed":0}
I0330 07:41:23.709] SSSSS
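[Editor's note] The repeated `Waiting up to 5m0s for pod ... to be "Succeeded or Failed" ... Elapsed: 2.317179ms` lines above come from the e2e framework's poll-until-condition helper. A minimal sketch of that wait pattern — the function name and timings are illustrative, not the framework's actual API:

```python
import time

def wait_for(condition, timeout: float = 300.0, interval: float = 2.0) -> bool:
    """Poll condition() every `interval` seconds until it returns True or
    `timeout` seconds elapse; mirrors the 'Waiting up to 5m0s for pod ...'
    loop in the log, which re-checks the pod phase roughly every 2s."""
    deadline = time.monotonic() + timeout
    while True:
        if condition():
            return True
        if time.monotonic() >= deadline:
            return False
        time.sleep(interval)

# Toy usage: a "pod" that reaches a terminal phase on the third poll.
phases = iter(["Pending", "Pending", "Succeeded"])
state = {"phase": "Pending"}

def pod_finished() -> bool:
    state["phase"] = next(phases, state["phase"])
    return state["phase"] in ("Succeeded", "Failed")

assert wait_for(pod_finished, timeout=5.0, interval=0.01)
```

Each poll that finds the pod still Pending produces one `Phase="Pending"` log line with the elapsed time, which is exactly the shape of the output above.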
I0330 07:41:23.709] ------------------------------
I0330 07:41:23.709] [sig-node] Variable Expansion 
I0330 07:41:23.709]   should allow composing env vars into new env vars [NodeConformance] [Conformance]
I0330 07:41:23.710]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:23.710] [BeforeEach] [sig-node] Variable Expansion
... skipping 8 lines ...
I0330 07:41:23.711] I0330 06:11:55.502606      19 reflector.go:219] Starting reflector *v1.ServiceAccount (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0330 07:41:23.711] I0330 06:11:55.502629      19 reflector.go:255] Listing and watching *v1.ServiceAccount from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0330 07:41:23.711] [It] should allow composing env vars into new env vars [NodeConformance] [Conformance]
I0330 07:41:23.711]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:23.712] I0330 06:11:55.505084      19 reflector.go:225] Stopping reflector *v1.ServiceAccount (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0330 07:41:23.712] STEP: Creating a pod to test env composition
I0330 07:41:23.712] Mar 30 06:11:55.510: INFO: Waiting up to 5m0s for pod "var-expansion-d1a1ea47-6a48-4827-9c0d-35ebb2f7b84a" in namespace "var-expansion-2925" to be "Succeeded or Failed"
I0330 07:41:23.712] Mar 30 06:11:55.513: INFO: Pod "var-expansion-d1a1ea47-6a48-4827-9c0d-35ebb2f7b84a": Phase="Pending", Reason="", readiness=false. Elapsed: 3.090971ms
I0330 07:41:23.713] Mar 30 06:11:57.521: INFO: Pod "var-expansion-d1a1ea47-6a48-4827-9c0d-35ebb2f7b84a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.010989714s
I0330 07:41:23.713] STEP: Saw pod success
I0330 07:41:23.713] Mar 30 06:11:57.521: INFO: Pod "var-expansion-d1a1ea47-6a48-4827-9c0d-35ebb2f7b84a" satisfied condition "Succeeded or Failed"
I0330 07:41:23.713] Mar 30 06:11:57.524: INFO: Trying to get logs from node kind-worker2 pod var-expansion-d1a1ea47-6a48-4827-9c0d-35ebb2f7b84a container dapi-container: <nil>
I0330 07:41:23.714] STEP: delete the pod
I0330 07:41:23.714] Mar 30 06:11:57.534: INFO: Waiting for pod var-expansion-d1a1ea47-6a48-4827-9c0d-35ebb2f7b84a to disappear
I0330 07:41:23.714] Mar 30 06:11:57.538: INFO: Pod var-expansion-d1a1ea47-6a48-4827-9c0d-35ebb2f7b84a no longer exists
I0330 07:41:23.714] [AfterEach] [sig-node] Variable Expansion
I0330 07:41:23.714]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
I0330 07:41:23.714] Mar 30 06:11:57.538: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0330 07:41:23.715] STEP: Destroying namespace "var-expansion-2925" for this suite.
I0330 07:41:23.715] •{"msg":"PASSED [sig-node] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance]","total":340,"completed":55,"skipped":831,"failed":0}
I0330 07:41:23.715] 
I0330 07:41:23.715] ------------------------------
I0330 07:41:23.715] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] 
I0330 07:41:23.715]   should support unsafe sysctls which are actually allowed [MinimumKubeletVersion:1.21] [Conformance]
I0330 07:41:23.715]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:23.716] [BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls]
... skipping 12 lines ...
I0330 07:41:23.718] [BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls]
I0330 07:41:23.718]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/node/sysctl.go:64
I0330 07:41:23.718] [It] should support unsafe sysctls which are actually allowed [MinimumKubeletVersion:1.21] [Conformance]
I0330 07:41:23.718]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:23.719] I0330 06:11:57.568055      19 reflector.go:225] Stopping reflector *v1.ServiceAccount (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0330 07:41:23.719] STEP: Creating a pod with the kernel.shm_rmid_forced sysctl
I0330 07:41:23.719] STEP: Watching for error events or started pod
I0330 07:41:23.719] STEP: Waiting for pod completion
I0330 07:41:23.719] STEP: Checking that the pod succeeded
I0330 07:41:23.719] STEP: Getting logs from the pod
I0330 07:41:23.720] STEP: Checking that the sysctl is actually updated
I0330 07:41:23.720] [AfterEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls]
I0330 07:41:23.720]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
I0330 07:41:23.720] Mar 30 06:11:59.585: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0330 07:41:23.720] STEP: Destroying namespace "sysctl-2545" for this suite.
I0330 07:41:23.721] •{"msg":"PASSED [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] should support unsafe sysctls which are actually allowed [MinimumKubeletVersion:1.21] [Conformance]","total":340,"completed":56,"skipped":831,"failed":0}
I0330 07:41:23.721] 
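[Editor's note] The sysctl test above sets `kernel.shm_rmid_forced` in the pod's security context and then checks the value "is actually updated" inside the container. On Linux a dotted sysctl name maps to a file under `/proc/sys`; that mapping is standard, though the helper below is mine:

```python
# Map a dotted sysctl name to its /proc/sys path, as the test's
# verification step effectively does when reading the value back.

def sysctl_path(name: str) -> str:
    """Return the /proc/sys file backing a dotted sysctl name.
    Caveat: some sysctl leaf names legitimately contain dots
    (e.g. per-interface keys for VLAN interfaces); this simple
    replace ignores that corner case."""
    return "/proc/sys/" + name.replace(".", "/")

print(sysctl_path("kernel.shm_rmid_forced"))
# /proc/sys/kernel/shm_rmid_forced
```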
I0330 07:41:23.721] ------------------------------
I0330 07:41:23.721] [sig-storage] Projected downwardAPI 
I0330 07:41:23.721]   should update labels on modification [NodeConformance] [Conformance]
I0330 07:41:23.721]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:23.721] [BeforeEach] [sig-storage] Projected downwardAPI
... skipping 24 lines ...
I0330 07:41:23.725] • [SLOW TEST:6.592 seconds]
I0330 07:41:23.726] [sig-storage] Projected downwardAPI
I0330 07:41:23.726] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
I0330 07:41:23.726]   should update labels on modification [NodeConformance] [Conformance]
I0330 07:41:23.726]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:23.726] ------------------------------
I0330 07:41:23.727] {"msg":"PASSED [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance]","total":340,"completed":57,"skipped":831,"failed":0}
I0330 07:41:23.727] SSSSSSS
I0330 07:41:23.727] ------------------------------
I0330 07:41:23.727] [sig-storage] EmptyDir volumes 
I0330 07:41:23.727]   should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
I0330 07:41:23.727]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:23.728] [BeforeEach] [sig-storage] EmptyDir volumes
... skipping 8 lines ...
I0330 07:41:23.730] I0330 06:12:06.207961      19 reflector.go:219] Starting reflector *v1.ServiceAccount (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0330 07:41:23.730] I0330 06:12:06.207971      19 reflector.go:255] Listing and watching *v1.ServiceAccount from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0330 07:41:23.731] [It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
I0330 07:41:23.731]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:23.731] STEP: Creating a pod to test emptydir 0666 on tmpfs
I0330 07:41:23.732] I0330 06:12:06.210171      19 reflector.go:225] Stopping reflector *v1.ServiceAccount (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0330 07:41:23.732] Mar 30 06:12:06.214: INFO: Waiting up to 5m0s for pod "pod-7ad656d2-55d4-4eba-9fb8-54cd892b924f" in namespace "emptydir-2088" to be "Succeeded or Failed"
I0330 07:41:23.732] Mar 30 06:12:06.216: INFO: Pod "pod-7ad656d2-55d4-4eba-9fb8-54cd892b924f": Phase="Pending", Reason="", readiness=false. Elapsed: 1.836282ms
I0330 07:41:23.733] Mar 30 06:12:08.222: INFO: Pod "pod-7ad656d2-55d4-4eba-9fb8-54cd892b924f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.007184876s
I0330 07:41:23.733] STEP: Saw pod success
I0330 07:41:23.733] Mar 30 06:12:08.222: INFO: Pod "pod-7ad656d2-55d4-4eba-9fb8-54cd892b924f" satisfied condition "Succeeded or Failed"
I0330 07:41:23.733] Mar 30 06:12:08.224: INFO: Trying to get logs from node kind-worker pod pod-7ad656d2-55d4-4eba-9fb8-54cd892b924f container test-container: <nil>
I0330 07:41:23.733] STEP: delete the pod
I0330 07:41:23.734] Mar 30 06:12:08.249: INFO: Waiting for pod pod-7ad656d2-55d4-4eba-9fb8-54cd892b924f to disappear
I0330 07:41:23.734] Mar 30 06:12:08.251: INFO: Pod pod-7ad656d2-55d4-4eba-9fb8-54cd892b924f no longer exists
I0330 07:41:23.734] [AfterEach] [sig-storage] EmptyDir volumes
I0330 07:41:23.734]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
I0330 07:41:23.735] Mar 30 06:12:08.251: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0330 07:41:23.735] STEP: Destroying namespace "emptydir-2088" for this suite.
I0330 07:41:23.735] •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":340,"completed":58,"skipped":838,"failed":0}
I0330 07:41:23.735] SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
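[Editor's note] The `(root,0666,tmpfs)` emptyDir test above verifies that a file written into a tmpfs-backed volume carries the requested 0666 permission bits. A local sketch of the same mode check, using a plain temporary file rather than a Kubernetes volume:

```python
import os
import stat
import tempfile

def mode_bits(path: str) -> int:
    """Return only the permission bits of a file (e.g. 0o666),
    stripping the file-type bits from st_mode."""
    return stat.S_IMODE(os.stat(path).st_mode)

# Create a file, set the mode the test requests, and verify it.
with tempfile.NamedTemporaryFile(delete=False) as f:
    path = f.name
os.chmod(path, 0o666)  # chmod is not filtered by the process umask
assert mode_bits(path) == 0o666
os.unlink(path)
```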
I0330 07:41:23.736] ------------------------------
I0330 07:41:23.736] [sig-network] Services 
I0330 07:41:23.736]   should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]
I0330 07:41:23.736]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:23.736] [BeforeEach] [sig-network] Services
... skipping 86 lines ...
I0330 07:41:23.759] • [SLOW TEST:45.281 seconds]
I0330 07:41:23.759] [sig-network] Services
I0330 07:41:23.759] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
I0330 07:41:23.759]   should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]
I0330 07:41:23.760]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:23.760] ------------------------------
I0330 07:41:23.760] {"msg":"PASSED [sig-network] Services should have session affinity timeout work for service with type clusterIP [LinuxOnly] [Conformance]","total":340,"completed":59,"skipped":883,"failed":0}
I0330 07:41:23.760] SS
I0330 07:41:23.761] ------------------------------
I0330 07:41:23.761] [sig-network] Services 
I0330 07:41:23.761]   should be able to change the type from ExternalName to NodePort [Conformance]
I0330 07:41:23.761]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:23.762] [BeforeEach] [sig-network] Services
... skipping 56 lines ...
I0330 07:41:23.774] • [SLOW TEST:8.985 seconds]
I0330 07:41:23.774] [sig-network] Services
I0330 07:41:23.774] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
I0330 07:41:23.774]   should be able to change the type from ExternalName to NodePort [Conformance]
I0330 07:41:23.775]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:23.775] ------------------------------
I0330 07:41:23.775] {"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","total":340,"completed":60,"skipped":885,"failed":0}
I0330 07:41:23.775] SSSSSSSSSSSSSSSSSS
I0330 07:41:23.775] ------------------------------
I0330 07:41:23.776] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
I0330 07:41:23.776]   works for CRD preserving unknown fields at the schema root [Conformance]
I0330 07:41:23.776]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:23.776] [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
... skipping 36 lines ...
I0330 07:41:23.784] • [SLOW TEST:8.075 seconds]
I0330 07:41:23.784] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
I0330 07:41:23.784] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
I0330 07:41:23.784]   works for CRD preserving unknown fields at the schema root [Conformance]
I0330 07:41:23.785]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:23.785] ------------------------------
I0330 07:41:23.785] {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance]","total":340,"completed":61,"skipped":903,"failed":0}
I0330 07:41:23.785] SSSSSSSSSSSSSSSSSSSSSSSS
I0330 07:41:23.785] ------------------------------
I0330 07:41:23.785] [sig-api-machinery] ResourceQuota 
I0330 07:41:23.786]   should create a ResourceQuota and capture the life of a replica set. [Conformance]
I0330 07:41:23.786]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:23.786] [BeforeEach] [sig-api-machinery] ResourceQuota
... skipping 25 lines ...
I0330 07:41:23.790] • [SLOW TEST:11.072 seconds]
I0330 07:41:23.790] [sig-api-machinery] ResourceQuota
I0330 07:41:23.791] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
I0330 07:41:23.791]   should create a ResourceQuota and capture the life of a replica set. [Conformance]
I0330 07:41:23.791]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:23.791] ------------------------------
I0330 07:41:23.791] {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. [Conformance]","total":340,"completed":62,"skipped":927,"failed":0}
I0330 07:41:23.791] SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
I0330 07:41:23.791] ------------------------------
I0330 07:41:23.791] [sig-scheduling] SchedulerPreemption [Serial] 
I0330 07:41:23.791]   validates basic preemption works [Conformance]
I0330 07:41:23.792]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:23.792] [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial]
... skipping 29 lines ...
I0330 07:41:23.797] • [SLOW TEST:74.147 seconds]
I0330 07:41:23.797] [sig-scheduling] SchedulerPreemption [Serial]
I0330 07:41:23.797] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
I0330 07:41:23.797]   validates basic preemption works [Conformance]
I0330 07:41:23.797]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:23.797] ------------------------------
I0330 07:41:23.797] {"msg":"PASSED [sig-scheduling] SchedulerPreemption [Serial] validates basic preemption works [Conformance]","total":340,"completed":63,"skipped":957,"failed":0}
I0330 07:41:23.798] SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
I0330 07:41:23.798] ------------------------------
I0330 07:41:23.798] [sig-api-machinery] Servers with support for Table transformation 
I0330 07:41:23.798]   should return a 406 for a backend which does not implement metadata [Conformance]
I0330 07:41:23.798]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:23.798] [BeforeEach] [sig-api-machinery] Servers with support for Table transformation
... skipping 13 lines ...
I0330 07:41:23.801] [It] should return a 406 for a backend which does not implement metadata [Conformance]
I0330 07:41:23.801]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:23.801] [AfterEach] [sig-api-machinery] Servers with support for Table transformation
I0330 07:41:23.801]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
I0330 07:41:23.801] Mar 30 06:14:35.853: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0330 07:41:23.801] STEP: Destroying namespace "tables-5113" for this suite.
I0330 07:41:23.802] •{"msg":"PASSED [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance]","total":340,"completed":64,"skipped":1025,"failed":0}
I0330 07:41:23.802] SSSSSSSSSSSSSSSSS
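[Editor's note] The Table-transformation test above exercises API content negotiation: the client asks for a `Table` rendering via the Accept header, and a backend that cannot produce one must answer 406 Not Acceptable. A sketch of the header the client sends — the group/version values are the GA `meta.k8s.io` ones documented for Table responses, and the helper function is mine:

```python
# Build the Accept header used to request Table-formatted output
# from the Kubernetes API server.

def table_accept_header(group: str = "meta.k8s.io", version: str = "v1") -> str:
    """Accept header value requesting a meta.k8s.io Table rendering."""
    return f"application/json;as=Table;v={version};g={group}"

print(table_accept_header())
# application/json;as=Table;v=v1;g=meta.k8s.io
```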
I0330 07:41:23.802] ------------------------------
I0330 07:41:23.802] [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] 
I0330 07:41:23.802]   Should recreate evicted statefulset [Conformance]
I0330 07:41:23.802]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:23.802] [BeforeEach] [sig-apps] StatefulSet
... skipping 19 lines ...
I0330 07:41:23.805] STEP: Creating pod with conflicting port in namespace statefulset-9243
I0330 07:41:23.805] STEP: Creating statefulset with conflicting port in namespace statefulset-9243
I0330 07:41:23.806] STEP: Waiting until pod test-pod will start running in namespace statefulset-9243
I0330 07:41:23.806] STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace statefulset-9243
I0330 07:41:23.806] Mar 30 06:14:39.911: INFO: Observed stateful pod in namespace: statefulset-9243, name: ss-0, uid: 646e9827-dfd9-4275-a4b5-5a7dcb454f50, status phase: Pending. Waiting for statefulset controller to delete.
I0330 07:41:23.806] I0330 06:14:39.911725      19 retrywatcher.go:247] Starting RetryWatcher.
I0330 07:41:23.806] Mar 30 06:14:40.301: INFO: Observed stateful pod in namespace: statefulset-9243, name: ss-0, uid: 646e9827-dfd9-4275-a4b5-5a7dcb454f50, status phase: Failed. Waiting for statefulset controller to delete.
I0330 07:41:23.807] Mar 30 06:14:40.308: INFO: Observed stateful pod in namespace: statefulset-9243, name: ss-0, uid: 646e9827-dfd9-4275-a4b5-5a7dcb454f50, status phase: Failed. Waiting for statefulset controller to delete.
I0330 07:41:23.807] Mar 30 06:14:40.310: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-9243
I0330 07:41:23.807] STEP: Removing pod with conflicting port in namespace statefulset-9243
I0330 07:41:23.807] I0330 06:14:40.310388      19 retrywatcher.go:147] "Stopping RetryWatcher."
I0330 07:41:23.807] I0330 06:14:40.310463      19 retrywatcher.go:275] Stopping RetryWatcher.
I0330 07:41:23.807] STEP: Waiting when stateful pod ss-0 will be recreated in namespace statefulset-9243 and will be in running state
I0330 07:41:23.808] [AfterEach] Basic StatefulSet functionality [StatefulSetBasic]
... skipping 12 lines ...
I0330 07:41:23.809] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
I0330 07:41:23.809]   Basic StatefulSet functionality [StatefulSetBasic]
I0330 07:41:23.809]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:95
I0330 07:41:23.809]     Should recreate evicted statefulset [Conformance]
I0330 07:41:23.810]     /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:23.810] ------------------------------
I0330 07:41:23.810] {"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","total":340,"completed":65,"skipped":1042,"failed":0}
I0330 07:41:23.810] SSSSSSS
I0330 07:41:23.810] ------------------------------
I0330 07:41:23.810] [sig-storage] Projected downwardAPI 
I0330 07:41:23.810]   should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
I0330 07:41:23.810]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:23.810] [BeforeEach] [sig-storage] Projected downwardAPI
... skipping 10 lines ...
I0330 07:41:23.812] [BeforeEach] [sig-storage] Projected downwardAPI
I0330 07:41:23.812]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41
I0330 07:41:23.812] I0330 06:14:54.406008      19 reflector.go:225] Stopping reflector *v1.ServiceAccount (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0330 07:41:23.813] [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
I0330 07:41:23.813]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:23.813] STEP: Creating a pod to test downward API volume plugin
I0330 07:41:23.813] Mar 30 06:14:54.416: INFO: Waiting up to 5m0s for pod "downwardapi-volume-dbf1be33-1a96-473a-8483-4bacfc07b160" in namespace "projected-349" to be "Succeeded or Failed"
I0330 07:41:23.813] Mar 30 06:14:54.419: INFO: Pod "downwardapi-volume-dbf1be33-1a96-473a-8483-4bacfc07b160": Phase="Pending", Reason="", readiness=false. Elapsed: 2.408651ms
I0330 07:41:23.813] Mar 30 06:14:56.423: INFO: Pod "downwardapi-volume-dbf1be33-1a96-473a-8483-4bacfc07b160": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.007277902s
I0330 07:41:23.813] STEP: Saw pod success
I0330 07:41:23.814] Mar 30 06:14:56.424: INFO: Pod "downwardapi-volume-dbf1be33-1a96-473a-8483-4bacfc07b160" satisfied condition "Succeeded or Failed"
I0330 07:41:23.814] Mar 30 06:14:56.426: INFO: Trying to get logs from node kind-worker2 pod downwardapi-volume-dbf1be33-1a96-473a-8483-4bacfc07b160 container client-container: <nil>
I0330 07:41:23.814] STEP: delete the pod
I0330 07:41:23.814] Mar 30 06:14:56.444: INFO: Waiting for pod downwardapi-volume-dbf1be33-1a96-473a-8483-4bacfc07b160 to disappear
I0330 07:41:23.814] Mar 30 06:14:56.446: INFO: Pod downwardapi-volume-dbf1be33-1a96-473a-8483-4bacfc07b160 no longer exists
I0330 07:41:23.814] [AfterEach] [sig-storage] Projected downwardAPI
I0330 07:41:23.814]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
I0330 07:41:23.814] Mar 30 06:14:56.446: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0330 07:41:23.815] STEP: Destroying namespace "projected-349" for this suite.
I0330 07:41:23.815] •{"msg":"PASSED [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":340,"completed":66,"skipped":1049,"failed":0}
I0330 07:41:23.815] SSSSSSSSSSSSSS
I0330 07:41:23.815] ------------------------------
I0330 07:41:23.815] [sig-apps] Daemon set [Serial] 
I0330 07:41:23.815]   should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
I0330 07:41:23.815]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:23.816] [BeforeEach] [sig-apps] Daemon set [Serial]
... skipping 87 lines ...
I0330 07:41:23.831] • [SLOW TEST:26.798 seconds]
I0330 07:41:23.831] [sig-apps] Daemon set [Serial]
I0330 07:41:23.831] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
I0330 07:41:23.831]   should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
I0330 07:41:23.831]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:23.832] ------------------------------
I0330 07:41:23.832] {"msg":"PASSED [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]","total":340,"completed":67,"skipped":1063,"failed":0}
I0330 07:41:23.832] SSSSSSSSSSSSSSSSSSSSSSSS
I0330 07:41:23.832] ------------------------------
I0330 07:41:23.832] [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] 
I0330 07:41:23.832]   Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]
I0330 07:41:23.832]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:23.833] [BeforeEach] [sig-apps] StatefulSet
... skipping 121 lines ...
I0330 07:41:23.852] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
I0330 07:41:23.852]   Basic StatefulSet functionality [StatefulSetBasic]
I0330 07:41:23.852]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:95
I0330 07:41:23.852]     Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]
I0330 07:41:23.852]     /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:23.852] ------------------------------
I0330 07:41:23.853] {"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]","total":340,"completed":68,"skipped":1087,"failed":0}
I0330 07:41:23.853] SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
I0330 07:41:23.853] ------------------------------
I0330 07:41:23.853] [sig-node] Kubelet when scheduling a busybox Pod with hostAliases 
I0330 07:41:23.853]   should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
I0330 07:41:23.853]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:23.853] [BeforeEach] [sig-node] Kubelet
... skipping 15 lines ...
I0330 07:41:23.856] Mar 30 06:16:54.882: INFO: The status of Pod busybox-host-aliases0a9cf03a-3823-4325-8a42-fa150b834441 is Pending, waiting for it to be Running (with Ready = true)
I0330 07:41:23.856] Mar 30 06:16:56.886: INFO: The status of Pod busybox-host-aliases0a9cf03a-3823-4325-8a42-fa150b834441 is Running (Ready = true)
I0330 07:41:23.856] [AfterEach] [sig-node] Kubelet
I0330 07:41:23.856]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
I0330 07:41:23.856] Mar 30 06:16:56.921: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0330 07:41:23.857] STEP: Destroying namespace "kubelet-test-5020" for this suite.
I0330 07:41:23.857] •{"msg":"PASSED [sig-node] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]","total":340,"completed":69,"skipped":1129,"failed":0}
I0330 07:41:23.857] SSSSSSSSSSSS
I0330 07:41:23.857] ------------------------------
I0330 07:41:23.857] [sig-cli] Kubectl client Update Demo 
I0330 07:41:23.857]   should create and stop a replication controller  [Conformance]
I0330 07:41:23.857]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:23.857] [BeforeEach] [sig-cli] Kubectl client
... skipping 75 lines ...
I0330 07:41:23.868] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
I0330 07:41:23.868]   Update Demo
I0330 07:41:23.868]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:291
I0330 07:41:23.868]     should create and stop a replication controller  [Conformance]
I0330 07:41:23.868]     /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:23.868] ------------------------------
I0330 07:41:23.869] {"msg":"PASSED [sig-cli] Kubectl client Update Demo should create and stop a replication controller  [Conformance]","total":340,"completed":70,"skipped":1141,"failed":0}
I0330 07:41:23.869] SSSSSS
I0330 07:41:23.869] ------------------------------
I0330 07:41:23.869] [sig-node] Probing container 
I0330 07:41:23.869]   with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
I0330 07:41:23.869]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:23.869] [BeforeEach] [sig-node] Probing container
... skipping 20 lines ...
I0330 07:41:23.872] • [SLOW TEST:60.052 seconds]
I0330 07:41:23.872] [sig-node] Probing container
I0330 07:41:23.872] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
I0330 07:41:23.873]   with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
I0330 07:41:23.873]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:23.873] ------------------------------
I0330 07:41:23.873] {"msg":"PASSED [sig-node] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]","total":340,"completed":71,"skipped":1147,"failed":0}
I0330 07:41:23.873] SSSSSSSSSSSSSSSSSSSS
I0330 07:41:23.873] ------------------------------
I0330 07:41:23.873] [sig-network] EndpointSliceMirroring 
I0330 07:41:23.873]   should mirror a custom Endpoints resource through create update and delete [Conformance]
I0330 07:41:23.874]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:23.874] [BeforeEach] [sig-network] EndpointSliceMirroring
... skipping 26 lines ...
I0330 07:41:23.879] • [SLOW TEST:6.089 seconds]
I0330 07:41:23.879] [sig-network] EndpointSliceMirroring
I0330 07:41:23.880] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
I0330 07:41:23.880]   should mirror a custom Endpoints resource through create update and delete [Conformance]
I0330 07:41:23.881]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:23.881] ------------------------------
I0330 07:41:23.881] {"msg":"PASSED [sig-network] EndpointSliceMirroring should mirror a custom Endpoints resource through create update and delete [Conformance]","total":340,"completed":72,"skipped":1167,"failed":0}
I0330 07:41:23.881] SSSSSSSSSSS
I0330 07:41:23.881] ------------------------------
I0330 07:41:23.881] [sig-network] DNS 
I0330 07:41:23.882]   should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]
I0330 07:41:23.882]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:23.882] [BeforeEach] [sig-network] DNS
... skipping 32 lines ...
I0330 07:41:23.895] Mar 30 06:18:11.519: INFO: Unable to read jessie_udp@dns-test-service.dns-3492 from pod dns-3492/dns-test-b090ab74-734d-4fee-af1b-694719bd32bd: the server could not find the requested resource (get pods dns-test-b090ab74-734d-4fee-af1b-694719bd32bd)
I0330 07:41:23.896] Mar 30 06:18:11.525: INFO: Unable to read jessie_tcp@dns-test-service.dns-3492 from pod dns-3492/dns-test-b090ab74-734d-4fee-af1b-694719bd32bd: the server could not find the requested resource (get pods dns-test-b090ab74-734d-4fee-af1b-694719bd32bd)
I0330 07:41:23.896] Mar 30 06:18:11.528: INFO: Unable to read jessie_udp@dns-test-service.dns-3492.svc from pod dns-3492/dns-test-b090ab74-734d-4fee-af1b-694719bd32bd: the server could not find the requested resource (get pods dns-test-b090ab74-734d-4fee-af1b-694719bd32bd)
I0330 07:41:23.896] Mar 30 06:18:11.530: INFO: Unable to read jessie_tcp@dns-test-service.dns-3492.svc from pod dns-3492/dns-test-b090ab74-734d-4fee-af1b-694719bd32bd: the server could not find the requested resource (get pods dns-test-b090ab74-734d-4fee-af1b-694719bd32bd)
I0330 07:41:23.897] Mar 30 06:18:11.533: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-3492.svc from pod dns-3492/dns-test-b090ab74-734d-4fee-af1b-694719bd32bd: the server could not find the requested resource (get pods dns-test-b090ab74-734d-4fee-af1b-694719bd32bd)
I0330 07:41:23.897] Mar 30 06:18:11.535: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-3492.svc from pod dns-3492/dns-test-b090ab74-734d-4fee-af1b-694719bd32bd: the server could not find the requested resource (get pods dns-test-b090ab74-734d-4fee-af1b-694719bd32bd)
I0330 07:41:23.898] Mar 30 06:18:11.549: INFO: Lookups using dns-3492/dns-test-b090ab74-734d-4fee-af1b-694719bd32bd failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-3492 wheezy_tcp@dns-test-service.dns-3492 wheezy_udp@dns-test-service.dns-3492.svc wheezy_tcp@dns-test-service.dns-3492.svc wheezy_udp@_http._tcp.dns-test-service.dns-3492.svc wheezy_tcp@_http._tcp.dns-test-service.dns-3492.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-3492 jessie_tcp@dns-test-service.dns-3492 jessie_udp@dns-test-service.dns-3492.svc jessie_tcp@dns-test-service.dns-3492.svc jessie_udp@_http._tcp.dns-test-service.dns-3492.svc jessie_tcp@_http._tcp.dns-test-service.dns-3492.svc]
I0330 07:41:23.898] 
I0330 07:41:23.898] Mar 30 06:18:16.554: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-3492/dns-test-b090ab74-734d-4fee-af1b-694719bd32bd: the server could not find the requested resource (get pods dns-test-b090ab74-734d-4fee-af1b-694719bd32bd)
I0330 07:41:23.899] Mar 30 06:18:16.558: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-3492/dns-test-b090ab74-734d-4fee-af1b-694719bd32bd: the server could not find the requested resource (get pods dns-test-b090ab74-734d-4fee-af1b-694719bd32bd)
I0330 07:41:23.899] Mar 30 06:18:16.561: INFO: Unable to read wheezy_udp@dns-test-service.dns-3492 from pod dns-3492/dns-test-b090ab74-734d-4fee-af1b-694719bd32bd: the server could not find the requested resource (get pods dns-test-b090ab74-734d-4fee-af1b-694719bd32bd)
I0330 07:41:23.900] Mar 30 06:18:16.564: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3492 from pod dns-3492/dns-test-b090ab74-734d-4fee-af1b-694719bd32bd: the server could not find the requested resource (get pods dns-test-b090ab74-734d-4fee-af1b-694719bd32bd)
I0330 07:41:23.900] Mar 30 06:18:16.567: INFO: Unable to read wheezy_udp@dns-test-service.dns-3492.svc from pod dns-3492/dns-test-b090ab74-734d-4fee-af1b-694719bd32bd: the server could not find the requested resource (get pods dns-test-b090ab74-734d-4fee-af1b-694719bd32bd)
... skipping 5 lines ...
I0330 07:41:23.902] Mar 30 06:18:16.603: INFO: Unable to read jessie_udp@dns-test-service.dns-3492 from pod dns-3492/dns-test-b090ab74-734d-4fee-af1b-694719bd32bd: the server could not find the requested resource (get pods dns-test-b090ab74-734d-4fee-af1b-694719bd32bd)
I0330 07:41:23.903] Mar 30 06:18:16.605: INFO: Unable to read jessie_tcp@dns-test-service.dns-3492 from pod dns-3492/dns-test-b090ab74-734d-4fee-af1b-694719bd32bd: the server could not find the requested resource (get pods dns-test-b090ab74-734d-4fee-af1b-694719bd32bd)
I0330 07:41:23.903] Mar 30 06:18:16.607: INFO: Unable to read jessie_udp@dns-test-service.dns-3492.svc from pod dns-3492/dns-test-b090ab74-734d-4fee-af1b-694719bd32bd: the server could not find the requested resource (get pods dns-test-b090ab74-734d-4fee-af1b-694719bd32bd)
I0330 07:41:23.903] Mar 30 06:18:16.610: INFO: Unable to read jessie_tcp@dns-test-service.dns-3492.svc from pod dns-3492/dns-test-b090ab74-734d-4fee-af1b-694719bd32bd: the server could not find the requested resource (get pods dns-test-b090ab74-734d-4fee-af1b-694719bd32bd)
I0330 07:41:23.904] Mar 30 06:18:16.613: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-3492.svc from pod dns-3492/dns-test-b090ab74-734d-4fee-af1b-694719bd32bd: the server could not find the requested resource (get pods dns-test-b090ab74-734d-4fee-af1b-694719bd32bd)
I0330 07:41:23.904] Mar 30 06:18:16.616: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-3492.svc from pod dns-3492/dns-test-b090ab74-734d-4fee-af1b-694719bd32bd: the server could not find the requested resource (get pods dns-test-b090ab74-734d-4fee-af1b-694719bd32bd)
I0330 07:41:23.905] Mar 30 06:18:16.631: INFO: Lookups using dns-3492/dns-test-b090ab74-734d-4fee-af1b-694719bd32bd failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-3492 wheezy_tcp@dns-test-service.dns-3492 wheezy_udp@dns-test-service.dns-3492.svc wheezy_tcp@dns-test-service.dns-3492.svc wheezy_udp@_http._tcp.dns-test-service.dns-3492.svc wheezy_tcp@_http._tcp.dns-test-service.dns-3492.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-3492 jessie_tcp@dns-test-service.dns-3492 jessie_udp@dns-test-service.dns-3492.svc jessie_tcp@dns-test-service.dns-3492.svc jessie_udp@_http._tcp.dns-test-service.dns-3492.svc jessie_tcp@_http._tcp.dns-test-service.dns-3492.svc]
I0330 07:41:23.906] 
I0330 07:41:23.906] Mar 30 06:18:21.555: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-3492/dns-test-b090ab74-734d-4fee-af1b-694719bd32bd: the server could not find the requested resource (get pods dns-test-b090ab74-734d-4fee-af1b-694719bd32bd)
I0330 07:41:23.906] Mar 30 06:18:21.558: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-3492/dns-test-b090ab74-734d-4fee-af1b-694719bd32bd: the server could not find the requested resource (get pods dns-test-b090ab74-734d-4fee-af1b-694719bd32bd)
I0330 07:41:23.907] Mar 30 06:18:21.561: INFO: Unable to read wheezy_udp@dns-test-service.dns-3492 from pod dns-3492/dns-test-b090ab74-734d-4fee-af1b-694719bd32bd: the server could not find the requested resource (get pods dns-test-b090ab74-734d-4fee-af1b-694719bd32bd)
I0330 07:41:23.907] Mar 30 06:18:21.564: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3492 from pod dns-3492/dns-test-b090ab74-734d-4fee-af1b-694719bd32bd: the server could not find the requested resource (get pods dns-test-b090ab74-734d-4fee-af1b-694719bd32bd)
I0330 07:41:23.908] Mar 30 06:18:21.567: INFO: Unable to read wheezy_udp@dns-test-service.dns-3492.svc from pod dns-3492/dns-test-b090ab74-734d-4fee-af1b-694719bd32bd: the server could not find the requested resource (get pods dns-test-b090ab74-734d-4fee-af1b-694719bd32bd)
... skipping 5 lines ...
I0330 07:41:23.911] Mar 30 06:18:21.598: INFO: Unable to read jessie_udp@dns-test-service.dns-3492 from pod dns-3492/dns-test-b090ab74-734d-4fee-af1b-694719bd32bd: the server could not find the requested resource (get pods dns-test-b090ab74-734d-4fee-af1b-694719bd32bd)
I0330 07:41:23.911] Mar 30 06:18:21.600: INFO: Unable to read jessie_tcp@dns-test-service.dns-3492 from pod dns-3492/dns-test-b090ab74-734d-4fee-af1b-694719bd32bd: the server could not find the requested resource (get pods dns-test-b090ab74-734d-4fee-af1b-694719bd32bd)
I0330 07:41:23.912] Mar 30 06:18:21.602: INFO: Unable to read jessie_udp@dns-test-service.dns-3492.svc from pod dns-3492/dns-test-b090ab74-734d-4fee-af1b-694719bd32bd: the server could not find the requested resource (get pods dns-test-b090ab74-734d-4fee-af1b-694719bd32bd)
I0330 07:41:23.912] Mar 30 06:18:21.605: INFO: Unable to read jessie_tcp@dns-test-service.dns-3492.svc from pod dns-3492/dns-test-b090ab74-734d-4fee-af1b-694719bd32bd: the server could not find the requested resource (get pods dns-test-b090ab74-734d-4fee-af1b-694719bd32bd)
I0330 07:41:23.913] Mar 30 06:18:21.607: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-3492.svc from pod dns-3492/dns-test-b090ab74-734d-4fee-af1b-694719bd32bd: the server could not find the requested resource (get pods dns-test-b090ab74-734d-4fee-af1b-694719bd32bd)
I0330 07:41:23.914] Mar 30 06:18:21.610: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-3492.svc from pod dns-3492/dns-test-b090ab74-734d-4fee-af1b-694719bd32bd: the server could not find the requested resource (get pods dns-test-b090ab74-734d-4fee-af1b-694719bd32bd)
I0330 07:41:23.915] Mar 30 06:18:21.624: INFO: Lookups using dns-3492/dns-test-b090ab74-734d-4fee-af1b-694719bd32bd failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-3492 wheezy_tcp@dns-test-service.dns-3492 wheezy_udp@dns-test-service.dns-3492.svc wheezy_tcp@dns-test-service.dns-3492.svc wheezy_udp@_http._tcp.dns-test-service.dns-3492.svc wheezy_tcp@_http._tcp.dns-test-service.dns-3492.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-3492 jessie_tcp@dns-test-service.dns-3492 jessie_udp@dns-test-service.dns-3492.svc jessie_tcp@dns-test-service.dns-3492.svc jessie_udp@_http._tcp.dns-test-service.dns-3492.svc jessie_tcp@_http._tcp.dns-test-service.dns-3492.svc]
I0330 07:41:23.915] 
I0330 07:41:23.915] Mar 30 06:18:26.554: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-3492/dns-test-b090ab74-734d-4fee-af1b-694719bd32bd: the server could not find the requested resource (get pods dns-test-b090ab74-734d-4fee-af1b-694719bd32bd)
I0330 07:41:23.916] Mar 30 06:18:26.557: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-3492/dns-test-b090ab74-734d-4fee-af1b-694719bd32bd: the server could not find the requested resource (get pods dns-test-b090ab74-734d-4fee-af1b-694719bd32bd)
I0330 07:41:23.916] Mar 30 06:18:26.562: INFO: Unable to read wheezy_udp@dns-test-service.dns-3492 from pod dns-3492/dns-test-b090ab74-734d-4fee-af1b-694719bd32bd: the server could not find the requested resource (get pods dns-test-b090ab74-734d-4fee-af1b-694719bd32bd)
I0330 07:41:23.916] Mar 30 06:18:26.565: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3492 from pod dns-3492/dns-test-b090ab74-734d-4fee-af1b-694719bd32bd: the server could not find the requested resource (get pods dns-test-b090ab74-734d-4fee-af1b-694719bd32bd)
I0330 07:41:23.917] Mar 30 06:18:26.567: INFO: Unable to read wheezy_udp@dns-test-service.dns-3492.svc from pod dns-3492/dns-test-b090ab74-734d-4fee-af1b-694719bd32bd: the server could not find the requested resource (get pods dns-test-b090ab74-734d-4fee-af1b-694719bd32bd)
... skipping 5 lines ...
I0330 07:41:23.918] Mar 30 06:18:26.597: INFO: Unable to read jessie_udp@dns-test-service.dns-3492 from pod dns-3492/dns-test-b090ab74-734d-4fee-af1b-694719bd32bd: the server could not find the requested resource (get pods dns-test-b090ab74-734d-4fee-af1b-694719bd32bd)
I0330 07:41:23.919] Mar 30 06:18:26.599: INFO: Unable to read jessie_tcp@dns-test-service.dns-3492 from pod dns-3492/dns-test-b090ab74-734d-4fee-af1b-694719bd32bd: the server could not find the requested resource (get pods dns-test-b090ab74-734d-4fee-af1b-694719bd32bd)
I0330 07:41:23.920] Mar 30 06:18:26.601: INFO: Unable to read jessie_udp@dns-test-service.dns-3492.svc from pod dns-3492/dns-test-b090ab74-734d-4fee-af1b-694719bd32bd: the server could not find the requested resource (get pods dns-test-b090ab74-734d-4fee-af1b-694719bd32bd)
I0330 07:41:23.920] Mar 30 06:18:26.603: INFO: Unable to read jessie_tcp@dns-test-service.dns-3492.svc from pod dns-3492/dns-test-b090ab74-734d-4fee-af1b-694719bd32bd: the server could not find the requested resource (get pods dns-test-b090ab74-734d-4fee-af1b-694719bd32bd)
I0330 07:41:23.921] Mar 30 06:18:26.605: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-3492.svc from pod dns-3492/dns-test-b090ab74-734d-4fee-af1b-694719bd32bd: the server could not find the requested resource (get pods dns-test-b090ab74-734d-4fee-af1b-694719bd32bd)
I0330 07:41:23.921] Mar 30 06:18:26.608: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-3492.svc from pod dns-3492/dns-test-b090ab74-734d-4fee-af1b-694719bd32bd: the server could not find the requested resource (get pods dns-test-b090ab74-734d-4fee-af1b-694719bd32bd)
I0330 07:41:23.922] Mar 30 06:18:26.621: INFO: Lookups using dns-3492/dns-test-b090ab74-734d-4fee-af1b-694719bd32bd failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-3492 wheezy_tcp@dns-test-service.dns-3492 wheezy_udp@dns-test-service.dns-3492.svc wheezy_tcp@dns-test-service.dns-3492.svc wheezy_udp@_http._tcp.dns-test-service.dns-3492.svc wheezy_tcp@_http._tcp.dns-test-service.dns-3492.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-3492 jessie_tcp@dns-test-service.dns-3492 jessie_udp@dns-test-service.dns-3492.svc jessie_tcp@dns-test-service.dns-3492.svc jessie_udp@_http._tcp.dns-test-service.dns-3492.svc jessie_tcp@_http._tcp.dns-test-service.dns-3492.svc]
I0330 07:41:23.923] 
I0330 07:41:23.923] Mar 30 06:18:31.554: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-3492/dns-test-b090ab74-734d-4fee-af1b-694719bd32bd: the server could not find the requested resource (get pods dns-test-b090ab74-734d-4fee-af1b-694719bd32bd)
I0330 07:41:23.924] Mar 30 06:18:31.557: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-3492/dns-test-b090ab74-734d-4fee-af1b-694719bd32bd: the server could not find the requested resource (get pods dns-test-b090ab74-734d-4fee-af1b-694719bd32bd)
I0330 07:41:23.924] Mar 30 06:18:31.560: INFO: Unable to read wheezy_udp@dns-test-service.dns-3492 from pod dns-3492/dns-test-b090ab74-734d-4fee-af1b-694719bd32bd: the server could not find the requested resource (get pods dns-test-b090ab74-734d-4fee-af1b-694719bd32bd)
I0330 07:41:23.925] Mar 30 06:18:31.562: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3492 from pod dns-3492/dns-test-b090ab74-734d-4fee-af1b-694719bd32bd: the server could not find the requested resource (get pods dns-test-b090ab74-734d-4fee-af1b-694719bd32bd)
I0330 07:41:23.925] Mar 30 06:18:31.564: INFO: Unable to read wheezy_udp@dns-test-service.dns-3492.svc from pod dns-3492/dns-test-b090ab74-734d-4fee-af1b-694719bd32bd: the server could not find the requested resource (get pods dns-test-b090ab74-734d-4fee-af1b-694719bd32bd)
... skipping 5 lines ...
I0330 07:41:23.927] Mar 30 06:18:31.595: INFO: Unable to read jessie_udp@dns-test-service.dns-3492 from pod dns-3492/dns-test-b090ab74-734d-4fee-af1b-694719bd32bd: the server could not find the requested resource (get pods dns-test-b090ab74-734d-4fee-af1b-694719bd32bd)
I0330 07:41:23.928] Mar 30 06:18:31.597: INFO: Unable to read jessie_tcp@dns-test-service.dns-3492 from pod dns-3492/dns-test-b090ab74-734d-4fee-af1b-694719bd32bd: the server could not find the requested resource (get pods dns-test-b090ab74-734d-4fee-af1b-694719bd32bd)
I0330 07:41:23.928] Mar 30 06:18:31.600: INFO: Unable to read jessie_udp@dns-test-service.dns-3492.svc from pod dns-3492/dns-test-b090ab74-734d-4fee-af1b-694719bd32bd: the server could not find the requested resource (get pods dns-test-b090ab74-734d-4fee-af1b-694719bd32bd)
I0330 07:41:23.928] Mar 30 06:18:31.602: INFO: Unable to read jessie_tcp@dns-test-service.dns-3492.svc from pod dns-3492/dns-test-b090ab74-734d-4fee-af1b-694719bd32bd: the server could not find the requested resource (get pods dns-test-b090ab74-734d-4fee-af1b-694719bd32bd)
I0330 07:41:23.929] Mar 30 06:18:31.604: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-3492.svc from pod dns-3492/dns-test-b090ab74-734d-4fee-af1b-694719bd32bd: the server could not find the requested resource (get pods dns-test-b090ab74-734d-4fee-af1b-694719bd32bd)
I0330 07:41:23.929] Mar 30 06:18:31.607: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-3492.svc from pod dns-3492/dns-test-b090ab74-734d-4fee-af1b-694719bd32bd: the server could not find the requested resource (get pods dns-test-b090ab74-734d-4fee-af1b-694719bd32bd)
I0330 07:41:23.930] Mar 30 06:18:31.619: INFO: Lookups using dns-3492/dns-test-b090ab74-734d-4fee-af1b-694719bd32bd failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-3492 wheezy_tcp@dns-test-service.dns-3492 wheezy_udp@dns-test-service.dns-3492.svc wheezy_tcp@dns-test-service.dns-3492.svc wheezy_udp@_http._tcp.dns-test-service.dns-3492.svc wheezy_tcp@_http._tcp.dns-test-service.dns-3492.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-3492 jessie_tcp@dns-test-service.dns-3492 jessie_udp@dns-test-service.dns-3492.svc jessie_tcp@dns-test-service.dns-3492.svc jessie_udp@_http._tcp.dns-test-service.dns-3492.svc jessie_tcp@_http._tcp.dns-test-service.dns-3492.svc]
I0330 07:41:23.930] 
I0330 07:41:23.930] Mar 30 06:18:36.553: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-3492/dns-test-b090ab74-734d-4fee-af1b-694719bd32bd: the server could not find the requested resource (get pods dns-test-b090ab74-734d-4fee-af1b-694719bd32bd)
I0330 07:41:23.930] Mar 30 06:18:36.556: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-3492/dns-test-b090ab74-734d-4fee-af1b-694719bd32bd: the server could not find the requested resource (get pods dns-test-b090ab74-734d-4fee-af1b-694719bd32bd)
I0330 07:41:23.931] Mar 30 06:18:36.559: INFO: Unable to read wheezy_udp@dns-test-service.dns-3492 from pod dns-3492/dns-test-b090ab74-734d-4fee-af1b-694719bd32bd: the server could not find the requested resource (get pods dns-test-b090ab74-734d-4fee-af1b-694719bd32bd)
I0330 07:41:23.931] Mar 30 06:18:36.562: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3492 from pod dns-3492/dns-test-b090ab74-734d-4fee-af1b-694719bd32bd: the server could not find the requested resource (get pods dns-test-b090ab74-734d-4fee-af1b-694719bd32bd)
I0330 07:41:23.931] Mar 30 06:18:36.565: INFO: Unable to read wheezy_udp@dns-test-service.dns-3492.svc from pod dns-3492/dns-test-b090ab74-734d-4fee-af1b-694719bd32bd: the server could not find the requested resource (get pods dns-test-b090ab74-734d-4fee-af1b-694719bd32bd)
... skipping 5 lines ...
I0330 07:41:23.933] Mar 30 06:18:36.592: INFO: Unable to read jessie_udp@dns-test-service.dns-3492 from pod dns-3492/dns-test-b090ab74-734d-4fee-af1b-694719bd32bd: the server could not find the requested resource (get pods dns-test-b090ab74-734d-4fee-af1b-694719bd32bd)
I0330 07:41:23.933] Mar 30 06:18:36.594: INFO: Unable to read jessie_tcp@dns-test-service.dns-3492 from pod dns-3492/dns-test-b090ab74-734d-4fee-af1b-694719bd32bd: the server could not find the requested resource (get pods dns-test-b090ab74-734d-4fee-af1b-694719bd32bd)
I0330 07:41:23.934] Mar 30 06:18:36.599: INFO: Unable to read jessie_udp@dns-test-service.dns-3492.svc from pod dns-3492/dns-test-b090ab74-734d-4fee-af1b-694719bd32bd: the server could not find the requested resource (get pods dns-test-b090ab74-734d-4fee-af1b-694719bd32bd)
I0330 07:41:23.934] Mar 30 06:18:36.602: INFO: Unable to read jessie_tcp@dns-test-service.dns-3492.svc from pod dns-3492/dns-test-b090ab74-734d-4fee-af1b-694719bd32bd: the server could not find the requested resource (get pods dns-test-b090ab74-734d-4fee-af1b-694719bd32bd)
I0330 07:41:23.934] Mar 30 06:18:36.604: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-3492.svc from pod dns-3492/dns-test-b090ab74-734d-4fee-af1b-694719bd32bd: the server could not find the requested resource (get pods dns-test-b090ab74-734d-4fee-af1b-694719bd32bd)
I0330 07:41:23.934] Mar 30 06:18:36.607: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-3492.svc from pod dns-3492/dns-test-b090ab74-734d-4fee-af1b-694719bd32bd: the server could not find the requested resource (get pods dns-test-b090ab74-734d-4fee-af1b-694719bd32bd)
I0330 07:41:23.935] Mar 30 06:18:36.625: INFO: Lookups using dns-3492/dns-test-b090ab74-734d-4fee-af1b-694719bd32bd failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-3492 wheezy_tcp@dns-test-service.dns-3492 wheezy_udp@dns-test-service.dns-3492.svc wheezy_tcp@dns-test-service.dns-3492.svc wheezy_udp@_http._tcp.dns-test-service.dns-3492.svc wheezy_tcp@_http._tcp.dns-test-service.dns-3492.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-3492 jessie_tcp@dns-test-service.dns-3492 jessie_udp@dns-test-service.dns-3492.svc jessie_tcp@dns-test-service.dns-3492.svc jessie_udp@_http._tcp.dns-test-service.dns-3492.svc jessie_tcp@_http._tcp.dns-test-service.dns-3492.svc]
I0330 07:41:23.935] 
I0330 07:41:23.936] Mar 30 06:18:41.576: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-3492.svc from pod dns-3492/dns-test-b090ab74-734d-4fee-af1b-694719bd32bd: the server could not find the requested resource (get pods dns-test-b090ab74-734d-4fee-af1b-694719bd32bd)
I0330 07:41:23.936] Mar 30 06:18:41.616: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-3492.svc from pod dns-3492/dns-test-b090ab74-734d-4fee-af1b-694719bd32bd: the server could not find the requested resource (get pods dns-test-b090ab74-734d-4fee-af1b-694719bd32bd)
I0330 07:41:23.936] Mar 30 06:18:41.619: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-3492.svc from pod dns-3492/dns-test-b090ab74-734d-4fee-af1b-694719bd32bd: the server could not find the requested resource (get pods dns-test-b090ab74-734d-4fee-af1b-694719bd32bd)
I0330 07:41:23.936] Mar 30 06:18:41.638: INFO: Lookups using dns-3492/dns-test-b090ab74-734d-4fee-af1b-694719bd32bd failed for: [wheezy_tcp@_http._tcp.dns-test-service.dns-3492.svc jessie_udp@_http._tcp.dns-test-service.dns-3492.svc jessie_tcp@_http._tcp.dns-test-service.dns-3492.svc]
I0330 07:41:23.937] 
I0330 07:41:23.937] Mar 30 06:18:46.622: INFO: DNS probes using dns-3492/dns-test-b090ab74-734d-4fee-af1b-694719bd32bd succeeded
I0330 07:41:23.937] 
I0330 07:41:23.937] STEP: deleting the pod
I0330 07:41:23.937] STEP: deleting the test service
I0330 07:41:23.937] STEP: deleting the test headless service
... skipping 5 lines ...
I0330 07:41:23.938] • [SLOW TEST:37.310 seconds]
I0330 07:41:23.938] [sig-network] DNS
I0330 07:41:23.938] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
I0330 07:41:23.938]   should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]
I0330 07:41:23.938]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:23.938] ------------------------------
I0330 07:41:23.938] {"msg":"PASSED [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]","total":340,"completed":73,"skipped":1178,"failed":0}
I0330 07:41:23.939] SSSSSSSSSSSSSSSSSSSSSSSS
I0330 07:41:23.939] ------------------------------
I0330 07:41:23.939] [sig-apps] CronJob 
I0330 07:41:23.939]   should support CronJob API operations [Conformance]
I0330 07:41:23.939]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:23.939] [BeforeEach] [sig-apps] CronJob
... skipping 32 lines ...
I0330 07:41:23.944] STEP: deleting
I0330 07:41:23.944] STEP: deleting a collection
I0330 07:41:23.945] [AfterEach] [sig-apps] CronJob
I0330 07:41:23.945]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
I0330 07:41:23.945] Mar 30 06:18:46.827: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0330 07:41:23.945] STEP: Destroying namespace "cronjob-1801" for this suite.
I0330 07:41:23.945] •{"msg":"PASSED [sig-apps] CronJob should support CronJob API operations [Conformance]","total":340,"completed":74,"skipped":1202,"failed":0}
I0330 07:41:23.945] SSSSSSSSSSSSSSSSSSSSSSSSS
I0330 07:41:23.945] ------------------------------
I0330 07:41:23.945] [sig-apps] CronJob 
I0330 07:41:23.946]   should schedule multiple jobs concurrently [Conformance]
I0330 07:41:23.946]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:23.946] [BeforeEach] [sig-apps] CronJob
... skipping 25 lines ...
I0330 07:41:23.949] • [SLOW TEST:74.060 seconds]
I0330 07:41:23.949] [sig-apps] CronJob
I0330 07:41:23.949] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
I0330 07:41:23.950]   should schedule multiple jobs concurrently [Conformance]
I0330 07:41:23.950]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:23.950] ------------------------------
I0330 07:41:23.950] {"msg":"PASSED [sig-apps] CronJob should schedule multiple jobs concurrently [Conformance]","total":340,"completed":75,"skipped":1227,"failed":0}
I0330 07:41:23.950] SSSSSSSSSS
I0330 07:41:23.950] ------------------------------
I0330 07:41:23.950] [sig-storage] Projected downwardAPI 
I0330 07:41:23.950]   should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
I0330 07:41:23.951]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:23.951] [BeforeEach] [sig-storage] Projected downwardAPI
... skipping 10 lines ...
I0330 07:41:23.952] I0330 06:20:00.933007      19 reflector.go:225] Stopping reflector *v1.ServiceAccount (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0330 07:41:23.952] [BeforeEach] [sig-storage] Projected downwardAPI
I0330 07:41:23.952]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41
I0330 07:41:23.953] [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
I0330 07:41:23.953]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:23.953] STEP: Creating a pod to test downward API volume plugin
I0330 07:41:23.953] Mar 30 06:20:00.939: INFO: Waiting up to 5m0s for pod "downwardapi-volume-d12fe76e-6a8f-491f-82d2-7abbbb6d2d74" in namespace "projected-4757" to be "Succeeded or Failed"
I0330 07:41:23.953] Mar 30 06:20:00.942: INFO: Pod "downwardapi-volume-d12fe76e-6a8f-491f-82d2-7abbbb6d2d74": Phase="Pending", Reason="", readiness=false. Elapsed: 2.641018ms
I0330 07:41:23.953] Mar 30 06:20:02.948: INFO: Pod "downwardapi-volume-d12fe76e-6a8f-491f-82d2-7abbbb6d2d74": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.008993617s
I0330 07:41:23.954] STEP: Saw pod success
I0330 07:41:23.954] Mar 30 06:20:02.948: INFO: Pod "downwardapi-volume-d12fe76e-6a8f-491f-82d2-7abbbb6d2d74" satisfied condition "Succeeded or Failed"
I0330 07:41:23.954] Mar 30 06:20:02.950: INFO: Trying to get logs from node kind-worker2 pod downwardapi-volume-d12fe76e-6a8f-491f-82d2-7abbbb6d2d74 container client-container: <nil>
I0330 07:41:23.954] STEP: delete the pod
I0330 07:41:23.954] Mar 30 06:20:02.973: INFO: Waiting for pod downwardapi-volume-d12fe76e-6a8f-491f-82d2-7abbbb6d2d74 to disappear
I0330 07:41:23.954] Mar 30 06:20:02.975: INFO: Pod downwardapi-volume-d12fe76e-6a8f-491f-82d2-7abbbb6d2d74 no longer exists
I0330 07:41:23.954] [AfterEach] [sig-storage] Projected downwardAPI
I0330 07:41:23.955]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
I0330 07:41:23.955] Mar 30 06:20:02.975: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0330 07:41:23.955] STEP: Destroying namespace "projected-4757" for this suite.
I0330 07:41:23.955] •{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":340,"completed":76,"skipped":1237,"failed":0}
I0330 07:41:23.956] SSSS
I0330 07:41:23.956] ------------------------------
I0330 07:41:23.956] [sig-network] EndpointSlice 
I0330 07:41:23.956]   should have Endpoints and EndpointSlices pointing to API Server [Conformance]
I0330 07:41:23.956]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:23.956] [BeforeEach] [sig-network] EndpointSlice
... skipping 13 lines ...
I0330 07:41:23.959]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:23.959] I0330 06:20:03.008537      19 reflector.go:225] Stopping reflector *v1.ServiceAccount (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0330 07:41:23.959] [AfterEach] [sig-network] EndpointSlice
I0330 07:41:23.960]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
I0330 07:41:23.960] Mar 30 06:20:03.012: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0330 07:41:23.960] STEP: Destroying namespace "endpointslice-5006" for this suite.
I0330 07:41:23.960] •{"msg":"PASSED [sig-network] EndpointSlice should have Endpoints and EndpointSlices pointing to API Server [Conformance]","total":340,"completed":77,"skipped":1241,"failed":0}
I0330 07:41:23.960] SSSSSSSSSSSSSSSSSS
I0330 07:41:23.960] ------------------------------
I0330 07:41:23.961] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
I0330 07:41:23.961]   listing validating webhooks should work [Conformance]
I0330 07:41:23.961]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:23.961] [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
... skipping 28 lines ...
I0330 07:41:23.966]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
I0330 07:41:23.966] Mar 30 06:20:06.830: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0330 07:41:23.966] STEP: Destroying namespace "webhook-1350" for this suite.
I0330 07:41:23.966] STEP: Destroying namespace "webhook-1350-markers" for this suite.
I0330 07:41:23.966] [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
I0330 07:41:23.966]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
I0330 07:41:23.967] •{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]","total":340,"completed":78,"skipped":1259,"failed":0}
I0330 07:41:23.967] SSSSSSSSSS
I0330 07:41:23.967] ------------------------------
I0330 07:41:23.967] [sig-storage] ConfigMap 
I0330 07:41:23.967]   should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
I0330 07:41:23.967]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:23.968] [BeforeEach] [sig-storage] ConfigMap
... skipping 9 lines ...
I0330 07:41:23.970] I0330 06:20:06.915952      19 reflector.go:255] Listing and watching *v1.ServiceAccount from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0330 07:41:23.970] [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
I0330 07:41:23.970]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:23.970] I0330 06:20:06.918824      19 reflector.go:225] Stopping reflector *v1.ServiceAccount (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0330 07:41:23.971] STEP: Creating configMap with name configmap-test-volume-e3d6d021-2675-4cec-ad49-eef69a3d9dab
I0330 07:41:23.971] STEP: Creating a pod to test consume configMaps
I0330 07:41:23.971] Mar 30 06:20:06.929: INFO: Waiting up to 5m0s for pod "pod-configmaps-5258df02-679d-4db4-99d7-bfbf98674359" in namespace "configmap-2339" to be "Succeeded or Failed"
I0330 07:41:23.971] Mar 30 06:20:06.933: INFO: Pod "pod-configmaps-5258df02-679d-4db4-99d7-bfbf98674359": Phase="Pending", Reason="", readiness=false. Elapsed: 4.282191ms
I0330 07:41:23.972] Mar 30 06:20:08.939: INFO: Pod "pod-configmaps-5258df02-679d-4db4-99d7-bfbf98674359": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.010269916s
I0330 07:41:23.972] STEP: Saw pod success
I0330 07:41:23.972] Mar 30 06:20:08.939: INFO: Pod "pod-configmaps-5258df02-679d-4db4-99d7-bfbf98674359" satisfied condition "Succeeded or Failed"
I0330 07:41:23.972] Mar 30 06:20:08.942: INFO: Trying to get logs from node kind-worker2 pod pod-configmaps-5258df02-679d-4db4-99d7-bfbf98674359 container configmap-volume-test: <nil>
I0330 07:41:23.972] STEP: delete the pod
I0330 07:41:23.972] Mar 30 06:20:08.954: INFO: Waiting for pod pod-configmaps-5258df02-679d-4db4-99d7-bfbf98674359 to disappear
I0330 07:41:23.973] Mar 30 06:20:08.957: INFO: Pod pod-configmaps-5258df02-679d-4db4-99d7-bfbf98674359 no longer exists
I0330 07:41:23.973] [AfterEach] [sig-storage] ConfigMap
I0330 07:41:23.973]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
I0330 07:41:23.973] Mar 30 06:20:08.957: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0330 07:41:23.973] STEP: Destroying namespace "configmap-2339" for this suite.
I0330 07:41:23.974] •{"msg":"PASSED [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":340,"completed":79,"skipped":1269,"failed":0}
I0330 07:41:23.974] SSSSSSSSSSSSSSSSSS
I0330 07:41:23.974] ------------------------------
I0330 07:41:23.974] [sig-node] NoExecuteTaintManager Multiple Pods [Serial] 
I0330 07:41:23.975]   evicts pods with minTolerationSeconds [Disruptive] [Conformance]
I0330 07:41:23.975]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:23.975] [BeforeEach] [sig-node] NoExecuteTaintManager Multiple Pods [Serial]
... skipping 34 lines ...
I0330 07:41:23.981] • [SLOW TEST:94.560 seconds]
I0330 07:41:23.981] [sig-node] NoExecuteTaintManager Multiple Pods [Serial]
I0330 07:41:23.981] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23
I0330 07:41:23.981]   evicts pods with minTolerationSeconds [Disruptive] [Conformance]
I0330 07:41:23.982]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:23.982] ------------------------------
I0330 07:41:23.982] {"msg":"PASSED [sig-node] NoExecuteTaintManager Multiple Pods [Serial] evicts pods with minTolerationSeconds [Disruptive] [Conformance]","total":340,"completed":80,"skipped":1287,"failed":0}
I0330 07:41:23.982] SSSSSSSSSSSSSSSSSSS
I0330 07:41:23.982] ------------------------------
I0330 07:41:23.982] [sig-node] Pods 
I0330 07:41:23.982]   should delete a collection of pods [Conformance]
I0330 07:41:23.982]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:23.982] [BeforeEach] [sig-node] Pods
... skipping 19 lines ...
I0330 07:41:23.985] STEP: waiting for all 3 pods to be located
I0330 07:41:23.985] STEP: waiting for all pods to be deleted
I0330 07:41:23.986] [AfterEach] [sig-node] Pods
I0330 07:41:23.986]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
I0330 07:41:23.986] Mar 30 06:21:43.607: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0330 07:41:23.986] STEP: Destroying namespace "pods-9186" for this suite.
I0330 07:41:23.986] •{"msg":"PASSED [sig-node] Pods should delete a collection of pods [Conformance]","total":340,"completed":81,"skipped":1306,"failed":0}
I0330 07:41:23.986] SSSSSSSSSS
I0330 07:41:23.986] ------------------------------
I0330 07:41:23.986] [sig-storage] Projected secret 
I0330 07:41:23.987]   should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
I0330 07:41:23.987]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:23.987] [BeforeEach] [sig-storage] Projected secret
... skipping 9 lines ...
I0330 07:41:23.989] I0330 06:21:43.635804      19 reflector.go:255] Listing and watching *v1.ServiceAccount from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0330 07:41:23.989] [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
I0330 07:41:23.989]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:23.990] I0330 06:21:43.638107      19 reflector.go:225] Stopping reflector *v1.ServiceAccount (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0330 07:41:23.990] STEP: Creating projection with secret that has name projected-secret-test-map-e19ef0be-2794-4cff-9741-87ec64a64008
I0330 07:41:23.990] STEP: Creating a pod to test consume secrets
I0330 07:41:23.990] Mar 30 06:21:43.646: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-2b8ca774-6ad3-4f5b-9fa9-0fd56c5476d1" in namespace "projected-4287" to be "Succeeded or Failed"
I0330 07:41:23.991] Mar 30 06:21:43.648: INFO: Pod "pod-projected-secrets-2b8ca774-6ad3-4f5b-9fa9-0fd56c5476d1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.133894ms
I0330 07:41:23.991] Mar 30 06:21:45.654: INFO: Pod "pod-projected-secrets-2b8ca774-6ad3-4f5b-9fa9-0fd56c5476d1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007990741s
I0330 07:41:23.991] Mar 30 06:21:47.660: INFO: Pod "pod-projected-secrets-2b8ca774-6ad3-4f5b-9fa9-0fd56c5476d1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013822997s
I0330 07:41:23.991] STEP: Saw pod success
I0330 07:41:23.992] Mar 30 06:21:47.660: INFO: Pod "pod-projected-secrets-2b8ca774-6ad3-4f5b-9fa9-0fd56c5476d1" satisfied condition "Succeeded or Failed"
I0330 07:41:23.992] Mar 30 06:21:47.663: INFO: Trying to get logs from node kind-worker2 pod pod-projected-secrets-2b8ca774-6ad3-4f5b-9fa9-0fd56c5476d1 container projected-secret-volume-test: <nil>
I0330 07:41:23.992] STEP: delete the pod
I0330 07:41:23.992] Mar 30 06:21:47.685: INFO: Waiting for pod pod-projected-secrets-2b8ca774-6ad3-4f5b-9fa9-0fd56c5476d1 to disappear
I0330 07:41:23.992] Mar 30 06:21:47.687: INFO: Pod pod-projected-secrets-2b8ca774-6ad3-4f5b-9fa9-0fd56c5476d1 no longer exists
I0330 07:41:23.992] [AfterEach] [sig-storage] Projected secret
I0330 07:41:23.993]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
I0330 07:41:23.993] Mar 30 06:21:47.687: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0330 07:41:23.993] STEP: Destroying namespace "projected-4287" for this suite.
I0330 07:41:23.993] •{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":340,"completed":82,"skipped":1316,"failed":0}
I0330 07:41:23.993] SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
I0330 07:41:23.993] ------------------------------
I0330 07:41:23.993] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
I0330 07:41:23.993]   should include webhook resources in discovery documents [Conformance]
I0330 07:41:23.994]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:23.994] [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
... skipping 31 lines ...
I0330 07:41:23.998]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
I0330 07:41:23.998] Mar 30 06:21:51.212: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0330 07:41:23.999] STEP: Destroying namespace "webhook-6485" for this suite.
I0330 07:41:23.999] STEP: Destroying namespace "webhook-6485-markers" for this suite.
I0330 07:41:23.999] [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
I0330 07:41:23.999]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
I0330 07:41:24.000] •{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance]","total":340,"completed":83,"skipped":1349,"failed":0}
I0330 07:41:24.000] SS
I0330 07:41:24.000] ------------------------------
I0330 07:41:24.000] [sig-node] Variable Expansion 
I0330 07:41:24.000]   should succeed in writing subpaths in container [Slow] [Conformance]
I0330 07:41:24.000]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:24.000] [BeforeEach] [sig-node] Variable Expansion
... skipping 32 lines ...
I0330 07:41:24.006] • [SLOW TEST:36.709 seconds]
I0330 07:41:24.006] [sig-node] Variable Expansion
I0330 07:41:24.006] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
I0330 07:41:24.007]   should succeed in writing subpaths in container [Slow] [Conformance]
I0330 07:41:24.007]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:24.007] ------------------------------
I0330 07:41:24.007] {"msg":"PASSED [sig-node] Variable Expansion should succeed in writing subpaths in container [Slow] [Conformance]","total":340,"completed":84,"skipped":1351,"failed":0}
I0330 07:41:24.007] S
I0330 07:41:24.007] ------------------------------
I0330 07:41:24.007] [sig-apps] Deployment 
I0330 07:41:24.007]   RecreateDeployment should delete old pods and create new ones [Conformance]
I0330 07:41:24.008]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:24.008] [BeforeEach] [sig-apps] Deployment
... skipping 34 lines ...
I0330 07:41:24.020] Mar 30 06:22:30.098: INFO: Pod "test-recreate-deployment-85d47dcb4-5m5d9" is not available:
I0330 07:41:24.027] &Pod{ObjectMeta:{test-recreate-deployment-85d47dcb4-5m5d9 test-recreate-deployment-85d47dcb4- deployment-3683  4f1e1b35-6381-4dee-a6bd-03e05eed2a3a 8386 0 2021-03-30 06:22:30 +0000 UTC <nil> <nil> map[name:sample-pod-3 pod-template-hash:85d47dcb4] map[] [{apps/v1 ReplicaSet test-recreate-deployment-85d47dcb4 f1889ad1-3b82-44ea-8b48-ff3ec8ad2cd1 0xc004728010 0xc004728011}] []  [{kube-controller-manager Update v1 2021-03-30 06:22:30 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f1889ad1-3b82-44ea-8b48-ff3ec8ad2cd1\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-03-30 06:22:30 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-8z44c,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:
[]VolumeMount{VolumeMount{Name:kube-api-access-8z44c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kind-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{
PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-30 06:22:30 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-30 06:22:30 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-30 06:22:30 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-30 06:22:30 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.2,PodIP:,StartTime:2021-03-30 06:22:30 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
I0330 07:41:24.027] [AfterEach] [sig-apps] Deployment
I0330 07:41:24.027]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
I0330 07:41:24.027] Mar 30 06:22:30.098: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0330 07:41:24.028] STEP: Destroying namespace "deployment-3683" for this suite.
I0330 07:41:24.028] •{"msg":"PASSED [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance]","total":340,"completed":85,"skipped":1352,"failed":0}
I0330 07:41:24.028] SSSSS
I0330 07:41:24.028] ------------------------------
I0330 07:41:24.028] [sig-storage] Projected downwardAPI 
I0330 07:41:24.028]   should provide container's memory request [NodeConformance] [Conformance]
I0330 07:41:24.028]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:24.028] [BeforeEach] [sig-storage] Projected downwardAPI
... skipping 10 lines ...
I0330 07:41:24.030] [BeforeEach] [sig-storage] Projected downwardAPI
I0330 07:41:24.030]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41
I0330 07:41:24.030] I0330 06:22:30.133813      19 reflector.go:225] Stopping reflector *v1.ServiceAccount (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0330 07:41:24.030] [It] should provide container's memory request [NodeConformance] [Conformance]
I0330 07:41:24.031]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:24.031] STEP: Creating a pod to test downward API volume plugin
I0330 07:41:24.031] Mar 30 06:22:30.140: INFO: Waiting up to 5m0s for pod "downwardapi-volume-4e2091b6-0752-4032-9311-ec6968fc43c3" in namespace "projected-2467" to be "Succeeded or Failed"
I0330 07:41:24.031] Mar 30 06:22:30.146: INFO: Pod "downwardapi-volume-4e2091b6-0752-4032-9311-ec6968fc43c3": Phase="Pending", Reason="", readiness=false. Elapsed: 5.693953ms
I0330 07:41:24.031] Mar 30 06:22:32.152: INFO: Pod "downwardapi-volume-4e2091b6-0752-4032-9311-ec6968fc43c3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.01138485s
I0330 07:41:24.031] STEP: Saw pod success
I0330 07:41:24.032] Mar 30 06:22:32.152: INFO: Pod "downwardapi-volume-4e2091b6-0752-4032-9311-ec6968fc43c3" satisfied condition "Succeeded or Failed"
I0330 07:41:24.032] Mar 30 06:22:32.154: INFO: Trying to get logs from node kind-worker2 pod downwardapi-volume-4e2091b6-0752-4032-9311-ec6968fc43c3 container client-container: <nil>
I0330 07:41:24.032] STEP: delete the pod
I0330 07:41:24.032] Mar 30 06:22:32.173: INFO: Waiting for pod downwardapi-volume-4e2091b6-0752-4032-9311-ec6968fc43c3 to disappear
I0330 07:41:24.032] Mar 30 06:22:32.175: INFO: Pod downwardapi-volume-4e2091b6-0752-4032-9311-ec6968fc43c3 no longer exists
I0330 07:41:24.032] [AfterEach] [sig-storage] Projected downwardAPI
I0330 07:41:24.032]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
I0330 07:41:24.033] Mar 30 06:22:32.175: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0330 07:41:24.033] STEP: Destroying namespace "projected-2467" for this suite.
I0330 07:41:24.033] •{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance]","total":340,"completed":86,"skipped":1357,"failed":0}
I0330 07:41:24.033] SSSS
I0330 07:41:24.033] ------------------------------
I0330 07:41:24.033] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition 
I0330 07:41:24.033]   getting/updating/patching custom resource definition status sub-resource works  [Conformance]
I0330 07:41:24.033]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:24.034] [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
... skipping 12 lines ...
I0330 07:41:24.036]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:24.036] Mar 30 06:22:32.206: INFO: >>> kubeConfig: /tmp/kubeconfig-963336331
I0330 07:41:24.036] [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
I0330 07:41:24.036]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
I0330 07:41:24.036] Mar 30 06:22:32.735: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0330 07:41:24.036] STEP: Destroying namespace "custom-resource-definition-2571" for this suite.
I0330 07:41:24.037] •{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works  [Conformance]","total":340,"completed":87,"skipped":1361,"failed":0}
I0330 07:41:24.037] SSSSSS
I0330 07:41:24.037] ------------------------------
I0330 07:41:24.037] [sig-cli] Kubectl client Kubectl label 
I0330 07:41:24.037]   should update the label on a resource  [Conformance]
I0330 07:41:24.037]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:24.037] [BeforeEach] [sig-cli] Kubectl client
... skipping 53 lines ...
I0330 07:41:24.048] Mar 30 06:22:35.768: INFO: stderr: ""
I0330 07:41:24.049] Mar 30 06:22:35.768: INFO: stdout: ""
I0330 07:41:24.049] [AfterEach] [sig-cli] Kubectl client
I0330 07:41:24.049]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
I0330 07:41:24.049] Mar 30 06:22:35.768: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0330 07:41:24.049] STEP: Destroying namespace "kubectl-3539" for this suite.
I0330 07:41:24.050] •{"msg":"PASSED [sig-cli] Kubectl client Kubectl label should update the label on a resource  [Conformance]","total":340,"completed":88,"skipped":1367,"failed":0}
I0330 07:41:24.050] SSSSSSSSSSS
I0330 07:41:24.050] ------------------------------
I0330 07:41:24.050] [sig-storage] Downward API volume 
I0330 07:41:24.050]   should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
I0330 07:41:24.050]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:24.051] [BeforeEach] [sig-storage] Downward API volume
... skipping 10 lines ...
I0330 07:41:24.053] [BeforeEach] [sig-storage] Downward API volume
I0330 07:41:24.053]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41
I0330 07:41:24.053] [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
I0330 07:41:24.053]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:24.053] STEP: Creating a pod to test downward API volume plugin
I0330 07:41:24.053] I0330 06:22:35.802286      19 reflector.go:225] Stopping reflector *v1.ServiceAccount (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0330 07:41:24.054] Mar 30 06:22:35.807: INFO: Waiting up to 5m0s for pod "downwardapi-volume-b0176026-85a7-4958-99c7-1b6793b37eb9" in namespace "downward-api-7567" to be "Succeeded or Failed"
I0330 07:41:24.054] Mar 30 06:22:35.810: INFO: Pod "downwardapi-volume-b0176026-85a7-4958-99c7-1b6793b37eb9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.154397ms
I0330 07:41:24.054] Mar 30 06:22:37.814: INFO: Pod "downwardapi-volume-b0176026-85a7-4958-99c7-1b6793b37eb9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.006444659s
I0330 07:41:24.054] STEP: Saw pod success
I0330 07:41:24.054] Mar 30 06:22:37.814: INFO: Pod "downwardapi-volume-b0176026-85a7-4958-99c7-1b6793b37eb9" satisfied condition "Succeeded or Failed"
I0330 07:41:24.054] Mar 30 06:22:37.816: INFO: Trying to get logs from node kind-worker2 pod downwardapi-volume-b0176026-85a7-4958-99c7-1b6793b37eb9 container client-container: <nil>
I0330 07:41:24.055] STEP: delete the pod
I0330 07:41:24.055] Mar 30 06:22:37.831: INFO: Waiting for pod downwardapi-volume-b0176026-85a7-4958-99c7-1b6793b37eb9 to disappear
I0330 07:41:24.055] Mar 30 06:22:37.833: INFO: Pod downwardapi-volume-b0176026-85a7-4958-99c7-1b6793b37eb9 no longer exists
I0330 07:41:24.055] [AfterEach] [sig-storage] Downward API volume
I0330 07:41:24.055]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
I0330 07:41:24.055] Mar 30 06:22:37.833: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0330 07:41:24.055] STEP: Destroying namespace "downward-api-7567" for this suite.
I0330 07:41:24.056] •{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":340,"completed":89,"skipped":1378,"failed":0}
I0330 07:41:24.056] SSSSSSSSSS
I0330 07:41:24.056] ------------------------------
I0330 07:41:24.056] [sig-api-machinery] Garbage collector 
I0330 07:41:24.056]   should orphan pods created by rc if delete options say so [Conformance]
I0330 07:41:24.056]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:24.056] [BeforeEach] [sig-api-machinery] Garbage collector
... skipping 13 lines ...
I0330 07:41:24.059] STEP: create the rc
I0330 07:41:24.059] STEP: delete the rc
I0330 07:41:24.060] STEP: wait for the rc to be deleted
I0330 07:41:24.060] STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods
I0330 07:41:24.060] STEP: Gathering metrics
I0330 07:41:24.060] W0330 06:23:17.901154      19 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled.
I0330 07:41:24.060] Mar 30 06:24:19.917: INFO: MetricsGrabber failed grab metrics. Skipping metrics gathering.
I0330 07:41:24.061] Mar 30 06:24:19.917: INFO: Deleting pod "simpletest.rc-4g5sd" in namespace "gc-8854"
I0330 07:41:24.061] Mar 30 06:24:19.933: INFO: Deleting pod "simpletest.rc-7nsbq" in namespace "gc-8854"
I0330 07:41:24.061] Mar 30 06:24:19.940: INFO: Deleting pod "simpletest.rc-c9zvp" in namespace "gc-8854"
I0330 07:41:24.061] Mar 30 06:24:19.949: INFO: Deleting pod "simpletest.rc-cps4r" in namespace "gc-8854"
I0330 07:41:24.061] Mar 30 06:24:19.957: INFO: Deleting pod "simpletest.rc-hc76t" in namespace "gc-8854"
I0330 07:41:24.061] Mar 30 06:24:19.974: INFO: Deleting pod "simpletest.rc-hfppp" in namespace "gc-8854"
... skipping 9 lines ...
I0330 07:41:24.063] • [SLOW TEST:102.250 seconds]
I0330 07:41:24.063] [sig-api-machinery] Garbage collector
I0330 07:41:24.063] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
I0330 07:41:24.063]   should orphan pods created by rc if delete options say so [Conformance]
I0330 07:41:24.063]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:24.063] ------------------------------
I0330 07:41:24.063] {"msg":"PASSED [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance]","total":340,"completed":90,"skipped":1388,"failed":0}
I0330 07:41:24.064] SSSSSSSSSSS
I0330 07:41:24.064] ------------------------------
I0330 07:41:24.064] [sig-storage] Projected configMap 
I0330 07:41:24.064]   should be consumable from pods in volume [NodeConformance] [Conformance]
I0330 07:41:24.064]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:24.064] [BeforeEach] [sig-storage] Projected configMap
... skipping 9 lines ...
I0330 07:41:24.066] I0330 06:24:20.165128      19 reflector.go:255] Listing and watching *v1.ServiceAccount from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0330 07:41:24.066] I0330 06:24:20.172134      19 reflector.go:225] Stopping reflector *v1.ServiceAccount (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0330 07:41:24.066] [It] should be consumable from pods in volume [NodeConformance] [Conformance]
I0330 07:41:24.066]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:24.066] STEP: Creating configMap with name projected-configmap-test-volume-e4893512-1bee-452f-9a46-7618666ca16d
I0330 07:41:24.066] STEP: Creating a pod to test consume configMaps
I0330 07:41:24.066] Mar 30 06:24:20.186: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-becac60e-e357-453a-851e-ca1a3289cadf" in namespace "projected-8640" to be "Succeeded or Failed"
I0330 07:41:24.067] Mar 30 06:24:20.193: INFO: Pod "pod-projected-configmaps-becac60e-e357-453a-851e-ca1a3289cadf": Phase="Pending", Reason="", readiness=false. Elapsed: 7.342903ms
I0330 07:41:24.067] Mar 30 06:24:22.199: INFO: Pod "pod-projected-configmaps-becac60e-e357-453a-851e-ca1a3289cadf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.013043237s
I0330 07:41:24.067] STEP: Saw pod success
I0330 07:41:24.067] Mar 30 06:24:22.199: INFO: Pod "pod-projected-configmaps-becac60e-e357-453a-851e-ca1a3289cadf" satisfied condition "Succeeded or Failed"
I0330 07:41:24.067] Mar 30 06:24:22.202: INFO: Trying to get logs from node kind-worker2 pod pod-projected-configmaps-becac60e-e357-453a-851e-ca1a3289cadf container agnhost-container: <nil>
I0330 07:41:24.067] STEP: delete the pod
I0330 07:41:24.068] Mar 30 06:24:22.226: INFO: Waiting for pod pod-projected-configmaps-becac60e-e357-453a-851e-ca1a3289cadf to disappear
I0330 07:41:24.068] Mar 30 06:24:22.231: INFO: Pod pod-projected-configmaps-becac60e-e357-453a-851e-ca1a3289cadf no longer exists
I0330 07:41:24.068] [AfterEach] [sig-storage] Projected configMap
I0330 07:41:24.068]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
I0330 07:41:24.068] Mar 30 06:24:22.231: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0330 07:41:24.068] STEP: Destroying namespace "projected-8640" for this suite.
I0330 07:41:24.068] •{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":340,"completed":91,"skipped":1399,"failed":0}
I0330 07:41:24.068] SSSSS
I0330 07:41:24.068] ------------------------------
I0330 07:41:24.069] [sig-network] Proxy version v1 
I0330 07:41:24.069]   A set of valid responses are returned for both pod and service ProxyWithPath [Conformance]
I0330 07:41:24.069]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:24.069] [BeforeEach] version v1
... skipping 43 lines ...
I0330 07:41:24.077] Mar 30 06:24:23.352: INFO: Starting http.Client for https://10.96.0.1:443/api/v1/namespaces/proxy-5607/services/test-service/proxy/some/path/with/PUT
I0330 07:41:24.077] Mar 30 06:24:23.356: INFO: http.Client request:PUT | StatusCode:200 | Response:foo | Method:PUT
I0330 07:41:24.078] [AfterEach] version v1
I0330 07:41:24.078]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
I0330 07:41:24.078] Mar 30 06:24:23.356: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0330 07:41:24.078] STEP: Destroying namespace "proxy-5607" for this suite.
I0330 07:41:24.078] •{"msg":"PASSED [sig-network] Proxy version v1 A set of valid responses are returned for both pod and service ProxyWithPath [Conformance]","total":340,"completed":92,"skipped":1404,"failed":0}
I0330 07:41:24.078] SSSSSSSSSSSSSSSSSSSSSSSSS
I0330 07:41:24.078] ------------------------------
I0330 07:41:24.078] [sig-scheduling] SchedulerPreemption [Serial] PreemptionExecutionPath 
I0330 07:41:24.079]   runs ReplicaSets to verify preemption running path [Conformance]
I0330 07:41:24.079]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:24.079] [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial]
... skipping 56 lines ...
I0330 07:41:24.090] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
I0330 07:41:24.090]   PreemptionExecutionPath
I0330 07:41:24.090]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:451
I0330 07:41:24.090]     runs ReplicaSets to verify preemption running path [Conformance]
I0330 07:41:24.090]     /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:24.090] ------------------------------
I0330 07:41:24.091] {"msg":"PASSED [sig-scheduling] SchedulerPreemption [Serial] PreemptionExecutionPath runs ReplicaSets to verify preemption running path [Conformance]","total":340,"completed":93,"skipped":1429,"failed":0}
I0330 07:41:24.091] SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
I0330 07:41:24.091] ------------------------------
I0330 07:41:24.091] [sig-api-machinery] Watchers 
I0330 07:41:24.091]   should be able to restart watching from the last resource version observed by the previous watch [Conformance]
I0330 07:41:24.091]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:24.091] [BeforeEach] [sig-api-machinery] Watchers
... skipping 23 lines ...
I0330 07:41:24.096] Mar 30 06:25:56.686: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed  watch-5712  b4496f67-cfbb-4c01-af3e-d0e3f7d73262 9343 0 2021-03-30 06:25:56 +0000 UTC <nil> <nil> map[watch-this-configmap:watch-closed-and-restarted] map[] [] []  [{e2e.test Update v1 2021-03-30 06:25:56 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
I0330 07:41:24.096] Mar 30 06:25:56.687: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed  watch-5712  b4496f67-cfbb-4c01-af3e-d0e3f7d73262 9344 0 2021-03-30 06:25:56 +0000 UTC <nil> <nil> map[watch-this-configmap:watch-closed-and-restarted] map[] [] []  [{e2e.test Update v1 2021-03-30 06:25:56 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
I0330 07:41:24.096] [AfterEach] [sig-api-machinery] Watchers
I0330 07:41:24.097]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
I0330 07:41:24.097] Mar 30 06:25:56.687: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0330 07:41:24.097] STEP: Destroying namespace "watch-5712" for this suite.
I0330 07:41:24.097] •{"msg":"PASSED [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance]","total":340,"completed":94,"skipped":1459,"failed":0}
I0330 07:41:24.097] SSSSSSSSSSSSSSSSSSSSSSSSSSS
I0330 07:41:24.097] ------------------------------
I0330 07:41:24.097] [sig-scheduling] SchedulerPreemption [Serial] 
I0330 07:41:24.097]   validates lower priority pod preemption by critical pod [Conformance]
I0330 07:41:24.098]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:24.098] [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial]
... skipping 29 lines ...
I0330 07:41:24.101] • [SLOW TEST:88.171 seconds]
I0330 07:41:24.101] [sig-scheduling] SchedulerPreemption [Serial]
I0330 07:41:24.102] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
I0330 07:41:24.102]   validates lower priority pod preemption by critical pod [Conformance]
I0330 07:41:24.102]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:24.102] ------------------------------
I0330 07:41:24.102] {"msg":"PASSED [sig-scheduling] SchedulerPreemption [Serial] validates lower priority pod preemption by critical pod [Conformance]","total":340,"completed":95,"skipped":1486,"failed":0}
I0330 07:41:24.102] SSSSSSSSSSS
I0330 07:41:24.102] ------------------------------
I0330 07:41:24.102] [sig-node] ConfigMap 
I0330 07:41:24.102]   should run through a ConfigMap lifecycle [Conformance]
I0330 07:41:24.103]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:24.103] [BeforeEach] [sig-node] ConfigMap
... skipping 17 lines ...
I0330 07:41:24.106] STEP: deleting the ConfigMap by collection with a label selector
I0330 07:41:24.106] STEP: listing all ConfigMaps in test namespace
I0330 07:41:24.106] [AfterEach] [sig-node] ConfigMap
I0330 07:41:24.107]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
I0330 07:41:24.107] Mar 30 06:27:24.915: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0330 07:41:24.107] STEP: Destroying namespace "configmap-8507" for this suite.
I0330 07:41:24.107] •{"msg":"PASSED [sig-node] ConfigMap should run through a ConfigMap lifecycle [Conformance]","total":340,"completed":96,"skipped":1497,"failed":0}
I0330 07:41:24.107] SSSSSSS
I0330 07:41:24.107] ------------------------------
I0330 07:41:24.108] [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] 
I0330 07:41:24.108]   should perform canary updates and phased rolling updates of template modifications [Conformance]
I0330 07:41:24.108]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:24.108] [BeforeEach] [sig-apps] StatefulSet
... skipping 55 lines ...
I0330 07:41:24.119] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
I0330 07:41:24.120]   Basic StatefulSet functionality [StatefulSetBasic]
I0330 07:41:24.120]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:95
I0330 07:41:24.120]     should perform canary updates and phased rolling updates of template modifications [Conformance]
I0330 07:41:24.121]     /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:24.121] ------------------------------
I0330 07:41:24.121] {"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]","total":340,"completed":97,"skipped":1504,"failed":0}
I0330 07:41:24.121] SSSSSSSSSSSSSSSSS
I0330 07:41:24.121] ------------------------------
I0330 07:41:24.121] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] 
I0330 07:41:24.121]   should be able to convert from CR v1 to CR v2 [Conformance]
I0330 07:41:24.122]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:24.122] [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
... skipping 33 lines ...
I0330 07:41:24.128] • [SLOW TEST:6.447 seconds]
I0330 07:41:24.128] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
I0330 07:41:24.128] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
I0330 07:41:24.129]   should be able to convert from CR v1 to CR v2 [Conformance]
I0330 07:41:24.129]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:24.129] ------------------------------
I0330 07:41:24.129] {"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","total":340,"completed":98,"skipped":1521,"failed":0}
I0330 07:41:24.129] SSSS
I0330 07:41:24.129] ------------------------------
I0330 07:41:24.129] [sig-storage] EmptyDir wrapper volumes 
I0330 07:41:24.129]   should not cause race condition when used for configmaps [Serial] [Conformance]
I0330 07:41:24.129]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:24.130] [BeforeEach] [sig-storage] EmptyDir wrapper volumes
... skipping 65 lines ...
I0330 07:41:24.146] • [SLOW TEST:72.108 seconds]
I0330 07:41:24.146] [sig-storage] EmptyDir wrapper volumes
I0330 07:41:24.146] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
I0330 07:41:24.146]   should not cause race condition when used for configmaps [Serial] [Conformance]
I0330 07:41:24.146]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:24.146] ------------------------------
I0330 07:41:24.147] {"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance]","total":340,"completed":99,"skipped":1525,"failed":0}
I0330 07:41:24.147] SSSSSSSSS
I0330 07:41:24.147] ------------------------------
I0330 07:41:24.147] [sig-node] Security Context When creating a container with runAsUser 
I0330 07:41:24.147]   should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]
I0330 07:41:24.148]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:24.148] [BeforeEach] [sig-node] Security Context
... skipping 9 lines ...
I0330 07:41:24.150] I0330 06:30:23.828368      19 reflector.go:255] Listing and watching *v1.ServiceAccount from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0330 07:41:24.150] [BeforeEach] [sig-node] Security Context
I0330 07:41:24.150]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46
I0330 07:41:24.150] I0330 06:30:23.830952      19 reflector.go:225] Stopping reflector *v1.ServiceAccount (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0330 07:41:24.150] [It] should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]
I0330 07:41:24.150]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:24.151] Mar 30 06:30:23.836: INFO: Waiting up to 5m0s for pod "busybox-user-65534-e313d088-1d7e-4b51-abf2-5a919036cca9" in namespace "security-context-test-6712" to be "Succeeded or Failed"
I0330 07:41:24.151] Mar 30 06:30:23.839: INFO: Pod "busybox-user-65534-e313d088-1d7e-4b51-abf2-5a919036cca9": Phase="Pending", Reason="", readiness=false. Elapsed: 3.077441ms
I0330 07:41:24.151] Mar 30 06:30:25.846: INFO: Pod "busybox-user-65534-e313d088-1d7e-4b51-abf2-5a919036cca9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01010022s
I0330 07:41:24.151] Mar 30 06:30:27.853: INFO: Pod "busybox-user-65534-e313d088-1d7e-4b51-abf2-5a919036cca9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01677616s
I0330 07:41:24.151] Mar 30 06:30:27.853: INFO: Pod "busybox-user-65534-e313d088-1d7e-4b51-abf2-5a919036cca9" satisfied condition "Succeeded or Failed"
I0330 07:41:24.152] [AfterEach] [sig-node] Security Context
I0330 07:41:24.152]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
I0330 07:41:24.152] Mar 30 06:30:27.853: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0330 07:41:24.152] STEP: Destroying namespace "security-context-test-6712" for this suite.
I0330 07:41:24.152] •{"msg":"PASSED [sig-node] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]","total":340,"completed":100,"skipped":1534,"failed":0}
I0330 07:41:24.152] SSSSSSSSSSSSSS
I0330 07:41:24.152] ------------------------------
I0330 07:41:24.152] [sig-network] DNS 
I0330 07:41:24.152]   should provide DNS for ExternalName services [Conformance]
I0330 07:41:24.153]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:24.153] [BeforeEach] [sig-network] DNS
... skipping 32 lines ...
I0330 07:41:24.159] STEP: retrieving the pod
I0330 07:41:24.160] STEP: looking for the results for each expected name from probers
I0330 07:41:24.160] Mar 30 06:30:31.968: INFO: File wheezy_udp@dns-test-service-3.dns-5157.svc.cluster.local from pod  dns-5157/dns-test-41ef323b-9976-4b36-9576-c3ce65150552 contains 'foo.example.com.
I0330 07:41:24.160] ' instead of 'bar.example.com.'
I0330 07:41:24.160] Mar 30 06:30:31.972: INFO: File jessie_udp@dns-test-service-3.dns-5157.svc.cluster.local from pod  dns-5157/dns-test-41ef323b-9976-4b36-9576-c3ce65150552 contains 'foo.example.com.
I0330 07:41:24.160] ' instead of 'bar.example.com.'
I0330 07:41:24.161] Mar 30 06:30:31.973: INFO: Lookups using dns-5157/dns-test-41ef323b-9976-4b36-9576-c3ce65150552 failed for: [wheezy_udp@dns-test-service-3.dns-5157.svc.cluster.local jessie_udp@dns-test-service-3.dns-5157.svc.cluster.local]
I0330 07:41:24.161] 
I0330 07:41:24.161] Mar 30 06:30:36.979: INFO: File wheezy_udp@dns-test-service-3.dns-5157.svc.cluster.local from pod  dns-5157/dns-test-41ef323b-9976-4b36-9576-c3ce65150552 contains 'foo.example.com.
I0330 07:41:24.161] ' instead of 'bar.example.com.'
I0330 07:41:24.162] Mar 30 06:30:36.983: INFO: File jessie_udp@dns-test-service-3.dns-5157.svc.cluster.local from pod  dns-5157/dns-test-41ef323b-9976-4b36-9576-c3ce65150552 contains 'foo.example.com.
I0330 07:41:24.162] ' instead of 'bar.example.com.'
I0330 07:41:24.162] Mar 30 06:30:36.983: INFO: Lookups using dns-5157/dns-test-41ef323b-9976-4b36-9576-c3ce65150552 failed for: [wheezy_udp@dns-test-service-3.dns-5157.svc.cluster.local jessie_udp@dns-test-service-3.dns-5157.svc.cluster.local]
I0330 07:41:24.162] 
I0330 07:41:24.162] Mar 30 06:30:41.977: INFO: File wheezy_udp@dns-test-service-3.dns-5157.svc.cluster.local from pod  dns-5157/dns-test-41ef323b-9976-4b36-9576-c3ce65150552 contains 'foo.example.com.
I0330 07:41:24.162] ' instead of 'bar.example.com.'
I0330 07:41:24.163] Mar 30 06:30:41.979: INFO: File jessie_udp@dns-test-service-3.dns-5157.svc.cluster.local from pod  dns-5157/dns-test-41ef323b-9976-4b36-9576-c3ce65150552 contains 'foo.example.com.
I0330 07:41:24.163] ' instead of 'bar.example.com.'
I0330 07:41:24.163] Mar 30 06:30:41.979: INFO: Lookups using dns-5157/dns-test-41ef323b-9976-4b36-9576-c3ce65150552 failed for: [wheezy_udp@dns-test-service-3.dns-5157.svc.cluster.local jessie_udp@dns-test-service-3.dns-5157.svc.cluster.local]
I0330 07:41:24.163] 
I0330 07:41:24.163] Mar 30 06:30:46.977: INFO: File wheezy_udp@dns-test-service-3.dns-5157.svc.cluster.local from pod  dns-5157/dns-test-41ef323b-9976-4b36-9576-c3ce65150552 contains 'foo.example.com.
I0330 07:41:24.163] ' instead of 'bar.example.com.'
I0330 07:41:24.164] Mar 30 06:30:46.981: INFO: File jessie_udp@dns-test-service-3.dns-5157.svc.cluster.local from pod  dns-5157/dns-test-41ef323b-9976-4b36-9576-c3ce65150552 contains 'foo.example.com.
I0330 07:41:24.164] ' instead of 'bar.example.com.'
I0330 07:41:24.164] Mar 30 06:30:46.981: INFO: Lookups using dns-5157/dns-test-41ef323b-9976-4b36-9576-c3ce65150552 failed for: [wheezy_udp@dns-test-service-3.dns-5157.svc.cluster.local jessie_udp@dns-test-service-3.dns-5157.svc.cluster.local]
I0330 07:41:24.164] 
I0330 07:41:24.164] Mar 30 06:30:51.978: INFO: File wheezy_udp@dns-test-service-3.dns-5157.svc.cluster.local from pod  dns-5157/dns-test-41ef323b-9976-4b36-9576-c3ce65150552 contains 'foo.example.com.
I0330 07:41:24.164] ' instead of 'bar.example.com.'
I0330 07:41:24.164] Mar 30 06:30:51.980: INFO: File jessie_udp@dns-test-service-3.dns-5157.svc.cluster.local from pod  dns-5157/dns-test-41ef323b-9976-4b36-9576-c3ce65150552 contains 'foo.example.com.
I0330 07:41:24.165] ' instead of 'bar.example.com.'
I0330 07:41:24.165] Mar 30 06:30:51.980: INFO: Lookups using dns-5157/dns-test-41ef323b-9976-4b36-9576-c3ce65150552 failed for: [wheezy_udp@dns-test-service-3.dns-5157.svc.cluster.local jessie_udp@dns-test-service-3.dns-5157.svc.cluster.local]
I0330 07:41:24.165] 
I0330 07:41:24.165] Mar 30 06:30:56.978: INFO: File wheezy_udp@dns-test-service-3.dns-5157.svc.cluster.local from pod  dns-5157/dns-test-41ef323b-9976-4b36-9576-c3ce65150552 contains 'foo.example.com.
I0330 07:41:24.166] ' instead of 'bar.example.com.'
I0330 07:41:24.166] Mar 30 06:30:56.981: INFO: File jessie_udp@dns-test-service-3.dns-5157.svc.cluster.local from pod  dns-5157/dns-test-41ef323b-9976-4b36-9576-c3ce65150552 contains 'foo.example.com.
I0330 07:41:24.166] ' instead of 'bar.example.com.'
I0330 07:41:24.166] Mar 30 06:30:56.981: INFO: Lookups using dns-5157/dns-test-41ef323b-9976-4b36-9576-c3ce65150552 failed for: [wheezy_udp@dns-test-service-3.dns-5157.svc.cluster.local jessie_udp@dns-test-service-3.dns-5157.svc.cluster.local]
I0330 07:41:24.166] 
I0330 07:41:24.167] Mar 30 06:31:01.981: INFO: DNS probes using dns-test-41ef323b-9976-4b36-9576-c3ce65150552 succeeded
I0330 07:41:24.167] 
I0330 07:41:24.167] STEP: deleting the pod
I0330 07:41:24.167] STEP: changing the service to type=ClusterIP
I0330 07:41:24.167] STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-5157.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-5157.svc.cluster.local; sleep 1; done
... skipping 17 lines ...
I0330 07:41:24.171] • [SLOW TEST:36.232 seconds]
I0330 07:41:24.171] [sig-network] DNS
I0330 07:41:24.171] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
I0330 07:41:24.171]   should provide DNS for ExternalName services [Conformance]
I0330 07:41:24.171]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:24.171] ------------------------------
I0330 07:41:24.172] {"msg":"PASSED [sig-network] DNS should provide DNS for ExternalName services [Conformance]","total":340,"completed":101,"skipped":1548,"failed":0}
I0330 07:41:24.172] SSSSSSS
I0330 07:41:24.172] ------------------------------
I0330 07:41:24.172] [sig-storage] Secrets 
I0330 07:41:24.172]   should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
I0330 07:41:24.173]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:24.173] [BeforeEach] [sig-storage] Secrets
... skipping 12 lines ...
I0330 07:41:24.175] I0330 06:31:04.144199      19 reflector.go:225] Stopping reflector *v1.ServiceAccount (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0330 07:41:24.175] I0330 06:31:04.153683      19 reflector.go:219] Starting reflector *v1.ServiceAccount (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0330 07:41:24.176] I0330 06:31:04.153722      19 reflector.go:255] Listing and watching *v1.ServiceAccount from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0330 07:41:24.176] I0330 06:31:04.221464      19 reflector.go:225] Stopping reflector *v1.ServiceAccount (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0330 07:41:24.176] STEP: Creating secret with name secret-test-3266c620-6894-4492-b039-f34f4439d5ec
I0330 07:41:24.176] STEP: Creating a pod to test consume secrets
I0330 07:41:24.177] Mar 30 06:31:04.233: INFO: Waiting up to 5m0s for pod "pod-secrets-bf654e87-d979-4a4a-87f4-090ba0e3d1f0" in namespace "secrets-3459" to be "Succeeded or Failed"
I0330 07:41:24.177] Mar 30 06:31:04.236: INFO: Pod "pod-secrets-bf654e87-d979-4a4a-87f4-090ba0e3d1f0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.807597ms
I0330 07:41:24.177] Mar 30 06:31:06.247: INFO: Pod "pod-secrets-bf654e87-d979-4a4a-87f4-090ba0e3d1f0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.013802764s
I0330 07:41:24.177] STEP: Saw pod success
I0330 07:41:24.178] Mar 30 06:31:06.247: INFO: Pod "pod-secrets-bf654e87-d979-4a4a-87f4-090ba0e3d1f0" satisfied condition "Succeeded or Failed"
I0330 07:41:24.178] Mar 30 06:31:06.252: INFO: Trying to get logs from node kind-worker2 pod pod-secrets-bf654e87-d979-4a4a-87f4-090ba0e3d1f0 container secret-volume-test: <nil>
I0330 07:41:24.178] STEP: delete the pod
I0330 07:41:24.178] Mar 30 06:31:06.280: INFO: Waiting for pod pod-secrets-bf654e87-d979-4a4a-87f4-090ba0e3d1f0 to disappear
I0330 07:41:24.178] Mar 30 06:31:06.282: INFO: Pod pod-secrets-bf654e87-d979-4a4a-87f4-090ba0e3d1f0 no longer exists
I0330 07:41:24.178] [AfterEach] [sig-storage] Secrets
I0330 07:41:24.178]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
I0330 07:41:24.179] Mar 30 06:31:06.282: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0330 07:41:24.179] STEP: Destroying namespace "secrets-3459" for this suite.
I0330 07:41:24.179] STEP: Destroying namespace "secret-namespace-61" for this suite.
I0330 07:41:24.179] •{"msg":"PASSED [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]","total":340,"completed":102,"skipped":1555,"failed":0}
I0330 07:41:24.179] SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
I0330 07:41:24.179] ------------------------------
I0330 07:41:24.180] [sig-api-machinery] ResourceQuota 
I0330 07:41:24.180]   should create a ResourceQuota and capture the life of a secret. [Conformance]
I0330 07:41:24.180]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:24.180] [BeforeEach] [sig-api-machinery] ResourceQuota
... skipping 26 lines ...
I0330 07:41:24.184] • [SLOW TEST:17.103 seconds]
I0330 07:41:24.184] [sig-api-machinery] ResourceQuota
I0330 07:41:24.184] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
I0330 07:41:24.185]   should create a ResourceQuota and capture the life of a secret. [Conformance]
I0330 07:41:24.185]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:24.185] ------------------------------
I0330 07:41:24.185] {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. [Conformance]","total":340,"completed":103,"skipped":1629,"failed":0}
I0330 07:41:24.186] SSSSSSSSSSSSS
I0330 07:41:24.186] ------------------------------
I0330 07:41:24.186] [sig-network] HostPort 
I0330 07:41:24.186]   validates that there is no conflict between pods with same hostPort but different hostIP and protocol [LinuxOnly] [Conformance]
I0330 07:41:24.186]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:24.186] [BeforeEach] [sig-network] HostPort
... skipping 41 lines ...
I0330 07:41:24.194] • [SLOW TEST:15.340 seconds]
I0330 07:41:24.194] [sig-network] HostPort
I0330 07:41:24.195] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
I0330 07:41:24.195]   validates that there is no conflict between pods with same hostPort but different hostIP and protocol [LinuxOnly] [Conformance]
I0330 07:41:24.195]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:24.195] ------------------------------
I0330 07:41:24.195] {"msg":"PASSED [sig-network] HostPort validates that there is no conflict between pods with same hostPort but different hostIP and protocol [LinuxOnly] [Conformance]","total":340,"completed":104,"skipped":1642,"failed":0}
I0330 07:41:24.195] SSSSS
I0330 07:41:24.195] ------------------------------
I0330 07:41:24.195] [sig-storage] EmptyDir volumes 
I0330 07:41:24.196]   should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
I0330 07:41:24.196]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:24.196] [BeforeEach] [sig-storage] EmptyDir volumes
... skipping 8 lines ...
I0330 07:41:24.197] I0330 06:31:38.770202      19 reflector.go:219] Starting reflector *v1.ServiceAccount (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0330 07:41:24.197] I0330 06:31:38.770232      19 reflector.go:255] Listing and watching *v1.ServiceAccount from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0330 07:41:24.197] I0330 06:31:38.774346      19 reflector.go:225] Stopping reflector *v1.ServiceAccount (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0330 07:41:24.198] [It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
I0330 07:41:24.198]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:24.198] STEP: Creating a pod to test emptydir 0644 on tmpfs
I0330 07:41:24.198] Mar 30 06:31:38.780: INFO: Waiting up to 5m0s for pod "pod-a49b5f48-ce5c-472d-92c3-adfc1d35a29a" in namespace "emptydir-2281" to be "Succeeded or Failed"
I0330 07:41:24.198] Mar 30 06:31:38.783: INFO: Pod "pod-a49b5f48-ce5c-472d-92c3-adfc1d35a29a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.89647ms
I0330 07:41:24.198] Mar 30 06:31:40.791: INFO: Pod "pod-a49b5f48-ce5c-472d-92c3-adfc1d35a29a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.010954007s
I0330 07:41:24.199] STEP: Saw pod success
I0330 07:41:24.199] Mar 30 06:31:40.791: INFO: Pod "pod-a49b5f48-ce5c-472d-92c3-adfc1d35a29a" satisfied condition "Succeeded or Failed"
I0330 07:41:24.199] Mar 30 06:31:40.797: INFO: Trying to get logs from node kind-worker pod pod-a49b5f48-ce5c-472d-92c3-adfc1d35a29a container test-container: <nil>
I0330 07:41:24.199] STEP: delete the pod
I0330 07:41:24.200] Mar 30 06:31:40.839: INFO: Waiting for pod pod-a49b5f48-ce5c-472d-92c3-adfc1d35a29a to disappear
I0330 07:41:24.200] Mar 30 06:31:40.843: INFO: Pod pod-a49b5f48-ce5c-472d-92c3-adfc1d35a29a no longer exists
I0330 07:41:24.200] [AfterEach] [sig-storage] EmptyDir volumes
I0330 07:41:24.200]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
I0330 07:41:24.200] Mar 30 06:31:40.843: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0330 07:41:24.200] STEP: Destroying namespace "emptydir-2281" for this suite.
I0330 07:41:24.200] •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":340,"completed":105,"skipped":1647,"failed":0}
I0330 07:41:24.200] 
I0330 07:41:24.200] ------------------------------
I0330 07:41:24.201] [sig-apps] ReplicaSet 
I0330 07:41:24.201]   Replicaset should have a working scale subresource [Conformance]
I0330 07:41:24.201]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:24.201] [BeforeEach] [sig-apps] ReplicaSet
... skipping 26 lines ...
I0330 07:41:24.204] • [SLOW TEST:5.102 seconds]
I0330 07:41:24.204] [sig-apps] ReplicaSet
I0330 07:41:24.204] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
I0330 07:41:24.205]   Replicaset should have a working scale subresource [Conformance]
I0330 07:41:24.205]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:24.205] ------------------------------
I0330 07:41:24.205] {"msg":"PASSED [sig-apps] ReplicaSet Replicaset should have a working scale subresource [Conformance]","total":340,"completed":106,"skipped":1647,"failed":0}
I0330 07:41:24.205] SSSS
I0330 07:41:24.205] ------------------------------
I0330 07:41:24.205] [sig-node] Container Runtime blackbox test when starting a container that exits 
I0330 07:41:24.205]   should run with the expected status [NodeConformance] [Conformance]
I0330 07:41:24.206]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:24.206] [BeforeEach] [sig-node] Container Runtime
... skipping 37 lines ...
I0330 07:41:24.211]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:41
I0330 07:41:24.211]     when starting a container that exits
I0330 07:41:24.211]     /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/node/runtime.go:42
I0330 07:41:24.211]       should run with the expected status [NodeConformance] [Conformance]
I0330 07:41:24.211]       /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:24.211] ------------------------------
I0330 07:41:24.211] {"msg":"PASSED [sig-node] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance]","total":340,"completed":107,"skipped":1651,"failed":0}
I0330 07:41:24.212] SSSS
I0330 07:41:24.212] ------------------------------
I0330 07:41:24.212] [sig-storage] Projected downwardAPI 
I0330 07:41:24.212]   should provide container's cpu request [NodeConformance] [Conformance]
I0330 07:41:24.212]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:24.212] [BeforeEach] [sig-storage] Projected downwardAPI
... skipping 10 lines ...
I0330 07:41:24.214] [BeforeEach] [sig-storage] Projected downwardAPI
I0330 07:41:24.214]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41
I0330 07:41:24.214] [It] should provide container's cpu request [NodeConformance] [Conformance]
I0330 07:41:24.214]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:24.214] STEP: Creating a pod to test downward API volume plugin
I0330 07:41:24.214] I0330 06:32:06.239965      19 reflector.go:225] Stopping reflector *v1.ServiceAccount (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0330 07:41:24.215] Mar 30 06:32:06.247: INFO: Waiting up to 5m0s for pod "downwardapi-volume-8ff2ab24-3e04-413a-92a9-770e6edcdc50" in namespace "projected-4812" to be "Succeeded or Failed"
I0330 07:41:24.215] Mar 30 06:32:06.251: INFO: Pod "downwardapi-volume-8ff2ab24-3e04-413a-92a9-770e6edcdc50": Phase="Pending", Reason="", readiness=false. Elapsed: 4.608242ms
I0330 07:41:24.215] Mar 30 06:32:08.260: INFO: Pod "downwardapi-volume-8ff2ab24-3e04-413a-92a9-770e6edcdc50": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.013173067s
I0330 07:41:24.215] STEP: Saw pod success
I0330 07:41:24.215] Mar 30 06:32:08.260: INFO: Pod "downwardapi-volume-8ff2ab24-3e04-413a-92a9-770e6edcdc50" satisfied condition "Succeeded or Failed"
I0330 07:41:24.215] Mar 30 06:32:08.263: INFO: Trying to get logs from node kind-worker2 pod downwardapi-volume-8ff2ab24-3e04-413a-92a9-770e6edcdc50 container client-container: <nil>
I0330 07:41:24.215] STEP: delete the pod
I0330 07:41:24.216] Mar 30 06:32:08.285: INFO: Waiting for pod downwardapi-volume-8ff2ab24-3e04-413a-92a9-770e6edcdc50 to disappear
I0330 07:41:24.216] Mar 30 06:32:08.288: INFO: Pod downwardapi-volume-8ff2ab24-3e04-413a-92a9-770e6edcdc50 no longer exists
I0330 07:41:24.216] [AfterEach] [sig-storage] Projected downwardAPI
I0330 07:41:24.216]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
I0330 07:41:24.216] Mar 30 06:32:08.288: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0330 07:41:24.216] STEP: Destroying namespace "projected-4812" for this suite.
I0330 07:41:24.216] •{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance]","total":340,"completed":108,"skipped":1655,"failed":0}
I0330 07:41:24.217] SSSSSSSSSSSS
I0330 07:41:24.217] ------------------------------
I0330 07:41:24.217] [sig-scheduling] SchedulerPredicates [Serial] 
I0330 07:41:24.217]   validates that NodeSelector is respected if not matching  [Conformance]
I0330 07:41:24.218]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:24.218] [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
... skipping 37 lines ...
I0330 07:41:24.225] [AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
I0330 07:41:24.226]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
I0330 07:41:24.226] Mar 30 06:32:09.370: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0330 07:41:24.226] STEP: Destroying namespace "sched-pred-9561" for this suite.
I0330 07:41:24.226] [AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
I0330 07:41:24.226]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81
I0330 07:41:24.226] •{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","total":340,"completed":109,"skipped":1667,"failed":0}
I0330 07:41:24.226] I0330 06:32:09.379210      19 request.go:857] Error in request: resource name may not be empty
I0330 07:41:24.227] SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
I0330 07:41:24.227] ------------------------------
I0330 07:41:24.227] [sig-storage] Projected configMap 
I0330 07:41:24.227]   should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
I0330 07:41:24.227]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:24.227] [BeforeEach] [sig-storage] Projected configMap
... skipping 9 lines ...
I0330 07:41:24.229] I0330 06:32:09.430951      19 reflector.go:255] Listing and watching *v1.ServiceAccount from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0330 07:41:24.229] [It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
I0330 07:41:24.229]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:24.229] I0330 06:32:09.433999      19 reflector.go:225] Stopping reflector *v1.ServiceAccount (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0330 07:41:24.230] STEP: Creating configMap with name projected-configmap-test-volume-map-1939b6f7-514f-4ce5-a9ae-32c3677b376f
I0330 07:41:24.230] STEP: Creating a pod to test consume configMaps
I0330 07:41:24.230] Mar 30 06:32:09.444: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-c9fc7320-8d1d-4956-a974-80e14be30e68" in namespace "projected-2432" to be "Succeeded or Failed"
I0330 07:41:24.230] Mar 30 06:32:09.449: INFO: Pod "pod-projected-configmaps-c9fc7320-8d1d-4956-a974-80e14be30e68": Phase="Pending", Reason="", readiness=false. Elapsed: 5.053848ms
I0330 07:41:24.230] Mar 30 06:32:11.453: INFO: Pod "pod-projected-configmaps-c9fc7320-8d1d-4956-a974-80e14be30e68": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.009404877s
I0330 07:41:24.230] STEP: Saw pod success
I0330 07:41:24.231] Mar 30 06:32:11.453: INFO: Pod "pod-projected-configmaps-c9fc7320-8d1d-4956-a974-80e14be30e68" satisfied condition "Succeeded or Failed"
I0330 07:41:24.231] Mar 30 06:32:11.456: INFO: Trying to get logs from node kind-worker2 pod pod-projected-configmaps-c9fc7320-8d1d-4956-a974-80e14be30e68 container agnhost-container: <nil>
I0330 07:41:24.231] STEP: delete the pod
I0330 07:41:24.231] Mar 30 06:32:11.470: INFO: Waiting for pod pod-projected-configmaps-c9fc7320-8d1d-4956-a974-80e14be30e68 to disappear
I0330 07:41:24.231] Mar 30 06:32:11.473: INFO: Pod pod-projected-configmaps-c9fc7320-8d1d-4956-a974-80e14be30e68 no longer exists
I0330 07:41:24.231] [AfterEach] [sig-storage] Projected configMap
I0330 07:41:24.231]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
I0330 07:41:24.232] Mar 30 06:32:11.473: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0330 07:41:24.232] STEP: Destroying namespace "projected-2432" for this suite.
I0330 07:41:24.232] •{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":340,"completed":110,"skipped":1700,"failed":0}
I0330 07:41:24.232] S
I0330 07:41:24.232] ------------------------------
I0330 07:41:24.232] [sig-storage] ConfigMap 
I0330 07:41:24.232]   should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
I0330 07:41:24.232]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:24.233] [BeforeEach] [sig-storage] ConfigMap
... skipping 9 lines ...
I0330 07:41:24.234] I0330 06:32:11.505496      19 reflector.go:255] Listing and watching *v1.ServiceAccount from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0330 07:41:24.234] I0330 06:32:11.508679      19 reflector.go:225] Stopping reflector *v1.ServiceAccount (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0330 07:41:24.234] [It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
I0330 07:41:24.235]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:24.235] STEP: Creating configMap with name configmap-test-volume-map-587af157-a3a7-4f6c-b460-306c2205b0a0
I0330 07:41:24.235] STEP: Creating a pod to test consume configMaps
I0330 07:41:24.235] Mar 30 06:32:11.518: INFO: Waiting up to 5m0s for pod "pod-configmaps-5a595ebf-e67b-4c68-b772-9141b1a58532" in namespace "configmap-7471" to be "Succeeded or Failed"
I0330 07:41:24.235] Mar 30 06:32:11.521: INFO: Pod "pod-configmaps-5a595ebf-e67b-4c68-b772-9141b1a58532": Phase="Pending", Reason="", readiness=false. Elapsed: 2.825389ms
I0330 07:41:24.235] Mar 30 06:32:13.525: INFO: Pod "pod-configmaps-5a595ebf-e67b-4c68-b772-9141b1a58532": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.007313838s
I0330 07:41:24.236] STEP: Saw pod success
I0330 07:41:24.236] Mar 30 06:32:13.525: INFO: Pod "pod-configmaps-5a595ebf-e67b-4c68-b772-9141b1a58532" satisfied condition "Succeeded or Failed"
I0330 07:41:24.236] Mar 30 06:32:13.528: INFO: Trying to get logs from node kind-worker2 pod pod-configmaps-5a595ebf-e67b-4c68-b772-9141b1a58532 container agnhost-container: <nil>
I0330 07:41:24.236] STEP: delete the pod
I0330 07:41:24.236] Mar 30 06:32:13.542: INFO: Waiting for pod pod-configmaps-5a595ebf-e67b-4c68-b772-9141b1a58532 to disappear
I0330 07:41:24.236] Mar 30 06:32:13.544: INFO: Pod pod-configmaps-5a595ebf-e67b-4c68-b772-9141b1a58532 no longer exists
I0330 07:41:24.236] [AfterEach] [sig-storage] ConfigMap
I0330 07:41:24.236]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
I0330 07:41:24.237] Mar 30 06:32:13.544: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0330 07:41:24.237] STEP: Destroying namespace "configmap-7471" for this suite.
I0330 07:41:24.237] •{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":340,"completed":111,"skipped":1701,"failed":0}
I0330 07:41:24.237] SSSSSSSSSSSSSSSSSSSS
I0330 07:41:24.237] ------------------------------
I0330 07:41:24.237] [sig-storage] Projected downwardAPI 
I0330 07:41:24.237]   should provide container's cpu limit [NodeConformance] [Conformance]
I0330 07:41:24.237]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:24.238] [BeforeEach] [sig-storage] Projected downwardAPI
... skipping 10 lines ...
I0330 07:41:24.240] [BeforeEach] [sig-storage] Projected downwardAPI
I0330 07:41:24.240]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41
I0330 07:41:24.241] I0330 06:32:13.579737      19 reflector.go:225] Stopping reflector *v1.ServiceAccount (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0330 07:41:24.241] [It] should provide container's cpu limit [NodeConformance] [Conformance]
I0330 07:41:24.241]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:24.241] STEP: Creating a pod to test downward API volume plugin
I0330 07:41:24.241] Mar 30 06:32:13.586: INFO: Waiting up to 5m0s for pod "downwardapi-volume-459221fd-178a-4c04-b93e-12a43588a07e" in namespace "projected-4027" to be "Succeeded or Failed"
I0330 07:41:24.242] Mar 30 06:32:13.589: INFO: Pod "downwardapi-volume-459221fd-178a-4c04-b93e-12a43588a07e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.733575ms
I0330 07:41:24.242] Mar 30 06:32:15.595: INFO: Pod "downwardapi-volume-459221fd-178a-4c04-b93e-12a43588a07e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.008952512s
I0330 07:41:24.242] STEP: Saw pod success
I0330 07:41:24.242] Mar 30 06:32:15.595: INFO: Pod "downwardapi-volume-459221fd-178a-4c04-b93e-12a43588a07e" satisfied condition "Succeeded or Failed"
I0330 07:41:24.242] Mar 30 06:32:15.598: INFO: Trying to get logs from node kind-worker2 pod downwardapi-volume-459221fd-178a-4c04-b93e-12a43588a07e container client-container: <nil>
I0330 07:41:24.243] STEP: delete the pod
I0330 07:41:24.243] Mar 30 06:32:15.614: INFO: Waiting for pod downwardapi-volume-459221fd-178a-4c04-b93e-12a43588a07e to disappear
I0330 07:41:24.243] Mar 30 06:32:15.616: INFO: Pod downwardapi-volume-459221fd-178a-4c04-b93e-12a43588a07e no longer exists
I0330 07:41:24.244] [AfterEach] [sig-storage] Projected downwardAPI
I0330 07:41:24.244]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
I0330 07:41:24.244] Mar 30 06:32:15.616: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0330 07:41:24.244] STEP: Destroying namespace "projected-4027" for this suite.
I0330 07:41:24.244] •{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance]","total":340,"completed":112,"skipped":1721,"failed":0}
I0330 07:41:24.245] 
I0330 07:41:24.245] ------------------------------
I0330 07:41:24.245] [sig-storage] Secrets 
I0330 07:41:24.245]   should be immutable if `immutable` field is set [Conformance]
I0330 07:41:24.245]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:24.245] [BeforeEach] [sig-storage] Secrets
... skipping 11 lines ...
I0330 07:41:24.247]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:24.248] I0330 06:32:15.657604      19 reflector.go:225] Stopping reflector *v1.ServiceAccount (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0330 07:41:24.248] [AfterEach] [sig-storage] Secrets
I0330 07:41:24.248]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
I0330 07:41:24.248] Mar 30 06:32:15.689: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0330 07:41:24.248] STEP: Destroying namespace "secrets-3648" for this suite.
I0330 07:41:24.248] •{"msg":"PASSED [sig-storage] Secrets should be immutable if `immutable` field is set [Conformance]","total":340,"completed":113,"skipped":1721,"failed":0}
I0330 07:41:24.249] S
I0330 07:41:24.249] ------------------------------
I0330 07:41:24.249] [sig-apps] Deployment 
I0330 07:41:24.249]   deployment should support rollover [Conformance]
I0330 07:41:24.249]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:24.249] [BeforeEach] [sig-apps] Deployment
... skipping 57 lines ...
I0330 07:41:24.277] • [SLOW TEST:21.148 seconds]
I0330 07:41:24.277] [sig-apps] Deployment
I0330 07:41:24.277] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
I0330 07:41:24.277]   deployment should support rollover [Conformance]
I0330 07:41:24.278]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:24.278] ------------------------------
I0330 07:41:24.278] {"msg":"PASSED [sig-apps] Deployment deployment should support rollover [Conformance]","total":340,"completed":114,"skipped":1722,"failed":0}
I0330 07:41:24.278] SSSSSSSSSSSSSSSSSSSSSSSSSSSS
I0330 07:41:24.278] ------------------------------
I0330 07:41:24.278] [sig-node] InitContainer [NodeConformance] 
I0330 07:41:24.278]   should invoke init containers on a RestartAlways pod [Conformance]
I0330 07:41:24.279]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:24.279] [BeforeEach] [sig-node] InitContainer [NodeConformance]
... skipping 18 lines ...
I0330 07:41:24.282] [AfterEach] [sig-node] InitContainer [NodeConformance]
I0330 07:41:24.283]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
I0330 07:41:24.283] Mar 30 06:32:40.915: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0330 07:41:24.283] I0330 06:32:40.915466      19 retrywatcher.go:147] "Stopping RetryWatcher."
I0330 07:41:24.283] I0330 06:32:40.915778      19 retrywatcher.go:275] Stopping RetryWatcher.
I0330 07:41:24.283] STEP: Destroying namespace "init-container-6364" for this suite.
I0330 07:41:24.283] •{"msg":"PASSED [sig-node] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance]","total":340,"completed":115,"skipped":1750,"failed":0}
I0330 07:41:24.283] SSSSSSSSSSSSSS
I0330 07:41:24.283] ------------------------------
I0330 07:41:24.284] [sig-apps] Job 
I0330 07:41:24.284]   should adopt matching orphans and release non-matching pods [Conformance]
I0330 07:41:24.284]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:24.284] [BeforeEach] [sig-apps] Job
... skipping 34 lines ...
I0330 07:41:24.288] • [SLOW TEST:7.099 seconds]
I0330 07:41:24.288] [sig-apps] Job
I0330 07:41:24.288] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
I0330 07:41:24.289]   should adopt matching orphans and release non-matching pods [Conformance]
I0330 07:41:24.289]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:24.289] ------------------------------
I0330 07:41:24.289] {"msg":"PASSED [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance]","total":340,"completed":116,"skipped":1764,"failed":0}
I0330 07:41:24.289] SSSSSSSSS
I0330 07:41:24.289] ------------------------------
I0330 07:41:24.289] [sig-node] Pods 
I0330 07:41:24.289]   should get a host IP [NodeConformance] [Conformance]
I0330 07:41:24.289]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:24.290] [BeforeEach] [sig-node] Pods
... skipping 17 lines ...
I0330 07:41:24.293] Mar 30 06:32:50.065: INFO: The status of Pod pod-hostip-48610228-1caf-4a6a-9403-1d4c5e424836 is Running (Ready = true)
I0330 07:41:24.294] Mar 30 06:32:50.071: INFO: Pod pod-hostip-48610228-1caf-4a6a-9403-1d4c5e424836 has hostIP: 172.18.0.2
I0330 07:41:24.294] [AfterEach] [sig-node] Pods
I0330 07:41:24.294]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
I0330 07:41:24.294] Mar 30 06:32:50.071: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0330 07:41:24.294] STEP: Destroying namespace "pods-8839" for this suite.
I0330 07:41:24.295] •{"msg":"PASSED [sig-node] Pods should get a host IP [NodeConformance] [Conformance]","total":340,"completed":117,"skipped":1773,"failed":0}
I0330 07:41:24.295] SSSSS
I0330 07:41:24.295] ------------------------------
I0330 07:41:24.295] [sig-instrumentation] Events 
I0330 07:41:24.295]   should ensure that an event can be fetched, patched, deleted, and listed [Conformance]
I0330 07:41:24.295]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:24.295] [BeforeEach] [sig-instrumentation] Events
... skipping 17 lines ...
I0330 07:41:24.298] STEP: deleting the test event
I0330 07:41:24.298] STEP: listing all events in all namespaces
I0330 07:41:24.298] [AfterEach] [sig-instrumentation] Events
I0330 07:41:24.298]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
I0330 07:41:24.298] Mar 30 06:32:50.148: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0330 07:41:24.299] STEP: Destroying namespace "events-2092" for this suite.
I0330 07:41:24.299] •{"msg":"PASSED [sig-instrumentation] Events should ensure that an event can be fetched, patched, deleted, and listed [Conformance]","total":340,"completed":118,"skipped":1778,"failed":0}
I0330 07:41:24.299] SSSSSSSSSSSSSSSSSSSSSSSSSS
I0330 07:41:24.299] ------------------------------
I0330 07:41:24.299] [sig-storage] Projected secret 
I0330 07:41:24.299]   should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
I0330 07:41:24.299]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:24.300] [BeforeEach] [sig-storage] Projected secret
... skipping 9 lines ...
I0330 07:41:24.301] I0330 06:32:50.184121      19 reflector.go:255] Listing and watching *v1.ServiceAccount from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0330 07:41:24.302] [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
I0330 07:41:24.302]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:24.302] STEP: Creating projection with secret that has name projected-secret-test-f94fb0db-8ad0-44e4-8b19-b5cafd3c2b1d
I0330 07:41:24.303] I0330 06:32:50.187354      19 reflector.go:225] Stopping reflector *v1.ServiceAccount (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0330 07:41:24.303] STEP: Creating a pod to test consume secrets
I0330 07:41:24.303] Mar 30 06:32:50.200: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-1900ee27-2859-4b39-9adc-80f311f3fe3f" in namespace "projected-3457" to be "Succeeded or Failed"
I0330 07:41:24.303] Mar 30 06:32:50.204: INFO: Pod "pod-projected-secrets-1900ee27-2859-4b39-9adc-80f311f3fe3f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.156067ms
I0330 07:41:24.303] Mar 30 06:32:52.211: INFO: Pod "pod-projected-secrets-1900ee27-2859-4b39-9adc-80f311f3fe3f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.010718281s
I0330 07:41:24.304] STEP: Saw pod success
I0330 07:41:24.304] Mar 30 06:32:52.211: INFO: Pod "pod-projected-secrets-1900ee27-2859-4b39-9adc-80f311f3fe3f" satisfied condition "Succeeded or Failed"
I0330 07:41:24.304] Mar 30 06:32:52.213: INFO: Trying to get logs from node kind-worker2 pod pod-projected-secrets-1900ee27-2859-4b39-9adc-80f311f3fe3f container projected-secret-volume-test: <nil>
I0330 07:41:24.304] STEP: delete the pod
I0330 07:41:24.304] Mar 30 06:32:52.229: INFO: Waiting for pod pod-projected-secrets-1900ee27-2859-4b39-9adc-80f311f3fe3f to disappear
I0330 07:41:24.304] Mar 30 06:32:52.232: INFO: Pod pod-projected-secrets-1900ee27-2859-4b39-9adc-80f311f3fe3f no longer exists
I0330 07:41:24.304] [AfterEach] [sig-storage] Projected secret
I0330 07:41:24.305]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
I0330 07:41:24.305] Mar 30 06:32:52.232: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0330 07:41:24.305] STEP: Destroying namespace "projected-3457" for this suite.
I0330 07:41:24.305] •{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":340,"completed":119,"skipped":1804,"failed":0}
I0330 07:41:24.305] SSSS
I0330 07:41:24.305] ------------------------------
I0330 07:41:24.305] [sig-node] RuntimeClass 
I0330 07:41:24.305]    should support RuntimeClasses API operations [Conformance]
I0330 07:41:24.305]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:24.306] [BeforeEach] [sig-node] RuntimeClass
... skipping 24 lines ...
I0330 07:41:24.308] STEP: deleting
I0330 07:41:24.308] STEP: deleting a collection
I0330 07:41:24.309] [AfterEach] [sig-node] RuntimeClass
I0330 07:41:24.309]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
I0330 07:41:24.309] Mar 30 06:32:52.328: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0330 07:41:24.309] STEP: Destroying namespace "runtimeclass-5593" for this suite.
I0330 07:41:24.309] •{"msg":"PASSED [sig-node] RuntimeClass  should support RuntimeClasses API operations [Conformance]","total":340,"completed":120,"skipped":1808,"failed":0}
I0330 07:41:24.309] SSSS
I0330 07:41:24.309] ------------------------------
I0330 07:41:24.310] [sig-cli] Kubectl client Proxy server 
I0330 07:41:24.310]   should support proxy with --port 0  [Conformance]
I0330 07:41:24.310]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:24.310] [BeforeEach] [sig-cli] Kubectl client
... skipping 16 lines ...
I0330 07:41:24.313] Mar 30 06:32:52.362: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/tmp/kubeconfig-963336331 --namespace=kubectl-437 proxy -p 0 --disable-filter'
I0330 07:41:24.313] STEP: curling proxy /api/ output
I0330 07:41:24.313] [AfterEach] [sig-cli] Kubectl client
I0330 07:41:24.313]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
I0330 07:41:24.313] Mar 30 06:32:52.447: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0330 07:41:24.314] STEP: Destroying namespace "kubectl-437" for this suite.
I0330 07:41:24.314] •{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support proxy with --port 0  [Conformance]","total":340,"completed":121,"skipped":1812,"failed":0}
I0330 07:41:24.314] SSSSSSSSSSSSSSSS
I0330 07:41:24.314] ------------------------------
I0330 07:41:24.314] [sig-auth] ServiceAccounts 
I0330 07:41:24.314]   should mount projected service account token [Conformance]
I0330 07:41:24.314]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:24.314] [BeforeEach] [sig-auth] ServiceAccounts
... skipping 8 lines ...
I0330 07:41:24.316] I0330 06:32:52.487274      19 reflector.go:219] Starting reflector *v1.ServiceAccount (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0330 07:41:24.316] I0330 06:32:52.487290      19 reflector.go:255] Listing and watching *v1.ServiceAccount from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0330 07:41:24.316] I0330 06:32:52.490625      19 reflector.go:225] Stopping reflector *v1.ServiceAccount (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0330 07:41:24.316] [It] should mount projected service account token [Conformance]
I0330 07:41:24.316]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:24.316] STEP: Creating a pod to test service account token: 
I0330 07:41:24.317] Mar 30 06:32:52.496: INFO: Waiting up to 5m0s for pod "test-pod-a7aaa49c-f259-41cb-a2f8-325b57995bf5" in namespace "svcaccounts-7729" to be "Succeeded or Failed"
I0330 07:41:24.317] Mar 30 06:32:52.499: INFO: Pod "test-pod-a7aaa49c-f259-41cb-a2f8-325b57995bf5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.960702ms
I0330 07:41:24.317] Mar 30 06:32:54.505: INFO: Pod "test-pod-a7aaa49c-f259-41cb-a2f8-325b57995bf5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.008369178s
I0330 07:41:24.317] STEP: Saw pod success
I0330 07:41:24.317] Mar 30 06:32:54.505: INFO: Pod "test-pod-a7aaa49c-f259-41cb-a2f8-325b57995bf5" satisfied condition "Succeeded or Failed"
I0330 07:41:24.317] Mar 30 06:32:54.508: INFO: Trying to get logs from node kind-worker pod test-pod-a7aaa49c-f259-41cb-a2f8-325b57995bf5 container agnhost-container: <nil>
I0330 07:41:24.317] STEP: delete the pod
I0330 07:41:24.318] Mar 30 06:32:54.523: INFO: Waiting for pod test-pod-a7aaa49c-f259-41cb-a2f8-325b57995bf5 to disappear
I0330 07:41:24.318] Mar 30 06:32:54.526: INFO: Pod test-pod-a7aaa49c-f259-41cb-a2f8-325b57995bf5 no longer exists
I0330 07:41:24.318] [AfterEach] [sig-auth] ServiceAccounts
I0330 07:41:24.318]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
I0330 07:41:24.318] Mar 30 06:32:54.526: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0330 07:41:24.318] STEP: Destroying namespace "svcaccounts-7729" for this suite.
I0330 07:41:24.318] •{"msg":"PASSED [sig-auth] ServiceAccounts should mount projected service account token [Conformance]","total":340,"completed":122,"skipped":1828,"failed":0}
I0330 07:41:24.318] 
I0330 07:41:24.319] ------------------------------
I0330 07:41:24.319] [sig-storage] ConfigMap 
I0330 07:41:24.319]   should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
I0330 07:41:24.319]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:24.319] [BeforeEach] [sig-storage] ConfigMap
... skipping 9 lines ...
I0330 07:41:24.320] I0330 06:32:54.561139      19 reflector.go:255] Listing and watching *v1.ServiceAccount from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0330 07:41:24.321] [It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
I0330 07:41:24.321]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:24.321] I0330 06:32:54.563777      19 reflector.go:225] Stopping reflector *v1.ServiceAccount (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0330 07:41:24.321] STEP: Creating configMap with name configmap-test-volume-6de5cca7-8b5a-4a5a-9524-dbab41f0ef5f
I0330 07:41:24.321] STEP: Creating a pod to test consume configMaps
I0330 07:41:24.321] Mar 30 06:32:54.573: INFO: Waiting up to 5m0s for pod "pod-configmaps-7a6f8634-7899-4c83-8591-08ddbf9ca156" in namespace "configmap-6287" to be "Succeeded or Failed"
I0330 07:41:24.322] Mar 30 06:32:54.577: INFO: Pod "pod-configmaps-7a6f8634-7899-4c83-8591-08ddbf9ca156": Phase="Pending", Reason="", readiness=false. Elapsed: 3.676686ms
I0330 07:41:24.322] Mar 30 06:32:56.584: INFO: Pod "pod-configmaps-7a6f8634-7899-4c83-8591-08ddbf9ca156": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.01081638s
I0330 07:41:24.322] STEP: Saw pod success
I0330 07:41:24.322] Mar 30 06:32:56.584: INFO: Pod "pod-configmaps-7a6f8634-7899-4c83-8591-08ddbf9ca156" satisfied condition "Succeeded or Failed"
I0330 07:41:24.322] Mar 30 06:32:56.587: INFO: Trying to get logs from node kind-worker pod pod-configmaps-7a6f8634-7899-4c83-8591-08ddbf9ca156 container agnhost-container: <nil>
I0330 07:41:24.322] STEP: delete the pod
I0330 07:41:24.322] Mar 30 06:32:56.603: INFO: Waiting for pod pod-configmaps-7a6f8634-7899-4c83-8591-08ddbf9ca156 to disappear
I0330 07:41:24.322] Mar 30 06:32:56.605: INFO: Pod pod-configmaps-7a6f8634-7899-4c83-8591-08ddbf9ca156 no longer exists
I0330 07:41:24.322] [AfterEach] [sig-storage] ConfigMap
I0330 07:41:24.323]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
I0330 07:41:24.323] Mar 30 06:32:56.605: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0330 07:41:24.323] STEP: Destroying namespace "configmap-6287" for this suite.
I0330 07:41:24.323] •{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":340,"completed":123,"skipped":1828,"failed":0}
I0330 07:41:24.323] SSSSSS
I0330 07:41:24.323] ------------------------------
I0330 07:41:24.323] [sig-network] Services 
I0330 07:41:24.324]   should provide secure master service  [Conformance]
I0330 07:41:24.324]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:24.324] [BeforeEach] [sig-network] Services
... skipping 15 lines ...
I0330 07:41:24.326] [AfterEach] [sig-network] Services
I0330 07:41:24.326]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
I0330 07:41:24.326] Mar 30 06:32:56.651: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0330 07:41:24.326] STEP: Destroying namespace "services-2243" for this suite.
I0330 07:41:24.326] [AfterEach] [sig-network] Services
I0330 07:41:24.327]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750
I0330 07:41:24.327] •{"msg":"PASSED [sig-network] Services should provide secure master service  [Conformance]","total":340,"completed":124,"skipped":1834,"failed":0}
I0330 07:41:24.327] SSSSSSSSSSSSSSSSSSSSSS
I0330 07:41:24.327] ------------------------------
I0330 07:41:24.327] [sig-node] Variable Expansion 
I0330 07:41:24.327]   should allow substituting values in a container's command [NodeConformance] [Conformance]
I0330 07:41:24.328]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:24.328] [BeforeEach] [sig-node] Variable Expansion
... skipping 8 lines ...
I0330 07:41:24.329] I0330 06:32:56.683644      19 reflector.go:219] Starting reflector *v1.ServiceAccount (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0330 07:41:24.329] I0330 06:32:56.683750      19 reflector.go:255] Listing and watching *v1.ServiceAccount from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0330 07:41:24.329] [It] should allow substituting values in a container's command [NodeConformance] [Conformance]
I0330 07:41:24.329]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:24.330] I0330 06:32:56.686307      19 reflector.go:225] Stopping reflector *v1.ServiceAccount (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0330 07:41:24.330] STEP: Creating a pod to test substitution in container's command
I0330 07:41:24.330] Mar 30 06:32:56.692: INFO: Waiting up to 5m0s for pod "var-expansion-1be144eb-94c6-4f2a-912d-94b96bae89a4" in namespace "var-expansion-7980" to be "Succeeded or Failed"
I0330 07:41:24.330] Mar 30 06:32:56.694: INFO: Pod "var-expansion-1be144eb-94c6-4f2a-912d-94b96bae89a4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.324955ms
I0330 07:41:24.330] Mar 30 06:32:58.702: INFO: Pod "var-expansion-1be144eb-94c6-4f2a-912d-94b96bae89a4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.009720406s
I0330 07:41:24.330] STEP: Saw pod success
I0330 07:41:24.331] Mar 30 06:32:58.702: INFO: Pod "var-expansion-1be144eb-94c6-4f2a-912d-94b96bae89a4" satisfied condition "Succeeded or Failed"
I0330 07:41:24.331] Mar 30 06:32:58.704: INFO: Trying to get logs from node kind-worker pod var-expansion-1be144eb-94c6-4f2a-912d-94b96bae89a4 container dapi-container: <nil>
I0330 07:41:24.331] STEP: delete the pod
I0330 07:41:24.331] Mar 30 06:32:58.718: INFO: Waiting for pod var-expansion-1be144eb-94c6-4f2a-912d-94b96bae89a4 to disappear
I0330 07:41:24.332] Mar 30 06:32:58.720: INFO: Pod var-expansion-1be144eb-94c6-4f2a-912d-94b96bae89a4 no longer exists
I0330 07:41:24.332] [AfterEach] [sig-node] Variable Expansion
I0330 07:41:24.332]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
I0330 07:41:24.332] Mar 30 06:32:58.720: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0330 07:41:24.332] STEP: Destroying namespace "var-expansion-7980" for this suite.
I0330 07:41:24.332] •{"msg":"PASSED [sig-node] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance]","total":340,"completed":125,"skipped":1856,"failed":0}
I0330 07:41:24.332] SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
I0330 07:41:24.333] ------------------------------
I0330 07:41:24.333] [sig-storage] ConfigMap 
I0330 07:41:24.333]   optional updates should be reflected in volume [NodeConformance] [Conformance]
I0330 07:41:24.333]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:24.333] [BeforeEach] [sig-storage] ConfigMap
... skipping 20 lines ...
I0330 07:41:24.337] STEP: Creating configMap with name cm-test-opt-create-c9f64192-e382-4323-bb2a-390def4372f7
I0330 07:41:24.337] STEP: waiting to observe update in volume
I0330 07:41:24.337] [AfterEach] [sig-storage] ConfigMap
I0330 07:41:24.337]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
I0330 07:41:24.337] Mar 30 06:33:02.840: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0330 07:41:24.337] STEP: Destroying namespace "configmap-6959" for this suite.
I0330 07:41:24.337] •{"msg":"PASSED [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":340,"completed":126,"skipped":1921,"failed":0}
I0330 07:41:24.337] SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
I0330 07:41:24.337] ------------------------------
I0330 07:41:24.338] [sig-node] Pods Extended Pods Set QOS Class 
I0330 07:41:24.338]   should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]
I0330 07:41:24.338]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:24.338] [BeforeEach] [sig-node] Pods Extended
... skipping 16 lines ...
I0330 07:41:24.341] STEP: submitting the pod to kubernetes
I0330 07:41:24.341] STEP: verifying QOS class is set on the pod
I0330 07:41:24.341] [AfterEach] [sig-node] Pods Extended
I0330 07:41:24.341]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
I0330 07:41:24.341] Mar 30 06:33:02.904: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0330 07:41:24.341] STEP: Destroying namespace "pods-3183" for this suite.
I0330 07:41:24.342] •{"msg":"PASSED [sig-node] Pods Extended Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]","total":340,"completed":127,"skipped":1957,"failed":0}
I0330 07:41:24.342] SSSS
I0330 07:41:24.342] ------------------------------
I0330 07:41:24.342] [sig-storage] EmptyDir volumes 
I0330 07:41:24.342]   should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
I0330 07:41:24.342]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:24.343] [BeforeEach] [sig-storage] EmptyDir volumes
... skipping 8 lines ...
I0330 07:41:24.344] I0330 06:33:02.948873      19 reflector.go:219] Starting reflector *v1.ServiceAccount (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0330 07:41:24.345] I0330 06:33:02.948887      19 reflector.go:255] Listing and watching *v1.ServiceAccount from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0330 07:41:24.345] [It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
I0330 07:41:24.345]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:24.345] STEP: Creating a pod to test emptydir 0666 on tmpfs
I0330 07:41:24.346] I0330 06:33:02.951979      19 reflector.go:225] Stopping reflector *v1.ServiceAccount (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0330 07:41:24.346] Mar 30 06:33:02.959: INFO: Waiting up to 5m0s for pod "pod-e747f412-7ea5-416d-b939-fde1d3a5225a" in namespace "emptydir-7735" to be "Succeeded or Failed"
I0330 07:41:24.346] Mar 30 06:33:02.962: INFO: Pod "pod-e747f412-7ea5-416d-b939-fde1d3a5225a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.945238ms
I0330 07:41:24.346] Mar 30 06:33:04.967: INFO: Pod "pod-e747f412-7ea5-416d-b939-fde1d3a5225a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.008250098s
I0330 07:41:24.346] STEP: Saw pod success
I0330 07:41:24.346] Mar 30 06:33:04.967: INFO: Pod "pod-e747f412-7ea5-416d-b939-fde1d3a5225a" satisfied condition "Succeeded or Failed"
I0330 07:41:24.347] Mar 30 06:33:04.970: INFO: Trying to get logs from node kind-worker2 pod pod-e747f412-7ea5-416d-b939-fde1d3a5225a container test-container: <nil>
I0330 07:41:24.347] STEP: delete the pod
I0330 07:41:24.347] Mar 30 06:33:04.986: INFO: Waiting for pod pod-e747f412-7ea5-416d-b939-fde1d3a5225a to disappear
I0330 07:41:24.347] Mar 30 06:33:04.988: INFO: Pod pod-e747f412-7ea5-416d-b939-fde1d3a5225a no longer exists
I0330 07:41:24.347] [AfterEach] [sig-storage] EmptyDir volumes
I0330 07:41:24.347]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
I0330 07:41:24.347] Mar 30 06:33:04.988: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0330 07:41:24.347] STEP: Destroying namespace "emptydir-7735" for this suite.
I0330 07:41:24.348] •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":340,"completed":128,"skipped":1961,"failed":0}
I0330 07:41:24.348] SSSSSSSS
I0330 07:41:24.348] ------------------------------
I0330 07:41:24.348] [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] 
I0330 07:41:24.348]   Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]
I0330 07:41:24.348]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:24.348] [BeforeEach] [sig-apps] StatefulSet
... skipping 140 lines ...
I0330 07:41:24.373] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
I0330 07:41:24.374]   Basic StatefulSet functionality [StatefulSetBasic]
I0330 07:41:24.374]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:95
I0330 07:41:24.374]     Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]
I0330 07:41:24.374]     /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:24.374] ------------------------------
I0330 07:41:24.375] {"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]","total":340,"completed":129,"skipped":1969,"failed":0}
I0330 07:41:24.375] SSSSSSSSS
I0330 07:41:24.375] ------------------------------
I0330 07:41:24.375] [sig-apps] ReplicaSet 
I0330 07:41:24.375]   should adopt matching pods on creation and release no longer matching pods [Conformance]
I0330 07:41:24.375]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:24.375] [BeforeEach] [sig-apps] ReplicaSet
... skipping 19 lines ...
I0330 07:41:24.378] Mar 30 06:33:59.827: INFO: Pod name pod-adoption-release: Found 1 pods out of 1
I0330 07:41:24.378] STEP: Then the pod is released
I0330 07:41:24.378] [AfterEach] [sig-apps] ReplicaSet
I0330 07:41:24.378]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
I0330 07:41:24.378] Mar 30 06:34:00.846: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0330 07:41:24.378] STEP: Destroying namespace "replicaset-8379" for this suite.
I0330 07:41:24.379] •{"msg":"PASSED [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance]","total":340,"completed":130,"skipped":1978,"failed":0}
I0330 07:41:24.379] 
I0330 07:41:24.379] ------------------------------
I0330 07:41:24.379] [sig-api-machinery] Watchers 
I0330 07:41:24.379]   should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
I0330 07:41:24.380]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:24.380] [BeforeEach] [sig-api-machinery] Watchers
... skipping 35 lines ...
I0330 07:41:24.390] • [SLOW TEST:10.090 seconds]
I0330 07:41:24.390] [sig-api-machinery] Watchers
I0330 07:41:24.390] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
I0330 07:41:24.390]   should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
I0330 07:41:24.390]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:24.390] ------------------------------
I0330 07:41:24.391] {"msg":"PASSED [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance]","total":340,"completed":131,"skipped":1978,"failed":0}
I0330 07:41:24.391] SS
I0330 07:41:24.391] ------------------------------
I0330 07:41:24.391] [sig-storage] Subpath Atomic writer volumes 
I0330 07:41:24.391]   should support subpaths with projected pod [LinuxOnly] [Conformance]
I0330 07:41:24.391]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:24.391] [BeforeEach] [sig-storage] Subpath
... skipping 12 lines ...
I0330 07:41:24.394] STEP: Setting up data
I0330 07:41:24.394] I0330 06:34:10.973195      19 reflector.go:225] Stopping reflector *v1.ServiceAccount (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0330 07:41:24.395] [It] should support subpaths with projected pod [LinuxOnly] [Conformance]
I0330 07:41:24.395]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:24.395] STEP: Creating pod pod-subpath-test-projected-zfkw
I0330 07:41:24.395] STEP: Creating a pod to test atomic-volume-subpath
I0330 07:41:24.395] Mar 30 06:34:10.986: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-zfkw" in namespace "subpath-9968" to be "Succeeded or Failed"
I0330 07:41:24.396] Mar 30 06:34:10.989: INFO: Pod "pod-subpath-test-projected-zfkw": Phase="Pending", Reason="", readiness=false. Elapsed: 3.327693ms
I0330 07:41:24.396] Mar 30 06:34:12.995: INFO: Pod "pod-subpath-test-projected-zfkw": Phase="Running", Reason="", readiness=true. Elapsed: 2.009545319s
I0330 07:41:24.396] Mar 30 06:34:15.001: INFO: Pod "pod-subpath-test-projected-zfkw": Phase="Running", Reason="", readiness=true. Elapsed: 4.015349399s
I0330 07:41:24.397] Mar 30 06:34:17.007: INFO: Pod "pod-subpath-test-projected-zfkw": Phase="Running", Reason="", readiness=true. Elapsed: 6.021643307s
I0330 07:41:24.397] Mar 30 06:34:19.014: INFO: Pod "pod-subpath-test-projected-zfkw": Phase="Running", Reason="", readiness=true. Elapsed: 8.027698432s
I0330 07:41:24.397] Mar 30 06:34:21.021: INFO: Pod "pod-subpath-test-projected-zfkw": Phase="Running", Reason="", readiness=true. Elapsed: 10.035393206s
I0330 07:41:24.397] Mar 30 06:34:23.028: INFO: Pod "pod-subpath-test-projected-zfkw": Phase="Running", Reason="", readiness=true. Elapsed: 12.042546088s
I0330 07:41:24.398] Mar 30 06:34:25.034: INFO: Pod "pod-subpath-test-projected-zfkw": Phase="Running", Reason="", readiness=true. Elapsed: 14.047907759s
I0330 07:41:24.398] Mar 30 06:34:27.041: INFO: Pod "pod-subpath-test-projected-zfkw": Phase="Running", Reason="", readiness=true. Elapsed: 16.054950987s
I0330 07:41:24.398] Mar 30 06:34:29.048: INFO: Pod "pod-subpath-test-projected-zfkw": Phase="Running", Reason="", readiness=true. Elapsed: 18.062511123s
I0330 07:41:24.398] Mar 30 06:34:31.055: INFO: Pod "pod-subpath-test-projected-zfkw": Phase="Running", Reason="", readiness=true. Elapsed: 20.069191751s
I0330 07:41:24.399] Mar 30 06:34:33.062: INFO: Pod "pod-subpath-test-projected-zfkw": Phase="Succeeded", Reason="", readiness=false. Elapsed: 22.076379392s
I0330 07:41:24.399] STEP: Saw pod success
I0330 07:41:24.399] Mar 30 06:34:33.062: INFO: Pod "pod-subpath-test-projected-zfkw" satisfied condition "Succeeded or Failed"
I0330 07:41:24.399] Mar 30 06:34:33.065: INFO: Trying to get logs from node kind-worker2 pod pod-subpath-test-projected-zfkw container test-container-subpath-projected-zfkw: <nil>
I0330 07:41:24.399] STEP: delete the pod
I0330 07:41:24.399] Mar 30 06:34:33.081: INFO: Waiting for pod pod-subpath-test-projected-zfkw to disappear
I0330 07:41:24.400] Mar 30 06:34:33.084: INFO: Pod pod-subpath-test-projected-zfkw no longer exists
I0330 07:41:24.400] STEP: Deleting pod pod-subpath-test-projected-zfkw
I0330 07:41:24.400] Mar 30 06:34:33.084: INFO: Deleting pod "pod-subpath-test-projected-zfkw" in namespace "subpath-9968"
... skipping 7 lines ...
I0330 07:41:24.401] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
I0330 07:41:24.401]   Atomic writer volumes
I0330 07:41:24.401]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
I0330 07:41:24.401]     should support subpaths with projected pod [LinuxOnly] [Conformance]
I0330 07:41:24.401]     /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:24.401] ------------------------------
I0330 07:41:24.402] {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance]","total":340,"completed":132,"skipped":1980,"failed":0}
I0330 07:41:24.402] SSSSSSSSS
I0330 07:41:24.402] ------------------------------
I0330 07:41:24.402] [sig-storage] Projected secret 
I0330 07:41:24.402]   should be consumable from pods in volume [NodeConformance] [Conformance]
I0330 07:41:24.402]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:24.403] [BeforeEach] [sig-storage] Projected secret
... skipping 9 lines ...
I0330 07:41:24.407] I0330 06:34:33.124591      19 reflector.go:255] Listing and watching *v1.ServiceAccount from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0330 07:41:24.407] I0330 06:34:33.127332      19 reflector.go:225] Stopping reflector *v1.ServiceAccount (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0330 07:41:24.407] [It] should be consumable from pods in volume [NodeConformance] [Conformance]
I0330 07:41:24.407]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:24.408] STEP: Creating projection with secret that has name projected-secret-test-bc9c3151-f8bd-448a-91c0-3b3b3b8ba80e
I0330 07:41:24.408] STEP: Creating a pod to test consume secrets
I0330 07:41:24.408] Mar 30 06:34:33.136: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-29b2859d-ff31-4ee8-8044-fac1d0cd769c" in namespace "projected-5933" to be "Succeeded or Failed"
I0330 07:41:24.409] Mar 30 06:34:33.139: INFO: Pod "pod-projected-secrets-29b2859d-ff31-4ee8-8044-fac1d0cd769c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.65855ms
I0330 07:41:24.409] Mar 30 06:34:35.145: INFO: Pod "pod-projected-secrets-29b2859d-ff31-4ee8-8044-fac1d0cd769c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.008424534s
I0330 07:41:24.409] STEP: Saw pod success
I0330 07:41:24.410] Mar 30 06:34:35.145: INFO: Pod "pod-projected-secrets-29b2859d-ff31-4ee8-8044-fac1d0cd769c" satisfied condition "Succeeded or Failed"
I0330 07:41:24.410] Mar 30 06:34:35.147: INFO: Trying to get logs from node kind-worker2 pod pod-projected-secrets-29b2859d-ff31-4ee8-8044-fac1d0cd769c container projected-secret-volume-test: <nil>
I0330 07:41:24.410] STEP: delete the pod
I0330 07:41:24.410] Mar 30 06:34:35.162: INFO: Waiting for pod pod-projected-secrets-29b2859d-ff31-4ee8-8044-fac1d0cd769c to disappear
I0330 07:41:24.411] Mar 30 06:34:35.165: INFO: Pod pod-projected-secrets-29b2859d-ff31-4ee8-8044-fac1d0cd769c no longer exists
I0330 07:41:24.411] [AfterEach] [sig-storage] Projected secret
I0330 07:41:24.411]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
I0330 07:41:24.412] Mar 30 06:34:35.165: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0330 07:41:24.412] STEP: Destroying namespace "projected-5933" for this suite.
I0330 07:41:24.412] •{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance]","total":340,"completed":133,"skipped":1989,"failed":0}
I0330 07:41:24.412] SSSSSSSS
I0330 07:41:24.413] ------------------------------
I0330 07:41:24.413] [sig-network] DNS 
I0330 07:41:24.413]   should support configurable pod DNS nameservers [Conformance]
I0330 07:41:24.413]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:24.413] [BeforeEach] [sig-network] DNS
... skipping 22 lines ...
I0330 07:41:24.423] Mar 30 06:34:37.303: INFO: >>> kubeConfig: /tmp/kubeconfig-963336331
I0330 07:41:24.423] Mar 30 06:34:37.390: INFO: Deleting pod test-dns-nameservers...
I0330 07:41:24.423] [AfterEach] [sig-network] DNS
I0330 07:41:24.423]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
I0330 07:41:24.423] Mar 30 06:34:37.402: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0330 07:41:24.423] STEP: Destroying namespace "dns-8874" for this suite.
I0330 07:41:24.423] •{"msg":"PASSED [sig-network] DNS should support configurable pod DNS nameservers [Conformance]","total":340,"completed":134,"skipped":1997,"failed":0}
I0330 07:41:24.424] SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
I0330 07:41:24.424] ------------------------------
I0330 07:41:24.424] [sig-auth] ServiceAccounts 
I0330 07:41:24.424]   ServiceAccountIssuerDiscovery should support OIDC discovery of service account issuer [Conformance]
I0330 07:41:24.424]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:24.424] [BeforeEach] [sig-auth] ServiceAccounts
... skipping 8 lines ...
I0330 07:41:24.427] I0330 06:34:37.445681      19 reflector.go:219] Starting reflector *v1.ServiceAccount (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0330 07:41:24.427] I0330 06:34:37.445703      19 reflector.go:255] Listing and watching *v1.ServiceAccount from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0330 07:41:24.427] [It] ServiceAccountIssuerDiscovery should support OIDC discovery of service account issuer [Conformance]
I0330 07:41:24.427]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:24.427] I0330 06:34:37.448852      19 reflector.go:225] Stopping reflector *v1.ServiceAccount (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0330 07:41:24.428] Mar 30 06:34:37.458: INFO: created pod
I0330 07:41:24.428] Mar 30 06:34:37.458: INFO: Waiting up to 5m0s for pod "oidc-discovery-validator" in namespace "svcaccounts-4540" to be "Succeeded or Failed"
I0330 07:41:24.428] Mar 30 06:34:37.461: INFO: Pod "oidc-discovery-validator": Phase="Pending", Reason="", readiness=false. Elapsed: 2.40315ms
I0330 07:41:24.428] Mar 30 06:34:39.467: INFO: Pod "oidc-discovery-validator": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.009056986s
I0330 07:41:24.428] STEP: Saw pod success
I0330 07:41:24.428] Mar 30 06:34:39.467: INFO: Pod "oidc-discovery-validator" satisfied condition "Succeeded or Failed"
I0330 07:41:24.428] Mar 30 06:35:09.468: INFO: polling logs
I0330 07:41:24.428] Mar 30 06:35:09.475: INFO: Pod logs: 
I0330 07:41:24.428] 2021/03/30 06:34:38 OK: Got token
I0330 07:41:24.429] 2021/03/30 06:34:38 OK: got issuer https://kubernetes.default.svc.cluster.local
I0330 07:41:24.429] 2021/03/30 06:34:38 Full, not-validated claims: 
I0330 07:41:24.429] openidmetadata.claims{Claims:jwt.Claims{Issuer:"https://kubernetes.default.svc.cluster.local", Subject:"system:serviceaccount:svcaccounts-4540:default", Audience:jwt.Audience{"oidc-discovery-test"}, Expiry:1617086677, NotBefore:1617086077, IssuedAt:1617086077, ID:""}, Kubernetes:openidmetadata.kubeClaims{Namespace:"svcaccounts-4540", ServiceAccount:openidmetadata.kubeName{Name:"default", UID:"a819bd69-4fb2-46f5-8849-dba40db9308c"}}}
... skipping 12 lines ...
I0330 07:41:24.431] • [SLOW TEST:32.081 seconds]
I0330 07:41:24.431] [sig-auth] ServiceAccounts
I0330 07:41:24.431] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23
I0330 07:41:24.431]   ServiceAccountIssuerDiscovery should support OIDC discovery of service account issuer [Conformance]
I0330 07:41:24.432]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:24.432] ------------------------------
I0330 07:41:24.432] {"msg":"PASSED [sig-auth] ServiceAccounts ServiceAccountIssuerDiscovery should support OIDC discovery of service account issuer [Conformance]","total":340,"completed":135,"skipped":2054,"failed":0}
I0330 07:41:24.432] SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
I0330 07:41:24.432] ------------------------------
I0330 07:41:24.432] [sig-apps] Deployment 
I0330 07:41:24.432]   deployment should support proportional scaling [Conformance]
I0330 07:41:24.432]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:24.433] [BeforeEach] [sig-apps] Deployment
... skipping 40 lines ...
I0330 07:41:24.441] 
I0330 07:41:24.441] Mar 30 06:35:15.646: INFO: New ReplicaSet "webserver-deployment-795d758f88" of Deployment "webserver-deployment":
I0330 07:41:24.443] &ReplicaSet{ObjectMeta:{webserver-deployment-795d758f88  deployment-2587  35094ea5-cc37-4544-be72-b80c1fc9effc 13410 3 2021-03-30 06:35:13 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:795d758f88] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment webserver-deployment 1dc65ee9-a7e5-4cce-9116-3f5fa500fae3 0xc0055baf67 0xc0055baf68}] []  [{kube-controller-manager Update apps/v1 2021-03-30 06:35:13 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1dc65ee9-a7e5-4cce-9116-3f5fa500fae3\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*13,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 795d758f88,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:795d758f88] map[] [] []  []} {[] [] [{httpd webserver:404 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc0055bb358 <nil> ClusterFirst map[]   <nil>  false false false <nil> &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} []   nil default-scheduler [] []  <nil> nil [] <nil> <nil> <nil> map[] [] <nil>}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:5,FullyLabeledReplicas:5,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
I0330 07:41:24.443] Mar 30 06:35:15.646: INFO: All old ReplicaSets of Deployment "webserver-deployment":
I0330 07:41:24.445] Mar 30 06:35:15.646: INFO: &ReplicaSet{ObjectMeta:{webserver-deployment-847dcfb7fb  deployment-2587  0ad11121-7425-4c44-b127-f04de5999d75 13407 3 2021-03-30 06:35:09 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:847dcfb7fb] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment webserver-deployment 1dc65ee9-a7e5-4cce-9116-3f5fa500fae3 0xc0055bb3b7 0xc0055bb3b8}] []  [{kube-controller-manager Update apps/v1 2021-03-30 06:35:11 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"1dc65ee9-a7e5-4cce-9116-3f5fa500fae3\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:availableReplicas":{},"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:readyReplicas":{},"f:replicas":{}}}}]},Spec:ReplicaSetSpec{Replicas:*20,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 847dcfb7fb,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:847dcfb7fb] map[] [] []  []} {[] [] [{httpd k8s.gcr.io/e2e-test-images/httpd:2.4.38-1 [] []  [] [] [] {map[] map[]} [] [] nil nil nil 
nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,} false false false}] [] Always 0xc0055bb508 <nil> ClusterFirst map[]   <nil>  false false false <nil> &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,} []   nil default-scheduler [] []  <nil> nil [] <nil> <nil> <nil> map[] [] <nil>}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:8,FullyLabeledReplicas:8,ObservedGeneration:2,ReadyReplicas:8,AvailableReplicas:8,Conditions:[]ReplicaSetCondition{},},}
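[editor's note] The replica counts in the two dumps above (new ReplicaSet at `Replicas:*13`, old at `Replicas:*20`, against the annotations `desired-replicas:30` / `max-replicas:33`) follow the Deployment controller's proportional-scaling rule for a mid-rollout scale-up. A minimal sketch of that arithmetic, assuming maxSurge=3 (inferred from 33-30) and pre-scale sizes of 8 and 5 (the `Replicas:` status fields); the helper below is illustrative and is not the controller's actual GetProportion code:

```python
def proportional_scale(rs_replicas, desired, max_surge):
    """Distribute desired + max_surge replicas across ReplicaSets in
    proportion to their current sizes (oldest first in rs_replicas).
    Illustrative sketch only -- not the controller's real implementation."""
    allowed = desired + max_surge            # 30 + 3 = 33 in the log above
    current = sum(rs_replicas)               # 8 + 5 = 13 before the scale-up
    added = allowed - current                # 20 replicas to hand out
    # each ReplicaSet grows by its proportional share, floored
    shares = [r + added * r // current for r in rs_replicas]
    shares[-1] += allowed - sum(shares)      # rounding leftover to the newest RS
    return shares

print(proportional_scale([8, 5], 30, 3))    # [20, 13], matching the dumps
```

With those inputs the old ReplicaSet lands on 20 and the new one on 13, which is exactly what the `Spec.Replicas` pointers in the dumps show.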
I0330 07:41:24.446] Mar 30 06:35:15.672: INFO: Pod "webserver-deployment-795d758f88-28cd4" is not available:
I0330 07:41:24.451] &Pod{ObjectMeta:{webserver-deployment-795d758f88-28cd4 webserver-deployment-795d758f88- deployment-2587  b1e7e49d-3e25-4052-bc61-388d3f423f5a 13405 0 2021-03-30 06:35:13 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 35094ea5-cc37-4544-be72-b80c1fc9effc 0xc0052e4017 0xc0052e4018}] []  [{kube-controller-manager Update v1 2021-03-30 06:35:13 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"35094ea5-cc37-4544-be72-b80c1fc9effc\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-03-30 06:35:15 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.76\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-ft5sm,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:R
esourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-ft5sm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kind-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Statu
s:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-30 06:35:13 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-30 06:35:13 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-30 06:35:13 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-30 06:35:13 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.4,PodIP:10.244.1.76,StartTime:2021-03-30 06:35:13 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ErrImagePull,Message:rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/library/webserver:404": failed to resolve reference "docker.io/library/webserver:404": pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.76,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
I0330 07:41:24.451] Mar 30 06:35:15.672: INFO: Pod "webserver-deployment-795d758f88-5jhv5" is not available:
I0330 07:41:24.455] &Pod{ObjectMeta:{webserver-deployment-795d758f88-5jhv5 webserver-deployment-795d758f88- deployment-2587  dd3dc3a8-400e-45d0-9bcb-1ac31c695395 13427 0 2021-03-30 06:35:15 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 35094ea5-cc37-4544-be72-b80c1fc9effc 0xc0052e4220 0xc0052e4221}] []  [{kube-controller-manager Update v1 2021-03-30 06:35:15 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"35094ea5-cc37-4544-be72-b80c1fc9effc\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-gcphs,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},Servic
eAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-gcphs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value
:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:<nil>,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
I0330 07:41:24.455] Mar 30 06:35:15.673: INFO: Pod "webserver-deployment-795d758f88-6xw77" is not available:
I0330 07:41:24.461] &Pod{ObjectMeta:{webserver-deployment-795d758f88-6xw77 webserver-deployment-795d758f88- deployment-2587  9df61b18-49c0-4c46-8e7b-ed5ff39ef597 13402 0 2021-03-30 06:35:13 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 35094ea5-cc37-4544-be72-b80c1fc9effc 0xc0052e4387 0xc0052e4388}] []  [{kube-controller-manager Update v1 2021-03-30 06:35:13 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"35094ea5-cc37-4544-be72-b80c1fc9effc\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-03-30 06:35:15 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-pjdph,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Na
me:kube-api-access-pjdph,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kind-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialize
d,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-30 06:35:13 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-30 06:35:13 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-30 06:35:13 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-30 06:35:13 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.2,PodIP:,StartTime:2021-03-30 06:35:13 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
I0330 07:41:24.461] Mar 30 06:35:15.673: INFO: Pod "webserver-deployment-795d758f88-8rv75" is not available:
I0330 07:41:24.466] &Pod{ObjectMeta:{webserver-deployment-795d758f88-8rv75 webserver-deployment-795d758f88- deployment-2587  bd4fe57c-8a4b-4e1f-b191-376340fc9f8c 13382 0 2021-03-30 06:35:13 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:795d758f88] map[] [{apps/v1 ReplicaSet webserver-deployment-795d758f88 35094ea5-cc37-4544-be72-b80c1fc9effc 0xc0052e4570 0xc0052e4571}] []  [{kube-controller-manager Update v1 2021-03-30 06:35:13 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"35094ea5-cc37-4544-be72-b80c1fc9effc\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}} {kubelet Update v1 2021-03-30 06:35:14 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-lkznz,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAccountToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Na
me:kube-api-access-lkznz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kind-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialize
d,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-30 06:35:13 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-30 06:35:13 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-30 06:35:13 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-30 06:35:13 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.2,PodIP:,StartTime:2021-03-30 06:35:13 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
... skipping 43 lines ...
I0330 07:41:24.560] • [SLOW TEST:6.225 seconds]
I0330 07:41:24.560] [sig-apps] Deployment
I0330 07:41:24.560] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
I0330 07:41:24.560]   deployment should support proportional scaling [Conformance]
I0330 07:41:24.561]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:24.561] ------------------------------
I0330 07:41:24.561] {"msg":"PASSED [sig-apps] Deployment deployment should support proportional scaling [Conformance]","total":340,"completed":136,"skipped":2094,"failed":0}
I0330 07:41:24.561] SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
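[editor's note] The `{"msg":"PASSED …","total":340,"completed":136,…}` lines interleaved with the log are machine-readable Ginkgo progress records. A quick hypothetical way to tally them out of a prefixed build log like this one (field names taken from the records shown; the helper and its regex are an assumption, not tooling from this job):

```python
import json
import re

def tally_progress(lines):
    """Return (completed, failed) from the last JSON progress record
    found in a timestamp-prefixed build log."""
    completed = failed = 0
    for line in lines:
        # the record is the {...} payload after the I0330 ...] prefix
        m = re.search(r'\{"msg":.*\}', line)
        if m:
            rec = json.loads(m.group(0))
            completed = rec.get("completed", completed)
            failed = rec.get("failed", failed)
    return completed, failed
```

Fed the record above, this yields `(136, 0)`: 136 specs completed, none failed at that point in the run.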
I0330 07:41:24.561] ------------------------------
I0330 07:41:24.561] [sig-node] Container Lifecycle Hook when create a pod with lifecycle hook 
I0330 07:41:24.561]   should execute prestop http hook properly [NodeConformance] [Conformance]
I0330 07:41:24.561]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:24.561] [BeforeEach] [sig-node] Container Lifecycle Hook
... skipping 52 lines ...
I0330 07:41:24.569] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
I0330 07:41:24.569]   when create a pod with lifecycle hook
I0330 07:41:24.569]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/node/lifecycle_hook.go:43
I0330 07:41:24.569]     should execute prestop http hook properly [NodeConformance] [Conformance]
I0330 07:41:24.569]     /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:24.569] ------------------------------
I0330 07:41:24.569] {"msg":"PASSED [sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]","total":340,"completed":137,"skipped":2165,"failed":0}
I0330 07:41:24.570] S
I0330 07:41:24.570] ------------------------------
I0330 07:41:24.570] [sig-network] DNS 
I0330 07:41:24.570]   should provide DNS for pods for Hostname [LinuxOnly] [Conformance]
I0330 07:41:24.570]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:24.570] [BeforeEach] [sig-network] DNS
... skipping 24 lines ...
I0330 07:41:24.574] STEP: deleting the pod
I0330 07:41:24.574] STEP: deleting the test headless service
I0330 07:41:24.575] [AfterEach] [sig-network] DNS
I0330 07:41:24.575]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
I0330 07:41:24.575] Mar 30 06:35:45.936: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0330 07:41:24.575] STEP: Destroying namespace "dns-2739" for this suite.
I0330 07:41:24.575] •{"msg":"PASSED [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]","total":340,"completed":138,"skipped":2166,"failed":0}
I0330 07:41:24.575] SSSSSSSSSSSSSSSSSSSSSSS
I0330 07:41:24.575] ------------------------------
I0330 07:41:24.575] [sig-apps] DisruptionController 
I0330 07:41:24.575]   should observe PodDisruptionBudget status updated [Conformance]
I0330 07:41:24.576]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:24.576] [BeforeEach] [sig-apps] DisruptionController
... skipping 16 lines ...
I0330 07:41:24.578] STEP: Waiting for all pods to be running
I0330 07:41:24.578] Mar 30 06:35:48.037: INFO: running pods: 0 < 3
I0330 07:41:24.578] [AfterEach] [sig-apps] DisruptionController
I0330 07:41:24.578]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
I0330 07:41:24.578] Mar 30 06:35:50.043: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0330 07:41:24.578] STEP: Destroying namespace "disruption-5106" for this suite.
I0330 07:41:24.579] •{"msg":"PASSED [sig-apps] DisruptionController should observe PodDisruptionBudget status updated [Conformance]","total":340,"completed":139,"skipped":2189,"failed":0}
I0330 07:41:24.579] S
I0330 07:41:24.579] ------------------------------
I0330 07:41:24.579] [sig-apps] Job 
I0330 07:41:24.579]   should delete a job [Conformance]
I0330 07:41:24.579]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:24.579] [BeforeEach] [sig-apps] Job
... skipping 31 lines ...
I0330 07:41:24.583] • [SLOW TEST:43.508 seconds]
I0330 07:41:24.583] [sig-apps] Job
I0330 07:41:24.584] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
I0330 07:41:24.584]   should delete a job [Conformance]
I0330 07:41:24.584]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:24.584] ------------------------------
I0330 07:41:24.584] {"msg":"PASSED [sig-apps] Job should delete a job [Conformance]","total":340,"completed":140,"skipped":2190,"failed":0}
I0330 07:41:24.584] SSSSSSSSSSSSSSSSSSSSSSS
I0330 07:41:24.584] ------------------------------
I0330 07:41:24.584] [sig-apps] ReplicaSet 
I0330 07:41:24.585]   should serve a basic image on each replica with a public image  [Conformance]
I0330 07:41:24.585]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:24.585] [BeforeEach] [sig-apps] ReplicaSet
... skipping 25 lines ...
I0330 07:41:24.589] • [SLOW TEST:10.059 seconds]
I0330 07:41:24.589] [sig-apps] ReplicaSet
I0330 07:41:24.589] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
I0330 07:41:24.589]   should serve a basic image on each replica with a public image  [Conformance]
I0330 07:41:24.589]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:24.590] ------------------------------
I0330 07:41:24.590] {"msg":"PASSED [sig-apps] ReplicaSet should serve a basic image on each replica with a public image  [Conformance]","total":340,"completed":141,"skipped":2213,"failed":0}
I0330 07:41:24.590] SSSSSSSS
I0330 07:41:24.590] ------------------------------
I0330 07:41:24.590] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
I0330 07:41:24.590]   should mutate custom resource [Conformance]
I0330 07:41:24.590]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:24.591] [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
... skipping 34 lines ...
I0330 07:41:24.597] • [SLOW TEST:7.047 seconds]
I0330 07:41:24.597] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
I0330 07:41:24.598] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
I0330 07:41:24.598]   should mutate custom resource [Conformance]
I0330 07:41:24.598]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:24.598] ------------------------------
I0330 07:41:24.598] {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","total":340,"completed":142,"skipped":2221,"failed":0}
I0330 07:41:24.598] SSS
I0330 07:41:24.598] ------------------------------
I0330 07:41:24.599] [sig-network] Services 
I0330 07:41:24.599]   should have session affinity work for NodePort service [LinuxOnly] [Conformance]
I0330 07:41:24.599]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:24.599] [BeforeEach] [sig-network] Services
... skipping 78 lines ...
I0330 07:41:24.614] • [SLOW TEST:22.606 seconds]
I0330 07:41:24.614] [sig-network] Services
I0330 07:41:24.614] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
I0330 07:41:24.615]   should have session affinity work for NodePort service [LinuxOnly] [Conformance]
I0330 07:41:24.615]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:24.615] ------------------------------
I0330 07:41:24.615] {"msg":"PASSED [sig-network] Services should have session affinity work for NodePort service [LinuxOnly] [Conformance]","total":340,"completed":143,"skipped":2224,"failed":0}
I0330 07:41:24.615] SSSSSSS
I0330 07:41:24.615] ------------------------------
I0330 07:41:24.615] [sig-cli] Kubectl client Kubectl cluster-info 
I0330 07:41:24.615]   should check if Kubernetes control plane services is included in cluster-info  [Conformance]
I0330 07:41:24.615]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:24.616] [BeforeEach] [sig-cli] Kubectl client
... skipping 17 lines ...
I0330 07:41:24.618] Mar 30 06:37:13.419: INFO: stderr: ""
I0330 07:41:24.618] Mar 30 06:37:13.419: INFO: stdout: "\x1b[0;32mKubernetes control plane\x1b[0m is running at \x1b[0;33mhttps://10.96.0.1:443\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n"
I0330 07:41:24.619] [AfterEach] [sig-cli] Kubectl client
I0330 07:41:24.619]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
I0330 07:41:24.619] Mar 30 06:37:13.419: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0330 07:41:24.619] STEP: Destroying namespace "kubectl-2807" for this suite.
I0330 07:41:24.619] •{"msg":"PASSED [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes control plane services is included in cluster-info  [Conformance]","total":340,"completed":144,"skipped":2231,"failed":0}
I0330 07:41:24.619] S
I0330 07:41:24.619] ------------------------------
I0330 07:41:24.619] [sig-storage] EmptyDir volumes 
I0330 07:41:24.619]   should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
I0330 07:41:24.620]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:24.620] [BeforeEach] [sig-storage] EmptyDir volumes
... skipping 8 lines ...
I0330 07:41:24.621] I0330 06:37:13.449078      19 reflector.go:219] Starting reflector *v1.ServiceAccount (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0330 07:41:24.621] I0330 06:37:13.449103      19 reflector.go:255] Listing and watching *v1.ServiceAccount from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0330 07:41:24.621] I0330 06:37:13.451373      19 reflector.go:225] Stopping reflector *v1.ServiceAccount (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0330 07:41:24.621] [It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
I0330 07:41:24.621]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:24.622] STEP: Creating a pod to test emptydir 0777 on tmpfs
I0330 07:41:24.622] Mar 30 06:37:13.456: INFO: Waiting up to 5m0s for pod "pod-a04a8a9a-626b-43f2-9bbf-7d9eaed2ae80" in namespace "emptydir-7349" to be "Succeeded or Failed"
I0330 07:41:24.622] Mar 30 06:37:13.462: INFO: Pod "pod-a04a8a9a-626b-43f2-9bbf-7d9eaed2ae80": Phase="Pending", Reason="", readiness=false. Elapsed: 5.724191ms
I0330 07:41:24.622] Mar 30 06:37:15.467: INFO: Pod "pod-a04a8a9a-626b-43f2-9bbf-7d9eaed2ae80": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.010578159s
I0330 07:41:24.622] STEP: Saw pod success
I0330 07:41:24.622] Mar 30 06:37:15.467: INFO: Pod "pod-a04a8a9a-626b-43f2-9bbf-7d9eaed2ae80" satisfied condition "Succeeded or Failed"
I0330 07:41:24.622] Mar 30 06:37:15.469: INFO: Trying to get logs from node kind-worker2 pod pod-a04a8a9a-626b-43f2-9bbf-7d9eaed2ae80 container test-container: <nil>
I0330 07:41:24.623] STEP: delete the pod
I0330 07:41:24.623] Mar 30 06:37:15.494: INFO: Waiting for pod pod-a04a8a9a-626b-43f2-9bbf-7d9eaed2ae80 to disappear
I0330 07:41:24.623] Mar 30 06:37:15.496: INFO: Pod pod-a04a8a9a-626b-43f2-9bbf-7d9eaed2ae80 no longer exists
I0330 07:41:24.623] [AfterEach] [sig-storage] EmptyDir volumes
I0330 07:41:24.623]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
I0330 07:41:24.623] Mar 30 06:37:15.496: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0330 07:41:24.623] STEP: Destroying namespace "emptydir-7349" for this suite.
I0330 07:41:24.624] •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":340,"completed":145,"skipped":2232,"failed":0}
I0330 07:41:24.624] SSSSSSSSSSSSSSSSSSSSSS
I0330 07:41:24.624] ------------------------------
I0330 07:41:24.624] [sig-api-machinery] ResourceQuota 
I0330 07:41:24.624]   should create a ResourceQuota and capture the life of a pod. [Conformance]
I0330 07:41:24.624]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:24.624] [BeforeEach] [sig-api-machinery] ResourceQuota
... skipping 29 lines ...
I0330 07:41:24.628] • [SLOW TEST:13.107 seconds]
I0330 07:41:24.628] [sig-api-machinery] ResourceQuota
I0330 07:41:24.628] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
I0330 07:41:24.628]   should create a ResourceQuota and capture the life of a pod. [Conformance]
I0330 07:41:24.628]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:24.628] ------------------------------
I0330 07:41:24.629] {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. [Conformance]","total":340,"completed":146,"skipped":2254,"failed":0}
I0330 07:41:24.629] SSSSSSSSSSSSSSSSSSSSS
I0330 07:41:24.629] ------------------------------
I0330 07:41:24.629] [sig-api-machinery] ResourceQuota 
I0330 07:41:24.629]   should verify ResourceQuota with terminating scopes. [Conformance]
I0330 07:41:24.629]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:24.629] [BeforeEach] [sig-api-machinery] ResourceQuota
... skipping 32 lines ...
I0330 07:41:24.635] • [SLOW TEST:16.137 seconds]
I0330 07:41:24.635] [sig-api-machinery] ResourceQuota
I0330 07:41:24.635] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
I0330 07:41:24.635]   should verify ResourceQuota with terminating scopes. [Conformance]
I0330 07:41:24.635]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:24.636] ------------------------------
I0330 07:41:24.636] {"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. [Conformance]","total":340,"completed":147,"skipped":2275,"failed":0}
I0330 07:41:24.636] SSSS
I0330 07:41:24.636] ------------------------------
I0330 07:41:24.636] [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] 
I0330 07:41:24.637]   should have a working scale subresource [Conformance]
I0330 07:41:24.637]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:24.637] [BeforeEach] [sig-apps] StatefulSet
... skipping 39 lines ...
I0330 07:41:24.644] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
I0330 07:41:24.644]   Basic StatefulSet functionality [StatefulSetBasic]
I0330 07:41:24.644]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:95
I0330 07:41:24.644]     should have a working scale subresource [Conformance]
I0330 07:41:24.644]     /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:24.645] ------------------------------
I0330 07:41:24.645] {"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance]","total":340,"completed":148,"skipped":2279,"failed":0}
I0330 07:41:24.645] SSSSSSSSSSSSSSSSS
I0330 07:41:24.645] ------------------------------
I0330 07:41:24.645] [sig-api-machinery] Watchers 
I0330 07:41:24.645]   should be able to start watching from a specific resource version [Conformance]
I0330 07:41:24.645]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:24.645] [BeforeEach] [sig-api-machinery] Watchers
... skipping 19 lines ...
I0330 07:41:24.649] Mar 30 06:38:14.977: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version  watch-658  1d7f97a7-347e-483b-bb6e-fcc6c0f11595 14624 0 2021-03-30 06:38:14 +0000 UTC <nil> <nil> map[watch-this-configmap:from-resource-version] map[] [] []  [{e2e.test Update v1 2021-03-30 06:38:14 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
I0330 07:41:24.650] Mar 30 06:38:14.977: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version  watch-658  1d7f97a7-347e-483b-bb6e-fcc6c0f11595 14625 0 2021-03-30 06:38:14 +0000 UTC <nil> <nil> map[watch-this-configmap:from-resource-version] map[] [] []  [{e2e.test Update v1 2021-03-30 06:38:14 +0000 UTC FieldsV1 {"f:data":{".":{},"f:mutation":{}},"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
I0330 07:41:24.650] [AfterEach] [sig-api-machinery] Watchers
I0330 07:41:24.650]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
I0330 07:41:24.650] Mar 30 06:38:14.977: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0330 07:41:24.650] STEP: Destroying namespace "watch-658" for this suite.
I0330 07:41:24.651] •{"msg":"PASSED [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance]","total":340,"completed":149,"skipped":2296,"failed":0}
I0330 07:41:24.651] SSS
I0330 07:41:24.651] ------------------------------
I0330 07:41:24.651] [sig-node] Secrets 
I0330 07:41:24.651]   should fail to create secret due to empty secret key [Conformance]
I0330 07:41:24.651]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:24.651] [BeforeEach] [sig-node] Secrets
I0330 07:41:24.651]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
I0330 07:41:24.652] STEP: Creating a kubernetes client
I0330 07:41:24.652] Mar 30 06:38:14.983: INFO: >>> kubeConfig: /tmp/kubeconfig-963336331
I0330 07:41:24.652] STEP: Building a namespace api object, basename secrets
I0330 07:41:24.652] I0330 06:38:14.990668      19 reflector.go:219] Starting reflector *v1.ServiceAccount (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0330 07:41:24.652] I0330 06:38:14.990701      19 reflector.go:255] Listing and watching *v1.ServiceAccount from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0330 07:41:24.652] STEP: Waiting for a default service account to be provisioned in namespace
I0330 07:41:24.653] I0330 06:38:15.004444      19 reflector.go:225] Stopping reflector *v1.ServiceAccount (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0330 07:41:24.653] I0330 06:38:15.004566      19 reflector.go:219] Starting reflector *v1.ServiceAccount (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0330 07:41:24.653] I0330 06:38:15.004581      19 reflector.go:255] Listing and watching *v1.ServiceAccount from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0330 07:41:24.653] I0330 06:38:15.007090      19 reflector.go:225] Stopping reflector *v1.ServiceAccount (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0330 07:41:24.653] [It] should fail to create secret due to empty secret key [Conformance]
I0330 07:41:24.654]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:24.654] STEP: Creating projection with secret that has name secret-emptykey-test-99141b91-46ec-402f-b70a-fce18f92c8d7
I0330 07:41:24.654] [AfterEach] [sig-node] Secrets
I0330 07:41:24.654]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
I0330 07:41:24.654] Mar 30 06:38:15.008: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0330 07:41:24.655] STEP: Destroying namespace "secrets-9127" for this suite.
I0330 07:41:24.655] •{"msg":"PASSED [sig-node] Secrets should fail to create secret due to empty secret key [Conformance]","total":340,"completed":150,"skipped":2299,"failed":0}
I0330 07:41:24.655] SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
I0330 07:41:24.655] ------------------------------
I0330 07:41:24.655] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
I0330 07:41:24.656]   should mutate custom resource with pruning [Conformance]
I0330 07:41:24.656]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:24.656] [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
... skipping 34 lines ...
I0330 07:41:24.663] • [SLOW TEST:6.739 seconds]
I0330 07:41:24.663] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
I0330 07:41:24.663] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
I0330 07:41:24.664]   should mutate custom resource with pruning [Conformance]
I0330 07:41:24.664]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:24.664] ------------------------------
I0330 07:41:24.665] {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","total":340,"completed":151,"skipped":2330,"failed":0}
I0330 07:41:24.665] SSSSSSSSSSS
I0330 07:41:24.665] ------------------------------
I0330 07:41:24.665] [sig-storage] Projected configMap 
I0330 07:41:24.665]   updates should be reflected in volume [NodeConformance] [Conformance]
I0330 07:41:24.666]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:24.666] [BeforeEach] [sig-storage] Projected configMap
... skipping 17 lines ...
I0330 07:41:24.669] STEP: Updating configmap projected-configmap-test-upd-1bffd787-ec43-460a-b336-752212b5cc7c
I0330 07:41:24.669] STEP: waiting to observe update in volume
I0330 07:41:24.669] [AfterEach] [sig-storage] Projected configMap
I0330 07:41:24.669]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
I0330 07:41:24.669] Mar 30 06:38:25.874: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0330 07:41:24.669] STEP: Destroying namespace "projected-6856" for this suite.
I0330 07:41:24.669] •{"msg":"PASSED [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance]","total":340,"completed":152,"skipped":2341,"failed":0}
I0330 07:41:24.669] 
I0330 07:41:24.670] ------------------------------
I0330 07:41:24.670] [sig-node] InitContainer [NodeConformance] 
I0330 07:41:24.670]   should invoke init containers on a RestartNever pod [Conformance]
I0330 07:41:24.670]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:24.670] [BeforeEach] [sig-node] InitContainer [NodeConformance]
... skipping 18 lines ...
I0330 07:41:24.673] [AfterEach] [sig-node] InitContainer [NodeConformance]
I0330 07:41:24.673]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
I0330 07:41:24.673] I0330 06:38:29.253066      19 retrywatcher.go:147] "Stopping RetryWatcher."
I0330 07:41:24.674] Mar 30 06:38:29.253: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0330 07:41:24.674] I0330 06:38:29.253189      19 retrywatcher.go:275] Stopping RetryWatcher.
I0330 07:41:24.674] STEP: Destroying namespace "init-container-3906" for this suite.
I0330 07:41:24.675] •{"msg":"PASSED [sig-node] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance]","total":340,"completed":153,"skipped":2341,"failed":0}
I0330 07:41:24.675] SSSSSSSSSSSSSSSSSSSSSSSS
I0330 07:41:24.675] ------------------------------
I0330 07:41:24.675] [sig-network] Services 
I0330 07:41:24.675]   should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]
I0330 07:41:24.675]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:24.675] [BeforeEach] [sig-network] Services
... skipping 92 lines ...
I0330 07:41:24.693] • [SLOW TEST:44.100 seconds]
I0330 07:41:24.693] [sig-network] Services
I0330 07:41:24.694] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
I0330 07:41:24.694]   should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]
I0330 07:41:24.694]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:24.694] ------------------------------
I0330 07:41:24.694] {"msg":"PASSED [sig-network] Services should have session affinity timeout work for NodePort service [LinuxOnly] [Conformance]","total":340,"completed":154,"skipped":2365,"failed":0}
I0330 07:41:24.694] SSS
I0330 07:41:24.694] ------------------------------
I0330 07:41:24.695] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
I0330 07:41:24.695]   should mutate custom resource with different stored version [Conformance]
I0330 07:41:24.695]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:24.695] [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
... skipping 36 lines ...
I0330 07:41:24.702] • [SLOW TEST:7.083 seconds]
I0330 07:41:24.702] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
I0330 07:41:24.702] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
I0330 07:41:24.703]   should mutate custom resource with different stored version [Conformance]
I0330 07:41:24.703]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:24.703] ------------------------------
I0330 07:41:24.703] {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","total":340,"completed":155,"skipped":2368,"failed":0}
I0330 07:41:24.703] SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
I0330 07:41:24.703] ------------------------------
I0330 07:41:24.703] [sig-storage] ConfigMap 
I0330 07:41:24.703]   should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
I0330 07:41:24.704]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:24.704] [BeforeEach] [sig-storage] ConfigMap
... skipping 9 lines ...
I0330 07:41:24.705] I0330 06:39:20.496074      19 reflector.go:255] Listing and watching *v1.ServiceAccount from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0330 07:41:24.706] I0330 06:39:20.502516      19 reflector.go:225] Stopping reflector *v1.ServiceAccount (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0330 07:41:24.706] [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
I0330 07:41:24.706]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:24.706] STEP: Creating configMap with name configmap-test-volume-7fd8a9ab-869d-4647-adf2-c05e15967991
I0330 07:41:24.706] STEP: Creating a pod to test consume configMaps
I0330 07:41:24.706] Mar 30 06:39:20.519: INFO: Waiting up to 5m0s for pod "pod-configmaps-0c4569ef-8632-48bc-aea9-71f0934da58d" in namespace "configmap-2455" to be "Succeeded or Failed"
I0330 07:41:24.706] Mar 30 06:39:20.522: INFO: Pod "pod-configmaps-0c4569ef-8632-48bc-aea9-71f0934da58d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.896586ms
I0330 07:41:24.707] Mar 30 06:39:22.530: INFO: Pod "pod-configmaps-0c4569ef-8632-48bc-aea9-71f0934da58d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.010879057s
I0330 07:41:24.707] STEP: Saw pod success
I0330 07:41:24.707] Mar 30 06:39:22.531: INFO: Pod "pod-configmaps-0c4569ef-8632-48bc-aea9-71f0934da58d" satisfied condition "Succeeded or Failed"
I0330 07:41:24.707] Mar 30 06:39:22.536: INFO: Trying to get logs from node kind-worker2 pod pod-configmaps-0c4569ef-8632-48bc-aea9-71f0934da58d container agnhost-container: <nil>
I0330 07:41:24.707] STEP: delete the pod
I0330 07:41:24.707] Mar 30 06:39:22.552: INFO: Waiting for pod pod-configmaps-0c4569ef-8632-48bc-aea9-71f0934da58d to disappear
I0330 07:41:24.707] Mar 30 06:39:22.555: INFO: Pod pod-configmaps-0c4569ef-8632-48bc-aea9-71f0934da58d no longer exists
I0330 07:41:24.708] [AfterEach] [sig-storage] ConfigMap
I0330 07:41:24.708]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
I0330 07:41:24.708] Mar 30 06:39:22.555: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0330 07:41:24.708] STEP: Destroying namespace "configmap-2455" for this suite.
I0330 07:41:24.708] •{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":340,"completed":156,"skipped":2417,"failed":0}
I0330 07:41:24.708] SSSSSSSSSSSS
I0330 07:41:24.708] ------------------------------
I0330 07:41:24.708] [sig-apps] ReplicationController 
I0330 07:41:24.709]   should test the lifecycle of a ReplicationController [Conformance]
I0330 07:41:24.709]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:24.709] [BeforeEach] [sig-apps] ReplicationController
... skipping 33 lines ...
I0330 07:41:24.713] STEP: deleting ReplicationControllers by collection
I0330 07:41:24.713] STEP: waiting for ReplicationController to have a DELETED watchEvent
I0330 07:41:24.713] [AfterEach] [sig-apps] ReplicationController
I0330 07:41:24.713]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
I0330 07:41:24.714] Mar 30 06:39:24.916: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0330 07:41:24.714] STEP: Destroying namespace "replication-controller-2416" for this suite.
I0330 07:41:24.714] •{"msg":"PASSED [sig-apps] ReplicationController should test the lifecycle of a ReplicationController [Conformance]","total":340,"completed":157,"skipped":2429,"failed":0}
I0330 07:41:24.714] SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
I0330 07:41:24.714] ------------------------------
I0330 07:41:24.714] [sig-api-machinery] Garbage collector 
I0330 07:41:24.714]   should delete pods created by rc when not orphaning [Conformance]
I0330 07:41:24.714]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:24.714] [BeforeEach] [sig-api-machinery] Garbage collector
... skipping 12 lines ...
I0330 07:41:24.717] I0330 06:39:24.952708      19 reflector.go:225] Stopping reflector *v1.ServiceAccount (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0330 07:41:24.717] STEP: create the rc
I0330 07:41:24.717] STEP: delete the rc
I0330 07:41:24.717] STEP: wait for all pods to be garbage collected
I0330 07:41:24.717] STEP: Gathering metrics
I0330 07:41:24.717] W0330 06:39:34.990582      19 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled.
I0330 07:41:24.718] Mar 30 06:40:37.009: INFO: MetricsGrabber failed grab metrics. Skipping metrics gathering.
I0330 07:41:24.718] [AfterEach] [sig-api-machinery] Garbage collector
I0330 07:41:24.718]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
I0330 07:41:24.718] Mar 30 06:40:37.009: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0330 07:41:24.718] STEP: Destroying namespace "gc-8193" for this suite.
I0330 07:41:24.718] 
I0330 07:41:24.719] • [SLOW TEST:72.096 seconds]
I0330 07:41:24.719] [sig-api-machinery] Garbage collector
I0330 07:41:24.719] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
I0330 07:41:24.719]   should delete pods created by rc when not orphaning [Conformance]
I0330 07:41:24.719]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:24.720] ------------------------------
I0330 07:41:24.720] {"msg":"PASSED [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance]","total":340,"completed":158,"skipped":2458,"failed":0}
I0330 07:41:24.720] SSS
I0330 07:41:24.720] ------------------------------
I0330 07:41:24.720] [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] 
I0330 07:41:24.721]   should perform rolling updates and roll backs of template modifications [Conformance]
I0330 07:41:24.721]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:24.721] [BeforeEach] [sig-apps] StatefulSet
... skipping 68 lines ...
I0330 07:41:24.731] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
I0330 07:41:24.731]   Basic StatefulSet functionality [StatefulSetBasic]
I0330 07:41:24.732]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:95
I0330 07:41:24.732]     should perform rolling updates and roll backs of template modifications [Conformance]
I0330 07:41:24.732]     /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:24.732] ------------------------------
I0330 07:41:24.732] {"msg":"PASSED [sig-apps] StatefulSet Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]","total":340,"completed":159,"skipped":2461,"failed":0}
I0330 07:41:24.732] SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
I0330 07:41:24.732] ------------------------------
I0330 07:41:24.732] [sig-api-machinery] server version 
I0330 07:41:24.733]   should find the server version [Conformance]
I0330 07:41:24.733]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:24.733] [BeforeEach] [sig-api-machinery] server version
... skipping 17 lines ...
I0330 07:41:24.737] Mar 30 06:42:58.000: INFO: cleanMinorVersion: 22
I0330 07:41:24.737] Mar 30 06:42:58.000: INFO: Minor version: 22+
I0330 07:41:24.737] [AfterEach] [sig-api-machinery] server version
I0330 07:41:24.737]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
I0330 07:41:24.737] Mar 30 06:42:58.000: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0330 07:41:24.737] STEP: Destroying namespace "server-version-2636" for this suite.
I0330 07:41:24.737] •{"msg":"PASSED [sig-api-machinery] server version should find the server version [Conformance]","total":340,"completed":160,"skipped":2497,"failed":0}
I0330 07:41:24.738] SSSSSSSSSSSS
I0330 07:41:24.738] ------------------------------
I0330 07:41:24.738] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
I0330 07:41:24.738]   should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]
I0330 07:41:24.738]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:24.738] [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
... skipping 30 lines ...
I0330 07:41:24.742]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
I0330 07:41:24.742] Mar 30 06:43:01.548: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0330 07:41:24.743] STEP: Destroying namespace "webhook-8122" for this suite.
I0330 07:41:24.743] STEP: Destroying namespace "webhook-8122-markers" for this suite.
I0330 07:41:24.743] [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
I0330 07:41:24.743]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
I0330 07:41:24.743] •{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","total":340,"completed":161,"skipped":2509,"failed":0}
I0330 07:41:24.743] SSSSSSSSSSSSSSSSSSS
I0330 07:41:24.743] ------------------------------
I0330 07:41:24.743] [sig-storage] ConfigMap 
I0330 07:41:24.744]   should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
I0330 07:41:24.744]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:24.744] [BeforeEach] [sig-storage] ConfigMap
... skipping 9 lines ...
I0330 07:41:24.745] I0330 06:43:01.629824      19 reflector.go:255] Listing and watching *v1.ServiceAccount from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0330 07:41:24.745] [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
I0330 07:41:24.746]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:24.746] STEP: Creating configMap with name configmap-test-volume-map-06723f28-9f76-4afd-b963-bd4183758187
I0330 07:41:24.746] I0330 06:43:01.633319      19 reflector.go:225] Stopping reflector *v1.ServiceAccount (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0330 07:41:24.746] STEP: Creating a pod to test consume configMaps
I0330 07:41:24.746] Mar 30 06:43:01.645: INFO: Waiting up to 5m0s for pod "pod-configmaps-fc93360c-9e4b-4837-b5b8-98f630830cb4" in namespace "configmap-5887" to be "Succeeded or Failed"
I0330 07:41:24.746] Mar 30 06:43:01.649: INFO: Pod "pod-configmaps-fc93360c-9e4b-4837-b5b8-98f630830cb4": Phase="Pending", Reason="", readiness=false. Elapsed: 3.831172ms
I0330 07:41:24.747] Mar 30 06:43:03.655: INFO: Pod "pod-configmaps-fc93360c-9e4b-4837-b5b8-98f630830cb4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.009590253s
I0330 07:41:24.747] STEP: Saw pod success
I0330 07:41:24.747] Mar 30 06:43:03.655: INFO: Pod "pod-configmaps-fc93360c-9e4b-4837-b5b8-98f630830cb4" satisfied condition "Succeeded or Failed"
I0330 07:41:24.747] Mar 30 06:43:03.657: INFO: Trying to get logs from node kind-worker2 pod pod-configmaps-fc93360c-9e4b-4837-b5b8-98f630830cb4 container agnhost-container: <nil>
I0330 07:41:24.747] STEP: delete the pod
I0330 07:41:24.747] Mar 30 06:43:03.680: INFO: Waiting for pod pod-configmaps-fc93360c-9e4b-4837-b5b8-98f630830cb4 to disappear
I0330 07:41:24.747] Mar 30 06:43:03.683: INFO: Pod pod-configmaps-fc93360c-9e4b-4837-b5b8-98f630830cb4 no longer exists
I0330 07:41:24.748] [AfterEach] [sig-storage] ConfigMap
I0330 07:41:24.748]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
I0330 07:41:24.748] Mar 30 06:43:03.683: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0330 07:41:24.748] STEP: Destroying namespace "configmap-5887" for this suite.
I0330 07:41:24.748] •{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":340,"completed":162,"skipped":2528,"failed":0}
I0330 07:41:24.748] SSSS
I0330 07:41:24.748] ------------------------------
I0330 07:41:24.749] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
I0330 07:41:24.749]   should be able to deny pod and configmap creation [Conformance]
I0330 07:41:24.749]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:24.749] [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
... skipping 40 lines ...
I0330 07:41:24.755] • [SLOW TEST:13.968 seconds]
I0330 07:41:24.755] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
I0330 07:41:24.755] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
I0330 07:41:24.756]   should be able to deny pod and configmap creation [Conformance]
I0330 07:41:24.756]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:24.756] ------------------------------
I0330 07:41:24.756] {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","total":340,"completed":163,"skipped":2532,"failed":0}
I0330 07:41:24.757] SSSSSSSSSS
I0330 07:41:24.757] ------------------------------
I0330 07:41:24.757] [sig-network] Services 
I0330 07:41:24.757]   should be able to create a functioning NodePort service [Conformance]
I0330 07:41:24.757]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:24.757] [BeforeEach] [sig-network] Services
... skipping 69 lines ...
I0330 07:41:24.771] • [SLOW TEST:14.355 seconds]
I0330 07:41:24.771] [sig-network] Services
I0330 07:41:24.771] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
I0330 07:41:24.771]   should be able to create a functioning NodePort service [Conformance]
I0330 07:41:24.771]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:24.771] ------------------------------
I0330 07:41:24.772] {"msg":"PASSED [sig-network] Services should be able to create a functioning NodePort service [Conformance]","total":340,"completed":164,"skipped":2542,"failed":0}
I0330 07:41:24.772] SS
I0330 07:41:24.772] ------------------------------
I0330 07:41:24.772] [sig-storage] EmptyDir wrapper volumes 
I0330 07:41:24.772]   should not conflict [Conformance]
I0330 07:41:24.772]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:24.772] [BeforeEach] [sig-storage] EmptyDir wrapper volumes
... skipping 16 lines ...
I0330 07:41:24.776] STEP: Cleaning up the configmap
I0330 07:41:24.776] STEP: Cleaning up the pod
I0330 07:41:24.776] [AfterEach] [sig-storage] EmptyDir wrapper volumes
I0330 07:41:24.777]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
I0330 07:41:24.777] Mar 30 06:43:34.084: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0330 07:41:24.777] STEP: Destroying namespace "emptydir-wrapper-4681" for this suite.
I0330 07:41:24.777] •{"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance]","total":340,"completed":165,"skipped":2544,"failed":0}
I0330 07:41:24.777] SSSSSSSSSSSSSSSS
I0330 07:41:24.777] ------------------------------
I0330 07:41:24.777] [sig-instrumentation] Events 
I0330 07:41:24.778]   should delete a collection of events [Conformance]
I0330 07:41:24.778]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:24.778] [BeforeEach] [sig-instrumentation] Events
... skipping 20 lines ...
I0330 07:41:24.781] STEP: check that the list of events matches the requested quantity
I0330 07:41:24.781] Mar 30 06:43:34.149: INFO: requesting list of events to confirm quantity
I0330 07:41:24.781] [AfterEach] [sig-instrumentation] Events
I0330 07:41:24.781]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
I0330 07:41:24.781] Mar 30 06:43:34.152: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0330 07:41:24.782] STEP: Destroying namespace "events-913" for this suite.
I0330 07:41:24.782] •{"msg":"PASSED [sig-instrumentation] Events should delete a collection of events [Conformance]","total":340,"completed":166,"skipped":2560,"failed":0}
I0330 07:41:24.782] SSSSSSSSSSSSSSSSSSSSSS
I0330 07:41:24.782] ------------------------------
I0330 07:41:24.782] [sig-cli] Kubectl client Proxy server 
I0330 07:41:24.782]   should support --unix-socket=/path  [Conformance]
I0330 07:41:24.782]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:24.782] [BeforeEach] [sig-cli] Kubectl client
... skipping 16 lines ...
I0330 07:41:24.785] Mar 30 06:43:34.187: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/tmp/kubeconfig-963336331 --namespace=kubectl-6980 proxy --unix-socket=/tmp/kubectl-proxy-unix923725790/test'
I0330 07:41:24.785] STEP: retrieving proxy /api/ output
I0330 07:41:24.785] [AfterEach] [sig-cli] Kubectl client
I0330 07:41:24.785]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
I0330 07:41:24.785] Mar 30 06:43:34.262: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0330 07:41:24.785] STEP: Destroying namespace "kubectl-6980" for this suite.
I0330 07:41:24.786] •{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support --unix-socket=/path  [Conformance]","total":340,"completed":167,"skipped":2582,"failed":0}
I0330 07:41:24.786] SS
I0330 07:41:24.786] ------------------------------
I0330 07:41:24.786] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] 
I0330 07:41:24.786]   should support sysctls [MinimumKubeletVersion:1.21] [Conformance]
I0330 07:41:24.786]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:24.786] [BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls]
... skipping 12 lines ...
I0330 07:41:24.788] [BeforeEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls]
I0330 07:41:24.788]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/node/sysctl.go:64
I0330 07:41:24.788] I0330 06:43:34.305783      19 reflector.go:225] Stopping reflector *v1.ServiceAccount (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0330 07:41:24.788] [It] should support sysctls [MinimumKubeletVersion:1.21] [Conformance]
I0330 07:41:24.789]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:24.789] STEP: Creating a pod with the kernel.shm_rmid_forced sysctl
I0330 07:41:24.789] STEP: Watching for error events or started pod
I0330 07:41:24.789] STEP: Waiting for pod completion
I0330 07:41:24.789] STEP: Checking that the pod succeeded
I0330 07:41:24.789] STEP: Getting logs from the pod
I0330 07:41:24.789] STEP: Checking that the sysctl is actually updated
I0330 07:41:24.789] [AfterEach] [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls]
I0330 07:41:24.789]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
I0330 07:41:24.790] Mar 30 06:43:36.337: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0330 07:41:24.790] STEP: Destroying namespace "sysctl-5769" for this suite.
I0330 07:41:24.790] •{"msg":"PASSED [sig-node] Sysctls [LinuxOnly] [NodeFeature:Sysctls] should support sysctls [MinimumKubeletVersion:1.21] [Conformance]","total":340,"completed":168,"skipped":2584,"failed":0}
I0330 07:41:24.790] SSSSSSSSSSS
I0330 07:41:24.790] ------------------------------
I0330 07:41:24.790] [sig-network] DNS 
I0330 07:41:24.791]   should provide DNS for pods for Subdomain [Conformance]
I0330 07:41:24.791]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:24.791] [BeforeEach] [sig-network] DNS
... skipping 24 lines ...
I0330 07:41:24.798] Mar 30 06:43:38.411: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-9868.svc.cluster.local from pod dns-9868/dns-test-ad6660c3-1b7f-43ec-b20d-46fafe20f284: the server could not find the requested resource (get pods dns-test-ad6660c3-1b7f-43ec-b20d-46fafe20f284)
I0330 07:41:24.798] Mar 30 06:43:38.414: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-9868.svc.cluster.local from pod dns-9868/dns-test-ad6660c3-1b7f-43ec-b20d-46fafe20f284: the server could not find the requested resource (get pods dns-test-ad6660c3-1b7f-43ec-b20d-46fafe20f284)
I0330 07:41:24.799] Mar 30 06:43:38.421: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-9868.svc.cluster.local from pod dns-9868/dns-test-ad6660c3-1b7f-43ec-b20d-46fafe20f284: the server could not find the requested resource (get pods dns-test-ad6660c3-1b7f-43ec-b20d-46fafe20f284)
I0330 07:41:24.799] Mar 30 06:43:38.424: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-9868.svc.cluster.local from pod dns-9868/dns-test-ad6660c3-1b7f-43ec-b20d-46fafe20f284: the server could not find the requested resource (get pods dns-test-ad6660c3-1b7f-43ec-b20d-46fafe20f284)
I0330 07:41:24.800] Mar 30 06:43:38.426: INFO: Unable to read jessie_udp@dns-test-service-2.dns-9868.svc.cluster.local from pod dns-9868/dns-test-ad6660c3-1b7f-43ec-b20d-46fafe20f284: the server could not find the requested resource (get pods dns-test-ad6660c3-1b7f-43ec-b20d-46fafe20f284)
I0330 07:41:24.800] Mar 30 06:43:38.429: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-9868.svc.cluster.local from pod dns-9868/dns-test-ad6660c3-1b7f-43ec-b20d-46fafe20f284: the server could not find the requested resource (get pods dns-test-ad6660c3-1b7f-43ec-b20d-46fafe20f284)
I0330 07:41:24.801] Mar 30 06:43:38.433: INFO: Lookups using dns-9868/dns-test-ad6660c3-1b7f-43ec-b20d-46fafe20f284 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-9868.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-9868.svc.cluster.local wheezy_udp@dns-test-service-2.dns-9868.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-9868.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-9868.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-9868.svc.cluster.local jessie_udp@dns-test-service-2.dns-9868.svc.cluster.local jessie_tcp@dns-test-service-2.dns-9868.svc.cluster.local]
I0330 07:41:24.801] 
I0330 07:41:24.801] Mar 30 06:43:43.437: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-9868.svc.cluster.local from pod dns-9868/dns-test-ad6660c3-1b7f-43ec-b20d-46fafe20f284: the server could not find the requested resource (get pods dns-test-ad6660c3-1b7f-43ec-b20d-46fafe20f284)
I0330 07:41:24.801] Mar 30 06:43:43.440: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-9868.svc.cluster.local from pod dns-9868/dns-test-ad6660c3-1b7f-43ec-b20d-46fafe20f284: the server could not find the requested resource (get pods dns-test-ad6660c3-1b7f-43ec-b20d-46fafe20f284)
I0330 07:41:24.802] Mar 30 06:43:43.442: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-9868.svc.cluster.local from pod dns-9868/dns-test-ad6660c3-1b7f-43ec-b20d-46fafe20f284: the server could not find the requested resource (get pods dns-test-ad6660c3-1b7f-43ec-b20d-46fafe20f284)
I0330 07:41:24.802] Mar 30 06:43:43.444: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-9868.svc.cluster.local from pod dns-9868/dns-test-ad6660c3-1b7f-43ec-b20d-46fafe20f284: the server could not find the requested resource (get pods dns-test-ad6660c3-1b7f-43ec-b20d-46fafe20f284)
I0330 07:41:24.802] Mar 30 06:43:43.452: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-9868.svc.cluster.local from pod dns-9868/dns-test-ad6660c3-1b7f-43ec-b20d-46fafe20f284: the server could not find the requested resource (get pods dns-test-ad6660c3-1b7f-43ec-b20d-46fafe20f284)
I0330 07:41:24.802] Mar 30 06:43:43.454: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-9868.svc.cluster.local from pod dns-9868/dns-test-ad6660c3-1b7f-43ec-b20d-46fafe20f284: the server could not find the requested resource (get pods dns-test-ad6660c3-1b7f-43ec-b20d-46fafe20f284)
I0330 07:41:24.803] Mar 30 06:43:43.457: INFO: Unable to read jessie_udp@dns-test-service-2.dns-9868.svc.cluster.local from pod dns-9868/dns-test-ad6660c3-1b7f-43ec-b20d-46fafe20f284: the server could not find the requested resource (get pods dns-test-ad6660c3-1b7f-43ec-b20d-46fafe20f284)
I0330 07:41:24.803] Mar 30 06:43:43.459: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-9868.svc.cluster.local from pod dns-9868/dns-test-ad6660c3-1b7f-43ec-b20d-46fafe20f284: the server could not find the requested resource (get pods dns-test-ad6660c3-1b7f-43ec-b20d-46fafe20f284)
I0330 07:41:24.804] Mar 30 06:43:43.464: INFO: Lookups using dns-9868/dns-test-ad6660c3-1b7f-43ec-b20d-46fafe20f284 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-9868.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-9868.svc.cluster.local wheezy_udp@dns-test-service-2.dns-9868.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-9868.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-9868.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-9868.svc.cluster.local jessie_udp@dns-test-service-2.dns-9868.svc.cluster.local jessie_tcp@dns-test-service-2.dns-9868.svc.cluster.local]
I0330 07:41:24.804] 
I0330 07:41:24.804] Mar 30 06:43:48.437: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-9868.svc.cluster.local from pod dns-9868/dns-test-ad6660c3-1b7f-43ec-b20d-46fafe20f284: the server could not find the requested resource (get pods dns-test-ad6660c3-1b7f-43ec-b20d-46fafe20f284)
I0330 07:41:24.804] Mar 30 06:43:48.440: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-9868.svc.cluster.local from pod dns-9868/dns-test-ad6660c3-1b7f-43ec-b20d-46fafe20f284: the server could not find the requested resource (get pods dns-test-ad6660c3-1b7f-43ec-b20d-46fafe20f284)
I0330 07:41:24.805] Mar 30 06:43:48.443: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-9868.svc.cluster.local from pod dns-9868/dns-test-ad6660c3-1b7f-43ec-b20d-46fafe20f284: the server could not find the requested resource (get pods dns-test-ad6660c3-1b7f-43ec-b20d-46fafe20f284)
I0330 07:41:24.805] Mar 30 06:43:48.445: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-9868.svc.cluster.local from pod dns-9868/dns-test-ad6660c3-1b7f-43ec-b20d-46fafe20f284: the server could not find the requested resource (get pods dns-test-ad6660c3-1b7f-43ec-b20d-46fafe20f284)
I0330 07:41:24.805] Mar 30 06:43:48.452: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-9868.svc.cluster.local from pod dns-9868/dns-test-ad6660c3-1b7f-43ec-b20d-46fafe20f284: the server could not find the requested resource (get pods dns-test-ad6660c3-1b7f-43ec-b20d-46fafe20f284)
I0330 07:41:24.805] Mar 30 06:43:48.455: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-9868.svc.cluster.local from pod dns-9868/dns-test-ad6660c3-1b7f-43ec-b20d-46fafe20f284: the server could not find the requested resource (get pods dns-test-ad6660c3-1b7f-43ec-b20d-46fafe20f284)
I0330 07:41:24.806] Mar 30 06:43:48.458: INFO: Unable to read jessie_udp@dns-test-service-2.dns-9868.svc.cluster.local from pod dns-9868/dns-test-ad6660c3-1b7f-43ec-b20d-46fafe20f284: the server could not find the requested resource (get pods dns-test-ad6660c3-1b7f-43ec-b20d-46fafe20f284)
I0330 07:41:24.806] Mar 30 06:43:48.460: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-9868.svc.cluster.local from pod dns-9868/dns-test-ad6660c3-1b7f-43ec-b20d-46fafe20f284: the server could not find the requested resource (get pods dns-test-ad6660c3-1b7f-43ec-b20d-46fafe20f284)
I0330 07:41:24.806] Mar 30 06:43:48.465: INFO: Lookups using dns-9868/dns-test-ad6660c3-1b7f-43ec-b20d-46fafe20f284 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-9868.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-9868.svc.cluster.local wheezy_udp@dns-test-service-2.dns-9868.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-9868.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-9868.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-9868.svc.cluster.local jessie_udp@dns-test-service-2.dns-9868.svc.cluster.local jessie_tcp@dns-test-service-2.dns-9868.svc.cluster.local]
I0330 07:41:24.806] 
I0330 07:41:24.807] Mar 30 06:43:53.438: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-9868.svc.cluster.local from pod dns-9868/dns-test-ad6660c3-1b7f-43ec-b20d-46fafe20f284: the server could not find the requested resource (get pods dns-test-ad6660c3-1b7f-43ec-b20d-46fafe20f284)
I0330 07:41:24.807] Mar 30 06:43:53.442: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-9868.svc.cluster.local from pod dns-9868/dns-test-ad6660c3-1b7f-43ec-b20d-46fafe20f284: the server could not find the requested resource (get pods dns-test-ad6660c3-1b7f-43ec-b20d-46fafe20f284)
I0330 07:41:24.807] Mar 30 06:43:53.444: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-9868.svc.cluster.local from pod dns-9868/dns-test-ad6660c3-1b7f-43ec-b20d-46fafe20f284: the server could not find the requested resource (get pods dns-test-ad6660c3-1b7f-43ec-b20d-46fafe20f284)
I0330 07:41:24.808] Mar 30 06:43:53.446: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-9868.svc.cluster.local from pod dns-9868/dns-test-ad6660c3-1b7f-43ec-b20d-46fafe20f284: the server could not find the requested resource (get pods dns-test-ad6660c3-1b7f-43ec-b20d-46fafe20f284)
I0330 07:41:24.808] Mar 30 06:43:53.454: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-9868.svc.cluster.local from pod dns-9868/dns-test-ad6660c3-1b7f-43ec-b20d-46fafe20f284: the server could not find the requested resource (get pods dns-test-ad6660c3-1b7f-43ec-b20d-46fafe20f284)
I0330 07:41:24.808] Mar 30 06:43:53.457: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-9868.svc.cluster.local from pod dns-9868/dns-test-ad6660c3-1b7f-43ec-b20d-46fafe20f284: the server could not find the requested resource (get pods dns-test-ad6660c3-1b7f-43ec-b20d-46fafe20f284)
I0330 07:41:24.809] Mar 30 06:43:53.459: INFO: Unable to read jessie_udp@dns-test-service-2.dns-9868.svc.cluster.local from pod dns-9868/dns-test-ad6660c3-1b7f-43ec-b20d-46fafe20f284: the server could not find the requested resource (get pods dns-test-ad6660c3-1b7f-43ec-b20d-46fafe20f284)
I0330 07:41:24.809] Mar 30 06:43:53.461: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-9868.svc.cluster.local from pod dns-9868/dns-test-ad6660c3-1b7f-43ec-b20d-46fafe20f284: the server could not find the requested resource (get pods dns-test-ad6660c3-1b7f-43ec-b20d-46fafe20f284)
I0330 07:41:24.809] Mar 30 06:43:53.467: INFO: Lookups using dns-9868/dns-test-ad6660c3-1b7f-43ec-b20d-46fafe20f284 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-9868.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-9868.svc.cluster.local wheezy_udp@dns-test-service-2.dns-9868.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-9868.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-9868.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-9868.svc.cluster.local jessie_udp@dns-test-service-2.dns-9868.svc.cluster.local jessie_tcp@dns-test-service-2.dns-9868.svc.cluster.local]
I0330 07:41:24.810] 
I0330 07:41:24.810] Mar 30 06:43:58.437: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-9868.svc.cluster.local from pod dns-9868/dns-test-ad6660c3-1b7f-43ec-b20d-46fafe20f284: the server could not find the requested resource (get pods dns-test-ad6660c3-1b7f-43ec-b20d-46fafe20f284)
I0330 07:41:24.810] Mar 30 06:43:58.440: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-9868.svc.cluster.local from pod dns-9868/dns-test-ad6660c3-1b7f-43ec-b20d-46fafe20f284: the server could not find the requested resource (get pods dns-test-ad6660c3-1b7f-43ec-b20d-46fafe20f284)
I0330 07:41:24.810] Mar 30 06:43:58.442: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-9868.svc.cluster.local from pod dns-9868/dns-test-ad6660c3-1b7f-43ec-b20d-46fafe20f284: the server could not find the requested resource (get pods dns-test-ad6660c3-1b7f-43ec-b20d-46fafe20f284)
I0330 07:41:24.811] Mar 30 06:43:58.445: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-9868.svc.cluster.local from pod dns-9868/dns-test-ad6660c3-1b7f-43ec-b20d-46fafe20f284: the server could not find the requested resource (get pods dns-test-ad6660c3-1b7f-43ec-b20d-46fafe20f284)
I0330 07:41:24.811] Mar 30 06:43:58.452: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-9868.svc.cluster.local from pod dns-9868/dns-test-ad6660c3-1b7f-43ec-b20d-46fafe20f284: the server could not find the requested resource (get pods dns-test-ad6660c3-1b7f-43ec-b20d-46fafe20f284)
I0330 07:41:24.811] Mar 30 06:43:58.454: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-9868.svc.cluster.local from pod dns-9868/dns-test-ad6660c3-1b7f-43ec-b20d-46fafe20f284: the server could not find the requested resource (get pods dns-test-ad6660c3-1b7f-43ec-b20d-46fafe20f284)
I0330 07:41:24.811] Mar 30 06:43:58.457: INFO: Unable to read jessie_udp@dns-test-service-2.dns-9868.svc.cluster.local from pod dns-9868/dns-test-ad6660c3-1b7f-43ec-b20d-46fafe20f284: the server could not find the requested resource (get pods dns-test-ad6660c3-1b7f-43ec-b20d-46fafe20f284)
I0330 07:41:24.812] Mar 30 06:43:58.460: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-9868.svc.cluster.local from pod dns-9868/dns-test-ad6660c3-1b7f-43ec-b20d-46fafe20f284: the server could not find the requested resource (get pods dns-test-ad6660c3-1b7f-43ec-b20d-46fafe20f284)
I0330 07:41:24.812] Mar 30 06:43:58.464: INFO: Lookups using dns-9868/dns-test-ad6660c3-1b7f-43ec-b20d-46fafe20f284 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-9868.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-9868.svc.cluster.local wheezy_udp@dns-test-service-2.dns-9868.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-9868.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-9868.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-9868.svc.cluster.local jessie_udp@dns-test-service-2.dns-9868.svc.cluster.local jessie_tcp@dns-test-service-2.dns-9868.svc.cluster.local]
I0330 07:41:24.812] 
I0330 07:41:24.813] Mar 30 06:44:03.437: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-9868.svc.cluster.local from pod dns-9868/dns-test-ad6660c3-1b7f-43ec-b20d-46fafe20f284: the server could not find the requested resource (get pods dns-test-ad6660c3-1b7f-43ec-b20d-46fafe20f284)
I0330 07:41:24.813] Mar 30 06:44:03.440: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-9868.svc.cluster.local from pod dns-9868/dns-test-ad6660c3-1b7f-43ec-b20d-46fafe20f284: the server could not find the requested resource (get pods dns-test-ad6660c3-1b7f-43ec-b20d-46fafe20f284)
I0330 07:41:24.813] Mar 30 06:44:03.442: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-9868.svc.cluster.local from pod dns-9868/dns-test-ad6660c3-1b7f-43ec-b20d-46fafe20f284: the server could not find the requested resource (get pods dns-test-ad6660c3-1b7f-43ec-b20d-46fafe20f284)
I0330 07:41:24.813] Mar 30 06:44:03.445: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-9868.svc.cluster.local from pod dns-9868/dns-test-ad6660c3-1b7f-43ec-b20d-46fafe20f284: the server could not find the requested resource (get pods dns-test-ad6660c3-1b7f-43ec-b20d-46fafe20f284)
I0330 07:41:24.814] Mar 30 06:44:03.452: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-9868.svc.cluster.local from pod dns-9868/dns-test-ad6660c3-1b7f-43ec-b20d-46fafe20f284: the server could not find the requested resource (get pods dns-test-ad6660c3-1b7f-43ec-b20d-46fafe20f284)
I0330 07:41:24.814] Mar 30 06:44:03.454: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-9868.svc.cluster.local from pod dns-9868/dns-test-ad6660c3-1b7f-43ec-b20d-46fafe20f284: the server could not find the requested resource (get pods dns-test-ad6660c3-1b7f-43ec-b20d-46fafe20f284)
I0330 07:41:24.814] Mar 30 06:44:03.457: INFO: Unable to read jessie_udp@dns-test-service-2.dns-9868.svc.cluster.local from pod dns-9868/dns-test-ad6660c3-1b7f-43ec-b20d-46fafe20f284: the server could not find the requested resource (get pods dns-test-ad6660c3-1b7f-43ec-b20d-46fafe20f284)
I0330 07:41:24.814] Mar 30 06:44:03.459: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-9868.svc.cluster.local from pod dns-9868/dns-test-ad6660c3-1b7f-43ec-b20d-46fafe20f284: the server could not find the requested resource (get pods dns-test-ad6660c3-1b7f-43ec-b20d-46fafe20f284)
I0330 07:41:24.815] Mar 30 06:44:03.463: INFO: Lookups using dns-9868/dns-test-ad6660c3-1b7f-43ec-b20d-46fafe20f284 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-9868.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-9868.svc.cluster.local wheezy_udp@dns-test-service-2.dns-9868.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-9868.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-9868.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-9868.svc.cluster.local jessie_udp@dns-test-service-2.dns-9868.svc.cluster.local jessie_tcp@dns-test-service-2.dns-9868.svc.cluster.local]
I0330 07:41:24.815] 
I0330 07:41:24.815] Mar 30 06:44:08.468: INFO: DNS probes using dns-9868/dns-test-ad6660c3-1b7f-43ec-b20d-46fafe20f284 succeeded
I0330 07:41:24.815] 
I0330 07:41:24.815] STEP: deleting the pod
I0330 07:41:24.816] STEP: deleting the test headless service
I0330 07:41:24.816] [AfterEach] [sig-network] DNS
... skipping 4 lines ...
I0330 07:41:24.816] • [SLOW TEST:32.172 seconds]
I0330 07:41:24.816] [sig-network] DNS
I0330 07:41:24.816] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
I0330 07:41:24.816]   should provide DNS for pods for Subdomain [Conformance]
I0330 07:41:24.817]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:24.817] ------------------------------
I0330 07:41:24.817] {"msg":"PASSED [sig-network] DNS should provide DNS for pods for Subdomain [Conformance]","total":340,"completed":169,"skipped":2595,"failed":0}
I0330 07:41:24.817] SSSSSSS
I0330 07:41:24.817] ------------------------------
I0330 07:41:24.817] [sig-storage] Subpath Atomic writer volumes 
I0330 07:41:24.817]   should support subpaths with downward pod [LinuxOnly] [Conformance]
I0330 07:41:24.817]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:24.817] [BeforeEach] [sig-storage] Subpath
... skipping 12 lines ...
I0330 07:41:24.819] STEP: Setting up data
I0330 07:41:24.819] I0330 06:44:08.550502      19 reflector.go:225] Stopping reflector *v1.ServiceAccount (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0330 07:41:24.819] [It] should support subpaths with downward pod [LinuxOnly] [Conformance]
I0330 07:41:24.820]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:24.820] STEP: Creating pod pod-subpath-test-downwardapi-hf68
I0330 07:41:24.820] STEP: Creating a pod to test atomic-volume-subpath
I0330 07:41:24.820] Mar 30 06:44:08.564: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-hf68" in namespace "subpath-7402" to be "Succeeded or Failed"
I0330 07:41:24.820] Mar 30 06:44:08.571: INFO: Pod "pod-subpath-test-downwardapi-hf68": Phase="Pending", Reason="", readiness=false. Elapsed: 6.677956ms
I0330 07:41:24.820] Mar 30 06:44:10.578: INFO: Pod "pod-subpath-test-downwardapi-hf68": Phase="Running", Reason="", readiness=true. Elapsed: 2.013892166s
I0330 07:41:24.820] Mar 30 06:44:12.586: INFO: Pod "pod-subpath-test-downwardapi-hf68": Phase="Running", Reason="", readiness=true. Elapsed: 4.021901247s
I0330 07:41:24.821] Mar 30 06:44:14.594: INFO: Pod "pod-subpath-test-downwardapi-hf68": Phase="Running", Reason="", readiness=true. Elapsed: 6.029945531s
I0330 07:41:24.821] Mar 30 06:44:16.599: INFO: Pod "pod-subpath-test-downwardapi-hf68": Phase="Running", Reason="", readiness=true. Elapsed: 8.035052471s
I0330 07:41:24.821] Mar 30 06:44:18.606: INFO: Pod "pod-subpath-test-downwardapi-hf68": Phase="Running", Reason="", readiness=true. Elapsed: 10.042238481s
I0330 07:41:24.821] Mar 30 06:44:20.613: INFO: Pod "pod-subpath-test-downwardapi-hf68": Phase="Running", Reason="", readiness=true. Elapsed: 12.048609479s
I0330 07:41:24.821] Mar 30 06:44:22.620: INFO: Pod "pod-subpath-test-downwardapi-hf68": Phase="Running", Reason="", readiness=true. Elapsed: 14.055726229s
I0330 07:41:24.821] Mar 30 06:44:24.626: INFO: Pod "pod-subpath-test-downwardapi-hf68": Phase="Running", Reason="", readiness=true. Elapsed: 16.062440621s
I0330 07:41:24.821] Mar 30 06:44:26.631: INFO: Pod "pod-subpath-test-downwardapi-hf68": Phase="Running", Reason="", readiness=true. Elapsed: 18.067365176s
I0330 07:41:24.822] Mar 30 06:44:28.638: INFO: Pod "pod-subpath-test-downwardapi-hf68": Phase="Running", Reason="", readiness=true. Elapsed: 20.074359522s
I0330 07:41:24.822] Mar 30 06:44:30.645: INFO: Pod "pod-subpath-test-downwardapi-hf68": Phase="Succeeded", Reason="", readiness=false. Elapsed: 22.080879958s
I0330 07:41:24.822] STEP: Saw pod success
I0330 07:41:24.822] Mar 30 06:44:30.645: INFO: Pod "pod-subpath-test-downwardapi-hf68" satisfied condition "Succeeded or Failed"
I0330 07:41:24.822] Mar 30 06:44:30.648: INFO: Trying to get logs from node kind-worker2 pod pod-subpath-test-downwardapi-hf68 container test-container-subpath-downwardapi-hf68: <nil>
I0330 07:41:24.822] STEP: delete the pod
I0330 07:41:24.822] Mar 30 06:44:30.665: INFO: Waiting for pod pod-subpath-test-downwardapi-hf68 to disappear
I0330 07:41:24.823] Mar 30 06:44:30.668: INFO: Pod pod-subpath-test-downwardapi-hf68 no longer exists
I0330 07:41:24.823] STEP: Deleting pod pod-subpath-test-downwardapi-hf68
I0330 07:41:24.823] Mar 30 06:44:30.668: INFO: Deleting pod "pod-subpath-test-downwardapi-hf68" in namespace "subpath-7402"
... skipping 7 lines ...
I0330 07:41:24.824] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
I0330 07:41:24.824]   Atomic writer volumes
I0330 07:41:24.824]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
I0330 07:41:24.824]     should support subpaths with downward pod [LinuxOnly] [Conformance]
I0330 07:41:24.824]     /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:24.824] ------------------------------
I0330 07:41:24.824] {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance]","total":340,"completed":170,"skipped":2602,"failed":0}
I0330 07:41:24.825] SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
I0330 07:41:24.825] ------------------------------
I0330 07:41:24.825] [sig-api-machinery] ResourceQuota 
I0330 07:41:24.825]   should be able to update and delete ResourceQuota. [Conformance]
I0330 07:41:24.825]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:24.825] [BeforeEach] [sig-api-machinery] ResourceQuota
... skipping 17 lines ...
I0330 07:41:24.827] STEP: Deleting a ResourceQuota
I0330 07:41:24.827] STEP: Verifying the deleted ResourceQuota
I0330 07:41:24.827] [AfterEach] [sig-api-machinery] ResourceQuota
I0330 07:41:24.828]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
I0330 07:41:24.828] Mar 30 06:44:30.727: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0330 07:41:24.828] STEP: Destroying namespace "resourcequota-7448" for this suite.
I0330 07:41:24.828] •{"msg":"PASSED [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. [Conformance]","total":340,"completed":171,"skipped":2634,"failed":0}
I0330 07:41:24.828] SSSSSSSSSSSSSSSSSSSSSS
I0330 07:41:24.828] ------------------------------
I0330 07:41:24.828] [sig-node] NoExecuteTaintManager Single Pod [Serial] 
I0330 07:41:24.828]   removing taint cancels eviction [Disruptive] [Conformance]
I0330 07:41:24.829]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:24.829] [BeforeEach] [sig-node] NoExecuteTaintManager Single Pod [Serial]
... skipping 35 lines ...
I0330 07:41:24.834] • [SLOW TEST:135.301 seconds]
I0330 07:41:24.834] [sig-node] NoExecuteTaintManager Single Pod [Serial]
I0330 07:41:24.834] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23
I0330 07:41:24.834]   removing taint cancels eviction [Disruptive] [Conformance]
I0330 07:41:24.835]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:24.835] ------------------------------
I0330 07:41:24.835] {"msg":"PASSED [sig-node] NoExecuteTaintManager Single Pod [Serial] removing taint cancels eviction [Disruptive] [Conformance]","total":340,"completed":172,"skipped":2656,"failed":0}
I0330 07:41:24.835] S
I0330 07:41:24.835] ------------------------------
I0330 07:41:24.835] [sig-node] PodTemplates 
I0330 07:41:24.835]   should run the lifecycle of PodTemplates [Conformance]
I0330 07:41:24.835]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:24.835] [BeforeEach] [sig-node] PodTemplates
... skipping 11 lines ...
I0330 07:41:24.837]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:24.837] I0330 06:46:46.068964      19 reflector.go:225] Stopping reflector *v1.ServiceAccount (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0330 07:41:24.838] [AfterEach] [sig-node] PodTemplates
I0330 07:41:24.838]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
I0330 07:41:24.838] Mar 30 06:46:46.090: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0330 07:41:24.838] STEP: Destroying namespace "podtemplate-347" for this suite.
I0330 07:41:24.838] •{"msg":"PASSED [sig-node] PodTemplates should run the lifecycle of PodTemplates [Conformance]","total":340,"completed":173,"skipped":2657,"failed":0}
I0330 07:41:24.838] SSSSSSSS
I0330 07:41:24.838] ------------------------------
I0330 07:41:24.838] [sig-storage] Projected configMap 
I0330 07:41:24.838]   should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
I0330 07:41:24.839]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:24.839] [BeforeEach] [sig-storage] Projected configMap
... skipping 9 lines ...
I0330 07:41:24.841] I0330 06:46:46.123345      19 reflector.go:255] Listing and watching *v1.ServiceAccount from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0330 07:41:24.841] [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
I0330 07:41:24.841]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:24.841] I0330 06:46:46.126394      19 reflector.go:225] Stopping reflector *v1.ServiceAccount (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0330 07:41:24.841] STEP: Creating configMap with name projected-configmap-test-volume-06565380-a628-472d-b133-73c1a1def5fa
I0330 07:41:24.842] STEP: Creating a pod to test consume configMaps
I0330 07:41:24.842] Mar 30 06:46:46.136: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-8b49fbde-d563-4fea-9714-12e8174c811c" in namespace "projected-7002" to be "Succeeded or Failed"
I0330 07:41:24.842] Mar 30 06:46:46.141: INFO: Pod "pod-projected-configmaps-8b49fbde-d563-4fea-9714-12e8174c811c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.487713ms
I0330 07:41:24.842] Mar 30 06:46:48.147: INFO: Pod "pod-projected-configmaps-8b49fbde-d563-4fea-9714-12e8174c811c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.010797597s
I0330 07:41:24.842] STEP: Saw pod success
I0330 07:41:24.842] Mar 30 06:46:48.147: INFO: Pod "pod-projected-configmaps-8b49fbde-d563-4fea-9714-12e8174c811c" satisfied condition "Succeeded or Failed"
I0330 07:41:24.842] Mar 30 06:46:48.150: INFO: Trying to get logs from node kind-worker pod pod-projected-configmaps-8b49fbde-d563-4fea-9714-12e8174c811c container projected-configmap-volume-test: <nil>
I0330 07:41:24.843] STEP: delete the pod
I0330 07:41:24.843] Mar 30 06:46:48.173: INFO: Waiting for pod pod-projected-configmaps-8b49fbde-d563-4fea-9714-12e8174c811c to disappear
I0330 07:41:24.843] Mar 30 06:46:48.184: INFO: Pod pod-projected-configmaps-8b49fbde-d563-4fea-9714-12e8174c811c no longer exists
I0330 07:41:24.843] [AfterEach] [sig-storage] Projected configMap
I0330 07:41:24.843]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
I0330 07:41:24.844] Mar 30 06:46:48.185: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0330 07:41:24.844] STEP: Destroying namespace "projected-7002" for this suite.
I0330 07:41:24.844] •{"msg":"PASSED [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":340,"completed":174,"skipped":2665,"failed":0}
I0330 07:41:24.844] 
I0330 07:41:24.844] ------------------------------
I0330 07:41:24.844] [sig-api-machinery] Namespaces [Serial] 
I0330 07:41:24.845]   should ensure that all pods are removed when a namespace is deleted [Conformance]
I0330 07:41:24.845]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:24.845] [BeforeEach] [sig-api-machinery] Namespaces [Serial]
... skipping 38 lines ...
I0330 07:41:24.853] • [SLOW TEST:29.123 seconds]
I0330 07:41:24.853] [sig-api-machinery] Namespaces [Serial]
I0330 07:41:24.853] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
I0330 07:41:24.853]   should ensure that all pods are removed when a namespace is deleted [Conformance]
I0330 07:41:24.853]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:24.854] ------------------------------
I0330 07:41:24.854] {"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance]","total":340,"completed":175,"skipped":2665,"failed":0}
I0330 07:41:24.854] S
I0330 07:41:24.854] ------------------------------
I0330 07:41:24.854] [sig-node] Security Context When creating a pod with privileged 
I0330 07:41:24.854]   should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]
I0330 07:41:24.854]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:24.854] [BeforeEach] [sig-node] Security Context
... skipping 9 lines ...
I0330 07:41:24.856] I0330 06:47:17.340381      19 reflector.go:255] Listing and watching *v1.ServiceAccount from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0330 07:41:24.856] [BeforeEach] [sig-node] Security Context
I0330 07:41:24.857]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46
I0330 07:41:24.857] [It] should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]
I0330 07:41:24.857]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:24.857] I0330 06:47:17.342678      19 reflector.go:225] Stopping reflector *v1.ServiceAccount (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0330 07:41:24.858] Mar 30 06:47:17.349: INFO: Waiting up to 5m0s for pod "busybox-privileged-false-c228b930-9b10-4307-a4ee-5019e986fece" in namespace "security-context-test-4463" to be "Succeeded or Failed"
I0330 07:41:24.858] Mar 30 06:47:17.351: INFO: Pod "busybox-privileged-false-c228b930-9b10-4307-a4ee-5019e986fece": Phase="Pending", Reason="", readiness=false. Elapsed: 2.263139ms
I0330 07:41:24.858] Mar 30 06:47:19.357: INFO: Pod "busybox-privileged-false-c228b930-9b10-4307-a4ee-5019e986fece": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.008121219s
I0330 07:41:24.858] Mar 30 06:47:19.357: INFO: Pod "busybox-privileged-false-c228b930-9b10-4307-a4ee-5019e986fece" satisfied condition "Succeeded or Failed"
I0330 07:41:24.859] Mar 30 06:47:19.369: INFO: Got logs for pod "busybox-privileged-false-c228b930-9b10-4307-a4ee-5019e986fece": "ip: RTNETLINK answers: Operation not permitted\n"
I0330 07:41:24.859] [AfterEach] [sig-node] Security Context
I0330 07:41:24.859]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
I0330 07:41:24.859] Mar 30 06:47:19.369: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0330 07:41:24.860] STEP: Destroying namespace "security-context-test-4463" for this suite.
I0330 07:41:24.860] •{"msg":"PASSED [sig-node] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]","total":340,"completed":176,"skipped":2666,"failed":0}
I0330 07:41:24.860] SSSSSS
I0330 07:41:24.860] ------------------------------
I0330 07:41:24.860] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition 
I0330 07:41:24.861]   creating/deleting custom resource definition objects works  [Conformance]
I0330 07:41:24.861]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:24.861] [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
... skipping 12 lines ...
I0330 07:41:24.864]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:24.864] Mar 30 06:47:19.413: INFO: >>> kubeConfig: /tmp/kubeconfig-963336331
I0330 07:41:24.864] [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
I0330 07:41:24.864]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
I0330 07:41:24.864] Mar 30 06:47:20.433: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0330 07:41:24.865] STEP: Destroying namespace "custom-resource-definition-2928" for this suite.
I0330 07:41:24.865] •{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works  [Conformance]","total":340,"completed":177,"skipped":2672,"failed":0}
I0330 07:41:24.865] SSSSSSSSSSSSSSSSSSS
I0330 07:41:24.865] ------------------------------
I0330 07:41:24.865] [sig-node] Kubelet when scheduling a busybox command that always fails in a pod 
I0330 07:41:24.865]   should have an terminated reason [NodeConformance] [Conformance]
I0330 07:41:24.865]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:24.866] [BeforeEach] [sig-node] Kubelet
... skipping 15 lines ...
I0330 07:41:24.868] [It] should have an terminated reason [NodeConformance] [Conformance]
I0330 07:41:24.868]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:24.868] [AfterEach] [sig-node] Kubelet
I0330 07:41:24.868]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
I0330 07:41:24.868] Mar 30 06:47:24.490: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0330 07:41:24.868] STEP: Destroying namespace "kubelet-test-5835" for this suite.
I0330 07:41:24.869] •{"msg":"PASSED [sig-node] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance]","total":340,"completed":178,"skipped":2691,"failed":0}
I0330 07:41:24.869] SSSSSSS
I0330 07:41:24.869] ------------------------------
I0330 07:41:24.869] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
I0330 07:41:24.869]   should be able to deny attaching pod [Conformance]
I0330 07:41:24.869]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:24.869] [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
... skipping 36 lines ...
I0330 07:41:24.875] • [SLOW TEST:6.061 seconds]
I0330 07:41:24.875] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
I0330 07:41:24.875] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
I0330 07:41:24.875]   should be able to deny attaching pod [Conformance]
I0330 07:41:24.875]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:24.875] ------------------------------
I0330 07:41:24.875] {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","total":340,"completed":179,"skipped":2698,"failed":0}
I0330 07:41:24.876] SSSSSSSSSSS
I0330 07:41:24.876] ------------------------------
I0330 07:41:24.876] [sig-api-machinery] Garbage collector 
I0330 07:41:24.876]   should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
I0330 07:41:24.876]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:24.876] [BeforeEach] [sig-api-machinery] Garbage collector
... skipping 13 lines ...
I0330 07:41:24.878] STEP: create the deployment
I0330 07:41:24.878] STEP: Wait for the Deployment to create new ReplicaSet
I0330 07:41:24.878] STEP: delete the deployment
I0330 07:41:24.878] STEP: wait for deployment deletion to see if the garbage collector mistakenly deletes the rs
I0330 07:41:24.878] STEP: Gathering metrics
I0330 07:41:24.879] W0330 06:47:31.654047      19 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled.
I0330 07:41:24.879] Mar 30 06:48:33.673: INFO: MetricsGrabber failed grab metrics. Skipping metrics gathering.
I0330 07:41:24.879] [AfterEach] [sig-api-machinery] Garbage collector
I0330 07:41:24.879]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
I0330 07:41:24.879] Mar 30 06:48:33.673: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0330 07:41:24.879] STEP: Destroying namespace "gc-2210" for this suite.
I0330 07:41:24.879] 
I0330 07:41:24.879] • [SLOW TEST:63.130 seconds]
I0330 07:41:24.880] [sig-api-machinery] Garbage collector
I0330 07:41:24.880] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
I0330 07:41:24.880]   should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
I0330 07:41:24.880]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:24.881] ------------------------------
I0330 07:41:24.881] {"msg":"PASSED [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]","total":340,"completed":180,"skipped":2709,"failed":0}
I0330 07:41:24.881] SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
I0330 07:41:24.881] ------------------------------
I0330 07:41:24.881] [sig-storage] EmptyDir volumes 
I0330 07:41:24.881]   should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
I0330 07:41:24.881]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:24.881] [BeforeEach] [sig-storage] EmptyDir volumes
... skipping 8 lines ...
I0330 07:41:24.883] I0330 06:48:33.729767      19 reflector.go:219] Starting reflector *v1.ServiceAccount (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0330 07:41:24.883] I0330 06:48:33.729889      19 reflector.go:255] Listing and watching *v1.ServiceAccount from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0330 07:41:24.883] I0330 06:48:33.734478      19 reflector.go:225] Stopping reflector *v1.ServiceAccount (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0330 07:41:24.883] [It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
I0330 07:41:24.883]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:24.883] STEP: Creating a pod to test emptydir 0644 on node default medium
I0330 07:41:24.884] Mar 30 06:48:33.743: INFO: Waiting up to 5m0s for pod "pod-ed753365-1405-4775-9d14-4b483bd51a12" in namespace "emptydir-618" to be "Succeeded or Failed"
I0330 07:41:24.884] Mar 30 06:48:33.747: INFO: Pod "pod-ed753365-1405-4775-9d14-4b483bd51a12": Phase="Pending", Reason="", readiness=false. Elapsed: 4.235604ms
I0330 07:41:24.884] Mar 30 06:48:35.756: INFO: Pod "pod-ed753365-1405-4775-9d14-4b483bd51a12": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.013065351s
I0330 07:41:24.884] STEP: Saw pod success
I0330 07:41:24.884] Mar 30 06:48:35.756: INFO: Pod "pod-ed753365-1405-4775-9d14-4b483bd51a12" satisfied condition "Succeeded or Failed"
I0330 07:41:24.884] Mar 30 06:48:35.760: INFO: Trying to get logs from node kind-worker2 pod pod-ed753365-1405-4775-9d14-4b483bd51a12 container test-container: <nil>
I0330 07:41:24.884] STEP: delete the pod
I0330 07:41:24.885] Mar 30 06:48:35.784: INFO: Waiting for pod pod-ed753365-1405-4775-9d14-4b483bd51a12 to disappear
I0330 07:41:24.885] Mar 30 06:48:35.787: INFO: Pod pod-ed753365-1405-4775-9d14-4b483bd51a12 no longer exists
I0330 07:41:24.885] [AfterEach] [sig-storage] EmptyDir volumes
I0330 07:41:24.885]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
I0330 07:41:24.885] Mar 30 06:48:35.787: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0330 07:41:24.885] STEP: Destroying namespace "emptydir-618" for this suite.
I0330 07:41:24.885] •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":340,"completed":181,"skipped":2754,"failed":0}
I0330 07:41:24.886] SSSSSS
I0330 07:41:24.886] ------------------------------
I0330 07:41:24.886] [sig-cli] Kubectl client Update Demo 
I0330 07:41:24.886]   should scale a replication controller  [Conformance]
I0330 07:41:24.886]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:24.886] [BeforeEach] [sig-cli] Kubectl client
... skipping 136 lines ...
I0330 07:41:24.905] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
I0330 07:41:24.905]   Update Demo
I0330 07:41:24.905]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:291
I0330 07:41:24.905]     should scale a replication controller  [Conformance]
I0330 07:41:24.905]     /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:24.905] ------------------------------
I0330 07:41:24.905] {"msg":"PASSED [sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]","total":340,"completed":182,"skipped":2760,"failed":0}
I0330 07:41:24.905] SS
I0330 07:41:24.905] ------------------------------
I0330 07:41:24.905] [sig-storage] Projected combined 
I0330 07:41:24.906]   should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
I0330 07:41:24.906]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:24.906] [BeforeEach] [sig-storage] Projected combined
... skipping 10 lines ...
I0330 07:41:24.908] [It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
I0330 07:41:24.908]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:24.908] I0330 06:48:50.318713      19 reflector.go:225] Stopping reflector *v1.ServiceAccount (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0330 07:41:24.908] STEP: Creating configMap with name configmap-projected-all-test-volume-88733ed5-eba7-46c1-95c9-3a9bb9b64f72
I0330 07:41:24.909] STEP: Creating secret with name secret-projected-all-test-volume-dba9d5c0-5292-48bd-81ae-b9dcfa893307
I0330 07:41:24.909] STEP: Creating a pod to test Check all projections for projected volume plugin
I0330 07:41:24.909] Mar 30 06:48:50.329: INFO: Waiting up to 5m0s for pod "projected-volume-42ffd013-950d-417c-a544-439719d7a037" in namespace "projected-2088" to be "Succeeded or Failed"
I0330 07:41:24.909] Mar 30 06:48:50.332: INFO: Pod "projected-volume-42ffd013-950d-417c-a544-439719d7a037": Phase="Pending", Reason="", readiness=false. Elapsed: 3.488589ms
I0330 07:41:24.910] Mar 30 06:48:52.339: INFO: Pod "projected-volume-42ffd013-950d-417c-a544-439719d7a037": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.009860254s
I0330 07:41:24.910] STEP: Saw pod success
I0330 07:41:24.910] Mar 30 06:48:52.339: INFO: Pod "projected-volume-42ffd013-950d-417c-a544-439719d7a037" satisfied condition "Succeeded or Failed"
I0330 07:41:24.910] Mar 30 06:48:52.341: INFO: Trying to get logs from node kind-worker2 pod projected-volume-42ffd013-950d-417c-a544-439719d7a037 container projected-all-volume-test: <nil>
I0330 07:41:24.910] STEP: delete the pod
I0330 07:41:24.911] Mar 30 06:48:52.356: INFO: Waiting for pod projected-volume-42ffd013-950d-417c-a544-439719d7a037 to disappear
I0330 07:41:24.911] Mar 30 06:48:52.359: INFO: Pod projected-volume-42ffd013-950d-417c-a544-439719d7a037 no longer exists
I0330 07:41:24.911] [AfterEach] [sig-storage] Projected combined
I0330 07:41:24.911]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
I0330 07:41:24.911] Mar 30 06:48:52.359: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0330 07:41:24.912] STEP: Destroying namespace "projected-2088" for this suite.
I0330 07:41:24.912] •{"msg":"PASSED [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance]","total":340,"completed":183,"skipped":2762,"failed":0}
I0330 07:41:24.912] SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
I0330 07:41:24.912] ------------------------------
I0330 07:41:24.912] [sig-node] Downward API 
I0330 07:41:24.913]   should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
I0330 07:41:24.913]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:24.913] [BeforeEach] [sig-node] Downward API
... skipping 8 lines ...
I0330 07:41:24.915] I0330 06:48:52.396482      19 reflector.go:219] Starting reflector *v1.ServiceAccount (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0330 07:41:24.915] I0330 06:48:52.396540      19 reflector.go:255] Listing and watching *v1.ServiceAccount from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0330 07:41:24.916] I0330 06:48:52.399158      19 reflector.go:225] Stopping reflector *v1.ServiceAccount (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0330 07:41:24.916] [It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
I0330 07:41:24.916]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:24.916] STEP: Creating a pod to test downward api env vars
I0330 07:41:24.917] Mar 30 06:48:52.404: INFO: Waiting up to 5m0s for pod "downward-api-3ca9ff53-f547-4325-b62e-52864604280a" in namespace "downward-api-4402" to be "Succeeded or Failed"
I0330 07:41:24.917] Mar 30 06:48:52.407: INFO: Pod "downward-api-3ca9ff53-f547-4325-b62e-52864604280a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.507593ms
I0330 07:41:24.917] Mar 30 06:48:54.414: INFO: Pod "downward-api-3ca9ff53-f547-4325-b62e-52864604280a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.010019535s
I0330 07:41:24.917] STEP: Saw pod success
I0330 07:41:24.918] Mar 30 06:48:54.415: INFO: Pod "downward-api-3ca9ff53-f547-4325-b62e-52864604280a" satisfied condition "Succeeded or Failed"
I0330 07:41:24.918] Mar 30 06:48:54.417: INFO: Trying to get logs from node kind-worker pod downward-api-3ca9ff53-f547-4325-b62e-52864604280a container dapi-container: <nil>
I0330 07:41:24.918] STEP: delete the pod
I0330 07:41:24.918] Mar 30 06:48:54.441: INFO: Waiting for pod downward-api-3ca9ff53-f547-4325-b62e-52864604280a to disappear
I0330 07:41:24.918] Mar 30 06:48:54.444: INFO: Pod downward-api-3ca9ff53-f547-4325-b62e-52864604280a no longer exists
I0330 07:41:24.918] [AfterEach] [sig-node] Downward API
I0330 07:41:24.918]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
I0330 07:41:24.919] Mar 30 06:48:54.444: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0330 07:41:24.919] STEP: Destroying namespace "downward-api-4402" for this suite.
I0330 07:41:24.919] •{"msg":"PASSED [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]","total":340,"completed":184,"skipped":2794,"failed":0}
I0330 07:41:24.919] SSSSSSS
I0330 07:41:24.919] ------------------------------
I0330 07:41:24.919] [sig-apps] Daemon set [Serial] 
I0330 07:41:24.919]   should list all daemon and delete a collection of daemons with a label selector [Conformance]
I0330 07:41:24.919]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:24.920] [BeforeEach] [sig-apps] Daemon set [Serial]
... skipping 15 lines ...
I0330 07:41:24.923] STEP: Creating a DaemonSet "daemon-set"
I0330 07:41:24.923] STEP: List all Daemonsets
I0330 07:41:24.924] STEP: Delete a collection of Daemonsets with a label selector
I0330 07:41:24.924] STEP: Wait for the daemon set to be completely deleted
I0330 07:41:24.924] I0330 06:49:39.033682      19 reflector.go:530] k8s.io/kubernetes/test/e2e/node/taints.go:146: Watch close - *v1.Pod total 8 items received
I0330 07:41:24.924] I0330 06:52:09.787060      19 reflector.go:530] k8s.io/kubernetes/test/e2e/node/taints.go:146: Watch close - *v1.Pod total 16 items received
I0330 07:41:24.924] Mar 30 06:53:54.513: FAIL: error waiting for the daemon set to be completely deleted
I0330 07:41:24.925] Unexpected error:
I0330 07:41:24.925]     <*errors.errorString | 0xc000240240>: {
I0330 07:41:24.925]         s: "timed out waiting for the condition",
I0330 07:41:24.925]     }
I0330 07:41:24.925]     timed out waiting for the condition
I0330 07:41:24.925] occurred
I0330 07:41:24.925] 
... skipping 30 lines ...
I0330 07:41:24.931] STEP: Collecting events from namespace "daemonsets-6734".
I0330 07:41:24.931] STEP: Found 13 events.
I0330 07:41:24.931] Mar 30 06:54:03.298: INFO: At 2021-03-30 06:48:54 +0000 UTC - event for daemon-set: {daemonset-controller } SuccessfulCreate: Created pod: daemon-set-vb459
I0330 07:41:24.932] Mar 30 06:54:03.298: INFO: At 2021-03-30 06:48:54 +0000 UTC - event for daemon-set: {daemonset-controller } SuccessfulCreate: Created pod: daemon-set-6gtt7
I0330 07:41:24.932] Mar 30 06:54:03.298: INFO: At 2021-03-30 06:48:54 +0000 UTC - event for daemon-set-6gtt7: {default-scheduler } Scheduled: Successfully assigned daemonsets-6734/daemon-set-6gtt7 to kind-worker2
I0330 07:41:24.932] Mar 30 06:54:03.298: INFO: At 2021-03-30 06:48:54 +0000 UTC - event for daemon-set-vb459: {default-scheduler } Scheduled: Successfully assigned daemonsets-6734/daemon-set-vb459 to kind-worker
I0330 07:41:24.932] Mar 30 06:54:03.298: INFO: At 2021-03-30 06:48:55 +0000 UTC - event for daemon-set-6gtt7: {kubelet kind-worker2} FailedMount: MountVolume.SetUp failed for volume "kube-api-access-58r7k" : failed to sync configmap cache: timed out waiting for the condition
I0330 07:41:24.932] Mar 30 06:54:03.298: INFO: At 2021-03-30 06:48:55 +0000 UTC - event for daemon-set-vb459: {kubelet kind-worker} Started: Started container app
I0330 07:41:24.933] Mar 30 06:54:03.298: INFO: At 2021-03-30 06:48:55 +0000 UTC - event for daemon-set-vb459: {kubelet kind-worker} Pulled: Container image "k8s.gcr.io/e2e-test-images/httpd:2.4.38-1" already present on machine
I0330 07:41:24.933] Mar 30 06:54:03.298: INFO: At 2021-03-30 06:48:55 +0000 UTC - event for daemon-set-vb459: {kubelet kind-worker} Created: Created container app
I0330 07:41:24.933] Mar 30 06:54:03.298: INFO: At 2021-03-30 06:48:56 +0000 UTC - event for daemon-set-6gtt7: {kubelet kind-worker2} Started: Started container app
I0330 07:41:24.933] Mar 30 06:54:03.298: INFO: At 2021-03-30 06:48:56 +0000 UTC - event for daemon-set-6gtt7: {kubelet kind-worker2} Created: Created container app
I0330 07:41:24.933] Mar 30 06:54:03.298: INFO: At 2021-03-30 06:48:56 +0000 UTC - event for daemon-set-6gtt7: {kubelet kind-worker2} Pulled: Container image "k8s.gcr.io/e2e-test-images/httpd:2.4.38-1" already present on machine
... skipping 65 lines ...
I0330 07:41:24.956] • Failure [309.207 seconds]
I0330 07:41:24.956] [sig-apps] Daemon set [Serial]
I0330 07:41:24.956] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
I0330 07:41:24.956]   should list all daemon and delete a collection of daemons with a label selector [Conformance] [It]
I0330 07:41:24.956]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:24.956] 
I0330 07:41:24.956]   Mar 30 06:53:54.513: error waiting for the daemon set to be completely deleted
I0330 07:41:24.956]   Unexpected error:
I0330 07:41:24.957]       <*errors.errorString | 0xc000240240>: {
I0330 07:41:24.957]           s: "timed out waiting for the condition",
I0330 07:41:24.957]       }
I0330 07:41:24.957]       timed out waiting for the condition
I0330 07:41:24.957]   occurred
I0330 07:41:24.957] 
I0330 07:41:24.957]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:201
I0330 07:41:24.957] ------------------------------
I0330 07:41:24.958] {"msg":"FAILED [sig-apps] Daemon set [Serial] should list all daemon and delete a collection of daemons with a label selector [Conformance]","total":340,"completed":184,"skipped":2801,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should list all daemon and delete a collection of daemons with a label selector [Conformance]"]}
I0330 07:41:24.958] SSSSS
I0330 07:41:24.958] ------------------------------
I0330 07:41:24.958] [sig-node] Pods 
I0330 07:41:24.958]   should be updated [NodeConformance] [Conformance]
I0330 07:41:24.958]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:24.958] [BeforeEach] [sig-node] Pods
... skipping 22 lines ...
I0330 07:41:24.961] STEP: verifying the updated pod is in kubernetes
I0330 07:41:24.961] Mar 30 06:54:06.232: INFO: Pod update OK
I0330 07:41:24.961] [AfterEach] [sig-node] Pods
I0330 07:41:24.961]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
I0330 07:41:24.962] Mar 30 06:54:06.232: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0330 07:41:24.962] STEP: Destroying namespace "pods-800" for this suite.
I0330 07:41:24.962] •{"msg":"PASSED [sig-node] Pods should be updated [NodeConformance] [Conformance]","total":340,"completed":185,"skipped":2806,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should list all daemon and delete a collection of daemons with a label selector [Conformance]"]}
I0330 07:41:24.962] SSSSS
I0330 07:41:24.962] ------------------------------
I0330 07:41:24.962] [sig-node] Security Context when creating containers with AllowPrivilegeEscalation 
I0330 07:41:24.963]   should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]
I0330 07:41:24.963]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:24.963] [BeforeEach] [sig-node] Security Context
... skipping 9 lines ...
I0330 07:41:24.964] I0330 06:54:06.269782      19 reflector.go:255] Listing and watching *v1.ServiceAccount from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0330 07:41:24.964] [BeforeEach] [sig-node] Security Context
I0330 07:41:24.964]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/node/security_context.go:46
I0330 07:41:24.965] I0330 06:54:06.272801      19 reflector.go:225] Stopping reflector *v1.ServiceAccount (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0330 07:41:24.965] [It] should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]
I0330 07:41:24.965]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:24.965] Mar 30 06:54:06.279: INFO: Waiting up to 5m0s for pod "alpine-nnp-false-0d6ed8ee-d93a-4889-9e54-629ea2cd3efb" in namespace "security-context-test-646" to be "Succeeded or Failed"
I0330 07:41:24.965] Mar 30 06:54:06.281: INFO: Pod "alpine-nnp-false-0d6ed8ee-d93a-4889-9e54-629ea2cd3efb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.484447ms
I0330 07:41:24.965] Mar 30 06:54:08.286: INFO: Pod "alpine-nnp-false-0d6ed8ee-d93a-4889-9e54-629ea2cd3efb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.007426144s
I0330 07:41:24.965] Mar 30 06:54:08.286: INFO: Pod "alpine-nnp-false-0d6ed8ee-d93a-4889-9e54-629ea2cd3efb" satisfied condition "Succeeded or Failed"
I0330 07:41:24.966] [AfterEach] [sig-node] Security Context
I0330 07:41:24.966]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
I0330 07:41:24.966] Mar 30 06:54:08.291: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0330 07:41:24.966] STEP: Destroying namespace "security-context-test-646" for this suite.
I0330 07:41:24.966] •{"msg":"PASSED [sig-node] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]","total":340,"completed":186,"skipped":2811,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should list all daemon and delete a collection of daemons with a label selector [Conformance]"]}
I0330 07:41:24.966] SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
I0330 07:41:24.967] ------------------------------
I0330 07:41:24.967] [sig-instrumentation] Events API 
I0330 07:41:24.967]   should delete a collection of events [Conformance]
I0330 07:41:24.967]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:24.967] [BeforeEach] [sig-instrumentation] Events API
... skipping 18 lines ...
I0330 07:41:24.969] Mar 30 06:54:08.340: INFO: requesting DeleteCollection of events
I0330 07:41:24.969] STEP: check that the list of events matches the requested quantity
I0330 07:41:24.970] [AfterEach] [sig-instrumentation] Events API
I0330 07:41:24.970]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
I0330 07:41:24.970] Mar 30 06:54:08.353: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0330 07:41:24.970] STEP: Destroying namespace "events-3528" for this suite.
I0330 07:41:24.970] •{"msg":"PASSED [sig-instrumentation] Events API should delete a collection of events [Conformance]","total":340,"completed":187,"skipped":2874,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should list all daemon and delete a collection of daemons with a label selector [Conformance]"]}
I0330 07:41:24.970] S
I0330 07:41:24.970] ------------------------------
I0330 07:41:24.970] [sig-storage] EmptyDir volumes 
I0330 07:41:24.971]   should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
I0330 07:41:24.971]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:24.971] [BeforeEach] [sig-storage] EmptyDir volumes
... skipping 8 lines ...
I0330 07:41:24.972] I0330 06:54:08.383691      19 reflector.go:219] Starting reflector *v1.ServiceAccount (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0330 07:41:24.972] I0330 06:54:08.383716      19 reflector.go:255] Listing and watching *v1.ServiceAccount from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0330 07:41:24.973] I0330 06:54:08.386383      19 reflector.go:225] Stopping reflector *v1.ServiceAccount (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0330 07:41:24.973] [It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
I0330 07:41:24.973]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:24.973] STEP: Creating a pod to test emptydir 0644 on node default medium
I0330 07:41:24.973] Mar 30 06:54:08.392: INFO: Waiting up to 5m0s for pod "pod-a58ef72b-2832-4f31-b0ff-ab46a98b8fa7" in namespace "emptydir-2161" to be "Succeeded or Failed"
I0330 07:41:24.973] Mar 30 06:54:08.394: INFO: Pod "pod-a58ef72b-2832-4f31-b0ff-ab46a98b8fa7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.164549ms
I0330 07:41:24.973] Mar 30 06:54:10.401: INFO: Pod "pod-a58ef72b-2832-4f31-b0ff-ab46a98b8fa7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.008925975s
I0330 07:41:24.973] STEP: Saw pod success
I0330 07:41:24.974] Mar 30 06:54:10.401: INFO: Pod "pod-a58ef72b-2832-4f31-b0ff-ab46a98b8fa7" satisfied condition "Succeeded or Failed"
I0330 07:41:24.974] Mar 30 06:54:10.403: INFO: Trying to get logs from node kind-worker2 pod pod-a58ef72b-2832-4f31-b0ff-ab46a98b8fa7 container test-container: <nil>
I0330 07:41:24.974] STEP: delete the pod
I0330 07:41:24.974] Mar 30 06:54:10.421: INFO: Waiting for pod pod-a58ef72b-2832-4f31-b0ff-ab46a98b8fa7 to disappear
I0330 07:41:24.974] Mar 30 06:54:10.424: INFO: Pod pod-a58ef72b-2832-4f31-b0ff-ab46a98b8fa7 no longer exists
I0330 07:41:24.974] [AfterEach] [sig-storage] EmptyDir volumes
I0330 07:41:24.974]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
I0330 07:41:24.974] Mar 30 06:54:10.424: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0330 07:41:24.974] STEP: Destroying namespace "emptydir-2161" for this suite.
I0330 07:41:24.975] •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":340,"completed":188,"skipped":2875,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should list all daemon and delete a collection of daemons with a label selector [Conformance]"]}
I0330 07:41:24.975] SSSSSSSSSSSSSSSSSS
I0330 07:41:24.975] ------------------------------
I0330 07:41:24.975] [sig-network] Services 
I0330 07:41:24.975]   should be able to change the type from ExternalName to ClusterIP [Conformance]
I0330 07:41:24.975]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:24.975] [BeforeEach] [sig-network] Services
... skipping 47 lines ...
I0330 07:41:24.983] • [SLOW TEST:7.936 seconds]
I0330 07:41:24.983] [sig-network] Services
I0330 07:41:24.983] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
I0330 07:41:24.983]   should be able to change the type from ExternalName to ClusterIP [Conformance]
I0330 07:41:24.984]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:24.984] ------------------------------
I0330 07:41:24.984] {"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","total":340,"completed":189,"skipped":2893,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should list all daemon and delete a collection of daemons with a label selector [Conformance]"]}
I0330 07:41:24.984] SS
I0330 07:41:24.984] ------------------------------
I0330 07:41:24.984] [sig-network] Services 
I0330 07:41:24.984]   should complete a service status lifecycle [Conformance]
I0330 07:41:24.984]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:24.984] [BeforeEach] [sig-network] Services
... skipping 63 lines ...
I0330 07:41:24.997] Mar 30 06:54:18.473: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0330 07:41:24.998] I0330 06:54:18.474022      19 retrywatcher.go:147] "Stopping RetryWatcher."
I0330 07:41:24.998] I0330 06:54:18.474068      19 retrywatcher.go:275] Stopping RetryWatcher.
I0330 07:41:24.998] STEP: Destroying namespace "services-1007" for this suite.
I0330 07:41:24.998] [AfterEach] [sig-network] Services
I0330 07:41:24.998]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750
I0330 07:41:24.999] •{"msg":"PASSED [sig-network] Services should complete a service status lifecycle [Conformance]","total":340,"completed":190,"skipped":2895,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should list all daemon and delete a collection of daemons with a label selector [Conformance]"]}
I0330 07:41:24.999] SSS
I0330 07:41:24.999] ------------------------------
I0330 07:41:24.999] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition 
I0330 07:41:25.000]   listing custom resource definition objects works  [Conformance]
I0330 07:41:25.000]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:25.000] [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
... skipping 21 lines ...
I0330 07:41:25.004] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
I0330 07:41:25.004]   Simple CustomResourceDefinition
I0330 07:41:25.004]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:48
I0330 07:41:25.004]     listing custom resource definition objects works  [Conformance]
I0330 07:41:25.004]     /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:25.004] ------------------------------
I0330 07:41:25.005] {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works  [Conformance]","total":340,"completed":191,"skipped":2898,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should list all daemon and delete a collection of daemons with a label selector [Conformance]"]}
I0330 07:41:25.005] SSS
I0330 07:41:25.005] ------------------------------
I0330 07:41:25.005] [sig-apps] Daemon set [Serial] 
I0330 07:41:25.005]   should rollback without unnecessary restarts [Conformance]
I0330 07:41:25.005]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:25.005] [BeforeEach] [sig-apps] Daemon set [Serial]
... skipping 71 lines ...
I0330 07:41:25.017] • [SLOW TEST:28.821 seconds]
I0330 07:41:25.017] [sig-apps] Daemon set [Serial]
I0330 07:41:25.018] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
I0330 07:41:25.018]   should rollback without unnecessary restarts [Conformance]
I0330 07:41:25.018]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:25.018] ------------------------------
I0330 07:41:25.018] {"msg":"PASSED [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]","total":340,"completed":192,"skipped":2901,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should list all daemon and delete a collection of daemons with a label selector [Conformance]"]}
I0330 07:41:25.018] SSSSS
I0330 07:41:25.018] ------------------------------
I0330 07:41:25.018] [sig-storage] Downward API volume 
I0330 07:41:25.018]   should provide container's cpu limit [NodeConformance] [Conformance]
I0330 07:41:25.019]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:25.019] [BeforeEach] [sig-storage] Downward API volume
... skipping 10 lines ...
I0330 07:41:25.021] [BeforeEach] [sig-storage] Downward API volume
I0330 07:41:25.021]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41
I0330 07:41:25.021] [It] should provide container's cpu limit [NodeConformance] [Conformance]
I0330 07:41:25.021]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:25.021] I0330 06:54:53.572865      19 reflector.go:225] Stopping reflector *v1.ServiceAccount (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0330 07:41:25.022] STEP: Creating a pod to test downward API volume plugin
I0330 07:41:25.022] Mar 30 06:54:53.579: INFO: Waiting up to 5m0s for pod "downwardapi-volume-50bae099-3d4d-497f-a19e-3584b53795fe" in namespace "downward-api-9583" to be "Succeeded or Failed"
I0330 07:41:25.022] Mar 30 06:54:53.583: INFO: Pod "downwardapi-volume-50bae099-3d4d-497f-a19e-3584b53795fe": Phase="Pending", Reason="", readiness=false. Elapsed: 3.778755ms
I0330 07:41:25.022] Mar 30 06:54:55.588: INFO: Pod "downwardapi-volume-50bae099-3d4d-497f-a19e-3584b53795fe": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.008788359s
I0330 07:41:25.022] STEP: Saw pod success
I0330 07:41:25.022] Mar 30 06:54:55.588: INFO: Pod "downwardapi-volume-50bae099-3d4d-497f-a19e-3584b53795fe" satisfied condition "Succeeded or Failed"
I0330 07:41:25.023] Mar 30 06:54:55.591: INFO: Trying to get logs from node kind-worker2 pod downwardapi-volume-50bae099-3d4d-497f-a19e-3584b53795fe container client-container: <nil>
I0330 07:41:25.023] STEP: delete the pod
I0330 07:41:25.023] Mar 30 06:54:55.606: INFO: Waiting for pod downwardapi-volume-50bae099-3d4d-497f-a19e-3584b53795fe to disappear
I0330 07:41:25.023] Mar 30 06:54:55.608: INFO: Pod downwardapi-volume-50bae099-3d4d-497f-a19e-3584b53795fe no longer exists
I0330 07:41:25.023] [AfterEach] [sig-storage] Downward API volume
I0330 07:41:25.023]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
I0330 07:41:25.023] Mar 30 06:54:55.608: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0330 07:41:25.023] STEP: Destroying namespace "downward-api-9583" for this suite.
I0330 07:41:25.024] •{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance]","total":340,"completed":193,"skipped":2906,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should list all daemon and delete a collection of daemons with a label selector [Conformance]"]}
I0330 07:41:25.024] SSS
I0330 07:41:25.024] ------------------------------
I0330 07:41:25.024] [sig-node] Docker Containers 
I0330 07:41:25.024]   should be able to override the image's default command and arguments [NodeConformance] [Conformance]
I0330 07:41:25.024]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:25.024] [BeforeEach] [sig-node] Docker Containers
... skipping 8 lines ...
I0330 07:41:25.026] I0330 06:54:55.643191      19 reflector.go:219] Starting reflector *v1.ServiceAccount (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0330 07:41:25.026] I0330 06:54:55.643204      19 reflector.go:255] Listing and watching *v1.ServiceAccount from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0330 07:41:25.026] [It] should be able to override the image's default command and arguments [NodeConformance] [Conformance]
I0330 07:41:25.026]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:25.026] STEP: Creating a pod to test override all
I0330 07:41:25.026] I0330 06:54:55.645511      19 reflector.go:225] Stopping reflector *v1.ServiceAccount (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0330 07:41:25.026] Mar 30 06:54:55.651: INFO: Waiting up to 5m0s for pod "client-containers-1b6796c4-a6f8-4155-8f7b-c77ce23ca52c" in namespace "containers-3874" to be "Succeeded or Failed"
I0330 07:41:25.027] Mar 30 06:54:55.654: INFO: Pod "client-containers-1b6796c4-a6f8-4155-8f7b-c77ce23ca52c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.511522ms
I0330 07:41:25.027] Mar 30 06:54:57.658: INFO: Pod "client-containers-1b6796c4-a6f8-4155-8f7b-c77ce23ca52c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.007039764s
I0330 07:41:25.027] STEP: Saw pod success
I0330 07:41:25.027] Mar 30 06:54:57.658: INFO: Pod "client-containers-1b6796c4-a6f8-4155-8f7b-c77ce23ca52c" satisfied condition "Succeeded or Failed"
I0330 07:41:25.027] Mar 30 06:54:57.661: INFO: Trying to get logs from node kind-worker2 pod client-containers-1b6796c4-a6f8-4155-8f7b-c77ce23ca52c container agnhost-container: <nil>
I0330 07:41:25.027] STEP: delete the pod
I0330 07:41:25.028] Mar 30 06:54:57.678: INFO: Waiting for pod client-containers-1b6796c4-a6f8-4155-8f7b-c77ce23ca52c to disappear
I0330 07:41:25.028] Mar 30 06:54:57.680: INFO: Pod client-containers-1b6796c4-a6f8-4155-8f7b-c77ce23ca52c no longer exists
I0330 07:41:25.028] [AfterEach] [sig-node] Docker Containers
I0330 07:41:25.028]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
I0330 07:41:25.028] Mar 30 06:54:57.680: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0330 07:41:25.028] STEP: Destroying namespace "containers-3874" for this suite.
I0330 07:41:25.028] •{"msg":"PASSED [sig-node] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance]","total":340,"completed":194,"skipped":2909,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should list all daemon and delete a collection of daemons with a label selector [Conformance]"]}
I0330 07:41:25.029] SSSS
I0330 07:41:25.029] ------------------------------
I0330 07:41:25.029] [sig-node] Container Lifecycle Hook when create a pod with lifecycle hook 
I0330 07:41:25.029]   should execute prestop exec hook properly [NodeConformance] [Conformance]
I0330 07:41:25.029]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:25.029] [BeforeEach] [sig-node] Container Lifecycle Hook
... skipping 44 lines ...
I0330 07:41:25.039] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
I0330 07:41:25.039]   when create a pod with lifecycle hook
I0330 07:41:25.039]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/node/lifecycle_hook.go:43
I0330 07:41:25.039]     should execute prestop exec hook properly [NodeConformance] [Conformance]
I0330 07:41:25.039]     /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:25.039] ------------------------------
I0330 07:41:25.040] {"msg":"PASSED [sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]","total":340,"completed":195,"skipped":2913,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should list all daemon and delete a collection of daemons with a label selector [Conformance]"]}
I0330 07:41:25.040] SSSSSSSSSSSS
I0330 07:41:25.040] ------------------------------
I0330 07:41:25.041] [sig-network] Services 
I0330 07:41:25.041]   should be able to change the type from NodePort to ExternalName [Conformance]
I0330 07:41:25.041]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:25.041] [BeforeEach] [sig-network] Services
... skipping 45 lines ...
I0330 07:41:25.050] • [SLOW TEST:19.766 seconds]
I0330 07:41:25.051] [sig-network] Services
I0330 07:41:25.051] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
I0330 07:41:25.051]   should be able to change the type from NodePort to ExternalName [Conformance]
I0330 07:41:25.051]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:25.051] ------------------------------
I0330 07:41:25.052] {"msg":"PASSED [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance]","total":340,"completed":196,"skipped":2925,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should list all daemon and delete a collection of daemons with a label selector [Conformance]"]}
I0330 07:41:25.052] SSSSSSSSSSSSSSSSSSSSS
I0330 07:41:25.052] ------------------------------
I0330 07:41:25.052] [sig-api-machinery] Namespaces [Serial] 
I0330 07:41:25.052]   should ensure that all services are removed when a namespace is deleted [Conformance]
I0330 07:41:25.052]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:25.053] [BeforeEach] [sig-api-machinery] Namespaces [Serial]
... skipping 37 lines ...
I0330 07:41:25.058] • [SLOW TEST:6.151 seconds]
I0330 07:41:25.058] [sig-api-machinery] Namespaces [Serial]
I0330 07:41:25.058] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
I0330 07:41:25.058]   should ensure that all services are removed when a namespace is deleted [Conformance]
I0330 07:41:25.058]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:25.059] ------------------------------
I0330 07:41:25.059] {"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]","total":340,"completed":197,"skipped":2946,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should list all daemon and delete a collection of daemons with a label selector [Conformance]"]}
I0330 07:41:25.059] SSSSSSSSSSSSSSSSSSSSSSSSSS
I0330 07:41:25.059] ------------------------------
I0330 07:41:25.059] [sig-node] Pods 
I0330 07:41:25.060]   should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
I0330 07:41:25.060]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:25.060] [BeforeEach] [sig-node] Pods
... skipping 18 lines ...
I0330 07:41:25.063] Mar 30 06:55:39.747: INFO: The status of Pod pod-logs-websocket-dbd56a73-028f-4123-85d5-e923c2b8f8a9 is Pending, waiting for it to be Running (with Ready = true)
I0330 07:41:25.063] Mar 30 06:55:41.751: INFO: The status of Pod pod-logs-websocket-dbd56a73-028f-4123-85d5-e923c2b8f8a9 is Running (Ready = true)
I0330 07:41:25.064] [AfterEach] [sig-node] Pods
I0330 07:41:25.064]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
I0330 07:41:25.064] Mar 30 06:55:41.764: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0330 07:41:25.064] STEP: Destroying namespace "pods-5542" for this suite.
I0330 07:41:25.064] •{"msg":"PASSED [sig-node] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]","total":340,"completed":198,"skipped":2972,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should list all daemon and delete a collection of daemons with a label selector [Conformance]"]}
I0330 07:41:25.064] SSSSSSSSSSSSSSSSSSSSSSSSSS
I0330 07:41:25.064] ------------------------------
I0330 07:41:25.065] [sig-storage] EmptyDir volumes 
I0330 07:41:25.065]   should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
I0330 07:41:25.065]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:25.065] [BeforeEach] [sig-storage] EmptyDir volumes
... skipping 8 lines ...
I0330 07:41:25.066] I0330 06:55:41.797660      19 reflector.go:219] Starting reflector *v1.ServiceAccount (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0330 07:41:25.066] I0330 06:55:41.797679      19 reflector.go:255] Listing and watching *v1.ServiceAccount from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0330 07:41:25.067] [It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
I0330 07:41:25.067]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:25.067] STEP: Creating a pod to test emptydir 0777 on node default medium
I0330 07:41:25.067] I0330 06:55:41.799996      19 reflector.go:225] Stopping reflector *v1.ServiceAccount (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0330 07:41:25.067] Mar 30 06:55:41.806: INFO: Waiting up to 5m0s for pod "pod-811f98b7-2307-4f53-9fc9-45172baed5e1" in namespace "emptydir-3931" to be "Succeeded or Failed"
I0330 07:41:25.067] Mar 30 06:55:41.809: INFO: Pod "pod-811f98b7-2307-4f53-9fc9-45172baed5e1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.680941ms
I0330 07:41:25.068] Mar 30 06:55:43.814: INFO: Pod "pod-811f98b7-2307-4f53-9fc9-45172baed5e1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.007944568s
I0330 07:41:25.068] STEP: Saw pod success
I0330 07:41:25.068] Mar 30 06:55:43.814: INFO: Pod "pod-811f98b7-2307-4f53-9fc9-45172baed5e1" satisfied condition "Succeeded or Failed"
I0330 07:41:25.068] Mar 30 06:55:43.817: INFO: Trying to get logs from node kind-worker pod pod-811f98b7-2307-4f53-9fc9-45172baed5e1 container test-container: <nil>
I0330 07:41:25.068] STEP: delete the pod
I0330 07:41:25.068] Mar 30 06:55:43.840: INFO: Waiting for pod pod-811f98b7-2307-4f53-9fc9-45172baed5e1 to disappear
I0330 07:41:25.068] Mar 30 06:55:43.842: INFO: Pod pod-811f98b7-2307-4f53-9fc9-45172baed5e1 no longer exists
I0330 07:41:25.068] [AfterEach] [sig-storage] EmptyDir volumes
I0330 07:41:25.068]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
I0330 07:41:25.069] Mar 30 06:55:43.843: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0330 07:41:25.069] STEP: Destroying namespace "emptydir-3931" for this suite.
I0330 07:41:25.069] •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":340,"completed":199,"skipped":2998,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should list all daemon and delete a collection of daemons with a label selector [Conformance]"]}
I0330 07:41:25.069] SSSSSSSSSSSSSSS
I0330 07:41:25.069] ------------------------------
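An aside for readers of this log: the bullet lines such as `•{"msg":"PASSED ...","total":340,...}` are plain JSON progress records emitted after each spec, and they can be parsed to track suite progress. A minimal sketch (field names taken directly from the line above; the helper name is my own):

```python
import json

def parse_summary(line: str) -> dict:
    """Parse one ginkgo progress line: a JSON object carrying the
    spec result plus running totals for the whole suite."""
    record = json.loads(line)
    return {
        "status": record["msg"].split()[0],   # "PASSED" or "FAILED"
        "completed": record["completed"],
        "total": record["total"],
        "failed_specs": record.get("failures", []),
    }

line = ('{"msg":"PASSED [sig-storage] EmptyDir volumes should support '
        '(non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]",'
        '"total":340,"completed":199,"skipped":2998,"failed":1,'
        '"failures":["[sig-apps] Daemon set [Serial] should list all daemon '
        'and delete a collection of daemons with a label selector [Conformance]"]}')
summary = parse_summary(line)
print(summary["completed"], "/", summary["total"], summary["status"])
```

This is how the single earlier DaemonSet failure keeps reappearing in every later line: the `failures` array is cumulative for the run.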
I0330 07:41:25.069] [sig-node] ConfigMap 
I0330 07:41:25.069]   should fail to create ConfigMap with empty key [Conformance]
I0330 07:41:25.070]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:25.070] [BeforeEach] [sig-node] ConfigMap
I0330 07:41:25.070]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
I0330 07:41:25.070] STEP: Creating a kubernetes client
I0330 07:41:25.070] Mar 30 06:55:43.850: INFO: >>> kubeConfig: /tmp/kubeconfig-963336331
I0330 07:41:25.070] STEP: Building a namespace api object, basename configmap
I0330 07:41:25.070] I0330 06:55:43.858109      19 reflector.go:219] Starting reflector *v1.ServiceAccount (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0330 07:41:25.070] I0330 06:55:43.858163      19 reflector.go:255] Listing and watching *v1.ServiceAccount from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0330 07:41:25.070] STEP: Waiting for a default service account to be provisioned in namespace
I0330 07:41:25.071] I0330 06:55:43.880145      19 reflector.go:225] Stopping reflector *v1.ServiceAccount (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0330 07:41:25.071] I0330 06:55:43.880304      19 reflector.go:219] Starting reflector *v1.ServiceAccount (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0330 07:41:25.071] I0330 06:55:43.880324      19 reflector.go:255] Listing and watching *v1.ServiceAccount from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0330 07:41:25.071] [It] should fail to create ConfigMap with empty key [Conformance]
I0330 07:41:25.071]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:25.072] STEP: Creating configMap that has name configmap-test-emptyKey-89280181-1a72-4ce0-b27d-cdd8648dd40f
I0330 07:41:25.072] I0330 06:55:43.884014      19 reflector.go:225] Stopping reflector *v1.ServiceAccount (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0330 07:41:25.072] [AfterEach] [sig-node] ConfigMap
I0330 07:41:25.072]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
I0330 07:41:25.072] Mar 30 06:55:43.886: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0330 07:41:25.072] STEP: Destroying namespace "configmap-2392" for this suite.
I0330 07:41:25.073] •{"msg":"PASSED [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance]","total":340,"completed":200,"skipped":3013,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should list all daemon and delete a collection of daemons with a label selector [Conformance]"]}
I0330 07:41:25.073] SSSSSSSSSSS
I0330 07:41:25.073] ------------------------------
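A note on the spec above: the API server rejects a ConfigMap whose data map contains an empty key, which is exactly what this test provokes. A rough client-side sketch of the key rule (non-empty, at most 253 characters, charset `[-._a-zA-Z0-9]`, and not `.` or `..`) — an approximation of the server-side validation, not the authoritative implementation:

```python
import re

# Approximate mirror of the API server's ConfigMap key validation.
_KEY_RE = re.compile(r'^[-._a-zA-Z0-9]+$')

def is_valid_configmap_key(key: str) -> bool:
    if not key or len(key) > 253:
        return False
    if key in (".", ".."):          # reserved path-like names
        return False
    return _KEY_RE.match(key) is not None

print(is_valid_configmap_key(""))                 # the empty key the test submits
print(is_valid_configmap_key("game.properties"))  # a typical valid key
```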
I0330 07:41:25.073] [sig-network] Services 
I0330 07:41:25.073]   should test the lifecycle of an Endpoint [Conformance]
I0330 07:41:25.073]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:25.073] [BeforeEach] [sig-network] Services
... skipping 37 lines ...
I0330 07:41:25.080] [AfterEach] [sig-network] Services
I0330 07:41:25.080]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
I0330 07:41:25.080] Mar 30 06:55:43.959: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0330 07:41:25.080] STEP: Destroying namespace "services-5854" for this suite.
I0330 07:41:25.080] [AfterEach] [sig-network] Services
I0330 07:41:25.081]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750
I0330 07:41:25.081] •{"msg":"PASSED [sig-network] Services should test the lifecycle of an Endpoint [Conformance]","total":340,"completed":201,"skipped":3024,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should list all daemon and delete a collection of daemons with a label selector [Conformance]"]}
I0330 07:41:25.081] SSSSSSSSSSSSSS
I0330 07:41:25.081] ------------------------------
I0330 07:41:25.081] [sig-storage] Downward API volume 
I0330 07:41:25.082]   should provide container's memory limit [NodeConformance] [Conformance]
I0330 07:41:25.082]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:25.082] [BeforeEach] [sig-storage] Downward API volume
... skipping 10 lines ...
I0330 07:41:25.085] [BeforeEach] [sig-storage] Downward API volume
I0330 07:41:25.085]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41
I0330 07:41:25.086] [It] should provide container's memory limit [NodeConformance] [Conformance]
I0330 07:41:25.086]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:25.086] I0330 06:55:43.998957      19 reflector.go:225] Stopping reflector *v1.ServiceAccount (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0330 07:41:25.087] STEP: Creating a pod to test downward API volume plugin
I0330 07:41:25.087] Mar 30 06:55:44.005: INFO: Waiting up to 5m0s for pod "downwardapi-volume-23c6f79a-bf26-4bb2-8a89-60fe2b5e3856" in namespace "downward-api-3831" to be "Succeeded or Failed"
I0330 07:41:25.087] Mar 30 06:55:44.008: INFO: Pod "downwardapi-volume-23c6f79a-bf26-4bb2-8a89-60fe2b5e3856": Phase="Pending", Reason="", readiness=false. Elapsed: 3.028853ms
I0330 07:41:25.087] Mar 30 06:55:46.013: INFO: Pod "downwardapi-volume-23c6f79a-bf26-4bb2-8a89-60fe2b5e3856": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.008432424s
I0330 07:41:25.087] STEP: Saw pod success
I0330 07:41:25.088] Mar 30 06:55:46.013: INFO: Pod "downwardapi-volume-23c6f79a-bf26-4bb2-8a89-60fe2b5e3856" satisfied condition "Succeeded or Failed"
I0330 07:41:25.088] Mar 30 06:55:46.016: INFO: Trying to get logs from node kind-worker pod downwardapi-volume-23c6f79a-bf26-4bb2-8a89-60fe2b5e3856 container client-container: <nil>
I0330 07:41:25.088] STEP: delete the pod
I0330 07:41:25.088] Mar 30 06:55:46.033: INFO: Waiting for pod downwardapi-volume-23c6f79a-bf26-4bb2-8a89-60fe2b5e3856 to disappear
I0330 07:41:25.088] Mar 30 06:55:46.036: INFO: Pod downwardapi-volume-23c6f79a-bf26-4bb2-8a89-60fe2b5e3856 no longer exists
I0330 07:41:25.088] [AfterEach] [sig-storage] Downward API volume
I0330 07:41:25.088]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
I0330 07:41:25.089] Mar 30 06:55:46.036: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0330 07:41:25.089] STEP: Destroying namespace "downward-api-3831" for this suite.
I0330 07:41:25.089] •{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance]","total":340,"completed":202,"skipped":3038,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should list all daemon and delete a collection of daemons with a label selector [Conformance]"]}
I0330 07:41:25.089] SSS
I0330 07:41:25.090] ------------------------------
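The recurring `Waiting up to 5m0s ... Elapsed: ...` pattern in these specs is a poll-until-condition loop with a deadline. A simplified model of that loop (the timeout and interval defaults here are assumptions for illustration, not the framework's exact values):

```python
import time

def wait_for(condition, timeout=300.0, interval=2.0):
    """Poll `condition` until it returns a truthy value or `timeout`
    seconds pass; returns (result, elapsed) on success."""
    start = time.monotonic()
    while True:
        result = condition()
        elapsed = time.monotonic() - start
        if result:
            return result, elapsed
        if elapsed >= timeout:
            raise TimeoutError(f"condition not met after {elapsed:.1f}s")
        time.sleep(min(interval, timeout - elapsed))

# Toy condition that reports "Succeeded" on the third poll.
calls = {"n": 0}
def fake_phase():
    calls["n"] += 1
    return "Succeeded" if calls["n"] >= 3 else None

phase, took = wait_for(fake_phase, timeout=10.0, interval=0.01)
print(phase)
```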
I0330 07:41:25.090] [sig-storage] ConfigMap 
I0330 07:41:25.090]   should be consumable from pods in volume [NodeConformance] [Conformance]
I0330 07:41:25.090]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:25.090] [BeforeEach] [sig-storage] ConfigMap
... skipping 9 lines ...
I0330 07:41:25.092] I0330 06:55:46.074528      19 reflector.go:255] Listing and watching *v1.ServiceAccount from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0330 07:41:25.093] [It] should be consumable from pods in volume [NodeConformance] [Conformance]
I0330 07:41:25.093]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:25.093] STEP: Creating configMap with name configmap-test-volume-ddc945bf-b685-40f1-9ece-44661a425035
I0330 07:41:25.093] I0330 06:55:46.077520      19 reflector.go:225] Stopping reflector *v1.ServiceAccount (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0330 07:41:25.094] STEP: Creating a pod to test consume configMaps
I0330 07:41:25.094] Mar 30 06:55:46.087: INFO: Waiting up to 5m0s for pod "pod-configmaps-51f78e4f-0484-43b0-8e90-e09a4dd8e22e" in namespace "configmap-9896" to be "Succeeded or Failed"
I0330 07:41:25.094] Mar 30 06:55:46.090: INFO: Pod "pod-configmaps-51f78e4f-0484-43b0-8e90-e09a4dd8e22e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.797124ms
I0330 07:41:25.094] Mar 30 06:55:48.096: INFO: Pod "pod-configmaps-51f78e4f-0484-43b0-8e90-e09a4dd8e22e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.009411297s
I0330 07:41:25.094] STEP: Saw pod success
I0330 07:41:25.095] Mar 30 06:55:48.096: INFO: Pod "pod-configmaps-51f78e4f-0484-43b0-8e90-e09a4dd8e22e" satisfied condition "Succeeded or Failed"
I0330 07:41:25.095] Mar 30 06:55:48.099: INFO: Trying to get logs from node kind-worker pod pod-configmaps-51f78e4f-0484-43b0-8e90-e09a4dd8e22e container agnhost-container: <nil>
I0330 07:41:25.095] STEP: delete the pod
I0330 07:41:25.095] Mar 30 06:55:48.116: INFO: Waiting for pod pod-configmaps-51f78e4f-0484-43b0-8e90-e09a4dd8e22e to disappear
I0330 07:41:25.095] Mar 30 06:55:48.119: INFO: Pod pod-configmaps-51f78e4f-0484-43b0-8e90-e09a4dd8e22e no longer exists
I0330 07:41:25.096] [AfterEach] [sig-storage] ConfigMap
I0330 07:41:25.096]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
I0330 07:41:25.096] Mar 30 06:55:48.119: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0330 07:41:25.096] STEP: Destroying namespace "configmap-9896" for this suite.
I0330 07:41:25.097] •{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":340,"completed":203,"skipped":3041,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should list all daemon and delete a collection of daemons with a label selector [Conformance]"]}
I0330 07:41:25.097] SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
I0330 07:41:25.097] ------------------------------
I0330 07:41:25.097] [sig-node] Pods 
I0330 07:41:25.097]   should run through the lifecycle of Pods and PodStatus [Conformance]
I0330 07:41:25.097]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:25.097] [BeforeEach] [sig-node] Pods
... skipping 43 lines ...
I0330 07:41:25.105] [AfterEach] [sig-node] Pods
I0330 07:41:25.105]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
I0330 07:41:25.105] Mar 30 06:55:49.510: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0330 07:41:25.105] I0330 06:55:49.510755      19 retrywatcher.go:147] "Stopping RetryWatcher."
I0330 07:41:25.105] I0330 06:55:49.510941      19 retrywatcher.go:275] Stopping RetryWatcher.
I0330 07:41:25.105] STEP: Destroying namespace "pods-9975" for this suite.
I0330 07:41:25.105] •{"msg":"PASSED [sig-node] Pods should run through the lifecycle of Pods and PodStatus [Conformance]","total":340,"completed":204,"skipped":3095,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should list all daemon and delete a collection of daemons with a label selector [Conformance]"]}
I0330 07:41:25.105] SSSSSSSSSSSSSSSSSSSSSSSS
I0330 07:41:25.106] ------------------------------
I0330 07:41:25.106] [sig-api-machinery] Garbage collector 
I0330 07:41:25.106]   should delete RS created by deployment when not orphaning [Conformance]
I0330 07:41:25.106]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:25.106] [BeforeEach] [sig-api-machinery] Garbage collector
... skipping 15 lines ...
I0330 07:41:25.108] STEP: delete the deployment
I0330 07:41:25.109] STEP: wait for all rs to be garbage collected
I0330 07:41:25.109] STEP: expected 0 rs, got 1 rs
I0330 07:41:25.109] STEP: expected 0 pods, got 2 pods
I0330 07:41:25.109] STEP: Gathering metrics
I0330 07:41:25.109] W0330 06:55:50.602343      19 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled.
I0330 07:41:25.110] Mar 30 06:56:52.619: INFO: MetricsGrabber failed grab metrics. Skipping metrics gathering.
I0330 07:41:25.110] [AfterEach] [sig-api-machinery] Garbage collector
I0330 07:41:25.110]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
I0330 07:41:25.110] Mar 30 06:56:52.619: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0330 07:41:25.110] STEP: Destroying namespace "gc-1515" for this suite.
I0330 07:41:25.110] 
I0330 07:41:25.110] • [SLOW TEST:63.111 seconds]
I0330 07:41:25.111] [sig-api-machinery] Garbage collector
I0330 07:41:25.111] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
I0330 07:41:25.111]   should delete RS created by deployment when not orphaning [Conformance]
I0330 07:41:25.111]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:25.112] ------------------------------
I0330 07:41:25.112] {"msg":"PASSED [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance]","total":340,"completed":205,"skipped":3119,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should list all daemon and delete a collection of daemons with a label selector [Conformance]"]}
I0330 07:41:25.112] SSS
I0330 07:41:25.112] ------------------------------
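The `Elapsed:` values in this log (`2.680941ms`, `2.007944568s`, `5m0s`, ...) use Go's `time.Duration` string format. A small converter to seconds, handy when summing per-spec wait times out of a log like this one (helper name is my own):

```python
import re

_UNITS = {"h": 3600.0, "m": 60.0, "s": 1.0, "ms": 1e-3, "us": 1e-6, "ns": 1e-9}
# "ms"/"us"/"ns" must come before "m"/"s" so alternation picks the long unit.
_PART = re.compile(r'(\d+(?:\.\d+)?)(h|ms|us|ns|m|s)')

def go_duration_to_seconds(text: str) -> float:
    """Convert a Go duration string such as '5m0s' or '2.008432424s'
    to seconds, rejecting anything with unparsed segments."""
    total, pos = 0.0, 0
    for match in _PART.finditer(text):
        if match.start() != pos:
            raise ValueError(f"unparsed segment in {text!r}")
        total += float(match.group(1)) * _UNITS[match.group(2)]
        pos = match.end()
    if pos != len(text):
        raise ValueError(f"unparsed tail in {text!r}")
    return total

print(go_duration_to_seconds("5m0s"))  # → 300.0
```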
I0330 07:41:25.113] [sig-network] DNS 
I0330 07:41:25.113]   should provide DNS for the cluster  [Conformance]
I0330 07:41:25.113]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:25.113] [BeforeEach] [sig-network] DNS
... skipping 22 lines ...
I0330 07:41:25.118] 
I0330 07:41:25.118] STEP: deleting the pod
I0330 07:41:25.118] [AfterEach] [sig-network] DNS
I0330 07:41:25.119]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
I0330 07:41:25.119] Mar 30 06:56:54.713: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0330 07:41:25.119] STEP: Destroying namespace "dns-4464" for this suite.
I0330 07:41:25.119] •{"msg":"PASSED [sig-network] DNS should provide DNS for the cluster  [Conformance]","total":340,"completed":206,"skipped":3122,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should list all daemon and delete a collection of daemons with a label selector [Conformance]"]}
I0330 07:41:25.119] SSSSSSSSSSSSSSSS
I0330 07:41:25.120] ------------------------------
I0330 07:41:25.120] [sig-cli] Kubectl client Kubectl version 
I0330 07:41:25.120]   should check is all data is printed  [Conformance]
I0330 07:41:25.120]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:25.120] [BeforeEach] [sig-cli] Kubectl client
... skipping 16 lines ...
I0330 07:41:25.123] Mar 30 06:56:54.862: INFO: stderr: ""
I0330 07:41:25.123] Mar 30 06:56:54.862: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"22+\", GitVersion:\"v1.22.0-alpha.0.18+467457557005e0\", GitCommit:\"467457557005e0d59273852e287913b29b7c2c34\", GitTreeState:\"clean\", BuildDate:\"2021-03-30T01:03:56Z\", GoVersion:\"go1.16.1\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"22+\", GitVersion:\"v1.22.0-alpha.0.18+467457557005e0\", GitCommit:\"467457557005e0d59273852e287913b29b7c2c34\", GitTreeState:\"clean\", BuildDate:\"2021-03-30T01:03:56Z\", GoVersion:\"go1.16.1\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n"
I0330 07:41:25.123] [AfterEach] [sig-cli] Kubectl client
I0330 07:41:25.123]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
I0330 07:41:25.123] Mar 30 06:56:54.862: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0330 07:41:25.124] STEP: Destroying namespace "kubectl-1617" for this suite.
I0330 07:41:25.124] •{"msg":"PASSED [sig-cli] Kubectl client Kubectl version should check is all data is printed  [Conformance]","total":340,"completed":207,"skipped":3138,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should list all daemon and delete a collection of daemons with a label selector [Conformance]"]}
I0330 07:41:25.124] SSSSSSSSSSSS
I0330 07:41:25.124] ------------------------------
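The `kubectl version` spec above prints both client and server as Go struct literals (`version.Info{...}`). A quick way to lift the `GitVersion` fields out of that stdout and confirm client and server match, as they do in this run (the stdout sample below is abbreviated from the log line):

```python
import re

def extract_git_versions(stdout: str) -> list:
    """Pull every GitVersion field out of `kubectl version` output
    when it is printed as Go struct literals, as seen in this log."""
    return re.findall(r'GitVersion:"([^"]+)"', stdout)

stdout = ('Client Version: version.Info{Major:"1", Minor:"22+", '
          'GitVersion:"v1.22.0-alpha.0.18+467457557005e0"}\n'
          'Server Version: version.Info{Major:"1", Minor:"22+", '
          'GitVersion:"v1.22.0-alpha.0.18+467457557005e0"}\n')
versions = extract_git_versions(stdout)
print(versions)
```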
I0330 07:41:25.124] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
I0330 07:41:25.124]   should deny crd creation [Conformance]
I0330 07:41:25.124]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:25.124] [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
... skipping 27 lines ...
I0330 07:41:25.128]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
I0330 07:41:25.128] Mar 30 06:56:58.643: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0330 07:41:25.128] STEP: Destroying namespace "webhook-9257" for this suite.
I0330 07:41:25.128] STEP: Destroying namespace "webhook-9257-markers" for this suite.
I0330 07:41:25.129] [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
I0330 07:41:25.129]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
I0330 07:41:25.129] •{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","total":340,"completed":208,"skipped":3150,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should list all daemon and delete a collection of daemons with a label selector [Conformance]"]}
I0330 07:41:25.129] SSSS
I0330 07:41:25.129] ------------------------------
I0330 07:41:25.129] [sig-apps] Job 
I0330 07:41:25.129]   should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]
I0330 07:41:25.130]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:25.130] [BeforeEach] [sig-apps] Job
I0330 07:41:25.130]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
I0330 07:41:25.130] STEP: Creating a kubernetes client
I0330 07:41:25.130] Mar 30 06:56:58.682: INFO: >>> kubeConfig: /tmp/kubeconfig-963336331
I0330 07:41:25.130] STEP: Building a namespace api object, basename job
I0330 07:41:25.130] I0330 06:56:58.696808      19 reflector.go:219] Starting reflector *v1.ServiceAccount (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0330 07:41:25.130] I0330 06:56:58.696976      19 reflector.go:255] Listing and watching *v1.ServiceAccount from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0330 07:41:25.131] STEP: Waiting for a default service account to be provisioned in namespace
I0330 07:41:25.131] I0330 06:56:58.722775      19 reflector.go:225] Stopping reflector *v1.ServiceAccount (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0330 07:41:25.131] I0330 06:56:58.722958      19 reflector.go:219] Starting reflector *v1.ServiceAccount (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0330 07:41:25.131] I0330 06:56:58.722975      19 reflector.go:255] Listing and watching *v1.ServiceAccount from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0330 07:41:25.132] I0330 06:56:58.726308      19 reflector.go:225] Stopping reflector *v1.ServiceAccount (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0330 07:41:25.132] [It] should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]
I0330 07:41:25.132]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:25.132] STEP: Creating a job
I0330 07:41:25.132] STEP: Ensuring job reaches completions
I0330 07:41:25.132] [AfterEach] [sig-apps] Job
I0330 07:41:25.132]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
I0330 07:41:25.132] Mar 30 06:57:04.737: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0330 07:41:25.132] STEP: Destroying namespace "job-6769" for this suite.
I0330 07:41:25.132] 
I0330 07:41:25.133] • [SLOW TEST:6.063 seconds]
I0330 07:41:25.133] [sig-apps] Job
I0330 07:41:25.133] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
I0330 07:41:25.133]   should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]
I0330 07:41:25.133]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:25.133] ------------------------------
I0330 07:41:25.133] {"msg":"PASSED [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]","total":340,"completed":209,"skipped":3154,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should list all daemon and delete a collection of daemons with a label selector [Conformance]"]}
I0330 07:41:25.133] SSSSSS
I0330 07:41:25.134] ------------------------------
I0330 07:41:25.134] [sig-storage] Projected configMap 
I0330 07:41:25.134]   should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
I0330 07:41:25.134]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:25.134] [BeforeEach] [sig-storage] Projected configMap
... skipping 9 lines ...
I0330 07:41:25.135] I0330 06:57:04.774013      19 reflector.go:255] Listing and watching *v1.ServiceAccount from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0330 07:41:25.136] [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
I0330 07:41:25.136]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:25.136] I0330 06:57:04.776725      19 reflector.go:225] Stopping reflector *v1.ServiceAccount (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0330 07:41:25.136] STEP: Creating configMap with name projected-configmap-test-volume-247bfa26-51f7-4a1e-89af-137b0a96ae19
I0330 07:41:25.136] STEP: Creating a pod to test consume configMaps
I0330 07:41:25.136] Mar 30 06:57:04.786: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-2d0a2d21-378f-4ac3-9e19-c226d2cf3fc1" in namespace "projected-6082" to be "Succeeded or Failed"
I0330 07:41:25.137] Mar 30 06:57:04.794: INFO: Pod "pod-projected-configmaps-2d0a2d21-378f-4ac3-9e19-c226d2cf3fc1": Phase="Pending", Reason="", readiness=false. Elapsed: 7.988615ms
I0330 07:41:25.137] Mar 30 06:57:06.799: INFO: Pod "pod-projected-configmaps-2d0a2d21-378f-4ac3-9e19-c226d2cf3fc1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.013075382s
I0330 07:41:25.137] STEP: Saw pod success
I0330 07:41:25.137] Mar 30 06:57:06.799: INFO: Pod "pod-projected-configmaps-2d0a2d21-378f-4ac3-9e19-c226d2cf3fc1" satisfied condition "Succeeded or Failed"
I0330 07:41:25.137] Mar 30 06:57:06.802: INFO: Trying to get logs from node kind-worker2 pod pod-projected-configmaps-2d0a2d21-378f-4ac3-9e19-c226d2cf3fc1 container agnhost-container: <nil>
I0330 07:41:25.137] STEP: delete the pod
I0330 07:41:25.137] Mar 30 06:57:06.816: INFO: Waiting for pod pod-projected-configmaps-2d0a2d21-378f-4ac3-9e19-c226d2cf3fc1 to disappear
I0330 07:41:25.138] Mar 30 06:57:06.818: INFO: Pod pod-projected-configmaps-2d0a2d21-378f-4ac3-9e19-c226d2cf3fc1 no longer exists
I0330 07:41:25.138] [AfterEach] [sig-storage] Projected configMap
I0330 07:41:25.138]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
I0330 07:41:25.138] Mar 30 06:57:06.818: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0330 07:41:25.138] STEP: Destroying namespace "projected-6082" for this suite.
I0330 07:41:25.138] •{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":340,"completed":210,"skipped":3160,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should list all daemon and delete a collection of daemons with a label selector [Conformance]"]}
I0330 07:41:25.138] SSS
I0330 07:41:25.138] ------------------------------
I0330 07:41:25.139] [sig-api-machinery] Watchers 
I0330 07:41:25.139]   should receive events on concurrent watches in same order [Conformance]
I0330 07:41:25.139]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:25.139] [BeforeEach] [sig-api-machinery] Watchers
... skipping 13 lines ...
I0330 07:41:25.141] STEP: starting a background goroutine to produce watch events
I0330 07:41:25.141] STEP: creating watches starting from each resource version of the events produced and verifying they all receive resource versions in the same order
I0330 07:41:25.141] [AfterEach] [sig-api-machinery] Watchers
I0330 07:41:25.141]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
I0330 07:41:25.142] Mar 30 06:57:10.881: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0330 07:41:25.142] STEP: Destroying namespace "watch-4059" for this suite.
I0330 07:41:25.142] •{"msg":"PASSED [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance]","total":340,"completed":211,"skipped":3163,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should list all daemon and delete a collection of daemons with a label selector [Conformance]"]}
I0330 07:41:25.142] SSSSSSSSSSS
I0330 07:41:25.142] ------------------------------
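The concurrent-watch spec above starts watches from different resource versions and verifies they all observe the shared events in the same order. A simplified model of that invariant: each watcher's observed resourceVersion sequence should be a contiguous slice of the full ordered event stream (function names here are my own, and this is a sketch of the property, not the e2e test's actual code):

```python
def is_contiguous_sublist(sub, full):
    """True if `sub` appears as a contiguous run inside `full`."""
    n = len(sub)
    return any(full[i:i + n] == sub for i in range(len(full) - n + 1))

def watchers_agree(sequences):
    """True if every watcher's resourceVersion sequence is a contiguous
    slice of the longest one, i.e. all watchers saw events in one
    consistent global order."""
    reference = max(sequences, key=len)
    return all(is_contiguous_sublist(seq, reference) for seq in sequences)

events = ["101", "102", "103", "104", "105"]
print(watchers_agree([events, events[1:], events[3:]]))  # consistent ordering
print(watchers_agree([events, ["102", "101"]]))          # out-of-order watcher
```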
I0330 07:41:25.142] [sig-cli] Kubectl client Guestbook application 
I0330 07:41:25.142]   should create and stop a working application  [Conformance]
I0330 07:41:25.142]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:25.143] [BeforeEach] [sig-cli] Kubectl client
... skipping 208 lines ...
I0330 07:41:25.168] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
I0330 07:41:25.168]   Guestbook application
I0330 07:41:25.169]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:336
I0330 07:41:25.169]     should create and stop a working application  [Conformance]
I0330 07:41:25.169]     /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:25.169] ------------------------------
I0330 07:41:25.169] {"msg":"PASSED [sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]","total":340,"completed":212,"skipped":3174,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should list all daemon and delete a collection of daemons with a label selector [Conformance]"]}
I0330 07:41:25.169] SSSSSSSSSSS
I0330 07:41:25.169] ------------------------------
I0330 07:41:25.169] [sig-node] Container Lifecycle Hook when create a pod with lifecycle hook 
I0330 07:41:25.170]   should execute poststart exec hook properly [NodeConformance] [Conformance]
I0330 07:41:25.170]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:25.170] [BeforeEach] [sig-node] Container Lifecycle Hook
... skipping 36 lines ...
I0330 07:41:25.176] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
I0330 07:41:25.176]   when create a pod with lifecycle hook
I0330 07:41:25.176]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/node/lifecycle_hook.go:43
I0330 07:41:25.176]     should execute poststart exec hook properly [NodeConformance] [Conformance]
I0330 07:41:25.176]     /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:25.176] ------------------------------
I0330 07:41:25.177] {"msg":"PASSED [sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]","total":340,"completed":213,"skipped":3185,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should list all daemon and delete a collection of daemons with a label selector [Conformance]"]}
I0330 07:41:25.177] SSSSSSSSSSSSSSSS
I0330 07:41:25.177] ------------------------------
I0330 07:41:25.177] [sig-apps] Deployment 
I0330 07:41:25.177]   RollingUpdateDeployment should delete old pods and create new ones [Conformance]
I0330 07:41:25.177]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:25.177] [BeforeEach] [sig-apps] Deployment
... skipping 40 lines ...
I0330 07:41:25.198] • [SLOW TEST:7.088 seconds]
I0330 07:41:25.198] [sig-apps] Deployment
I0330 07:41:25.198] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
I0330 07:41:25.198]   RollingUpdateDeployment should delete old pods and create new ones [Conformance]
I0330 07:41:25.199]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:25.199] ------------------------------
I0330 07:41:25.199] {"msg":"PASSED [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance]","total":340,"completed":214,"skipped":3201,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should list all daemon and delete a collection of daemons with a label selector [Conformance]"]}
I0330 07:41:25.200] SSSSSSS
I0330 07:41:25.200] ------------------------------
I0330 07:41:25.200] [sig-api-machinery] ResourceQuota 
I0330 07:41:25.200]   should verify ResourceQuota with best effort scope. [Conformance]
I0330 07:41:25.200]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:25.201] [BeforeEach] [sig-api-machinery] ResourceQuota
... skipping 32 lines ...
I0330 07:41:25.205] • [SLOW TEST:16.160 seconds]
I0330 07:41:25.205] [sig-api-machinery] ResourceQuota
I0330 07:41:25.205] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
I0330 07:41:25.205]   should verify ResourceQuota with best effort scope. [Conformance]
I0330 07:41:25.205]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:25.205] ------------------------------
I0330 07:41:25.206] {"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. [Conformance]","total":340,"completed":215,"skipped":3208,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should list all daemon and delete a collection of daemons with a label selector [Conformance]"]}
I0330 07:41:25.206] SS
I0330 07:41:25.206] ------------------------------
I0330 07:41:25.206] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] 
I0330 07:41:25.206]   should include custom resource definition resources in discovery documents [Conformance]
I0330 07:41:25.206]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:25.206] [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
... skipping 18 lines ...
I0330 07:41:25.209] STEP: fetching the /apis/apiextensions.k8s.io/v1 discovery document
I0330 07:41:25.210] STEP: finding customresourcedefinitions resources in the /apis/apiextensions.k8s.io/v1 discovery document
I0330 07:41:25.210] [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
I0330 07:41:25.210]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
I0330 07:41:25.210] Mar 30 06:57:50.019: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0330 07:41:25.210] STEP: Destroying namespace "custom-resource-definition-1386" for this suite.
I0330 07:41:25.210] •{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance]","total":340,"completed":216,"skipped":3210,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should list all daemon and delete a collection of daemons with a label selector [Conformance]"]}
I0330 07:41:25.211] S
I0330 07:41:25.211] ------------------------------
I0330 07:41:25.211] [sig-node] Probing container 
I0330 07:41:25.211]   should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
I0330 07:41:25.211]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:25.211] [BeforeEach] [sig-node] Probing container
... skipping 27 lines ...
I0330 07:41:25.217] • [SLOW TEST:22.150 seconds]
I0330 07:41:25.217] [sig-node] Probing container
I0330 07:41:25.218] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
I0330 07:41:25.218]   should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
I0330 07:41:25.218]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:25.218] ------------------------------
I0330 07:41:25.218] {"msg":"PASSED [sig-node] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":340,"completed":217,"skipped":3211,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should list all daemon and delete a collection of daemons with a label selector [Conformance]"]}
I0330 07:41:25.218] SSSSSSSSSSS
I0330 07:41:25.218] ------------------------------
I0330 07:41:25.219] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
I0330 07:41:25.219]   works for multiple CRDs of different groups [Conformance]
I0330 07:41:25.219]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:25.219] [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
... skipping 21 lines ...
I0330 07:41:25.223] • [SLOW TEST:14.265 seconds]
I0330 07:41:25.223] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
I0330 07:41:25.223] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
I0330 07:41:25.223]   works for multiple CRDs of different groups [Conformance]
I0330 07:41:25.224]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:25.224] ------------------------------
I0330 07:41:25.224] {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","total":340,"completed":218,"skipped":3222,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should list all daemon and delete a collection of daemons with a label selector [Conformance]"]}
I0330 07:41:25.224] SSSSSSSSSS
I0330 07:41:25.224] ------------------------------
I0330 07:41:25.224] [sig-apps] DisruptionController Listing PodDisruptionBudgets for all namespaces 
I0330 07:41:25.224]   should list and delete a collection of PodDisruptionBudgets [Conformance]
I0330 07:41:25.225]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:25.225] [BeforeEach] [sig-apps] DisruptionController
... skipping 45 lines ...
I0330 07:41:25.231] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
I0330 07:41:25.231]   Listing PodDisruptionBudgets for all namespaces
I0330 07:41:25.231]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/disruption.go:75
I0330 07:41:25.231]     should list and delete a collection of PodDisruptionBudgets [Conformance]
I0330 07:41:25.231]     /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:25.232] ------------------------------
I0330 07:41:25.232] {"msg":"PASSED [sig-apps] DisruptionController Listing PodDisruptionBudgets for all namespaces should list and delete a collection of PodDisruptionBudgets [Conformance]","total":340,"completed":219,"skipped":3232,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should list all daemon and delete a collection of daemons with a label selector [Conformance]"]}
I0330 07:41:25.232] SSS
I0330 07:41:25.232] ------------------------------
I0330 07:41:25.232] [sig-node] Events 
I0330 07:41:25.232]   should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
I0330 07:41:25.233]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:25.233] [BeforeEach] [sig-node] Events
... skipping 30 lines ...
I0330 07:41:25.242] • [SLOW TEST:6.086 seconds]
I0330 07:41:25.243] [sig-node] Events
I0330 07:41:25.243] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/framework.go:23
I0330 07:41:25.243]   should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
I0330 07:41:25.243]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:25.243] ------------------------------
I0330 07:41:25.244] {"msg":"PASSED [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]","total":340,"completed":220,"skipped":3235,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should list all daemon and delete a collection of daemons with a label selector [Conformance]"]}
I0330 07:41:25.244] SSSSSSSSSSSSSSS
I0330 07:41:25.244] ------------------------------
I0330 07:41:25.244] [sig-api-machinery] ResourceQuota 
I0330 07:41:25.244]   should create a ResourceQuota and capture the life of a service. [Conformance]
I0330 07:41:25.244]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:25.244] [BeforeEach] [sig-api-machinery] ResourceQuota
... skipping 27 lines ...
I0330 07:41:25.248] • [SLOW TEST:11.185 seconds]
I0330 07:41:25.248] [sig-api-machinery] ResourceQuota
I0330 07:41:25.248] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
I0330 07:41:25.248]   should create a ResourceQuota and capture the life of a service. [Conformance]
I0330 07:41:25.249]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:25.249] ------------------------------
I0330 07:41:25.249] {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. [Conformance]","total":340,"completed":221,"skipped":3250,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should list all daemon and delete a collection of daemons with a label selector [Conformance]"]}
I0330 07:41:25.249] SSSSSSSSSSSSSSSSS
I0330 07:41:25.249] ------------------------------
I0330 07:41:25.249] [sig-node] Kubelet when scheduling a read only busybox container 
I0330 07:41:25.249]   should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
I0330 07:41:25.250]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:25.250] [BeforeEach] [sig-node] Kubelet
... skipping 15 lines ...
I0330 07:41:25.253] Mar 30 06:58:49.915: INFO: The status of Pod busybox-readonly-fsb14c0272-26df-467c-8938-2675effda54f is Pending, waiting for it to be Running (with Ready = true)
I0330 07:41:25.253] Mar 30 06:58:51.923: INFO: The status of Pod busybox-readonly-fsb14c0272-26df-467c-8938-2675effda54f is Running (Ready = true)
I0330 07:41:25.254] [AfterEach] [sig-node] Kubelet
I0330 07:41:25.254]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
I0330 07:41:25.254] Mar 30 06:58:51.931: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0330 07:41:25.254] STEP: Destroying namespace "kubelet-test-3730" for this suite.
I0330 07:41:25.255] •{"msg":"PASSED [sig-node] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]","total":340,"completed":222,"skipped":3267,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should list all daemon and delete a collection of daemons with a label selector [Conformance]"]}
I0330 07:41:25.255] SSSS
I0330 07:41:25.255] ------------------------------
I0330 07:41:25.255] [sig-apps] Deployment 
I0330 07:41:25.255]   Deployment should have a working scale subresource [Conformance]
I0330 07:41:25.255]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:25.255] [BeforeEach] [sig-apps] Deployment
... skipping 30 lines ...
I0330 07:41:25.270] Mar 30 06:58:54.032: INFO: Pod "test-new-deployment-847dcfb7fb-pr9dn" is not available:
I0330 07:41:25.274] &Pod{ObjectMeta:{test-new-deployment-847dcfb7fb-pr9dn test-new-deployment-847dcfb7fb- deployment-2224  c6dbbb0d-15ca-46c6-bf46-3de13ea468c5 20775 0 2021-03-30 06:58:54 +0000 UTC <nil> <nil> map[name:httpd pod-template-hash:847dcfb7fb] map[] [{apps/v1 ReplicaSet test-new-deployment-847dcfb7fb 9b7900dd-5998-4fa0-b6b5-d55627f775c3 0xc0039d1dc0 0xc0039d1dc1}] []  [{kube-controller-manager Update v1 2021-03-30 06:58:54 +0000 UTC FieldsV1 {"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"9b7900dd-5998-4fa0-b6b5-d55627f775c3\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:kube-api-access-xqbbv,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:nil,NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:&ProjectedVolumeSource{Sources:[]VolumeProjection{VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:nil,ServiceAccountToken:&ServiceAccountTokenProjection{Audience:,ExpirationSeconds:*3607,Path:token,},},VolumeProjection{Secret:nil,DownwardAPI:nil,ConfigMap:&ConfigMapProjection{LocalObjectReference:LocalObjectReference{Name:kube-root-ca.crt,},Items:[]KeyToPath{KeyToPath{Key:ca.crt,Path:ca.crt,Mode:nil,},},Optional:nil,},ServiceAc
countToken:nil,},VolumeProjection{Secret:nil,DownwardAPI:&DownwardAPIProjection{Items:[]DownwardAPIVolumeFile{DownwardAPIVolumeFile{Path:namespace,FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,Mode:nil,},},},ConfigMap:nil,ServiceAccountToken:nil,},},DefaultMode:*420,},StorageOS:nil,CSI:nil,Ephemeral:nil,},},},Containers:[]Container{Container{Name:httpd,Image:k8s.gcr.io/e2e-test-images/httpd:2.4.38-1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-xqbbv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kind-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,SeccompProfile:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernet
es.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:*PreemptLowerPriority,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},SetHostnameAsFQDN:nil,},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2021-03-30 06:58:54 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:<nil>,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
I0330 07:41:25.274] [AfterEach] [sig-apps] Deployment
I0330 07:41:25.274]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
I0330 07:41:25.274] Mar 30 06:58:54.032: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0330 07:41:25.274] STEP: Destroying namespace "deployment-2224" for this suite.
I0330 07:41:25.274] •{"msg":"PASSED [sig-apps] Deployment Deployment should have a working scale subresource [Conformance]","total":340,"completed":223,"skipped":3271,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should list all daemon and delete a collection of daemons with a label selector [Conformance]"]}
I0330 07:41:25.275] SSSSSSSSSSSS
I0330 07:41:25.275] ------------------------------
I0330 07:41:25.275] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
I0330 07:41:25.275]   works for CRD preserving unknown fields in an embedded object [Conformance]
I0330 07:41:25.275]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:25.275] [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
... skipping 36 lines ...
I0330 07:41:25.282] • [SLOW TEST:6.431 seconds]
I0330 07:41:25.282] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
I0330 07:41:25.282] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
I0330 07:41:25.282]   works for CRD preserving unknown fields in an embedded object [Conformance]
I0330 07:41:25.282]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:25.282] ------------------------------
I0330 07:41:25.283] {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance]","total":340,"completed":224,"skipped":3283,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should list all daemon and delete a collection of daemons with a label selector [Conformance]"]}
I0330 07:41:25.283] SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
I0330 07:41:25.283] ------------------------------
I0330 07:41:25.283] [sig-api-machinery] Namespaces [Serial] 
I0330 07:41:25.283]   should patch a Namespace [Conformance]
I0330 07:41:25.283]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:25.284] [BeforeEach] [sig-api-machinery] Namespaces [Serial]
... skipping 18 lines ...
I0330 07:41:25.286] STEP: get the Namespace and ensuring it has the label
I0330 07:41:25.286] [AfterEach] [sig-api-machinery] Namespaces [Serial]
I0330 07:41:25.287]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
I0330 07:41:25.287] Mar 30 06:59:00.540: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0330 07:41:25.287] STEP: Destroying namespace "namespaces-8201" for this suite.
I0330 07:41:25.287] STEP: Destroying namespace "nspatchtest-e7743141-cd63-4981-ad3a-42c7ee54521a-9547" for this suite.
I0330 07:41:25.287] •{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should patch a Namespace [Conformance]","total":340,"completed":225,"skipped":3318,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should list all daemon and delete a collection of daemons with a label selector [Conformance]"]}
I0330 07:41:25.287] S
I0330 07:41:25.288] ------------------------------
I0330 07:41:25.288] [sig-cli] Kubectl client Kubectl patch 
I0330 07:41:25.288]   should add annotations for pods in rc  [Conformance]
I0330 07:41:25.288]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:25.288] [BeforeEach] [sig-cli] Kubectl client
... skipping 32 lines ...
I0330 07:41:25.293] Mar 30 06:59:03.004: INFO: Selector matched 1 pods for map[app:agnhost]
I0330 07:41:25.293] Mar 30 06:59:03.004: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
I0330 07:41:25.293] [AfterEach] [sig-cli] Kubectl client
I0330 07:41:25.293]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
I0330 07:41:25.293] Mar 30 06:59:03.005: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0330 07:41:25.293] STEP: Destroying namespace "kubectl-9618" for this suite.
I0330 07:41:25.294] •{"msg":"PASSED [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc  [Conformance]","total":340,"completed":226,"skipped":3319,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should list all daemon and delete a collection of daemons with a label selector [Conformance]"]}
I0330 07:41:25.294] SS
I0330 07:41:25.294] ------------------------------
I0330 07:41:25.294] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
I0330 07:41:25.294]   listing mutating webhooks should work [Conformance]
I0330 07:41:25.294]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:25.294] [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
... skipping 28 lines ...
I0330 07:41:25.299]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
I0330 07:41:25.299] Mar 30 06:59:06.579: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0330 07:41:25.299] STEP: Destroying namespace "webhook-6779" for this suite.
I0330 07:41:25.299] STEP: Destroying namespace "webhook-6779-markers" for this suite.
I0330 07:41:25.299] [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
I0330 07:41:25.299]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
I0330 07:41:25.300] •{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","total":340,"completed":227,"skipped":3321,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should list all daemon and delete a collection of daemons with a label selector [Conformance]"]}
I0330 07:41:25.300] SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
I0330 07:41:25.300] ------------------------------
I0330 07:41:25.300] [sig-node] Secrets 
I0330 07:41:25.300]   should be consumable from pods in env vars [NodeConformance] [Conformance]
I0330 07:41:25.300]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:25.300] [BeforeEach] [sig-node] Secrets
... skipping 9 lines ...
I0330 07:41:25.302] I0330 06:59:06.661566      19 reflector.go:255] Listing and watching *v1.ServiceAccount from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0330 07:41:25.302] [It] should be consumable from pods in env vars [NodeConformance] [Conformance]
I0330 07:41:25.302]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:25.302] STEP: Creating secret with name secret-test-89a6d625-d437-49d6-a38e-d7fdab3c102b
I0330 07:41:25.302] I0330 06:59:06.664425      19 reflector.go:225] Stopping reflector *v1.ServiceAccount (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0330 07:41:25.302] STEP: Creating a pod to test consume secrets
I0330 07:41:25.303] Mar 30 06:59:06.676: INFO: Waiting up to 5m0s for pod "pod-secrets-f5038a80-6ef5-431a-9680-199aac8b6d72" in namespace "secrets-1677" to be "Succeeded or Failed"
I0330 07:41:25.303] Mar 30 06:59:06.679: INFO: Pod "pod-secrets-f5038a80-6ef5-431a-9680-199aac8b6d72": Phase="Pending", Reason="", readiness=false. Elapsed: 3.422781ms
I0330 07:41:25.303] Mar 30 06:59:08.688: INFO: Pod "pod-secrets-f5038a80-6ef5-431a-9680-199aac8b6d72": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.012000966s
I0330 07:41:25.303] STEP: Saw pod success
I0330 07:41:25.303] Mar 30 06:59:08.688: INFO: Pod "pod-secrets-f5038a80-6ef5-431a-9680-199aac8b6d72" satisfied condition "Succeeded or Failed"
I0330 07:41:25.303] Mar 30 06:59:08.691: INFO: Trying to get logs from node kind-worker2 pod pod-secrets-f5038a80-6ef5-431a-9680-199aac8b6d72 container secret-env-test: <nil>
I0330 07:41:25.303] STEP: delete the pod
I0330 07:41:25.304] Mar 30 06:59:08.715: INFO: Waiting for pod pod-secrets-f5038a80-6ef5-431a-9680-199aac8b6d72 to disappear
I0330 07:41:25.304] Mar 30 06:59:08.718: INFO: Pod pod-secrets-f5038a80-6ef5-431a-9680-199aac8b6d72 no longer exists
I0330 07:41:25.304] [AfterEach] [sig-node] Secrets
I0330 07:41:25.304]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
I0330 07:41:25.304] Mar 30 06:59:08.718: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0330 07:41:25.304] STEP: Destroying namespace "secrets-1677" for this suite.
I0330 07:41:25.305] •{"msg":"PASSED [sig-node] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance]","total":340,"completed":228,"skipped":3394,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should list all daemon and delete a collection of daemons with a label selector [Conformance]"]}
I0330 07:41:25.305] SSSSSSSSSSSS
I0330 07:41:25.305] ------------------------------
I0330 07:41:25.305] [sig-apps] ReplicationController 
I0330 07:41:25.305]   should surface a failure condition on a common issue like exceeded quota [Conformance]
I0330 07:41:25.305]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:25.305] [BeforeEach] [sig-apps] ReplicationController
... skipping 19 lines ...
I0330 07:41:25.308] Mar 30 06:59:10.797: INFO: Updating replication controller "condition-test"
I0330 07:41:25.308] STEP: Checking rc "condition-test" has no failure condition set
I0330 07:41:25.308] [AfterEach] [sig-apps] ReplicationController
I0330 07:41:25.308]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
I0330 07:41:25.308] Mar 30 06:59:11.805: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0330 07:41:25.309] STEP: Destroying namespace "replication-controller-5909" for this suite.
I0330 07:41:25.309] •{"msg":"PASSED [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance]","total":340,"completed":229,"skipped":3406,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should list all daemon and delete a collection of daemons with a label selector [Conformance]"]}
I0330 07:41:25.309] SSSSSSSSSSSSSSSSS
I0330 07:41:25.309] ------------------------------
I0330 07:41:25.309] [sig-cli] Kubectl client Kubectl api-versions 
I0330 07:41:25.309]   should check if v1 is in available api versions  [Conformance]
I0330 07:41:25.309]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:25.309] [BeforeEach] [sig-cli] Kubectl client
... skipping 17 lines ...
I0330 07:41:25.312] Mar 30 06:59:11.934: INFO: stderr: ""
I0330 07:41:25.313] Mar 30 06:59:11.934: INFO: stdout: "admissionregistration.k8s.io/v1\nadmissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1\ncoordination.k8s.io/v1beta1\ndiscovery.k8s.io/v1\ndiscovery.k8s.io/v1beta1\nevents.k8s.io/v1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nflowcontrol.apiserver.k8s.io/v1beta1\nnetworking.k8s.io/v1\nnetworking.k8s.io/v1beta1\nnode.k8s.io/v1\nnode.k8s.io/v1beta1\npolicy/v1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n"
I0330 07:41:25.313] [AfterEach] [sig-cli] Kubectl client
I0330 07:41:25.313]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
I0330 07:41:25.313] Mar 30 06:59:11.934: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0330 07:41:25.313] STEP: Destroying namespace "kubectl-6451" for this suite.
I0330 07:41:25.313] •{"msg":"PASSED [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions  [Conformance]","total":340,"completed":230,"skipped":3423,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should list all daemon and delete a collection of daemons with a label selector [Conformance]"]}
I0330 07:41:25.313] SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
I0330 07:41:25.313] ------------------------------
I0330 07:41:25.314] [sig-node] Container Runtime blackbox test on terminated container 
I0330 07:41:25.314]   should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
I0330 07:41:25.314]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:25.314] [BeforeEach] [sig-node] Container Runtime
... skipping 18 lines ...
I0330 07:41:25.317] Mar 30 06:59:13.990: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
I0330 07:41:25.317] STEP: delete the container
I0330 07:41:25.317] [AfterEach] [sig-node] Container Runtime
I0330 07:41:25.317]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
I0330 07:41:25.317] Mar 30 06:59:14.000: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0330 07:41:25.317] STEP: Destroying namespace "container-runtime-6580" for this suite.
I0330 07:41:25.317] •{"msg":"PASSED [sig-node] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]","total":340,"completed":231,"skipped":3468,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should list all daemon and delete a collection of daemons with a label selector [Conformance]"]}
I0330 07:41:25.318] SSSSSSSS
I0330 07:41:25.318] ------------------------------
I0330 07:41:25.318] [sig-apps] Daemon set [Serial] 
I0330 07:41:25.318]   should retry creating failed daemon pods [Conformance]
I0330 07:41:25.318]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:25.318] [BeforeEach] [sig-apps] Daemon set [Serial]
I0330 07:41:25.318]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
I0330 07:41:25.318] STEP: Creating a kubernetes client
I0330 07:41:25.318] Mar 30 06:59:14.006: INFO: >>> kubeConfig: /tmp/kubeconfig-963336331
I0330 07:41:25.318] STEP: Building a namespace api object, basename daemonsets
... skipping 3 lines ...
I0330 07:41:25.319] I0330 06:59:14.028985      19 reflector.go:225] Stopping reflector *v1.ServiceAccount (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0330 07:41:25.319] I0330 06:59:14.029131      19 reflector.go:219] Starting reflector *v1.ServiceAccount (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0330 07:41:25.320] I0330 06:59:14.029143      19 reflector.go:255] Listing and watching *v1.ServiceAccount from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0330 07:41:25.320] [BeforeEach] [sig-apps] Daemon set [Serial]
I0330 07:41:25.320]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:135
I0330 07:41:25.320] I0330 06:59:14.031720      19 reflector.go:225] Stopping reflector *v1.ServiceAccount (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0330 07:41:25.320] [It] should retry creating failed daemon pods [Conformance]
I0330 07:41:25.320]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:25.320] STEP: Creating a simple DaemonSet "daemon-set"
I0330 07:41:25.320] STEP: Check that daemon pods launch on every node of the cluster.
I0330 07:41:25.321] Mar 30 06:59:14.055: INFO: DaemonSet pods can't tolerate node kind-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
I0330 07:41:25.321] Mar 30 06:59:14.057: INFO: Number of nodes with available pods: 0
I0330 07:41:25.321] Mar 30 06:59:14.057: INFO: Node kind-worker is running more than one daemon pod
I0330 07:41:25.321] Mar 30 06:59:15.063: INFO: DaemonSet pods can't tolerate node kind-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
I0330 07:41:25.321] Mar 30 06:59:15.066: INFO: Number of nodes with available pods: 1
I0330 07:41:25.321] Mar 30 06:59:15.066: INFO: Node kind-worker is running more than one daemon pod
I0330 07:41:25.321] Mar 30 06:59:16.063: INFO: DaemonSet pods can't tolerate node kind-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
I0330 07:41:25.322] Mar 30 06:59:16.066: INFO: Number of nodes with available pods: 2
I0330 07:41:25.322] Mar 30 06:59:16.066: INFO: Number of running nodes: 2, number of available pods: 2
I0330 07:41:25.322] STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived.
I0330 07:41:25.322] Mar 30 06:59:16.082: INFO: DaemonSet pods can't tolerate node kind-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
I0330 07:41:25.322] Mar 30 06:59:16.087: INFO: Number of nodes with available pods: 1
I0330 07:41:25.322] Mar 30 06:59:16.087: INFO: Node kind-worker is running more than one daemon pod
I0330 07:41:25.322] Mar 30 06:59:17.099: INFO: DaemonSet pods can't tolerate node kind-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
I0330 07:41:25.323] Mar 30 06:59:17.102: INFO: Number of nodes with available pods: 1
I0330 07:41:25.323] Mar 30 06:59:17.102: INFO: Node kind-worker is running more than one daemon pod
I0330 07:41:25.323] Mar 30 06:59:18.093: INFO: DaemonSet pods can't tolerate node kind-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:<nil>}], skip checking this node
I0330 07:41:25.323] Mar 30 06:59:18.096: INFO: Number of nodes with available pods: 2
I0330 07:41:25.323] Mar 30 06:59:18.097: INFO: Number of running nodes: 2, number of available pods: 2
I0330 07:41:25.324] STEP: Wait for the failed daemon pod to be completely deleted.
I0330 07:41:25.324] [AfterEach] [sig-apps] Daemon set [Serial]
I0330 07:41:25.324]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:101
I0330 07:41:25.324] STEP: Deleting DaemonSet "daemon-set"
I0330 07:41:25.324] STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-6945, will wait for the garbage collector to delete the pods
I0330 07:41:25.324] I0330 06:59:18.103888      19 reflector.go:219] Starting reflector *v1.Pod (0s) from k8s.io/kubernetes/test/utils/pod_store.go:57
I0330 07:41:25.324] I0330 06:59:18.103933      19 reflector.go:255] Listing and watching *v1.Pod from k8s.io/kubernetes/test/utils/pod_store.go:57
... skipping 13 lines ...
I0330 07:41:25.327] Mar 30 06:59:23.578: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0330 07:41:25.327] STEP: Destroying namespace "daemonsets-6945" for this suite.
I0330 07:41:25.327] 
I0330 07:41:25.327] • [SLOW TEST:9.578 seconds]
I0330 07:41:25.327] [sig-apps] Daemon set [Serial]
I0330 07:41:25.327] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
I0330 07:41:25.327]   should retry creating failed daemon pods [Conformance]
I0330 07:41:25.328]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:25.328] ------------------------------
I0330 07:41:25.328] {"msg":"PASSED [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]","total":340,"completed":232,"skipped":3476,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should list all daemon and delete a collection of daemons with a label selector [Conformance]"]}
I0330 07:41:25.328] SSSSSSSSSSSSSSSSSSSSSSSSS
I0330 07:41:25.328] ------------------------------
I0330 07:41:25.328] [sig-apps] Daemon set [Serial] 
I0330 07:41:25.328]   should run and stop simple daemon [Conformance]
I0330 07:41:25.328]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:25.329] [BeforeEach] [sig-apps] Daemon set [Serial]
... skipping 67 lines ...
I0330 07:41:25.338] • [SLOW TEST:19.977 seconds]
I0330 07:41:25.338] [sig-apps] Daemon set [Serial]
I0330 07:41:25.338] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
I0330 07:41:25.338]   should run and stop simple daemon [Conformance]
I0330 07:41:25.338]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:25.338] ------------------------------
I0330 07:41:25.339] {"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance]","total":340,"completed":233,"skipped":3501,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should list all daemon and delete a collection of daemons with a label selector [Conformance]"]}
I0330 07:41:25.339] SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
I0330 07:41:25.339] ------------------------------
I0330 07:41:25.339] [sig-node] Probing container 
I0330 07:41:25.339]   should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
I0330 07:41:25.339]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:25.340] [BeforeEach] [sig-node] Probing container
... skipping 26 lines ...
I0330 07:41:25.344] • [SLOW TEST:242.813 seconds]
I0330 07:41:25.344] [sig-node] Probing container
I0330 07:41:25.344] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
I0330 07:41:25.344]   should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
I0330 07:41:25.344]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:25.344] ------------------------------
I0330 07:41:25.345] {"msg":"PASSED [sig-node] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":340,"completed":234,"skipped":3534,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should list all daemon and delete a collection of daemons with a label selector [Conformance]"]}
I0330 07:41:25.345] SS
I0330 07:41:25.345] ------------------------------
I0330 07:41:25.345] [sig-network] Services 
I0330 07:41:25.345]   should serve multiport endpoints from pods  [Conformance]
I0330 07:41:25.345]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:25.345] [BeforeEach] [sig-network] Services
... skipping 11 lines ...
I0330 07:41:25.347] [BeforeEach] [sig-network] Services
I0330 07:41:25.348]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746
I0330 07:41:25.348] [It] should serve multiport endpoints from pods  [Conformance]
I0330 07:41:25.348]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:25.348] STEP: creating service multi-endpoint-test in namespace services-9075
I0330 07:41:25.348] STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-9075 to expose endpoints map[]
I0330 07:41:25.349] Mar 30 07:03:46.425: INFO: Failed to get Endpoints object: endpoints "multi-endpoint-test" not found
I0330 07:41:25.349] Mar 30 07:03:47.440: INFO: successfully validated that service multi-endpoint-test in namespace services-9075 exposes endpoints map[]
I0330 07:41:25.349] STEP: Creating pod pod1 in namespace services-9075
I0330 07:41:25.349] Mar 30 07:03:47.448: INFO: The status of Pod pod1 is Pending, waiting for it to be Running (with Ready = true)
I0330 07:41:25.349] Mar 30 07:03:49.455: INFO: The status of Pod pod1 is Running (Ready = true)
I0330 07:41:25.349] STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-9075 to expose endpoints map[pod1:[100]]
I0330 07:41:25.350] Mar 30 07:03:49.465: INFO: successfully validated that service multi-endpoint-test in namespace services-9075 exposes endpoints map[pod1:[100]]
... skipping 18 lines ...
I0330 07:41:25.353] • [SLOW TEST:5.214 seconds]
I0330 07:41:25.353] [sig-network] Services
I0330 07:41:25.353] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
I0330 07:41:25.353]   should serve multiport endpoints from pods  [Conformance]
I0330 07:41:25.353]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:25.354] ------------------------------
I0330 07:41:25.354] {"msg":"PASSED [sig-network] Services should serve multiport endpoints from pods  [Conformance]","total":340,"completed":235,"skipped":3536,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should list all daemon and delete a collection of daemons with a label selector [Conformance]"]}
I0330 07:41:25.354] SSSSS
I0330 07:41:25.354] ------------------------------
I0330 07:41:25.354] [sig-cli] Kubectl client Kubectl diff 
I0330 07:41:25.355]   should check if kubectl diff finds a difference for Deployments [Conformance]
I0330 07:41:25.355]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:25.355] [BeforeEach] [sig-cli] Kubectl client
... skipping 23 lines ...
I0330 07:41:25.359] Mar 30 07:03:52.378: INFO: stderr: ""
I0330 07:41:25.359] Mar 30 07:03:52.378: INFO: stdout: "deployment.apps \"httpd-deployment\" deleted\n"
I0330 07:41:25.359] [AfterEach] [sig-cli] Kubectl client
I0330 07:41:25.359]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
I0330 07:41:25.359] Mar 30 07:03:52.378: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0330 07:41:25.359] STEP: Destroying namespace "kubectl-9029" for this suite.
I0330 07:41:25.360] •{"msg":"PASSED [sig-cli] Kubectl client Kubectl diff should check if kubectl diff finds a difference for Deployments [Conformance]","total":340,"completed":236,"skipped":3541,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should list all daemon and delete a collection of daemons with a label selector [Conformance]"]}
I0330 07:41:25.360] SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
I0330 07:41:25.360] ------------------------------
I0330 07:41:25.360] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] 
I0330 07:41:25.360]   custom resource defaulting for requests and from storage works  [Conformance]
I0330 07:41:25.360]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:25.360] [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
... skipping 12 lines ...
I0330 07:41:25.362] Mar 30 07:03:52.424: INFO: >>> kubeConfig: /tmp/kubeconfig-963336331
I0330 07:41:25.362] I0330 07:03:52.424165      19 reflector.go:225] Stopping reflector *v1.ServiceAccount (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0330 07:41:25.363] [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
I0330 07:41:25.363]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
I0330 07:41:25.363] Mar 30 07:03:55.566: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0330 07:41:25.363] STEP: Destroying namespace "custom-resource-definition-7826" for this suite.
I0330 07:41:25.364] •{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works  [Conformance]","total":340,"completed":237,"skipped":3586,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should list all daemon and delete a collection of daemons with a label selector [Conformance]"]}
I0330 07:41:25.364] SSSSSSSSSSS
I0330 07:41:25.364] ------------------------------
I0330 07:41:25.364] [sig-node] Probing container 
I0330 07:41:25.364]   should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance]
I0330 07:41:25.364]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:25.365] [BeforeEach] [sig-node] Probing container
... skipping 26 lines ...
I0330 07:41:25.370] • [SLOW TEST:242.741 seconds]
I0330 07:41:25.370] [sig-node] Probing container
I0330 07:41:25.370] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
I0330 07:41:25.370]   should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance]
I0330 07:41:25.371]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:25.371] ------------------------------
I0330 07:41:25.371] {"msg":"PASSED [sig-node] Probing container should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance]","total":340,"completed":238,"skipped":3597,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should list all daemon and delete a collection of daemons with a label selector [Conformance]"]}
I0330 07:41:25.372] SSSSSSSSSSSSSSS
I0330 07:41:25.372] ------------------------------
I0330 07:41:25.372] [sig-storage] EmptyDir volumes 
I0330 07:41:25.372]   should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
I0330 07:41:25.372]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:25.372] [BeforeEach] [sig-storage] EmptyDir volumes
... skipping 8 lines ...
I0330 07:41:25.374] I0330 07:07:58.354519      19 reflector.go:219] Starting reflector *v1.ServiceAccount (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0330 07:41:25.374] I0330 07:07:58.354601      19 reflector.go:255] Listing and watching *v1.ServiceAccount from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0330 07:41:25.374] I0330 07:07:58.357267      19 reflector.go:225] Stopping reflector *v1.ServiceAccount (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0330 07:41:25.374] [It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
I0330 07:41:25.375]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:25.375] STEP: Creating a pod to test emptydir 0666 on node default medium
I0330 07:41:25.375] Mar 30 07:07:58.363: INFO: Waiting up to 5m0s for pod "pod-e55efded-dd06-4a53-9e2d-450c949aeb58" in namespace "emptydir-6026" to be "Succeeded or Failed"
I0330 07:41:25.375] Mar 30 07:07:58.367: INFO: Pod "pod-e55efded-dd06-4a53-9e2d-450c949aeb58": Phase="Pending", Reason="", readiness=false. Elapsed: 3.799155ms
I0330 07:41:25.375] Mar 30 07:08:00.371: INFO: Pod "pod-e55efded-dd06-4a53-9e2d-450c949aeb58": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.008056698s
I0330 07:41:25.375] STEP: Saw pod success
I0330 07:41:25.376] Mar 30 07:08:00.371: INFO: Pod "pod-e55efded-dd06-4a53-9e2d-450c949aeb58" satisfied condition "Succeeded or Failed"
I0330 07:41:25.376] Mar 30 07:08:00.374: INFO: Trying to get logs from node kind-worker2 pod pod-e55efded-dd06-4a53-9e2d-450c949aeb58 container test-container: <nil>
I0330 07:41:25.376] STEP: delete the pod
I0330 07:41:25.376] Mar 30 07:08:00.400: INFO: Waiting for pod pod-e55efded-dd06-4a53-9e2d-450c949aeb58 to disappear
I0330 07:41:25.376] Mar 30 07:08:00.404: INFO: Pod pod-e55efded-dd06-4a53-9e2d-450c949aeb58 no longer exists
I0330 07:41:25.376] [AfterEach] [sig-storage] EmptyDir volumes
I0330 07:41:25.376]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
I0330 07:41:25.377] Mar 30 07:08:00.404: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0330 07:41:25.377] STEP: Destroying namespace "emptydir-6026" for this suite.
I0330 07:41:25.377] •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":340,"completed":239,"skipped":3612,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should list all daemon and delete a collection of daemons with a label selector [Conformance]"]}
I0330 07:41:25.377] SSSSSSS
I0330 07:41:25.377] ------------------------------
I0330 07:41:25.377] [sig-network] EndpointSlice 
I0330 07:41:25.377]   should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance]
I0330 07:41:25.377]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:25.378] [BeforeEach] [sig-network] EndpointSlice
... skipping 13 lines ...
I0330 07:41:25.380] [It] should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance]
I0330 07:41:25.380]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:25.380] [AfterEach] [sig-network] EndpointSlice
I0330 07:41:25.380]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
I0330 07:41:25.380] Mar 30 07:08:04.500: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0330 07:41:25.380] STEP: Destroying namespace "endpointslice-5792" for this suite.
I0330 07:41:25.381] •{"msg":"PASSED [sig-network] EndpointSlice should create and delete Endpoints and EndpointSlices for a Service with a selector specified [Conformance]","total":340,"completed":240,"skipped":3619,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should list all daemon and delete a collection of daemons with a label selector [Conformance]"]}
I0330 07:41:25.381] S
I0330 07:41:25.382] ------------------------------
I0330 07:41:25.382] [sig-apps] ReplicaSet 
I0330 07:41:25.382]   Replace and Patch tests [Conformance]
I0330 07:41:25.382]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:25.382] [BeforeEach] [sig-apps] ReplicaSet
... skipping 33 lines ...
I0330 07:41:25.387] • [SLOW TEST:6.566 seconds]
I0330 07:41:25.387] [sig-apps] ReplicaSet
I0330 07:41:25.387] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
I0330 07:41:25.387]   Replace and Patch tests [Conformance]
I0330 07:41:25.388]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:25.388] ------------------------------
I0330 07:41:25.388] {"msg":"PASSED [sig-apps] ReplicaSet Replace and Patch tests [Conformance]","total":340,"completed":241,"skipped":3620,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should list all daemon and delete a collection of daemons with a label selector [Conformance]"]}
I0330 07:41:25.388] [sig-storage] Subpath Atomic writer volumes 
I0330 07:41:25.388]   should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
I0330 07:41:25.388]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:25.388] [BeforeEach] [sig-storage] Subpath
I0330 07:41:25.389]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
I0330 07:41:25.389] STEP: Creating a kubernetes client
... skipping 10 lines ...
I0330 07:41:25.390]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
I0330 07:41:25.390] STEP: Setting up data
I0330 07:41:25.391] [It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
I0330 07:41:25.391]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:25.391] STEP: Creating pod pod-subpath-test-configmap-9rb6
I0330 07:41:25.391] STEP: Creating a pod to test atomic-volume-subpath
I0330 07:41:25.391] Mar 30 07:08:11.124: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-9rb6" in namespace "subpath-9329" to be "Succeeded or Failed"
I0330 07:41:25.391] Mar 30 07:08:11.126: INFO: Pod "pod-subpath-test-configmap-9rb6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.481231ms
I0330 07:41:25.391] Mar 30 07:08:13.133: INFO: Pod "pod-subpath-test-configmap-9rb6": Phase="Running", Reason="", readiness=true. Elapsed: 2.008536492s
I0330 07:41:25.392] Mar 30 07:08:15.138: INFO: Pod "pod-subpath-test-configmap-9rb6": Phase="Running", Reason="", readiness=true. Elapsed: 4.014484421s
I0330 07:41:25.392] Mar 30 07:08:17.146: INFO: Pod "pod-subpath-test-configmap-9rb6": Phase="Running", Reason="", readiness=true. Elapsed: 6.022162537s
I0330 07:41:25.392] Mar 30 07:08:19.151: INFO: Pod "pod-subpath-test-configmap-9rb6": Phase="Running", Reason="", readiness=true. Elapsed: 8.027447264s
I0330 07:41:25.392] Mar 30 07:08:21.157: INFO: Pod "pod-subpath-test-configmap-9rb6": Phase="Running", Reason="", readiness=true. Elapsed: 10.0327019s
I0330 07:41:25.392] Mar 30 07:08:23.163: INFO: Pod "pod-subpath-test-configmap-9rb6": Phase="Running", Reason="", readiness=true. Elapsed: 12.039117519s
I0330 07:41:25.392] Mar 30 07:08:25.169: INFO: Pod "pod-subpath-test-configmap-9rb6": Phase="Running", Reason="", readiness=true. Elapsed: 14.04496262s
I0330 07:41:25.392] Mar 30 07:08:27.174: INFO: Pod "pod-subpath-test-configmap-9rb6": Phase="Running", Reason="", readiness=true. Elapsed: 16.0502154s
I0330 07:41:25.393] Mar 30 07:08:29.181: INFO: Pod "pod-subpath-test-configmap-9rb6": Phase="Running", Reason="", readiness=true. Elapsed: 18.056800351s
I0330 07:41:25.393] Mar 30 07:08:31.186: INFO: Pod "pod-subpath-test-configmap-9rb6": Phase="Running", Reason="", readiness=true. Elapsed: 20.061875372s
I0330 07:41:25.393] Mar 30 07:08:33.191: INFO: Pod "pod-subpath-test-configmap-9rb6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 22.067514076s
I0330 07:41:25.393] STEP: Saw pod success
I0330 07:41:25.393] Mar 30 07:08:33.192: INFO: Pod "pod-subpath-test-configmap-9rb6" satisfied condition "Succeeded or Failed"
I0330 07:41:25.393] Mar 30 07:08:33.194: INFO: Trying to get logs from node kind-worker pod pod-subpath-test-configmap-9rb6 container test-container-subpath-configmap-9rb6: <nil>
I0330 07:41:25.393] STEP: delete the pod
I0330 07:41:25.394] Mar 30 07:08:33.221: INFO: Waiting for pod pod-subpath-test-configmap-9rb6 to disappear
I0330 07:41:25.394] Mar 30 07:08:33.225: INFO: Pod pod-subpath-test-configmap-9rb6 no longer exists
I0330 07:41:25.394] STEP: Deleting pod pod-subpath-test-configmap-9rb6
I0330 07:41:25.394] Mar 30 07:08:33.226: INFO: Deleting pod "pod-subpath-test-configmap-9rb6" in namespace "subpath-9329"
... skipping 7 lines ...
I0330 07:41:25.395] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
I0330 07:41:25.395]   Atomic writer volumes
I0330 07:41:25.395]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
I0330 07:41:25.396]     should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
I0330 07:41:25.396]     /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:25.396] ------------------------------
I0330 07:41:25.396] {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]","total":340,"completed":242,"skipped":3620,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should list all daemon and delete a collection of daemons with a label selector [Conformance]"]}
I0330 07:41:25.397] SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
I0330 07:41:25.397] ------------------------------
I0330 07:41:25.397] [sig-node] Container Runtime blackbox test on terminated container 
I0330 07:41:25.397]   should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
I0330 07:41:25.398]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:25.398] [BeforeEach] [sig-node] Container Runtime
... skipping 18 lines ...
I0330 07:41:25.401] Mar 30 07:08:34.292: INFO: Expected: &{} to match Container's Termination Message:  --
I0330 07:41:25.401] STEP: delete the container
I0330 07:41:25.401] [AfterEach] [sig-node] Container Runtime
I0330 07:41:25.401]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
I0330 07:41:25.402] Mar 30 07:08:34.304: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0330 07:41:25.402] STEP: Destroying namespace "container-runtime-2097" for this suite.
I0330 07:41:25.402] •{"msg":"PASSED [sig-node] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":340,"completed":243,"skipped":3662,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should list all daemon and delete a collection of daemons with a label selector [Conformance]"]}
I0330 07:41:25.402] SSSSSSS
I0330 07:41:25.402] ------------------------------
I0330 07:41:25.402] [sig-api-machinery] ResourceQuota 
I0330 07:41:25.403]   should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]
I0330 07:41:25.403]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:25.403] [BeforeEach] [sig-api-machinery] ResourceQuota
... skipping 21 lines ...
I0330 07:41:25.405] • [SLOW TEST:7.072 seconds]
I0330 07:41:25.406] [sig-api-machinery] ResourceQuota
I0330 07:41:25.406] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
I0330 07:41:25.406]   should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]
I0330 07:41:25.406]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:25.406] ------------------------------
I0330 07:41:25.406] {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]","total":340,"completed":244,"skipped":3669,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should list all daemon and delete a collection of daemons with a label selector [Conformance]"]}
I0330 07:41:25.406] SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
I0330 07:41:25.406] ------------------------------
I0330 07:41:25.407] [sig-storage] Subpath Atomic writer volumes 
I0330 07:41:25.407]   should support subpaths with secret pod [LinuxOnly] [Conformance]
I0330 07:41:25.407]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:25.407] [BeforeEach] [sig-storage] Subpath
... skipping 12 lines ...
I0330 07:41:25.409] STEP: Setting up data
I0330 07:41:25.409] I0330 07:08:41.428039      19 reflector.go:225] Stopping reflector *v1.ServiceAccount (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0330 07:41:25.409] [It] should support subpaths with secret pod [LinuxOnly] [Conformance]
I0330 07:41:25.409]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:25.409] STEP: Creating pod pod-subpath-test-secret-cq5p
I0330 07:41:25.409] STEP: Creating a pod to test atomic-volume-subpath
I0330 07:41:25.409] Mar 30 07:08:41.444: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-cq5p" in namespace "subpath-2121" to be "Succeeded or Failed"
I0330 07:41:25.410] Mar 30 07:08:41.450: INFO: Pod "pod-subpath-test-secret-cq5p": Phase="Pending", Reason="", readiness=false. Elapsed: 5.486185ms
I0330 07:41:25.410] Mar 30 07:08:43.456: INFO: Pod "pod-subpath-test-secret-cq5p": Phase="Running", Reason="", readiness=true. Elapsed: 2.011406541s
I0330 07:41:25.410] Mar 30 07:08:45.462: INFO: Pod "pod-subpath-test-secret-cq5p": Phase="Running", Reason="", readiness=true. Elapsed: 4.017787654s
I0330 07:41:25.410] Mar 30 07:08:47.467: INFO: Pod "pod-subpath-test-secret-cq5p": Phase="Running", Reason="", readiness=true. Elapsed: 6.02323153s
I0330 07:41:25.410] Mar 30 07:08:49.473: INFO: Pod "pod-subpath-test-secret-cq5p": Phase="Running", Reason="", readiness=true. Elapsed: 8.028947225s
I0330 07:41:25.411] Mar 30 07:08:51.478: INFO: Pod "pod-subpath-test-secret-cq5p": Phase="Running", Reason="", readiness=true. Elapsed: 10.033720591s
... skipping 2 lines ...
I0330 07:41:25.411] Mar 30 07:08:55.492: INFO: Pod "pod-subpath-test-secret-cq5p": Phase="Running", Reason="", readiness=true. Elapsed: 14.047438999s
I0330 07:41:25.411] Mar 30 07:08:57.498: INFO: Pod "pod-subpath-test-secret-cq5p": Phase="Running", Reason="", readiness=true. Elapsed: 16.053768884s
I0330 07:41:25.411] Mar 30 07:08:59.505: INFO: Pod "pod-subpath-test-secret-cq5p": Phase="Running", Reason="", readiness=true. Elapsed: 18.060603327s
I0330 07:41:25.412] Mar 30 07:09:01.510: INFO: Pod "pod-subpath-test-secret-cq5p": Phase="Running", Reason="", readiness=true. Elapsed: 20.066301661s
I0330 07:41:25.412] Mar 30 07:09:03.516: INFO: Pod "pod-subpath-test-secret-cq5p": Phase="Succeeded", Reason="", readiness=false. Elapsed: 22.071964599s
I0330 07:41:25.412] STEP: Saw pod success
I0330 07:41:25.412] Mar 30 07:09:03.516: INFO: Pod "pod-subpath-test-secret-cq5p" satisfied condition "Succeeded or Failed"
I0330 07:41:25.412] Mar 30 07:09:03.519: INFO: Trying to get logs from node kind-worker2 pod pod-subpath-test-secret-cq5p container test-container-subpath-secret-cq5p: <nil>
I0330 07:41:25.412] STEP: delete the pod
I0330 07:41:25.412] Mar 30 07:09:03.532: INFO: Waiting for pod pod-subpath-test-secret-cq5p to disappear
I0330 07:41:25.412] Mar 30 07:09:03.535: INFO: Pod pod-subpath-test-secret-cq5p no longer exists
I0330 07:41:25.413] STEP: Deleting pod pod-subpath-test-secret-cq5p
I0330 07:41:25.413] Mar 30 07:09:03.535: INFO: Deleting pod "pod-subpath-test-secret-cq5p" in namespace "subpath-2121"
... skipping 7 lines ...
I0330 07:41:25.413] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
I0330 07:41:25.414]   Atomic writer volumes
I0330 07:41:25.414]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
I0330 07:41:25.414]     should support subpaths with secret pod [LinuxOnly] [Conformance]
I0330 07:41:25.414]     /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:25.414] ------------------------------
I0330 07:41:25.414] {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance]","total":340,"completed":245,"skipped":3702,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should list all daemon and delete a collection of daemons with a label selector [Conformance]"]}
I0330 07:41:25.414] SSSSS
I0330 07:41:25.414] ------------------------------
I0330 07:41:25.415] [sig-cli] Kubectl client Kubectl replace 
I0330 07:41:25.415]   should update a single-container pod's image  [Conformance]
I0330 07:41:25.415]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:25.415] [BeforeEach] [sig-cli] Kubectl client
... skipping 46 lines ...
I0330 07:41:25.426] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
I0330 07:41:25.426]   Kubectl replace
I0330 07:41:25.426]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1545
I0330 07:41:25.426]     should update a single-container pod's image  [Conformance]
I0330 07:41:25.426]     /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:25.426] ------------------------------
I0330 07:41:25.426] {"msg":"PASSED [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image  [Conformance]","total":340,"completed":246,"skipped":3707,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should list all daemon and delete a collection of daemons with a label selector [Conformance]"]}
I0330 07:41:25.426] SSSSSSSSS
I0330 07:41:25.427] ------------------------------
I0330 07:41:25.427] [sig-storage] EmptyDir volumes 
I0330 07:41:25.427]   volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
I0330 07:41:25.427]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:25.427] [BeforeEach] [sig-storage] EmptyDir volumes
... skipping 8 lines ...
I0330 07:41:25.429] I0330 07:09:23.543899      19 reflector.go:219] Starting reflector *v1.ServiceAccount (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0330 07:41:25.429] I0330 07:09:23.543918      19 reflector.go:255] Listing and watching *v1.ServiceAccount from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0330 07:41:25.429] [It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
I0330 07:41:25.429]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:25.429] I0330 07:09:23.546752      19 reflector.go:225] Stopping reflector *v1.ServiceAccount (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0330 07:41:25.429] STEP: Creating a pod to test emptydir volume type on node default medium
I0330 07:41:25.430] Mar 30 07:09:23.554: INFO: Waiting up to 5m0s for pod "pod-2ae8e432-ba54-43b9-a32b-a5038a0267b2" in namespace "emptydir-8724" to be "Succeeded or Failed"
I0330 07:41:25.430] Mar 30 07:09:23.556: INFO: Pod "pod-2ae8e432-ba54-43b9-a32b-a5038a0267b2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.68742ms
I0330 07:41:25.430] Mar 30 07:09:25.561: INFO: Pod "pod-2ae8e432-ba54-43b9-a32b-a5038a0267b2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.00781037s
I0330 07:41:25.431] STEP: Saw pod success
I0330 07:41:25.431] Mar 30 07:09:25.561: INFO: Pod "pod-2ae8e432-ba54-43b9-a32b-a5038a0267b2" satisfied condition "Succeeded or Failed"
I0330 07:41:25.431] Mar 30 07:09:25.564: INFO: Trying to get logs from node kind-worker2 pod pod-2ae8e432-ba54-43b9-a32b-a5038a0267b2 container test-container: <nil>
I0330 07:41:25.431] STEP: delete the pod
I0330 07:41:25.432] Mar 30 07:09:25.579: INFO: Waiting for pod pod-2ae8e432-ba54-43b9-a32b-a5038a0267b2 to disappear
I0330 07:41:25.432] Mar 30 07:09:25.581: INFO: Pod pod-2ae8e432-ba54-43b9-a32b-a5038a0267b2 no longer exists
I0330 07:41:25.432] [AfterEach] [sig-storage] EmptyDir volumes
I0330 07:41:25.432]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
I0330 07:41:25.432] Mar 30 07:09:25.581: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0330 07:41:25.433] STEP: Destroying namespace "emptydir-8724" for this suite.
I0330 07:41:25.433] •{"msg":"PASSED [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":340,"completed":247,"skipped":3716,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should list all daemon and delete a collection of daemons with a label selector [Conformance]"]}
I0330 07:41:25.433] SSS
I0330 07:41:25.433] ------------------------------
I0330 07:41:25.433] [sig-scheduling] SchedulerPredicates [Serial] 
I0330 07:41:25.434]   validates that NodeSelector is respected if matching  [Conformance]
I0330 07:41:25.434]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:25.434] [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
... skipping 38 lines ...
I0330 07:41:25.440] [AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
I0330 07:41:25.440]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
I0330 07:41:25.440] Mar 30 07:09:29.701: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0330 07:41:25.440] STEP: Destroying namespace "sched-pred-7643" for this suite.
I0330 07:41:25.440] [AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
I0330 07:41:25.440]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81
I0330 07:41:25.440] •I0330 07:09:29.708370      19 request.go:857] Error in request: resource name may not be empty
I0330 07:41:25.441] {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching  [Conformance]","total":340,"completed":248,"skipped":3719,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should list all daemon and delete a collection of daemons with a label selector [Conformance]"]}
I0330 07:41:25.441] SSSSSSSSSSSSSSSSS
I0330 07:41:25.441] ------------------------------
I0330 07:41:25.441] [sig-storage] Secrets 
I0330 07:41:25.441]   optional updates should be reflected in volume [NodeConformance] [Conformance]
I0330 07:41:25.441]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:25.441] [BeforeEach] [sig-storage] Secrets
... skipping 20 lines ...
I0330 07:41:25.445] STEP: Creating secret with name s-test-opt-create-8e8f9fdb-7207-4434-b3bc-384846ef36b1
I0330 07:41:25.445] STEP: waiting to observe update in volume
I0330 07:41:25.445] [AfterEach] [sig-storage] Secrets
I0330 07:41:25.445]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
I0330 07:41:25.445] Mar 30 07:09:33.808: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0330 07:41:25.445] STEP: Destroying namespace "secrets-607" for this suite.
I0330 07:41:25.446] •{"msg":"PASSED [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance]","total":340,"completed":249,"skipped":3736,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should list all daemon and delete a collection of daemons with a label selector [Conformance]"]}
I0330 07:41:25.446] SSSSSSSS
I0330 07:41:25.446] ------------------------------
I0330 07:41:25.446] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
I0330 07:41:25.446]   updates the published spec when one version gets renamed [Conformance]
I0330 07:41:25.446]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:25.446] [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
... skipping 24 lines ...
I0330 07:41:25.449] • [SLOW TEST:19.193 seconds]
I0330 07:41:25.450] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
I0330 07:41:25.450] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
I0330 07:41:25.450]   updates the published spec when one version gets renamed [Conformance]
I0330 07:41:25.450]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:25.450] ------------------------------
I0330 07:41:25.450] {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance]","total":340,"completed":250,"skipped":3744,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should list all daemon and delete a collection of daemons with a label selector [Conformance]"]}
I0330 07:41:25.451] SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
I0330 07:41:25.451] ------------------------------
I0330 07:41:25.451] [sig-node] Secrets 
I0330 07:41:25.451]   should be consumable via the environment [NodeConformance] [Conformance]
I0330 07:41:25.451]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:25.452] [BeforeEach] [sig-node] Secrets
... skipping 9 lines ...
I0330 07:41:25.455] I0330 07:09:53.042285      19 reflector.go:255] Listing and watching *v1.ServiceAccount from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0330 07:41:25.455] [It] should be consumable via the environment [NodeConformance] [Conformance]
I0330 07:41:25.455]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:25.455] I0330 07:09:53.045880      19 reflector.go:225] Stopping reflector *v1.ServiceAccount (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0330 07:41:25.456] STEP: creating secret secrets-7303/secret-test-254bade9-0a9b-4eaf-83b9-36c7e885e784
I0330 07:41:25.456] STEP: Creating a pod to test consume secrets
I0330 07:41:25.456] Mar 30 07:09:53.056: INFO: Waiting up to 5m0s for pod "pod-configmaps-018e21e1-5f35-434d-98ca-a82fc25ee0a5" in namespace "secrets-7303" to be "Succeeded or Failed"
I0330 07:41:25.456] Mar 30 07:09:53.059: INFO: Pod "pod-configmaps-018e21e1-5f35-434d-98ca-a82fc25ee0a5": Phase="Pending", Reason="", readiness=false. Elapsed: 3.231311ms
I0330 07:41:25.457] Mar 30 07:09:55.064: INFO: Pod "pod-configmaps-018e21e1-5f35-434d-98ca-a82fc25ee0a5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.008260225s
I0330 07:41:25.457] STEP: Saw pod success
I0330 07:41:25.457] Mar 30 07:09:55.064: INFO: Pod "pod-configmaps-018e21e1-5f35-434d-98ca-a82fc25ee0a5" satisfied condition "Succeeded or Failed"
I0330 07:41:25.457] Mar 30 07:09:55.067: INFO: Trying to get logs from node kind-worker2 pod pod-configmaps-018e21e1-5f35-434d-98ca-a82fc25ee0a5 container env-test: <nil>
I0330 07:41:25.457] STEP: delete the pod
I0330 07:41:25.458] Mar 30 07:09:55.081: INFO: Waiting for pod pod-configmaps-018e21e1-5f35-434d-98ca-a82fc25ee0a5 to disappear
I0330 07:41:25.458] Mar 30 07:09:55.084: INFO: Pod pod-configmaps-018e21e1-5f35-434d-98ca-a82fc25ee0a5 no longer exists
I0330 07:41:25.458] [AfterEach] [sig-node] Secrets
I0330 07:41:25.458]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
I0330 07:41:25.458] Mar 30 07:09:55.084: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0330 07:41:25.459] STEP: Destroying namespace "secrets-7303" for this suite.
I0330 07:41:25.459] •{"msg":"PASSED [sig-node] Secrets should be consumable via the environment [NodeConformance] [Conformance]","total":340,"completed":251,"skipped":3785,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should list all daemon and delete a collection of daemons with a label selector [Conformance]"]}
I0330 07:41:25.459] SSSSS
I0330 07:41:25.459] ------------------------------
I0330 07:41:25.459] [sig-node] Pods 
I0330 07:41:25.460]   should contain environment variables for services [NodeConformance] [Conformance]
I0330 07:41:25.460]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:25.460] [BeforeEach] [sig-node] Pods
... skipping 11 lines ...
I0330 07:41:25.463]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/node/pods.go:187
I0330 07:41:25.463] [It] should contain environment variables for services [NodeConformance] [Conformance]
I0330 07:41:25.463]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:25.463] I0330 07:09:55.119690      19 reflector.go:225] Stopping reflector *v1.ServiceAccount (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0330 07:41:25.463] Mar 30 07:09:55.128: INFO: The status of Pod server-envvars-f27856fb-9910-43fa-9f8c-159c558add63 is Pending, waiting for it to be Running (with Ready = true)
I0330 07:41:25.464] Mar 30 07:09:57.133: INFO: The status of Pod server-envvars-f27856fb-9910-43fa-9f8c-159c558add63 is Running (Ready = true)
I0330 07:41:25.464] Mar 30 07:09:57.148: INFO: Waiting up to 5m0s for pod "client-envvars-c7a5b2af-b5cf-4621-9a2c-23be73cea2e8" in namespace "pods-8455" to be "Succeeded or Failed"
I0330 07:41:25.464] Mar 30 07:09:57.153: INFO: Pod "client-envvars-c7a5b2af-b5cf-4621-9a2c-23be73cea2e8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.89443ms
I0330 07:41:25.464] Mar 30 07:09:59.157: INFO: Pod "client-envvars-c7a5b2af-b5cf-4621-9a2c-23be73cea2e8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.008759714s
I0330 07:41:25.464] STEP: Saw pod success
I0330 07:41:25.464] Mar 30 07:09:59.157: INFO: Pod "client-envvars-c7a5b2af-b5cf-4621-9a2c-23be73cea2e8" satisfied condition "Succeeded or Failed"
I0330 07:41:25.465] Mar 30 07:09:59.160: INFO: Trying to get logs from node kind-worker pod client-envvars-c7a5b2af-b5cf-4621-9a2c-23be73cea2e8 container env3cont: <nil>
I0330 07:41:25.465] STEP: delete the pod
I0330 07:41:25.465] Mar 30 07:09:59.174: INFO: Waiting for pod client-envvars-c7a5b2af-b5cf-4621-9a2c-23be73cea2e8 to disappear
I0330 07:41:25.465] Mar 30 07:09:59.177: INFO: Pod client-envvars-c7a5b2af-b5cf-4621-9a2c-23be73cea2e8 no longer exists
I0330 07:41:25.465] [AfterEach] [sig-node] Pods
I0330 07:41:25.465]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
I0330 07:41:25.466] Mar 30 07:09:59.177: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0330 07:41:25.466] STEP: Destroying namespace "pods-8455" for this suite.
I0330 07:41:25.466] •{"msg":"PASSED [sig-node] Pods should contain environment variables for services [NodeConformance] [Conformance]","total":340,"completed":252,"skipped":3790,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should list all daemon and delete a collection of daemons with a label selector [Conformance]"]}
I0330 07:41:25.466] SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
I0330 07:41:25.466] ------------------------------
I0330 07:41:25.466] [sig-network] Proxy version v1 
I0330 07:41:25.466]   should proxy through a service and a pod  [Conformance]
I0330 07:41:25.467]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:25.467] [BeforeEach] version v1
... skipping 357 lines ...
I0330 07:41:25.544] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
I0330 07:41:25.544]   version v1
I0330 07:41:25.544]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:74
I0330 07:41:25.544]     should proxy through a service and a pod  [Conformance]
I0330 07:41:25.545]     /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:25.545] ------------------------------
I0330 07:41:25.545] {"msg":"PASSED [sig-network] Proxy version v1 should proxy through a service and a pod  [Conformance]","total":340,"completed":253,"skipped":3829,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should list all daemon and delete a collection of daemons with a label selector [Conformance]"]}
I0330 07:41:25.545] SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
I0330 07:41:25.545] ------------------------------
I0330 07:41:25.545] [sig-api-machinery] Garbage collector 
I0330 07:41:25.545]   should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
I0330 07:41:25.545]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:25.546] [BeforeEach] [sig-api-machinery] Garbage collector
... skipping 14 lines ...
I0330 07:41:25.548] STEP: create the rc2
I0330 07:41:25.548] STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well
I0330 07:41:25.548] STEP: delete the rc simpletest-rc-to-be-deleted
I0330 07:41:25.548] STEP: wait for the rc to be deleted
I0330 07:41:25.548] STEP: Gathering metrics
I0330 07:41:25.548] W0330 07:10:23.666982      19 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled.
I0330 07:41:25.549] Mar 30 07:11:25.681: INFO: MetricsGrabber failed grab metrics. Skipping metrics gathering.
I0330 07:41:25.549] Mar 30 07:11:25.682: INFO: Deleting pod "simpletest-rc-to-be-deleted-2l5fw" in namespace "gc-1514"
I0330 07:41:25.549] Mar 30 07:11:25.694: INFO: Deleting pod "simpletest-rc-to-be-deleted-6n6lc" in namespace "gc-1514"
I0330 07:41:25.549] Mar 30 07:11:25.715: INFO: Deleting pod "simpletest-rc-to-be-deleted-9llpq" in namespace "gc-1514"
I0330 07:41:25.549] Mar 30 07:11:25.731: INFO: Deleting pod "simpletest-rc-to-be-deleted-9slt2" in namespace "gc-1514"
I0330 07:41:25.549] Mar 30 07:11:25.746: INFO: Deleting pod "simpletest-rc-to-be-deleted-c24vz" in namespace "gc-1514"
I0330 07:41:25.549] [AfterEach] [sig-api-machinery] Garbage collector
... skipping 4 lines ...
I0330 07:41:25.550] • [SLOW TEST:72.217 seconds]
I0330 07:41:25.550] [sig-api-machinery] Garbage collector
I0330 07:41:25.550] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
I0330 07:41:25.550]   should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
I0330 07:41:25.550]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:25.551] ------------------------------
I0330 07:41:25.551] {"msg":"PASSED [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]","total":340,"completed":254,"skipped":3859,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should list all daemon and delete a collection of daemons with a label selector [Conformance]"]}
I0330 07:41:25.551] SSSS
I0330 07:41:25.551] ------------------------------
I0330 07:41:25.551] [sig-network] Service endpoints latency 
I0330 07:41:25.552]   should not be very high  [Conformance]
I0330 07:41:25.552]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:25.552] [BeforeEach] [sig-network] Service endpoints latency
... skipping 435 lines ...
I0330 07:41:25.618] • [SLOW TEST:9.790 seconds]
I0330 07:41:25.618] [sig-network] Service endpoints latency
I0330 07:41:25.618] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
I0330 07:41:25.618]   should not be very high  [Conformance]
I0330 07:41:25.618]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:25.619] ------------------------------
I0330 07:41:25.619] {"msg":"PASSED [sig-network] Service endpoints latency should not be very high  [Conformance]","total":340,"completed":255,"skipped":3863,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should list all daemon and delete a collection of daemons with a label selector [Conformance]"]}
I0330 07:41:25.619] SSSSSSSSSSSS
I0330 07:41:25.619] ------------------------------
I0330 07:41:25.619] [sig-node] InitContainer [NodeConformance] 
I0330 07:41:25.619]   should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
I0330 07:41:25.620]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:25.620] [BeforeEach] [sig-node] InitContainer [NodeConformance]
I0330 07:41:25.620]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
I0330 07:41:25.620] STEP: Creating a kubernetes client
I0330 07:41:25.620] Mar 30 07:11:35.559: INFO: >>> kubeConfig: /tmp/kubeconfig-963336331
I0330 07:41:25.620] STEP: Building a namespace api object, basename init-container
... skipping 2 lines ...
I0330 07:41:25.621] STEP: Waiting for a default service account to be provisioned in namespace
I0330 07:41:25.621] I0330 07:11:35.582112      19 reflector.go:225] Stopping reflector *v1.ServiceAccount (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0330 07:41:25.621] I0330 07:11:35.582311      19 reflector.go:219] Starting reflector *v1.ServiceAccount (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0330 07:41:25.621] I0330 07:11:35.582335      19 reflector.go:255] Listing and watching *v1.ServiceAccount from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0330 07:41:25.621] [BeforeEach] [sig-node] InitContainer [NodeConformance]
I0330 07:41:25.622]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/node/init_container.go:162
I0330 07:41:25.622] [It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
I0330 07:41:25.622]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:25.622] STEP: creating the pod
I0330 07:41:25.622] I0330 07:11:35.584935      19 reflector.go:225] Stopping reflector *v1.ServiceAccount (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0330 07:41:25.622] Mar 30 07:11:35.585: INFO: PodSpec: initContainers in spec.initContainers
I0330 07:41:25.622] I0330 07:11:35.591010      19 retrywatcher.go:247] Starting RetryWatcher.
I0330 07:41:25.623] I0330 07:11:38.828590      19 retrywatcher.go:147] "Stopping RetryWatcher."
I0330 07:41:25.623] I0330 07:11:38.828771      19 retrywatcher.go:275] Stopping RetryWatcher.
I0330 07:41:25.623] [AfterEach] [sig-node] InitContainer [NodeConformance]
I0330 07:41:25.623]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
I0330 07:41:25.623] Mar 30 07:11:38.828: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0330 07:41:25.623] STEP: Destroying namespace "init-container-9140" for this suite.
I0330 07:41:25.624] •{"msg":"PASSED [sig-node] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]","total":340,"completed":256,"skipped":3875,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should list all daemon and delete a collection of daemons with a label selector [Conformance]"]}
I0330 07:41:25.624] SSSSSSSSSSSSSSSSSSSSSSSS
I0330 07:41:25.624] ------------------------------
I0330 07:41:25.624] [sig-storage] Projected downwardAPI 
I0330 07:41:25.624]   should provide container's memory limit [NodeConformance] [Conformance]
I0330 07:41:25.624]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:25.624] [BeforeEach] [sig-storage] Projected downwardAPI
... skipping 10 lines ...
I0330 07:41:25.626] [BeforeEach] [sig-storage] Projected downwardAPI
I0330 07:41:25.626]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41
I0330 07:41:25.626] I0330 07:11:38.873664      19 reflector.go:225] Stopping reflector *v1.ServiceAccount (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0330 07:41:25.626] [It] should provide container's memory limit [NodeConformance] [Conformance]
I0330 07:41:25.626]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:25.626] STEP: Creating a pod to test downward API volume plugin
I0330 07:41:25.626] Mar 30 07:11:38.880: INFO: Waiting up to 5m0s for pod "downwardapi-volume-af1d4e1d-28df-436c-bb9f-20e36b7347f4" in namespace "projected-3581" to be "Succeeded or Failed"
I0330 07:41:25.627] Mar 30 07:11:38.883: INFO: Pod "downwardapi-volume-af1d4e1d-28df-436c-bb9f-20e36b7347f4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.754932ms
I0330 07:41:25.627] Mar 30 07:11:40.888: INFO: Pod "downwardapi-volume-af1d4e1d-28df-436c-bb9f-20e36b7347f4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.00791097s
I0330 07:41:25.627] STEP: Saw pod success
I0330 07:41:25.627] Mar 30 07:11:40.888: INFO: Pod "downwardapi-volume-af1d4e1d-28df-436c-bb9f-20e36b7347f4" satisfied condition "Succeeded or Failed"
I0330 07:41:25.627] Mar 30 07:11:40.897: INFO: Trying to get logs from node kind-worker pod downwardapi-volume-af1d4e1d-28df-436c-bb9f-20e36b7347f4 container client-container: <nil>
I0330 07:41:25.627] STEP: delete the pod
I0330 07:41:25.628] Mar 30 07:11:40.948: INFO: Waiting for pod downwardapi-volume-af1d4e1d-28df-436c-bb9f-20e36b7347f4 to disappear
I0330 07:41:25.628] Mar 30 07:11:40.955: INFO: Pod downwardapi-volume-af1d4e1d-28df-436c-bb9f-20e36b7347f4 no longer exists
I0330 07:41:25.628] [AfterEach] [sig-storage] Projected downwardAPI
I0330 07:41:25.628]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
I0330 07:41:25.628] Mar 30 07:11:40.955: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0330 07:41:25.628] STEP: Destroying namespace "projected-3581" for this suite.
I0330 07:41:25.628] •{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance]","total":340,"completed":257,"skipped":3899,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should list all daemon and delete a collection of daemons with a label selector [Conformance]"]}
I0330 07:41:25.628] 
I0330 07:41:25.629] ------------------------------
I0330 07:41:25.629] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
I0330 07:41:25.629]   patching/updating a mutating webhook should work [Conformance]
I0330 07:41:25.629]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:25.629] [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
... skipping 45 lines ...
I0330 07:41:25.636] • [SLOW TEST:12.549 seconds]
I0330 07:41:25.636] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
I0330 07:41:25.636] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
I0330 07:41:25.636]   patching/updating a mutating webhook should work [Conformance]
I0330 07:41:25.636]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:25.636] ------------------------------
I0330 07:41:25.637] {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","total":340,"completed":258,"skipped":3899,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should list all daemon and delete a collection of daemons with a label selector [Conformance]"]}
I0330 07:41:25.637] SSSS
I0330 07:41:25.637] ------------------------------
I0330 07:41:25.637] [sig-storage] Downward API volume 
I0330 07:41:25.637]   should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
I0330 07:41:25.637]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:25.637] [BeforeEach] [sig-storage] Downward API volume
... skipping 10 lines ...
I0330 07:41:25.639] [BeforeEach] [sig-storage] Downward API volume
I0330 07:41:25.639]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41
I0330 07:41:25.639] [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
I0330 07:41:25.639]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:25.639] STEP: Creating a pod to test downward API volume plugin
I0330 07:41:25.640] I0330 07:11:53.571877      19 reflector.go:225] Stopping reflector *v1.ServiceAccount (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0330 07:41:25.640] Mar 30 07:11:53.580: INFO: Waiting up to 5m0s for pod "downwardapi-volume-d019d5c4-3f13-4d39-8bba-0ddc99c2e9d4" in namespace "downward-api-724" to be "Succeeded or Failed"
I0330 07:41:25.640] Mar 30 07:11:53.584: INFO: Pod "downwardapi-volume-d019d5c4-3f13-4d39-8bba-0ddc99c2e9d4": Phase="Pending", Reason="", readiness=false. Elapsed: 3.300833ms
I0330 07:41:25.640] Mar 30 07:11:55.591: INFO: Pod "downwardapi-volume-d019d5c4-3f13-4d39-8bba-0ddc99c2e9d4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.010732064s
I0330 07:41:25.640] STEP: Saw pod success
I0330 07:41:25.640] Mar 30 07:11:55.591: INFO: Pod "downwardapi-volume-d019d5c4-3f13-4d39-8bba-0ddc99c2e9d4" satisfied condition "Succeeded or Failed"
I0330 07:41:25.641] Mar 30 07:11:55.594: INFO: Trying to get logs from node kind-worker2 pod downwardapi-volume-d019d5c4-3f13-4d39-8bba-0ddc99c2e9d4 container client-container: <nil>
I0330 07:41:25.641] STEP: delete the pod
I0330 07:41:25.641] Mar 30 07:11:55.621: INFO: Waiting for pod downwardapi-volume-d019d5c4-3f13-4d39-8bba-0ddc99c2e9d4 to disappear
I0330 07:41:25.641] Mar 30 07:11:55.625: INFO: Pod downwardapi-volume-d019d5c4-3f13-4d39-8bba-0ddc99c2e9d4 no longer exists
I0330 07:41:25.641] [AfterEach] [sig-storage] Downward API volume
I0330 07:41:25.641]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
I0330 07:41:25.641] Mar 30 07:11:55.625: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0330 07:41:25.641] STEP: Destroying namespace "downward-api-724" for this suite.
I0330 07:41:25.642] •{"msg":"PASSED [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":340,"completed":259,"skipped":3903,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should list all daemon and delete a collection of daemons with a label selector [Conformance]"]}
I0330 07:41:25.642] SS
I0330 07:41:25.642] ------------------------------
I0330 07:41:25.642] [sig-apps] DisruptionController 
I0330 07:41:25.642]   should create a PodDisruptionBudget [Conformance]
I0330 07:41:25.642]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:25.642] [BeforeEach] [sig-apps] DisruptionController
... skipping 27 lines ...
I0330 07:41:25.646] • [SLOW TEST:6.102 seconds]
I0330 07:41:25.646] [sig-apps] DisruptionController
I0330 07:41:25.646] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
I0330 07:41:25.646]   should create a PodDisruptionBudget [Conformance]
I0330 07:41:25.646]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:25.646] ------------------------------
I0330 07:41:25.647] {"msg":"PASSED [sig-apps] DisruptionController should create a PodDisruptionBudget [Conformance]","total":340,"completed":260,"skipped":3905,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should list all daemon and delete a collection of daemons with a label selector [Conformance]"]}
I0330 07:41:25.647] SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
I0330 07:41:25.647] ------------------------------
I0330 07:41:25.647] [sig-storage] Downward API volume 
I0330 07:41:25.647]   should provide container's memory request [NodeConformance] [Conformance]
I0330 07:41:25.647]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:25.647] [BeforeEach] [sig-storage] Downward API volume
... skipping 10 lines ...
I0330 07:41:25.649] [BeforeEach] [sig-storage] Downward API volume
I0330 07:41:25.649]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41
I0330 07:41:25.649] I0330 07:12:01.774264      19 reflector.go:225] Stopping reflector *v1.ServiceAccount (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0330 07:41:25.649] [It] should provide container's memory request [NodeConformance] [Conformance]
I0330 07:41:25.650]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:25.650] STEP: Creating a pod to test downward API volume plugin
I0330 07:41:25.650] Mar 30 07:12:01.780: INFO: Waiting up to 5m0s for pod "downwardapi-volume-47d89c11-7539-44b9-b3eb-d1223635da38" in namespace "downward-api-6018" to be "Succeeded or Failed"
I0330 07:41:25.650] Mar 30 07:12:01.786: INFO: Pod "downwardapi-volume-47d89c11-7539-44b9-b3eb-d1223635da38": Phase="Pending", Reason="", readiness=false. Elapsed: 5.53023ms
I0330 07:41:25.650] Mar 30 07:12:03.793: INFO: Pod "downwardapi-volume-47d89c11-7539-44b9-b3eb-d1223635da38": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.012466672s
I0330 07:41:25.650] STEP: Saw pod success
I0330 07:41:25.650] Mar 30 07:12:03.793: INFO: Pod "downwardapi-volume-47d89c11-7539-44b9-b3eb-d1223635da38" satisfied condition "Succeeded or Failed"
I0330 07:41:25.651] Mar 30 07:12:03.796: INFO: Trying to get logs from node kind-worker2 pod downwardapi-volume-47d89c11-7539-44b9-b3eb-d1223635da38 container client-container: <nil>
I0330 07:41:25.651] STEP: delete the pod
I0330 07:41:25.651] Mar 30 07:12:03.820: INFO: Waiting for pod downwardapi-volume-47d89c11-7539-44b9-b3eb-d1223635da38 to disappear
I0330 07:41:25.651] Mar 30 07:12:03.823: INFO: Pod downwardapi-volume-47d89c11-7539-44b9-b3eb-d1223635da38 no longer exists
I0330 07:41:25.651] [AfterEach] [sig-storage] Downward API volume
I0330 07:41:25.651]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
I0330 07:41:25.651] Mar 30 07:12:03.823: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0330 07:41:25.651] STEP: Destroying namespace "downward-api-6018" for this suite.
I0330 07:41:25.652] •{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance]","total":340,"completed":261,"skipped":3956,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should list all daemon and delete a collection of daemons with a label selector [Conformance]"]}
I0330 07:41:25.652] SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
I0330 07:41:25.652] ------------------------------
I0330 07:41:25.652] [sig-storage] Projected configMap 
I0330 07:41:25.652]   should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
I0330 07:41:25.652]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:25.652] [BeforeEach] [sig-storage] Projected configMap
... skipping 9 lines ...
I0330 07:41:25.654] I0330 07:12:03.863717      19 reflector.go:255] Listing and watching *v1.ServiceAccount from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0330 07:41:25.654] [It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
I0330 07:41:25.654]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:25.654] STEP: Creating configMap with name projected-configmap-test-volume-ddeac317-77d4-4116-afbe-35f30b90c4ef
I0330 07:41:25.654] I0330 07:12:03.866683      19 reflector.go:225] Stopping reflector *v1.ServiceAccount (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0330 07:41:25.654] STEP: Creating a pod to test consume configMaps
I0330 07:41:25.654] Mar 30 07:12:03.877: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-9ffcef41-b6bb-4217-a9ac-897a4cf7dc5a" in namespace "projected-7260" to be "Succeeded or Failed"
I0330 07:41:25.655] Mar 30 07:12:03.880: INFO: Pod "pod-projected-configmaps-9ffcef41-b6bb-4217-a9ac-897a4cf7dc5a": Phase="Pending", Reason="", readiness=false. Elapsed: 3.575712ms
I0330 07:41:25.655] Mar 30 07:12:05.887: INFO: Pod "pod-projected-configmaps-9ffcef41-b6bb-4217-a9ac-897a4cf7dc5a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.010461095s
I0330 07:41:25.655] STEP: Saw pod success
I0330 07:41:25.655] Mar 30 07:12:05.887: INFO: Pod "pod-projected-configmaps-9ffcef41-b6bb-4217-a9ac-897a4cf7dc5a" satisfied condition "Succeeded or Failed"
I0330 07:41:25.655] Mar 30 07:12:05.891: INFO: Trying to get logs from node kind-worker2 pod pod-projected-configmaps-9ffcef41-b6bb-4217-a9ac-897a4cf7dc5a container agnhost-container: <nil>
I0330 07:41:25.655] STEP: delete the pod
I0330 07:41:25.655] Mar 30 07:12:05.910: INFO: Waiting for pod pod-projected-configmaps-9ffcef41-b6bb-4217-a9ac-897a4cf7dc5a to disappear
I0330 07:41:25.656] Mar 30 07:12:05.913: INFO: Pod pod-projected-configmaps-9ffcef41-b6bb-4217-a9ac-897a4cf7dc5a no longer exists
I0330 07:41:25.656] [AfterEach] [sig-storage] Projected configMap
I0330 07:41:25.656]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
I0330 07:41:25.656] Mar 30 07:12:05.913: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0330 07:41:25.656] STEP: Destroying namespace "projected-7260" for this suite.
I0330 07:41:25.657] •{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":340,"completed":262,"skipped":4018,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should list all daemon and delete a collection of daemons with a label selector [Conformance]"]}
I0330 07:41:25.657] SSSSS
I0330 07:41:25.657] ------------------------------
I0330 07:41:25.657] [sig-network] DNS 
I0330 07:41:25.657]   should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
I0330 07:41:25.657]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:25.657] [BeforeEach] [sig-network] DNS
... skipping 22 lines ...
I0330 07:41:25.661] 
I0330 07:41:25.661] STEP: deleting the pod
I0330 07:41:25.661] [AfterEach] [sig-network] DNS
I0330 07:41:25.661]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
I0330 07:41:25.661] Mar 30 07:12:08.022: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0330 07:41:25.661] STEP: Destroying namespace "dns-6111" for this suite.
I0330 07:41:25.662] •{"msg":"PASSED [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]","total":340,"completed":263,"skipped":4023,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should list all daemon and delete a collection of daemons with a label selector [Conformance]"]}
I0330 07:41:25.662] 
I0330 07:41:25.662] ------------------------------
I0330 07:41:25.662] [sig-node] Downward API 
I0330 07:41:25.663]   should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
I0330 07:41:25.663]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:25.663] [BeforeEach] [sig-node] Downward API
... skipping 8 lines ...
I0330 07:41:25.665] I0330 07:12:08.066822      19 reflector.go:219] Starting reflector *v1.ServiceAccount (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0330 07:41:25.665] I0330 07:12:08.066846      19 reflector.go:255] Listing and watching *v1.ServiceAccount from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0330 07:41:25.665] I0330 07:12:08.070401      19 reflector.go:225] Stopping reflector *v1.ServiceAccount (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0330 07:41:25.665] [It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
I0330 07:41:25.665]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:25.665] STEP: Creating a pod to test downward api env vars
I0330 07:41:25.666] Mar 30 07:12:08.077: INFO: Waiting up to 5m0s for pod "downward-api-0f76594e-2a09-45dc-916f-421fa2a02d55" in namespace "downward-api-117" to be "Succeeded or Failed"
I0330 07:41:25.666] Mar 30 07:12:08.081: INFO: Pod "downward-api-0f76594e-2a09-45dc-916f-421fa2a02d55": Phase="Pending", Reason="", readiness=false. Elapsed: 3.485961ms
I0330 07:41:25.666] Mar 30 07:12:10.087: INFO: Pod "downward-api-0f76594e-2a09-45dc-916f-421fa2a02d55": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.009315357s
I0330 07:41:25.666] STEP: Saw pod success
I0330 07:41:25.666] Mar 30 07:12:10.087: INFO: Pod "downward-api-0f76594e-2a09-45dc-916f-421fa2a02d55" satisfied condition "Succeeded or Failed"
I0330 07:41:25.666] Mar 30 07:12:10.090: INFO: Trying to get logs from node kind-worker2 pod downward-api-0f76594e-2a09-45dc-916f-421fa2a02d55 container dapi-container: <nil>
I0330 07:41:25.667] STEP: delete the pod
I0330 07:41:25.667] Mar 30 07:12:10.111: INFO: Waiting for pod downward-api-0f76594e-2a09-45dc-916f-421fa2a02d55 to disappear
I0330 07:41:25.668] Mar 30 07:12:10.115: INFO: Pod downward-api-0f76594e-2a09-45dc-916f-421fa2a02d55 no longer exists
I0330 07:41:25.668] [AfterEach] [sig-node] Downward API
I0330 07:41:25.668]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
I0330 07:41:25.668] Mar 30 07:12:10.115: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0330 07:41:25.668] STEP: Destroying namespace "downward-api-117" for this suite.
I0330 07:41:25.668] •{"msg":"PASSED [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]","total":340,"completed":264,"skipped":4023,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should list all daemon and delete a collection of daemons with a label selector [Conformance]"]}
I0330 07:41:25.669] SSSSSS
I0330 07:41:25.669] ------------------------------
I0330 07:41:25.669] [sig-scheduling] SchedulerPreemption [Serial] PriorityClass endpoints 
I0330 07:41:25.669]   verify PriorityClass endpoints can be operated with different HTTP methods [Conformance]
I0330 07:41:25.669]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:25.669] [BeforeEach] [sig-scheduling] SchedulerPreemption [Serial]
... skipping 48 lines ...
I0330 07:41:25.678] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
I0330 07:41:25.678]   PriorityClass endpoints
I0330 07:41:25.678]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/preemption.go:673
I0330 07:41:25.678]     verify PriorityClass endpoints can be operated with different HTTP methods [Conformance]
I0330 07:41:25.678]     /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:25.678] ------------------------------
I0330 07:41:25.679] {"msg":"PASSED [sig-scheduling] SchedulerPreemption [Serial] PriorityClass endpoints verify PriorityClass endpoints can be operated with different HTTP methods [Conformance]","total":340,"completed":265,"skipped":4029,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should list all daemon and delete a collection of daemons with a label selector [Conformance]"]}
I0330 07:41:25.679] SSS
I0330 07:41:25.679] ------------------------------
I0330 07:41:25.679] [sig-node] Kubelet when scheduling a busybox command in a pod 
I0330 07:41:25.679]   should print the output to logs [NodeConformance] [Conformance]
I0330 07:41:25.680]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:25.680] [BeforeEach] [sig-node] Kubelet
... skipping 15 lines ...
I0330 07:41:25.683] Mar 30 07:13:10.383: INFO: The status of Pod busybox-scheduling-4aeab139-6895-4c1b-827b-bd100478677f is Pending, waiting for it to be Running (with Ready = true)
I0330 07:41:25.683] Mar 30 07:13:12.388: INFO: The status of Pod busybox-scheduling-4aeab139-6895-4c1b-827b-bd100478677f is Running (Ready = true)
I0330 07:41:25.683] [AfterEach] [sig-node] Kubelet
I0330 07:41:25.684]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
I0330 07:41:25.684] Mar 30 07:13:12.399: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0330 07:41:25.684] STEP: Destroying namespace "kubelet-test-5356" for this suite.
I0330 07:41:25.685] •{"msg":"PASSED [sig-node] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance]","total":340,"completed":266,"skipped":4032,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should list all daemon and delete a collection of daemons with a label selector [Conformance]"]}
I0330 07:41:25.685] SSSSSSSSSSSSSSSS
I0330 07:41:25.685] ------------------------------
I0330 07:41:25.685] [sig-storage] Secrets 
I0330 07:41:25.685]   should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
I0330 07:41:25.685]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:25.685] [BeforeEach] [sig-storage] Secrets
... skipping 9 lines ...
I0330 07:41:25.687] I0330 07:13:12.433187      19 reflector.go:255] Listing and watching *v1.ServiceAccount from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0330 07:41:25.687] [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
I0330 07:41:25.687]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:25.687] I0330 07:13:12.435901      19 reflector.go:225] Stopping reflector *v1.ServiceAccount (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0330 07:41:25.687] STEP: Creating secret with name secret-test-map-9b2752a1-c734-460f-9e75-dc6a06af9916
I0330 07:41:25.687] STEP: Creating a pod to test consume secrets
I0330 07:41:25.688] Mar 30 07:13:12.445: INFO: Waiting up to 5m0s for pod "pod-secrets-3db1a9ac-8451-420f-b843-32e8b5b365c1" in namespace "secrets-8943" to be "Succeeded or Failed"
I0330 07:41:25.688] Mar 30 07:13:12.449: INFO: Pod "pod-secrets-3db1a9ac-8451-420f-b843-32e8b5b365c1": Phase="Pending", Reason="", readiness=false. Elapsed: 3.680918ms
I0330 07:41:25.688] Mar 30 07:13:14.453: INFO: Pod "pod-secrets-3db1a9ac-8451-420f-b843-32e8b5b365c1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.008202659s
I0330 07:41:25.688] STEP: Saw pod success
I0330 07:41:25.689] Mar 30 07:13:14.453: INFO: Pod "pod-secrets-3db1a9ac-8451-420f-b843-32e8b5b365c1" satisfied condition "Succeeded or Failed"
I0330 07:41:25.689] Mar 30 07:13:14.458: INFO: Trying to get logs from node kind-worker2 pod pod-secrets-3db1a9ac-8451-420f-b843-32e8b5b365c1 container secret-volume-test: <nil>
I0330 07:41:25.689] STEP: delete the pod
I0330 07:41:25.689] Mar 30 07:13:14.473: INFO: Waiting for pod pod-secrets-3db1a9ac-8451-420f-b843-32e8b5b365c1 to disappear
I0330 07:41:25.689] Mar 30 07:13:14.476: INFO: Pod pod-secrets-3db1a9ac-8451-420f-b843-32e8b5b365c1 no longer exists
I0330 07:41:25.689] [AfterEach] [sig-storage] Secrets
I0330 07:41:25.689]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
I0330 07:41:25.689] Mar 30 07:13:14.476: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0330 07:41:25.690] STEP: Destroying namespace "secrets-8943" for this suite.
I0330 07:41:25.690] •{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":340,"completed":267,"skipped":4048,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should list all daemon and delete a collection of daemons with a label selector [Conformance]"]}
I0330 07:41:25.690] SSSSS
I0330 07:41:25.690] ------------------------------
I0330 07:41:25.690] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
I0330 07:41:25.690]   works for multiple CRDs of same group and version but different kinds [Conformance]
I0330 07:41:25.690]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:25.690] [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
... skipping 21 lines ...
I0330 07:41:25.694] • [SLOW TEST:17.248 seconds]
I0330 07:41:25.694] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
I0330 07:41:25.694] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
I0330 07:41:25.694]   works for multiple CRDs of same group and version but different kinds [Conformance]
I0330 07:41:25.694]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:25.694] ------------------------------
I0330 07:41:25.695] {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance]","total":340,"completed":268,"skipped":4053,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should list all daemon and delete a collection of daemons with a label selector [Conformance]"]}
I0330 07:41:25.695] SSSS
I0330 07:41:25.695] ------------------------------
I0330 07:41:25.695] [sig-api-machinery] ResourceQuota 
I0330 07:41:25.695]   should create a ResourceQuota and capture the life of a configMap. [Conformance]
I0330 07:41:25.695]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:25.695] [BeforeEach] [sig-api-machinery] ResourceQuota
... skipping 25 lines ...
I0330 07:41:25.700] • [SLOW TEST:28.103 seconds]
I0330 07:41:25.700] [sig-api-machinery] ResourceQuota
I0330 07:41:25.701] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
I0330 07:41:25.701]   should create a ResourceQuota and capture the life of a configMap. [Conformance]
I0330 07:41:25.701]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:25.701] ------------------------------
I0330 07:41:25.702] {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. [Conformance]","total":340,"completed":269,"skipped":4057,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should list all daemon and delete a collection of daemons with a label selector [Conformance]"]}
I0330 07:41:25.702] SSSSSSSSSSSSSSSSSSSS
I0330 07:41:25.702] ------------------------------
I0330 07:41:25.702] [sig-storage] Downward API volume 
I0330 07:41:25.702]   should update annotations on modification [NodeConformance] [Conformance]
I0330 07:41:25.703]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:25.703] [BeforeEach] [sig-storage] Downward API volume
... skipping 24 lines ...
I0330 07:41:25.706] • [SLOW TEST:6.613 seconds]
I0330 07:41:25.706] [sig-storage] Downward API volume
I0330 07:41:25.707] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
I0330 07:41:25.707]   should update annotations on modification [NodeConformance] [Conformance]
I0330 07:41:25.707]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:25.707] ------------------------------
I0330 07:41:25.707] {"msg":"PASSED [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance]","total":340,"completed":270,"skipped":4077,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should list all daemon and delete a collection of daemons with a label selector [Conformance]"]}
I0330 07:41:25.707] SSSSSS
I0330 07:41:25.707] ------------------------------
I0330 07:41:25.707] [sig-storage] Secrets 
I0330 07:41:25.708]   should be consumable from pods in volume [NodeConformance] [Conformance]
I0330 07:41:25.708]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:25.708] [BeforeEach] [sig-storage] Secrets
... skipping 9 lines ...
I0330 07:41:25.709] I0330 07:14:06.478245      19 reflector.go:255] Listing and watching *v1.ServiceAccount from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0330 07:41:25.709] I0330 07:14:06.481077      19 reflector.go:225] Stopping reflector *v1.ServiceAccount (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0330 07:41:25.710] [It] should be consumable from pods in volume [NodeConformance] [Conformance]
I0330 07:41:25.710]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:25.710] STEP: Creating secret with name secret-test-37362e04-2114-4084-a6d7-d6a585a4b3d1
I0330 07:41:25.710] STEP: Creating a pod to test consume secrets
I0330 07:41:25.710] Mar 30 07:14:06.492: INFO: Waiting up to 5m0s for pod "pod-secrets-d89e5674-cf8e-4bcf-96e0-5547901dc375" in namespace "secrets-7894" to be "Succeeded or Failed"
I0330 07:41:25.710] Mar 30 07:14:06.495: INFO: Pod "pod-secrets-d89e5674-cf8e-4bcf-96e0-5547901dc375": Phase="Pending", Reason="", readiness=false. Elapsed: 2.434874ms
I0330 07:41:25.710] Mar 30 07:14:08.503: INFO: Pod "pod-secrets-d89e5674-cf8e-4bcf-96e0-5547901dc375": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.01051104s
I0330 07:41:25.711] STEP: Saw pod success
I0330 07:41:25.711] Mar 30 07:14:08.503: INFO: Pod "pod-secrets-d89e5674-cf8e-4bcf-96e0-5547901dc375" satisfied condition "Succeeded or Failed"
I0330 07:41:25.711] Mar 30 07:14:08.506: INFO: Trying to get logs from node kind-worker pod pod-secrets-d89e5674-cf8e-4bcf-96e0-5547901dc375 container secret-volume-test: <nil>
I0330 07:41:25.711] STEP: delete the pod
I0330 07:41:25.711] Mar 30 07:14:08.529: INFO: Waiting for pod pod-secrets-d89e5674-cf8e-4bcf-96e0-5547901dc375 to disappear
I0330 07:41:25.711] Mar 30 07:14:08.531: INFO: Pod pod-secrets-d89e5674-cf8e-4bcf-96e0-5547901dc375 no longer exists
I0330 07:41:25.711] [AfterEach] [sig-storage] Secrets
I0330 07:41:25.711]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
I0330 07:41:25.712] Mar 30 07:14:08.531: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0330 07:41:25.712] STEP: Destroying namespace "secrets-7894" for this suite.
I0330 07:41:25.712] •{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance]","total":340,"completed":271,"skipped":4083,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should list all daemon and delete a collection of daemons with a label selector [Conformance]"]}
I0330 07:41:25.712] SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
I0330 07:41:25.712] ------------------------------
I0330 07:41:25.712] [sig-node] Docker Containers 
I0330 07:41:25.713]   should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
I0330 07:41:25.713]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:25.713] [BeforeEach] [sig-node] Docker Containers
... skipping 8 lines ...
I0330 07:41:25.714] I0330 07:14:08.568161      19 reflector.go:219] Starting reflector *v1.ServiceAccount (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0330 07:41:25.714] I0330 07:14:08.568177      19 reflector.go:255] Listing and watching *v1.ServiceAccount from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0330 07:41:25.714] I0330 07:14:08.570545      19 reflector.go:225] Stopping reflector *v1.ServiceAccount (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0330 07:41:25.714] [It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
I0330 07:41:25.715]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:25.715] STEP: Creating a pod to test override command
I0330 07:41:25.715] Mar 30 07:14:08.576: INFO: Waiting up to 5m0s for pod "client-containers-a4b59b9c-6bf4-4305-99e1-58e4dc7827e5" in namespace "containers-696" to be "Succeeded or Failed"
I0330 07:41:25.715] Mar 30 07:14:08.579: INFO: Pod "client-containers-a4b59b9c-6bf4-4305-99e1-58e4dc7827e5": Phase="Pending", Reason="", readiness=false. Elapsed: 3.331779ms
I0330 07:41:25.715] Mar 30 07:14:10.587: INFO: Pod "client-containers-a4b59b9c-6bf4-4305-99e1-58e4dc7827e5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.011206105s
I0330 07:41:25.715] STEP: Saw pod success
I0330 07:41:25.716] Mar 30 07:14:10.587: INFO: Pod "client-containers-a4b59b9c-6bf4-4305-99e1-58e4dc7827e5" satisfied condition "Succeeded or Failed"
I0330 07:41:25.716] Mar 30 07:14:10.590: INFO: Trying to get logs from node kind-worker pod client-containers-a4b59b9c-6bf4-4305-99e1-58e4dc7827e5 container agnhost-container: <nil>
I0330 07:41:25.716] STEP: delete the pod
I0330 07:41:25.716] Mar 30 07:14:10.605: INFO: Waiting for pod client-containers-a4b59b9c-6bf4-4305-99e1-58e4dc7827e5 to disappear
I0330 07:41:25.716] Mar 30 07:14:10.608: INFO: Pod client-containers-a4b59b9c-6bf4-4305-99e1-58e4dc7827e5 no longer exists
I0330 07:41:25.716] [AfterEach] [sig-node] Docker Containers
I0330 07:41:25.716]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
I0330 07:41:25.717] Mar 30 07:14:10.608: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0330 07:41:25.717] STEP: Destroying namespace "containers-696" for this suite.
I0330 07:41:25.717] •{"msg":"PASSED [sig-node] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]","total":340,"completed":272,"skipped":4115,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should list all daemon and delete a collection of daemons with a label selector [Conformance]"]}
I0330 07:41:25.717] SSSSS
I0330 07:41:25.717] ------------------------------
I0330 07:41:25.717] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
I0330 07:41:25.717]   works for CRD without validation schema [Conformance]
I0330 07:41:25.718]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:25.718] [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
... skipping 36 lines ...
I0330 07:41:25.725] • [SLOW TEST:8.233 seconds]
I0330 07:41:25.725] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
I0330 07:41:25.725] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
I0330 07:41:25.726]   works for CRD without validation schema [Conformance]
I0330 07:41:25.726]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:25.726] ------------------------------
I0330 07:41:25.726] {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance]","total":340,"completed":273,"skipped":4120,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should list all daemon and delete a collection of daemons with a label selector [Conformance]"]}
I0330 07:41:25.726] SSSSSSSSSS
I0330 07:41:25.727] ------------------------------
I0330 07:41:25.727] [sig-storage] Projected configMap 
I0330 07:41:25.727]   should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
I0330 07:41:25.727]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:25.727] [BeforeEach] [sig-storage] Projected configMap
... skipping 9 lines ...
I0330 07:41:25.729] I0330 07:14:18.880240      19 reflector.go:255] Listing and watching *v1.ServiceAccount from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0330 07:41:25.730] I0330 07:14:18.882968      19 reflector.go:225] Stopping reflector *v1.ServiceAccount (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0330 07:41:25.730] [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
I0330 07:41:25.730]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:25.730] STEP: Creating configMap with name projected-configmap-test-volume-map-933203ac-d752-4ecd-9a24-d1a035ac6d79
I0330 07:41:25.730] STEP: Creating a pod to test consume configMaps
I0330 07:41:25.731] Mar 30 07:14:18.891: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-cbb5cc20-5748-4390-8fc4-d9e80d857f5a" in namespace "projected-9622" to be "Succeeded or Failed"
I0330 07:41:25.731] Mar 30 07:14:18.894: INFO: Pod "pod-projected-configmaps-cbb5cc20-5748-4390-8fc4-d9e80d857f5a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.858628ms
I0330 07:41:25.731] Mar 30 07:14:20.898: INFO: Pod "pod-projected-configmaps-cbb5cc20-5748-4390-8fc4-d9e80d857f5a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.006751019s
I0330 07:41:25.731] STEP: Saw pod success
I0330 07:41:25.732] Mar 30 07:14:20.898: INFO: Pod "pod-projected-configmaps-cbb5cc20-5748-4390-8fc4-d9e80d857f5a" satisfied condition "Succeeded or Failed"
I0330 07:41:25.732] Mar 30 07:14:20.901: INFO: Trying to get logs from node kind-worker2 pod pod-projected-configmaps-cbb5cc20-5748-4390-8fc4-d9e80d857f5a container agnhost-container: <nil>
I0330 07:41:25.732] STEP: delete the pod
I0330 07:41:25.732] Mar 30 07:14:20.914: INFO: Waiting for pod pod-projected-configmaps-cbb5cc20-5748-4390-8fc4-d9e80d857f5a to disappear
I0330 07:41:25.732] Mar 30 07:14:20.916: INFO: Pod pod-projected-configmaps-cbb5cc20-5748-4390-8fc4-d9e80d857f5a no longer exists
I0330 07:41:25.732] [AfterEach] [sig-storage] Projected configMap
I0330 07:41:25.732]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
I0330 07:41:25.733] Mar 30 07:14:20.917: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0330 07:41:25.733] STEP: Destroying namespace "projected-9622" for this suite.
I0330 07:41:25.733] •{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":340,"completed":274,"skipped":4130,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should list all daemon and delete a collection of daemons with a label selector [Conformance]"]}
I0330 07:41:25.733] SSSSSSSSSSS
I0330 07:41:25.733] ------------------------------
I0330 07:41:25.733] [sig-node] Docker Containers 
I0330 07:41:25.734]   should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
I0330 07:41:25.734]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:25.734] [BeforeEach] [sig-node] Docker Containers
... skipping 8 lines ...
I0330 07:41:25.736] I0330 07:14:20.950510      19 reflector.go:219] Starting reflector *v1.ServiceAccount (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0330 07:41:25.736] I0330 07:14:20.950527      19 reflector.go:255] Listing and watching *v1.ServiceAccount from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0330 07:41:25.736] [It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
I0330 07:41:25.736]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:25.737] STEP: Creating a pod to test override arguments
I0330 07:41:25.737] I0330 07:14:20.953621      19 reflector.go:225] Stopping reflector *v1.ServiceAccount (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0330 07:41:25.737] Mar 30 07:14:20.962: INFO: Waiting up to 5m0s for pod "client-containers-41641492-8247-4014-9122-a6af2e482e7b" in namespace "containers-9728" to be "Succeeded or Failed"
I0330 07:41:25.737] Mar 30 07:14:20.965: INFO: Pod "client-containers-41641492-8247-4014-9122-a6af2e482e7b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.949961ms
I0330 07:41:25.737] Mar 30 07:14:22.971: INFO: Pod "client-containers-41641492-8247-4014-9122-a6af2e482e7b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.008207435s
I0330 07:41:25.737] STEP: Saw pod success
I0330 07:41:25.737] Mar 30 07:14:22.971: INFO: Pod "client-containers-41641492-8247-4014-9122-a6af2e482e7b" satisfied condition "Succeeded or Failed"
I0330 07:41:25.738] Mar 30 07:14:22.974: INFO: Trying to get logs from node kind-worker2 pod client-containers-41641492-8247-4014-9122-a6af2e482e7b container agnhost-container: <nil>
I0330 07:41:25.738] STEP: delete the pod
I0330 07:41:25.738] Mar 30 07:14:22.986: INFO: Waiting for pod client-containers-41641492-8247-4014-9122-a6af2e482e7b to disappear
I0330 07:41:25.738] Mar 30 07:14:22.988: INFO: Pod client-containers-41641492-8247-4014-9122-a6af2e482e7b no longer exists
I0330 07:41:25.738] [AfterEach] [sig-node] Docker Containers
I0330 07:41:25.738]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
I0330 07:41:25.738] Mar 30 07:14:22.988: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0330 07:41:25.738] STEP: Destroying namespace "containers-9728" for this suite.
I0330 07:41:25.739] •{"msg":"PASSED [sig-node] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]","total":340,"completed":275,"skipped":4141,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should list all daemon and delete a collection of daemons with a label selector [Conformance]"]}
I0330 07:41:25.739] SS
I0330 07:41:25.739] ------------------------------
I0330 07:41:25.739] [sig-network] Networking Granular Checks: Pods 
I0330 07:41:25.739]   should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
I0330 07:41:25.739]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:25.740] [BeforeEach] [sig-network] Networking
... skipping 42 lines ...
I0330 07:41:25.745] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/network/framework.go:23
I0330 07:41:25.745]   Granular Checks: Pods
I0330 07:41:25.746]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/network/networking.go:30
I0330 07:41:25.746]     should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
I0330 07:41:25.746]     /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:25.746] ------------------------------
I0330 07:41:25.746] {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","total":340,"completed":276,"skipped":4143,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should list all daemon and delete a collection of daemons with a label selector [Conformance]"]}
I0330 07:41:25.746] SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
I0330 07:41:25.746] ------------------------------
I0330 07:41:25.747] [sig-node] Security Context 
I0330 07:41:25.747]   should support pod.Spec.SecurityContext.RunAsUser And pod.Spec.SecurityContext.RunAsGroup [LinuxOnly] [Conformance]
I0330 07:41:25.747]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:25.747] [BeforeEach] [sig-node] Security Context
... skipping 8 lines ...
I0330 07:41:25.748] I0330 07:14:39.277757      19 reflector.go:219] Starting reflector *v1.ServiceAccount (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0330 07:41:25.748] I0330 07:14:39.277777      19 reflector.go:255] Listing and watching *v1.ServiceAccount from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0330 07:41:25.749] [It] should support pod.Spec.SecurityContext.RunAsUser And pod.Spec.SecurityContext.RunAsGroup [LinuxOnly] [Conformance]
I0330 07:41:25.749]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:25.749] I0330 07:14:39.280214      19 reflector.go:225] Stopping reflector *v1.ServiceAccount (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0330 07:41:25.749] STEP: Creating a pod to test pod.Spec.SecurityContext.RunAsUser
I0330 07:41:25.749] Mar 30 07:14:39.285: INFO: Waiting up to 5m0s for pod "security-context-189a0a22-733c-4798-8a9b-9f5cebf34d31" in namespace "security-context-4677" to be "Succeeded or Failed"
I0330 07:41:25.749] Mar 30 07:14:39.288: INFO: Pod "security-context-189a0a22-733c-4798-8a9b-9f5cebf34d31": Phase="Pending", Reason="", readiness=false. Elapsed: 2.365176ms
I0330 07:41:25.750] Mar 30 07:14:41.296: INFO: Pod "security-context-189a0a22-733c-4798-8a9b-9f5cebf34d31": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.010632573s
I0330 07:41:25.750] STEP: Saw pod success
I0330 07:41:25.750] Mar 30 07:14:41.296: INFO: Pod "security-context-189a0a22-733c-4798-8a9b-9f5cebf34d31" satisfied condition "Succeeded or Failed"
I0330 07:41:25.750] Mar 30 07:14:41.299: INFO: Trying to get logs from node kind-worker2 pod security-context-189a0a22-733c-4798-8a9b-9f5cebf34d31 container test-container: <nil>
I0330 07:41:25.750] STEP: delete the pod
I0330 07:41:25.750] Mar 30 07:14:41.313: INFO: Waiting for pod security-context-189a0a22-733c-4798-8a9b-9f5cebf34d31 to disappear
I0330 07:41:25.750] Mar 30 07:14:41.316: INFO: Pod security-context-189a0a22-733c-4798-8a9b-9f5cebf34d31 no longer exists
I0330 07:41:25.750] [AfterEach] [sig-node] Security Context
I0330 07:41:25.751]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
I0330 07:41:25.751] Mar 30 07:14:41.316: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0330 07:41:25.751] STEP: Destroying namespace "security-context-4677" for this suite.
I0330 07:41:25.751] •{"msg":"PASSED [sig-node] Security Context should support pod.Spec.SecurityContext.RunAsUser And pod.Spec.SecurityContext.RunAsGroup [LinuxOnly] [Conformance]","total":340,"completed":277,"skipped":4173,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should list all daemon and delete a collection of daemons with a label selector [Conformance]"]}
I0330 07:41:25.751] SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
I0330 07:41:25.751] ------------------------------
I0330 07:41:25.751] [sig-auth] ServiceAccounts 
I0330 07:41:25.751]   should guarantee kube-root-ca.crt exist in any namespace [Conformance]
I0330 07:41:25.752]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:25.752] [BeforeEach] [sig-auth] ServiceAccounts
... skipping 18 lines ...
I0330 07:41:25.754] STEP: waiting for the root ca configmap reconciled
I0330 07:41:25.754] Mar 30 07:14:42.371: INFO: Reconciled root ca configmap in namespace "svcaccounts-7104"
I0330 07:41:25.754] [AfterEach] [sig-auth] ServiceAccounts
I0330 07:41:25.754]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
I0330 07:41:25.755] Mar 30 07:14:42.371: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0330 07:41:25.755] STEP: Destroying namespace "svcaccounts-7104" for this suite.
I0330 07:41:25.755] •{"msg":"PASSED [sig-auth] ServiceAccounts should guarantee kube-root-ca.crt exist in any namespace [Conformance]","total":340,"completed":278,"skipped":4211,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should list all daemon and delete a collection of daemons with a label selector [Conformance]"]}
I0330 07:41:25.755] SSSSSSSSSSS
I0330 07:41:25.755] ------------------------------
I0330 07:41:25.755] [sig-auth] ServiceAccounts 
I0330 07:41:25.755]   should mount an API token into pods  [Conformance]
I0330 07:41:25.755]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:25.755] [BeforeEach] [sig-auth] ServiceAccounts
... skipping 19 lines ...
I0330 07:41:25.759] STEP: reading a file in the container
I0330 07:41:25.759] Mar 30 07:14:45.259: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-4451 pod-service-account-9463c773-ff93-450d-aabd-68cd57dd8e1c -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace'
I0330 07:41:25.759] [AfterEach] [sig-auth] ServiceAccounts
I0330 07:41:25.759]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
I0330 07:41:25.759] Mar 30 07:14:45.422: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0330 07:41:25.759] STEP: Destroying namespace "svcaccounts-4451" for this suite.
I0330 07:41:25.760] •{"msg":"PASSED [sig-auth] ServiceAccounts should mount an API token into pods  [Conformance]","total":340,"completed":279,"skipped":4222,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should list all daemon and delete a collection of daemons with a label selector [Conformance]"]}
I0330 07:41:25.760] SSSSSSSSS
I0330 07:41:25.760] ------------------------------
I0330 07:41:25.760] [sig-node] ConfigMap 
I0330 07:41:25.760]   should be consumable via the environment [NodeConformance] [Conformance]
I0330 07:41:25.760]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:25.760] [BeforeEach] [sig-node] ConfigMap
... skipping 9 lines ...
I0330 07:41:25.762] I0330 07:14:45.455684      19 reflector.go:255] Listing and watching *v1.ServiceAccount from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0330 07:41:25.762] [It] should be consumable via the environment [NodeConformance] [Conformance]
I0330 07:41:25.762]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:25.762] I0330 07:14:45.458229      19 reflector.go:225] Stopping reflector *v1.ServiceAccount (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0330 07:41:25.762] STEP: Creating configMap configmap-7434/configmap-test-d7dddcf3-d297-414c-a3a9-60e26139b5d0
I0330 07:41:25.762] STEP: Creating a pod to test consume configMaps
I0330 07:41:25.762] Mar 30 07:14:45.466: INFO: Waiting up to 5m0s for pod "pod-configmaps-e70a67e6-c750-4b4b-b0ff-e2755da4bb9e" in namespace "configmap-7434" to be "Succeeded or Failed"
I0330 07:41:25.763] Mar 30 07:14:45.468: INFO: Pod "pod-configmaps-e70a67e6-c750-4b4b-b0ff-e2755da4bb9e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.758486ms
I0330 07:41:25.763] Mar 30 07:14:47.472: INFO: Pod "pod-configmaps-e70a67e6-c750-4b4b-b0ff-e2755da4bb9e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.006679246s
I0330 07:41:25.764] STEP: Saw pod success
I0330 07:41:25.764] Mar 30 07:14:47.472: INFO: Pod "pod-configmaps-e70a67e6-c750-4b4b-b0ff-e2755da4bb9e" satisfied condition "Succeeded or Failed"
I0330 07:41:25.764] Mar 30 07:14:47.475: INFO: Trying to get logs from node kind-worker2 pod pod-configmaps-e70a67e6-c750-4b4b-b0ff-e2755da4bb9e container env-test: <nil>
I0330 07:41:25.765] STEP: delete the pod
I0330 07:41:25.765] Mar 30 07:14:47.490: INFO: Waiting for pod pod-configmaps-e70a67e6-c750-4b4b-b0ff-e2755da4bb9e to disappear
I0330 07:41:25.765] Mar 30 07:14:47.495: INFO: Pod pod-configmaps-e70a67e6-c750-4b4b-b0ff-e2755da4bb9e no longer exists
I0330 07:41:25.765] [AfterEach] [sig-node] ConfigMap
I0330 07:41:25.765]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
I0330 07:41:25.766] Mar 30 07:14:47.495: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0330 07:41:25.766] STEP: Destroying namespace "configmap-7434" for this suite.
I0330 07:41:25.766] •{"msg":"PASSED [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance]","total":340,"completed":280,"skipped":4231,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should list all daemon and delete a collection of daemons with a label selector [Conformance]"]}
I0330 07:41:25.766] SSSSSSSSSSSSSSSSSSS
I0330 07:41:25.767] ------------------------------
I0330 07:41:25.767] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
I0330 07:41:25.767]   works for CRD with validation schema [Conformance]
I0330 07:41:25.768]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:25.768] [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
... skipping 45 lines ...
I0330 07:41:25.785] Mar 30 07:14:53.745: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-963336331 --namespace=crd-publish-openapi-9647 explain e2e-test-crd-publish-openapi-7786-crds.spec'
I0330 07:41:25.786] Mar 30 07:14:54.022: INFO: stderr: ""
I0330 07:41:25.786] Mar 30 07:14:54.022: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-7786-crd\nVERSION:  crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: spec <Object>\n\nDESCRIPTION:\n     Specification of Foo\n\nFIELDS:\n   bars\t<[]Object>\n     List of Bars and their specs.\n\n"
I0330 07:41:25.786] Mar 30 07:14:54.022: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-963336331 --namespace=crd-publish-openapi-9647 explain e2e-test-crd-publish-openapi-7786-crds.spec.bars'
I0330 07:41:25.786] Mar 30 07:14:54.292: INFO: stderr: ""
I0330 07:41:25.787] Mar 30 07:14:54.292: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-7786-crd\nVERSION:  crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: bars <[]Object>\n\nDESCRIPTION:\n     List of Bars and their specs.\n\nFIELDS:\n   age\t<string>\n     Age of Bar.\n\n   bazs\t<[]string>\n     List of Bazs.\n\n   name\t<string> -required-\n     Name of Bar.\n\n"
I0330 07:41:25.787] STEP: kubectl explain works to return error when explain is called on property that doesn't exist
I0330 07:41:25.787] Mar 30 07:14:54.293: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/tmp/kubeconfig-963336331 --namespace=crd-publish-openapi-9647 explain e2e-test-crd-publish-openapi-7786-crds.spec.bars2'
I0330 07:41:25.787] Mar 30 07:14:54.569: INFO: rc: 1
I0330 07:41:25.788] [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
I0330 07:41:25.788]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
I0330 07:41:25.788] Mar 30 07:14:58.105: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0330 07:41:25.788] STEP: Destroying namespace "crd-publish-openapi-9647" for this suite.
I0330 07:41:25.788] 
I0330 07:41:25.788] • [SLOW TEST:10.612 seconds]
I0330 07:41:25.788] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
I0330 07:41:25.789] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
I0330 07:41:25.789]   works for CRD with validation schema [Conformance]
I0330 07:41:25.789]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:25.789] ------------------------------
I0330 07:41:25.789] {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance]","total":340,"completed":281,"skipped":4250,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should list all daemon and delete a collection of daemons with a label selector [Conformance]"]}
I0330 07:41:25.789] SSSSSSSSSSSSSS
I0330 07:41:25.789] ------------------------------
I0330 07:41:25.789] [sig-api-machinery] ResourceQuota 
I0330 07:41:25.790]   should create a ResourceQuota and capture the life of a replication controller. [Conformance]
I0330 07:41:25.790]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:25.790] [BeforeEach] [sig-api-machinery] ResourceQuota
... skipping 25 lines ...
I0330 07:41:25.793] • [SLOW TEST:11.087 seconds]
I0330 07:41:25.794] [sig-api-machinery] ResourceQuota
I0330 07:41:25.794] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
I0330 07:41:25.794]   should create a ResourceQuota and capture the life of a replication controller. [Conformance]
I0330 07:41:25.794]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:25.794] ------------------------------
I0330 07:41:25.794] {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. [Conformance]","total":340,"completed":282,"skipped":4264,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should list all daemon and delete a collection of daemons with a label selector [Conformance]"]}
I0330 07:41:25.795] SSSSSS
I0330 07:41:25.795] ------------------------------
I0330 07:41:25.795] [sig-storage] EmptyDir volumes 
I0330 07:41:25.795]   volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
I0330 07:41:25.795]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:25.795] [BeforeEach] [sig-storage] EmptyDir volumes
... skipping 8 lines ...
I0330 07:41:25.797] I0330 07:15:09.229621      19 reflector.go:219] Starting reflector *v1.ServiceAccount (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0330 07:41:25.797] I0330 07:15:09.229708      19 reflector.go:255] Listing and watching *v1.ServiceAccount from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0330 07:41:25.797] [It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
I0330 07:41:25.797]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:25.797] STEP: Creating a pod to test emptydir volume type on tmpfs
I0330 07:41:25.797] I0330 07:15:09.232462      19 reflector.go:225] Stopping reflector *v1.ServiceAccount (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0330 07:41:25.798] Mar 30 07:15:09.238: INFO: Waiting up to 5m0s for pod "pod-eed16358-7903-4a00-9fb1-8dd92b4ea17c" in namespace "emptydir-1520" to be "Succeeded or Failed"
I0330 07:41:25.798] Mar 30 07:15:09.241: INFO: Pod "pod-eed16358-7903-4a00-9fb1-8dd92b4ea17c": Phase="Pending", Reason="", readiness=false. Elapsed: 3.302222ms
I0330 07:41:25.798] Mar 30 07:15:11.246: INFO: Pod "pod-eed16358-7903-4a00-9fb1-8dd92b4ea17c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.008225062s
I0330 07:41:25.798] STEP: Saw pod success
I0330 07:41:25.798] Mar 30 07:15:11.246: INFO: Pod "pod-eed16358-7903-4a00-9fb1-8dd92b4ea17c" satisfied condition "Succeeded or Failed"
I0330 07:41:25.798] Mar 30 07:15:11.249: INFO: Trying to get logs from node kind-worker2 pod pod-eed16358-7903-4a00-9fb1-8dd92b4ea17c container test-container: <nil>
I0330 07:41:25.798] STEP: delete the pod
I0330 07:41:25.799] Mar 30 07:15:11.263: INFO: Waiting for pod pod-eed16358-7903-4a00-9fb1-8dd92b4ea17c to disappear
I0330 07:41:25.799] Mar 30 07:15:11.265: INFO: Pod pod-eed16358-7903-4a00-9fb1-8dd92b4ea17c no longer exists
I0330 07:41:25.799] [AfterEach] [sig-storage] EmptyDir volumes
I0330 07:41:25.799]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
I0330 07:41:25.799] Mar 30 07:15:11.265: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0330 07:41:25.799] STEP: Destroying namespace "emptydir-1520" for this suite.
I0330 07:41:25.799] •{"msg":"PASSED [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":340,"completed":283,"skipped":4270,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should list all daemon and delete a collection of daemons with a label selector [Conformance]"]}
I0330 07:41:25.799] SSSSSS
I0330 07:41:25.800] ------------------------------
I0330 07:41:25.800] [sig-apps] CronJob 
I0330 07:41:25.800]   should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]
I0330 07:41:25.800]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:25.800] [BeforeEach] [sig-apps] CronJob
... skipping 28 lines ...
I0330 07:41:25.805] • [SLOW TEST:350.078 seconds]
I0330 07:41:25.805] [sig-apps] CronJob
I0330 07:41:25.805] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
I0330 07:41:25.805]   should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]
I0330 07:41:25.805]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:25.805] ------------------------------
I0330 07:41:25.806] {"msg":"PASSED [sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow] [Conformance]","total":340,"completed":284,"skipped":4276,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should list all daemon and delete a collection of daemons with a label selector [Conformance]"]}
I0330 07:41:25.806] [sig-network] Services 
I0330 07:41:25.806]   should find a service from listing all namespaces [Conformance]
I0330 07:41:25.806]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:25.806] [BeforeEach] [sig-network] Services
I0330 07:41:25.806]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
I0330 07:41:25.806] STEP: Creating a kubernetes client
... skipping 14 lines ...
I0330 07:41:25.808] [AfterEach] [sig-network] Services
I0330 07:41:25.809]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
I0330 07:41:25.809] Mar 30 07:21:01.397: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0330 07:41:25.809] STEP: Destroying namespace "services-9401" for this suite.
I0330 07:41:25.809] [AfterEach] [sig-network] Services
I0330 07:41:25.809]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:750
I0330 07:41:25.809] •{"msg":"PASSED [sig-network] Services should find a service from listing all namespaces [Conformance]","total":340,"completed":285,"skipped":4276,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should list all daemon and delete a collection of daemons with a label selector [Conformance]"]}
I0330 07:41:25.809] SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
I0330 07:41:25.810] ------------------------------
I0330 07:41:25.810] [sig-node] Security Context 
I0330 07:41:25.810]   should support container.SecurityContext.RunAsUser And container.SecurityContext.RunAsGroup [LinuxOnly] [Conformance]
I0330 07:41:25.810]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:25.810] [BeforeEach] [sig-node] Security Context
... skipping 8 lines ...
I0330 07:41:25.812] I0330 07:21:01.426672      19 reflector.go:219] Starting reflector *v1.ServiceAccount (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0330 07:41:25.812] I0330 07:21:01.426692      19 reflector.go:255] Listing and watching *v1.ServiceAccount from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0330 07:41:25.813] [It] should support container.SecurityContext.RunAsUser And container.SecurityContext.RunAsGroup [LinuxOnly] [Conformance]
I0330 07:41:25.813]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:25.813] I0330 07:21:01.429467      19 reflector.go:225] Stopping reflector *v1.ServiceAccount (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0330 07:41:25.813] STEP: Creating a pod to test pod.Spec.SecurityContext.RunAsUser
I0330 07:41:25.814] Mar 30 07:21:01.436: INFO: Waiting up to 5m0s for pod "security-context-fee37e3f-51d4-49ee-bbd2-2b7c1842a992" in namespace "security-context-1474" to be "Succeeded or Failed"
I0330 07:41:25.814] Mar 30 07:21:01.439: INFO: Pod "security-context-fee37e3f-51d4-49ee-bbd2-2b7c1842a992": Phase="Pending", Reason="", readiness=false. Elapsed: 2.644594ms
I0330 07:41:25.814] Mar 30 07:21:03.443: INFO: Pod "security-context-fee37e3f-51d4-49ee-bbd2-2b7c1842a992": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.0071132s
I0330 07:41:25.814] STEP: Saw pod success
I0330 07:41:25.815] Mar 30 07:21:03.443: INFO: Pod "security-context-fee37e3f-51d4-49ee-bbd2-2b7c1842a992" satisfied condition "Succeeded or Failed"
I0330 07:41:25.815] Mar 30 07:21:03.446: INFO: Trying to get logs from node kind-worker2 pod security-context-fee37e3f-51d4-49ee-bbd2-2b7c1842a992 container test-container: <nil>
I0330 07:41:25.815] STEP: delete the pod
I0330 07:41:25.815] Mar 30 07:21:03.471: INFO: Waiting for pod security-context-fee37e3f-51d4-49ee-bbd2-2b7c1842a992 to disappear
I0330 07:41:25.815] Mar 30 07:21:03.474: INFO: Pod security-context-fee37e3f-51d4-49ee-bbd2-2b7c1842a992 no longer exists
I0330 07:41:25.816] [AfterEach] [sig-node] Security Context
I0330 07:41:25.816]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
I0330 07:41:25.816] Mar 30 07:21:03.474: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0330 07:41:25.816] STEP: Destroying namespace "security-context-1474" for this suite.
I0330 07:41:25.817] •{"msg":"PASSED [sig-node] Security Context should support container.SecurityContext.RunAsUser And container.SecurityContext.RunAsGroup [LinuxOnly] [Conformance]","total":340,"completed":286,"skipped":4331,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should list all daemon and delete a collection of daemons with a label selector [Conformance]"]}
I0330 07:41:25.817] SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
I0330 07:41:25.817] ------------------------------
I0330 07:41:25.817] [sig-storage] Projected downwardAPI 
I0330 07:41:25.817]   should provide podname only [NodeConformance] [Conformance]
I0330 07:41:25.817]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:25.817] [BeforeEach] [sig-storage] Projected downwardAPI
... skipping 10 lines ...
I0330 07:41:25.819] [BeforeEach] [sig-storage] Projected downwardAPI
I0330 07:41:25.819]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41
I0330 07:41:25.819] I0330 07:21:03.511529      19 reflector.go:225] Stopping reflector *v1.ServiceAccount (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0330 07:41:25.819] [It] should provide podname only [NodeConformance] [Conformance]
I0330 07:41:25.820]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:25.820] STEP: Creating a pod to test downward API volume plugin
I0330 07:41:25.820] Mar 30 07:21:03.517: INFO: Waiting up to 5m0s for pod "downwardapi-volume-be9a7b0b-424c-4800-97e7-2d0992cae706" in namespace "projected-3633" to be "Succeeded or Failed"
I0330 07:41:25.820] Mar 30 07:21:03.520: INFO: Pod "downwardapi-volume-be9a7b0b-424c-4800-97e7-2d0992cae706": Phase="Pending", Reason="", readiness=false. Elapsed: 3.722274ms
I0330 07:41:25.820] Mar 30 07:21:05.527: INFO: Pod "downwardapi-volume-be9a7b0b-424c-4800-97e7-2d0992cae706": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.010620386s
I0330 07:41:25.820] STEP: Saw pod success
I0330 07:41:25.820] Mar 30 07:21:05.527: INFO: Pod "downwardapi-volume-be9a7b0b-424c-4800-97e7-2d0992cae706" satisfied condition "Succeeded or Failed"
I0330 07:41:25.821] Mar 30 07:21:05.530: INFO: Trying to get logs from node kind-worker2 pod downwardapi-volume-be9a7b0b-424c-4800-97e7-2d0992cae706 container client-container: <nil>
I0330 07:41:25.821] STEP: delete the pod
I0330 07:41:25.821] Mar 30 07:21:05.545: INFO: Waiting for pod downwardapi-volume-be9a7b0b-424c-4800-97e7-2d0992cae706 to disappear
I0330 07:41:25.822] Mar 30 07:21:05.548: INFO: Pod downwardapi-volume-be9a7b0b-424c-4800-97e7-2d0992cae706 no longer exists
I0330 07:41:25.822] [AfterEach] [sig-storage] Projected downwardAPI
I0330 07:41:25.822]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
I0330 07:41:25.822] Mar 30 07:21:05.548: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0330 07:41:25.822] STEP: Destroying namespace "projected-3633" for this suite.
I0330 07:41:25.823] •{"msg":"PASSED [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance]","total":340,"completed":287,"skipped":4414,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should list all daemon and delete a collection of daemons with a label selector [Conformance]"]}
I0330 07:41:25.823] SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
I0330 07:41:25.823] ------------------------------
I0330 07:41:25.823] [sig-storage] Projected downwardAPI 
I0330 07:41:25.823]   should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
I0330 07:41:25.824]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:25.824] [BeforeEach] [sig-storage] Projected downwardAPI
... skipping 10 lines ...
I0330 07:41:25.825] I0330 07:21:05.588692      19 reflector.go:225] Stopping reflector *v1.ServiceAccount (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0330 07:41:25.825] [BeforeEach] [sig-storage] Projected downwardAPI
I0330 07:41:25.826]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/storage/projected_downwardapi.go:41
I0330 07:41:25.826] [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
I0330 07:41:25.826]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:25.826] STEP: Creating a pod to test downward API volume plugin
I0330 07:41:25.826] Mar 30 07:21:05.595: INFO: Waiting up to 5m0s for pod "downwardapi-volume-a455dd7b-7da2-459d-b894-37a9be3e1e12" in namespace "projected-5227" to be "Succeeded or Failed"
I0330 07:41:25.826] Mar 30 07:21:05.599: INFO: Pod "downwardapi-volume-a455dd7b-7da2-459d-b894-37a9be3e1e12": Phase="Pending", Reason="", readiness=false. Elapsed: 4.009055ms
I0330 07:41:25.826] Mar 30 07:21:07.604: INFO: Pod "downwardapi-volume-a455dd7b-7da2-459d-b894-37a9be3e1e12": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.008897334s
I0330 07:41:25.827] STEP: Saw pod success
I0330 07:41:25.827] Mar 30 07:21:07.604: INFO: Pod "downwardapi-volume-a455dd7b-7da2-459d-b894-37a9be3e1e12" satisfied condition "Succeeded or Failed"
I0330 07:41:25.827] Mar 30 07:21:07.607: INFO: Trying to get logs from node kind-worker2 pod downwardapi-volume-a455dd7b-7da2-459d-b894-37a9be3e1e12 container client-container: <nil>
I0330 07:41:25.827] STEP: delete the pod
I0330 07:41:25.827] Mar 30 07:21:07.625: INFO: Waiting for pod downwardapi-volume-a455dd7b-7da2-459d-b894-37a9be3e1e12 to disappear
I0330 07:41:25.827] Mar 30 07:21:07.628: INFO: Pod downwardapi-volume-a455dd7b-7da2-459d-b894-37a9be3e1e12 no longer exists
I0330 07:41:25.827] [AfterEach] [sig-storage] Projected downwardAPI
I0330 07:41:25.828]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
I0330 07:41:25.828] Mar 30 07:21:07.628: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0330 07:41:25.828] STEP: Destroying namespace "projected-5227" for this suite.
I0330 07:41:25.828] •{"msg":"PASSED [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":340,"completed":288,"skipped":4459,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should list all daemon and delete a collection of daemons with a label selector [Conformance]"]}
I0330 07:41:25.828] SSSSSSSSSSSSSSSSSSSS
I0330 07:41:25.828] ------------------------------
I0330 07:41:25.829] [sig-node] Pods 
I0330 07:41:25.829]   should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
I0330 07:41:25.829]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:25.829] [BeforeEach] [sig-node] Pods
... skipping 19 lines ...
I0330 07:41:25.832] STEP: verifying the pod is in kubernetes
I0330 07:41:25.832] STEP: updating the pod
I0330 07:41:25.832] Mar 30 07:21:10.193: INFO: Successfully updated pod "pod-update-activedeadlineseconds-07e46cdc-22d8-46ed-8ce2-3cdba210f7b2"
I0330 07:41:25.833] Mar 30 07:21:10.193: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-07e46cdc-22d8-46ed-8ce2-3cdba210f7b2" in namespace "pods-2114" to be "terminated due to deadline exceeded"
I0330 07:41:25.833] Mar 30 07:21:10.199: INFO: Pod "pod-update-activedeadlineseconds-07e46cdc-22d8-46ed-8ce2-3cdba210f7b2": Phase="Running", Reason="", readiness=true. Elapsed: 5.593832ms
I0330 07:41:25.833] Mar 30 07:21:12.205: INFO: Pod "pod-update-activedeadlineseconds-07e46cdc-22d8-46ed-8ce2-3cdba210f7b2": Phase="Running", Reason="", readiness=true. Elapsed: 2.012080941s
I0330 07:41:25.833] Mar 30 07:21:14.211: INFO: Pod "pod-update-activedeadlineseconds-07e46cdc-22d8-46ed-8ce2-3cdba210f7b2": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 4.018472823s
I0330 07:41:25.833] Mar 30 07:21:14.211: INFO: Pod "pod-update-activedeadlineseconds-07e46cdc-22d8-46ed-8ce2-3cdba210f7b2" satisfied condition "terminated due to deadline exceeded"
I0330 07:41:25.833] [AfterEach] [sig-node] Pods
I0330 07:41:25.833]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
I0330 07:41:25.834] Mar 30 07:21:14.211: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0330 07:41:25.834] STEP: Destroying namespace "pods-2114" for this suite.
I0330 07:41:25.834] 
I0330 07:41:25.834] • [SLOW TEST:6.584 seconds]
I0330 07:41:25.834] [sig-node] Pods
I0330 07:41:25.834] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
I0330 07:41:25.834]   should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
I0330 07:41:25.834]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:25.834] ------------------------------
I0330 07:41:25.835] {"msg":"PASSED [sig-node] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]","total":340,"completed":289,"skipped":4479,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should list all daemon and delete a collection of daemons with a label selector [Conformance]"]}
I0330 07:41:25.835] SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
I0330 07:41:25.835] ------------------------------
I0330 07:41:25.835] [sig-network] Networking Granular Checks: Pods 
I0330 07:41:25.835]   should function for intra-pod communication: udp [NodeConformance] [Conformance]
I0330 07:41:25.835]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:25.836] [BeforeEach] [sig-network] Networking
... skipping 45 lines ...
I0330 07:41:25.845] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/network/framework.go:23
I0330 07:41:25.845]   Granular Checks: Pods
I0330 07:41:25.845]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/network/networking.go:30
I0330 07:41:25.845]     should function for intra-pod communication: udp [NodeConformance] [Conformance]
I0330 07:41:25.846]     /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:25.846] ------------------------------
I0330 07:41:25.846] {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance]","total":340,"completed":290,"skipped":4510,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should list all daemon and delete a collection of daemons with a label selector [Conformance]"]}
I0330 07:41:25.846] SSSSSSSSSSSSSSSSSSSSSSSSS
I0330 07:41:25.847] ------------------------------
I0330 07:41:25.847] [sig-network] Services 
I0330 07:41:25.847]   should be able to change the type from ClusterIP to ExternalName [Conformance]
I0330 07:41:25.847]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:25.847] [BeforeEach] [sig-network] Services
... skipping 45 lines ...
I0330 07:41:25.855] • [SLOW TEST:9.645 seconds]
I0330 07:41:25.855] [sig-network] Services
I0330 07:41:25.855] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
I0330 07:41:25.855]   should be able to change the type from ClusterIP to ExternalName [Conformance]
I0330 07:41:25.855]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:25.855] ------------------------------
I0330 07:41:25.856] {"msg":"PASSED [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance]","total":340,"completed":291,"skipped":4535,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should list all daemon and delete a collection of daemons with a label selector [Conformance]"]}
I0330 07:41:25.856] SSSSSSSSSSSSSS
I0330 07:41:25.856] ------------------------------
I0330 07:41:25.856] [sig-node] Probing container 
I0330 07:41:25.856]   should have monotonically increasing restart count [NodeConformance] [Conformance]
I0330 07:41:25.856]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:25.856] [BeforeEach] [sig-node] Probing container
... skipping 31 lines ...
I0330 07:41:25.861] • [SLOW TEST:152.500 seconds]
I0330 07:41:25.861] [sig-node] Probing container
I0330 07:41:25.861] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
I0330 07:41:25.861]   should have monotonically increasing restart count [NodeConformance] [Conformance]
I0330 07:41:25.862]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:25.862] ------------------------------
I0330 07:41:25.862] {"msg":"PASSED [sig-node] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","total":340,"completed":292,"skipped":4549,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should list all daemon and delete a collection of daemons with a label selector [Conformance]"]}
I0330 07:41:25.862] SSS
I0330 07:41:25.862] ------------------------------
I0330 07:41:25.862] [sig-storage] Secrets 
I0330 07:41:25.863]   should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
I0330 07:41:25.863]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:25.863] [BeforeEach] [sig-storage] Secrets
... skipping 9 lines ...
I0330 07:41:25.865] I0330 07:24:10.661832      19 reflector.go:255] Listing and watching *v1.ServiceAccount from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0330 07:41:25.866] I0330 07:24:10.664885      19 reflector.go:225] Stopping reflector *v1.ServiceAccount (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0330 07:41:25.866] [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
I0330 07:41:25.866]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:25.867] STEP: Creating secret with name secret-test-444def35-90d3-4da1-a477-5cf125d041d7
I0330 07:41:25.867] STEP: Creating a pod to test consume secrets
I0330 07:41:25.867] Mar 30 07:24:10.681: INFO: Waiting up to 5m0s for pod "pod-secrets-d44cdc83-d52e-4f97-8c03-8cb2e859281e" in namespace "secrets-5447" to be "Succeeded or Failed"
I0330 07:41:25.867] Mar 30 07:24:10.684: INFO: Pod "pod-secrets-d44cdc83-d52e-4f97-8c03-8cb2e859281e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.960237ms
I0330 07:41:25.867] Mar 30 07:24:12.688: INFO: Pod "pod-secrets-d44cdc83-d52e-4f97-8c03-8cb2e859281e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.006606687s
I0330 07:41:25.867] STEP: Saw pod success
I0330 07:41:25.868] Mar 30 07:24:12.688: INFO: Pod "pod-secrets-d44cdc83-d52e-4f97-8c03-8cb2e859281e" satisfied condition "Succeeded or Failed"
I0330 07:41:25.868] Mar 30 07:24:12.690: INFO: Trying to get logs from node kind-worker2 pod pod-secrets-d44cdc83-d52e-4f97-8c03-8cb2e859281e container secret-volume-test: <nil>
I0330 07:41:25.868] STEP: delete the pod
I0330 07:41:25.868] Mar 30 07:24:12.718: INFO: Waiting for pod pod-secrets-d44cdc83-d52e-4f97-8c03-8cb2e859281e to disappear
I0330 07:41:25.868] Mar 30 07:24:12.720: INFO: Pod pod-secrets-d44cdc83-d52e-4f97-8c03-8cb2e859281e no longer exists
I0330 07:41:25.868] [AfterEach] [sig-storage] Secrets
I0330 07:41:25.868]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
I0330 07:41:25.868] Mar 30 07:24:12.720: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0330 07:41:25.869] STEP: Destroying namespace "secrets-5447" for this suite.
I0330 07:41:25.869] •{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":340,"completed":293,"skipped":4552,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should list all daemon and delete a collection of daemons with a label selector [Conformance]"]}
I0330 07:41:25.869] SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
I0330 07:41:25.869] ------------------------------
I0330 07:41:25.869] [sig-storage] EmptyDir volumes 
I0330 07:41:25.869]   should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
I0330 07:41:25.870]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:25.870] [BeforeEach] [sig-storage] EmptyDir volumes
... skipping 8 lines ...
I0330 07:41:25.871] I0330 07:24:12.762461      19 reflector.go:219] Starting reflector *v1.ServiceAccount (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0330 07:41:25.871] I0330 07:24:12.762486      19 reflector.go:255] Listing and watching *v1.ServiceAccount from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0330 07:41:25.871] [It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
I0330 07:41:25.871]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:25.872] I0330 07:24:12.765204      19 reflector.go:225] Stopping reflector *v1.ServiceAccount (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0330 07:41:25.872] STEP: Creating a pod to test emptydir 0777 on tmpfs
I0330 07:41:25.872] Mar 30 07:24:12.771: INFO: Waiting up to 5m0s for pod "pod-d750414e-01aa-48fd-9dce-4e752fe01433" in namespace "emptydir-9916" to be "Succeeded or Failed"
I0330 07:41:25.872] Mar 30 07:24:12.774: INFO: Pod "pod-d750414e-01aa-48fd-9dce-4e752fe01433": Phase="Pending", Reason="", readiness=false. Elapsed: 2.956291ms
I0330 07:41:25.872] Mar 30 07:24:14.778: INFO: Pod "pod-d750414e-01aa-48fd-9dce-4e752fe01433": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.006863977s
I0330 07:41:25.872] STEP: Saw pod success
I0330 07:41:25.872] Mar 30 07:24:14.778: INFO: Pod "pod-d750414e-01aa-48fd-9dce-4e752fe01433" satisfied condition "Succeeded or Failed"
I0330 07:41:25.873] Mar 30 07:24:14.781: INFO: Trying to get logs from node kind-worker2 pod pod-d750414e-01aa-48fd-9dce-4e752fe01433 container test-container: <nil>
I0330 07:41:25.873] STEP: delete the pod
I0330 07:41:25.873] Mar 30 07:24:14.799: INFO: Waiting for pod pod-d750414e-01aa-48fd-9dce-4e752fe01433 to disappear
I0330 07:41:25.873] Mar 30 07:24:14.802: INFO: Pod pod-d750414e-01aa-48fd-9dce-4e752fe01433 no longer exists
I0330 07:41:25.873] [AfterEach] [sig-storage] EmptyDir volumes
I0330 07:41:25.873]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
I0330 07:41:25.873] Mar 30 07:24:14.802: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0330 07:41:25.873] STEP: Destroying namespace "emptydir-9916" for this suite.
I0330 07:41:25.874] •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":340,"completed":294,"skipped":4594,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should list all daemon and delete a collection of daemons with a label selector [Conformance]"]}
I0330 07:41:25.874] 
I0330 07:41:25.874] ------------------------------
I0330 07:41:25.874] [sig-node] Downward API 
I0330 07:41:25.874]   should provide host IP as an env var [NodeConformance] [Conformance]
I0330 07:41:25.874]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:25.874] [BeforeEach] [sig-node] Downward API
... skipping 8 lines ...
I0330 07:41:25.875] I0330 07:24:14.841273      19 reflector.go:219] Starting reflector *v1.ServiceAccount (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0330 07:41:25.876] I0330 07:24:14.841297      19 reflector.go:255] Listing and watching *v1.ServiceAccount from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0330 07:41:25.876] [It] should provide host IP as an env var [NodeConformance] [Conformance]
I0330 07:41:25.876]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:25.876] I0330 07:24:14.846115      19 reflector.go:225] Stopping reflector *v1.ServiceAccount (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0330 07:41:25.876] STEP: Creating a pod to test downward api env vars
I0330 07:41:25.876] Mar 30 07:24:14.854: INFO: Waiting up to 5m0s for pod "downward-api-386c48dc-b211-40b2-8540-0714c3a94564" in namespace "downward-api-3575" to be "Succeeded or Failed"
I0330 07:41:25.876] Mar 30 07:24:14.858: INFO: Pod "downward-api-386c48dc-b211-40b2-8540-0714c3a94564": Phase="Pending", Reason="", readiness=false. Elapsed: 3.27408ms
I0330 07:41:25.877] Mar 30 07:24:16.862: INFO: Pod "downward-api-386c48dc-b211-40b2-8540-0714c3a94564": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.008155917s
I0330 07:41:25.877] STEP: Saw pod success
I0330 07:41:25.877] Mar 30 07:24:16.862: INFO: Pod "downward-api-386c48dc-b211-40b2-8540-0714c3a94564" satisfied condition "Succeeded or Failed"
I0330 07:41:25.877] Mar 30 07:24:16.866: INFO: Trying to get logs from node kind-worker2 pod downward-api-386c48dc-b211-40b2-8540-0714c3a94564 container dapi-container: <nil>
I0330 07:41:25.877] STEP: delete the pod
I0330 07:41:25.877] Mar 30 07:24:16.882: INFO: Waiting for pod downward-api-386c48dc-b211-40b2-8540-0714c3a94564 to disappear
I0330 07:41:25.877] Mar 30 07:24:16.884: INFO: Pod downward-api-386c48dc-b211-40b2-8540-0714c3a94564 no longer exists
I0330 07:41:25.877] [AfterEach] [sig-node] Downward API
I0330 07:41:25.878]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
I0330 07:41:25.878] Mar 30 07:24:16.885: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0330 07:41:25.878] STEP: Destroying namespace "downward-api-3575" for this suite.
I0330 07:41:25.878] •{"msg":"PASSED [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance]","total":340,"completed":295,"skipped":4594,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should list all daemon and delete a collection of daemons with a label selector [Conformance]"]}
I0330 07:41:25.878] SSSSSSSSSSSSSSSS
I0330 07:41:25.878] ------------------------------
I0330 07:41:25.878] [sig-storage] Subpath Atomic writer volumes 
I0330 07:41:25.879]   should support subpaths with configmap pod [LinuxOnly] [Conformance]
I0330 07:41:25.879]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:25.879] [BeforeEach] [sig-storage] Subpath
... skipping 12 lines ...
I0330 07:41:25.881] I0330 07:24:16.925174      19 reflector.go:225] Stopping reflector *v1.ServiceAccount (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0330 07:41:25.881] STEP: Setting up data
I0330 07:41:25.881] [It] should support subpaths with configmap pod [LinuxOnly] [Conformance]
I0330 07:41:25.881]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:25.881] STEP: Creating pod pod-subpath-test-configmap-m6sx
I0330 07:41:25.881] STEP: Creating a pod to test atomic-volume-subpath
I0330 07:41:25.882] Mar 30 07:24:16.939: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-m6sx" in namespace "subpath-8461" to be "Succeeded or Failed"
I0330 07:41:25.882] Mar 30 07:24:16.942: INFO: Pod "pod-subpath-test-configmap-m6sx": Phase="Pending", Reason="", readiness=false. Elapsed: 3.323603ms
I0330 07:41:25.882] Mar 30 07:24:18.947: INFO: Pod "pod-subpath-test-configmap-m6sx": Phase="Running", Reason="", readiness=true. Elapsed: 2.007707977s
I0330 07:41:25.882] Mar 30 07:24:20.951: INFO: Pod "pod-subpath-test-configmap-m6sx": Phase="Running", Reason="", readiness=true. Elapsed: 4.012271602s
I0330 07:41:25.882] Mar 30 07:24:22.956: INFO: Pod "pod-subpath-test-configmap-m6sx": Phase="Running", Reason="", readiness=true. Elapsed: 6.01682482s
I0330 07:41:25.882] Mar 30 07:24:24.961: INFO: Pod "pod-subpath-test-configmap-m6sx": Phase="Running", Reason="", readiness=true. Elapsed: 8.021714701s
I0330 07:41:25.883] Mar 30 07:24:26.967: INFO: Pod "pod-subpath-test-configmap-m6sx": Phase="Running", Reason="", readiness=true. Elapsed: 10.027636422s
I0330 07:41:25.883] Mar 30 07:24:28.971: INFO: Pod "pod-subpath-test-configmap-m6sx": Phase="Running", Reason="", readiness=true. Elapsed: 12.032120055s
I0330 07:41:25.883] Mar 30 07:24:30.976: INFO: Pod "pod-subpath-test-configmap-m6sx": Phase="Running", Reason="", readiness=true. Elapsed: 14.036559749s
I0330 07:41:25.883] Mar 30 07:24:32.981: INFO: Pod "pod-subpath-test-configmap-m6sx": Phase="Running", Reason="", readiness=true. Elapsed: 16.041806389s
I0330 07:41:25.883] Mar 30 07:24:34.986: INFO: Pod "pod-subpath-test-configmap-m6sx": Phase="Running", Reason="", readiness=true. Elapsed: 18.047153807s
I0330 07:41:25.884] Mar 30 07:24:36.990: INFO: Pod "pod-subpath-test-configmap-m6sx": Phase="Running", Reason="", readiness=true. Elapsed: 20.051298929s
I0330 07:41:25.884] Mar 30 07:24:38.996: INFO: Pod "pod-subpath-test-configmap-m6sx": Phase="Succeeded", Reason="", readiness=false. Elapsed: 22.056500609s
I0330 07:41:25.884] STEP: Saw pod success
I0330 07:41:25.884] Mar 30 07:24:38.996: INFO: Pod "pod-subpath-test-configmap-m6sx" satisfied condition "Succeeded or Failed"
I0330 07:41:25.885] Mar 30 07:24:38.999: INFO: Trying to get logs from node kind-worker2 pod pod-subpath-test-configmap-m6sx container test-container-subpath-configmap-m6sx: <nil>
I0330 07:41:25.885] STEP: delete the pod
I0330 07:41:25.885] Mar 30 07:24:39.015: INFO: Waiting for pod pod-subpath-test-configmap-m6sx to disappear
I0330 07:41:25.885] Mar 30 07:24:39.018: INFO: Pod pod-subpath-test-configmap-m6sx no longer exists
I0330 07:41:25.885] STEP: Deleting pod pod-subpath-test-configmap-m6sx
I0330 07:41:25.886] Mar 30 07:24:39.018: INFO: Deleting pod "pod-subpath-test-configmap-m6sx" in namespace "subpath-8461"
... skipping 7 lines ...
I0330 07:41:25.886] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
I0330 07:41:25.886]   Atomic writer volumes
I0330 07:41:25.887]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
I0330 07:41:25.887]     should support subpaths with configmap pod [LinuxOnly] [Conformance]
I0330 07:41:25.887]     /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:25.887] ------------------------------
I0330 07:41:25.887] {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance]","total":340,"completed":296,"skipped":4610,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should list all daemon and delete a collection of daemons with a label selector [Conformance]"]}
I0330 07:41:25.887] SSSSSSSSSS
I0330 07:41:25.887] ------------------------------
I0330 07:41:25.888] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
I0330 07:41:25.888]   should mutate pod and apply defaults after mutation [Conformance]
I0330 07:41:25.888]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:25.888] [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
... skipping 26 lines ...
I0330 07:41:25.892]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
I0330 07:41:25.892] Mar 30 07:24:42.665: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0330 07:41:25.892] STEP: Destroying namespace "webhook-6096" for this suite.
I0330 07:41:25.892] STEP: Destroying namespace "webhook-6096-markers" for this suite.
I0330 07:41:25.892] [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
I0330 07:41:25.892]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
I0330 07:41:25.893] •{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","total":340,"completed":297,"skipped":4620,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should list all daemon and delete a collection of daemons with a label selector [Conformance]"]}
I0330 07:41:25.893] SSSSSSSSSSSSSSSSSSSSS
I0330 07:41:25.893] ------------------------------
I0330 07:41:25.893] [sig-node] Container Lifecycle Hook when create a pod with lifecycle hook 
I0330 07:41:25.893]   should execute poststart http hook properly [NodeConformance] [Conformance]
I0330 07:41:25.894]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:25.894] [BeforeEach] [sig-node] Container Lifecycle Hook
... skipping 36 lines ...
I0330 07:41:25.902] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
I0330 07:41:25.902]   when create a pod with lifecycle hook
I0330 07:41:25.902]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/node/lifecycle_hook.go:43
I0330 07:41:25.902]     should execute poststart http hook properly [NodeConformance] [Conformance]
I0330 07:41:25.902]     /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:25.903] ------------------------------
I0330 07:41:25.903] {"msg":"PASSED [sig-node] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance]","total":340,"completed":298,"skipped":4641,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should list all daemon and delete a collection of daemons with a label selector [Conformance]"]}
I0330 07:41:25.903] [sig-network] Services 
I0330 07:41:25.904]   should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]
I0330 07:41:25.904]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:25.904] [BeforeEach] [sig-network] Services
I0330 07:41:25.904]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
I0330 07:41:25.904] STEP: Creating a kubernetes client
... skipping 70 lines ...
I0330 07:41:25.916] • [SLOW TEST:22.475 seconds]
I0330 07:41:25.916] [sig-network] Services
I0330 07:41:25.917] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
I0330 07:41:25.917]   should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]
I0330 07:41:25.917]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:25.917] ------------------------------
I0330 07:41:25.917] {"msg":"PASSED [sig-network] Services should have session affinity work for service with type clusterIP [LinuxOnly] [Conformance]","total":340,"completed":299,"skipped":4641,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should list all daemon and delete a collection of daemons with a label selector [Conformance]"]}
I0330 07:41:25.917] SSSSSSSSSSS
I0330 07:41:25.917] ------------------------------
I0330 07:41:25.917] [sig-storage] Downward API volume 
I0330 07:41:25.918]   should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
I0330 07:41:25.918]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:25.918] [BeforeEach] [sig-storage] Downward API volume
... skipping 10 lines ...
I0330 07:41:25.919] [BeforeEach] [sig-storage] Downward API volume
I0330 07:41:25.920]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41
I0330 07:41:25.920] I0330 07:25:13.350260      19 reflector.go:225] Stopping reflector *v1.ServiceAccount (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0330 07:41:25.920] [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
I0330 07:41:25.920]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:25.920] STEP: Creating a pod to test downward API volume plugin
I0330 07:41:25.920] Mar 30 07:25:13.357: INFO: Waiting up to 5m0s for pod "downwardapi-volume-40bb33f3-c3c7-4d0c-95d5-0b65bdc733ad" in namespace "downward-api-677" to be "Succeeded or Failed"
I0330 07:41:25.920] Mar 30 07:25:13.364: INFO: Pod "downwardapi-volume-40bb33f3-c3c7-4d0c-95d5-0b65bdc733ad": Phase="Pending", Reason="", readiness=false. Elapsed: 7.32592ms
I0330 07:41:25.921] Mar 30 07:25:15.371: INFO: Pod "downwardapi-volume-40bb33f3-c3c7-4d0c-95d5-0b65bdc733ad": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.013716593s
I0330 07:41:25.921] STEP: Saw pod success
I0330 07:41:25.921] Mar 30 07:25:15.371: INFO: Pod "downwardapi-volume-40bb33f3-c3c7-4d0c-95d5-0b65bdc733ad" satisfied condition "Succeeded or Failed"
I0330 07:41:25.921] Mar 30 07:25:15.373: INFO: Trying to get logs from node kind-worker2 pod downwardapi-volume-40bb33f3-c3c7-4d0c-95d5-0b65bdc733ad container client-container: <nil>
I0330 07:41:25.921] STEP: delete the pod
I0330 07:41:25.921] Mar 30 07:25:15.395: INFO: Waiting for pod downwardapi-volume-40bb33f3-c3c7-4d0c-95d5-0b65bdc733ad to disappear
I0330 07:41:25.921] Mar 30 07:25:15.398: INFO: Pod downwardapi-volume-40bb33f3-c3c7-4d0c-95d5-0b65bdc733ad no longer exists
I0330 07:41:25.921] [AfterEach] [sig-storage] Downward API volume
I0330 07:41:25.922]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
I0330 07:41:25.922] Mar 30 07:25:15.398: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0330 07:41:25.922] STEP: Destroying namespace "downward-api-677" for this suite.
I0330 07:41:25.922] •{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":340,"completed":300,"skipped":4652,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should list all daemon and delete a collection of daemons with a label selector [Conformance]"]}
I0330 07:41:25.922] SS
I0330 07:41:25.922] ------------------------------
I0330 07:41:25.922] [sig-apps] Deployment 
I0330 07:41:25.922]   should run the lifecycle of a Deployment [Conformance]
I0330 07:41:25.923]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:25.923] [BeforeEach] [sig-apps] Deployment
... skipping 118 lines ...
I0330 07:41:25.947] I0330 07:25:19.588362      19 retrywatcher.go:275] Stopping RetryWatcher.
I0330 07:41:25.947] Mar 30 07:25:19.591: INFO: Log out all the ReplicaSets if there is no deployment created
I0330 07:41:25.947] [AfterEach] [sig-apps] Deployment
I0330 07:41:25.948]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
I0330 07:41:25.948] Mar 30 07:25:19.595: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0330 07:41:25.948] STEP: Destroying namespace "deployment-9199" for this suite.
I0330 07:41:25.948] •{"msg":"PASSED [sig-apps] Deployment should run the lifecycle of a Deployment [Conformance]","total":340,"completed":301,"skipped":4654,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should list all daemon and delete a collection of daemons with a label selector [Conformance]"]}
I0330 07:41:25.948] SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
I0330 07:41:25.949] ------------------------------
I0330 07:41:25.949] [sig-apps] CronJob 
I0330 07:41:25.949]   should not schedule jobs when suspended [Slow] [Conformance]
I0330 07:41:25.950]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:25.950] [BeforeEach] [sig-apps] CronJob
... skipping 26 lines ...
I0330 07:41:25.954] • [SLOW TEST:300.077 seconds]
I0330 07:41:25.954] [sig-apps] CronJob
I0330 07:41:25.954] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
I0330 07:41:25.954]   should not schedule jobs when suspended [Slow] [Conformance]
I0330 07:41:25.954]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:25.954] ------------------------------
I0330 07:41:25.955] {"msg":"PASSED [sig-apps] CronJob should not schedule jobs when suspended [Slow] [Conformance]","total":340,"completed":302,"skipped":4728,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should list all daemon and delete a collection of daemons with a label selector [Conformance]"]}
I0330 07:41:25.955] SSSSSSS
I0330 07:41:25.955] ------------------------------
I0330 07:41:25.955] [sig-apps] Daemon set [Serial] 
I0330 07:41:25.955]   should run and stop complex daemon [Conformance]
I0330 07:41:25.955]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:25.955] [BeforeEach] [sig-apps] Daemon set [Serial]
... skipping 79 lines ...
I0330 07:41:25.965] • [SLOW TEST:18.534 seconds]
I0330 07:41:25.965] [sig-apps] Daemon set [Serial]
I0330 07:41:25.965] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
I0330 07:41:25.966]   should run and stop complex daemon [Conformance]
I0330 07:41:25.966]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:25.966] ------------------------------
I0330 07:41:25.966] {"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance]","total":340,"completed":303,"skipped":4735,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should list all daemon and delete a collection of daemons with a label selector [Conformance]"]}
I0330 07:41:25.966] SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
I0330 07:41:25.966] ------------------------------
I0330 07:41:25.966] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
I0330 07:41:25.966]   should be able to deny custom resource creation, update and deletion [Conformance]
I0330 07:41:25.967]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:25.967] [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
... skipping 39 lines ...
I0330 07:41:25.971] • [SLOW TEST:6.702 seconds]
I0330 07:41:25.972] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
I0330 07:41:25.972] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
I0330 07:41:25.972]   should be able to deny custom resource creation, update and deletion [Conformance]
I0330 07:41:25.972]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:25.972] ------------------------------
I0330 07:41:25.972] {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","total":340,"completed":304,"skipped":4778,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should list all daemon and delete a collection of daemons with a label selector [Conformance]"]}
I0330 07:41:25.972] SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
I0330 07:41:25.973] ------------------------------
I0330 07:41:25.973] [sig-auth] Certificates API [Privileged:ClusterAdmin] 
I0330 07:41:25.973]   should support CSR API operations [Conformance]
I0330 07:41:25.973]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:25.973] [BeforeEach] [sig-auth] Certificates API [Privileged:ClusterAdmin]
... skipping 31 lines ...
I0330 07:41:25.977] STEP: deleting
I0330 07:41:25.977] STEP: deleting a collection
I0330 07:41:25.978] [AfterEach] [sig-auth] Certificates API [Privileged:ClusterAdmin]
I0330 07:41:25.978]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
I0330 07:41:25.978] Mar 30 07:30:46.233: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0330 07:41:25.978] STEP: Destroying namespace "certificates-8345" for this suite.
I0330 07:41:25.979] •{"msg":"PASSED [sig-auth] Certificates API [Privileged:ClusterAdmin] should support CSR API operations [Conformance]","total":340,"completed":305,"skipped":4843,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should list all daemon and delete a collection of daemons with a label selector [Conformance]"]}
I0330 07:41:25.979] S
I0330 07:41:25.979] ------------------------------
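The Certificates API test above walks the same list / delete / deleteCollection verbs that the failing DaemonSet case ("should list all daemon and delete a collection of daemons with a label selector") exercises. As a rough, hypothetical illustration of that flow — using an in-memory store and equality-based selector matching, not the real Kubernetes API — the list-then-deleteCollection semantics can be sketched as:

```python
# Hypothetical sketch of "list and delete a collection with a label
# selector". The store and helper names are illustrative; the actual e2e
# test issues LIST and DELETECOLLECTION requests against the API server.

def matches(labels: dict, selector: dict) -> bool:
    """Equality-based selector: every selector key/value must match."""
    return all(labels.get(k) == v for k, v in selector.items())

def list_by_selector(store: list, selector: dict) -> list:
    """LIST: return only the objects whose labels satisfy the selector."""
    return [obj for obj in store if matches(obj["labels"], selector)]

def delete_collection(store: list, selector: dict) -> list:
    """DELETECOLLECTION: drop every selector-matching object."""
    return [obj for obj in store if not matches(obj["labels"], selector)]

daemonsets = [
    {"name": "ds-a", "labels": {"daemonset-name": "ds-a"}},
    {"name": "ds-b", "labels": {"app": "unrelated"}},
]
selector = {"daemonset-name": "ds-a"}
listed = list_by_selector(daemonsets, selector)      # only ds-a
remaining = delete_collection(daemonsets, selector)  # only ds-b survives
```

The real test additionally verifies that the deleted DaemonSet's pods are cleaned up, which this sketch does not model.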
I0330 07:41:25.980] [sig-node] Variable Expansion 
I0330 07:41:25.980]   should fail substituting values in a volume subpath with absolute path [Slow] [Conformance]
I0330 07:41:25.980]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:25.980] [BeforeEach] [sig-node] Variable Expansion
I0330 07:41:25.981]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
I0330 07:41:25.981] STEP: Creating a kubernetes client
I0330 07:41:25.981] Mar 30 07:30:46.239: INFO: >>> kubeConfig: /tmp/kubeconfig-963336331
I0330 07:41:25.981] STEP: Building a namespace api object, basename var-expansion
I0330 07:41:25.982] I0330 07:30:46.248194      19 reflector.go:219] Starting reflector *v1.ServiceAccount (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0330 07:41:25.982] I0330 07:30:46.248231      19 reflector.go:255] Listing and watching *v1.ServiceAccount from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0330 07:41:25.982] I0330 07:30:46.263717      19 reflector.go:225] Stopping reflector *v1.ServiceAccount (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0330 07:41:25.983] STEP: Waiting for a default service account to be provisioned in namespace
I0330 07:41:25.983] I0330 07:30:46.264200      19 reflector.go:219] Starting reflector *v1.ServiceAccount (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0330 07:41:25.983] I0330 07:30:46.264285      19 reflector.go:255] Listing and watching *v1.ServiceAccount from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0330 07:41:25.984] I0330 07:30:46.266979      19 reflector.go:225] Stopping reflector *v1.ServiceAccount (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0330 07:41:25.984] [It] should fail substituting values in a volume subpath with absolute path [Slow] [Conformance]
I0330 07:41:25.984]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:25.984] Mar 30 07:30:48.282: INFO: Deleting pod "var-expansion-b460f5c8-fd6a-4b74-9551-41e716d1dc57" in namespace "var-expansion-5636"
I0330 07:41:25.984] Mar 30 07:30:48.288: INFO: Wait up to 5m0s for pod "var-expansion-b460f5c8-fd6a-4b74-9551-41e716d1dc57" to be fully deleted
I0330 07:41:25.985] [AfterEach] [sig-node] Variable Expansion
I0330 07:41:25.985]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
I0330 07:41:25.985] Mar 30 07:30:54.299: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0330 07:41:25.985] STEP: Destroying namespace "var-expansion-5636" for this suite.
I0330 07:41:25.985] 
I0330 07:41:25.985] • [SLOW TEST:8.070 seconds]
I0330 07:41:25.985] [sig-node] Variable Expansion
I0330 07:41:25.986] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
I0330 07:41:25.986]   should fail substituting values in a volume subpath with absolute path [Slow] [Conformance]
I0330 07:41:25.986]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:25.986] ------------------------------
I0330 07:41:25.987] {"msg":"PASSED [sig-node] Variable Expansion should fail substituting values in a volume subpath with absolute path [Slow] [Conformance]","total":340,"completed":306,"skipped":4844,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should list all daemon and delete a collection of daemons with a label selector [Conformance]"]}
I0330 07:41:25.987] SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
I0330 07:41:25.987] ------------------------------
I0330 07:41:25.987] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] 
I0330 07:41:25.987]   should be able to convert a non homogeneous list of CRs [Conformance]
I0330 07:41:25.988]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:25.988] [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
... skipping 35 lines ...
I0330 07:41:25.994] • [SLOW TEST:6.839 seconds]
I0330 07:41:25.995] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
I0330 07:41:25.995] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
I0330 07:41:25.995]   should be able to convert a non homogeneous list of CRs [Conformance]
I0330 07:41:25.995]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:25.995] ------------------------------
I0330 07:41:25.995] {"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","total":340,"completed":307,"skipped":4896,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should list all daemon and delete a collection of daemons with a label selector [Conformance]"]}
I0330 07:41:25.996] SSS
I0330 07:41:25.996] ------------------------------
I0330 07:41:25.996] [sig-scheduling] LimitRange 
I0330 07:41:25.996]   should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance]
I0330 07:41:25.996]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:25.996] [BeforeEach] [sig-scheduling] LimitRange
... skipping 47 lines ...
I0330 07:41:26.005] • [SLOW TEST:7.151 seconds]
I0330 07:41:26.005] [sig-scheduling] LimitRange
I0330 07:41:26.005] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
I0330 07:41:26.005]   should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance]
I0330 07:41:26.006]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:26.006] ------------------------------
I0330 07:41:26.006] {"msg":"PASSED [sig-scheduling] LimitRange should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance]","total":340,"completed":308,"skipped":4899,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should list all daemon and delete a collection of daemons with a label selector [Conformance]"]}
I0330 07:41:26.006] SSSSSSSSSSS
I0330 07:41:26.006] ------------------------------
I0330 07:41:26.007] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
I0330 07:41:26.007]   works for multiple CRDs of same group but different versions [Conformance]
I0330 07:41:26.007]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:26.007] [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
... skipping 23 lines ...
I0330 07:41:26.010] • [SLOW TEST:29.702 seconds]
I0330 07:41:26.010] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
I0330 07:41:26.011] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
I0330 07:41:26.011]   works for multiple CRDs of same group but different versions [Conformance]
I0330 07:41:26.011]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:26.011] ------------------------------
I0330 07:41:26.011] {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance]","total":340,"completed":309,"skipped":4910,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should list all daemon and delete a collection of daemons with a label selector [Conformance]"]}
I0330 07:41:26.011] SSSSS
I0330 07:41:26.011] ------------------------------
I0330 07:41:26.012] [sig-storage] Downward API volume 
I0330 07:41:26.012]   should update labels on modification [NodeConformance] [Conformance]
I0330 07:41:26.012]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:26.012] [BeforeEach] [sig-storage] Downward API volume
... skipping 24 lines ...
I0330 07:41:26.016] • [SLOW TEST:6.613 seconds]
I0330 07:41:26.016] [sig-storage] Downward API volume
I0330 07:41:26.016] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/storage/framework.go:23
I0330 07:41:26.016]   should update labels on modification [NodeConformance] [Conformance]
I0330 07:41:26.017]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:26.017] ------------------------------
I0330 07:41:26.017] {"msg":"PASSED [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance]","total":340,"completed":310,"skipped":4915,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should list all daemon and delete a collection of daemons with a label selector [Conformance]"]}
I0330 07:41:26.017] S
I0330 07:41:26.017] ------------------------------
I0330 07:41:26.018] [sig-storage] Projected secret 
I0330 07:41:26.018]   optional updates should be reflected in volume [NodeConformance] [Conformance]
I0330 07:41:26.018]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:26.019] [BeforeEach] [sig-storage] Projected secret
... skipping 20 lines ...
I0330 07:41:26.023] STEP: Creating secret with name s-test-opt-create-c9b1f4b5-848b-4f05-aadb-171ca3634b93
I0330 07:41:26.023] STEP: waiting to observe update in volume
I0330 07:41:26.023] [AfterEach] [sig-storage] Projected secret
I0330 07:41:26.023]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
I0330 07:41:26.023] Mar 30 07:31:48.721: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0330 07:41:26.024] STEP: Destroying namespace "projected-9916" for this suite.
I0330 07:41:26.024] •{"msg":"PASSED [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance]","total":340,"completed":311,"skipped":4916,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should list all daemon and delete a collection of daemons with a label selector [Conformance]"]}
I0330 07:41:26.024] SSSSSSSSSSSSSS
I0330 07:41:26.024] ------------------------------
I0330 07:41:26.024] [sig-storage] Projected secret 
I0330 07:41:26.024]   should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
I0330 07:41:26.024]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:26.024] [BeforeEach] [sig-storage] Projected secret
... skipping 9 lines ...
I0330 07:41:26.026] I0330 07:31:48.761280      19 reflector.go:255] Listing and watching *v1.ServiceAccount from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0330 07:41:26.026] [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
I0330 07:41:26.027]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:26.027] I0330 07:31:48.765283      19 reflector.go:225] Stopping reflector *v1.ServiceAccount (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0330 07:41:26.027] STEP: Creating projection with secret that has name projected-secret-test-map-de1a6473-fa3c-45f0-a97d-051ffeef46af
I0330 07:41:26.027] STEP: Creating a pod to test consume secrets
I0330 07:41:26.027] Mar 30 07:31:48.776: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-48c46d64-12c6-440f-b312-ff21fe966c09" in namespace "projected-2043" to be "Succeeded or Failed"
I0330 07:41:26.028] Mar 30 07:31:48.779: INFO: Pod "pod-projected-secrets-48c46d64-12c6-440f-b312-ff21fe966c09": Phase="Pending", Reason="", readiness=false. Elapsed: 3.084839ms
I0330 07:41:26.028] Mar 30 07:31:50.784: INFO: Pod "pod-projected-secrets-48c46d64-12c6-440f-b312-ff21fe966c09": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.008547827s
I0330 07:41:26.028] STEP: Saw pod success
I0330 07:41:26.028] Mar 30 07:31:50.785: INFO: Pod "pod-projected-secrets-48c46d64-12c6-440f-b312-ff21fe966c09" satisfied condition "Succeeded or Failed"
I0330 07:41:26.028] Mar 30 07:31:50.787: INFO: Trying to get logs from node kind-worker pod pod-projected-secrets-48c46d64-12c6-440f-b312-ff21fe966c09 container projected-secret-volume-test: <nil>
I0330 07:41:26.028] STEP: delete the pod
I0330 07:41:26.028] Mar 30 07:31:50.814: INFO: Waiting for pod pod-projected-secrets-48c46d64-12c6-440f-b312-ff21fe966c09 to disappear
I0330 07:41:26.029] Mar 30 07:31:50.816: INFO: Pod pod-projected-secrets-48c46d64-12c6-440f-b312-ff21fe966c09 no longer exists
I0330 07:41:26.029] [AfterEach] [sig-storage] Projected secret
I0330 07:41:26.029]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
I0330 07:41:26.029] Mar 30 07:31:50.816: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0330 07:41:26.029] STEP: Destroying namespace "projected-2043" for this suite.
I0330 07:41:26.030] •{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":340,"completed":312,"skipped":4930,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should list all daemon and delete a collection of daemons with a label selector [Conformance]"]}
I0330 07:41:26.030] 
I0330 07:41:26.030] ------------------------------
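Many of the passing cases above follow the same wait loop visible in the log: "Waiting up to 5m0s for pod ... to be 'Succeeded or Failed'", polling the phase every couple of seconds. A minimal sketch of that poll-with-deadline pattern, with an injectable clock and sleep so it can run without a cluster (the simulated phases are hypothetical):

```python
import time

def wait_for_condition(check, timeout_s=300.0, interval_s=2.0,
                       clock=time.monotonic, sleep=time.sleep):
    """Poll check() until it returns truthy or timeout_s elapses.

    Mirrors the e2e framework's behavior of re-checking on an interval
    and giving up at the deadline (returning False rather than raising).
    """
    deadline = clock() + timeout_s
    while True:
        if check():
            return True
        if clock() >= deadline:
            return False
        sleep(interval_s)

# Simulated pod lifecycle, as seen in the log: Pending, then Succeeded.
phases = iter(["Pending", "Succeeded"])
current = {"phase": "Pending"}

def pod_is_terminal():
    current["phase"] = next(phases, current["phase"])
    return current["phase"] in ("Succeeded", "Failed")

# sleep is stubbed out so the example finishes instantly.
done = wait_for_condition(pod_is_terminal, timeout_s=10,
                          interval_s=0, sleep=lambda s: None)
```

The real framework uses a 5m0s pod timeout here; this sketch only captures the poll/deadline shape, not the API calls.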
I0330 07:41:26.030] [sig-network] Networking Granular Checks: Pods 
I0330 07:41:26.030]   should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
I0330 07:41:26.030]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:26.030] [BeforeEach] [sig-network] Networking
... skipping 48 lines ...
I0330 07:41:26.037] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/network/framework.go:23
I0330 07:41:26.037]   Granular Checks: Pods
I0330 07:41:26.038]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/network/networking.go:30
I0330 07:41:26.038]     should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
I0330 07:41:26.038]     /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:26.038] ------------------------------
I0330 07:41:26.038] {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","total":340,"completed":313,"skipped":4930,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should list all daemon and delete a collection of daemons with a label selector [Conformance]"]}
I0330 07:41:26.038] SSSS
I0330 07:41:26.038] ------------------------------
I0330 07:41:26.038] [sig-node] Variable Expansion 
I0330 07:41:26.039]   should allow substituting values in a volume subpath [Conformance]
I0330 07:41:26.039]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:26.039] [BeforeEach] [sig-node] Variable Expansion
... skipping 8 lines ...
I0330 07:41:26.040] I0330 07:32:15.094183      19 reflector.go:219] Starting reflector *v1.ServiceAccount (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0330 07:41:26.040] I0330 07:32:15.094209      19 reflector.go:255] Listing and watching *v1.ServiceAccount from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0330 07:41:26.040] [It] should allow substituting values in a volume subpath [Conformance]
I0330 07:41:26.041]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:26.041] I0330 07:32:15.096688      19 reflector.go:225] Stopping reflector *v1.ServiceAccount (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0330 07:41:26.041] STEP: Creating a pod to test substitution in volume subpath
I0330 07:41:26.041] Mar 30 07:32:15.102: INFO: Waiting up to 5m0s for pod "var-expansion-f7fa525f-bc0d-44fd-9359-ca14ffcbffd6" in namespace "var-expansion-8481" to be "Succeeded or Failed"
I0330 07:41:26.041] Mar 30 07:32:15.104: INFO: Pod "var-expansion-f7fa525f-bc0d-44fd-9359-ca14ffcbffd6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.149898ms
I0330 07:41:26.041] Mar 30 07:32:17.111: INFO: Pod "var-expansion-f7fa525f-bc0d-44fd-9359-ca14ffcbffd6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.00963317s
I0330 07:41:26.042] STEP: Saw pod success
I0330 07:41:26.042] Mar 30 07:32:17.111: INFO: Pod "var-expansion-f7fa525f-bc0d-44fd-9359-ca14ffcbffd6" satisfied condition "Succeeded or Failed"
I0330 07:41:26.042] Mar 30 07:32:17.114: INFO: Trying to get logs from node kind-worker2 pod var-expansion-f7fa525f-bc0d-44fd-9359-ca14ffcbffd6 container dapi-container: <nil>
I0330 07:41:26.042] STEP: delete the pod
I0330 07:41:26.042] Mar 30 07:32:17.131: INFO: Waiting for pod var-expansion-f7fa525f-bc0d-44fd-9359-ca14ffcbffd6 to disappear
I0330 07:41:26.042] Mar 30 07:32:17.133: INFO: Pod var-expansion-f7fa525f-bc0d-44fd-9359-ca14ffcbffd6 no longer exists
I0330 07:41:26.042] [AfterEach] [sig-node] Variable Expansion
I0330 07:41:26.042]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
I0330 07:41:26.043] Mar 30 07:32:17.133: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0330 07:41:26.043] STEP: Destroying namespace "var-expansion-8481" for this suite.
I0330 07:41:26.043] •{"msg":"PASSED [sig-node] Variable Expansion should allow substituting values in a volume subpath [Conformance]","total":340,"completed":314,"skipped":4934,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should list all daemon and delete a collection of daemons with a label selector [Conformance]"]}
I0330 07:41:26.043] SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
I0330 07:41:26.043] ------------------------------
I0330 07:41:26.043] [sig-node] Kubelet when scheduling a busybox command that always fails in a pod 
I0330 07:41:26.043]   should be possible to delete [NodeConformance] [Conformance]
I0330 07:41:26.044]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:26.044] [BeforeEach] [sig-node] Kubelet
... skipping 15 lines ...
I0330 07:41:26.046] [It] should be possible to delete [NodeConformance] [Conformance]
I0330 07:41:26.046]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:26.046] [AfterEach] [sig-node] Kubelet
I0330 07:41:26.047]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
I0330 07:41:26.047] Mar 30 07:32:17.189: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0330 07:41:26.047] STEP: Destroying namespace "kubelet-test-7536" for this suite.
I0330 07:41:26.047] •{"msg":"PASSED [sig-node] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance]","total":340,"completed":315,"skipped":4981,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should list all daemon and delete a collection of daemons with a label selector [Conformance]"]}
I0330 07:41:26.047] SSS
I0330 07:41:26.047] ------------------------------
I0330 07:41:26.047] [sig-api-machinery] Aggregator 
I0330 07:41:26.048]   Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]
I0330 07:41:26.048]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:26.048] [BeforeEach] [sig-api-machinery] Aggregator
... skipping 32 lines ...
I0330 07:41:26.054] • [SLOW TEST:8.156 seconds]
I0330 07:41:26.054] [sig-api-machinery] Aggregator
I0330 07:41:26.054] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
I0330 07:41:26.055]   Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]
I0330 07:41:26.055]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:26.055] ------------------------------
I0330 07:41:26.056] {"msg":"PASSED [sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","total":340,"completed":316,"skipped":4984,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should list all daemon and delete a collection of daemons with a label selector [Conformance]"]}
I0330 07:41:26.056] SSSSSSSSSS
I0330 07:41:26.056] ------------------------------
I0330 07:41:26.057] [sig-storage] Projected configMap 
I0330 07:41:26.057]   should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
I0330 07:41:26.057]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:26.057] [BeforeEach] [sig-storage] Projected configMap
... skipping 9 lines ...
I0330 07:41:26.059] I0330 07:32:25.380774      19 reflector.go:255] Listing and watching *v1.ServiceAccount from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0330 07:41:26.060] I0330 07:32:25.384374      19 reflector.go:225] Stopping reflector *v1.ServiceAccount (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0330 07:41:26.060] [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
I0330 07:41:26.060]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:26.060] STEP: Creating configMap with name projected-configmap-test-volume-map-c7de3584-6dca-4e3b-a82c-06ae6e98ad61
I0330 07:41:26.060] STEP: Creating a pod to test consume configMaps
I0330 07:41:26.061] Mar 30 07:32:25.394: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-8126c49f-c772-4532-ba3e-1271cb819bc1" in namespace "projected-7697" to be "Succeeded or Failed"
I0330 07:41:26.061] Mar 30 07:32:25.408: INFO: Pod "pod-projected-configmaps-8126c49f-c772-4532-ba3e-1271cb819bc1": Phase="Pending", Reason="", readiness=false. Elapsed: 14.623362ms
I0330 07:41:26.061] Mar 30 07:32:27.415: INFO: Pod "pod-projected-configmaps-8126c49f-c772-4532-ba3e-1271cb819bc1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.021377066s
I0330 07:41:26.061] STEP: Saw pod success
I0330 07:41:26.061] Mar 30 07:32:27.415: INFO: Pod "pod-projected-configmaps-8126c49f-c772-4532-ba3e-1271cb819bc1" satisfied condition "Succeeded or Failed"
I0330 07:41:26.062] Mar 30 07:32:27.418: INFO: Trying to get logs from node kind-worker2 pod pod-projected-configmaps-8126c49f-c772-4532-ba3e-1271cb819bc1 container agnhost-container: <nil>
I0330 07:41:26.062] STEP: delete the pod
I0330 07:41:26.062] Mar 30 07:32:27.432: INFO: Waiting for pod pod-projected-configmaps-8126c49f-c772-4532-ba3e-1271cb819bc1 to disappear
I0330 07:41:26.062] Mar 30 07:32:27.435: INFO: Pod pod-projected-configmaps-8126c49f-c772-4532-ba3e-1271cb819bc1 no longer exists
I0330 07:41:26.062] [AfterEach] [sig-storage] Projected configMap
I0330 07:41:26.062]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
I0330 07:41:26.062] Mar 30 07:32:27.435: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0330 07:41:26.063] STEP: Destroying namespace "projected-7697" for this suite.
I0330 07:41:26.063] •{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":340,"completed":317,"skipped":4994,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should list all daemon and delete a collection of daemons with a label selector [Conformance]"]}
I0330 07:41:26.063] SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
I0330 07:41:26.063] ------------------------------
I0330 07:41:26.063] [sig-node] Container Runtime blackbox test on terminated container 
I0330 07:41:26.063]   should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
I0330 07:41:26.064]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:26.064] [BeforeEach] [sig-node] Container Runtime
... skipping 18 lines ...
I0330 07:41:26.066] Mar 30 07:32:28.481: INFO: Expected: &{OK} to match Container's Termination Message: OK --
I0330 07:41:26.066] STEP: delete the container
I0330 07:41:26.066] [AfterEach] [sig-node] Container Runtime
I0330 07:41:26.067]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
I0330 07:41:26.067] Mar 30 07:32:28.492: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0330 07:41:26.067] STEP: Destroying namespace "container-runtime-4053" for this suite.
I0330 07:41:26.067] •{"msg":"PASSED [sig-node] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":340,"completed":318,"skipped":5045,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should list all daemon and delete a collection of daemons with a label selector [Conformance]"]}
I0330 07:41:26.068] 
I0330 07:41:26.068] ------------------------------
I0330 07:41:26.068] [sig-apps] Deployment 
I0330 07:41:26.068]   deployment should delete old replica sets [Conformance]
I0330 07:41:26.068]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:26.068] [BeforeEach] [sig-apps] Deployment
... skipping 10 lines ...
I0330 07:41:26.070] [BeforeEach] [sig-apps] Deployment
I0330 07:41:26.070]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:86
I0330 07:41:26.070] [It] deployment should delete old replica sets [Conformance]
I0330 07:41:26.070]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:26.070] I0330 07:32:28.523938      19 reflector.go:225] Stopping reflector *v1.ServiceAccount (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0330 07:41:26.070] Mar 30 07:32:28.530: INFO: Pod name cleanup-pod: Found 0 pods out of 1
I0330 07:41:26.070] I0330 07:32:32.242051      19 retrywatcher.go:151] "Failed to get event! Re-creating the watcher." resourceVersion="28569"
I0330 07:41:26.070] I0330 07:32:32.242082      19 retrywatcher.go:273] Restarting RetryWatcher at RV="28569"
I0330 07:41:26.071] Mar 30 07:32:33.541: INFO: Pod name cleanup-pod: Found 1 pods out of 1
I0330 07:41:26.071] STEP: ensuring each pod is running
I0330 07:41:26.071] Mar 30 07:32:33.541: INFO: Creating deployment test-cleanup-deployment
I0330 07:41:26.071] STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up
I0330 07:41:26.071] [AfterEach] [sig-apps] Deployment
... skipping 17 lines ...
I0330 07:41:26.091] • [SLOW TEST:5.101 seconds]
I0330 07:41:26.091] [sig-apps] Deployment
I0330 07:41:26.091] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
I0330 07:41:26.091]   deployment should delete old replica sets [Conformance]
I0330 07:41:26.091]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:26.091] ------------------------------
I0330 07:41:26.091] {"msg":"PASSED [sig-apps] Deployment deployment should delete old replica sets [Conformance]","total":340,"completed":319,"skipped":5045,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should list all daemon and delete a collection of daemons with a label selector [Conformance]"]}
I0330 07:41:26.091] SSSSSSSSSSSSSSSSS
I0330 07:41:26.092] ------------------------------
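During the Deployment test above the log shows a watch drop and recovery: "Failed to get event! Re-creating the watcher." followed by "Restarting RetryWatcher at RV=\"28569\"". The key idea is resuming the watch from the last observed resourceVersion so no events are lost. A hedged, self-contained sketch of that resume-from-RV loop (the event source is a stand-in for an API-server watch, not client-go's actual RetryWatcher):

```python
# Illustrative model of retry-watch semantics: on a dropped watch,
# re-open from the last resourceVersion that was successfully observed.

class WatchDropped(Exception):
    """Stand-in for a watch connection failure."""

def watch_with_retry(open_watch, start_rv, max_restarts=5):
    """Consume events, re-opening the watch at the last seen RV on drops."""
    rv = start_rv
    seen = []
    restarts = 0
    while restarts <= max_restarts:
        try:
            for event in open_watch(rv):
                seen.append(event)
                rv = event["rv"]  # remember progress for a possible restart
            return seen  # watch ended cleanly
        except WatchDropped:
            restarts += 1  # re-create the watcher at RV=rv
    raise RuntimeError("watch kept dropping")

def flaky_source(events, drop_after):
    """Watch factory that fails once after yielding drop_after events."""
    state = {"dropped": False}
    def open_watch(start_rv):
        count = 0
        for ev in events:
            if ev["rv"] <= start_rv:
                continue  # already delivered before the restart
            if not state["dropped"] and count == drop_after:
                state["dropped"] = True
                raise WatchDropped()
            count += 1
            yield ev
    return open_watch

events = [{"rv": 28569 + i, "obj": f"event-{i}"} for i in range(1, 5)]
seen = watch_with_retry(flaky_source(events, drop_after=2), start_rv=28569)
```

Despite the mid-stream drop, every event from RV 28570 through 28573 is delivered exactly once, which is the property the restart message in the log relies on.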
I0330 07:41:26.092] [sig-storage] Projected secret 
I0330 07:41:26.092]   should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
I0330 07:41:26.092]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:26.092] [BeforeEach] [sig-storage] Projected secret
... skipping 9 lines ...
I0330 07:41:26.094] I0330 07:32:33.625666      19 reflector.go:255] Listing and watching *v1.ServiceAccount from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0330 07:41:26.094] [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
I0330 07:41:26.094]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:26.095] I0330 07:32:33.628705      19 reflector.go:225] Stopping reflector *v1.ServiceAccount (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0330 07:41:26.095] STEP: Creating projection with secret that has name projected-secret-test-eac49aab-2d1c-45c3-8a1b-549a1889bc6a
I0330 07:41:26.095] STEP: Creating a pod to test consume secrets
I0330 07:41:26.095] Mar 30 07:32:33.637: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-c662d197-3315-46fb-8eff-ec9203ac1d66" in namespace "projected-3160" to be "Succeeded or Failed"
I0330 07:41:26.096] Mar 30 07:32:33.640: INFO: Pod "pod-projected-secrets-c662d197-3315-46fb-8eff-ec9203ac1d66": Phase="Pending", Reason="", readiness=false. Elapsed: 3.850169ms
I0330 07:41:26.096] Mar 30 07:32:35.649: INFO: Pod "pod-projected-secrets-c662d197-3315-46fb-8eff-ec9203ac1d66": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.012241153s
I0330 07:41:26.096] STEP: Saw pod success
I0330 07:41:26.096] Mar 30 07:32:35.649: INFO: Pod "pod-projected-secrets-c662d197-3315-46fb-8eff-ec9203ac1d66" satisfied condition "Succeeded or Failed"
I0330 07:41:26.096] Mar 30 07:32:35.652: INFO: Trying to get logs from node kind-worker pod pod-projected-secrets-c662d197-3315-46fb-8eff-ec9203ac1d66 container projected-secret-volume-test: <nil>
I0330 07:41:26.096] STEP: delete the pod
I0330 07:41:26.096] Mar 30 07:32:35.667: INFO: Waiting for pod pod-projected-secrets-c662d197-3315-46fb-8eff-ec9203ac1d66 to disappear
I0330 07:41:26.097] Mar 30 07:32:35.671: INFO: Pod pod-projected-secrets-c662d197-3315-46fb-8eff-ec9203ac1d66 no longer exists
I0330 07:41:26.097] [AfterEach] [sig-storage] Projected secret
I0330 07:41:26.097]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
I0330 07:41:26.097] Mar 30 07:32:35.671: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0330 07:41:26.097] STEP: Destroying namespace "projected-3160" for this suite.
I0330 07:41:26.098] •{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":340,"completed":320,"skipped":5062,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should list all daemon and delete a collection of daemons with a label selector [Conformance]"]}
I0330 07:41:26.098] SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
I0330 07:41:26.098] ------------------------------
I0330 07:41:26.098] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
I0330 07:41:26.098]   should honor timeout [Conformance]
I0330 07:41:26.098]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:26.098] [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
... skipping 20 lines ...
I0330 07:41:26.101] Mar 30 07:32:39.327: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
I0330 07:41:26.102] [It] should honor timeout [Conformance]
I0330 07:41:26.102]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:26.102] STEP: Setting timeout (1s) shorter than webhook latency (5s)
I0330 07:41:26.102] STEP: Registering slow webhook via the AdmissionRegistration API
I0330 07:41:26.102] STEP: Request fails when timeout (1s) is shorter than slow webhook latency (5s)
I0330 07:41:26.102] STEP: Having no error when timeout is shorter than webhook latency and failure policy is ignore
I0330 07:41:26.102] STEP: Registering slow webhook via the AdmissionRegistration API
I0330 07:41:26.102] STEP: Having no error when timeout is longer than webhook latency
I0330 07:41:26.103] STEP: Registering slow webhook via the AdmissionRegistration API
I0330 07:41:26.103] STEP: Having no error when timeout is empty (defaulted to 10s in v1)
I0330 07:41:26.103] STEP: Registering slow webhook via the AdmissionRegistration API
I0330 07:41:26.103] [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
I0330 07:41:26.103]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
I0330 07:41:26.103] Mar 30 07:32:51.436: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0330 07:41:26.103] STEP: Destroying namespace "webhook-9348" for this suite.
I0330 07:41:26.103] STEP: Destroying namespace "webhook-9348-markers" for this suite.
... skipping 3 lines ...
I0330 07:41:26.104] • [SLOW TEST:15.812 seconds]
I0330 07:41:26.104] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
I0330 07:41:26.104] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
I0330 07:41:26.104]   should honor timeout [Conformance]
I0330 07:41:26.105]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:26.105] ------------------------------
I0330 07:41:26.105] {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","total":340,"completed":321,"skipped":5101,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should list all daemon and delete a collection of daemons with a label selector [Conformance]"]}
I0330 07:41:26.105] SSSSSSSSSS
I0330 07:41:26.105] ------------------------------
I0330 07:41:26.105] [sig-storage] ConfigMap 
I0330 07:41:26.105]   binary data should be reflected in volume [NodeConformance] [Conformance]
I0330 07:41:26.105]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:26.105] [BeforeEach] [sig-storage] ConfigMap
... skipping 15 lines ...
I0330 07:41:26.108] STEP: Waiting for pod with text data
I0330 07:41:26.108] STEP: Waiting for pod with binary data
I0330 07:41:26.108] [AfterEach] [sig-storage] ConfigMap
I0330 07:41:26.108]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
I0330 07:41:26.108] Mar 30 07:32:53.581: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0330 07:41:26.108] STEP: Destroying namespace "configmap-1504" for this suite.
I0330 07:41:26.108] •{"msg":"PASSED [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance]","total":340,"completed":322,"skipped":5111,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should list all daemon and delete a collection of daemons with a label selector [Conformance]"]}
I0330 07:41:26.109] SSSS
I0330 07:41:26.109] ------------------------------
I0330 07:41:26.109] [sig-storage] EmptyDir volumes 
I0330 07:41:26.109]   should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
I0330 07:41:26.109]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:26.109] [BeforeEach] [sig-storage] EmptyDir volumes
... skipping 8 lines ...
I0330 07:41:26.110] I0330 07:32:53.615529      19 reflector.go:219] Starting reflector *v1.ServiceAccount (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0330 07:41:26.110] I0330 07:32:53.615541      19 reflector.go:255] Listing and watching *v1.ServiceAccount from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0330 07:41:26.111] [It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
I0330 07:41:26.111]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:26.111] STEP: Creating a pod to test emptydir 0777 on node default medium
I0330 07:41:26.111] I0330 07:32:53.618505      19 reflector.go:225] Stopping reflector *v1.ServiceAccount (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0330 07:41:26.111] Mar 30 07:32:53.627: INFO: Waiting up to 5m0s for pod "pod-6852fbbe-4ae1-4274-a64a-d7bf32f01ada" in namespace "emptydir-9283" to be "Succeeded or Failed"
I0330 07:41:26.111] Mar 30 07:32:53.634: INFO: Pod "pod-6852fbbe-4ae1-4274-a64a-d7bf32f01ada": Phase="Pending", Reason="", readiness=false. Elapsed: 6.393412ms
I0330 07:41:26.112] Mar 30 07:32:55.642: INFO: Pod "pod-6852fbbe-4ae1-4274-a64a-d7bf32f01ada": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.013993971s
I0330 07:41:26.112] STEP: Saw pod success
I0330 07:41:26.112] Mar 30 07:32:55.642: INFO: Pod "pod-6852fbbe-4ae1-4274-a64a-d7bf32f01ada" satisfied condition "Succeeded or Failed"
I0330 07:41:26.112] Mar 30 07:32:55.644: INFO: Trying to get logs from node kind-worker pod pod-6852fbbe-4ae1-4274-a64a-d7bf32f01ada container test-container: <nil>
I0330 07:41:26.112] STEP: delete the pod
I0330 07:41:26.112] Mar 30 07:32:55.658: INFO: Waiting for pod pod-6852fbbe-4ae1-4274-a64a-d7bf32f01ada to disappear
I0330 07:41:26.112] Mar 30 07:32:55.660: INFO: Pod pod-6852fbbe-4ae1-4274-a64a-d7bf32f01ada no longer exists
I0330 07:41:26.112] [AfterEach] [sig-storage] EmptyDir volumes
I0330 07:41:26.112]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
I0330 07:41:26.113] Mar 30 07:32:55.660: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0330 07:41:26.113] STEP: Destroying namespace "emptydir-9283" for this suite.
I0330 07:41:26.113] •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":340,"completed":323,"skipped":5115,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should list all daemon and delete a collection of daemons with a label selector [Conformance]"]}
I0330 07:41:26.113] SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
I0330 07:41:26.113] ------------------------------
I0330 07:41:26.113] [sig-scheduling] SchedulerPredicates [Serial] 
I0330 07:41:26.113]   validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]
I0330 07:41:26.114]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:26.114] [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
... skipping 45 lines ...
I0330 07:41:26.121] Mar 30 07:37:59.820: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0330 07:41:26.121] STEP: Destroying namespace "sched-pred-6024" for this suite.
I0330 07:41:26.121] [AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
I0330 07:41:26.121]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81
I0330 07:41:26.121] 
I0330 07:41:26.121] • [SLOW TEST:304.166 seconds]
I0330 07:41:26.121] I0330 07:37:59.834894      19 request.go:857] Error in request: resource name may not be empty
I0330 07:41:26.121] [sig-scheduling] SchedulerPredicates [Serial]
I0330 07:41:26.122] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
I0330 07:41:26.122]   validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]
I0330 07:41:26.122]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:26.122] ------------------------------
I0330 07:41:26.123] {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]","total":340,"completed":324,"skipped":5166,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should list all daemon and delete a collection of daemons with a label selector [Conformance]"]}
I0330 07:41:26.123] SSSSSSSSSSSSSSSSSSSSSSSSS
I0330 07:41:26.123] ------------------------------
I0330 07:41:26.123] [sig-api-machinery] Garbage collector 
I0330 07:41:26.123]   should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
I0330 07:41:26.123]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:26.124] [BeforeEach] [sig-api-machinery] Garbage collector
... skipping 12 lines ...
I0330 07:41:26.126]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:26.127] STEP: create the rc
I0330 07:41:26.127] STEP: delete the rc
I0330 07:41:26.127] STEP: wait for the rc to be deleted
I0330 07:41:26.127] STEP: Gathering metrics
I0330 07:41:26.127] W0330 07:38:05.899810      19 metrics_grabber.go:105] Did not receive an external client interface. Grabbing metrics from ClusterAutoscaler is disabled.
I0330 07:41:26.128] Mar 30 07:39:07.919: INFO: MetricsGrabber failed grab metrics. Skipping metrics gathering.
I0330 07:41:26.128] [AfterEach] [sig-api-machinery] Garbage collector
I0330 07:41:26.128]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
I0330 07:41:26.128] Mar 30 07:39:07.919: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0330 07:41:26.128] STEP: Destroying namespace "gc-7930" for this suite.
I0330 07:41:26.128] 
I0330 07:41:26.128] • [SLOW TEST:68.096 seconds]
I0330 07:41:26.128] [sig-api-machinery] Garbage collector
I0330 07:41:26.129] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
I0330 07:41:26.129]   should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
I0330 07:41:26.129]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:26.129] ------------------------------
I0330 07:41:26.130] {"msg":"PASSED [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]","total":340,"completed":325,"skipped":5191,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should list all daemon and delete a collection of daemons with a label selector [Conformance]"]}
I0330 07:41:26.130] SSSSSSSSSSSSSSSSSSSSSSSSS
I0330 07:41:26.130] ------------------------------
I0330 07:41:26.130] [sig-storage] ConfigMap 
I0330 07:41:26.130]   should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
I0330 07:41:26.130]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:26.130] [BeforeEach] [sig-storage] ConfigMap
... skipping 9 lines ...
I0330 07:41:26.133] I0330 07:39:07.974028      19 reflector.go:255] Listing and watching *v1.ServiceAccount from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0330 07:41:26.133] [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
I0330 07:41:26.133]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:26.133] I0330 07:39:07.977917      19 reflector.go:225] Stopping reflector *v1.ServiceAccount (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0330 07:41:26.134] STEP: Creating configMap with name configmap-test-volume-map-69d08e29-d18f-4f0a-9c4f-ca36c0283737
I0330 07:41:26.134] STEP: Creating a pod to test consume configMaps
I0330 07:41:26.134] Mar 30 07:39:07.991: INFO: Waiting up to 5m0s for pod "pod-configmaps-01b8a049-9351-4ce1-854c-9d2a9847b299" in namespace "configmap-3627" to be "Succeeded or Failed"
I0330 07:41:26.134] Mar 30 07:39:07.996: INFO: Pod "pod-configmaps-01b8a049-9351-4ce1-854c-9d2a9847b299": Phase="Pending", Reason="", readiness=false. Elapsed: 4.517682ms
I0330 07:41:26.134] Mar 30 07:39:10.003: INFO: Pod "pod-configmaps-01b8a049-9351-4ce1-854c-9d2a9847b299": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.01165023s
I0330 07:41:26.134] STEP: Saw pod success
I0330 07:41:26.135] Mar 30 07:39:10.003: INFO: Pod "pod-configmaps-01b8a049-9351-4ce1-854c-9d2a9847b299" satisfied condition "Succeeded or Failed"
I0330 07:41:26.135] Mar 30 07:39:10.006: INFO: Trying to get logs from node kind-worker2 pod pod-configmaps-01b8a049-9351-4ce1-854c-9d2a9847b299 container agnhost-container: <nil>
I0330 07:41:26.135] STEP: delete the pod
I0330 07:41:26.135] Mar 30 07:39:10.041: INFO: Waiting for pod pod-configmaps-01b8a049-9351-4ce1-854c-9d2a9847b299 to disappear
I0330 07:41:26.135] Mar 30 07:39:10.044: INFO: Pod pod-configmaps-01b8a049-9351-4ce1-854c-9d2a9847b299 no longer exists
I0330 07:41:26.135] [AfterEach] [sig-storage] ConfigMap
I0330 07:41:26.135]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
I0330 07:41:26.136] Mar 30 07:39:10.044: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0330 07:41:26.136] STEP: Destroying namespace "configmap-3627" for this suite.
I0330 07:41:26.136] •{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":340,"completed":326,"skipped":5216,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should list all daemon and delete a collection of daemons with a label selector [Conformance]"]}
I0330 07:41:26.136] SSSSSSSSSSSSSS
I0330 07:41:26.136] ------------------------------
I0330 07:41:26.136] [sig-node] Probing container 
I0330 07:41:26.136]   with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
I0330 07:41:26.137]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:26.137] [BeforeEach] [sig-node] Probing container
... skipping 33 lines ...
I0330 07:41:26.142] • [SLOW TEST:22.071 seconds]
I0330 07:41:26.143] [sig-node] Probing container
I0330 07:41:26.143] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/node/framework.go:23
I0330 07:41:26.144]   with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
I0330 07:41:26.144]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:26.144] ------------------------------
I0330 07:41:26.145] {"msg":"PASSED [sig-node] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","total":340,"completed":327,"skipped":5230,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should list all daemon and delete a collection of daemons with a label selector [Conformance]"]}
I0330 07:41:26.145] SSSSSSSSSSSSSSSSSS
I0330 07:41:26.145] ------------------------------
I0330 07:41:26.146] [sig-cli] Kubectl client Kubectl server-side dry-run 
I0330 07:41:26.146]   should check if kubectl can dry-run update Pods [Conformance]
I0330 07:41:26.146]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:26.146] [BeforeEach] [sig-cli] Kubectl client
... skipping 34 lines ...
I0330 07:41:26.154] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
I0330 07:41:26.154]   Kubectl server-side dry-run
I0330 07:41:26.154]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:903
I0330 07:41:26.154]     should check if kubectl can dry-run update Pods [Conformance]
I0330 07:41:26.154]     /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:26.155] ------------------------------
I0330 07:41:26.155] {"msg":"PASSED [sig-cli] Kubectl client Kubectl server-side dry-run should check if kubectl can dry-run update Pods [Conformance]","total":340,"completed":328,"skipped":5248,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should list all daemon and delete a collection of daemons with a label selector [Conformance]"]}
I0330 07:41:26.156] SSSSSSSSSSSSSSSSSSSSSSSSSS
I0330 07:41:26.156] ------------------------------
I0330 07:41:26.156] [sig-node] Downward API 
I0330 07:41:26.156]   should provide pod UID as env vars [NodeConformance] [Conformance]
I0330 07:41:26.156]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:26.156] [BeforeEach] [sig-node] Downward API
... skipping 8 lines ...
I0330 07:41:26.158] I0330 07:39:43.280014      19 reflector.go:219] Starting reflector *v1.ServiceAccount (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0330 07:41:26.159] I0330 07:39:43.280062      19 reflector.go:255] Listing and watching *v1.ServiceAccount from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0330 07:41:26.159] [It] should provide pod UID as env vars [NodeConformance] [Conformance]
I0330 07:41:26.159]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:26.159] I0330 07:39:43.283237      19 reflector.go:225] Stopping reflector *v1.ServiceAccount (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0330 07:41:26.160] STEP: Creating a pod to test downward api env vars
I0330 07:41:26.160] Mar 30 07:39:43.289: INFO: Waiting up to 5m0s for pod "downward-api-a0b29653-3982-4222-96c2-d1f8967cfe45" in namespace "downward-api-4376" to be "Succeeded or Failed"
I0330 07:41:26.160] Mar 30 07:39:43.293: INFO: Pod "downward-api-a0b29653-3982-4222-96c2-d1f8967cfe45": Phase="Pending", Reason="", readiness=false. Elapsed: 3.189898ms
I0330 07:41:26.160] Mar 30 07:39:45.299: INFO: Pod "downward-api-a0b29653-3982-4222-96c2-d1f8967cfe45": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.009206894s
I0330 07:41:26.161] STEP: Saw pod success
I0330 07:41:26.161] Mar 30 07:39:45.299: INFO: Pod "downward-api-a0b29653-3982-4222-96c2-d1f8967cfe45" satisfied condition "Succeeded or Failed"
I0330 07:41:26.161] Mar 30 07:39:45.302: INFO: Trying to get logs from node kind-worker2 pod downward-api-a0b29653-3982-4222-96c2-d1f8967cfe45 container dapi-container: <nil>
I0330 07:41:26.161] STEP: delete the pod
I0330 07:41:26.161] Mar 30 07:39:45.319: INFO: Waiting for pod downward-api-a0b29653-3982-4222-96c2-d1f8967cfe45 to disappear
I0330 07:41:26.162] Mar 30 07:39:45.322: INFO: Pod downward-api-a0b29653-3982-4222-96c2-d1f8967cfe45 no longer exists
I0330 07:41:26.162] [AfterEach] [sig-node] Downward API
I0330 07:41:26.162]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
I0330 07:41:26.162] Mar 30 07:39:45.322: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0330 07:41:26.162] STEP: Destroying namespace "downward-api-4376" for this suite.
I0330 07:41:26.163] •{"msg":"PASSED [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance]","total":340,"completed":329,"skipped":5274,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should list all daemon and delete a collection of daemons with a label selector [Conformance]"]}
I0330 07:41:26.163] SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
I0330 07:41:26.163] ------------------------------
I0330 07:41:26.163] [sig-scheduling] SchedulerPredicates [Serial] 
I0330 07:41:26.163]   validates resource limits of pods that are allowed to run  [Conformance]
I0330 07:41:26.164]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:26.164] [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
... skipping 67 lines ...
I0330 07:41:26.178] [AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
I0330 07:41:26.178]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
I0330 07:41:26.179] Mar 30 07:39:48.529: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0330 07:41:26.179] STEP: Destroying namespace "sched-pred-2801" for this suite.
I0330 07:41:26.179] [AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
I0330 07:41:26.179]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81
I0330 07:41:26.180] •{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]","total":340,"completed":330,"skipped":5338,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should list all daemon and delete a collection of daemons with a label selector [Conformance]"]}
I0330 07:41:26.180] 
I0330 07:41:26.180] ------------------------------
I0330 07:41:26.180] [sig-storage] Secrets 
I0330 07:41:26.180]   should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
I0330 07:41:26.180]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:26.181] [BeforeEach] [sig-storage] Secrets
I0330 07:41:26.181]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:185
I0330 07:41:26.181] STEP: Creating a kubernetes client
I0330 07:41:26.181] Mar 30 07:39:48.541: INFO: >>> kubeConfig: /tmp/kubeconfig-963336331
I0330 07:41:26.181] I0330 07:39:48.541132      19 request.go:857] Error in request: resource name may not be empty
I0330 07:41:26.181] STEP: Building a namespace api object, basename secrets
I0330 07:41:26.182] I0330 07:39:48.551975      19 reflector.go:219] Starting reflector *v1.ServiceAccount (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0330 07:41:26.182] I0330 07:39:48.552026      19 reflector.go:255] Listing and watching *v1.ServiceAccount from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0330 07:41:26.182] STEP: Waiting for a default service account to be provisioned in namespace
I0330 07:41:26.182] I0330 07:39:48.573358      19 reflector.go:225] Stopping reflector *v1.ServiceAccount (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0330 07:41:26.183] I0330 07:39:48.573525      19 reflector.go:219] Starting reflector *v1.ServiceAccount (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0330 07:41:26.183] I0330 07:39:48.573545      19 reflector.go:255] Listing and watching *v1.ServiceAccount from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0330 07:41:26.183] [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
I0330 07:41:26.183]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:26.184] I0330 07:39:48.576446      19 reflector.go:225] Stopping reflector *v1.ServiceAccount (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0330 07:41:26.184] STEP: Creating secret with name secret-test-af1da49d-8f91-4e53-bed3-ee3066292c2e
I0330 07:41:26.184] STEP: Creating a pod to test consume secrets
I0330 07:41:26.184] Mar 30 07:39:48.586: INFO: Waiting up to 5m0s for pod "pod-secrets-dd8ec560-ca42-4114-99cd-b6533aa9a933" in namespace "secrets-9762" to be "Succeeded or Failed"
I0330 07:41:26.185] Mar 30 07:39:48.589: INFO: Pod "pod-secrets-dd8ec560-ca42-4114-99cd-b6533aa9a933": Phase="Pending", Reason="", readiness=false. Elapsed: 2.624576ms
I0330 07:41:26.185] Mar 30 07:39:50.595: INFO: Pod "pod-secrets-dd8ec560-ca42-4114-99cd-b6533aa9a933": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.009004491s
I0330 07:41:26.185] STEP: Saw pod success
I0330 07:41:26.185] Mar 30 07:39:50.595: INFO: Pod "pod-secrets-dd8ec560-ca42-4114-99cd-b6533aa9a933" satisfied condition "Succeeded or Failed"
I0330 07:41:26.186] Mar 30 07:39:50.598: INFO: Trying to get logs from node kind-worker2 pod pod-secrets-dd8ec560-ca42-4114-99cd-b6533aa9a933 container secret-volume-test: <nil>
I0330 07:41:26.186] STEP: delete the pod
I0330 07:41:26.186] Mar 30 07:39:50.610: INFO: Waiting for pod pod-secrets-dd8ec560-ca42-4114-99cd-b6533aa9a933 to disappear
I0330 07:41:26.186] Mar 30 07:39:50.612: INFO: Pod pod-secrets-dd8ec560-ca42-4114-99cd-b6533aa9a933 no longer exists
I0330 07:41:26.186] [AfterEach] [sig-storage] Secrets
I0330 07:41:26.187]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
I0330 07:41:26.187] Mar 30 07:39:50.613: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0330 07:41:26.187] STEP: Destroying namespace "secrets-9762" for this suite.
I0330 07:41:26.188] •{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":340,"completed":331,"skipped":5338,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should list all daemon and delete a collection of daemons with a label selector [Conformance]"]}
I0330 07:41:26.188] SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
I0330 07:41:26.188] ------------------------------
I0330 07:41:26.188] [sig-storage] Secrets 
I0330 07:41:26.188]   should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
I0330 07:41:26.188]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:26.189] [BeforeEach] [sig-storage] Secrets
... skipping 9 lines ...
I0330 07:41:26.191] I0330 07:39:50.644617      19 reflector.go:255] Listing and watching *v1.ServiceAccount from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0330 07:41:26.191] [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
I0330 07:41:26.191]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:26.192] STEP: Creating secret with name secret-test-map-d0844639-9a2f-4926-a278-44309c5c2d25
I0330 07:41:26.192] I0330 07:39:50.647439      19 reflector.go:225] Stopping reflector *v1.ServiceAccount (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0330 07:41:26.192] STEP: Creating a pod to test consume secrets
I0330 07:41:26.192] Mar 30 07:39:50.657: INFO: Waiting up to 5m0s for pod "pod-secrets-ccf1db69-8415-4fe1-ab3a-54827236b5cc" in namespace "secrets-6713" to be "Succeeded or Failed"
I0330 07:41:26.193] Mar 30 07:39:50.667: INFO: Pod "pod-secrets-ccf1db69-8415-4fe1-ab3a-54827236b5cc": Phase="Pending", Reason="", readiness=false. Elapsed: 10.464381ms
I0330 07:41:26.193] Mar 30 07:39:52.675: INFO: Pod "pod-secrets-ccf1db69-8415-4fe1-ab3a-54827236b5cc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.017888565s
I0330 07:41:26.193] STEP: Saw pod success
I0330 07:41:26.193] Mar 30 07:39:52.675: INFO: Pod "pod-secrets-ccf1db69-8415-4fe1-ab3a-54827236b5cc" satisfied condition "Succeeded or Failed"
I0330 07:41:26.193] Mar 30 07:39:52.677: INFO: Trying to get logs from node kind-worker2 pod pod-secrets-ccf1db69-8415-4fe1-ab3a-54827236b5cc container secret-volume-test: <nil>
I0330 07:41:26.194] STEP: delete the pod
I0330 07:41:26.194] Mar 30 07:39:52.694: INFO: Waiting for pod pod-secrets-ccf1db69-8415-4fe1-ab3a-54827236b5cc to disappear
I0330 07:41:26.194] Mar 30 07:39:52.697: INFO: Pod pod-secrets-ccf1db69-8415-4fe1-ab3a-54827236b5cc no longer exists
I0330 07:41:26.194] [AfterEach] [sig-storage] Secrets
I0330 07:41:26.194]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
I0330 07:41:26.195] Mar 30 07:39:52.697: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0330 07:41:26.195] STEP: Destroying namespace "secrets-6713" for this suite.
I0330 07:41:26.195] •{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":340,"completed":332,"skipped":5367,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should list all daemon and delete a collection of daemons with a label selector [Conformance]"]}
I0330 07:41:26.195] SS
I0330 07:41:26.195] ------------------------------
I0330 07:41:26.196] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch 
I0330 07:41:26.196]   watch on custom resource definition objects [Conformance]
I0330 07:41:26.196]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:26.196] [BeforeEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin]
... skipping 33 lines ...
I0330 07:41:26.207] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
I0330 07:41:26.207]   CustomResourceDefinition Watch
I0330 07:41:26.207]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_watch.go:42
I0330 07:41:26.207]     watch on custom resource definition objects [Conformance]
I0330 07:41:26.208]     /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:26.208] ------------------------------
I0330 07:41:26.208] {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance]","total":340,"completed":333,"skipped":5369,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should list all daemon and delete a collection of daemons with a label selector [Conformance]"]}
I0330 07:41:26.208] SSSSSSSS
I0330 07:41:26.209] ------------------------------
I0330 07:41:26.209] [sig-apps] ReplicationController 
I0330 07:41:26.209]   should serve a basic image on each replica with a public image  [Conformance]
I0330 07:41:26.209]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:26.209] [BeforeEach] [sig-apps] ReplicationController
... skipping 27 lines ...
I0330 07:41:26.216] • [SLOW TEST:10.088 seconds]
I0330 07:41:26.217] [sig-apps] ReplicationController
I0330 07:41:26.217] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
I0330 07:41:26.217]   should serve a basic image on each replica with a public image  [Conformance]
I0330 07:41:26.217]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:26.217] ------------------------------
I0330 07:41:26.218] {"msg":"PASSED [sig-apps] ReplicationController should serve a basic image on each replica with a public image  [Conformance]","total":340,"completed":334,"skipped":5377,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should list all daemon and delete a collection of daemons with a label selector [Conformance]"]}
I0330 07:41:26.218] SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
I0330 07:41:26.218] ------------------------------
I0330 07:41:26.218] [sig-storage] Downward API volume 
I0330 07:41:26.218]   should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
I0330 07:41:26.219]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:26.219] [BeforeEach] [sig-storage] Downward API volume
... skipping 10 lines ...
I0330 07:41:26.221] I0330 07:41:06.003997      19 reflector.go:225] Stopping reflector *v1.ServiceAccount (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0330 07:41:26.221] [BeforeEach] [sig-storage] Downward API volume
I0330 07:41:26.221]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/storage/downwardapi_volume.go:41
I0330 07:41:26.221] [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
I0330 07:41:26.221]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:26.221] STEP: Creating a pod to test downward API volume plugin
I0330 07:41:26.222] Mar 30 07:41:06.015: INFO: Waiting up to 5m0s for pod "downwardapi-volume-428e1b25-9c02-4759-b977-80a63e1f7c77" in namespace "downward-api-9255" to be "Succeeded or Failed"
I0330 07:41:26.222] Mar 30 07:41:06.018: INFO: Pod "downwardapi-volume-428e1b25-9c02-4759-b977-80a63e1f7c77": Phase="Pending", Reason="", readiness=false. Elapsed: 3.537364ms
I0330 07:41:26.222] Mar 30 07:41:08.023: INFO: Pod "downwardapi-volume-428e1b25-9c02-4759-b977-80a63e1f7c77": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.008263517s
I0330 07:41:26.222] STEP: Saw pod success
I0330 07:41:26.222] Mar 30 07:41:08.023: INFO: Pod "downwardapi-volume-428e1b25-9c02-4759-b977-80a63e1f7c77" satisfied condition "Succeeded or Failed"
I0330 07:41:26.222] Mar 30 07:41:08.026: INFO: Trying to get logs from node kind-worker2 pod downwardapi-volume-428e1b25-9c02-4759-b977-80a63e1f7c77 container client-container: <nil>
I0330 07:41:26.222] STEP: delete the pod
I0330 07:41:26.223] Mar 30 07:41:08.044: INFO: Waiting for pod downwardapi-volume-428e1b25-9c02-4759-b977-80a63e1f7c77 to disappear
I0330 07:41:26.223] Mar 30 07:41:08.047: INFO: Pod downwardapi-volume-428e1b25-9c02-4759-b977-80a63e1f7c77 no longer exists
I0330 07:41:26.223] [AfterEach] [sig-storage] Downward API volume
I0330 07:41:26.223]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
I0330 07:41:26.223] Mar 30 07:41:08.047: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0330 07:41:26.223] STEP: Destroying namespace "downward-api-9255" for this suite.
I0330 07:41:26.224] •{"msg":"PASSED [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":340,"completed":335,"skipped":5406,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should list all daemon and delete a collection of daemons with a label selector [Conformance]"]}
I0330 07:41:26.224] S
I0330 07:41:26.224] ------------------------------
I0330 07:41:26.224] [sig-storage] Secrets 
I0330 07:41:26.224]   should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
I0330 07:41:26.224]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:26.224] [BeforeEach] [sig-storage] Secrets
... skipping 9 lines ...
I0330 07:41:26.226] I0330 07:41:08.081606      19 reflector.go:255] Listing and watching *v1.ServiceAccount from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0330 07:41:26.226] [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
I0330 07:41:26.226]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:26.227] I0330 07:41:08.084821      19 reflector.go:225] Stopping reflector *v1.ServiceAccount (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0330 07:41:26.227] STEP: Creating secret with name secret-test-96be08ea-af94-4e79-89a5-d0362ad35b32
I0330 07:41:26.227] STEP: Creating a pod to test consume secrets
I0330 07:41:26.227] Mar 30 07:41:08.095: INFO: Waiting up to 5m0s for pod "pod-secrets-da0564d9-a1d4-4467-8a88-c6532779e0c6" in namespace "secrets-709" to be "Succeeded or Failed"
I0330 07:41:26.227] Mar 30 07:41:08.101: INFO: Pod "pod-secrets-da0564d9-a1d4-4467-8a88-c6532779e0c6": Phase="Pending", Reason="", readiness=false. Elapsed: 6.395715ms
I0330 07:41:26.227] Mar 30 07:41:10.108: INFO: Pod "pod-secrets-da0564d9-a1d4-4467-8a88-c6532779e0c6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.012549457s
I0330 07:41:26.228] STEP: Saw pod success
I0330 07:41:26.228] Mar 30 07:41:10.108: INFO: Pod "pod-secrets-da0564d9-a1d4-4467-8a88-c6532779e0c6" satisfied condition "Succeeded or Failed"
I0330 07:41:26.228] Mar 30 07:41:10.111: INFO: Trying to get logs from node kind-worker pod pod-secrets-da0564d9-a1d4-4467-8a88-c6532779e0c6 container secret-volume-test: <nil>
I0330 07:41:26.228] STEP: delete the pod
I0330 07:41:26.228] Mar 30 07:41:10.143: INFO: Waiting for pod pod-secrets-da0564d9-a1d4-4467-8a88-c6532779e0c6 to disappear
I0330 07:41:26.228] Mar 30 07:41:10.146: INFO: Pod pod-secrets-da0564d9-a1d4-4467-8a88-c6532779e0c6 no longer exists
I0330 07:41:26.228] [AfterEach] [sig-storage] Secrets
I0330 07:41:26.228]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
I0330 07:41:26.229] Mar 30 07:41:10.146: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0330 07:41:26.229] STEP: Destroying namespace "secrets-709" for this suite.
I0330 07:41:26.229] •{"msg":"PASSED [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":340,"completed":336,"skipped":5407,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should list all daemon and delete a collection of daemons with a label selector [Conformance]"]}
I0330 07:41:26.229] S
I0330 07:41:26.229] ------------------------------
I0330 07:41:26.229] [sig-network] Services 
I0330 07:41:26.229]   should serve a basic endpoint from pods  [Conformance]
I0330 07:41:26.229]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:26.229] [BeforeEach] [sig-network] Services
... skipping 11 lines ...
I0330 07:41:26.231]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:746
I0330 07:41:26.231] I0330 07:41:10.193748      19 reflector.go:225] Stopping reflector *v1.ServiceAccount (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0330 07:41:26.231] [It] should serve a basic endpoint from pods  [Conformance]
I0330 07:41:26.232]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:26.232] STEP: creating service endpoint-test2 in namespace services-3878
I0330 07:41:26.232] STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-3878 to expose endpoints map[]
I0330 07:41:26.232] Mar 30 07:41:10.213: INFO: Failed to get Endpoints object: endpoints "endpoint-test2" not found
I0330 07:41:26.232] Mar 30 07:41:11.223: INFO: successfully validated that service endpoint-test2 in namespace services-3878 exposes endpoints map[]
I0330 07:41:26.232] STEP: Creating pod pod1 in namespace services-3878
I0330 07:41:26.232] Mar 30 07:41:11.234: INFO: The status of Pod pod1 is Pending, waiting for it to be Running (with Ready = true)
I0330 07:41:26.232] Mar 30 07:41:13.243: INFO: The status of Pod pod1 is Running (Ready = true)
I0330 07:41:26.233] STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-3878 to expose endpoints map[pod1:[80]]
I0330 07:41:26.233] Mar 30 07:41:13.259: INFO: successfully validated that service endpoint-test2 in namespace services-3878 exposes endpoints map[pod1:[80]]
... skipping 18 lines ...
I0330 07:41:26.236] • [SLOW TEST:5.270 seconds]
I0330 07:41:26.236] [sig-network] Services
I0330 07:41:26.236] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/common/framework.go:23
I0330 07:41:26.236]   should serve a basic endpoint from pods  [Conformance]
I0330 07:41:26.236]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:26.236] ------------------------------
I0330 07:41:26.237] {"msg":"PASSED [sig-network] Services should serve a basic endpoint from pods  [Conformance]","total":340,"completed":337,"skipped":5408,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should list all daemon and delete a collection of daemons with a label selector [Conformance]"]}
I0330 07:41:26.237] SSSSSS
I0330 07:41:26.237] ------------------------------
I0330 07:41:26.237] [sig-storage] ConfigMap 
I0330 07:41:26.237]   should be immutable if `immutable` field is set [Conformance]
I0330 07:41:26.237]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:26.237] [BeforeEach] [sig-storage] ConfigMap
... skipping 11 lines ...
I0330 07:41:26.239]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:26.239] I0330 07:41:15.488728      19 reflector.go:225] Stopping reflector *v1.ServiceAccount (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0330 07:41:26.239] [AfterEach] [sig-storage] ConfigMap
I0330 07:41:26.239]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
I0330 07:41:26.239] Mar 30 07:41:15.553: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0330 07:41:26.239] STEP: Destroying namespace "configmap-9928" for this suite.
I0330 07:41:26.240] •{"msg":"PASSED [sig-storage] ConfigMap should be immutable if `immutable` field is set [Conformance]","total":340,"completed":338,"skipped":5414,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should list all daemon and delete a collection of daemons with a label selector [Conformance]"]}
I0330 07:41:26.240] SSSSSSSSSSSSSSSSS
I0330 07:41:26.240] ------------------------------
I0330 07:41:26.240] [sig-node] Variable Expansion 
I0330 07:41:26.240]   should allow substituting values in a container's args [NodeConformance] [Conformance]
I0330 07:41:26.240]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:26.240] [BeforeEach] [sig-node] Variable Expansion
... skipping 8 lines ...
I0330 07:41:26.241] I0330 07:41:15.605896      19 reflector.go:219] Starting reflector *v1.ServiceAccount (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0330 07:41:26.242] I0330 07:41:15.606010      19 reflector.go:255] Listing and watching *v1.ServiceAccount from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0330 07:41:26.242] I0330 07:41:15.611434      19 reflector.go:225] Stopping reflector *v1.ServiceAccount (0s) from k8s.io/client-go/tools/watch/informerwatcher.go:146
I0330 07:41:26.242] [It] should allow substituting values in a container's args [NodeConformance] [Conformance]
I0330 07:41:26.242]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:630
I0330 07:41:26.242] STEP: Creating a pod to test substitution in container's args
I0330 07:41:26.242] Mar 30 07:41:15.622: INFO: Waiting up to 5m0s for pod "var-expansion-db5e3bc5-fe67-4b5b-868e-73cb1a1699ac" in namespace "var-expansion-5474" to be "Succeeded or Failed"
I0330 07:41:26.242] Mar 30 07:41:15.632: INFO: Pod "var-expansion-db5e3bc5-fe67-4b5b-868e-73cb1a1699ac": Phase="Pending", Reason="", readiness=false. Elapsed: 9.267485ms
I0330 07:41:26.243] Mar 30 07:41:17.637: INFO: Pod "var-expansion-db5e3bc5-fe67-4b5b-868e-73cb1a1699ac": Phase="Succeeded", Reason="", readiness=false. Elapsed: 2.014850763s
I0330 07:41:26.243] STEP: Saw pod success
I0330 07:41:26.243] Mar 30 07:41:17.637: INFO: Pod "var-expansion-db5e3bc5-fe67-4b5b-868e-73cb1a1699ac" satisfied condition "Succeeded or Failed"
I0330 07:41:26.243] Mar 30 07:41:17.640: INFO: Trying to get logs from node kind-worker2 pod var-expansion-db5e3bc5-fe67-4b5b-868e-73cb1a1699ac container dapi-container: <nil>
I0330 07:41:26.243] STEP: delete the pod
I0330 07:41:26.243] Mar 30 07:41:17.657: INFO: Waiting for pod var-expansion-db5e3bc5-fe67-4b5b-868e-73cb1a1699ac to disappear
I0330 07:41:26.243] Mar 30 07:41:17.661: INFO: Pod var-expansion-db5e3bc5-fe67-4b5b-868e-73cb1a1699ac no longer exists
I0330 07:41:26.243] [AfterEach] [sig-node] Variable Expansion
I0330 07:41:26.244]   /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:186
I0330 07:41:26.244] Mar 30 07:41:17.661: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0330 07:41:26.244] STEP: Destroying namespace "var-expansion-5474" for this suite.
I0330 07:41:26.244] •{"msg":"PASSED [sig-node] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance]","total":340,"completed":339,"skipped":5431,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should list all daemon and delete a collection of daemons with a label selector [Conformance]"]}
I0330 07:41:26.244] SMar 30 07:41:17.670: INFO: Running AfterSuite actions on all nodes
I0330 07:41:26.244] Mar 30 07:41:17.670: INFO: Running AfterSuite actions on node 1
I0330 07:41:26.244] Mar 30 07:41:17.670: INFO: Skipping dumping logs from cluster
I0330 07:41:26.245] 
I0330 07:41:26.245] JUnit report was created: /tmp/results/junit_01.xml
I0330 07:41:26.245] {"msg":"Test Suite completed","total":340,"completed":339,"skipped":5432,"failed":1,"failures":["[sig-apps] Daemon set [Serial] should list all daemon and delete a collection of daemons with a label selector [Conformance]"]}
I0330 07:41:26.245] 
I0330 07:41:26.245] 
I0330 07:41:26.245] Summarizing 1 Failure:
I0330 07:41:26.245] 
I0330 07:41:26.245] [Fail] [sig-apps] Daemon set [Serial] [It] should list all daemon and delete a collection of daemons with a label selector [Conformance] 
I0330 07:41:26.245] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:201
I0330 07:41:26.245] 
I0330 07:41:26.246] Ran 340 of 5772 Specs in 6345.921 seconds
I0330 07:41:26.246] FAIL! -- 339 Passed | 1 Failed | 0 Pending | 5432 Skipped
I0330 07:41:26.246] --- FAIL: TestE2E (6345.97s)
I0330 07:41:26.246] FAIL
I0330 07:41:26.246] 
I0330 07:41:26.246] Ginkgo ran 1 suite in 1h45m47.550117666s
I0330 07:41:26.246] Test Suite Failed
I0330 07:41:26.246] + ret=1
I0330 07:41:26.246] + set +x
W0330 07:41:26.346] + cleanup
W0330 07:41:26.347] + kind export logs /workspace/_artifacts/logs
W0330 07:41:26.364] Exporting logs for cluster "kind" to:
I0330 07:41:26.465] /workspace/_artifacts/logs
... skipping 10 lines ...
W0330 07:41:35.113]     check(*cmd)
W0330 07:41:35.114]   File "/workspace/./test-infra/jenkins/../scenarios/execute.py", line 30, in check
W0330 07:41:35.114]     subprocess.check_call(cmd)
W0330 07:41:35.114]   File "/usr/lib/python2.7/subprocess.py", line 190, in check_call
W0330 07:41:35.114]     raise CalledProcessError(retcode, cmd)
W0330 07:41:35.115] subprocess.CalledProcessError: Command '('bash', '-c', 'cd ./../../k8s.io/kubernetes && source ./../test-infra/experiment/kind-conformance-image-e2e.sh')' returned non-zero exit status 1
E0330 07:41:35.130] Command failed
I0330 07:41:35.130] process 695 exited with code 1 after 123.9m
E0330 07:41:35.131] FAIL: pull-kubernetes-conformance-image-test
I0330 07:41:35.132] Call:  gcloud auth activate-service-account --key-file=/etc/service-account/service-account.json
W0330 07:41:35.813] Activated service account credentials for: [pr-kubekins@kubernetes-jenkins-pull.iam.gserviceaccount.com]
I0330 07:41:35.936] process 305382 exited with code 0 after 0.0m
I0330 07:41:35.936] Call:  gcloud config get-value account
I0330 07:41:36.504] process 305395 exited with code 0 after 0.0m
I0330 07:41:36.504] Will upload results to gs://kubernetes-jenkins/pr-logs using pr-kubekins@kubernetes-jenkins-pull.iam.gserviceaccount.com
I0330 07:41:36.504] Upload result and artifacts...
I0330 07:41:36.504] Gubernator results at https://gubernator.k8s.io/build/kubernetes-jenkins/pr-logs/pull/100603/pull-kubernetes-conformance-image-test/1376770209968295936
I0330 07:41:36.505] Call:  gsutil ls gs://kubernetes-jenkins/pr-logs/pull/100603/pull-kubernetes-conformance-image-test/1376770209968295936/artifacts
W0330 07:41:37.767] CommandException: One or more URLs matched no objects.
E0330 07:41:37.990] Command failed
I0330 07:41:37.991] process 305408 exited with code 1 after 0.0m
W0330 07:41:37.991] Remote dir gs://kubernetes-jenkins/pr-logs/pull/100603/pull-kubernetes-conformance-image-test/1376770209968295936/artifacts not exist yet
I0330 07:41:37.991] Call:  gsutil -m -q -o GSUtil:use_magicfile=True cp -r -c -z log,txt,xml /workspace/_artifacts gs://kubernetes-jenkins/pr-logs/pull/100603/pull-kubernetes-conformance-image-test/1376770209968295936/artifacts
I0330 07:41:40.864] process 305555 exited with code 0 after 0.0m
W0330 07:41:40.868] metadata path /workspace/_artifacts/metadata.json does not exist
W0330 07:41:40.868] metadata not found or invalid, init with empty metadata
... skipping 22 lines ...