PR | draveness: feat: update taint nodes by condition to GA
Result | FAILURE
Tests | 21 failed / 311 succeeded
Started |
Elapsed | 40m3s
Revision |
Builder | gke-prow-ssd-pool-1a225945-bfs0
Refs | master:ebd8f9cc, 82703:32e67c2e
pod | 76b17cdd-d867-11e9-af7a-7ecbb7a97bb8 |
infra-commit | e1cbc3ccd |
job-version | v1.17.0-alpha.0.1445+4640b4f81ec6bc |
repo | k8s.io/kubernetes |
repo-commit | 4640b4f81ec6bcaac176111279f6d50529ab2cf5 |
repos | k8s.io/kubernetes: master:ebd8f9ccb5c7a7f54f636db3a8a7dc1397046be6, 82703:32e67c2e90fd5f25227992a421949001aa6f8fae
revision | v1.17.0-alpha.0.1445+4640b4f81ec6bc |
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=E2eNode\sSuite\s\[k8s\.io\]\sSummary\sAPI\s\[NodeConformance\]\swhen\squerying\s\/stats\/summary\sshould\sreport\sresource\susage\sthrough\sthe\sstats\sapi$'
_output/local/go/src/k8s.io/kubernetes/test/e2e_node/summary_test.go:53
Unexpected number of node objects for node e2e. Expects only one node.
Expected
    <int>: 0
to equal
    <int>: 1
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:1332
from junit_cos-stable_05.xml
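The expectation fires inside getLocalNode (test/e2e_node/util.go:350 in the stack trace below), which lists the nodes visible to the node-e2e run and requires exactly one. A minimal sketch of that helper, reconstructed from the stack trace and the failure message; the GetReadySchedulableNodesOrDie name is an assumption about which framework helper performs the listing:

package e2enode // illustrative package name for the sketch

import (
	v1 "k8s.io/api/core/v1"
	"k8s.io/kubernetes/test/e2e/framework"
)

// getLocalNode, as implied by the stack trace: list nodes, insist on exactly
// one, and hand it back to the Summary API test.
func getLocalNode(f *framework.Framework) *v1.Node {
	// Assumption: the listing helper filters down to ready, schedulable nodes.
	nodeList := framework.GetReadySchedulableNodesOrDie(f.ClientSet)
	framework.ExpectEqual(len(nodeList.Items), 1,
		"Unexpected number of node objects for node e2e. Expects only one node.")
	return &nodeList.Items[0]
}

If the single test node fails whatever filter the listing helper applies, the returned list is empty and the assertion reports 0 vs 1, which is exactly the failure above.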
[BeforeEach] [k8s.io] Summary API [NodeConformance]
  /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
STEP: Building a namespace api object, basename summary-test
Sep 16 10:00:44.099: INFO: Skipping waiting for service account
[It] should report resource usage through the stats api
  _output/local/go/src/k8s.io/kubernetes/test/e2e_node/summary_test.go:53
STEP: Creating test pods
Sep 16 10:01:03.172: INFO: Unexpected unequal occurred: 0 and 1
goroutine 229 [running]:
runtime/debug.Stack(0x4, 0x4dfde69, 0x2)
	/usr/local/go/src/runtime/debug/stack.go:24 +0x9d
runtime/debug.PrintStack()
	/usr/local/go/src/runtime/debug/stack.go:16 +0x22
k8s.io/kubernetes/test/e2e/framework.ExpectEqual(0x4235c80, 0xbea4d00, 0x4235c80, 0x845da80, 0xc000767110, 0x1, 0x1)
	/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:1330 +0x27f
k8s.io/kubernetes/test/e2e_node.getLocalNode(0xc0006b7040, 0x85ac280)
	_output/local/go/src/k8s.io/kubernetes/test/e2e_node/util.go:350 +0xd2
k8s.io/kubernetes/test/e2e_node.glob..func43.1.2()
	_output/local/go/src/k8s.io/kubernetes/test/e2e_node/summary_test.go:81 +0x28c
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes.(*runner).runSync(0xc000de5140, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...)
	/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/runner.go:113 +0x9c
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes.(*runner).run(0xc000de5140, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...)
	/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/runner.go:64 +0xcf
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes.(*ItNode).Run(0xc000584f80, 0x8543d80, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...)
	/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/it_node.go:26 +0x64
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec.(*Spec).runSample(0xc00101fef0, 0x0, 0x8543d80, 0xc0001ed4c0)
	/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec/spec.go:215 +0x596
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec.(*Spec).Run(0xc00101fef0, 0x8543d80, 0xc0001ed4c0)
	/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec/spec.go:138 +0xf4
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner.(*SpecRunner).runSpec(0xc000370500, 0xc00101fef0, 0x1)
	/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner/spec_runner.go:200 +0x10f
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner.(*SpecRunner).runSpecs(0xc000370500, 0x1)
	/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner/spec_runner.go:170 +0x124
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner.(*SpecRunner).Run(0xc000370500, 0xc00063b7c0)
	/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner/spec_runner.go:66 +0x117
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/suite.(*Suite).Run(0xc0002171d0, 0x7febbb27e520, 0xc000d38e00, 0x4e143c7, 0xd, 0xc0006434c0, 0x2, 0x2, 0x8609300, 0xc0001ed4c0, ...)
	/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/suite/suite.go:62 +0x42e
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo.RunSpecsWithCustomReporters(0x85492c0, 0xc000d38e00, 0x4e143c7, 0xd, 0xc000643480, 0x2, 0x2, 0x2)
	/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/ginkgo_dsl.go:221 +0x217
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo.RunSpecsWithDefaultAndCustomReporters(0x85492c0, 0xc000d38e00, 0x4e143c7, 0xd, 0xc0005ea400, 0x1, 0x1, 0x1)
	/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/ginkgo_dsl.go:209 +0xad
k8s.io/kubernetes/test/e2e_node.TestE2eNode(0xc000d38e00)
	_output/local/go/src/k8s.io/kubernetes/test/e2e_node/e2e_node_suite_test.go:148 +0x3db
testing.tRunner(0xc000d38e00, 0x4fe3568)
	/usr/local/go/src/testing/testing.go:865 +0xc0
created by testing.(*T).Run
	/usr/local/go/src/testing/testing.go:916 +0x35a
[AfterEach] when querying /stats/summary
  _output/local/go/src/k8s.io/kubernetes/test/e2e_node/summary_test.go:43
Sep 16 10:01:03.200: INFO: Running kubectl logs on non-ready containers in summary-test-37
STEP: Recording processes in system cgroups
Sep 16 10:01:03.205: INFO: Processes in kubelet cgroup (/kubelet.slice):
Sep 16 10:01:03.205: INFO: /tmp/node-e2e-20190916T095806/kubelet --kubeconfig /tmp/node-e2e-20190916T095806/kubeconfig --root-dir /var/lib/kubelet --v 4 --logtostderr --dynamic-config-dir /tmp/node-e2e-20190916T095806/dynamic-kubelet-config --network-plugin=kubenet --cni-bin-dir /tmp/node-e2e-20190916T095806/cni/bin --cni-conf-dir /tmp/node-e2e-20190916T095806/cni/net.d --cni-cache-dir /tmp/node-e2e-20190916T095806/cni/cache --hostname-override tmp-node-e2e-d8aaa33e-cos-73-11647-293-0 --container-runtime remote --container-runtime-endpoint unix:///run/containerd/containerd.sock --config /tmp/node-e2e-20190916T095806/kubelet-config --experimental-mounter-path=/tmp/node-e2e-20190916T095806/mounter --experimental-kernel-memcg-notification=true --cgroups-per-qos=true --cgroup-root=/ --runtime-cgroups=/system.slice/containerd.service
Sep 16 10:01:03.205: INFO: Skipping unconfigured cgroup misc
[AfterEach] [k8s.io] Summary API [NodeConformance]
  /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
STEP: Collecting events from namespace "summary-test-37".
STEP: Found 6 events.
Sep 16 10:01:03.211: INFO: At 2019-09-16 10:00:44 +0000 UTC - event for stats-busybox-0: {kubelet tmp-node-e2e-d8aaa33e-cos-73-11647-293-0} Pulled: Container image "docker.io/library/busybox:1.29" already present on machine Sep 16 10:01:03.211: INFO: At 2019-09-16 10:00:44 +0000 UTC - event for stats-busybox-1: {kubelet tmp-node-e2e-d8aaa33e-cos-73-11647-293-0} Pulled: Container image "docker.io/library/busybox:1.29" already present on machine Sep 16 10:01:03.211: INFO: At 2019-09-16 10:00:45 +0000 UTC - event for stats-busybox-0: {kubelet tmp-node-e2e-d8aaa33e-cos-73-11647-293-0} Created: Created container busybox-container Sep 16 10:01:03.211: INFO: At 2019-09-16 10:00:45 +0000 UTC - event for stats-busybox-0: {kubelet tmp-node-e2e-d8aaa33e-cos-73-11647-293-0} Started: Started container busybox-container Sep 16 10:01:03.211: INFO: At 2019-09-16 10:00:45 +0000 UTC - event for stats-busybox-1: {kubelet tmp-node-e2e-d8aaa33e-cos-73-11647-293-0} Created: Created container busybox-container Sep 16 10:01:03.211: INFO: At 2019-09-16 10:00:45 +0000 UTC - event for stats-busybox-1: {kubelet tmp-node-e2e-d8aaa33e-cos-73-11647-293-0} Started: Started container busybox-container Sep 16 10:01:03.215: INFO: POD NODE PHASE GRACE CONDITIONS Sep 16 10:01:03.215: INFO: stats-busybox-0 tmp-node-e2e-d8aaa33e-cos-73-11647-293-0 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-09-16 10:00:44 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-09-16 10:00:47 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-09-16 10:00:47 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-09-16 10:00:44 +0000 UTC }] Sep 16 10:01:03.215: INFO: stats-busybox-1 tmp-node-e2e-d8aaa33e-cos-73-11647-293-0 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-09-16 10:00:44 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-09-16 10:00:47 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-09-16 10:00:47 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-09-16 10:00:44 +0000 UTC }] Sep 16 10:01:03.215: INFO: Sep 16 10:01:03.220: INFO: Logging node info for node tmp-node-e2e-d8aaa33e-cos-73-11647-293-0 Sep 16 10:01:03.225: INFO: Node Info: &Node{ObjectMeta:{tmp-node-e2e-d8aaa33e-cos-73-11647-293-0 /api/v1/nodes/tmp-node-e2e-d8aaa33e-cos-73-11647-293-0 de1de0c3-ce1d-44d6-bb56-2e7839e24f20 88 0 2019-09-16 10:00:07 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:tmp-node-e2e-d8aaa33e-cos-73-11647-293-0 kubernetes.io/os:linux] map[volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] []},Spec:NodeSpec{PodCIDR:,DoNotUse_ExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node.kubernetes.io/not-ready,Value:,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[],},Status:NodeStatus{Capacity:ResourceList{cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{16684785664 0} {<nil>} BinarySI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3878486016 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{15016307073 0} {<nil>} 15016307073 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3616342016 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2019-09-16 10:00:07 +0000 UTC,LastTransitionTime:2019-09-16 
10:00:04 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2019-09-16 10:00:07 +0000 UTC,LastTransitionTime:2019-09-16 10:00:04 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2019-09-16 10:00:07 +0000 UTC,LastTransitionTime:2019-09-16 10:00:04 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2019-09-16 10:00:07 +0000 UTC,LastTransitionTime:2019-09-16 10:00:04 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.83,},NodeAddress{Type:Hostname,Address:tmp-node-e2e-d8aaa33e-cos-73-11647-293-0,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:a064d78421e9c8eac5e80fe5da19fb15,SystemUUID:A064D784-21E9-C8EA-C5E8-0FE5DA19FB15,BootID:821ee2ea-1060-424d-9835-b7cdc9159dc8,KernelVersion:4.14.138+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.2.8,KubeletVersion:v1.17.0-alpha.0.1445+4640b4f81ec6bc,KubeProxyVersion:v1.17.0-alpha.0.1445+4640b4f81ec6bc,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[docker.io/library/perl@sha256:978a36abce7dcf726bcdbb3f5b0d69ad3beb0cf688e9348a488f6f6023a027db docker.io/library/perl:5.26],SizeBytes:325130745,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/node-perf/tf-wide-deep-amd64@sha256:80d4564d5ab49ecfea3b20f75cc676d8dfd8b2aca364ed4c1a8a55fbcaaed7f6 gcr.io/kubernetes-e2e-test-images/node-perf/tf-wide-deep-amd64:1.0],SizeBytes:242137147,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/volume/gluster@sha256:e2d3308b2d27499d59f120ff46dfc6c4cb307a3f207f02894ecab902583761c9 gcr.io/kubernetes-e2e-test-images/volume/gluster:1.0],SizeBytes:111775822,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/volume/nfs@sha256:c2ad734346f608a5f7d69cfded93c4e8094069320657bd372d12ba21dea3ea71 gcr.io/kubernetes-e2e-test-images/volume/nfs:1.0],SizeBytes:82348896,},ContainerImage{Names:[docker.io/library/httpd@sha256:6feb0ea7b0967367da66e8d58ba813fde32bdb92f63bfc21a9e170d211539db4 docker.io/library/httpd:2.4.38-alpine],SizeBytes:40762646,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/node-perf/npb-is@sha256:9d08dd99565b25af37c990cd4474a4284b27e7ceb3f98328bb481edefedf8aa5 gcr.io/kubernetes-e2e-test-images/node-perf/npb-is:1.0],SizeBytes:39644608,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/node-perf/npb-ep@sha256:564314549347619cfcdbe6c7d042a29e133a00e922b37682890fff17ac1a7804 gcr.io/kubernetes-e2e-test-images/node-perf/npb-ep:1.0],SizeBytes:39643641,},ContainerImage{Names:[k8s.gcr.io/node-problem-detector@sha256:6e9b4a4eaa47f120be61f60573a545844de63401661812e2cfb7ae81a28efd19 k8s.gcr.io/node-problem-detector:v0.6.2],SizeBytes:33121906,},ContainerImage{Names:[docker.io/google/cadvisor@sha256:815386ebbe9a3490f38785ab11bda34ec8dacf4634af77b8912832d4f85dca04 docker.io/google/cadvisor:latest],SizeBytes:30530401,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/agnhost@sha256:4057a5580c7b59c4fe10d8ab2732c9dec35eea80fd41f7bafc7bd5acc7edf727 
gcr.io/kubernetes-e2e-test-images/agnhost:2.6],SizeBytes:18352698,},ContainerImage{Names:[docker.io/library/nginx@sha256:a3a0c4126587884f8d3090efca87f5af075d7e7ac8308cffc09a5a082d5f4760 docker.io/library/nginx:1.14-alpine],SizeBytes:6976771,},ContainerImage{Names:[k8s.gcr.io/nvidia-gpu-device-plugin@sha256:4b036e8844920336fa48f36edeb7d4398f426d6a934ba022848deed2edbf09aa],SizeBytes:6819465,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/ipc-utils@sha256:bb127be3a1ecac0516f672a5e223d94fe6021021534ecb7a02a607a63154c3d8 gcr.io/kubernetes-e2e-test-images/ipc-utils:1.0],SizeBytes:4004104,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nonewprivs@sha256:10066e9039219449fe3c81f38fe01928f87914150768ab81b62a468e51fa7411 gcr.io/kubernetes-e2e-test-images/nonewprivs:1.0],SizeBytes:3054649,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/test-webserver@sha256:7f93d6e32798ff28bc6289254d0c2867fe2c849c8e46edc50f8624734309812e gcr.io/kubernetes-e2e-test-images/test-webserver:1.0],SizeBytes:1791163,},ContainerImage{Names:[k8s.gcr.io/stress:v1],SizeBytes:1558004,},ContainerImage{Names:[docker.io/library/busybox@sha256:e004c2cc521c95383aebb1fb5893719aa7a8eae2e7a71f316a4410784edb00a9 docker.io/library/busybox:1.29],SizeBytes:729986,},ContainerImage{Names:[k8s.gcr.io/busybox@sha256:4bdd623e848417d96127e16037743f0cd8b528c026e9175e22a84f639eca58ff],SizeBytes:676941,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/mounttest@sha256:c0bd6f0755f42af09a68c9a47fb993136588a76b3200ec305796b60d629d85d2 gcr.io/kubernetes-e2e-test-images/mounttest:1.0],SizeBytes:599341,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/mounttest-user@sha256:17319ca525ee003681fccf7e8c6b1b910ff4f49b653d939ac7f9b6e7c463933d gcr.io/kubernetes-e2e-test-images/mounttest-user:1.0],SizeBytes:539309,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:f78411e19d84a252e53bff71a4407a5686c46983a2c2eeed83929b888179acea k8s.gcr.io/pause:3.1],SizeBytes:317164,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:&NodeConfigStatus{Assigned:nil,Active:nil,LastKnownGood:nil,Error:,},},} Sep 16 10:01:03.225: INFO: Logging kubelet events for node tmp-node-e2e-d8aaa33e-cos-73-11647-293-0 Sep 16 10:01:03.227: INFO: Logging pods the kubelet thinks is on node tmp-node-e2e-d8aaa33e-cos-73-11647-293-0 Sep 16 10:01:03.234: INFO: pod-configmaps-f79a6d64-c41f-4d02-b6ec-33cb696e168b started at 2019-09-16 10:01:01 +0000 UTC (0+1 container statuses recorded) Sep 16 10:01:03.234: INFO: Container env-test ready: false, restart count 0 Sep 16 10:01:03.234: INFO: stats-busybox-1 started at 2019-09-16 10:00:44 +0000 UTC (0+1 container statuses recorded) Sep 16 10:01:03.234: INFO: Container busybox-container ready: true, restart count 1 Sep 16 10:01:03.234: INFO: liveness-705bfbae-aa80-4713-a3d3-683227d78372 started at 2019-09-16 10:00:49 +0000 UTC (0+1 container statuses recorded) Sep 16 10:01:03.234: INFO: Container liveness ready: true, restart count 0 Sep 16 10:01:03.234: INFO: busybox-fd1f3054-d934-48c2-ae64-9eb541a489f4 started at 2019-09-16 10:00:09 +0000 UTC (0+1 container statuses recorded) Sep 16 10:01:03.234: INFO: Container busybox ready: true, restart count 0 Sep 16 10:01:03.234: INFO: pod-configmaps-41a75e7f-4ee4-4b71-a1aa-720adf443e21 started at 2019-09-16 10:00:56 +0000 UTC (0+3 container statuses recorded) Sep 16 10:01:03.234: INFO: Container createcm-volume-test ready: true, restart count 0 Sep 16 10:01:03.234: INFO: Container delcm-volume-test ready: true, restart count 0 Sep 16 10:01:03.235: INFO: Container 
updcm-volume-test ready: true, restart count 0
Sep 16 10:01:03.235: INFO: stats-busybox-0 started at 2019-09-16 10:00:44 +0000 UTC (0+1 container statuses recorded)
Sep 16 10:01:03.235: INFO: Container busybox-container ready: true, restart count 1
Sep 16 10:01:03.235: INFO: liveness-8ef43f12-ebaa-4e59-866c-526f7f98e410 started at 2019-09-16 10:00:44 +0000 UTC (0+1 container statuses recorded)
Sep 16 10:01:03.235: INFO: Container liveness ready: true, restart count 0
Sep 16 10:01:03.235: INFO: client-containers-09c2dba6-a2b6-47c9-8379-18633273452f started at 2019-09-16 10:01:01 +0000 UTC (0+1 container statuses recorded)
Sep 16 10:01:03.235: INFO: Container test-container ready: true, restart count 0
W0916 10:01:03.237609     995 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Sep 16 10:01:03.558: INFO: Latency metrics for node tmp-node-e2e-d8aaa33e-cos-73-11647-293-0
Sep 16 10:01:03.558: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "summary-test-37" for this suite.
Sep 16 10:01:47.586: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 16 10:01:47.880: INFO: namespace summary-test-37 deletion completed in 44.314379631s
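Note the node spec in the dump above: the node is Ready (KubeletReady, AppArmor enabled) yet still carries Taints:[{Key:node.kubernetes.io/not-ready, Effect:NoSchedule}]. Node-e2e runs no controller-manager (see the "Master node is not registered" warning), so nothing ever removes the not-ready taint, and any node filter that treats a NoSchedule taint as unschedulable returns zero nodes, producing the 0-vs-1 failure. A sketch of such a filter, under the assumption that the framework's ready-schedulable helper works roughly this way:

package e2enode // illustrative

import v1 "k8s.io/api/core/v1"

// isNodeSchedulable is a hypothetical stand-in for the framework's
// ready-and-schedulable check. With the not-ready:NoSchedule taint still on
// the node, it returns false and the filtered node list comes back empty.
func isNodeSchedulable(node *v1.Node) bool {
	if node.Spec.Unschedulable {
		return false
	}
	for _, taint := range node.Spec.Taints {
		if taint.Effect == v1.TaintEffectNoSchedule || taint.Effect == v1.TaintEffectNoExecute {
			return false
		}
	}
	return true
}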
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=E2eNode\sSuite\s\[k8s\.io\]\sSummary\sAPI\s\[NodeConformance\]\swhen\squerying\s\/stats\/summary\sshould\sreport\sresource\susage\sthrough\sthe\sstats\sapi$'
_output/local/go/src/k8s.io/kubernetes/test/e2e_node/summary_test.go:53
Unexpected number of node objects for node e2e. Expects only one node.
Expected
    <int>: 0
to equal
    <int>: 1
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:1332
from junit_ubuntu_06.xml
[BeforeEach] [k8s.io] Summary API [NodeConformance]
  /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
STEP: Building a namespace api object, basename summary-test
Sep 16 10:01:53.113: INFO: Skipping waiting for service account
[It] should report resource usage through the stats api
  _output/local/go/src/k8s.io/kubernetes/test/e2e_node/summary_test.go:53
STEP: Creating test pods
Sep 16 10:02:15.189: INFO: Unexpected unequal occurred: 0 and 1
goroutine 229 [running]:
runtime/debug.Stack(0x4, 0x4dfde69, 0x2)
	/usr/local/go/src/runtime/debug/stack.go:24 +0x9d
runtime/debug.PrintStack()
	/usr/local/go/src/runtime/debug/stack.go:16 +0x22
k8s.io/kubernetes/test/e2e/framework.ExpectEqual(0x4235c80, 0xbea4d00, 0x4235c80, 0x845da80, 0xc000bff090, 0x1, 0x1)
	/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:1330 +0x27f
k8s.io/kubernetes/test/e2e_node.getLocalNode(0xc0000e9040, 0x85ac280)
	_output/local/go/src/k8s.io/kubernetes/test/e2e_node/util.go:350 +0xd2
k8s.io/kubernetes/test/e2e_node.glob..func43.1.2()
	_output/local/go/src/k8s.io/kubernetes/test/e2e_node/summary_test.go:81 +0x28c
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes.(*runner).runSync(0xc00022d080, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...)
	/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/runner.go:113 +0x9c
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes.(*runner).run(0xc00022d080, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...)
	/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/runner.go:64 +0xcf
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes.(*ItNode).Run(0xc00069f1c0, 0x8543d80, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...)
	/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/it_node.go:26 +0x64
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec.(*Spec).runSample(0xc000fc1680, 0x0, 0x8543d80, 0xc0001ef4c0)
	/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec/spec.go:215 +0x596
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec.(*Spec).Run(0xc000fc1680, 0x8543d80, 0xc0001ef4c0)
	/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec/spec.go:138 +0xf4
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner.(*SpecRunner).runSpec(0xc000e8c3c0, 0xc000fc1680, 0x0)
	/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner/spec_runner.go:200 +0x10f
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner.(*SpecRunner).runSpecs(0xc000e8c3c0, 0x1)
	/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner/spec_runner.go:170 +0x124
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner.(*SpecRunner).Run(0xc000e8c3c0, 0xc0005b9f10)
	/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner/spec_runner.go:66 +0x117
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/suite.(*Suite).Run(0xc0002191d0, 0x7f2ddc8864d0, 0xc000f3ac00, 0x4e143c7, 0xd, 0xc000626ca0, 0x2, 0x2, 0x8609300, 0xc0001ef4c0, ...)
	/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/suite/suite.go:62 +0x42e
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo.RunSpecsWithCustomReporters(0x85492c0, 0xc000f3ac00, 0x4e143c7, 0xd, 0xc000626c80, 0x2, 0x2, 0x2)
	/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/ginkgo_dsl.go:221 +0x217
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo.RunSpecsWithDefaultAndCustomReporters(0x85492c0, 0xc000f3ac00, 0x4e143c7, 0xd, 0xc0005e2520, 0x1, 0x1, 0x1)
	/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/ginkgo_dsl.go:209 +0xad
k8s.io/kubernetes/test/e2e_node.TestE2eNode(0xc000f3ac00)
	_output/local/go/src/k8s.io/kubernetes/test/e2e_node/e2e_node_suite_test.go:148 +0x3db
testing.tRunner(0xc000f3ac00, 0x4fe3568)
	/usr/local/go/src/testing/testing.go:865 +0xc0
created by testing.(*T).Run
	/usr/local/go/src/testing/testing.go:916 +0x35a
[AfterEach] when querying /stats/summary
  _output/local/go/src/k8s.io/kubernetes/test/e2e_node/summary_test.go:43
Sep 16 10:02:15.197: INFO: Running kubectl logs on non-ready containers in summary-test-5686
STEP: Recording processes in system cgroups
Sep 16 10:02:15.205: INFO: Skipping unconfigured cgroup misc
Sep 16 10:02:15.205: INFO: Processes in kubelet cgroup (/kubelet.slice):
Sep 16 10:02:15.205: INFO: /tmp/node-e2e-20190916T095806/kubelet --kubeconfig /tmp/node-e2e-20190916T095806/kubeconfig --root-dir /var/lib/kubelet --v 4 --logtostderr --dynamic-config-dir /tmp/node-e2e-20190916T095806/dynamic-kubelet-config --network-plugin=kubenet --cni-bin-dir /tmp/node-e2e-20190916T095806/cni/bin --cni-conf-dir /tmp/node-e2e-20190916T095806/cni/net.d --cni-cache-dir /tmp/node-e2e-20190916T095806/cni/cache --hostname-override tmp-node-e2e-d8aaa33e-ubuntu-gke-1804-d1809-0-v20190913 --container-runtime remote --container-runtime-endpoint unix:///run/containerd/containerd.sock --config /tmp/node-e2e-20190916T095806/kubelet-config --experimental-kernel-memcg-notification=true --cgroups-per-qos=true --cgroup-root=/ --runtime-cgroups=/system.slice/containerd.service
[AfterEach] [k8s.io] Summary API [NodeConformance]
  /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
STEP: Collecting events from namespace "summary-test-5686".
STEP: Found 6 events.
Sep 16 10:02:15.209: INFO: At 2019-09-16 10:01:53 +0000 UTC - event for stats-busybox-0: {kubelet tmp-node-e2e-d8aaa33e-ubuntu-gke-1804-d1809-0-v20190913} Pulled: Container image "docker.io/library/busybox:1.29" already present on machine Sep 16 10:02:15.209: INFO: At 2019-09-16 10:01:54 +0000 UTC - event for stats-busybox-0: {kubelet tmp-node-e2e-d8aaa33e-ubuntu-gke-1804-d1809-0-v20190913} Created: Created container busybox-container Sep 16 10:02:15.209: INFO: At 2019-09-16 10:01:54 +0000 UTC - event for stats-busybox-0: {kubelet tmp-node-e2e-d8aaa33e-ubuntu-gke-1804-d1809-0-v20190913} Started: Started container busybox-container Sep 16 10:02:15.209: INFO: At 2019-09-16 10:01:54 +0000 UTC - event for stats-busybox-1: {kubelet tmp-node-e2e-d8aaa33e-ubuntu-gke-1804-d1809-0-v20190913} Pulled: Container image "docker.io/library/busybox:1.29" already present on machine Sep 16 10:02:15.209: INFO: At 2019-09-16 10:01:54 +0000 UTC - event for stats-busybox-1: {kubelet tmp-node-e2e-d8aaa33e-ubuntu-gke-1804-d1809-0-v20190913} Created: Created container busybox-container Sep 16 10:02:15.209: INFO: At 2019-09-16 10:01:54 +0000 UTC - event for stats-busybox-1: {kubelet tmp-node-e2e-d8aaa33e-ubuntu-gke-1804-d1809-0-v20190913} Started: Started container busybox-container Sep 16 10:02:15.214: INFO: POD NODE PHASE GRACE CONDITIONS Sep 16 10:02:15.214: INFO: stats-busybox-0 tmp-node-e2e-d8aaa33e-ubuntu-gke-1804-d1809-0-v20190913 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-09-16 10:01:53 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-09-16 10:01:55 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-09-16 10:01:55 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-09-16 10:01:53 +0000 UTC }] Sep 16 10:02:15.214: INFO: stats-busybox-1 tmp-node-e2e-d8aaa33e-ubuntu-gke-1804-d1809-0-v20190913 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-09-16 10:01:53 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-09-16 10:01:56 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-09-16 10:01:56 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-09-16 10:01:53 +0000 UTC }] Sep 16 10:02:15.214: INFO: Sep 16 10:02:15.218: INFO: Logging node info for node tmp-node-e2e-d8aaa33e-ubuntu-gke-1804-d1809-0-v20190913 Sep 16 10:02:15.225: INFO: Node Info: &Node{ObjectMeta:{tmp-node-e2e-d8aaa33e-ubuntu-gke-1804-d1809-0-v20190913 /api/v1/nodes/tmp-node-e2e-d8aaa33e-ubuntu-gke-1804-d1809-0-v20190913 e06930a8-592f-4099-a9d5-edfc75a6bb47 908 0 2019-09-16 10:00:14 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:tmp-node-e2e-d8aaa33e-ubuntu-gke-1804-d1809-0-v20190913 kubernetes.io/os:linux] map[volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] []},Spec:NodeSpec{PodCIDR:,DoNotUse_ExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node.kubernetes.io/not-ready,Value:,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[],},Status:NodeStatus{Capacity:ResourceList{cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{20629221376 0} {<nil>} 20145724Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3872014336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{18566299208 0} {<nil>} 18566299208 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 
DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3609870336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2019-09-16 10:02:14 +0000 UTC,LastTransitionTime:2019-09-16 10:00:10 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2019-09-16 10:02:14 +0000 UTC,LastTransitionTime:2019-09-16 10:00:10 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2019-09-16 10:02:14 +0000 UTC,LastTransitionTime:2019-09-16 10:00:10 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2019-09-16 10:02:14 +0000 UTC,LastTransitionTime:2019-09-16 10:00:10 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.84,},NodeAddress{Type:Hostname,Address:tmp-node-e2e-d8aaa33e-ubuntu-gke-1804-d1809-0-v20190913,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:f51035bcd9db8910e22af6d51a902fac,SystemUUID:F51035BC-D9DB-8910-E22A-F6D51A902FAC,BootID:9e56941a-9d7d-4215-b61f-bae77a3d8412,KernelVersion:4.15.0-1042-gke,OSImage:Ubuntu 18.04.3 LTS,ContainerRuntimeVersion:containerd://1.2.7,KubeletVersion:v1.17.0-alpha.0.1445+4640b4f81ec6bc,KubeProxyVersion:v1.17.0-alpha.0.1445+4640b4f81ec6bc,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[docker.io/library/perl@sha256:978a36abce7dcf726bcdbb3f5b0d69ad3beb0cf688e9348a488f6f6023a027db docker.io/library/perl:5.26],SizeBytes:325130745,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/node-perf/tf-wide-deep-amd64@sha256:80d4564d5ab49ecfea3b20f75cc676d8dfd8b2aca364ed4c1a8a55fbcaaed7f6 gcr.io/kubernetes-e2e-test-images/node-perf/tf-wide-deep-amd64:1.0],SizeBytes:242137147,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/volume/gluster@sha256:e2d3308b2d27499d59f120ff46dfc6c4cb307a3f207f02894ecab902583761c9 gcr.io/kubernetes-e2e-test-images/volume/gluster:1.0],SizeBytes:111775822,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/volume/nfs@sha256:c2ad734346f608a5f7d69cfded93c4e8094069320657bd372d12ba21dea3ea71 gcr.io/kubernetes-e2e-test-images/volume/nfs:1.0],SizeBytes:82348896,},ContainerImage{Names:[docker.io/library/httpd@sha256:6feb0ea7b0967367da66e8d58ba813fde32bdb92f63bfc21a9e170d211539db4 docker.io/library/httpd:2.4.38-alpine],SizeBytes:40762646,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/node-perf/npb-is@sha256:9d08dd99565b25af37c990cd4474a4284b27e7ceb3f98328bb481edefedf8aa5 gcr.io/kubernetes-e2e-test-images/node-perf/npb-is:1.0],SizeBytes:39644608,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/node-perf/npb-ep@sha256:564314549347619cfcdbe6c7d042a29e133a00e922b37682890fff17ac1a7804 gcr.io/kubernetes-e2e-test-images/node-perf/npb-ep:1.0],SizeBytes:39643641,},ContainerImage{Names:[k8s.gcr.io/node-problem-detector@sha256:6e9b4a4eaa47f120be61f60573a545844de63401661812e2cfb7ae81a28efd19 k8s.gcr.io/node-problem-detector:v0.6.2],SizeBytes:33121906,},ContainerImage{Names:[docker.io/google/cadvisor@sha256:815386ebbe9a3490f38785ab11bda34ec8dacf4634af77b8912832d4f85dca04 
docker.io/google/cadvisor:latest],SizeBytes:30530401,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/agnhost@sha256:4057a5580c7b59c4fe10d8ab2732c9dec35eea80fd41f7bafc7bd5acc7edf727 gcr.io/kubernetes-e2e-test-images/agnhost:2.6],SizeBytes:18352698,},ContainerImage{Names:[gcr.io/google-containers/debian-base@sha256:6966a0aedd7592c18ff2dd803c08bd85780ee19f5e3a2e7cf908a4cd837afcde gcr.io/google-containers/debian-base:0.4.1],SizeBytes:17748863,},ContainerImage{Names:[docker.io/library/nginx@sha256:a3a0c4126587884f8d3090efca87f5af075d7e7ac8308cffc09a5a082d5f4760 docker.io/library/nginx:1.14-alpine],SizeBytes:6976771,},ContainerImage{Names:[k8s.gcr.io/nvidia-gpu-device-plugin@sha256:4b036e8844920336fa48f36edeb7d4398f426d6a934ba022848deed2edbf09aa],SizeBytes:6819465,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/ipc-utils@sha256:bb127be3a1ecac0516f672a5e223d94fe6021021534ecb7a02a607a63154c3d8 gcr.io/kubernetes-e2e-test-images/ipc-utils:1.0],SizeBytes:4004104,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nonewprivs@sha256:10066e9039219449fe3c81f38fe01928f87914150768ab81b62a468e51fa7411 gcr.io/kubernetes-e2e-test-images/nonewprivs:1.0],SizeBytes:3054649,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/test-webserver@sha256:7f93d6e32798ff28bc6289254d0c2867fe2c849c8e46edc50f8624734309812e gcr.io/kubernetes-e2e-test-images/test-webserver:1.0],SizeBytes:1791163,},ContainerImage{Names:[k8s.gcr.io/stress:v1],SizeBytes:1558004,},ContainerImage{Names:[docker.io/library/busybox@sha256:e004c2cc521c95383aebb1fb5893719aa7a8eae2e7a71f316a4410784edb00a9 docker.io/library/busybox:1.29],SizeBytes:729986,},ContainerImage{Names:[k8s.gcr.io/busybox@sha256:4bdd623e848417d96127e16037743f0cd8b528c026e9175e22a84f639eca58ff],SizeBytes:676941,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/mounttest@sha256:c0bd6f0755f42af09a68c9a47fb993136588a76b3200ec305796b60d629d85d2 gcr.io/kubernetes-e2e-test-images/mounttest:1.0],SizeBytes:599341,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/mounttest-user@sha256:17319ca525ee003681fccf7e8c6b1b910ff4f49b653d939ac7f9b6e7c463933d gcr.io/kubernetes-e2e-test-images/mounttest-user:1.0],SizeBytes:539309,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:f78411e19d84a252e53bff71a4407a5686c46983a2c2eeed83929b888179acea k8s.gcr.io/pause:3.1],SizeBytes:317164,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:&NodeConfigStatus{Assigned:nil,Active:nil,LastKnownGood:nil,Error:,},},} Sep 16 10:02:15.264: INFO: Logging kubelet events for node tmp-node-e2e-d8aaa33e-ubuntu-gke-1804-d1809-0-v20190913 Sep 16 10:02:15.265: INFO: Logging pods the kubelet thinks is on node tmp-node-e2e-d8aaa33e-ubuntu-gke-1804-d1809-0-v20190913 Sep 16 10:02:15.268: INFO: stats-busybox-0 started at 2019-09-16 10:01:53 +0000 UTC (0+1 container statuses recorded) Sep 16 10:02:15.268: INFO: Container busybox-container ready: true, restart count 1 Sep 16 10:02:15.268: INFO: stats-busybox-1 started at 2019-09-16 10:01:53 +0000 UTC (0+1 container statuses recorded) Sep 16 10:02:15.268: INFO: Container busybox-container ready: true, restart count 1 Sep 16 10:02:15.268: INFO: test-webserver-2986c8d3-c790-49c4-89f7-f36787a0e04b started at 2019-09-16 10:02:01 +0000 UTC (0+1 container statuses recorded) Sep 16 10:02:15.268: INFO: Container test-webserver ready: false, restart count 0 W0916 10:02:15.270722 2882 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
Sep 16 10:02:15.471: INFO: Latency metrics for node tmp-node-e2e-d8aaa33e-ubuntu-gke-1804-d1809-0-v20190913
Sep 16 10:02:15.471: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "summary-test-5686" for this suite.
Sep 16 10:03:01.494: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 16 10:03:01.551: INFO: namespace summary-test-5686 deletion completed in 46.077440316s
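The PR under test graduates TaintNodesByCondition to GA, meaning node conditions are mirrored into taints unconditionally: a node registers with node.kubernetes.io/not-ready:NoSchedule and keeps it until a node lifecycle controller observes Ready=True and removes it. A rough sketch of that mirroring for the Ready condition only (names are illustrative; the real controller covers more conditions and also handles taint removal):

package e2enode // illustrative

import v1 "k8s.io/api/core/v1"

// taintKeyNotReady matches the taint visible in the node dumps above.
const taintKeyNotReady = "node.kubernetes.io/not-ready"

// taintsFromConditions sketches the TaintNodesByCondition mirroring: while
// Ready is not True, the not-ready:NoSchedule taint should be present.
func taintsFromConditions(node *v1.Node) []v1.Taint {
	var taints []v1.Taint
	for _, cond := range node.Status.Conditions {
		if cond.Type == v1.NodeReady && cond.Status != v1.ConditionTrue {
			taints = append(taints, v1.Taint{
				Key:    taintKeyNotReady,
				Effect: v1.TaintEffectNoSchedule,
			})
		}
	}
	return taints
}

In this job only the adding half ever runs (the node registers with the taint), while the removal half never does, because node-e2e starts no controller-manager; hence a Ready node that stays tainted.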
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=E2eNode\sSuite\s\[k8s\.io\]\sSummary\sAPI\s\[NodeConformance\]\swhen\squerying\s\/stats\/summary\sshould\sreport\sresource\susage\sthrough\sthe\sstats\sapi$'
_output/local/go/src/k8s.io/kubernetes/test/e2e_node/summary_test.go:53
Unexpected number of node objects for node e2e. Expects only one node.
Expected
    <int>: 0
to equal
    <int>: 1
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:1332
from junit_ubuntu_06.xml
[BeforeEach] [k8s.io] Summary API [NodeConformance]
  /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
STEP: Building a namespace api object, basename summary-test
Sep 16 10:03:01.556: INFO: Skipping waiting for service account
[It] should report resource usage through the stats api
  _output/local/go/src/k8s.io/kubernetes/test/e2e_node/summary_test.go:53
STEP: Creating test pods
Sep 16 10:03:23.587: INFO: Unexpected unequal occurred: 0 and 1
goroutine 229 [running]:
runtime/debug.Stack(0x4, 0x4dfde69, 0x2)
	/usr/local/go/src/runtime/debug/stack.go:24 +0x9d
runtime/debug.PrintStack()
	/usr/local/go/src/runtime/debug/stack.go:16 +0x22
k8s.io/kubernetes/test/e2e/framework.ExpectEqual(0x4235c80, 0xbea4d00, 0x4235c80, 0x845da80, 0xc0011cdbc0, 0x1, 0x1)
	/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:1330 +0x27f
k8s.io/kubernetes/test/e2e_node.getLocalNode(0xc0000e9040, 0x85ac280)
	_output/local/go/src/k8s.io/kubernetes/test/e2e_node/util.go:350 +0xd2
k8s.io/kubernetes/test/e2e_node.glob..func43.1.2()
	_output/local/go/src/k8s.io/kubernetes/test/e2e_node/summary_test.go:81 +0x28c
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes.(*runner).runSync(0xc00022d080, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...)
	/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/runner.go:113 +0x9c
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes.(*runner).run(0xc00022d080, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...)
	/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/runner.go:64 +0xcf
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes.(*ItNode).Run(0xc00069f1c0, 0x8543d80, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...)
	/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/it_node.go:26 +0x64
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec.(*Spec).runSample(0xc000fc1680, 0x0, 0x8543d80, 0xc0001ef4c0)
	/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec/spec.go:215 +0x596
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec.(*Spec).Run(0xc000fc1680, 0x8543d80, 0xc0001ef4c0)
	/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec/spec.go:138 +0xf4
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner.(*SpecRunner).runSpec(0xc000e8c3c0, 0xc000fc1680, 0x0)
	/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner/spec_runner.go:200 +0x10f
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner.(*SpecRunner).runSpecs(0xc000e8c3c0, 0x1)
	/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner/spec_runner.go:170 +0x124
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner.(*SpecRunner).Run(0xc000e8c3c0, 0xc0005b9f10)
	/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner/spec_runner.go:66 +0x117
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/suite.(*Suite).Run(0xc0002191d0, 0x7f2ddc8864d0, 0xc000f3ac00, 0x4e143c7, 0xd, 0xc000626ca0, 0x2, 0x2, 0x8609300, 0xc0001ef4c0, ...)
	/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/suite/suite.go:62 +0x42e
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo.RunSpecsWithCustomReporters(0x85492c0, 0xc000f3ac00, 0x4e143c7, 0xd, 0xc000626c80, 0x2, 0x2, 0x2)
	/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/ginkgo_dsl.go:221 +0x217
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo.RunSpecsWithDefaultAndCustomReporters(0x85492c0, 0xc000f3ac00, 0x4e143c7, 0xd, 0xc0005e2520, 0x1, 0x1, 0x1)
	/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/ginkgo_dsl.go:209 +0xad
k8s.io/kubernetes/test/e2e_node.TestE2eNode(0xc000f3ac00)
	_output/local/go/src/k8s.io/kubernetes/test/e2e_node/e2e_node_suite_test.go:148 +0x3db
testing.tRunner(0xc000f3ac00, 0x4fe3568)
	/usr/local/go/src/testing/testing.go:865 +0xc0
created by testing.(*T).Run
	/usr/local/go/src/testing/testing.go:916 +0x35a
[AfterEach] when querying /stats/summary
  _output/local/go/src/k8s.io/kubernetes/test/e2e_node/summary_test.go:43
Sep 16 10:03:23.589: INFO: Running kubectl logs on non-ready containers in summary-test-7591
STEP: Recording processes in system cgroups
Sep 16 10:03:23.591: INFO: Processes in kubelet cgroup (/kubelet.slice):
Sep 16 10:03:23.591: INFO: /tmp/node-e2e-20190916T095806/kubelet --kubeconfig /tmp/node-e2e-20190916T095806/kubeconfig --root-dir /var/lib/kubelet --v 4 --logtostderr --dynamic-config-dir /tmp/node-e2e-20190916T095806/dynamic-kubelet-config --network-plugin=kubenet --cni-bin-dir /tmp/node-e2e-20190916T095806/cni/bin --cni-conf-dir /tmp/node-e2e-20190916T095806/cni/net.d --cni-cache-dir /tmp/node-e2e-20190916T095806/cni/cache --hostname-override tmp-node-e2e-d8aaa33e-ubuntu-gke-1804-d1809-0-v20190913 --container-runtime remote --container-runtime-endpoint unix:///run/containerd/containerd.sock --config /tmp/node-e2e-20190916T095806/kubelet-config --experimental-kernel-memcg-notification=true --cgroups-per-qos=true --cgroup-root=/ --runtime-cgroups=/system.slice/containerd.service
Sep 16 10:03:23.591: INFO: Skipping unconfigured cgroup misc
[AfterEach] [k8s.io] Summary API [NodeConformance]
  /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
STEP: Collecting events from namespace "summary-test-7591".
STEP: Found 6 events.
Sep 16 10:03:23.592: INFO: At 2019-09-16 10:03:02 +0000 UTC - event for stats-busybox-0: {kubelet tmp-node-e2e-d8aaa33e-ubuntu-gke-1804-d1809-0-v20190913} Pulled: Container image "docker.io/library/busybox:1.29" already present on machine Sep 16 10:03:23.592: INFO: At 2019-09-16 10:03:02 +0000 UTC - event for stats-busybox-0: {kubelet tmp-node-e2e-d8aaa33e-ubuntu-gke-1804-d1809-0-v20190913} Created: Created container busybox-container Sep 16 10:03:23.592: INFO: At 2019-09-16 10:03:02 +0000 UTC - event for stats-busybox-0: {kubelet tmp-node-e2e-d8aaa33e-ubuntu-gke-1804-d1809-0-v20190913} Started: Started container busybox-container Sep 16 10:03:23.592: INFO: At 2019-09-16 10:03:02 +0000 UTC - event for stats-busybox-1: {kubelet tmp-node-e2e-d8aaa33e-ubuntu-gke-1804-d1809-0-v20190913} Pulled: Container image "docker.io/library/busybox:1.29" already present on machine Sep 16 10:03:23.592: INFO: At 2019-09-16 10:03:02 +0000 UTC - event for stats-busybox-1: {kubelet tmp-node-e2e-d8aaa33e-ubuntu-gke-1804-d1809-0-v20190913} Created: Created container busybox-container Sep 16 10:03:23.592: INFO: At 2019-09-16 10:03:02 +0000 UTC - event for stats-busybox-1: {kubelet tmp-node-e2e-d8aaa33e-ubuntu-gke-1804-d1809-0-v20190913} Started: Started container busybox-container Sep 16 10:03:23.594: INFO: POD NODE PHASE GRACE CONDITIONS Sep 16 10:03:23.594: INFO: stats-busybox-0 tmp-node-e2e-d8aaa33e-ubuntu-gke-1804-d1809-0-v20190913 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-09-16 10:03:01 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-09-16 10:03:04 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-09-16 10:03:04 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-09-16 10:03:01 +0000 UTC }] Sep 16 10:03:23.594: INFO: stats-busybox-1 tmp-node-e2e-d8aaa33e-ubuntu-gke-1804-d1809-0-v20190913 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-09-16 10:03:01 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-09-16 10:03:05 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-09-16 10:03:05 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-09-16 10:03:01 +0000 UTC }] Sep 16 10:03:23.594: INFO: Sep 16 10:03:23.596: INFO: Logging node info for node tmp-node-e2e-d8aaa33e-ubuntu-gke-1804-d1809-0-v20190913 Sep 16 10:03:23.597: INFO: Node Info: &Node{ObjectMeta:{tmp-node-e2e-d8aaa33e-ubuntu-gke-1804-d1809-0-v20190913 /api/v1/nodes/tmp-node-e2e-d8aaa33e-ubuntu-gke-1804-d1809-0-v20190913 e06930a8-592f-4099-a9d5-edfc75a6bb47 1211 0 2019-09-16 10:00:14 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:tmp-node-e2e-d8aaa33e-ubuntu-gke-1804-d1809-0-v20190913 kubernetes.io/os:linux] map[volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] []},Spec:NodeSpec{PodCIDR:,DoNotUse_ExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node.kubernetes.io/not-ready,Value:,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[],},Status:NodeStatus{Capacity:ResourceList{cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{20629221376 0} {<nil>} 20145724Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3872014336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{18566299208 0} {<nil>} 18566299208 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 
DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3609870336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2019-09-16 10:03:14 +0000 UTC,LastTransitionTime:2019-09-16 10:00:10 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2019-09-16 10:03:14 +0000 UTC,LastTransitionTime:2019-09-16 10:00:10 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2019-09-16 10:03:14 +0000 UTC,LastTransitionTime:2019-09-16 10:00:10 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2019-09-16 10:03:14 +0000 UTC,LastTransitionTime:2019-09-16 10:00:10 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.84,},NodeAddress{Type:Hostname,Address:tmp-node-e2e-d8aaa33e-ubuntu-gke-1804-d1809-0-v20190913,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:f51035bcd9db8910e22af6d51a902fac,SystemUUID:F51035BC-D9DB-8910-E22A-F6D51A902FAC,BootID:9e56941a-9d7d-4215-b61f-bae77a3d8412,KernelVersion:4.15.0-1042-gke,OSImage:Ubuntu 18.04.3 LTS,ContainerRuntimeVersion:containerd://1.2.7,KubeletVersion:v1.17.0-alpha.0.1445+4640b4f81ec6bc,KubeProxyVersion:v1.17.0-alpha.0.1445+4640b4f81ec6bc,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[docker.io/library/perl@sha256:978a36abce7dcf726bcdbb3f5b0d69ad3beb0cf688e9348a488f6f6023a027db docker.io/library/perl:5.26],SizeBytes:325130745,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/node-perf/tf-wide-deep-amd64@sha256:80d4564d5ab49ecfea3b20f75cc676d8dfd8b2aca364ed4c1a8a55fbcaaed7f6 gcr.io/kubernetes-e2e-test-images/node-perf/tf-wide-deep-amd64:1.0],SizeBytes:242137147,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/volume/gluster@sha256:e2d3308b2d27499d59f120ff46dfc6c4cb307a3f207f02894ecab902583761c9 gcr.io/kubernetes-e2e-test-images/volume/gluster:1.0],SizeBytes:111775822,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/volume/nfs@sha256:c2ad734346f608a5f7d69cfded93c4e8094069320657bd372d12ba21dea3ea71 gcr.io/kubernetes-e2e-test-images/volume/nfs:1.0],SizeBytes:82348896,},ContainerImage{Names:[docker.io/library/httpd@sha256:6feb0ea7b0967367da66e8d58ba813fde32bdb92f63bfc21a9e170d211539db4 docker.io/library/httpd:2.4.38-alpine],SizeBytes:40762646,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/node-perf/npb-is@sha256:9d08dd99565b25af37c990cd4474a4284b27e7ceb3f98328bb481edefedf8aa5 gcr.io/kubernetes-e2e-test-images/node-perf/npb-is:1.0],SizeBytes:39644608,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/node-perf/npb-ep@sha256:564314549347619cfcdbe6c7d042a29e133a00e922b37682890fff17ac1a7804 gcr.io/kubernetes-e2e-test-images/node-perf/npb-ep:1.0],SizeBytes:39643641,},ContainerImage{Names:[k8s.gcr.io/node-problem-detector@sha256:6e9b4a4eaa47f120be61f60573a545844de63401661812e2cfb7ae81a28efd19 k8s.gcr.io/node-problem-detector:v0.6.2],SizeBytes:33121906,},ContainerImage{Names:[docker.io/google/cadvisor@sha256:815386ebbe9a3490f38785ab11bda34ec8dacf4634af77b8912832d4f85dca04 
docker.io/google/cadvisor:latest],SizeBytes:30530401,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/agnhost@sha256:4057a5580c7b59c4fe10d8ab2732c9dec35eea80fd41f7bafc7bd5acc7edf727 gcr.io/kubernetes-e2e-test-images/agnhost:2.6],SizeBytes:18352698,},ContainerImage{Names:[gcr.io/google-containers/debian-base@sha256:6966a0aedd7592c18ff2dd803c08bd85780ee19f5e3a2e7cf908a4cd837afcde gcr.io/google-containers/debian-base:0.4.1],SizeBytes:17748863,},ContainerImage{Names:[docker.io/library/nginx@sha256:a3a0c4126587884f8d3090efca87f5af075d7e7ac8308cffc09a5a082d5f4760 docker.io/library/nginx:1.14-alpine],SizeBytes:6976771,},ContainerImage{Names:[k8s.gcr.io/nvidia-gpu-device-plugin@sha256:4b036e8844920336fa48f36edeb7d4398f426d6a934ba022848deed2edbf09aa],SizeBytes:6819465,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/ipc-utils@sha256:bb127be3a1ecac0516f672a5e223d94fe6021021534ecb7a02a607a63154c3d8 gcr.io/kubernetes-e2e-test-images/ipc-utils:1.0],SizeBytes:4004104,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nonewprivs@sha256:10066e9039219449fe3c81f38fe01928f87914150768ab81b62a468e51fa7411 gcr.io/kubernetes-e2e-test-images/nonewprivs:1.0],SizeBytes:3054649,},ContainerImage{Names:[docker.io/library/alpine@sha256:92251458088c638061cda8fd8b403b76d661a4dc6b7ee71b6affcf1872557b2b docker.io/library/alpine:3.7],SizeBytes:2109138,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/test-webserver@sha256:7f93d6e32798ff28bc6289254d0c2867fe2c849c8e46edc50f8624734309812e gcr.io/kubernetes-e2e-test-images/test-webserver:1.0],SizeBytes:1791163,},ContainerImage{Names:[k8s.gcr.io/stress:v1],SizeBytes:1558004,},ContainerImage{Names:[docker.io/library/busybox@sha256:e004c2cc521c95383aebb1fb5893719aa7a8eae2e7a71f316a4410784edb00a9 docker.io/library/busybox:1.29],SizeBytes:729986,},ContainerImage{Names:[k8s.gcr.io/busybox@sha256:4bdd623e848417d96127e16037743f0cd8b528c026e9175e22a84f639eca58ff],SizeBytes:676941,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/mounttest@sha256:c0bd6f0755f42af09a68c9a47fb993136588a76b3200ec305796b60d629d85d2 gcr.io/kubernetes-e2e-test-images/mounttest:1.0],SizeBytes:599341,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/mounttest-user@sha256:17319ca525ee003681fccf7e8c6b1b910ff4f49b653d939ac7f9b6e7c463933d gcr.io/kubernetes-e2e-test-images/mounttest-user:1.0],SizeBytes:539309,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:f78411e19d84a252e53bff71a4407a5686c46983a2c2eeed83929b888179acea k8s.gcr.io/pause:3.1],SizeBytes:317164,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:&NodeConfigStatus{Assigned:nil,Active:nil,LastKnownGood:nil,Error:,},},} Sep 16 10:03:23.597: INFO: Logging kubelet events for node tmp-node-e2e-d8aaa33e-ubuntu-gke-1804-d1809-0-v20190913 Sep 16 10:03:23.598: INFO: Logging pods the kubelet thinks is on node tmp-node-e2e-d8aaa33e-ubuntu-gke-1804-d1809-0-v20190913 Sep 16 10:03:23.601: INFO: test-pod started at 2019-09-16 10:03:03 +0000 UTC (0+3 container statuses recorded) Sep 16 10:03:23.601: INFO: Container busybox-1 ready: true, restart count 0 Sep 16 10:03:23.601: INFO: Container busybox-2 ready: true, restart count 0 Sep 16 10:03:23.601: INFO: Container busybox-3 ready: true, restart count 0 Sep 16 10:03:23.601: INFO: stats-busybox-0 started at 2019-09-16 10:03:01 +0000 UTC (0+1 container statuses recorded) Sep 16 10:03:23.601: INFO: Container busybox-container ready: true, restart count 1 Sep 16 10:03:23.601: INFO: stats-busybox-1 started at 2019-09-16 10:03:01 +0000 UTC (0+1 container statuses 
recorded)
Sep 16 10:03:23.601: INFO: Container busybox-container ready: true, restart count 1
Sep 16 10:03:23.601: INFO: pod-exec-websocket-09b7e885-2eb4-4a52-879f-819681763d0a started at 2019-09-16 10:02:48 +0000 UTC (0+1 container statuses recorded)
Sep 16 10:03:23.601: INFO: Container main ready: true, restart count 0
Sep 16 10:03:23.601: INFO: pod-projected-configmaps-f6f3370e-c8d6-4012-a6f5-65430618093a started at 2019-09-16 10:03:01 +0000 UTC (0+3 container statuses recorded)
Sep 16 10:03:23.601: INFO: Container createcm-volume-test ready: true, restart count 0
Sep 16 10:03:23.601: INFO: Container delcm-volume-test ready: true, restart count 0
Sep 16 10:03:23.601: INFO: Container updcm-volume-test ready: true, restart count 0
Sep 16 10:03:23.601: INFO: test-host-network-pod started at 2019-09-16 10:03:07 +0000 UTC (0+2 container statuses recorded)
Sep 16 10:03:23.601: INFO: Container busybox-1 ready: true, restart count 0
Sep 16 10:03:23.601: INFO: Container busybox-2 ready: true, restart count 0
W0916 10:03:23.603004    2882 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Sep 16 10:03:23.718: INFO: Latency metrics for node tmp-node-e2e-d8aaa33e-ubuntu-gke-1804-d1809-0-v20190913
Sep 16 10:03:23.718: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "summary-test-7591" for this suite.
Sep 16 10:04:11.726: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 16 10:04:11.766: INFO: namespace summary-test-7591 deletion completed in 48.046894424s
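For confirming the lingering taint outside the suite, a small client-go program can list the nodes and print their taints. The kubeconfig path is the one from the kubelet command lines above, and the context-free List signature matches the client-go vintage vendored by this job; this is a diagnostic sketch, not part of the job itself:

package main

import (
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Kubeconfig path taken from the kubelet flags logged above.
	config, err := clientcmd.BuildConfigFromFlags("", "/tmp/node-e2e-20190916T095806/kubeconfig")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(config)
	// Pre-1.18 client-go: List takes only ListOptions, no context argument.
	nodes, err := client.CoreV1().Nodes().List(metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, node := range nodes.Items {
		// A healthy run should show no node.kubernetes.io/not-ready taint here.
		fmt.Printf("%s unschedulable=%v taints=%v\n", node.Name, node.Spec.Unschedulable, node.Spec.Taints)
	}
}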
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=E2eNode\sSuite\s\[k8s\.io\]\sSummary\sAPI\s\[NodeConformance\]\swhen\squerying\s\/stats\/summary\sshould\sreport\sresource\susage\sthrough\sthe\sstats\sapi$'
_output/local/go/src/k8s.io/kubernetes/test/e2e_node/summary_test.go:53
Unexpected number of node objects for node e2e. Expects only one node.
Expected
    <int>: 0
to equal
    <int>: 1
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:1332
from junit_cos-stable_05.xml
[BeforeEach] [k8s.io] Summary API [NodeConformance] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 �[1mSTEP�[0m: Creating a kubernetes client �[1mSTEP�[0m: Building a namespace api object, basename summary-test Sep 16 10:01:47.898: INFO: Skipping waiting for service account [It] should report resource usage through the stats api _output/local/go/src/k8s.io/kubernetes/test/e2e_node/summary_test.go:53 �[1mSTEP�[0m: Creating test pods Sep 16 10:02:12.031: INFO: Unexpected unequal occurred: 0 and 1 goroutine 229 [running]: runtime/debug.Stack(0x4, 0x4dfde69, 0x2) /usr/local/go/src/runtime/debug/stack.go:24 +0x9d runtime/debug.PrintStack() /usr/local/go/src/runtime/debug/stack.go:16 +0x22 k8s.io/kubernetes/test/e2e/framework.ExpectEqual(0x4235c80, 0xbea4d00, 0x4235c80, 0x845da80, 0xc000cd5b60, 0x1, 0x1) /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/util.go:1330 +0x27f k8s.io/kubernetes/test/e2e_node.getLocalNode(0xc0006b7040, 0x85ac280) _output/local/go/src/k8s.io/kubernetes/test/e2e_node/util.go:350 +0xd2 k8s.io/kubernetes/test/e2e_node.glob..func43.1.2() _output/local/go/src/k8s.io/kubernetes/test/e2e_node/summary_test.go:81 +0x28c k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes.(*runner).runSync(0xc000de5140, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...) /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/runner.go:113 +0x9c k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes.(*runner).run(0xc000de5140, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...) /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/runner.go:64 +0xcf k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes.(*ItNode).Run(0xc000584f80, 0x8543d80, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, ...) /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/leafnodes/it_node.go:26 +0x64 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec.(*Spec).runSample(0xc00101fef0, 0x0, 0x8543d80, 0xc0001ed4c0) /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec/spec.go:215 +0x596 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec.(*Spec).Run(0xc00101fef0, 0x8543d80, 0xc0001ed4c0) /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/spec/spec.go:138 +0xf4 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner.(*SpecRunner).runSpec(0xc000370500, 0xc00101fef0, 0x1) /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner/spec_runner.go:200 +0x10f k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner.(*SpecRunner).runSpecs(0xc000370500, 0x1) /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner/spec_runner.go:170 +0x124 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner.(*SpecRunner).Run(0xc000370500, 0xc00063b7c0) /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/specrunner/spec_runner.go:66 +0x117 k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/suite.(*Suite).Run(0xc0002171d0, 0x7febbb27e520, 0xc000d38e00, 0x4e143c7, 0xd, 0xc0006434c0, 0x2, 0x2, 0x8609300, 0xc0001ed4c0, ...) 
	/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/internal/suite/suite.go:62 +0x42e
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo.RunSpecsWithCustomReporters(0x85492c0, 0xc000d38e00, 0x4e143c7, 0xd, 0xc000643480, 0x2, 0x2, 0x2)
	/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/ginkgo_dsl.go:221 +0x217
k8s.io/kubernetes/vendor/github.com/onsi/ginkgo.RunSpecsWithDefaultAndCustomReporters(0x85492c0, 0xc000d38e00, 0x4e143c7, 0xd, 0xc0005ea400, 0x1, 0x1, 0x1)
	/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/github.com/onsi/ginkgo/ginkgo_dsl.go:209 +0xad
k8s.io/kubernetes/test/e2e_node.TestE2eNode(0xc000d38e00)
	_output/local/go/src/k8s.io/kubernetes/test/e2e_node/e2e_node_suite_test.go:148 +0x3db
testing.tRunner(0xc000d38e00, 0x4fe3568)
	/usr/local/go/src/testing/testing.go:865 +0xc0
created by testing.(*T).Run
	/usr/local/go/src/testing/testing.go:916 +0x35a
[AfterEach] when querying /stats/summary
_output/local/go/src/k8s.io/kubernetes/test/e2e_node/summary_test.go:43
Sep 16 10:02:12.035: INFO: Running kubectl logs on non-ready containers in summary-test-7793
STEP: Recording processes in system cgroups
Sep 16 10:02:12.039: INFO: Processes in kubelet cgroup (/kubelet.slice):
Sep 16 10:02:12.039: INFO: /tmp/node-e2e-20190916T095806/kubelet --kubeconfig /tmp/node-e2e-20190916T095806/kubeconfig --root-dir /var/lib/kubelet --v 4 --logtostderr --dynamic-config-dir /tmp/node-e2e-20190916T095806/dynamic-kubelet-config --network-plugin=kubenet --cni-bin-dir /tmp/node-e2e-20190916T095806/cni/bin --cni-conf-dir /tmp/node-e2e-20190916T095806/cni/net.d --cni-cache-dir /tmp/node-e2e-20190916T095806/cni/cache --hostname-override tmp-node-e2e-d8aaa33e-cos-73-11647-293-0 --container-runtime remote --container-runtime-endpoint unix:///run/containerd/containerd.sock --config /tmp/node-e2e-20190916T095806/kubelet-config --experimental-mounter-path=/tmp/node-e2e-20190916T095806/mounter --experimental-kernel-memcg-notification=true --cgroups-per-qos=true --cgroup-root=/ --runtime-cgroups=/system.slice/containerd.service
Sep 16 10:02:12.039: INFO: Skipping unconfigured cgroup misc
[AfterEach] [k8s.io] Summary API [NodeConformance]
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
STEP: Collecting events from namespace "summary-test-7793".
STEP: Found 6 events.
Sep 16 10:02:12.041: INFO: At 2019-09-16 10:01:49 +0000 UTC - event for stats-busybox-0: {kubelet tmp-node-e2e-d8aaa33e-cos-73-11647-293-0} Pulled: Container image "docker.io/library/busybox:1.29" already present on machine Sep 16 10:02:12.041: INFO: At 2019-09-16 10:01:49 +0000 UTC - event for stats-busybox-0: {kubelet tmp-node-e2e-d8aaa33e-cos-73-11647-293-0} Created: Created container busybox-container Sep 16 10:02:12.041: INFO: At 2019-09-16 10:01:49 +0000 UTC - event for stats-busybox-1: {kubelet tmp-node-e2e-d8aaa33e-cos-73-11647-293-0} Pulled: Container image "docker.io/library/busybox:1.29" already present on machine Sep 16 10:02:12.041: INFO: At 2019-09-16 10:01:50 +0000 UTC - event for stats-busybox-0: {kubelet tmp-node-e2e-d8aaa33e-cos-73-11647-293-0} Started: Started container busybox-container Sep 16 10:02:12.041: INFO: At 2019-09-16 10:01:50 +0000 UTC - event for stats-busybox-1: {kubelet tmp-node-e2e-d8aaa33e-cos-73-11647-293-0} Created: Created container busybox-container Sep 16 10:02:12.041: INFO: At 2019-09-16 10:01:51 +0000 UTC - event for stats-busybox-1: {kubelet tmp-node-e2e-d8aaa33e-cos-73-11647-293-0} Started: Started container busybox-container Sep 16 10:02:12.044: INFO: POD NODE PHASE GRACE CONDITIONS Sep 16 10:02:12.044: INFO: stats-busybox-0 tmp-node-e2e-d8aaa33e-cos-73-11647-293-0 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-09-16 10:01:47 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-09-16 10:01:54 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-09-16 10:01:54 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-09-16 10:01:47 +0000 UTC }] Sep 16 10:02:12.044: INFO: stats-busybox-1 tmp-node-e2e-d8aaa33e-cos-73-11647-293-0 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-09-16 10:01:47 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-09-16 10:01:54 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-09-16 10:01:54 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-09-16 10:01:47 +0000 UTC }] Sep 16 10:02:12.044: INFO: Sep 16 10:02:12.050: INFO: Logging node info for node tmp-node-e2e-d8aaa33e-cos-73-11647-293-0 Sep 16 10:02:12.051: INFO: Node Info: &Node{ObjectMeta:{tmp-node-e2e-d8aaa33e-cos-73-11647-293-0 /api/v1/nodes/tmp-node-e2e-d8aaa33e-cos-73-11647-293-0 de1de0c3-ce1d-44d6-bb56-2e7839e24f20 801 0 2019-09-16 10:00:07 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:tmp-node-e2e-d8aaa33e-cos-73-11647-293-0 kubernetes.io/os:linux] map[volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] []},Spec:NodeSpec{PodCIDR:,DoNotUse_ExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node.kubernetes.io/not-ready,Value:,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[],},Status:NodeStatus{Capacity:ResourceList{cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{16684785664 0} {<nil>} BinarySI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3878486016 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{15016307073 0} {<nil>} 15016307073 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3616342016 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2019-09-16 10:01:38 +0000 UTC,LastTransitionTime:2019-09-16 
10:00:04 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2019-09-16 10:01:38 +0000 UTC,LastTransitionTime:2019-09-16 10:00:04 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2019-09-16 10:01:38 +0000 UTC,LastTransitionTime:2019-09-16 10:00:04 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2019-09-16 10:01:38 +0000 UTC,LastTransitionTime:2019-09-16 10:00:04 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.83,},NodeAddress{Type:Hostname,Address:tmp-node-e2e-d8aaa33e-cos-73-11647-293-0,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:a064d78421e9c8eac5e80fe5da19fb15,SystemUUID:A064D784-21E9-C8EA-C5E8-0FE5DA19FB15,BootID:821ee2ea-1060-424d-9835-b7cdc9159dc8,KernelVersion:4.14.138+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.2.8,KubeletVersion:v1.17.0-alpha.0.1445+4640b4f81ec6bc,KubeProxyVersion:v1.17.0-alpha.0.1445+4640b4f81ec6bc,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[docker.io/library/perl@sha256:978a36abce7dcf726bcdbb3f5b0d69ad3beb0cf688e9348a488f6f6023a027db docker.io/library/perl:5.26],SizeBytes:325130745,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/node-perf/tf-wide-deep-amd64@sha256:80d4564d5ab49ecfea3b20f75cc676d8dfd8b2aca364ed4c1a8a55fbcaaed7f6 gcr.io/kubernetes-e2e-test-images/node-perf/tf-wide-deep-amd64:1.0],SizeBytes:242137147,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/volume/gluster@sha256:e2d3308b2d27499d59f120ff46dfc6c4cb307a3f207f02894ecab902583761c9 gcr.io/kubernetes-e2e-test-images/volume/gluster:1.0],SizeBytes:111775822,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/volume/nfs@sha256:c2ad734346f608a5f7d69cfded93c4e8094069320657bd372d12ba21dea3ea71 gcr.io/kubernetes-e2e-test-images/volume/nfs:1.0],SizeBytes:82348896,},ContainerImage{Names:[docker.io/library/httpd@sha256:6feb0ea7b0967367da66e8d58ba813fde32bdb92f63bfc21a9e170d211539db4 docker.io/library/httpd:2.4.38-alpine],SizeBytes:40762646,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/node-perf/npb-is@sha256:9d08dd99565b25af37c990cd4474a4284b27e7ceb3f98328bb481edefedf8aa5 gcr.io/kubernetes-e2e-test-images/node-perf/npb-is:1.0],SizeBytes:39644608,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/node-perf/npb-ep@sha256:564314549347619cfcdbe6c7d042a29e133a00e922b37682890fff17ac1a7804 gcr.io/kubernetes-e2e-test-images/node-perf/npb-ep:1.0],SizeBytes:39643641,},ContainerImage{Names:[k8s.gcr.io/node-problem-detector@sha256:6e9b4a4eaa47f120be61f60573a545844de63401661812e2cfb7ae81a28efd19 k8s.gcr.io/node-problem-detector:v0.6.2],SizeBytes:33121906,},ContainerImage{Names:[docker.io/google/cadvisor@sha256:815386ebbe9a3490f38785ab11bda34ec8dacf4634af77b8912832d4f85dca04 docker.io/google/cadvisor:latest],SizeBytes:30530401,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/agnhost@sha256:4057a5580c7b59c4fe10d8ab2732c9dec35eea80fd41f7bafc7bd5acc7edf727 
gcr.io/kubernetes-e2e-test-images/agnhost:2.6],SizeBytes:18352698,},ContainerImage{Names:[docker.io/library/nginx@sha256:a3a0c4126587884f8d3090efca87f5af075d7e7ac8308cffc09a5a082d5f4760 docker.io/library/nginx:1.14-alpine],SizeBytes:6976771,},ContainerImage{Names:[k8s.gcr.io/nvidia-gpu-device-plugin@sha256:4b036e8844920336fa48f36edeb7d4398f426d6a934ba022848deed2edbf09aa],SizeBytes:6819465,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/ipc-utils@sha256:bb127be3a1ecac0516f672a5e223d94fe6021021534ecb7a02a607a63154c3d8 gcr.io/kubernetes-e2e-test-images/ipc-utils:1.0],SizeBytes:4004104,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nonewprivs@sha256:10066e9039219449fe3c81f38fe01928f87914150768ab81b62a468e51fa7411 gcr.io/kubernetes-e2e-test-images/nonewprivs:1.0],SizeBytes:3054649,},ContainerImage{Names:[docker.io/library/alpine@sha256:92251458088c638061cda8fd8b403b76d661a4dc6b7ee71b6affcf1872557b2b docker.io/library/alpine:3.7],SizeBytes:2109138,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/test-webserver@sha256:7f93d6e32798ff28bc6289254d0c2867fe2c849c8e46edc50f8624734309812e gcr.io/kubernetes-e2e-test-images/test-webserver:1.0],SizeBytes:1791163,},ContainerImage{Names:[k8s.gcr.io/stress:v1],SizeBytes:1558004,},ContainerImage{Names:[docker.io/library/busybox@sha256:e004c2cc521c95383aebb1fb5893719aa7a8eae2e7a71f316a4410784edb00a9 docker.io/library/busybox:1.29],SizeBytes:729986,},ContainerImage{Names:[k8s.gcr.io/busybox@sha256:4bdd623e848417d96127e16037743f0cd8b528c026e9175e22a84f639eca58ff],SizeBytes:676941,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/mounttest@sha256:c0bd6f0755f42af09a68c9a47fb993136588a76b3200ec305796b60d629d85d2 gcr.io/kubernetes-e2e-test-images/mounttest:1.0],SizeBytes:599341,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/mounttest-user@sha256:17319ca525ee003681fccf7e8c6b1b910ff4f49b653d939ac7f9b6e7c463933d gcr.io/kubernetes-e2e-test-images/mounttest-user:1.0],SizeBytes:539309,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:f78411e19d84a252e53bff71a4407a5686c46983a2c2eeed83929b888179acea k8s.gcr.io/pause:3.1],SizeBytes:317164,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:&NodeConfigStatus{Assigned:nil,Active:nil,LastKnownGood:nil,Error:,},},} Sep 16 10:02:12.051: INFO: Logging kubelet events for node tmp-node-e2e-d8aaa33e-cos-73-11647-293-0 Sep 16 10:02:12.054: INFO: Logging pods the kubelet thinks is on node tmp-node-e2e-d8aaa33e-cos-73-11647-293-0 Sep 16 10:02:12.058: INFO: liveness-705bfbae-aa80-4713-a3d3-683227d78372 started at 2019-09-16 10:00:49 +0000 UTC (0+1 container statuses recorded) Sep 16 10:02:12.058: INFO: Container liveness ready: false, restart count 3 Sep 16 10:02:12.058: INFO: stats-busybox-1 started at 2019-09-16 10:01:47 +0000 UTC (0+1 container statuses recorded) Sep 16 10:02:12.058: INFO: Container busybox-container ready: true, restart count 1 Sep 16 10:02:12.058: INFO: stats-busybox-0 started at 2019-09-16 10:01:47 +0000 UTC (0+1 container statuses recorded) Sep 16 10:02:12.058: INFO: Container busybox-container ready: true, restart count 1 Sep 16 10:02:12.058: INFO: liveness-34c7eb9a-9b96-427f-aa82-74e3319c13ab started at 2019-09-16 10:02:03 +0000 UTC (0+1 container statuses recorded) Sep 16 10:02:12.058: INFO: Container liveness ready: true, restart count 0 Sep 16 10:02:12.058: INFO: alpine-nnp-true-0056e9bf-b47b-4098-b4cf-410491876209 started at 2019-09-16 10:02:07 +0000 UTC (0+1 container statuses recorded) Sep 16 10:02:12.058: INFO: Container 
alpine-nnp-true-0056e9bf-b47b-4098-b4cf-410491876209 ready: false, restart count 0
W0916 10:02:12.061817 995 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Sep 16 10:02:12.189: INFO: Latency metrics for node tmp-node-e2e-d8aaa33e-cos-73-11647-293-0
Sep 16 10:02:12.189: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "summary-test-7793" for this suite.
Sep 16 10:03:04.195: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 16 10:03:04.266: INFO: namespace summary-test-7793 deletion completed in 52.07540713s
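For local debugging of this test, the /stats/summary document it consumes can be fetched directly from the kubelet's secure port (10250, matching the DaemonEndpoint in the node info above). A rough sketch; skipping TLS verification and omitting authentication are assumptions that only hold on a throwaway test node, since a real kubelet normally requires a bearer token or client certificate:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"log"
	"net/http"
)

func main() {
	// Assumed address of the kubelet under test.
	const url = "https://127.0.0.1:10250/stats/summary"
	client := &http.Client{Transport: &http.Transport{
		// Only acceptable against a disposable node-e2e instance.
		TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
	}}
	resp, err := client.Get(url)
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()
	body, err := io.ReadAll(resp.Body)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("%s\n", body) // JSON Summary object: node- and per-pod stats
}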
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=E2eNode\sSuite\s\[sig\-network\]\sNetworking\sGranular\sChecks\:\sPods\sshould\sfunction\sfor\sintra\-pod\scommunication\:\shttp\s\[LinuxOnly\]\s\[NodeConformance\]\s\[Conformance\]$'
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 Unexpected error: <*errors.errorString | 0xc000218d80>: { s: "timed out waiting for the condition", } timed out waiting for the condition occurred /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:635 from junit_cos-stable_05.xml
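The next three failures share one shape: the framework polls for up to 10m0s until every node is schedulable, and a node carrying a NoSchedule taint that is not in its NonblockingTaints list never qualifies, so the poll times out. A simplified sketch of that predicate — not the framework's actual code, and trimmed to the fields the log output shows (it omits the Network=false readiness check, for example):

package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
)

// isNodeSchedulable is a cut-down version of the e2e framework's check: the
// node must be Ready and must carry no NoSchedule taint outside the
// configured non-blocking set.
func isNodeSchedulable(node *v1.Node, nonblocking map[string]bool) bool {
	ready := false
	for _, c := range node.Status.Conditions {
		if c.Type == v1.NodeReady && c.Status == v1.ConditionTrue {
			ready = true
		}
	}
	if !ready || node.Spec.Unschedulable {
		return false
	}
	for _, t := range node.Spec.Taints {
		if t.Effect == v1.TaintEffectNoSchedule && !nonblocking[t.Key] {
			return false // e.g. node.kubernetes.io/not-ready blocks scheduling
		}
	}
	return true
}

func main() {
	// Mirrors the node in the log: Ready=true but tainted not-ready:NoSchedule.
	node := &v1.Node{
		Spec: v1.NodeSpec{Taints: []v1.Taint{{
			Key: "node.kubernetes.io/not-ready", Effect: v1.TaintEffectNoSchedule,
		}}},
		Status: v1.NodeStatus{Conditions: []v1.NodeCondition{{
			Type: v1.NodeReady, Status: v1.ConditionTrue,
		}}},
	}
	nonblocking := map[string]bool{"node-role.kubernetes.io/master": true}
	fmt.Println(isNodeSchedulable(node, nonblocking)) // false: the taint blocks
}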
[BeforeEach] [sig-network] Networking
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
STEP: Building a namespace api object, basename pod-network-test
Sep 16 10:11:06.439: INFO: Skipping waiting for service account
[It] should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: Performing setup for networking test in namespace pod-network-test-5254
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Sep 16 10:11:06.439: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Sep 16 10:11:06.440: INFO: Unschedulable nodes:
Sep 16 10:11:06.440: INFO: -> tmp-node-e2e-d8aaa33e-cos-73-11647-293-0 Ready=true Network=false Taints=[{node.kubernetes.io/not-ready NoSchedule <nil>}] NonblockingTaints:node-role.kubernetes.io/master
Sep 16 10:11:06.440: INFO: ================================
[... the same three-line "Unschedulable nodes" report repeats every 30s for the rest of the 10m0s wait ...]
Sep 16 10:21:06.442: INFO: Unschedulable nodes:
Sep 16 10:21:06.442: INFO: -> tmp-node-e2e-d8aaa33e-cos-73-11647-293-0 Ready=true Network=false Taints=[{node.kubernetes.io/not-ready NoSchedule <nil>}] NonblockingTaints:node-role.kubernetes.io/master
Sep 16 10:21:06.442: INFO: ================================
[AfterEach] [sig-network] Networking
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
STEP: Collecting events from namespace "pod-network-test-5254".
STEP: Found 0 events.
Sep 16 10:21:06.445: INFO: POD NODE PHASE GRACE CONDITIONS
Sep 16 10:21:06.445: INFO:
Sep 16 10:21:06.448: INFO: Logging node info for node tmp-node-e2e-d8aaa33e-cos-73-11647-293-0
Sep 16 10:21:06.449: INFO: Node Info: &Node{ObjectMeta:{tmp-node-e2e-d8aaa33e-cos-73-11647-293-0 /api/v1/nodes/tmp-node-e2e-d8aaa33e-cos-73-11647-293-0 de1de0c3-ce1d-44d6-bb56-2e7839e24f20 3509 0 2019-09-16 10:00:07 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:tmp-node-e2e-d8aaa33e-cos-73-11647-293-0 kubernetes.io/os:linux] map[volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] []},Spec:NodeSpec{PodCIDR:,DoNotUse_ExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node.kubernetes.io/not-ready,Value:,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[],},Status:NodeStatus{Capacity:ResourceList{cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{16684785664 0} {<nil>} BinarySI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3878486016 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{15016307073 0} {<nil>} 15016307073 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3616342016 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2019-09-16 10:20:09 +0000 UTC,LastTransitionTime:2019-09-16 10:00:04 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2019-09-16 10:20:09 +0000 UTC,LastTransitionTime:2019-09-16 10:00:04 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2019-09-16 10:20:09 +0000 UTC,LastTransitionTime:2019-09-16 10:00:04 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2019-09-16 10:20:09 +0000 UTC,LastTransitionTime:2019-09-16 10:00:04 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status.
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.83,},NodeAddress{Type:Hostname,Address:tmp-node-e2e-d8aaa33e-cos-73-11647-293-0,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:a064d78421e9c8eac5e80fe5da19fb15,SystemUUID:A064D784-21E9-C8EA-C5E8-0FE5DA19FB15,BootID:821ee2ea-1060-424d-9835-b7cdc9159dc8,KernelVersion:4.14.138+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.2.8,KubeletVersion:v1.17.0-alpha.0.1445+4640b4f81ec6bc,KubeProxyVersion:v1.17.0-alpha.0.1445+4640b4f81ec6bc,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[docker.io/library/perl@sha256:978a36abce7dcf726bcdbb3f5b0d69ad3beb0cf688e9348a488f6f6023a027db docker.io/library/perl:5.26],SizeBytes:325130745,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/node-perf/tf-wide-deep-amd64@sha256:80d4564d5ab49ecfea3b20f75cc676d8dfd8b2aca364ed4c1a8a55fbcaaed7f6 gcr.io/kubernetes-e2e-test-images/node-perf/tf-wide-deep-amd64:1.0],SizeBytes:242137147,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/volume/gluster@sha256:e2d3308b2d27499d59f120ff46dfc6c4cb307a3f207f02894ecab902583761c9 gcr.io/kubernetes-e2e-test-images/volume/gluster:1.0],SizeBytes:111775822,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/volume/nfs@sha256:c2ad734346f608a5f7d69cfded93c4e8094069320657bd372d12ba21dea3ea71 gcr.io/kubernetes-e2e-test-images/volume/nfs:1.0],SizeBytes:82348896,},ContainerImage{Names:[docker.io/library/httpd@sha256:6feb0ea7b0967367da66e8d58ba813fde32bdb92f63bfc21a9e170d211539db4 docker.io/library/httpd:2.4.38-alpine],SizeBytes:40762646,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/node-perf/npb-is@sha256:9d08dd99565b25af37c990cd4474a4284b27e7ceb3f98328bb481edefedf8aa5 gcr.io/kubernetes-e2e-test-images/node-perf/npb-is:1.0],SizeBytes:39644608,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/node-perf/npb-ep@sha256:564314549347619cfcdbe6c7d042a29e133a00e922b37682890fff17ac1a7804 gcr.io/kubernetes-e2e-test-images/node-perf/npb-ep:1.0],SizeBytes:39643641,},ContainerImage{Names:[k8s.gcr.io/node-problem-detector@sha256:6e9b4a4eaa47f120be61f60573a545844de63401661812e2cfb7ae81a28efd19 k8s.gcr.io/node-problem-detector:v0.6.2],SizeBytes:33121906,},ContainerImage{Names:[docker.io/google/cadvisor@sha256:815386ebbe9a3490f38785ab11bda34ec8dacf4634af77b8912832d4f85dca04 docker.io/google/cadvisor:latest],SizeBytes:30530401,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/agnhost@sha256:4057a5580c7b59c4fe10d8ab2732c9dec35eea80fd41f7bafc7bd5acc7edf727 gcr.io/kubernetes-e2e-test-images/agnhost:2.6],SizeBytes:18352698,},ContainerImage{Names:[gcr.io/google-containers/debian-base@sha256:6966a0aedd7592c18ff2dd803c08bd85780ee19f5e3a2e7cf908a4cd837afcde gcr.io/google-containers/debian-base:0.4.1],SizeBytes:17748863,},ContainerImage{Names:[docker.io/library/nginx@sha256:a3a0c4126587884f8d3090efca87f5af075d7e7ac8308cffc09a5a082d5f4760 docker.io/library/nginx:1.14-alpine],SizeBytes:6976771,},ContainerImage{Names:[k8s.gcr.io/nvidia-gpu-device-plugin@sha256:4b036e8844920336fa48f36edeb7d4398f426d6a934ba022848deed2edbf09aa],SizeBytes:6819465,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/ipc-utils@sha256:bb127be3a1ecac0516f672a5e223d94fe6021021534ecb7a02a607a63154c3d8 
gcr.io/kubernetes-e2e-test-images/ipc-utils:1.0],SizeBytes:4004104,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nonewprivs@sha256:10066e9039219449fe3c81f38fe01928f87914150768ab81b62a468e51fa7411 gcr.io/kubernetes-e2e-test-images/nonewprivs:1.0],SizeBytes:3054649,},ContainerImage{Names:[gcr.io/authenticated-image-pulling/alpine@sha256:7ff177862cb50c602bfe81f805969412e619c054a2bbead977d0c276988aa4a0 gcr.io/authenticated-image-pulling/alpine:3.7],SizeBytes:2110879,},ContainerImage{Names:[docker.io/library/alpine@sha256:92251458088c638061cda8fd8b403b76d661a4dc6b7ee71b6affcf1872557b2b docker.io/library/alpine:3.7],SizeBytes:2109138,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/test-webserver@sha256:7f93d6e32798ff28bc6289254d0c2867fe2c849c8e46edc50f8624734309812e gcr.io/kubernetes-e2e-test-images/test-webserver:1.0],SizeBytes:1791163,},ContainerImage{Names:[k8s.gcr.io/stress:v1],SizeBytes:1558004,},ContainerImage{Names:[docker.io/library/busybox@sha256:e004c2cc521c95383aebb1fb5893719aa7a8eae2e7a71f316a4410784edb00a9 docker.io/library/busybox:1.29],SizeBytes:729986,},ContainerImage{Names:[k8s.gcr.io/busybox@sha256:4bdd623e848417d96127e16037743f0cd8b528c026e9175e22a84f639eca58ff],SizeBytes:676941,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/mounttest@sha256:c0bd6f0755f42af09a68c9a47fb993136588a76b3200ec305796b60d629d85d2 gcr.io/kubernetes-e2e-test-images/mounttest:1.0],SizeBytes:599341,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/mounttest-user@sha256:17319ca525ee003681fccf7e8c6b1b910ff4f49b653d939ac7f9b6e7c463933d gcr.io/kubernetes-e2e-test-images/mounttest-user:1.0],SizeBytes:539309,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:f78411e19d84a252e53bff71a4407a5686c46983a2c2eeed83929b888179acea k8s.gcr.io/pause:3.1],SizeBytes:317164,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:&NodeConfigStatus{Assigned:nil,Active:nil,LastKnownGood:nil,Error:,},},}
Sep 16 10:21:06.449: INFO: Logging kubelet events for node tmp-node-e2e-d8aaa33e-cos-73-11647-293-0
Sep 16 10:21:06.450: INFO: Logging pods the kubelet thinks is on node tmp-node-e2e-d8aaa33e-cos-73-11647-293-0
W0916 10:21:06.454251 995 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Sep 16 10:21:06.472: INFO: Latency metrics for node tmp-node-e2e-d8aaa33e-cos-73-11647-293-0
Sep 16 10:21:06.472: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-5254" for this suite.
Sep 16 10:21:12.478: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 16 10:21:12.518: INFO: namespace pod-network-test-5254 deletion completed in 6.044121966s
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=E2eNode\sSuite\s\[sig\-network\]\sNetworking\sGranular\sChecks\:\sPods\sshould\sfunction\sfor\sintra\-pod\scommunication\:\shttp\s\[LinuxOnly\]\s\[NodeConformance\]\s\[Conformance\]$'
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 Unexpected error: <*errors.errorString | 0xc000218d80>: { s: "timed out waiting for the condition", } timed out waiting for the condition occurred /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:635 from junit_cos-stable_05.xml
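The taint in question is exactly what this PR graduates: with TaintNodesByCondition on, a node's NotReady state is mirrored as the node.kubernetes.io/not-ready:NoSchedule taint, and the node lifecycle controller removes it once the node reports Ready. A plausible reading of these timeouts is that the standalone node-e2e environment runs no controller-manager to clear the taint (the "Master node is not registered" warning points the same way), so the taint applied at registration persists. A toy sketch of the condition-to-taint mapping, illustrative only and not the lifecycle controller's code:

package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
)

const notReadyTaintKey = "node.kubernetes.io/not-ready"

// desiredNotReadyTaint mirrors the Ready condition into a taint: present
// while the node is not Ready, absent once it is. The real controller also
// reconciles unreachable, pressure, and other condition-derived taints.
func desiredNotReadyTaint(node *v1.Node) *v1.Taint {
	for _, c := range node.Status.Conditions {
		if c.Type == v1.NodeReady && c.Status == v1.ConditionTrue {
			return nil // Ready: the taint should be removed
		}
	}
	return &v1.Taint{Key: notReadyTaintKey, Effect: v1.TaintEffectNoSchedule}
}

func main() {
	ready := &v1.Node{Status: v1.NodeStatus{Conditions: []v1.NodeCondition{{
		Type: v1.NodeReady, Status: v1.ConditionTrue,
	}}}}
	fmt.Println(desiredNotReadyTaint(ready)) // <nil>: a Ready node sheds the taint
	// With nothing running this reconciliation, the registration-time taint
	// stays on the node, which is what the polls in these logs observe.
}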
[BeforeEach] [sig-network] Networking
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
STEP: Building a namespace api object, basename pod-network-test
Sep 16 10:21:12.522: INFO: Skipping waiting for service account
[It] should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: Performing setup for networking test in namespace pod-network-test-8164
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Sep 16 10:21:12.522: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Sep 16 10:21:12.524: INFO: Unschedulable nodes:
Sep 16 10:21:12.524: INFO: -> tmp-node-e2e-d8aaa33e-cos-73-11647-293-0 Ready=true Network=false Taints=[{node.kubernetes.io/not-ready NoSchedule <nil>}] NonblockingTaints:node-role.kubernetes.io/master
Sep 16 10:21:12.524: INFO: ================================
[... the same three-line "Unschedulable nodes" report repeats every 30s for the rest of the 10m0s wait ...]
Sep 16 10:31:12.526: INFO: Unschedulable nodes:
Sep 16 10:31:12.526: INFO: -> tmp-node-e2e-d8aaa33e-cos-73-11647-293-0 Ready=true Network=false Taints=[{node.kubernetes.io/not-ready NoSchedule <nil>}] NonblockingTaints:node-role.kubernetes.io/master
Sep 16 10:31:12.526: INFO: ================================
[AfterEach] [sig-network] Networking
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
STEP: Collecting events from namespace "pod-network-test-8164".
STEP: Found 0 events.
Sep 16 10:31:12.529: INFO: POD NODE PHASE GRACE CONDITIONS
Sep 16 10:31:12.529: INFO:
Sep 16 10:31:12.531: INFO: Logging node info for node tmp-node-e2e-d8aaa33e-cos-73-11647-293-0
Sep 16 10:31:12.533: INFO: Node Info: &Node{ObjectMeta:{tmp-node-e2e-d8aaa33e-cos-73-11647-293-0 /api/v1/nodes/tmp-node-e2e-d8aaa33e-cos-73-11647-293-0 de1de0c3-ce1d-44d6-bb56-2e7839e24f20 3665 0 2019-09-16 10:00:07 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:tmp-node-e2e-d8aaa33e-cos-73-11647-293-0 kubernetes.io/os:linux] map[volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] []},Spec:NodeSpec{PodCIDR:,DoNotUse_ExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node.kubernetes.io/not-ready,Value:,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[],},Status:NodeStatus{Capacity:ResourceList{cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{16684785664 0} {<nil>} BinarySI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3878486016 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{15016307073 0} {<nil>} 15016307073 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3616342016 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2019-09-16 10:31:09 +0000 UTC,LastTransitionTime:2019-09-16 10:00:04 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2019-09-16 10:31:09 +0000 UTC,LastTransitionTime:2019-09-16 10:00:04 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2019-09-16 10:31:09 +0000 UTC,LastTransitionTime:2019-09-16 10:00:04 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2019-09-16 10:31:09 +0000 UTC,LastTransitionTime:2019-09-16 10:00:04 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status.
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.83,},NodeAddress{Type:Hostname,Address:tmp-node-e2e-d8aaa33e-cos-73-11647-293-0,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:a064d78421e9c8eac5e80fe5da19fb15,SystemUUID:A064D784-21E9-C8EA-C5E8-0FE5DA19FB15,BootID:821ee2ea-1060-424d-9835-b7cdc9159dc8,KernelVersion:4.14.138+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.2.8,KubeletVersion:v1.17.0-alpha.0.1445+4640b4f81ec6bc,KubeProxyVersion:v1.17.0-alpha.0.1445+4640b4f81ec6bc,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[docker.io/library/perl@sha256:978a36abce7dcf726bcdbb3f5b0d69ad3beb0cf688e9348a488f6f6023a027db docker.io/library/perl:5.26],SizeBytes:325130745,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/node-perf/tf-wide-deep-amd64@sha256:80d4564d5ab49ecfea3b20f75cc676d8dfd8b2aca364ed4c1a8a55fbcaaed7f6 gcr.io/kubernetes-e2e-test-images/node-perf/tf-wide-deep-amd64:1.0],SizeBytes:242137147,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/volume/gluster@sha256:e2d3308b2d27499d59f120ff46dfc6c4cb307a3f207f02894ecab902583761c9 gcr.io/kubernetes-e2e-test-images/volume/gluster:1.0],SizeBytes:111775822,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/volume/nfs@sha256:c2ad734346f608a5f7d69cfded93c4e8094069320657bd372d12ba21dea3ea71 gcr.io/kubernetes-e2e-test-images/volume/nfs:1.0],SizeBytes:82348896,},ContainerImage{Names:[docker.io/library/httpd@sha256:6feb0ea7b0967367da66e8d58ba813fde32bdb92f63bfc21a9e170d211539db4 docker.io/library/httpd:2.4.38-alpine],SizeBytes:40762646,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/node-perf/npb-is@sha256:9d08dd99565b25af37c990cd4474a4284b27e7ceb3f98328bb481edefedf8aa5 gcr.io/kubernetes-e2e-test-images/node-perf/npb-is:1.0],SizeBytes:39644608,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/node-perf/npb-ep@sha256:564314549347619cfcdbe6c7d042a29e133a00e922b37682890fff17ac1a7804 gcr.io/kubernetes-e2e-test-images/node-perf/npb-ep:1.0],SizeBytes:39643641,},ContainerImage{Names:[k8s.gcr.io/node-problem-detector@sha256:6e9b4a4eaa47f120be61f60573a545844de63401661812e2cfb7ae81a28efd19 k8s.gcr.io/node-problem-detector:v0.6.2],SizeBytes:33121906,},ContainerImage{Names:[docker.io/google/cadvisor@sha256:815386ebbe9a3490f38785ab11bda34ec8dacf4634af77b8912832d4f85dca04 docker.io/google/cadvisor:latest],SizeBytes:30530401,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/agnhost@sha256:4057a5580c7b59c4fe10d8ab2732c9dec35eea80fd41f7bafc7bd5acc7edf727 gcr.io/kubernetes-e2e-test-images/agnhost:2.6],SizeBytes:18352698,},ContainerImage{Names:[gcr.io/google-containers/debian-base@sha256:6966a0aedd7592c18ff2dd803c08bd85780ee19f5e3a2e7cf908a4cd837afcde gcr.io/google-containers/debian-base:0.4.1],SizeBytes:17748863,},ContainerImage{Names:[docker.io/library/nginx@sha256:a3a0c4126587884f8d3090efca87f5af075d7e7ac8308cffc09a5a082d5f4760 docker.io/library/nginx:1.14-alpine],SizeBytes:6976771,},ContainerImage{Names:[k8s.gcr.io/nvidia-gpu-device-plugin@sha256:4b036e8844920336fa48f36edeb7d4398f426d6a934ba022848deed2edbf09aa],SizeBytes:6819465,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/ipc-utils@sha256:bb127be3a1ecac0516f672a5e223d94fe6021021534ecb7a02a607a63154c3d8 
gcr.io/kubernetes-e2e-test-images/ipc-utils:1.0],SizeBytes:4004104,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nonewprivs@sha256:10066e9039219449fe3c81f38fe01928f87914150768ab81b62a468e51fa7411 gcr.io/kubernetes-e2e-test-images/nonewprivs:1.0],SizeBytes:3054649,},ContainerImage{Names:[gcr.io/authenticated-image-pulling/alpine@sha256:7ff177862cb50c602bfe81f805969412e619c054a2bbead977d0c276988aa4a0 gcr.io/authenticated-image-pulling/alpine:3.7],SizeBytes:2110879,},ContainerImage{Names:[docker.io/library/alpine@sha256:92251458088c638061cda8fd8b403b76d661a4dc6b7ee71b6affcf1872557b2b docker.io/library/alpine:3.7],SizeBytes:2109138,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/test-webserver@sha256:7f93d6e32798ff28bc6289254d0c2867fe2c849c8e46edc50f8624734309812e gcr.io/kubernetes-e2e-test-images/test-webserver:1.0],SizeBytes:1791163,},ContainerImage{Names:[k8s.gcr.io/stress:v1],SizeBytes:1558004,},ContainerImage{Names:[docker.io/library/busybox@sha256:e004c2cc521c95383aebb1fb5893719aa7a8eae2e7a71f316a4410784edb00a9 docker.io/library/busybox:1.29],SizeBytes:729986,},ContainerImage{Names:[k8s.gcr.io/busybox@sha256:4bdd623e848417d96127e16037743f0cd8b528c026e9175e22a84f639eca58ff],SizeBytes:676941,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/mounttest@sha256:c0bd6f0755f42af09a68c9a47fb993136588a76b3200ec305796b60d629d85d2 gcr.io/kubernetes-e2e-test-images/mounttest:1.0],SizeBytes:599341,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/mounttest-user@sha256:17319ca525ee003681fccf7e8c6b1b910ff4f49b653d939ac7f9b6e7c463933d gcr.io/kubernetes-e2e-test-images/mounttest-user:1.0],SizeBytes:539309,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:f78411e19d84a252e53bff71a4407a5686c46983a2c2eeed83929b888179acea k8s.gcr.io/pause:3.1],SizeBytes:317164,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:&NodeConfigStatus{Assigned:nil,Active:nil,LastKnownGood:nil,Error:,},},}
Sep 16 10:31:12.533: INFO: Logging kubelet events for node tmp-node-e2e-d8aaa33e-cos-73-11647-293-0
Sep 16 10:31:12.534: INFO: Logging pods the kubelet thinks is on node tmp-node-e2e-d8aaa33e-cos-73-11647-293-0
W0916 10:31:12.543219 995 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Sep 16 10:31:12.563: INFO: Latency metrics for node tmp-node-e2e-d8aaa33e-cos-73-11647-293-0
Sep 16 10:31:12.563: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-8164" for this suite.
Sep 16 10:31:18.569: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 16 10:31:18.608: INFO: namespace pod-network-test-8164 deletion completed in 6.043794183s
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=E2eNode\sSuite\s\[sig\-network\]\sNetworking\sGranular\sChecks\:\sPods\sshould\sfunction\sfor\sintra\-pod\scommunication\:\shttp\s\[LinuxOnly\]\s\[NodeConformance\]\s\[Conformance\]$'
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 Unexpected error: <*errors.errorString | 0xc00021ad80>: { s: "timed out waiting for the condition", } timed out waiting for the condition occurred /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:635 from junit_ubuntu_06.xml
[BeforeEach] [sig-network] Networking
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
STEP: Building a namespace api object, basename pod-network-test
Sep 16 10:10:37.908: INFO: Skipping waiting for service account
[It] should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: Performing setup for networking test in namespace pod-network-test-3730
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Sep 16 10:10:37.908: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Sep 16 10:10:37.910: INFO: Unschedulable nodes:
Sep 16 10:10:37.910: INFO: -> tmp-node-e2e-d8aaa33e-ubuntu-gke-1804-d1809-0-v20190913 Ready=true Network=false Taints=[{node.kubernetes.io/not-ready NoSchedule <nil>}] NonblockingTaints:node-role.kubernetes.io/master
Sep 16 10:10:37.910: INFO: ================================
[... the same three-line "Unschedulable nodes" report repeats every 30s for the rest of the 10m0s wait ...]
Sep 16 10:20:37.914: INFO: Unschedulable nodes:
Sep 16 10:20:37.914: INFO: -> tmp-node-e2e-d8aaa33e-ubuntu-gke-1804-d1809-0-v20190913 Ready=true Network=false Taints=[{node.kubernetes.io/not-ready NoSchedule <nil>}] NonblockingTaints:node-role.kubernetes.io/master
Sep 16 10:20:37.914: INFO: ================================
[AfterEach] [sig-network] Networking
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
STEP: Collecting events from namespace "pod-network-test-3730".
STEP: Found 0 events.
Sep 16 10:20:37.916: INFO: POD NODE PHASE GRACE CONDITIONS
Sep 16 10:20:37.916: INFO:
Sep 16 10:20:37.919: INFO: Logging node info for node tmp-node-e2e-d8aaa33e-ubuntu-gke-1804-d1809-0-v20190913
Sep 16 10:20:37.920: INFO: Node Info: &Node{ObjectMeta:{tmp-node-e2e-d8aaa33e-ubuntu-gke-1804-d1809-0-v20190913 /api/v1/nodes/tmp-node-e2e-d8aaa33e-ubuntu-gke-1804-d1809-0-v20190913 e06930a8-592f-4099-a9d5-edfc75a6bb47 3517 0 2019-09-16 10:00:14 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:tmp-node-e2e-d8aaa33e-ubuntu-gke-1804-d1809-0-v20190913 kubernetes.io/os:linux] map[volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] []},Spec:NodeSpec{PodCIDR:,DoNotUse_ExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node.kubernetes.io/not-ready,Value:,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[],},Status:NodeStatus{Capacity:ResourceList{cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{20629221376 0} {<nil>} 20145724Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3872014336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{18566299208 0} {<nil>} 18566299208 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3609870336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2019-09-16 10:20:15 +0000 UTC,LastTransitionTime:2019-09-16 10:00:10 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2019-09-16 10:20:15 +0000 UTC,LastTransitionTime:2019-09-16 10:00:10 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2019-09-16 10:20:15 +0000 UTC,LastTransitionTime:2019-09-16 10:00:10 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2019-09-16 10:20:15 +0000 UTC,LastTransitionTime:2019-09-16 10:00:10 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status.
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.84,},NodeAddress{Type:Hostname,Address:tmp-node-e2e-d8aaa33e-ubuntu-gke-1804-d1809-0-v20190913,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:f51035bcd9db8910e22af6d51a902fac,SystemUUID:F51035BC-D9DB-8910-E22A-F6D51A902FAC,BootID:9e56941a-9d7d-4215-b61f-bae77a3d8412,KernelVersion:4.15.0-1042-gke,OSImage:Ubuntu 18.04.3 LTS,ContainerRuntimeVersion:containerd://1.2.7,KubeletVersion:v1.17.0-alpha.0.1445+4640b4f81ec6bc,KubeProxyVersion:v1.17.0-alpha.0.1445+4640b4f81ec6bc,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[docker.io/library/perl@sha256:978a36abce7dcf726bcdbb3f5b0d69ad3beb0cf688e9348a488f6f6023a027db docker.io/library/perl:5.26],SizeBytes:325130745,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/node-perf/tf-wide-deep-amd64@sha256:80d4564d5ab49ecfea3b20f75cc676d8dfd8b2aca364ed4c1a8a55fbcaaed7f6 gcr.io/kubernetes-e2e-test-images/node-perf/tf-wide-deep-amd64:1.0],SizeBytes:242137147,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/volume/gluster@sha256:e2d3308b2d27499d59f120ff46dfc6c4cb307a3f207f02894ecab902583761c9 gcr.io/kubernetes-e2e-test-images/volume/gluster:1.0],SizeBytes:111775822,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/volume/nfs@sha256:c2ad734346f608a5f7d69cfded93c4e8094069320657bd372d12ba21dea3ea71 gcr.io/kubernetes-e2e-test-images/volume/nfs:1.0],SizeBytes:82348896,},ContainerImage{Names:[docker.io/library/httpd@sha256:6feb0ea7b0967367da66e8d58ba813fde32bdb92f63bfc21a9e170d211539db4 docker.io/library/httpd:2.4.38-alpine],SizeBytes:40762646,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/node-perf/npb-is@sha256:9d08dd99565b25af37c990cd4474a4284b27e7ceb3f98328bb481edefedf8aa5 gcr.io/kubernetes-e2e-test-images/node-perf/npb-is:1.0],SizeBytes:39644608,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/node-perf/npb-ep@sha256:564314549347619cfcdbe6c7d042a29e133a00e922b37682890fff17ac1a7804 gcr.io/kubernetes-e2e-test-images/node-perf/npb-ep:1.0],SizeBytes:39643641,},ContainerImage{Names:[k8s.gcr.io/node-problem-detector@sha256:6e9b4a4eaa47f120be61f60573a545844de63401661812e2cfb7ae81a28efd19 k8s.gcr.io/node-problem-detector:v0.6.2],SizeBytes:33121906,},ContainerImage{Names:[docker.io/google/cadvisor@sha256:815386ebbe9a3490f38785ab11bda34ec8dacf4634af77b8912832d4f85dca04 docker.io/google/cadvisor:latest],SizeBytes:30530401,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/agnhost@sha256:4057a5580c7b59c4fe10d8ab2732c9dec35eea80fd41f7bafc7bd5acc7edf727 gcr.io/kubernetes-e2e-test-images/agnhost:2.6],SizeBytes:18352698,},ContainerImage{Names:[gcr.io/google-containers/debian-base@sha256:6966a0aedd7592c18ff2dd803c08bd85780ee19f5e3a2e7cf908a4cd837afcde gcr.io/google-containers/debian-base:0.4.1],SizeBytes:17748863,},ContainerImage{Names:[docker.io/library/nginx@sha256:a3a0c4126587884f8d3090efca87f5af075d7e7ac8308cffc09a5a082d5f4760 docker.io/library/nginx:1.14-alpine],SizeBytes:6976771,},ContainerImage{Names:[k8s.gcr.io/nvidia-gpu-device-plugin@sha256:4b036e8844920336fa48f36edeb7d4398f426d6a934ba022848deed2edbf09aa],SizeBytes:6819465,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/ipc-utils@sha256:bb127be3a1ecac0516f672a5e223d94fe6021021534ecb7a02a607a63154c3d8 
gcr.io/kubernetes-e2e-test-images/ipc-utils:1.0],SizeBytes:4004104,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nonewprivs@sha256:10066e9039219449fe3c81f38fe01928f87914150768ab81b62a468e51fa7411 gcr.io/kubernetes-e2e-test-images/nonewprivs:1.0],SizeBytes:3054649,},ContainerImage{Names:[gcr.io/authenticated-image-pulling/alpine@sha256:7ff177862cb50c602bfe81f805969412e619c054a2bbead977d0c276988aa4a0 gcr.io/authenticated-image-pulling/alpine:3.7],SizeBytes:2110879,},ContainerImage{Names:[docker.io/library/alpine@sha256:92251458088c638061cda8fd8b403b76d661a4dc6b7ee71b6affcf1872557b2b docker.io/library/alpine:3.7],SizeBytes:2109138,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/test-webserver@sha256:7f93d6e32798ff28bc6289254d0c2867fe2c849c8e46edc50f8624734309812e gcr.io/kubernetes-e2e-test-images/test-webserver:1.0],SizeBytes:1791163,},ContainerImage{Names:[k8s.gcr.io/stress:v1],SizeBytes:1558004,},ContainerImage{Names:[docker.io/library/busybox@sha256:e004c2cc521c95383aebb1fb5893719aa7a8eae2e7a71f316a4410784edb00a9 docker.io/library/busybox:1.29],SizeBytes:729986,},ContainerImage{Names:[k8s.gcr.io/busybox@sha256:4bdd623e848417d96127e16037743f0cd8b528c026e9175e22a84f639eca58ff],SizeBytes:676941,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/mounttest@sha256:c0bd6f0755f42af09a68c9a47fb993136588a76b3200ec305796b60d629d85d2 gcr.io/kubernetes-e2e-test-images/mounttest:1.0],SizeBytes:599341,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/mounttest-user@sha256:17319ca525ee003681fccf7e8c6b1b910ff4f49b653d939ac7f9b6e7c463933d gcr.io/kubernetes-e2e-test-images/mounttest-user:1.0],SizeBytes:539309,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:f78411e19d84a252e53bff71a4407a5686c46983a2c2eeed83929b888179acea k8s.gcr.io/pause:3.1],SizeBytes:317164,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:&NodeConfigStatus{Assigned:nil,Active:nil,LastKnownGood:nil,Error:,},},} Sep 16 10:20:37.921: INFO: Logging kubelet events for node tmp-node-e2e-d8aaa33e-ubuntu-gke-1804-d1809-0-v20190913 Sep 16 10:20:37.922: INFO: Logging pods the kubelet thinks is on node tmp-node-e2e-d8aaa33e-ubuntu-gke-1804-d1809-0-v20190913 W0916 10:20:37.931398 2882 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Sep 16 10:20:37.951: INFO: Latency metrics for node tmp-node-e2e-d8aaa33e-ubuntu-gke-1804-d1809-0-v20190913 Sep 16 10:20:37.951: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-3730" for this suite. Sep 16 10:20:43.960: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Sep 16 10:20:43.997: INFO: namespace pod-network-test-3730 deletion completed in 6.044464067s
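Every one of these timeouts traces back to the same condition: the node keeps its node.kubernetes.io/not-ready NoSchedule taint even though the kubelet reports Ready=true, and that key is not in the non-blocking allowlist printed in the log (only node-role.kubernetes.io/master is). The sketch below is an illustrative stand-in for the framework's schedulability check, not its actual helper; names are invented for clarity.

package main

import (
	"fmt"
	"strings"
)

// nodeTaint mirrors the fields printed in the log: {key effect timeAdded}.
type nodeTaint struct {
	key    string
	effect string
}

// isSchedulable treats a node as schedulable only if every taint key
// appears in the comma-separated non-blocking allowlist.
func isSchedulable(taints []nodeTaint, nonblocking string) bool {
	allowed := map[string]bool{}
	for _, key := range strings.Split(nonblocking, ",") {
		allowed[key] = true
	}
	for _, t := range taints {
		if !allowed[t.key] {
			return false
		}
	}
	return true
}

func main() {
	taints := []nodeTaint{{key: "node.kubernetes.io/not-ready", effect: "NoSchedule"}}
	// Prints false: node-role.kubernetes.io/master is the only non-blocking
	// taint, so the lingering not-ready taint keeps the node "unschedulable"
	// for the whole 10-minute wait even though Ready=true.
	fmt.Println(isSchedulable(taints, "node-role.kubernetes.io/master"))
}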
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=E2eNode\sSuite\s\[sig\-network\]\sNetworking\sGranular\sChecks\:\sPods\sshould\sfunction\sfor\sintra\-pod\scommunication\:\shttp\s\[LinuxOnly\]\s\[NodeConformance\]\s\[Conformance\]$'
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 Unexpected error: <*errors.errorString | 0xc00021ad80>: { s: "timed out waiting for the condition", } timed out waiting for the condition occurred /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:635 from junit_ubuntu_06.xml
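The bare "timed out waiting for the condition" string is the stock error from the apimachinery wait package, which the framework's node-readiness helpers poll through; the 30-second cadence is visible in the log timestamps below. A minimal reproduction of how that exact error surfaces, with a stand-in condition body:

package main

import (
	"fmt"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

func main() {
	// Poll every 30s for up to 10m, mirroring "Waiting up to 10m0s for all
	// (but 0) nodes to be schedulable" and the 30s log cadence.
	err := wait.Poll(30*time.Second, 10*time.Minute, func() (bool, error) {
		// Stand-in condition: the tainted node never becomes schedulable.
		return false, nil
	})
	// After 10 minutes this prints "timed out waiting for the condition",
	// the exact errorString quoted in the failure summary above.
	fmt.Println(err)
}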
[BeforeEach] [sig-network] Networking /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client STEP: Building a namespace api object, basename pod-network-test Sep 16 10:20:44.002: INFO: Skipping waiting for service account [It] should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 STEP: Performing setup for networking test in namespace pod-network-test-4541 STEP: creating a selector STEP: Creating the service pods in kubernetes Sep 16 10:20:44.002: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable Sep 16 10:20:44.003: INFO: Unschedulable nodes: Sep 16 10:20:44.003: INFO: -> tmp-node-e2e-d8aaa33e-ubuntu-gke-1804-d1809-0-v20190913 Ready=true Network=false Taints=[{node.kubernetes.io/not-ready NoSchedule <nil>}] NonblockingTaints:node-role.kubernetes.io/master Sep 16 10:20:44.003: INFO: ================================
[The identical report repeats every 30 seconds from 10:21:14 through the final check at 10:30:44.006, when the 10-minute wait gives up.]
[AfterEach] [sig-network] Networking /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 STEP: Collecting events from namespace "pod-network-test-4541". STEP: Found 0 events. Sep 16 10:30:44.009: INFO: POD NODE PHASE GRACE CONDITIONS Sep 16 10:30:44.009: INFO: Sep 16 10:30:44.011: INFO: Logging node info for node tmp-node-e2e-d8aaa33e-ubuntu-gke-1804-d1809-0-v20190913 Sep 16 10:30:44.012: INFO: Node Info: [full &Node{} dump omitted; identical to the one logged at 10:20:37 above apart from resourceVersion 3660 and condition heartbeat times of 10:30:15, with the node.kubernetes.io/not-ready NoSchedule taint still present] Sep 16 10:30:44.013: INFO: Logging kubelet events for node tmp-node-e2e-d8aaa33e-ubuntu-gke-1804-d1809-0-v20190913 Sep 16 10:30:44.014: INFO: Logging pods the kubelet thinks is on node tmp-node-e2e-d8aaa33e-ubuntu-gke-1804-d1809-0-v20190913 W0916 10:30:44.025004 2882 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Sep 16 10:30:44.048: INFO: Latency metrics for node tmp-node-e2e-d8aaa33e-ubuntu-gke-1804-d1809-0-v20190913 Sep 16 10:30:44.048: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-4541" for this suite. Sep 16 10:30:50.063: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Sep 16 10:30:50.100: INFO: namespace pod-network-test-4541 deletion completed in 6.047644715s
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=E2eNode\sSuite\s\[sig\-network\]\sNetworking\sGranular\sChecks\:\sPods\sshould\sfunction\sfor\sintra\-pod\scommunication\:\sudp\s\[LinuxOnly\]\s\[NodeConformance\]\s\[Conformance\]$'
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 Unexpected error: <*errors.errorString | 0xc00021ad80>: { s: "timed out waiting for the condition", } timed out waiting for the condition occurred /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:635 from junit_ubuntu_05.xml
[BeforeEach] [sig-network] Networking /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client STEP: Building a namespace api object, basename pod-network-test Sep 16 10:11:06.617: INFO: Skipping waiting for service account [It] should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 STEP: Performing setup for networking test in namespace pod-network-test-4190 STEP: creating a selector STEP: Creating the service pods in kubernetes Sep 16 10:11:06.617: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable Sep 16 10:11:06.618: INFO: Unschedulable nodes: Sep 16 10:11:06.618: INFO: -> tmp-node-e2e-d8aaa33e-ubuntu-gke-1804-d1809-0-v20190913 Ready=true Network=false Taints=[{node.kubernetes.io/not-ready NoSchedule <nil>}] NonblockingTaints:node-role.kubernetes.io/master Sep 16 10:11:06.618: INFO: ================================
[The identical report repeats every 30 seconds from 10:11:36 through the final check at 10:21:06.621, when the 10-minute wait gives up.]
[AfterEach] [sig-network] Networking /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 STEP: Collecting events from namespace "pod-network-test-4190". STEP: Found 0 events. Sep 16 10:21:06.623: INFO: POD NODE PHASE GRACE CONDITIONS Sep 16 10:21:06.623: INFO: Sep 16 10:21:06.626: INFO: Logging node info for node tmp-node-e2e-d8aaa33e-ubuntu-gke-1804-d1809-0-v20190913 Sep 16 10:21:06.627: INFO: Node Info: [full &Node{} dump omitted; identical to the one logged at 10:20:37 above, with the node.kubernetes.io/not-ready NoSchedule taint still present] Sep 16 10:21:06.627: INFO: Logging kubelet events for node tmp-node-e2e-d8aaa33e-ubuntu-gke-1804-d1809-0-v20190913 Sep 16 10:21:06.628: INFO: Logging pods the kubelet thinks is on node tmp-node-e2e-d8aaa33e-ubuntu-gke-1804-d1809-0-v20190913 W0916 10:21:06.632147 2879 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Sep 16 10:21:06.651: INFO: Latency metrics for node tmp-node-e2e-d8aaa33e-ubuntu-gke-1804-d1809-0-v20190913 Sep 16 10:21:06.651: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-4190" for this suite. Sep 16 10:21:12.657: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Sep 16 10:21:12.696: INFO: namespace pod-network-test-4190 deletion completed in 6.043476004s
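To confirm which taints are pinned on a node while a run like this spins, a small client-go listing can be pointed at the test cluster. A minimal sketch; the kubeconfig path is illustrative, not where node-e2e hosts actually keep theirs:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	nodes, err := client.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		// Spec.Taints is the field serialized as Taints:[]Taint{...} in the
		// Node dumps above; any NoSchedule key outside the non-blocking
		// allowlist keeps the schedulability wait looping.
		for _, t := range n.Spec.Taints {
			fmt.Printf("%s\t%s:%s\n", n.Name, t.Key, t.Effect)
		}
	}
}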
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=E2eNode\sSuite\s\[sig\-network\]\sNetworking\sGranular\sChecks\:\sPods\sshould\sfunction\sfor\sintra\-pod\scommunication\:\sudp\s\[LinuxOnly\]\s\[NodeConformance\]\s\[Conformance\]$'
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 Unexpected error: <*errors.errorString | 0xc000218d80>: { s: "timed out waiting for the condition", } timed out waiting for the condition occurred /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:635 from junit_cos-stable_03.xml
[BeforeEach] [sig-network] Networking /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client STEP: Building a namespace api object, basename pod-network-test Sep 16 10:12:47.578: INFO: Skipping waiting for service account [It] should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 STEP: Performing setup for networking test in namespace pod-network-test-3067 STEP: creating a selector STEP: Creating the service pods in kubernetes Sep 16 10:12:47.578: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable Sep 16 10:12:47.580: INFO: Unschedulable nodes: Sep 16 10:12:47.580: INFO: -> tmp-node-e2e-d8aaa33e-cos-73-11647-293-0 Ready=true Network=false Taints=[{node.kubernetes.io/not-ready NoSchedule <nil>}] NonblockingTaints:node-role.kubernetes.io/master Sep 16 10:12:47.580: INFO: ================================
[The identical report repeats every 30 seconds from 10:13:17 through the final check at 10:22:47.585, when the 10-minute wait gives up.]
[AfterEach] [sig-network] Networking /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 STEP: Collecting events from namespace "pod-network-test-3067". STEP: Found 0 events. Sep 16 10:22:47.595: INFO: POD NODE PHASE GRACE CONDITIONS Sep 16 10:22:47.596: INFO: Sep 16 10:22:47.601: INFO: Logging node info for node tmp-node-e2e-d8aaa33e-cos-73-11647-293-0 Sep 16 10:22:47.603: INFO: Node Info: &Node{ObjectMeta:{tmp-node-e2e-d8aaa33e-cos-73-11647-293-0 /api/v1/nodes/tmp-node-e2e-d8aaa33e-cos-73-11647-293-0 de1de0c3-ce1d-44d6-bb56-2e7839e24f20 3545 0 2019-09-16 10:00:07 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:tmp-node-e2e-d8aaa33e-cos-73-11647-293-0 kubernetes.io/os:linux] map[volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] []},Spec:NodeSpec{PodCIDR:,DoNotUse_ExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node.kubernetes.io/not-ready,Value:,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[],},Status:NodeStatus{Capacity:ResourceList{cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{16684785664 0} {<nil>} BinarySI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3878486016 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{15016307073 0} {<nil>} 15016307073 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3616342016 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2019-09-16 10:22:09 +0000 UTC,LastTransitionTime:2019-09-16 10:00:04 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2019-09-16 10:22:09 +0000 UTC,LastTransitionTime:2019-09-16 10:00:04 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2019-09-16 10:22:09 +0000 UTC,LastTransitionTime:2019-09-16 10:00:04 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2019-09-16 10:22:09 +0000 UTC,LastTransitionTime:2019-09-16 10:00:04 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status.
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.83,},NodeAddress{Type:Hostname,Address:tmp-node-e2e-d8aaa33e-cos-73-11647-293-0,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:a064d78421e9c8eac5e80fe5da19fb15,SystemUUID:A064D784-21E9-C8EA-C5E8-0FE5DA19FB15,BootID:821ee2ea-1060-424d-9835-b7cdc9159dc8,KernelVersion:4.14.138+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.2.8,KubeletVersion:v1.17.0-alpha.0.1445+4640b4f81ec6bc,KubeProxyVersion:v1.17.0-alpha.0.1445+4640b4f81ec6bc,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[docker.io/library/perl@sha256:978a36abce7dcf726bcdbb3f5b0d69ad3beb0cf688e9348a488f6f6023a027db docker.io/library/perl:5.26],SizeBytes:325130745,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/node-perf/tf-wide-deep-amd64@sha256:80d4564d5ab49ecfea3b20f75cc676d8dfd8b2aca364ed4c1a8a55fbcaaed7f6 gcr.io/kubernetes-e2e-test-images/node-perf/tf-wide-deep-amd64:1.0],SizeBytes:242137147,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/volume/gluster@sha256:e2d3308b2d27499d59f120ff46dfc6c4cb307a3f207f02894ecab902583761c9 gcr.io/kubernetes-e2e-test-images/volume/gluster:1.0],SizeBytes:111775822,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/volume/nfs@sha256:c2ad734346f608a5f7d69cfded93c4e8094069320657bd372d12ba21dea3ea71 gcr.io/kubernetes-e2e-test-images/volume/nfs:1.0],SizeBytes:82348896,},ContainerImage{Names:[docker.io/library/httpd@sha256:6feb0ea7b0967367da66e8d58ba813fde32bdb92f63bfc21a9e170d211539db4 docker.io/library/httpd:2.4.38-alpine],SizeBytes:40762646,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/node-perf/npb-is@sha256:9d08dd99565b25af37c990cd4474a4284b27e7ceb3f98328bb481edefedf8aa5 gcr.io/kubernetes-e2e-test-images/node-perf/npb-is:1.0],SizeBytes:39644608,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/node-perf/npb-ep@sha256:564314549347619cfcdbe6c7d042a29e133a00e922b37682890fff17ac1a7804 gcr.io/kubernetes-e2e-test-images/node-perf/npb-ep:1.0],SizeBytes:39643641,},ContainerImage{Names:[k8s.gcr.io/node-problem-detector@sha256:6e9b4a4eaa47f120be61f60573a545844de63401661812e2cfb7ae81a28efd19 k8s.gcr.io/node-problem-detector:v0.6.2],SizeBytes:33121906,},ContainerImage{Names:[docker.io/google/cadvisor@sha256:815386ebbe9a3490f38785ab11bda34ec8dacf4634af77b8912832d4f85dca04 docker.io/google/cadvisor:latest],SizeBytes:30530401,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/agnhost@sha256:4057a5580c7b59c4fe10d8ab2732c9dec35eea80fd41f7bafc7bd5acc7edf727 gcr.io/kubernetes-e2e-test-images/agnhost:2.6],SizeBytes:18352698,},ContainerImage{Names:[gcr.io/google-containers/debian-base@sha256:6966a0aedd7592c18ff2dd803c08bd85780ee19f5e3a2e7cf908a4cd837afcde gcr.io/google-containers/debian-base:0.4.1],SizeBytes:17748863,},ContainerImage{Names:[docker.io/library/nginx@sha256:a3a0c4126587884f8d3090efca87f5af075d7e7ac8308cffc09a5a082d5f4760 docker.io/library/nginx:1.14-alpine],SizeBytes:6976771,},ContainerImage{Names:[k8s.gcr.io/nvidia-gpu-device-plugin@sha256:4b036e8844920336fa48f36edeb7d4398f426d6a934ba022848deed2edbf09aa],SizeBytes:6819465,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/ipc-utils@sha256:bb127be3a1ecac0516f672a5e223d94fe6021021534ecb7a02a607a63154c3d8 
gcr.io/kubernetes-e2e-test-images/ipc-utils:1.0],SizeBytes:4004104,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nonewprivs@sha256:10066e9039219449fe3c81f38fe01928f87914150768ab81b62a468e51fa7411 gcr.io/kubernetes-e2e-test-images/nonewprivs:1.0],SizeBytes:3054649,},ContainerImage{Names:[gcr.io/authenticated-image-pulling/alpine@sha256:7ff177862cb50c602bfe81f805969412e619c054a2bbead977d0c276988aa4a0 gcr.io/authenticated-image-pulling/alpine:3.7],SizeBytes:2110879,},ContainerImage{Names:[docker.io/library/alpine@sha256:92251458088c638061cda8fd8b403b76d661a4dc6b7ee71b6affcf1872557b2b docker.io/library/alpine:3.7],SizeBytes:2109138,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/test-webserver@sha256:7f93d6e32798ff28bc6289254d0c2867fe2c849c8e46edc50f8624734309812e gcr.io/kubernetes-e2e-test-images/test-webserver:1.0],SizeBytes:1791163,},ContainerImage{Names:[k8s.gcr.io/stress:v1],SizeBytes:1558004,},ContainerImage{Names:[docker.io/library/busybox@sha256:e004c2cc521c95383aebb1fb5893719aa7a8eae2e7a71f316a4410784edb00a9 docker.io/library/busybox:1.29],SizeBytes:729986,},ContainerImage{Names:[k8s.gcr.io/busybox@sha256:4bdd623e848417d96127e16037743f0cd8b528c026e9175e22a84f639eca58ff],SizeBytes:676941,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/mounttest@sha256:c0bd6f0755f42af09a68c9a47fb993136588a76b3200ec305796b60d629d85d2 gcr.io/kubernetes-e2e-test-images/mounttest:1.0],SizeBytes:599341,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/mounttest-user@sha256:17319ca525ee003681fccf7e8c6b1b910ff4f49b653d939ac7f9b6e7c463933d gcr.io/kubernetes-e2e-test-images/mounttest-user:1.0],SizeBytes:539309,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:f78411e19d84a252e53bff71a4407a5686c46983a2c2eeed83929b888179acea k8s.gcr.io/pause:3.1],SizeBytes:317164,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:&NodeConfigStatus{Assigned:nil,Active:nil,LastKnownGood:nil,Error:,},},} Sep 16 10:22:47.603: INFO: Logging kubelet events for node tmp-node-e2e-d8aaa33e-cos-73-11647-293-0 Sep 16 10:22:47.604: INFO: Logging pods the kubelet thinks is on node tmp-node-e2e-d8aaa33e-cos-73-11647-293-0 W0916 10:22:47.608155 993 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Sep 16 10:22:47.627: INFO: Latency metrics for node tmp-node-e2e-d8aaa33e-cos-73-11647-293-0 Sep 16 10:22:47.627: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-3067" for this suite. Sep 16 10:22:53.633: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Sep 16 10:22:53.676: INFO: namespace pod-network-test-3067 deletion completed in 6.047550099s
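For completeness, a stuck node like this can be unwedged by hand by deleting the taint, though that only unblocks a test host and does nothing about the regression under test. A sketch using a JSON patch; the index 0 is valid only because the dumps above show a single taint, and the kubeconfig path is again illustrative:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Remove taint index 0 (node.kubernetes.io/not-ready, the node's only
	// taint per the dumps above).
	patch := []byte(`[{"op": "remove", "path": "/spec/taints/0"}]`)
	node, err := client.CoreV1().Nodes().Patch(context.TODO(),
		"tmp-node-e2e-d8aaa33e-cos-73-11647-293-0",
		types.JSONPatchType, patch, metav1.PatchOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("remaining taints on %s: %v\n", node.Name, node.Spec.Taints)
}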
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=E2eNode\sSuite\s\[sig\-network\]\sNetworking\sGranular\sChecks\:\sPods\sshould\sfunction\sfor\sintra\-pod\scommunication\:\sudp\s\[LinuxOnly\]\s\[NodeConformance\]\s\[Conformance\]$'
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
Unexpected error: <*errors.errorString | 0xc000218d80>: { s: "timed out waiting for the condition", } timed out waiting for the condition occurred
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:635 (from junit_cos-stable_03.xml)
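"timed out waiting for the condition" is the generic timeout from k8s.io/apimachinery's wait package, so by itself it only says the polled condition (node schedulability, per the log below) never became true before the deadline. A minimal sketch of the polling pattern, assuming the 10m0s deadline and 30s interval visible in the log; the condition body is a placeholder, not the framework's actual check:

    package main

    import (
        "fmt"
        "time"

        "k8s.io/apimachinery/pkg/util/wait"
    )

    func main() {
        // Poll every 30s for up to 10m. On expiry, wait.PollImmediate returns
        // wait.ErrWaitTimeout, whose Error() text is exactly
        // "timed out waiting for the condition".
        err := wait.PollImmediate(30*time.Second, 10*time.Minute, func() (bool, error) {
            schedulable := false // placeholder: the real check inspects node taints
            return schedulable, nil
        })
        fmt.Println(err) // after 10m: timed out waiting for the condition
    }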
[BeforeEach] [sig-network] Networking /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
STEP: Building a namespace api object, basename pod-network-test
Sep 16 10:02:41.467: INFO: Skipping waiting for service account
[It] should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: Performing setup for networking test in namespace pod-network-test-6581
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Sep 16 10:02:41.467: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Sep 16 10:02:41.471: INFO: Unschedulable nodes:
Sep 16 10:02:41.471: INFO: -> tmp-node-e2e-d8aaa33e-cos-73-11647-293-0 Ready=true Network=false Taints=[{node.kubernetes.io/not-ready NoSchedule <nil>}] NonblockingTaints:node-role.kubernetes.io/master
Sep 16 10:02:41.471: INFO: ================================
(the identical "Unschedulable nodes" block repeats every 30s until the deadline, last at Sep 16 10:12:41.474; the single cos node stays Ready=true, Network=false, with the node.kubernetes.io/not-ready:NoSchedule taint throughout)
[AfterEach] [sig-network] Networking /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
STEP: Collecting events from namespace "pod-network-test-6581".
STEP: Found 0 events.
Sep 16 10:12:41.477: INFO: POD NODE PHASE GRACE CONDITIONS
Sep 16 10:12:41.477: INFO:
Sep 16 10:12:41.479: INFO: Logging node info for node tmp-node-e2e-d8aaa33e-cos-73-11647-293-0
Sep 16 10:12:41.481: INFO: Node Info: (same cos node object as the dump above, now at resourceVersion 3362 with condition heartbeats at 10:12:08; Spec still carries Taints:[]Taint{Taint{Key:node.kubernetes.io/not-ready,Value:,Effect:NoSchedule,TimeAdded:<nil>,},} and all pressure conditions are False with Ready=True)
Sep 16 10:12:41.482: INFO: Logging kubelet events for node tmp-node-e2e-d8aaa33e-cos-73-11647-293-0
Sep 16 10:12:41.483: INFO: Logging pods the kubelet thinks is on node tmp-node-e2e-d8aaa33e-cos-73-11647-293-0
Sep 16 10:12:41.485: INFO: busybox-39a611e4-31d9-4aff-b302-2e2d76ec74b8 started at 2019-09-16 10:08:44 +0000 UTC (0+1 container statuses recorded)
Sep 16 10:12:41.485: INFO: Container busybox ready: true, restart count 0
Sep 16 10:12:41.485: INFO: image-pull-test83cf1c31-1e88-4e8a-a975-822cbeca2de9 started at 2019-09-16 10:08:06 +0000 UTC (0+1 container statuses recorded)
Sep 16 10:12:41.485: INFO: Container image-pull-test ready: true, restart count 0
W0916 10:12:41.486940 993 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Sep 16 10:12:41.525: INFO: Latency metrics for node tmp-node-e2e-d8aaa33e-cos-73-11647-293-0
Sep 16 10:12:41.525: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-6581" for this suite.
Sep 16 10:12:47.530: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 16 10:12:47.571: INFO: namespace pod-network-test-6581 deletion completed in 6.04477419s
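The "NonblockingTaints:" field in the poll output above is the key to the hang: the framework counts a node as schedulable only if every taint it carries is on an allow-list, and here the list contains only node-role.kubernetes.io/master, not node.kubernetes.io/not-ready. A hedged sketch of that check (not the framework's exact code), fed the taint from the log:

    package main

    import (
        "fmt"

        v1 "k8s.io/api/core/v1"
    )

    // taintsAreNonblocking reports whether every taint on a node is in the
    // allow-list, which is what "schedulable" means in the poll loop above.
    func taintsAreNonblocking(taints []v1.Taint, nonblocking map[string]bool) bool {
        for _, t := range taints {
            if !nonblocking[t.Key] {
                return false
            }
        }
        return true
    }

    func main() {
        taints := []v1.Taint{{Key: "node.kubernetes.io/not-ready", Effect: v1.TaintEffectNoSchedule}}
        nonblocking := map[string]bool{"node-role.kubernetes.io/master": true}
        fmt.Println(taintsAreNonblocking(taints, nonblocking)) // prints: false, so the node stays "unschedulable"
    }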
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=E2eNode\sSuite\s\[sig\-network\]\sNetworking\sGranular\sChecks\:\sPods\sshould\sfunction\sfor\sintra\-pod\scommunication\:\sudp\s\[LinuxOnly\]\s\[NodeConformance\]\s\[Conformance\]$'
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
Unexpected error: <*errors.errorString | 0xc00021ad80>: { s: "timed out waiting for the condition", } timed out waiting for the condition occurred
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:635 (from junit_ubuntu_05.xml)
[BeforeEach] [sig-network] Networking /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
STEP: Building a namespace api object, basename pod-network-test
Sep 16 10:01:00.463: INFO: Skipping waiting for service account
[It] should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: Performing setup for networking test in namespace pod-network-test-4785
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Sep 16 10:01:00.463: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Sep 16 10:01:00.464: INFO: Unschedulable nodes:
Sep 16 10:01:00.464: INFO: -> tmp-node-e2e-d8aaa33e-ubuntu-gke-1804-d1809-0-v20190913 Ready=true Network=false Taints=[{node.kubernetes.io/not-ready NoSchedule <nil>}] NonblockingTaints:node-role.kubernetes.io/master
Sep 16 10:01:00.464: INFO: ================================
(the identical "Unschedulable nodes" block repeats every 30s until the deadline, last at Sep 16 10:11:00.466; the ubuntu node, like the cos node, stays Ready=true, Network=false, with the node.kubernetes.io/not-ready:NoSchedule taint throughout)
[AfterEach] [sig-network] Networking /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
STEP: Collecting events from namespace "pod-network-test-4785".
STEP: Found 0 events.
Sep 16 10:11:00.470: INFO: POD NODE PHASE GRACE CONDITIONS
Sep 16 10:11:00.470: INFO:
Sep 16 10:11:00.472: INFO: Logging node info for node tmp-node-e2e-d8aaa33e-ubuntu-gke-1804-d1809-0-v20190913
Sep 16 10:11:00.475: INFO: Node Info: ubuntu node tmp-node-e2e-d8aaa33e-ubuntu-gke-1804-d1809-0-v20190913 (uid e06930a8-592f-4099-a9d5-edfc75a6bb47, resourceVersion 2982, created 2019-09-16 10:00:14 +0000 UTC); Spec carries the same Taints:[]Taint{Taint{Key:node.kubernetes.io/not-ready,Value:,Effect:NoSchedule,TimeAdded:<nil>,},}; Capacity: cpu 1, ephemeral-storage 20629221376, hugepages-1Gi 0, hugepages-2Mi 0, memory 3872014336, pods 110; Allocatable: cpu 1, ephemeral-storage 18566299208, memory 3609870336, pods 110; all pressure conditions False and Ready=True (heartbeats 10:10:14, transitions 10:00:10); InternalIP 10.138.0.84; kernel 4.15.0-1042-gke; OS Ubuntu 18.04.3 LTS; containerd://1.2.7; kubelet/kube-proxy v1.17.0-alpha.0.1445+4640b4f81ec6bc; image list identical to the cos node dump above
Sep 16 10:11:00.475: INFO: Logging kubelet events for node tmp-node-e2e-d8aaa33e-ubuntu-gke-1804-d1809-0-v20190913
Sep 16 10:11:00.476: INFO: Logging pods the kubelet thinks is on node tmp-node-e2e-d8aaa33e-ubuntu-gke-1804-d1809-0-v20190913
Sep 16 10:11:00.479: INFO: image-pull-test96232797-8565-421d-9122-597ce7ee0312 started at 2019-09-16 10:06:22 +0000 UTC (0+1 container statuses recorded)
Sep 16 10:11:00.479: INFO: Container image-pull-test ready: false, restart count 0
Sep 16 10:11:00.479: INFO: busybox-readonly-fs36e0cd78-b80c-4dc8-9520-ebb5cb00df70 started at 2019-09-16 10:10:50 +0000 UTC (0+1 container statuses recorded)
Sep 16 10:11:00.479: INFO: Container busybox-readonly-fs36e0cd78-b80c-4dc8-9520-ebb5cb00df70 ready: true, restart count 0
Sep 16 10:11:00.479: INFO: pod-init-f5020ffe-0f55-40bd-8dd6-f8c39a8715af started at 2019-09-16 10:10:37 +0000 UTC (2+1 container statuses recorded)
Sep 16 10:11:00.479: INFO: Init container init1 ready: false, restart count 2
Sep 16 10:11:00.479: INFO: Init container init2 ready: false, restart count 0
Sep 16 10:11:00.479: INFO: Container run1 ready: false, restart count 0
Sep 16 10:11:00.479: INFO: liveness-d861d732-5303-45d4-9735-e928603badb5 started at 2019-09-16 10:08:39 +0000 UTC (0+1 container statuses recorded)
Sep 16 10:11:00.479: INFO: Container liveness ready: true, restart count 0
W0916 10:11:00.480734 2879 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Sep 16 10:11:00.568: INFO: Latency metrics for node tmp-node-e2e-d8aaa33e-ubuntu-gke-1804-d1809-0-v20190913
Sep 16 10:11:00.568: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-4785" for this suite.
Sep 16 10:11:06.574: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 16 10:11:06.612: INFO: namespace pod-network-test-4785 deletion completed in 6.042564564s
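Note what the node dumps keep showing: Ready=True in .status.conditions while .spec.taints still carries node.kubernetes.io/not-ready, i.e. the taint was never reconciled against the condition (the behavior the PR under test changes). A hedged client-go sketch for spotting that mismatch on a live cluster (assumes a recent client-go with a context-taking List; kubeconfig wiring kept minimal):

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)
        nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        for _, n := range nodes.Items {
            // A Ready=True node that still lists node.kubernetes.io/not-ready
            // in its taints is the bug signature seen in this run.
            for _, c := range n.Status.Conditions {
                if c.Type == "Ready" {
                    fmt.Printf("%s Ready=%s Taints=%v\n", n.Name, c.Status, n.Spec.Taints)
                }
            }
        }
    }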
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=E2eNode\sSuite\s\[sig\-network\]\sNetworking\sGranular\sChecks\:\sPods\sshould\sfunction\sfor\snode\-pod\scommunication\:\shttp\s\[LinuxOnly\]\s\[NodeConformance\]\s\[Conformance\]$'
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
Unexpected error: <*errors.errorString | 0xc00021ad80>: { s: "timed out waiting for the condition", } timed out waiting for the condition occurred
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:635 (from junit_ubuntu_03.xml)
[BeforeEach] [sig-network] Networking /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
STEP: Building a namespace api object, basename pod-network-test
Sep 16 10:10:42.453: INFO: Skipping waiting for service account
[It] should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: Performing setup for networking test in namespace pod-network-test-5195
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Sep 16 10:10:42.453: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Sep 16 10:10:42.454: INFO: Unschedulable nodes:
Sep 16 10:10:42.454: INFO: -> tmp-node-e2e-d8aaa33e-ubuntu-gke-1804-d1809-0-v20190913 Ready=true Network=false Taints=[{node.kubernetes.io/not-ready NoSchedule <nil>}] NonblockingTaints:node-role.kubernetes.io/master
Sep 16 10:10:42.454: INFO: ================================
(the identical "Unschedulable nodes" block repeats every 30s until the deadline, last at Sep 16 10:20:42.459; the ubuntu node never sheds the node.kubernetes.io/not-ready:NoSchedule taint)
[AfterEach] [sig-network] Networking /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
STEP: Collecting events from namespace "pod-network-test-5195".
STEP: Found 0 events.
Sep 16 10:20:42.471: INFO: POD NODE PHASE GRACE CONDITIONS
Sep 16 10:20:42.471: INFO:
Sep 16 10:20:42.473: INFO: Logging node info for node tmp-node-e2e-d8aaa33e-ubuntu-gke-1804-d1809-0-v20190913
Sep 16 10:20:42.475: INFO: Node Info: (same ubuntu node object as summarized above, now at resourceVersion 3517 with condition heartbeats at 10:20:15; Spec still carries the node.kubernetes.io/not-ready:NoSchedule taint and Ready remains True)
Sep 16 10:20:42.475: INFO: Logging kubelet events for node tmp-node-e2e-d8aaa33e-ubuntu-gke-1804-d1809-0-v20190913
Sep 16 10:20:42.476: INFO: Logging pods the kubelet thinks is on node tmp-node-e2e-d8aaa33e-ubuntu-gke-1804-d1809-0-v20190913
W0916 10:20:42.479418 2866 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Sep 16 10:20:42.497: INFO: Latency metrics for node tmp-node-e2e-d8aaa33e-ubuntu-gke-1804-d1809-0-v20190913
Sep 16 10:20:42.497: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-5195" for this suite.
Sep 16 10:20:48.503: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 16 10:20:48.539: INFO: namespace pod-network-test-5195 deletion completed in 6.040067941s
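When triaging a run like this by hand, the offending taint can be inspected and, once the root cause is understood, cleared with standard kubectl syntax; the node name below is this run's, but these commands are illustrative and were not executed by the job:

    kubectl describe node tmp-node-e2e-d8aaa33e-ubuntu-gke-1804-d1809-0-v20190913 | grep -A1 Taints
    kubectl taint nodes tmp-node-e2e-d8aaa33e-ubuntu-gke-1804-d1809-0-v20190913 node.kubernetes.io/not-ready:NoSchedule-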
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=E2eNode\sSuite\s\[sig\-network\]\sNetworking\sGranular\sChecks\:\sPods\sshould\sfunction\sfor\snode\-pod\scommunication\:\shttp\s\[LinuxOnly\]\s\[NodeConformance\]\s\[Conformance\]$'
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
Unexpected error: <*errors.errorString | 0xc00021ad70>: { s: "timed out waiting for the condition", } timed out waiting for the condition occurred
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:635 (from junit_cos-stable_04.xml)
[BeforeEach] [sig-network] Networking /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
STEP: Building a namespace api object, basename pod-network-test
Sep 16 10:00:44.099 is not this block; Sep 16 10:10:40.266: INFO: Skipping waiting for service account
[It] should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: Performing setup for networking test in namespace pod-network-test-8764
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Sep 16 10:10:40.266: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Sep 16 10:10:40.267: INFO: Unschedulable nodes:
Sep 16 10:10:40.267: INFO: -> tmp-node-e2e-d8aaa33e-cos-73-11647-293-0 Ready=true Network=false Taints=[{node.kubernetes.io/not-ready NoSchedule <nil>}] NonblockingTaints:node-role.kubernetes.io/master
Sep 16 10:10:40.267: INFO: ================================
[... the identical "Unschedulable nodes" report repeats every 30s from 10:11:10.268 through 10:20:40.274; the node stays Ready=true but keeps the node.kubernetes.io/not-ready NoSchedule taint for the whole 10m0s wait ...]
[AfterEach] [sig-network] Networking /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
STEP: Collecting events from namespace "pod-network-test-8764".
STEP: Found 0 events.
Sep 16 10:20:40.277: INFO: POD NODE PHASE GRACE CONDITIONS
Sep 16 10:20:40.277: INFO:
Sep 16 10:20:40.280: INFO: Logging node info for node tmp-node-e2e-d8aaa33e-cos-73-11647-293-0
Sep 16 10:20:40.281: INFO: Node Info: &Node{ObjectMeta:{tmp-node-e2e-d8aaa33e-cos-73-11647-293-0 /api/v1/nodes/tmp-node-e2e-d8aaa33e-cos-73-11647-293-0 de1de0c3-ce1d-44d6-bb56-2e7839e24f20 3509 0 2019-09-16 10:00:07 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:tmp-node-e2e-d8aaa33e-cos-73-11647-293-0 kubernetes.io/os:linux] map[volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] []},Spec:NodeSpec{PodCIDR:,DoNotUse_ExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node.kubernetes.io/not-ready,Value:,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[],},Status:NodeStatus{Capacity:ResourceList{cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{16684785664 0} {<nil>} BinarySI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3878486016 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{15016307073 0} {<nil>} 15016307073 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3616342016 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2019-09-16 10:20:09 +0000 UTC,LastTransitionTime:2019-09-16 10:00:04 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2019-09-16 10:20:09 +0000 UTC,LastTransitionTime:2019-09-16 10:00:04 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2019-09-16 10:20:09 +0000 UTC,LastTransitionTime:2019-09-16 10:00:04 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2019-09-16 10:20:09 +0000 UTC,LastTransitionTime:2019-09-16 10:00:04 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status.
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.83,},NodeAddress{Type:Hostname,Address:tmp-node-e2e-d8aaa33e-cos-73-11647-293-0,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:a064d78421e9c8eac5e80fe5da19fb15,SystemUUID:A064D784-21E9-C8EA-C5E8-0FE5DA19FB15,BootID:821ee2ea-1060-424d-9835-b7cdc9159dc8,KernelVersion:4.14.138+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.2.8,KubeletVersion:v1.17.0-alpha.0.1445+4640b4f81ec6bc,KubeProxyVersion:v1.17.0-alpha.0.1445+4640b4f81ec6bc,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[docker.io/library/perl@sha256:978a36abce7dcf726bcdbb3f5b0d69ad3beb0cf688e9348a488f6f6023a027db docker.io/library/perl:5.26],SizeBytes:325130745,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/node-perf/tf-wide-deep-amd64@sha256:80d4564d5ab49ecfea3b20f75cc676d8dfd8b2aca364ed4c1a8a55fbcaaed7f6 gcr.io/kubernetes-e2e-test-images/node-perf/tf-wide-deep-amd64:1.0],SizeBytes:242137147,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/volume/gluster@sha256:e2d3308b2d27499d59f120ff46dfc6c4cb307a3f207f02894ecab902583761c9 gcr.io/kubernetes-e2e-test-images/volume/gluster:1.0],SizeBytes:111775822,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/volume/nfs@sha256:c2ad734346f608a5f7d69cfded93c4e8094069320657bd372d12ba21dea3ea71 gcr.io/kubernetes-e2e-test-images/volume/nfs:1.0],SizeBytes:82348896,},ContainerImage{Names:[docker.io/library/httpd@sha256:6feb0ea7b0967367da66e8d58ba813fde32bdb92f63bfc21a9e170d211539db4 docker.io/library/httpd:2.4.38-alpine],SizeBytes:40762646,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/node-perf/npb-is@sha256:9d08dd99565b25af37c990cd4474a4284b27e7ceb3f98328bb481edefedf8aa5 gcr.io/kubernetes-e2e-test-images/node-perf/npb-is:1.0],SizeBytes:39644608,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/node-perf/npb-ep@sha256:564314549347619cfcdbe6c7d042a29e133a00e922b37682890fff17ac1a7804 gcr.io/kubernetes-e2e-test-images/node-perf/npb-ep:1.0],SizeBytes:39643641,},ContainerImage{Names:[k8s.gcr.io/node-problem-detector@sha256:6e9b4a4eaa47f120be61f60573a545844de63401661812e2cfb7ae81a28efd19 k8s.gcr.io/node-problem-detector:v0.6.2],SizeBytes:33121906,},ContainerImage{Names:[docker.io/google/cadvisor@sha256:815386ebbe9a3490f38785ab11bda34ec8dacf4634af77b8912832d4f85dca04 docker.io/google/cadvisor:latest],SizeBytes:30530401,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/agnhost@sha256:4057a5580c7b59c4fe10d8ab2732c9dec35eea80fd41f7bafc7bd5acc7edf727 gcr.io/kubernetes-e2e-test-images/agnhost:2.6],SizeBytes:18352698,},ContainerImage{Names:[gcr.io/google-containers/debian-base@sha256:6966a0aedd7592c18ff2dd803c08bd85780ee19f5e3a2e7cf908a4cd837afcde gcr.io/google-containers/debian-base:0.4.1],SizeBytes:17748863,},ContainerImage{Names:[docker.io/library/nginx@sha256:a3a0c4126587884f8d3090efca87f5af075d7e7ac8308cffc09a5a082d5f4760 docker.io/library/nginx:1.14-alpine],SizeBytes:6976771,},ContainerImage{Names:[k8s.gcr.io/nvidia-gpu-device-plugin@sha256:4b036e8844920336fa48f36edeb7d4398f426d6a934ba022848deed2edbf09aa],SizeBytes:6819465,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/ipc-utils@sha256:bb127be3a1ecac0516f672a5e223d94fe6021021534ecb7a02a607a63154c3d8 
gcr.io/kubernetes-e2e-test-images/ipc-utils:1.0],SizeBytes:4004104,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nonewprivs@sha256:10066e9039219449fe3c81f38fe01928f87914150768ab81b62a468e51fa7411 gcr.io/kubernetes-e2e-test-images/nonewprivs:1.0],SizeBytes:3054649,},ContainerImage{Names:[gcr.io/authenticated-image-pulling/alpine@sha256:7ff177862cb50c602bfe81f805969412e619c054a2bbead977d0c276988aa4a0 gcr.io/authenticated-image-pulling/alpine:3.7],SizeBytes:2110879,},ContainerImage{Names:[docker.io/library/alpine@sha256:92251458088c638061cda8fd8b403b76d661a4dc6b7ee71b6affcf1872557b2b docker.io/library/alpine:3.7],SizeBytes:2109138,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/test-webserver@sha256:7f93d6e32798ff28bc6289254d0c2867fe2c849c8e46edc50f8624734309812e gcr.io/kubernetes-e2e-test-images/test-webserver:1.0],SizeBytes:1791163,},ContainerImage{Names:[k8s.gcr.io/stress:v1],SizeBytes:1558004,},ContainerImage{Names:[docker.io/library/busybox@sha256:e004c2cc521c95383aebb1fb5893719aa7a8eae2e7a71f316a4410784edb00a9 docker.io/library/busybox:1.29],SizeBytes:729986,},ContainerImage{Names:[k8s.gcr.io/busybox@sha256:4bdd623e848417d96127e16037743f0cd8b528c026e9175e22a84f639eca58ff],SizeBytes:676941,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/mounttest@sha256:c0bd6f0755f42af09a68c9a47fb993136588a76b3200ec305796b60d629d85d2 gcr.io/kubernetes-e2e-test-images/mounttest:1.0],SizeBytes:599341,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/mounttest-user@sha256:17319ca525ee003681fccf7e8c6b1b910ff4f49b653d939ac7f9b6e7c463933d gcr.io/kubernetes-e2e-test-images/mounttest-user:1.0],SizeBytes:539309,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:f78411e19d84a252e53bff71a4407a5686c46983a2c2eeed83929b888179acea k8s.gcr.io/pause:3.1],SizeBytes:317164,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:&NodeConfigStatus{Assigned:nil,Active:nil,LastKnownGood:nil,Error:,},},}
Sep 16 10:20:40.282: INFO: Logging kubelet events for node tmp-node-e2e-d8aaa33e-cos-73-11647-293-0
Sep 16 10:20:40.284: INFO: Logging pods the kubelet thinks is on node tmp-node-e2e-d8aaa33e-cos-73-11647-293-0
W0916 10:20:40.297663 1028 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Sep 16 10:20:40.318: INFO: Latency metrics for node tmp-node-e2e-d8aaa33e-cos-73-11647-293-0
Sep 16 10:20:40.318: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-8764" for this suite.
Sep 16 10:20:46.324: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 16 10:20:46.364: INFO: namespace pod-network-test-8764 deletion completed in 6.044641807s
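The wait loop above rejects a node that carries any NoSchedule taint outside its NonblockingTaints allow-list, even though the node reports Ready=true. A hedged reconstruction of that predicate (function and parameter names are assumptions for illustration; the framework's actual check lives in test/e2e/framework and may differ in detail):

package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
)

// isNodeSchedulable is an illustrative reconstruction of the check implied
// by the log: a node only counts as schedulable when it is not cordoned and
// every NoSchedule/NoExecute taint is in the non-blocking allow-list.
func isNodeSchedulable(node *v1.Node, nonblockingTaints map[string]bool) bool {
	if node.Spec.Unschedulable {
		return false
	}
	for _, taint := range node.Spec.Taints {
		if taint.Effect != v1.TaintEffectNoSchedule && taint.Effect != v1.TaintEffectNoExecute {
			continue
		}
		if !nonblockingTaints[taint.Key] {
			// node.kubernetes.io/not-ready fails this test, which is why the
			// Ready=true node above is still reported as unschedulable.
			return false
		}
	}
	return true
}

func main() {
	node := &v1.Node{}
	node.Spec.Taints = []v1.Taint{{Key: "node.kubernetes.io/not-ready", Effect: v1.TaintEffectNoSchedule}}
	nonblocking := map[string]bool{"node-role.kubernetes.io/master": true}
	fmt.Println(isNodeSchedulable(node, nonblocking)) // false, on every poll for 10m
}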
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=E2eNode\sSuite\s\[sig\-network\]\sNetworking\sGranular\sChecks\:\sPods\sshould\sfunction\sfor\snode\-pod\scommunication\:\shttp\s\[LinuxOnly\]\s\[NodeConformance\]\s\[Conformance\]$'
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 Unexpected error: <*errors.errorString | 0xc00021ad80>: { s: "timed out waiting for the condition", } timed out waiting for the condition occurred /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:635 (from junit_ubuntu_03.xml)
[BeforeEach] [sig-network] Networking /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
STEP: Building a namespace api object, basename pod-network-test
Sep 16 10:00:36.301: INFO: Skipping waiting for service account
[It] should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: Performing setup for networking test in namespace pod-network-test-4141
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Sep 16 10:00:36.301: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Sep 16 10:00:36.302: INFO: Unschedulable nodes:
Sep 16 10:00:36.302: INFO: -> tmp-node-e2e-d8aaa33e-ubuntu-gke-1804-d1809-0-v20190913 Ready=true Network=false Taints=[{node.kubernetes.io/not-ready NoSchedule <nil>}] NonblockingTaints:node-role.kubernetes.io/master
Sep 16 10:00:36.302: INFO: ================================
[... the identical "Unschedulable nodes" report repeats every 30s from 10:01:06.304 through 10:10:36.305; the node stays Ready=true with the node.kubernetes.io/not-ready NoSchedule taint for the whole 10m0s wait ...]
[AfterEach] [sig-network] Networking /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
STEP: Collecting events from namespace "pod-network-test-4141".
STEP: Found 0 events.
Sep 16 10:10:36.313: INFO: POD NODE PHASE GRACE CONDITIONS
Sep 16 10:10:36.313: INFO:
Sep 16 10:10:36.315: INFO: Logging node info for node tmp-node-e2e-d8aaa33e-ubuntu-gke-1804-d1809-0-v20190913
Sep 16 10:10:36.317: INFO: Node Info: &Node{ObjectMeta:{tmp-node-e2e-d8aaa33e-ubuntu-gke-1804-d1809-0-v20190913 /api/v1/nodes/tmp-node-e2e-d8aaa33e-ubuntu-gke-1804-d1809-0-v20190913 e06930a8-592f-4099-a9d5-edfc75a6bb47 2982 0 2019-09-16 10:00:14 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:tmp-node-e2e-d8aaa33e-ubuntu-gke-1804-d1809-0-v20190913 kubernetes.io/os:linux] map[volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] []},Spec:NodeSpec{PodCIDR:,DoNotUse_ExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node.kubernetes.io/not-ready,Value:,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[],},Status:NodeStatus{Capacity:ResourceList{cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{20629221376 0} {<nil>} 20145724Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3872014336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{18566299208 0} {<nil>} 18566299208 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3609870336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2019-09-16 10:10:14 +0000 UTC,LastTransitionTime:2019-09-16 10:00:10 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2019-09-16 10:10:14 +0000 UTC,LastTransitionTime:2019-09-16 10:00:10 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2019-09-16 10:10:14 +0000 UTC,LastTransitionTime:2019-09-16 10:00:10 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2019-09-16 10:10:14 +0000 UTC,LastTransitionTime:2019-09-16 10:00:10 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status.
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.84,},NodeAddress{Type:Hostname,Address:tmp-node-e2e-d8aaa33e-ubuntu-gke-1804-d1809-0-v20190913,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:f51035bcd9db8910e22af6d51a902fac,SystemUUID:F51035BC-D9DB-8910-E22A-F6D51A902FAC,BootID:9e56941a-9d7d-4215-b61f-bae77a3d8412,KernelVersion:4.15.0-1042-gke,OSImage:Ubuntu 18.04.3 LTS,ContainerRuntimeVersion:containerd://1.2.7,KubeletVersion:v1.17.0-alpha.0.1445+4640b4f81ec6bc,KubeProxyVersion:v1.17.0-alpha.0.1445+4640b4f81ec6bc,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[docker.io/library/perl@sha256:978a36abce7dcf726bcdbb3f5b0d69ad3beb0cf688e9348a488f6f6023a027db docker.io/library/perl:5.26],SizeBytes:325130745,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/node-perf/tf-wide-deep-amd64@sha256:80d4564d5ab49ecfea3b20f75cc676d8dfd8b2aca364ed4c1a8a55fbcaaed7f6 gcr.io/kubernetes-e2e-test-images/node-perf/tf-wide-deep-amd64:1.0],SizeBytes:242137147,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/volume/gluster@sha256:e2d3308b2d27499d59f120ff46dfc6c4cb307a3f207f02894ecab902583761c9 gcr.io/kubernetes-e2e-test-images/volume/gluster:1.0],SizeBytes:111775822,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/volume/nfs@sha256:c2ad734346f608a5f7d69cfded93c4e8094069320657bd372d12ba21dea3ea71 gcr.io/kubernetes-e2e-test-images/volume/nfs:1.0],SizeBytes:82348896,},ContainerImage{Names:[docker.io/library/httpd@sha256:6feb0ea7b0967367da66e8d58ba813fde32bdb92f63bfc21a9e170d211539db4 docker.io/library/httpd:2.4.38-alpine],SizeBytes:40762646,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/node-perf/npb-is@sha256:9d08dd99565b25af37c990cd4474a4284b27e7ceb3f98328bb481edefedf8aa5 gcr.io/kubernetes-e2e-test-images/node-perf/npb-is:1.0],SizeBytes:39644608,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/node-perf/npb-ep@sha256:564314549347619cfcdbe6c7d042a29e133a00e922b37682890fff17ac1a7804 gcr.io/kubernetes-e2e-test-images/node-perf/npb-ep:1.0],SizeBytes:39643641,},ContainerImage{Names:[k8s.gcr.io/node-problem-detector@sha256:6e9b4a4eaa47f120be61f60573a545844de63401661812e2cfb7ae81a28efd19 k8s.gcr.io/node-problem-detector:v0.6.2],SizeBytes:33121906,},ContainerImage{Names:[docker.io/google/cadvisor@sha256:815386ebbe9a3490f38785ab11bda34ec8dacf4634af77b8912832d4f85dca04 docker.io/google/cadvisor:latest],SizeBytes:30530401,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/agnhost@sha256:4057a5580c7b59c4fe10d8ab2732c9dec35eea80fd41f7bafc7bd5acc7edf727 gcr.io/kubernetes-e2e-test-images/agnhost:2.6],SizeBytes:18352698,},ContainerImage{Names:[gcr.io/google-containers/debian-base@sha256:6966a0aedd7592c18ff2dd803c08bd85780ee19f5e3a2e7cf908a4cd837afcde gcr.io/google-containers/debian-base:0.4.1],SizeBytes:17748863,},ContainerImage{Names:[docker.io/library/nginx@sha256:a3a0c4126587884f8d3090efca87f5af075d7e7ac8308cffc09a5a082d5f4760 docker.io/library/nginx:1.14-alpine],SizeBytes:6976771,},ContainerImage{Names:[k8s.gcr.io/nvidia-gpu-device-plugin@sha256:4b036e8844920336fa48f36edeb7d4398f426d6a934ba022848deed2edbf09aa],SizeBytes:6819465,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/ipc-utils@sha256:bb127be3a1ecac0516f672a5e223d94fe6021021534ecb7a02a607a63154c3d8 
gcr.io/kubernetes-e2e-test-images/ipc-utils:1.0],SizeBytes:4004104,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nonewprivs@sha256:10066e9039219449fe3c81f38fe01928f87914150768ab81b62a468e51fa7411 gcr.io/kubernetes-e2e-test-images/nonewprivs:1.0],SizeBytes:3054649,},ContainerImage{Names:[gcr.io/authenticated-image-pulling/alpine@sha256:7ff177862cb50c602bfe81f805969412e619c054a2bbead977d0c276988aa4a0 gcr.io/authenticated-image-pulling/alpine:3.7],SizeBytes:2110879,},ContainerImage{Names:[docker.io/library/alpine@sha256:92251458088c638061cda8fd8b403b76d661a4dc6b7ee71b6affcf1872557b2b docker.io/library/alpine:3.7],SizeBytes:2109138,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/test-webserver@sha256:7f93d6e32798ff28bc6289254d0c2867fe2c849c8e46edc50f8624734309812e gcr.io/kubernetes-e2e-test-images/test-webserver:1.0],SizeBytes:1791163,},ContainerImage{Names:[k8s.gcr.io/stress:v1],SizeBytes:1558004,},ContainerImage{Names:[docker.io/library/busybox@sha256:e004c2cc521c95383aebb1fb5893719aa7a8eae2e7a71f316a4410784edb00a9 docker.io/library/busybox:1.29],SizeBytes:729986,},ContainerImage{Names:[k8s.gcr.io/busybox@sha256:4bdd623e848417d96127e16037743f0cd8b528c026e9175e22a84f639eca58ff],SizeBytes:676941,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/mounttest@sha256:c0bd6f0755f42af09a68c9a47fb993136588a76b3200ec305796b60d629d85d2 gcr.io/kubernetes-e2e-test-images/mounttest:1.0],SizeBytes:599341,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/mounttest-user@sha256:17319ca525ee003681fccf7e8c6b1b910ff4f49b653d939ac7f9b6e7c463933d gcr.io/kubernetes-e2e-test-images/mounttest-user:1.0],SizeBytes:539309,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:f78411e19d84a252e53bff71a4407a5686c46983a2c2eeed83929b888179acea k8s.gcr.io/pause:3.1],SizeBytes:317164,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:&NodeConfigStatus{Assigned:nil,Active:nil,LastKnownGood:nil,Error:,},},} Sep 16 10:10:36.317: INFO: Logging kubelet events for node tmp-node-e2e-d8aaa33e-ubuntu-gke-1804-d1809-0-v20190913 Sep 16 10:10:36.318: INFO: Logging pods the kubelet thinks is on node tmp-node-e2e-d8aaa33e-ubuntu-gke-1804-d1809-0-v20190913 Sep 16 10:10:36.321: INFO: pod-init-c6cdc3e8-66fb-4cfa-8bb3-41105599daff started at 2019-09-16 10:10:28 +0000 UTC (2+1 container statuses recorded) Sep 16 10:10:36.321: INFO: Init container init1 ready: true, restart count 0 Sep 16 10:10:36.321: INFO: Init container init2 ready: true, restart count 0 Sep 16 10:10:36.321: INFO: Container run1 ready: false, restart count 0 Sep 16 10:10:36.321: INFO: liveness-d861d732-5303-45d4-9735-e928603badb5 started at 2019-09-16 10:08:39 +0000 UTC (0+1 container statuses recorded) Sep 16 10:10:36.321: INFO: Container liveness ready: true, restart count 0 Sep 16 10:10:36.321: INFO: image-pull-test96232797-8565-421d-9122-597ce7ee0312 started at 2019-09-16 10:06:22 +0000 UTC (0+1 container statuses recorded) Sep 16 10:10:36.321: INFO: Container image-pull-test ready: false, restart count 0 Sep 16 10:10:36.321: INFO: test-webserver-b28a11b9-dd3f-44cb-b3e7-26e546a152b4 started at 2019-09-16 10:06:42 +0000 UTC (0+1 container statuses recorded) Sep 16 10:10:36.321: INFO: Container test-webserver ready: true, restart count 0 W0916 10:10:36.322521 2866 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
Sep 16 10:10:36.399: INFO: Latency metrics for node tmp-node-e2e-d8aaa33e-ubuntu-gke-1804-d1809-0-v20190913
Sep 16 10:10:36.399: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-4141" for this suite.
Sep 16 10:10:42.405: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 16 10:10:42.447: INFO: namespace pod-network-test-4141 deletion completed in 6.046519341s
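Both node images (cos and ubuntu) fail the same way, so the quickest confirmation is to look at Spec.Taints directly. A standalone client-go sketch that prints each node's taints; the kubeconfig location is an assumption, and the List signature matches client-go of this vintage (no context argument):

package main

import (
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumes a kubeconfig at the default ~/.kube/config path; adjust as needed.
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	nodes, err := clientset.CoreV1().Nodes().List(metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, node := range nodes.Items {
		// For the runs above this would print the lingering
		// node.kubernetes.io/not-ready NoSchedule taint.
		fmt.Printf("%s taints: %v\n", node.Name, node.Spec.Taints)
	}
}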
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=E2eNode\sSuite\s\[sig\-network\]\sNetworking\sGranular\sChecks\:\sPods\sshould\sfunction\sfor\snode\-pod\scommunication\:\shttp\s\[LinuxOnly\]\s\[NodeConformance\]\s\[Conformance\]$'
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 Unexpected error: <*errors.errorString | 0xc00021ad70>: { s: "timed out waiting for the condition", } timed out waiting for the condition occurred /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:635 (from junit_cos-stable_04.xml)
[BeforeEach] [sig-network] Networking /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
STEP: Building a namespace api object, basename pod-network-test
Sep 16 10:00:34.111: INFO: Skipping waiting for service account
[It] should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: Performing setup for networking test in namespace pod-network-test-3750
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Sep 16 10:00:34.111: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Sep 16 10:00:34.117: INFO: Unschedulable nodes:
Sep 16 10:00:34.117: INFO: -> tmp-node-e2e-d8aaa33e-cos-73-11647-293-0 Ready=true Network=false Taints=[{node.kubernetes.io/not-ready NoSchedule <nil>}] NonblockingTaints:node-role.kubernetes.io/master
Sep 16 10:00:34.117: INFO: ================================
[... the identical "Unschedulable nodes" report repeats every 30s from 10:01:04.119 through 10:10:34.120; the node stays Ready=true with the node.kubernetes.io/not-ready NoSchedule taint for the whole 10m0s wait ...]
[AfterEach] [sig-network] Networking /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
STEP: Collecting events from namespace "pod-network-test-3750".
STEP: Found 0 events.
Sep 16 10:10:34.124: INFO: POD NODE PHASE GRACE CONDITIONS
Sep 16 10:10:34.124: INFO:
Sep 16 10:10:34.127: INFO: Logging node info for node tmp-node-e2e-d8aaa33e-cos-73-11647-293-0
Sep 16 10:10:34.129: INFO: Node Info: [identical to the tmp-node-e2e-d8aaa33e-cos-73-11647-293-0 dump above, apart from resourceVersion 3092 and 2019-09-16 10:10:08 heartbeat times; the node.kubernetes.io/not-ready NoSchedule taint is still present in Spec.Taints]
Sep 16 10:10:34.129: INFO: Logging kubelet events for node tmp-node-e2e-d8aaa33e-cos-73-11647-293-0
Sep 16 10:10:34.130: INFO: Logging pods the kubelet thinks is on node tmp-node-e2e-d8aaa33e-cos-73-11647-293-0
Sep 16 10:10:34.133: INFO: busybox-39a611e4-31d9-4aff-b302-2e2d76ec74b8 started at 2019-09-16 10:08:44 +0000 UTC (0+1 container statuses recorded)
Sep 16 10:10:34.133: INFO: Container busybox ready: true, restart count 0
Sep 16 10:10:34.133: INFO: image-pull-test83cf1c31-1e88-4e8a-a975-822cbeca2de9 started at 2019-09-16 10:08:06 +0000 UTC (0+1 container statuses recorded)
Sep 16 10:10:34.133: INFO: Container image-pull-test ready: true, restart count 0
Sep 16 10:10:34.133: INFO: static-pod-c5149ca4-a732-410b-9929-9871f3f62c1e-tmp-node-e2e-d8aaa33e-cos-73-11647-293-0 started at 2019-09-16 10:09:16 +0000 UTC (0+1 container statuses recorded)
Sep 16 10:10:34.133: INFO: Container test ready: true, restart count 0
W0916 10:10:34.134665 1028 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Sep 16 10:10:34.210: INFO: Latency metrics for node tmp-node-e2e-d8aaa33e-cos-73-11647-293-0
Sep 16 10:10:34.210: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-3750" for this suite.
Sep 16 10:10:40.216: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Sep 16 10:10:40.261: INFO: namespace pod-network-test-3750 deletion completed in 6.048947213s
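Every dump shows the same Spec.Taints entry, node.kubernetes.io/not-ready with effect NoSchedule, and nothing in this node-e2e environment ever removes it: in a full cluster that cleanup is performed by the node lifecycle controller in kube-controller-manager once the node reports Ready, and no controller-manager runs here, which is consistent with the taint persisting for the whole job. A hedged sketch of the removal step such a controller performs (illustrative only, not the controller's actual code):

package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
)

// removeNotReadyTaint drops the not-ready taint from a node's spec; a real
// controller would follow this with an Update or Patch against the API server.
func removeNotReadyTaint(node *v1.Node) {
	kept := node.Spec.Taints[:0]
	for _, t := range node.Spec.Taints {
		if t.Key == "node.kubernetes.io/not-ready" && t.Effect == v1.TaintEffectNoSchedule {
			continue // the taint blocking the schedulability wait above
		}
		kept = append(kept, t)
	}
	node.Spec.Taints = kept
}

func main() {
	node := &v1.Node{}
	node.Spec.Taints = []v1.Taint{{Key: "node.kubernetes.io/not-ready", Effect: v1.TaintEffectNoSchedule}}
	removeNotReadyTaint(node)
	fmt.Println(len(node.Spec.Taints)) // 0
}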
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=E2eNode\sSuite\s\[sig\-network\]\sNetworking\sGranular\sChecks\:\sPods\sshould\sfunction\sfor\snode\-pod\scommunication\:\sudp\s\[LinuxOnly\]\s\[NodeConformance\]\s\[Conformance\]$'
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 Unexpected error: <*errors.errorString | 0xc000218d80>: { s: "timed out waiting for the condition", } timed out waiting for the condition occurred /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:635 (from junit_cos-stable_07.xml)
[BeforeEach] [sig-network] Networking /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 �[1mSTEP�[0m: Creating a kubernetes client �[1mSTEP�[0m: Building a namespace api object, basename pod-network-test Sep 16 10:11:29.639: INFO: Skipping waiting for service account [It] should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 �[1mSTEP�[0m: Performing setup for networking test in namespace pod-network-test-5910 �[1mSTEP�[0m: creating a selector �[1mSTEP�[0m: Creating the service pods in kubernetes Sep 16 10:11:29.639: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable Sep 16 10:11:29.640: INFO: Unschedulable nodes: Sep 16 10:11:29.640: INFO: -> tmp-node-e2e-d8aaa33e-cos-73-11647-293-0 Ready=true Network=false Taints=[{node.kubernetes.io/not-ready NoSchedule <nil>}] NonblockingTaints:node-role.kubernetes.io/master Sep 16 10:11:29.640: INFO: ================================ Sep 16 10:11:59.642: INFO: Unschedulable nodes: Sep 16 10:11:59.642: INFO: -> tmp-node-e2e-d8aaa33e-cos-73-11647-293-0 Ready=true Network=false Taints=[{node.kubernetes.io/not-ready NoSchedule <nil>}] NonblockingTaints:node-role.kubernetes.io/master Sep 16 10:11:59.642: INFO: ================================ Sep 16 10:12:29.641: INFO: Unschedulable nodes: Sep 16 10:12:29.641: INFO: -> tmp-node-e2e-d8aaa33e-cos-73-11647-293-0 Ready=true Network=false Taints=[{node.kubernetes.io/not-ready NoSchedule <nil>}] NonblockingTaints:node-role.kubernetes.io/master Sep 16 10:12:29.642: INFO: ================================ Sep 16 10:12:59.642: INFO: Unschedulable nodes: Sep 16 10:12:59.642: INFO: -> tmp-node-e2e-d8aaa33e-cos-73-11647-293-0 Ready=true Network=false Taints=[{node.kubernetes.io/not-ready NoSchedule <nil>}] NonblockingTaints:node-role.kubernetes.io/master Sep 16 10:12:59.642: INFO: ================================ Sep 16 10:13:29.642: INFO: Unschedulable nodes: Sep 16 10:13:29.642: INFO: -> tmp-node-e2e-d8aaa33e-cos-73-11647-293-0 Ready=true Network=false Taints=[{node.kubernetes.io/not-ready NoSchedule <nil>}] NonblockingTaints:node-role.kubernetes.io/master Sep 16 10:13:29.642: INFO: ================================ Sep 16 10:13:59.642: INFO: Unschedulable nodes: Sep 16 10:13:59.642: INFO: -> tmp-node-e2e-d8aaa33e-cos-73-11647-293-0 Ready=true Network=false Taints=[{node.kubernetes.io/not-ready NoSchedule <nil>}] NonblockingTaints:node-role.kubernetes.io/master Sep 16 10:13:59.642: INFO: ================================ Sep 16 10:14:29.642: INFO: Unschedulable nodes: Sep 16 10:14:29.642: INFO: -> tmp-node-e2e-d8aaa33e-cos-73-11647-293-0 Ready=true Network=false Taints=[{node.kubernetes.io/not-ready NoSchedule <nil>}] NonblockingTaints:node-role.kubernetes.io/master Sep 16 10:14:29.642: INFO: ================================ Sep 16 10:14:59.641: INFO: Unschedulable nodes: Sep 16 10:14:59.642: INFO: -> tmp-node-e2e-d8aaa33e-cos-73-11647-293-0 Ready=true Network=false Taints=[{node.kubernetes.io/not-ready NoSchedule <nil>}] NonblockingTaints:node-role.kubernetes.io/master Sep 16 10:14:59.642: INFO: ================================ Sep 16 10:15:29.642: INFO: Unschedulable nodes: Sep 16 10:15:29.642: INFO: -> tmp-node-e2e-d8aaa33e-cos-73-11647-293-0 Ready=true Network=false Taints=[{node.kubernetes.io/not-ready NoSchedule <nil>}] NonblockingTaints:node-role.kubernetes.io/master Sep 16 10:15:29.642: INFO: 
================================ Sep 16 10:15:59.641: INFO: Unschedulable nodes: Sep 16 10:15:59.641: INFO: -> tmp-node-e2e-d8aaa33e-cos-73-11647-293-0 Ready=true Network=false Taints=[{node.kubernetes.io/not-ready NoSchedule <nil>}] NonblockingTaints:node-role.kubernetes.io/master Sep 16 10:15:59.641: INFO: ================================ Sep 16 10:16:29.642: INFO: Unschedulable nodes: Sep 16 10:16:29.642: INFO: -> tmp-node-e2e-d8aaa33e-cos-73-11647-293-0 Ready=true Network=false Taints=[{node.kubernetes.io/not-ready NoSchedule <nil>}] NonblockingTaints:node-role.kubernetes.io/master Sep 16 10:16:29.642: INFO: ================================ Sep 16 10:16:59.642: INFO: Unschedulable nodes: Sep 16 10:16:59.642: INFO: -> tmp-node-e2e-d8aaa33e-cos-73-11647-293-0 Ready=true Network=false Taints=[{node.kubernetes.io/not-ready NoSchedule <nil>}] NonblockingTaints:node-role.kubernetes.io/master Sep 16 10:16:59.642: INFO: ================================ Sep 16 10:17:29.641: INFO: Unschedulable nodes: Sep 16 10:17:29.641: INFO: -> tmp-node-e2e-d8aaa33e-cos-73-11647-293-0 Ready=true Network=false Taints=[{node.kubernetes.io/not-ready NoSchedule <nil>}] NonblockingTaints:node-role.kubernetes.io/master Sep 16 10:17:29.641: INFO: ================================ Sep 16 10:17:59.642: INFO: Unschedulable nodes: Sep 16 10:17:59.642: INFO: -> tmp-node-e2e-d8aaa33e-cos-73-11647-293-0 Ready=true Network=false Taints=[{node.kubernetes.io/not-ready NoSchedule <nil>}] NonblockingTaints:node-role.kubernetes.io/master Sep 16 10:17:59.642: INFO: ================================ Sep 16 10:18:29.642: INFO: Unschedulable nodes: Sep 16 10:18:29.642: INFO: -> tmp-node-e2e-d8aaa33e-cos-73-11647-293-0 Ready=true Network=false Taints=[{node.kubernetes.io/not-ready NoSchedule <nil>}] NonblockingTaints:node-role.kubernetes.io/master Sep 16 10:18:29.642: INFO: ================================ Sep 16 10:18:59.641: INFO: Unschedulable nodes: Sep 16 10:18:59.641: INFO: -> tmp-node-e2e-d8aaa33e-cos-73-11647-293-0 Ready=true Network=false Taints=[{node.kubernetes.io/not-ready NoSchedule <nil>}] NonblockingTaints:node-role.kubernetes.io/master Sep 16 10:18:59.641: INFO: ================================ Sep 16 10:19:29.642: INFO: Unschedulable nodes: Sep 16 10:19:29.642: INFO: -> tmp-node-e2e-d8aaa33e-cos-73-11647-293-0 Ready=true Network=false Taints=[{node.kubernetes.io/not-ready NoSchedule <nil>}] NonblockingTaints:node-role.kubernetes.io/master Sep 16 10:19:29.642: INFO: ================================ Sep 16 10:19:59.642: INFO: Unschedulable nodes: Sep 16 10:19:59.642: INFO: -> tmp-node-e2e-d8aaa33e-cos-73-11647-293-0 Ready=true Network=false Taints=[{node.kubernetes.io/not-ready NoSchedule <nil>}] NonblockingTaints:node-role.kubernetes.io/master Sep 16 10:19:59.642: INFO: ================================ Sep 16 10:20:29.642: INFO: Unschedulable nodes: Sep 16 10:20:29.642: INFO: -> tmp-node-e2e-d8aaa33e-cos-73-11647-293-0 Ready=true Network=false Taints=[{node.kubernetes.io/not-ready NoSchedule <nil>}] NonblockingTaints:node-role.kubernetes.io/master Sep 16 10:20:29.642: INFO: ================================ Sep 16 10:20:59.642: INFO: Unschedulable nodes: Sep 16 10:20:59.642: INFO: -> tmp-node-e2e-d8aaa33e-cos-73-11647-293-0 Ready=true Network=false Taints=[{node.kubernetes.io/not-ready NoSchedule <nil>}] NonblockingTaints:node-role.kubernetes.io/master Sep 16 10:20:59.642: INFO: ================================ Sep 16 10:21:29.642: INFO: Unschedulable nodes: Sep 16 10:21:29.642: INFO: -> 
tmp-node-e2e-d8aaa33e-cos-73-11647-293-0 Ready=true Network=false Taints=[{node.kubernetes.io/not-ready NoSchedule <nil>}] NonblockingTaints:node-role.kubernetes.io/master Sep 16 10:21:29.642: INFO: ================================ Sep 16 10:21:29.642: INFO: Unschedulable nodes: Sep 16 10:21:29.642: INFO: -> tmp-node-e2e-d8aaa33e-cos-73-11647-293-0 Ready=true Network=false Taints=[{node.kubernetes.io/not-ready NoSchedule <nil>}] NonblockingTaints:node-role.kubernetes.io/master Sep 16 10:21:29.642: INFO: ================================ [AfterEach] [sig-network] Networking /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 �[1mSTEP�[0m: Collecting events from namespace "pod-network-test-5910". �[1mSTEP�[0m: Found 0 events. Sep 16 10:21:29.645: INFO: POD NODE PHASE GRACE CONDITIONS Sep 16 10:21:29.645: INFO: Sep 16 10:21:29.647: INFO: Logging node info for node tmp-node-e2e-d8aaa33e-cos-73-11647-293-0 Sep 16 10:21:29.649: INFO: Node Info: &Node{ObjectMeta:{tmp-node-e2e-d8aaa33e-cos-73-11647-293-0 /api/v1/nodes/tmp-node-e2e-d8aaa33e-cos-73-11647-293-0 de1de0c3-ce1d-44d6-bb56-2e7839e24f20 3526 0 2019-09-16 10:00:07 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:tmp-node-e2e-d8aaa33e-cos-73-11647-293-0 kubernetes.io/os:linux] map[volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] []},Spec:NodeSpec{PodCIDR:,DoNotUse_ExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node.kubernetes.io/not-ready,Value:,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[],},Status:NodeStatus{Capacity:ResourceList{cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{16684785664 0} {<nil>} BinarySI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3878486016 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{15016307073 0} {<nil>} 15016307073 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3616342016 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2019-09-16 10:21:09 +0000 UTC,LastTransitionTime:2019-09-16 10:00:04 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2019-09-16 10:21:09 +0000 UTC,LastTransitionTime:2019-09-16 10:00:04 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2019-09-16 10:21:09 +0000 UTC,LastTransitionTime:2019-09-16 10:00:04 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2019-09-16 10:21:09 +0000 UTC,LastTransitionTime:2019-09-16 10:00:04 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.83,},NodeAddress{Type:Hostname,Address:tmp-node-e2e-d8aaa33e-cos-73-11647-293-0,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:a064d78421e9c8eac5e80fe5da19fb15,SystemUUID:A064D784-21E9-C8EA-C5E8-0FE5DA19FB15,BootID:821ee2ea-1060-424d-9835-b7cdc9159dc8,KernelVersion:4.14.138+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.2.8,KubeletVersion:v1.17.0-alpha.0.1445+4640b4f81ec6bc,KubeProxyVersion:v1.17.0-alpha.0.1445+4640b4f81ec6bc,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[docker.io/library/perl@sha256:978a36abce7dcf726bcdbb3f5b0d69ad3beb0cf688e9348a488f6f6023a027db docker.io/library/perl:5.26],SizeBytes:325130745,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/node-perf/tf-wide-deep-amd64@sha256:80d4564d5ab49ecfea3b20f75cc676d8dfd8b2aca364ed4c1a8a55fbcaaed7f6 gcr.io/kubernetes-e2e-test-images/node-perf/tf-wide-deep-amd64:1.0],SizeBytes:242137147,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/volume/gluster@sha256:e2d3308b2d27499d59f120ff46dfc6c4cb307a3f207f02894ecab902583761c9 gcr.io/kubernetes-e2e-test-images/volume/gluster:1.0],SizeBytes:111775822,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/volume/nfs@sha256:c2ad734346f608a5f7d69cfded93c4e8094069320657bd372d12ba21dea3ea71 gcr.io/kubernetes-e2e-test-images/volume/nfs:1.0],SizeBytes:82348896,},ContainerImage{Names:[docker.io/library/httpd@sha256:6feb0ea7b0967367da66e8d58ba813fde32bdb92f63bfc21a9e170d211539db4 docker.io/library/httpd:2.4.38-alpine],SizeBytes:40762646,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/node-perf/npb-is@sha256:9d08dd99565b25af37c990cd4474a4284b27e7ceb3f98328bb481edefedf8aa5 gcr.io/kubernetes-e2e-test-images/node-perf/npb-is:1.0],SizeBytes:39644608,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/node-perf/npb-ep@sha256:564314549347619cfcdbe6c7d042a29e133a00e922b37682890fff17ac1a7804 gcr.io/kubernetes-e2e-test-images/node-perf/npb-ep:1.0],SizeBytes:39643641,},ContainerImage{Names:[k8s.gcr.io/node-problem-detector@sha256:6e9b4a4eaa47f120be61f60573a545844de63401661812e2cfb7ae81a28efd19 k8s.gcr.io/node-problem-detector:v0.6.2],SizeBytes:33121906,},ContainerImage{Names:[docker.io/google/cadvisor@sha256:815386ebbe9a3490f38785ab11bda34ec8dacf4634af77b8912832d4f85dca04 docker.io/google/cadvisor:latest],SizeBytes:30530401,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/agnhost@sha256:4057a5580c7b59c4fe10d8ab2732c9dec35eea80fd41f7bafc7bd5acc7edf727 gcr.io/kubernetes-e2e-test-images/agnhost:2.6],SizeBytes:18352698,},ContainerImage{Names:[gcr.io/google-containers/debian-base@sha256:6966a0aedd7592c18ff2dd803c08bd85780ee19f5e3a2e7cf908a4cd837afcde gcr.io/google-containers/debian-base:0.4.1],SizeBytes:17748863,},ContainerImage{Names:[docker.io/library/nginx@sha256:a3a0c4126587884f8d3090efca87f5af075d7e7ac8308cffc09a5a082d5f4760 docker.io/library/nginx:1.14-alpine],SizeBytes:6976771,},ContainerImage{Names:[k8s.gcr.io/nvidia-gpu-device-plugin@sha256:4b036e8844920336fa48f36edeb7d4398f426d6a934ba022848deed2edbf09aa],SizeBytes:6819465,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/ipc-utils@sha256:bb127be3a1ecac0516f672a5e223d94fe6021021534ecb7a02a607a63154c3d8 
gcr.io/kubernetes-e2e-test-images/ipc-utils:1.0],SizeBytes:4004104,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nonewprivs@sha256:10066e9039219449fe3c81f38fe01928f87914150768ab81b62a468e51fa7411 gcr.io/kubernetes-e2e-test-images/nonewprivs:1.0],SizeBytes:3054649,},ContainerImage{Names:[gcr.io/authenticated-image-pulling/alpine@sha256:7ff177862cb50c602bfe81f805969412e619c054a2bbead977d0c276988aa4a0 gcr.io/authenticated-image-pulling/alpine:3.7],SizeBytes:2110879,},ContainerImage{Names:[docker.io/library/alpine@sha256:92251458088c638061cda8fd8b403b76d661a4dc6b7ee71b6affcf1872557b2b docker.io/library/alpine:3.7],SizeBytes:2109138,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/test-webserver@sha256:7f93d6e32798ff28bc6289254d0c2867fe2c849c8e46edc50f8624734309812e gcr.io/kubernetes-e2e-test-images/test-webserver:1.0],SizeBytes:1791163,},ContainerImage{Names:[k8s.gcr.io/stress:v1],SizeBytes:1558004,},ContainerImage{Names:[docker.io/library/busybox@sha256:e004c2cc521c95383aebb1fb5893719aa7a8eae2e7a71f316a4410784edb00a9 docker.io/library/busybox:1.29],SizeBytes:729986,},ContainerImage{Names:[k8s.gcr.io/busybox@sha256:4bdd623e848417d96127e16037743f0cd8b528c026e9175e22a84f639eca58ff],SizeBytes:676941,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/mounttest@sha256:c0bd6f0755f42af09a68c9a47fb993136588a76b3200ec305796b60d629d85d2 gcr.io/kubernetes-e2e-test-images/mounttest:1.0],SizeBytes:599341,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/mounttest-user@sha256:17319ca525ee003681fccf7e8c6b1b910ff4f49b653d939ac7f9b6e7c463933d gcr.io/kubernetes-e2e-test-images/mounttest-user:1.0],SizeBytes:539309,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:f78411e19d84a252e53bff71a4407a5686c46983a2c2eeed83929b888179acea k8s.gcr.io/pause:3.1],SizeBytes:317164,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:&NodeConfigStatus{Assigned:nil,Active:nil,LastKnownGood:nil,Error:,},},} Sep 16 10:21:29.649: INFO: Logging kubelet events for node tmp-node-e2e-d8aaa33e-cos-73-11647-293-0 Sep 16 10:21:29.650: INFO: Logging pods the kubelet thinks is on node tmp-node-e2e-d8aaa33e-cos-73-11647-293-0 W0916 10:21:29.653451 1005 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Sep 16 10:21:29.673: INFO: Latency metrics for node tmp-node-e2e-d8aaa33e-cos-73-11647-293-0 Sep 16 10:21:29.673: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-5910" for this suite. Sep 16 10:21:35.679: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Sep 16 10:21:35.719: INFO: namespace pod-network-test-5910 deletion completed in 6.044675101s
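The poll loop above is the framework's schedulability gate: a node counts as unschedulable while it carries any NoSchedule taint whose key is not in the NonblockingTaints set, even though it reports Ready=true. A minimal sketch of that predicate, assuming only the k8s.io/api/core/v1 types (isNodeSchedulable and the nonblocking map are illustrative names, not the framework's actual helpers):

package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
)

// isNodeSchedulable mirrors the check the poll loop reports on: the node
// must not be marked Unschedulable and must carry no NoSchedule taint
// outside the configured nonblocking set.
func isNodeSchedulable(node *v1.Node, nonblocking map[string]bool) bool {
	if node.Spec.Unschedulable {
		return false
	}
	for _, t := range node.Spec.Taints {
		if t.Effect == v1.TaintEffectNoSchedule && !nonblocking[t.Key] {
			return false
		}
	}
	return true
}

func main() {
	// The taint reported for tmp-node-e2e-d8aaa33e-cos-73-11647-293-0 above.
	node := &v1.Node{Spec: v1.NodeSpec{Taints: []v1.Taint{{
		Key:    "node.kubernetes.io/not-ready",
		Effect: v1.TaintEffectNoSchedule,
	}}}}
	nonblocking := map[string]bool{"node-role.kubernetes.io/master": true}
	fmt.Println(isNodeSchedulable(node, nonblocking)) // false
}

With the node above, node.kubernetes.io/not-ready is not in the nonblocking set ({node-role.kubernetes.io/master}), so the predicate stays false for the entire 10m0s wait.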
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=E2eNode\sSuite\s\[sig\-network\]\sNetworking\sGranular\sChecks\:\sPods\sshould\sfunction\sfor\snode\-pod\scommunication\:\sudp\s\[LinuxOnly\]\s\[NodeConformance\]\s\[Conformance\]$'
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 Unexpected error: <*errors.errorString | 0xc000218d80>: { s: "timed out waiting for the condition", } timed out waiting for the condition occurred /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:635 (from junit_cos-stable_07.xml)
[BeforeEach] [sig-network] Networking /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client STEP: Building a namespace api object, basename pod-network-test Sep 16 10:01:23.512: INFO: Skipping waiting for service account [It] should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 STEP: Performing setup for networking test in namespace pod-network-test-8768 STEP: creating a selector STEP: Creating the service pods in kubernetes Sep 16 10:01:23.512: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Sep 16 10:01:23.517 to 10:11:23.520: INFO: Unschedulable nodes (the same entry, repeated at every 30-second poll until the 10m0s timeout): -> tmp-node-e2e-d8aaa33e-cos-73-11647-293-0 Ready=true Network=false Taints=[{node.kubernetes.io/not-ready NoSchedule <nil>}] NonblockingTaints:node-role.kubernetes.io/master ================================
[AfterEach] [sig-network] Networking /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 STEP: Collecting events from namespace "pod-network-test-8768". STEP: Found 0 events. Sep 16 10:11:23.523: INFO: POD NODE PHASE GRACE CONDITIONS Sep 16 10:11:23.523: INFO: Sep 16 10:11:23.526: INFO: Logging node info for node tmp-node-e2e-d8aaa33e-cos-73-11647-293-0 Sep 16 10:11:23.528: INFO: Node Info: [identical to the dump of this node above, except resourceVersion 3318 and condition heartbeat times 2019-09-16 10:11:08 +0000 UTC; the node.kubernetes.io/not-ready NoSchedule taint is still present while Ready=True] Sep 16 10:11:23.528: INFO: Logging kubelet events for node tmp-node-e2e-d8aaa33e-cos-73-11647-293-0 Sep 16 10:11:23.529: INFO: Logging pods the kubelet thinks is on node tmp-node-e2e-d8aaa33e-cos-73-11647-293-0 Sep 16 10:11:23.532: INFO: privileged-pod started at 2019-09-16 10:11:04 +0000 UTC (0+2 container statuses recorded) Sep 16 10:11:23.532: INFO: Container not-privileged-container ready: true, restart count 0 Sep 16 10:11:23.532: INFO: Container privileged-container ready: true, restart count 0 Sep 16 10:11:23.532: INFO: busybox-39a611e4-31d9-4aff-b302-2e2d76ec74b8 started at 2019-09-16 10:08:44 +0000 UTC (0+1 container statuses recorded) Sep 16 10:11:23.532: INFO: Container busybox ready: true, restart count 0 Sep 16 10:11:23.532: INFO: image-pull-test83cf1c31-1e88-4e8a-a975-822cbeca2de9 started at 2019-09-16 10:08:06 +0000 UTC (0+1 container statuses recorded) Sep 16 10:11:23.532: INFO: Container image-pull-test ready: true, restart count 0 W0916 10:11:23.533558 1005 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Sep 16 10:11:23.586: INFO: Latency metrics for node tmp-node-e2e-d8aaa33e-cos-73-11647-293-0 Sep 16 10:11:23.586: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-8768" for this suite. Sep 16 10:11:29.592: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Sep 16 10:11:29.633: INFO: namespace pod-network-test-8768 deletion completed in 6.045248271s
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=E2eNode\sSuite\s\[sig\-network\]\sNetworking\sGranular\sChecks\:\sPods\sshould\sfunction\sfor\snode\-pod\scommunication\:\sudp\s\[LinuxOnly\]\s\[NodeConformance\]\s\[Conformance\]$'
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 Unexpected error: <*errors.errorString | 0xc00021ad80>: { s: "timed out waiting for the condition", } timed out waiting for the condition occurred /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:635 (from junit_ubuntu_08.xml)
[BeforeEach] [sig-network] Networking /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client STEP: Building a namespace api object, basename pod-network-test Sep 16 10:12:04.961: INFO: Skipping waiting for service account [It] should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 STEP: Performing setup for networking test in namespace pod-network-test-6023 STEP: creating a selector STEP: Creating the service pods in kubernetes Sep 16 10:12:04.961: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Sep 16 10:12:04.963 to 10:22:04.965: INFO: Unschedulable nodes (the same entry, repeated at every 30-second poll until the 10m0s timeout): -> tmp-node-e2e-d8aaa33e-ubuntu-gke-1804-d1809-0-v20190913 Ready=true Network=false Taints=[{node.kubernetes.io/not-ready NoSchedule <nil>}] NonblockingTaints:node-role.kubernetes.io/master ================================
[AfterEach] [sig-network] Networking /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 STEP: Collecting events from namespace "pod-network-test-6023". STEP: Found 0 events. Sep 16 10:22:04.967: INFO: POD NODE PHASE GRACE CONDITIONS Sep 16 10:22:04.967: INFO: Sep 16 10:22:04.970: INFO: Logging node info for node tmp-node-e2e-d8aaa33e-ubuntu-gke-1804-d1809-0-v20190913 Sep 16 10:22:04.971: INFO: Node Info: &Node{ObjectMeta:{tmp-node-e2e-d8aaa33e-ubuntu-gke-1804-d1809-0-v20190913 /api/v1/nodes/tmp-node-e2e-d8aaa33e-ubuntu-gke-1804-d1809-0-v20190913 e06930a8-592f-4099-a9d5-edfc75a6bb47 3540 0 2019-09-16 10:00:14 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:tmp-node-e2e-d8aaa33e-ubuntu-gke-1804-d1809-0-v20190913 kubernetes.io/os:linux] map[volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] []},Spec:NodeSpec{PodCIDR:,DoNotUse_ExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{Taint{Key:node.kubernetes.io/not-ready,Value:,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[],},Status:NodeStatus{Capacity:ResourceList{cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{20629221376 0} {<nil>} 20145724Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3872014336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{18566299208 0} {<nil>} 18566299208 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3609870336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2019-09-16 10:21:15 +0000 UTC,LastTransitionTime:2019-09-16 10:00:10 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2019-09-16 10:21:15 +0000 UTC,LastTransitionTime:2019-09-16 10:00:10 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2019-09-16 10:21:15 +0000 UTC,LastTransitionTime:2019-09-16 10:00:10 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2019-09-16 10:21:15 +0000 UTC,LastTransitionTime:2019-09-16 10:00:10 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status.
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.84,},NodeAddress{Type:Hostname,Address:tmp-node-e2e-d8aaa33e-ubuntu-gke-1804-d1809-0-v20190913,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:f51035bcd9db8910e22af6d51a902fac,SystemUUID:F51035BC-D9DB-8910-E22A-F6D51A902FAC,BootID:9e56941a-9d7d-4215-b61f-bae77a3d8412,KernelVersion:4.15.0-1042-gke,OSImage:Ubuntu 18.04.3 LTS,ContainerRuntimeVersion:containerd://1.2.7,KubeletVersion:v1.17.0-alpha.0.1445+4640b4f81ec6bc,KubeProxyVersion:v1.17.0-alpha.0.1445+4640b4f81ec6bc,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[docker.io/library/perl@sha256:978a36abce7dcf726bcdbb3f5b0d69ad3beb0cf688e9348a488f6f6023a027db docker.io/library/perl:5.26],SizeBytes:325130745,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/node-perf/tf-wide-deep-amd64@sha256:80d4564d5ab49ecfea3b20f75cc676d8dfd8b2aca364ed4c1a8a55fbcaaed7f6 gcr.io/kubernetes-e2e-test-images/node-perf/tf-wide-deep-amd64:1.0],SizeBytes:242137147,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/volume/gluster@sha256:e2d3308b2d27499d59f120ff46dfc6c4cb307a3f207f02894ecab902583761c9 gcr.io/kubernetes-e2e-test-images/volume/gluster:1.0],SizeBytes:111775822,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/volume/nfs@sha256:c2ad734346f608a5f7d69cfded93c4e8094069320657bd372d12ba21dea3ea71 gcr.io/kubernetes-e2e-test-images/volume/nfs:1.0],SizeBytes:82348896,},ContainerImage{Names:[docker.io/library/httpd@sha256:6feb0ea7b0967367da66e8d58ba813fde32bdb92f63bfc21a9e170d211539db4 docker.io/library/httpd:2.4.38-alpine],SizeBytes:40762646,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/node-perf/npb-is@sha256:9d08dd99565b25af37c990cd4474a4284b27e7ceb3f98328bb481edefedf8aa5 gcr.io/kubernetes-e2e-test-images/node-perf/npb-is:1.0],SizeBytes:39644608,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/node-perf/npb-ep@sha256:564314549347619cfcdbe6c7d042a29e133a00e922b37682890fff17ac1a7804 gcr.io/kubernetes-e2e-test-images/node-perf/npb-ep:1.0],SizeBytes:39643641,},ContainerImage{Names:[k8s.gcr.io/node-problem-detector@sha256:6e9b4a4eaa47f120be61f60573a545844de63401661812e2cfb7ae81a28efd19 k8s.gcr.io/node-problem-detector:v0.6.2],SizeBytes:33121906,},ContainerImage{Names:[docker.io/google/cadvisor@sha256:815386ebbe9a3490f38785ab11bda34ec8dacf4634af77b8912832d4f85dca04 docker.io/google/cadvisor:latest],SizeBytes:30530401,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/agnhost@sha256:4057a5580c7b59c4fe10d8ab2732c9dec35eea80fd41f7bafc7bd5acc7edf727 gcr.io/kubernetes-e2e-test-images/agnhost:2.6],SizeBytes:18352698,},ContainerImage{Names:[gcr.io/google-containers/debian-base@sha256:6966a0aedd7592c18ff2dd803c08bd85780ee19f5e3a2e7cf908a4cd837afcde gcr.io/google-containers/debian-base:0.4.1],SizeBytes:17748863,},ContainerImage{Names:[docker.io/library/nginx@sha256:a3a0c4126587884f8d3090efca87f5af075d7e7ac8308cffc09a5a082d5f4760 docker.io/library/nginx:1.14-alpine],SizeBytes:6976771,},ContainerImage{Names:[k8s.gcr.io/nvidia-gpu-device-plugin@sha256:4b036e8844920336fa48f36edeb7d4398f426d6a934ba022848deed2edbf09aa],SizeBytes:6819465,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/ipc-utils@sha256:bb127be3a1ecac0516f672a5e223d94fe6021021534ecb7a02a607a63154c3d8 
gcr.io/kubernetes-e2e-test-images/ipc-utils:1.0],SizeBytes:4004104,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nonewprivs@sha256:10066e9039219449fe3c81f38fe01928f87914150768ab81b62a468e51fa7411 gcr.io/kubernetes-e2e-test-images/nonewprivs:1.0],SizeBytes:3054649,},ContainerImage{Names:[gcr.io/authenticated-image-pulling/alpine@sha256:7ff177862cb50c602bfe81f805969412e619c054a2bbead977d0c276988aa4a0 gcr.io/authenticated-image-pulling/alpine:3.7],SizeBytes:2110879,},ContainerImage{Names:[docker.io/library/alpine@sha256:92251458088c638061cda8fd8b403b76d661a4dc6b7ee71b6affcf1872557b2b docker.io/library/alpine:3.7],SizeBytes:2109138,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/test-webserver@sha256:7f93d6e32798ff28bc6289254d0c2867fe2c849c8e46edc50f8624734309812e gcr.io/kubernetes-e2e-test-images/test-webserver:1.0],SizeBytes:1791163,},ContainerImage{Names:[k8s.gcr.io/stress:v1],SizeBytes:1558004,},ContainerImage{Names:[docker.io/library/busybox@sha256:e004c2cc521c95383aebb1fb5893719aa7a8eae2e7a71f316a4410784edb00a9 docker.io/library/busybox:1.29],SizeBytes:729986,},ContainerImage{Names:[k8s.gcr.io/busybox@sha256:4bdd623e848417d96127e16037743f0cd8b528c026e9175e22a84f639eca58ff],SizeBytes:676941,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/mounttest@sha256:c0bd6f0755f42af09a68c9a47fb993136588a76b3200ec305796b60d629d85d2 gcr.io/kubernetes-e2e-test-images/mounttest:1.0],SizeBytes:599341,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/mounttest-user@sha256:17319ca525ee003681fccf7e8c6b1b910ff4f49b653d939ac7f9b6e7c463933d gcr.io/kubernetes-e2e-test-images/mounttest-user:1.0],SizeBytes:539309,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:f78411e19d84a252e53bff71a4407a5686c46983a2c2eeed83929b888179acea k8s.gcr.io/pause:3.1],SizeBytes:317164,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:&NodeConfigStatus{Assigned:nil,Active:nil,LastKnownGood:nil,Error:,},},} Sep 16 10:22:04.971: INFO: Logging kubelet events for node tmp-node-e2e-d8aaa33e-ubuntu-gke-1804-d1809-0-v20190913 Sep 16 10:22:04.972: INFO: Logging pods the kubelet thinks is on node tmp-node-e2e-d8aaa33e-ubuntu-gke-1804-d1809-0-v20190913 W0916 10:22:04.976939 2851 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Sep 16 10:22:04.996: INFO: Latency metrics for node tmp-node-e2e-d8aaa33e-ubuntu-gke-1804-d1809-0-v20190913 Sep 16 10:22:04.996: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-6023" for this suite. Sep 16 10:22:11.012: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Sep 16 10:22:11.096: INFO: namespace pod-network-test-6023 deletion completed in 6.098608429s
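Both node dumps show the same state roughly twenty minutes after boot: Ready=True in .status.conditions, yet node.kubernetes.io/not-ready still sitting in .spec.taints. When reproducing a run like this, the taint can be read straight off the node object. A minimal client-go sketch (assumes a recent client-go with the context-aware Get signature and a kubeconfig at the default path; this is a debugging aid, not part of the test suite):

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load the default kubeconfig (~/.kube/config); adjust as needed.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Node name taken from the failing run above; substitute your own.
	node, err := cs.CoreV1().Nodes().Get(context.TODO(),
		"tmp-node-e2e-d8aaa33e-ubuntu-gke-1804-d1809-0-v20190913", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	// Print each taint as key=value:effect, e.g.
	// node.kubernetes.io/not-ready=:NoSchedule for the runs above.
	for _, t := range node.Spec.Taints {
		fmt.Printf("%s=%s:%s\n", t.Key, t.Value, t.Effect)
	}
}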
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=E2eNode\sSuite\s\[sig\-network\]\sNetworking\sGranular\sChecks\:\sPods\sshould\sfunction\sfor\snode\-pod\scommunication\:\sudp\s\[LinuxOnly\]\s\[NodeConformance\]\s\[Conformance\]$'
/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 Unexpected error: <*errors.errorString | 0xc00021ad80>: { s: "timed out waiting for the condition", } timed out waiting for the condition occurred /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/networking_utils.go:635 (from junit_ubuntu_08.xml)
[BeforeEach] [sig-network] Networking /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client STEP: Building a namespace api object, basename pod-network-test Sep 16 10:01:58.811: INFO: Skipping waiting for service account [It] should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 STEP: Performing setup for networking test in namespace pod-network-test-3013 STEP: creating a selector STEP: Creating the service pods in kubernetes Sep 16 10:01:58.811: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Sep 16 10:01:58.813 to 10:11:58.815: INFO: Unschedulable nodes (the same entry, repeated at every 30-second poll until the 10m0s timeout): -> tmp-node-e2e-d8aaa33e-ubuntu-gke-1804-d1809-0-v20190913 Ready=true Network=false Taints=[{node.kubernetes.io/not-ready NoSchedule <nil>}] NonblockingTaints:node-role.kubernetes.io/master ================================
[AfterEach] [sig-network] Networking /go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 STEP: Collecting events from namespace "pod-network-test-3013". STEP: Found 0 events. Sep 16 10:11:58.819: INFO: POD NODE PHASE GRACE CONDITIONS Sep 16 10:11:58.819: INFO: Sep 16 10:11:58.821: INFO: Logging node info for node tmp-node-e2e-d8aaa33e-ubuntu-gke-1804-d1809-0-v20190913 Sep 16 10:11:58.823: INFO: Node Info: [identical to the dump of this node above, except resourceVersion 3136 and condition heartbeat times 2019-09-16 10:11:14 +0000 UTC; the node.kubernetes.io/not-ready NoSchedule taint is still present while Ready=True] Sep 16 10:11:58.824: INFO: Logging kubelet events for node tmp-node-e2e-d8aaa33e-ubuntu-gke-1804-d1809-0-v20190913 Sep 16 10:11:58.825: INFO: Logging pods the kubelet thinks is on node tmp-node-e2e-d8aaa33e-ubuntu-gke-1804-d1809-0-v20190913 Sep 16 10:11:58.827: INFO: pod-update-5e865de1-5502-4975-bad9-cebfca16e81f started at 2019-09-16 10:11:41 +0000 UTC (0+1 container statuses recorded) Sep 16 10:11:58.827: INFO: Container nginx ready: false, restart count 0 Sep 16 10:11:58.827: INFO: liveness-d861d732-5303-45d4-9735-e928603badb5 started at 2019-09-16 10:08:39 +0000 UTC (0+1 container statuses recorded) Sep 16 10:11:58.827: INFO: Container liveness ready: true, restart count 0 W0916 10:11:58.828548 2851 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Sep 16 10:11:58.885: INFO: Latency metrics for node tmp-node-e2e-d8aaa33e-ubuntu-gke-1804-d1809-0-v20190913 Sep 16 10:11:58.885: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-3013" for this suite. Sep 16 10:12:04.891: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Sep 16 10:12:04.951: INFO: namespace pod-network-test-3013 deletion completed in 6.065264089s
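Both images fail the same way, which points at the node lifecycle rather than at the individual tests: the kubelet registers the node carrying the node.kubernetes.io/not-ready NoSchedule taint, and in a full cluster the node lifecycle controller removes that taint once the Ready condition becomes True. A standalone node-e2e run has no controller-manager (note the repeated "Master node is not registered" warnings above), so the taint plausibly never gets cleared and every test that waits for a schedulable node exhausts its 10m0s timeout. This is an inference from the logs, not a confirmed root cause.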
error during go run /go/src/k8s.io/kubernetes/test/e2e_node/runner/remote/run_remote.go --cleanup --logtostderr --vmodule=*=4 --ssh-env=gce --results-dir=/workspace/_artifacts --project=k8s-c8d-pr-node-e2e --zone=us-west1-b --ssh-user=prow --ssh-key=/workspace/.ssh/google_compute_engine --ginkgo-flags=--nodes=8 --focus="\[NodeConformance\]" --skip="\[Flaky\]|\[Slow\]|\[Serial\]" --flakeAttempts=2 --test_args=--container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --container-runtime-process-name=/usr/bin/containerd --container-runtime-pid-file= --kubelet-flags="--cgroups-per-qos=true --cgroup-root=/ --runtime-cgroups=/system.slice/containerd.service" --extra-log="{\"name\": \"containerd.log\", \"journalctl\": [\"-u\", \"containerd\"]}" --test-timeout=1h5m0s --image-config-file=/workspace/test-infra/jobs/e2e_node/containerd/image-config.yaml: exit status 1
from junit_runner.xml
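For reference, a node-e2e run of this shape can usually be reproduced from a kubernetes checkout via the remote runner's make target rather than invoking run_remote.go directly. The variable names below follow the contributor docs for node e2e tests and are a sketch, not the authoritative CI invocation; project, zone and the image config would still need to match the flags above:

$ make test-e2e-node REMOTE=true \
      FOCUS="\[NodeConformance\]" \
      SKIP="\[Flaky\]|\[Slow\]|\[Serial\]" \
      PARALLELISM=8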
Deferred TearDown
DumpClusterLogs
E2eNode Suite [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]
E2eNode Suite [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]
E2eNode Suite [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance]
E2eNode Suite [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance]
E2eNode Suite [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]
E2eNode Suite [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]
E2eNode Suite [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]
E2eNode Suite [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]
E2eNode Suite [k8s.io] Container Runtime Conformance Test container runtime conformance blackbox test when running a container with a new image should be able to pull from private registry with credential provider [NodeConformance]
E2eNode Suite [k8s.io] Container Runtime Conformance Test container runtime conformance blackbox test when running a container with a new image should be able to pull from private registry with credential provider [NodeConformance]
E2eNode Suite [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
E2eNode Suite [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
E2eNode Suite [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
E2eNode Suite [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
E2eNode Suite [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
E2eNode Suite [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
E2eNode Suite [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set [NodeConformance]
E2eNode Suite [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set [NodeConformance]
E2eNode Suite [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
E2eNode Suite [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
E2eNode Suite [k8s.io] Container Runtime blackbox test when running a container with a new image should be able to pull from private registry with secret [NodeConformance]
E2eNode Suite [k8s.io] Container Runtime blackbox test when running a container with a new image should be able to pull from private registry with secret [NodeConformance]
E2eNode Suite [k8s.io] Container Runtime blackbox test when running a container with a new image should be able to pull image from docker hub [NodeConformance]
E2eNode Suite [k8s.io] Container Runtime blackbox test when running a container with a new image should be able to pull image from docker hub [NodeConformance]
E2eNode Suite [k8s.io] Container Runtime blackbox test when running a container with a new image should be able to pull image from gcr.io [NodeConformance]
E2eNode Suite [k8s.io] Container Runtime blackbox test when running a container with a new image should be able to pull image from gcr.io [NodeConformance]
E2eNode Suite [k8s.io] Container Runtime blackbox test when running a container with a new image should not be able to pull from private registry without secret [NodeConformance]
E2eNode Suite [k8s.io] Container Runtime blackbox test when running a container with a new image should not be able to pull from private registry without secret [NodeConformance]
E2eNode Suite [k8s.io] Container Runtime blackbox test when running a container with a new image should not be able to pull image from invalid registry [NodeConformance]
E2eNode Suite [k8s.io] Container Runtime blackbox test when running a container with a new image should not be able to pull image from invalid registry [NodeConformance]
E2eNode Suite [k8s.io] Container Runtime blackbox test when running a container with a new image should not be able to pull non-existing image from gcr.io [NodeConformance]
E2eNode Suite [k8s.io] Container Runtime blackbox test when running a container with a new image should not be able to pull non-existing image from gcr.io [NodeConformance]
E2eNode Suite [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance]
E2eNode Suite [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance]
E2eNode Suite [k8s.io] ContainerLogPath [NodeConformance] Pod with a container printed log to stdout should print log to correct cri log path
E2eNode Suite [k8s.io] ContainerLogPath [NodeConformance] Pod with a container printed log to stdout should print log to correct cri log path
E2eNode Suite [k8s.io] ContainerLogPath [NodeConformance] Pod with a container printed log to stdout should print log to correct log path
E2eNode Suite [k8s.io] ContainerLogPath [NodeConformance] Pod with a container printed log to stdout should print log to correct log path
E2eNode Suite [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
E2eNode Suite [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
E2eNode Suite [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
E2eNode Suite [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
E2eNode Suite [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance]
E2eNode Suite [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance]
E2eNode Suite [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance]
E2eNode Suite [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance]
E2eNode Suite [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance]
E2eNode Suite [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance]
E2eNode Suite [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance]
E2eNode Suite [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance]
E2eNode Suite [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
E2eNode Suite [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
E2eNode Suite [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance]
E2eNode Suite [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance]
E2eNode Suite [k8s.io] Kubelet Cgroup Manager Pod containers [NodeConformance] On scheduling a BestEffort Pod Pod containers should have been created under the BestEffort cgroup
E2eNode Suite [k8s.io] Kubelet Cgroup Manager Pod containers [NodeConformance] On scheduling a BestEffort Pod Pod containers should have been created under the BestEffort cgroup
E2eNode Suite [k8s.io] Kubelet Cgroup Manager Pod containers [NodeConformance] On scheduling a Burstable Pod Pod containers should have been created under the Burstable cgroup
E2eNode Suite [k8s.io] Kubelet Cgroup Manager Pod containers [NodeConformance] On scheduling a Burstable Pod Pod containers should have been created under the Burstable cgroup
E2eNode Suite [k8s.io] Kubelet Cgroup Manager Pod containers [NodeConformance] On scheduling a Guaranteed Pod Pod containers should have been created under the cgroup-root
E2eNode Suite [k8s.io] Kubelet Cgroup Manager Pod containers [NodeConformance] On scheduling a Guaranteed Pod Pod containers should have been created under the cgroup-root
E2eNode Suite [k8s.io] Kubelet Cgroup Manager QOS containers On enabling QOS cgroup hierarchy Top level QoS containers should have been created [NodeConformance]
E2eNode Suite [k8s.io] Kubelet Cgroup Manager QOS containers On enabling QOS cgroup hierarchy Top level QoS containers should have been created [NodeConformance]
E2eNode Suite [k8s.io] Kubelet Volume Manager Volume Manager On terminatation of pod with memory backed volume should remove the volume from the node [NodeConformance]
E2eNode Suite [k8s.io] Kubelet Volume Manager Volume Manager On terminatation of pod with memory backed volume should remove the volume from the node [NodeConformance]
E2eNode Suite [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
E2eNode Suite [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
E2eNode Suite [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance]
E2eNode Suite [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance]
E2eNode Suite [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance]
E2eNode Suite [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance]
E2eNode Suite [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance]
E2eNode Suite [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance]
E2eNode Suite [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
E2eNode Suite [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
E2eNode Suite [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
E2eNode Suite [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
E2eNode Suite [k8s.io] MirrorPod when create a mirror pod should be recreated when mirror pod forcibly deleted [NodeConformance]
E2eNode Suite [k8s.io] MirrorPod when create a mirror pod should be recreated when mirror pod forcibly deleted [NodeConformance]
E2eNode Suite [k8s.io] MirrorPod when create a mirror pod should be recreated when mirror pod gracefully deleted [NodeConformance]
E2eNode Suite [k8s.io] MirrorPod when create a mirror pod should be recreated when mirror pod gracefully deleted [NodeConformance]
E2eNode Suite [k8s.io] MirrorPod when create a mirror pod should be updated when static pod updated [NodeConformance]
E2eNode Suite [k8s.io] MirrorPod when create a mirror pod should be updated when static pod updated [NodeConformance]
E2eNode Suite [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
E2eNode Suite [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
E2eNode Suite [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance]
E2eNode Suite [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance]
E2eNode Suite [k8s.io] Pods should be updated [NodeConformance] [Conformance]
E2eNode Suite [k8s.io] Pods should be updated [NodeConformance] [Conformance]
E2eNode Suite [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]
E2eNode Suite [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]
E2eNode Suite [k8s.io] Pods should get a host IP [NodeConformance] [Conformance]
E2eNode Suite [k8s.io] Pods should get a host IP [NodeConformance] [Conformance]
E2eNode Suite [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance]
E2eNode Suite [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance]
E2eNode Suite [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
E2eNode Suite [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
E2eNode Suite [k8s.io] PrivilegedPod [NodeConformance] should enable privileged commands [LinuxOnly]
E2eNode Suite [k8s.io] PrivilegedPod [NodeConformance] should enable privileged commands [LinuxOnly]
E2eNode Suite [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
E2eNode Suite [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
E2eNode Suite [k8s.io] Probing container should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
E2eNode Suite [k8s.io] Probing container should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
E2eNode Suite [k8s.io] Probing container should *not* be restarted with a tcp:8080 liveness probe [NodeConformance]
E2eNode Suite [k8s.io] Probing container should *not* be restarted with a tcp:8080 liveness probe [NodeConformance]
E2eNode Suite [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
E2eNode Suite [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
E2eNode Suite [k8s.io] Probing container should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
E2eNode Suite [k8s.io] Probing container should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
E2eNode Suite [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]
E2eNode Suite [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]
E2eNode Suite [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
E2eNode Suite [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
E2eNode Suite [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
E2eNode Suite [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
E2eNode Suite [k8s.io] Security Context When creating a container with runAsUser should run the container with uid 0 [LinuxOnly] [NodeConformance]
E2eNode Suite [k8s.io] Security Context When creating a container with runAsUser should run the container with uid 0 [LinuxOnly] [NodeConformance]
E2eNode Suite [k8s.io] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]
E2eNode Suite [k8s.io] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]
E2eNode Suite [k8s.io] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]
E2eNode Suite [k8s.io] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]
E2eNode Suite [k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with readonly rootfs when readOnlyRootFilesystem=true [LinuxOnly] [NodeConformance]
E2eNode Suite [k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with readonly rootfs when readOnlyRootFilesystem=true [LinuxOnly] [NodeConformance]
E2eNode Suite [k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]
E2eNode Suite [k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]
E2eNode Suite [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should allow privilege escalation when not explicitly set and uid != 0 [LinuxOnly] [NodeConformance]
E2eNode Suite [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should allow privilege escalation when not explicitly set and uid != 0 [LinuxOnly] [NodeConformance]
E2eNode Suite [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should allow privilege escalation when true [LinuxOnly] [NodeConformance]
E2eNode Suite [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should allow privilege escalation when true [LinuxOnly] [NodeConformance]
E2eNode Suite [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]
E2eNode Suite [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]
E2eNode Suite [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance]
E2eNode Suite [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance]
E2eNode Suite [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance]
E2eNode Suite [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance]
E2eNode Suite [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance]
E2eNode Suite [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance]
E2eNode Suite [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance]
E2eNode Suite [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance]
E2eNode Suite [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance]
E2eNode Suite [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance]
E2eNode Suite [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance]
E2eNode Suite [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance]
E2eNode Suite [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance]
E2eNode Suite [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance]
E2eNode Suite [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
E2eNode Suite [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
E2eNode Suite [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
E2eNode Suite [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
E2eNode Suite [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance]
E2eNode Suite [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance]
E2eNode Suite [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance]
E2eNode Suite [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance]
E2eNode Suite [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
E2eNode Suite [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] ConfigMap should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] ConfigMap should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] HostPath should support r/w [NodeConformance]
E2eNode Suite [sig-storage] HostPath should support r/w [NodeConformance]
E2eNode Suite [sig-storage] HostPath should support subPath [NodeConformance]
E2eNode Suite [sig-storage] HostPath should support subPath [NodeConformance]
E2eNode Suite [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
E2eNode Suite [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
E2eNode Suite [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] Projected configMap should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] Projected configMap should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] Projected secret should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance]
E2eNode Suite [sig-storage] Projected secret should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance]
E2eNode Suite [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
E2eNode Suite [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
TearDown
TearDown Previous
Timeout
Up
test setup
E2eNode Suite [k8s.io] AppArmor [Feature:AppArmor][NodeFeature:AppArmor] when running with AppArmor should enforce a permissive profile
E2eNode Suite [k8s.io] AppArmor [Feature:AppArmor][NodeFeature:AppArmor] when running with AppArmor should enforce a permissive profile
E2eNode Suite [k8s.io] AppArmor [Feature:AppArmor][NodeFeature:AppArmor] when running with AppArmor should enforce a profile blocking writes
E2eNode Suite [k8s.io] AppArmor [Feature:AppArmor][NodeFeature:AppArmor] when running with AppArmor should enforce a profile blocking writes
E2eNode Suite [k8s.io] AppArmor [Feature:AppArmor][NodeFeature:AppArmor] when running with AppArmor should reject an unloaded profile
E2eNode Suite [k8s.io] AppArmor [Feature:AppArmor][NodeFeature:AppArmor] when running with AppArmor should reject an unloaded profile
E2eNode Suite [k8s.io] Container Manager Misc [Serial] Validate OOM score adjustments [NodeFeature:OOMScoreAdj] once the node is setup pod infra containers oom-score-adj should be -998 and best effort container's should be 1000
E2eNode Suite [k8s.io] Container Manager Misc [Serial] Validate OOM score adjustments [NodeFeature:OOMScoreAdj] once the node is setup pod infra containers oom-score-adj should be -998 and best effort container's should be 1000
E2eNode Suite [k8s.io] Container Manager Misc [Serial] Validate OOM score adjustments [NodeFeature:OOMScoreAdj] once the node is setup Kubelet's oom-score-adj should be -999
E2eNode Suite [k8s.io] Container Manager Misc [Serial] Validate OOM score adjustments [NodeFeature:OOMScoreAdj] once the node is setup Kubelet's oom-score-adj should be -999
E2eNode Suite [k8s.io] Container Manager Misc [Serial] Validate OOM score adjustments [NodeFeature:OOMScoreAdj] once the node is setup burstable container's oom-score-adj should be between [2, 1000)
E2eNode Suite [k8s.io] Container Manager Misc [Serial] Validate OOM score adjustments [NodeFeature:OOMScoreAdj] once the node is setup burstable container's oom-score-adj should be between [2, 1000)
E2eNode Suite [k8s.io] Container Manager Misc [Serial] Validate OOM score adjustments [NodeFeature:OOMScoreAdj] once the node is setup container runtime's oom-score-adj should be -999
E2eNode Suite [k8s.io] Container Manager Misc [Serial] Validate OOM score adjustments [NodeFeature:OOMScoreAdj] once the node is setup container runtime's oom-score-adj should be -999
E2eNode Suite [k8s.io] Container Manager Misc [Serial] Validate OOM score adjustments [NodeFeature:OOMScoreAdj] once the node is setup guaranteed container's oom-score-adj should be -998
E2eNode Suite [k8s.io] Container Manager Misc [Serial] Validate OOM score adjustments [NodeFeature:OOMScoreAdj] once the node is setup guaranteed container's oom-score-adj should be -998
E2eNode Suite [k8s.io] ContainerLogRotation [Slow] [Serial] [Disruptive] when a container generates a lot of log should be rotated and limited to a fixed amount of files
E2eNode Suite [k8s.io] ContainerLogRotation [Slow] [Serial] [Disruptive] when a container generates a lot of log should be rotated and limited to a fixed amount of files
E2eNode Suite [k8s.io] CriticalPod [Serial] [Disruptive] [NodeFeature:CriticalPod] when we need to admit a critical pod should be able to create and delete a critical pod
E2eNode Suite [k8s.io] CriticalPod [Serial] [Disruptive] [NodeFeature:CriticalPod] when we need to admit a critical pod should be able to create and delete a critical pod
E2eNode Suite [k8s.io] Density [Serial] [Slow] create a batch of pods latency/resource should be within limit when create 10 pods with 0s interval
E2eNode Suite [k8s.io] Density [Serial] [Slow] create a batch of pods latency/resource should be within limit when create 10 pods with 0s interval
E2eNode Suite [k8s.io] Density [Serial] [Slow] create a batch of pods latency/resource should be within limit when create 10 pods with 0s interval [Benchmark][NodeSpecialFeature:Benchmark]
E2eNode Suite [k8s.io] Density [Serial] [Slow] create a batch of pods latency/resource should be within limit when create 10 pods with 0s interval [Benchmark][NodeSpecialFeature:Benchmark]
E2eNode Suite [k8s.io] Density [Serial] [Slow] create a batch of pods latency/resource should be within limit when create 10 pods with 100ms interval [Benchmark][NodeSpecialFeature:Benchmark]
E2eNode Suite [k8s.io] Density [Serial] [Slow] create a batch of pods latency/resource should be within limit when create 10 pods with 100ms interval [Benchmark][NodeSpecialFeature:Benchmark]
E2eNode Suite [k8s.io] Density [Serial] [Slow] create a batch of pods latency/resource should be within limit when create 10 pods with 300ms interval [Benchmark][NodeSpecialFeature:Benchmark]
E2eNode Suite [k8s.io] Density [Serial] [Slow] create a batch of pods latency/resource should be within limit when create 10 pods with 300ms interval [Benchmark][NodeSpecialFeature:Benchmark]
E2eNode Suite [k8s.io] Density [Serial] [Slow] create a batch of pods latency/resource should be within limit when create 105 pods with 0s interval [Benchmark][NodeSpecialFeature:Benchmark]
E2eNode Suite [k8s.io] Density [Serial] [Slow] create a batch of pods latency/resource should be within limit when create 105 pods with 0s interval [Benchmark][NodeSpecialFeature:Benchmark]
E2eNode Suite [k8s.io] Density [Serial] [Slow] create a batch of pods latency/resource should be within limit when create 105 pods with 100ms interval [Benchmark][NodeSpecialFeature:Benchmark]
E2eNode Suite [k8s.io] Density [Serial] [Slow] create a batch of pods latency/resource should be within limit when create 105 pods with 100ms interval [Benchmark][NodeSpecialFeature:Benchmark]
E2eNode Suite [k8s.io] Density [Serial] [Slow] create a batch of pods latency/resource should be within limit when create 105 pods with 300ms interval [Benchmark][NodeSpecialFeature:Benchmark]
E2eNode Suite [k8s.io] Density [Serial] [Slow] create a batch of pods latency/resource should be within limit when create 105 pods with 300ms interval [Benchmark][NodeSpecialFeature:Benchmark]
E2eNode Suite [k8s.io] Density [Serial] [Slow] create a batch of pods latency/resource should be within limit when create 35 pods with 0s interval [Benchmark][NodeSpecialFeature:Benchmark]
E2eNode Suite [k8s.io] Density [Serial] [Slow] create a batch of pods latency/resource should be within limit when create 35 pods with 0s interval [Benchmark][NodeSpecialFeature:Benchmark]
E2eNode Suite [k8s.io] Density [Serial] [Slow] create a batch of pods latency/resource should be within limit when create 35 pods with 100ms interval [Benchmark][NodeSpecialFeature:Benchmark]
E2eNode Suite [k8s.io] Density [Serial] [Slow] create a batch of pods latency/resource should be within limit when create 35 pods with 100ms interval [Benchmark][NodeSpecialFeature:Benchmark]
E2eNode Suite [k8s.io] Density [Serial] [Slow] create a batch of pods latency/resource should be within limit when create 35 pods with 300ms interval [Benchmark][NodeSpecialFeature:Benchmark]
E2eNode Suite [k8s.io] Density [Serial] [Slow] create a batch of pods latency/resource should be within limit when create 35 pods with 300ms interval [Benchmark][NodeSpecialFeature:Benchmark]
E2eNode Suite [k8s.io] Density [Serial] [Slow] create a batch of pods with higher API QPS latency/resource should be within limit when create 105 pods with 0s interval (QPS 60) [Benchmark][NodeSpecialFeature:Benchmark]
E2eNode Suite [k8s.io] Density [Serial] [Slow] create a batch of pods with higher API QPS latency/resource should be within limit when create 105 pods with 0s interval (QPS 60) [Benchmark][NodeSpecialFeature:Benchmark]
E2eNode Suite [k8s.io] Density [Serial] [Slow] create a batch of pods with higher API QPS latency/resource should be within limit when create 105 pods with 100ms interval (QPS 60) [Benchmark][NodeSpecialFeature:Benchmark]
E2eNode Suite [k8s.io] Density [Serial] [Slow] create a batch of pods with higher API QPS latency/resource should be within limit when create 105 pods with 100ms interval (QPS 60) [Benchmark][NodeSpecialFeature:Benchmark]
E2eNode Suite [k8s.io] Density [Serial] [Slow] create a batch of pods with higher API QPS latency/resource should be within limit when create 105 pods with 300ms interval (QPS 60) [Benchmark][NodeSpecialFeature:Benchmark]
E2eNode Suite [k8s.io] Density [Serial] [Slow] create a batch of pods with higher API QPS latency/resource should be within limit when create 105 pods with 300ms interval (QPS 60) [Benchmark][NodeSpecialFeature:Benchmark]
E2eNode Suite [k8s.io] Density [Serial] [Slow] create a sequence of pods latency/resource should be within limit when create 10 pods with 50 background pods
E2eNode Suite [k8s.io] Density [Serial] [Slow] create a sequence of pods latency/resource should be within limit when create 10 pods with 50 background pods
E2eNode Suite [k8s.io] Density [Serial] [Slow] create a sequence of pods latency/resource should be within limit when create 10 pods with 50 background pods [Benchmark][NodeSpeicalFeature:Benchmark]
E2eNode Suite [k8s.io] Density [Serial] [Slow] create a sequence of pods latency/resource should be within limit when create 10 pods with 50 background pods [Benchmark][NodeSpeicalFeature:Benchmark]
E2eNode Suite [k8s.io] Density [Serial] [Slow] create a sequence of pods latency/resource should be within limit when create 30 pods with 50 background pods [Benchmark][NodeSpeicalFeature:Benchmark]
E2eNode Suite [k8s.io] Density [Serial] [Slow] create a sequence of pods latency/resource should be within limit when create 30 pods with 50 background pods [Benchmark][NodeSpeicalFeature:Benchmark]
E2eNode Suite [k8s.io] Density [Serial] [Slow] create a sequence of pods latency/resource should be within limit when create 50 pods with 50 background pods [Benchmark][NodeSpeicalFeature:Benchmark]
E2eNode Suite [k8s.io] Density [Serial] [Slow] create a sequence of pods latency/resource should be within limit when create 50 pods with 50 background pods [Benchmark][NodeSpeicalFeature:Benchmark]
E2eNode Suite [k8s.io] Device Plugin [Feature:DevicePluginProbe][NodeFeature:DevicePluginProbe][Serial] DevicePlugin Verifies the Kubelet device plugin functionality.
E2eNode Suite [k8s.io] Device Plugin [Feature:DevicePluginProbe][NodeFeature:DevicePluginProbe][Serial] DevicePlugin Verifies the Kubelet device plugin functionality.
E2eNode Suite [k8s.io] Docker features [Feature:Docker][Legacy:Docker] when live-restore is enabled [Serial] [Slow] [Disruptive] containers should not be disrupted when the daemon shuts down and restarts
E2eNode Suite [k8s.io] Docker features [Feature:Docker][Legacy:Docker] when live-restore is enabled [Serial] [Slow] [Disruptive] containers should not be disrupted when the daemon shuts down and restarts
E2eNode Suite [k8s.io] Downward API [Serial] [Disruptive] [NodeFeature:EphemeralStorage] Downward API tests for local ephemeral storage should provide container's limits.ephemeral-storage and requests.ephemeral-storage as env vars
E2eNode Suite [k8s.io] Downward API [Serial] [Disruptive] [NodeFeature:EphemeralStorage] Downward API tests for local ephemeral storage should provide container's limits.ephemeral-storage and requests.ephemeral-storage as env vars
E2eNode Suite [k8s.io] Downward API [Serial] [Disruptive] [NodeFeature:EphemeralStorage] Downward API tests for local ephemeral storage should provide default limits.ephemeral-storage from node allocatable
E2eNode Suite [k8s.io] Downward API [Serial] [Disruptive] [NodeFeature:EphemeralStorage] Downward API tests for local ephemeral storage should provide default limits.ephemeral-storage from node allocatable
E2eNode Suite [k8s.io] GKE system requirements [NodeConformance][Feature:GKEEnv][NodeFeature:GKEEnv] The GCR is accessible
E2eNode Suite [k8s.io] GKE system requirements [NodeConformance][Feature:GKEEnv][NodeFeature:GKEEnv] The GCR is accessible
E2eNode Suite [k8s.io] GKE system requirements [NodeConformance][Feature:GKEEnv][NodeFeature:GKEEnv] The docker configuration validation should pass
E2eNode Suite [k8s.io] GKE system requirements [NodeConformance][Feature:GKEEnv][NodeFeature:GKEEnv] The docker configuration validation should pass
E2eNode Suite [k8s.io] GKE system requirements [NodeConformance][Feature:GKEEnv][NodeFeature:GKEEnv] The docker container network should work
E2eNode Suite [k8s.io] GKE system requirements [NodeConformance][Feature:GKEEnv][NodeFeature:GKEEnv] The docker container network should work
E2eNode Suite [k8s.io] GKE system requirements [NodeConformance][Feature:GKEEnv][NodeFeature:GKEEnv] The docker daemon should support AppArmor and seccomp
E2eNode Suite [k8s.io] GKE system requirements [NodeConformance][Feature:GKEEnv][NodeFeature:GKEEnv] The docker daemon should support AppArmor and seccomp
E2eNode Suite [k8s.io] GKE system requirements [NodeConformance][Feature:GKEEnv][NodeFeature:GKEEnv] The docker storage driver should work
E2eNode Suite [k8s.io] GKE system requirements [NodeConformance][Feature:GKEEnv][NodeFeature:GKEEnv] The docker storage driver should work
E2eNode Suite [k8s.io] GKE system requirements [NodeConformance][Feature:GKEEnv][NodeFeature:GKEEnv] The iptable rules should work (required by kube-proxy)
E2eNode Suite [k8s.io] GKE system requirements [NodeConformance][Feature:GKEEnv][NodeFeature:GKEEnv] The required processes should be running
E2eNode Suite [k8s.io] GarbageCollect [Serial][NodeFeature:GarbageCollect] Garbage Collection Test: Many Pods with Many Restarting Containers Should eventually garbage collect containers when we exceed the number of dead containers per container
E2eNode Suite [k8s.io] GarbageCollect [Serial][NodeFeature:GarbageCollect] Garbage Collection Test: Many Restarting Containers Should eventually garbage collect containers when we exceed the number of dead containers per container
E2eNode Suite [k8s.io] GarbageCollect [Serial][NodeFeature:GarbageCollect] Garbage Collection Test: One Non-restarting Container Should eventually garbage collect containers when we exceed the number of dead containers per container
E2eNode Suite [k8s.io] ImageGCNoEviction [Slow] [Serial] [Disruptive][NodeFeature:Eviction] when we run containers that should cause DiskPressure should eventually evict all of the correct pods
E2eNode Suite [k8s.io] ImageID [NodeFeature: ImageID] should be set to the manifest digest (from RepoDigests) when available
E2eNode Suite [k8s.io] InodeEviction [Slow] [Serial] [Disruptive][NodeFeature:Eviction] when we run containers that should cause DiskPressure should eventually evict all of the correct pods
E2eNode Suite [k8s.io] Lease API should be available
E2eNode Suite [k8s.io] LocalStorageCapacityIsolationEviction [Slow] [Serial] [Disruptive] [Feature:LocalStorageCapacityIsolation][NodeFeature:Eviction] when we run containers that should cause evictions due to pod local storage violations should eventually evict all of the correct pods
E2eNode Suite [k8s.io] LocalStorageCapacityIsolationQuotaMonitoring [Slow] [Serial] [Disruptive] [Feature:LocalStorageCapacityIsolationQuota][NodeFeature:LSCIQuotaMonitoring] when we run containers that should cause use quotas for LSCI monitoring (quotas enabled: false) should eventually evict all of the correct pods
E2eNode Suite [k8s.io] LocalStorageCapacityIsolationQuotaMonitoring [Slow] [Serial] [Disruptive] [Feature:LocalStorageCapacityIsolationQuota][NodeFeature:LSCIQuotaMonitoring] when we run containers that should cause use quotas for LSCI monitoring (quotas enabled: true) should eventually evict all of the correct pods
E2eNode Suite [k8s.io] LocalStorageEviction [Slow] [Serial] [Disruptive][NodeFeature:Eviction] when we run containers that should cause DiskPressure should eventually evict all of the correct pods
E2eNode Suite [k8s.io] LocalStorageSoftEviction [Slow] [Serial] [Disruptive][NodeFeature:Eviction] when we run containers that should cause DiskPressure should eventually evict all of the correct pods
E2eNode Suite [k8s.io] MemoryAllocatableEviction [Slow] [Serial] [Disruptive][NodeFeature:Eviction] when we run containers that should cause MemoryPressure should eventually evict all of the correct pods
E2eNode Suite [k8s.io] NVIDIA GPU Device Plugin [Feature:GPUDevicePlugin][NodeFeature:GPUDevicePlugin][Serial] [Disruptive] DevicePlugin checks that when Kubelet restarts exclusive GPU assignation to pods is kept.
E2eNode Suite [k8s.io] Node Container Manager [Serial] Validate Node Allocatable [NodeFeature:NodeAllocatable] sets up the node and runs the test
E2eNode Suite [k8s.io] NodeLease when the NodeLease feature is enabled the kubelet should create and update a lease in the kube-node-lease namespace
E2eNode Suite [k8s.io] NodeLease when the NodeLease feature is enabled the kubelet should report node status infrequently
E2eNode Suite [k8s.io] NodeProblemDetector [NodeFeature:NodeProblemDetector] [k8s.io] SystemLogMonitor should generate node condition and events for corresponding errors
E2eNode Suite [k8s.io] Pods should cap back-off at MaxContainerBackOff [Slow][NodeConformance]
E2eNode Suite [k8s.io] Pods should have their auto-restart back-off timer reset on image update [Slow][NodeConformance]
E2eNode Suite [k8s.io] Pods should support pod readiness gates [NodeFeature:PodReadinessGate]
E2eNode Suite [k8s.io] PriorityLocalStorageEvictionOrdering [Slow] [Serial] [Disruptive][NodeFeature:Eviction] when we run containers that should cause DiskPressure should eventually evict all of the correct pods
E2eNode Suite [k8s.io] PriorityMemoryEvictionOrdering [Slow] [Serial] [Disruptive][NodeFeature:Eviction] when we run containers that should cause MemoryPressure should eventually evict all of the correct pods
E2eNode Suite [k8s.io] PriorityPidEvictionOrdering [Slow] [Serial] [Disruptive][NodeFeature:Eviction] when we run containers that should cause PIDPressure should eventually evict all of the correct pods
E2eNode Suite [k8s.io] Probing container should *not* be restarted with a non-local redirect http liveness probe
E2eNode Suite [k8s.io] Probing container should be restarted with a docker exec liveness probe with timeout
E2eNode Suite [k8s.io] Probing container should be restarted with a local redirect http liveness probe
E2eNode Suite [k8s.io] ResourceMetricsAPI when querying /resource/metrics should report resource usage through the v1alpha1 resouce metrics api
E2eNode Suite [k8s.io] Restart [Serial] [Slow] [Disruptive] [NodeFeature:ContainerRuntimeRestart] Container Runtime Network should recover from ip leak
E2eNode Suite [k8s.io] Security Context When creating a container with runAsNonRoot should not run with an explicit root user ID [LinuxOnly]
E2eNode Suite [k8s.io] Security Context When creating a container with runAsNonRoot should not run without a specified user ID
E2eNode Suite [k8s.io] Security Context When creating a container with runAsNonRoot should run with an explicit non-root user ID [LinuxOnly]
E2eNode Suite [k8s.io] Security Context When creating a container with runAsNonRoot should run with an image specified user ID
E2eNode Suite [k8s.io] Security Context When creating a pod with privileged should run the container as privileged when true [NodeFeature:HostAccess]
E2eNode Suite [k8s.io] Security Context when creating a pod in the host IPC namespace should not show the shared memory ID in the non-hostIPC containers [NodeFeature:HostAccess]
E2eNode Suite [k8s.io] Security Context when creating a pod in the host IPC namespace should show the shared memory ID in the host IPC containers [NodeFeature:HostAccess]
E2eNode Suite [k8s.io] Security Context when creating a pod in the host PID namespace should not show its pid in the non-hostpid containers [NodeFeature:HostAccess]
E2eNode Suite [k8s.io] Security Context when creating a pod in the host PID namespace should show its pid in the host PID namespace [NodeFeature:HostAccess]
E2eNode Suite [k8s.io] Security Context when creating a pod in the host network namespace should listen on same port in the host network containers [NodeFeature:HostAccess]
E2eNode Suite [k8s.io] Security Context when creating a pod in the host network namespace shouldn't show the same port in the non-hostnetwork containers [NodeFeature:HostAccess]
E2eNode Suite [k8s.io] Security Context when pod PID namespace is configurable [Feature:ShareProcessNamespace][NodeAlphaFeature:ShareProcessNamespace] containers in pods using isolated PID namespaces should all receive PID 1
E2eNode Suite [k8s.io] Security Context when pod PID namespace is configurable [Feature:ShareProcessNamespace][NodeAlphaFeature:ShareProcessNamespace] processes in containers sharing a pod namespace should be able to see each other [Alpha]
E2eNode Suite [k8s.io] StartupProbe [Serial] [Disruptive] [NodeFeature:StartupProbe] when a container has a startup probe should *not* be restarted with a exec "cat /tmp/health" because startup probe delays it [NodeConformance] [Conformance]
E2eNode Suite [k8s.io] StartupProbe [Serial] [Disruptive] [NodeFeature:StartupProbe] when a container has a startup probe should be restarted with a exec "cat /tmp/health" after startup probe succeeds it [NodeConformance] [Conformance]
E2eNode Suite [k8s.io] StartupProbe [Serial] [Disruptive] [NodeFeature:StartupProbe] when a container has a startup probe should be restarted with a exec "cat /tmp/health" because startup probe does not delay it long enough [NodeConformance] [Conformance]
E2eNode Suite [k8s.io] StartupProbe [Serial] [Disruptive] [NodeFeature:StartupProbe] when a container has a startup probe should not be ready until startupProbe succeeds [NodeConformance] [Conformance]
E2eNode Suite [k8s.io] Sysctls [LinuxOnly] [NodeFeature:Sysctls] should not launch unsafe, but not explicitly enabled sysctls on the node
E2eNode Suite [k8s.io] Sysctls [LinuxOnly] [NodeFeature:Sysctls] should reject invalid sysctls
E2eNode Suite [k8s.io] Sysctls [LinuxOnly] [NodeFeature:Sysctls] should support sysctls
E2eNode Suite [k8s.io] Sysctls [LinuxOnly] [NodeFeature:Sysctls] should support unsafe sysctls which are actually whitelisted
E2eNode Suite [k8s.io] SystemNodeCriticalPod [Slow] [Serial] [Disruptive] [NodeFeature:SystemNodeCriticalPod] when create a system-node-critical pod should not be evicted upon DiskPressure
E2eNode Suite [k8s.io] Variable Expansion should allow substituting values in a volume subpath [sig-storage][NodeFeature:VolumeSubpathEnvExpansion]
E2eNode Suite [k8s.io] Variable Expansion should fail substituting values in a volume subpath with absolute path [sig-storage][NodeFeature:VolumeSubpathEnvExpansion][Slow]
E2eNode Suite [k8s.io] Variable Expansion should fail substituting values in a volume subpath with backticks [sig-storage][NodeFeature:VolumeSubpathEnvExpansion][Slow]
E2eNode Suite [k8s.io] Variable Expansion should not change the subpath mount on a container restart if the environment variable changes [sig-storage][NodeFeature:VolumeSubpathEnvExpansion][Slow]
E2eNode Suite [k8s.io] Variable Expansion should succeed in writing subpaths in container [sig-storage][NodeFeature:VolumeSubpathEnvExpansion][Slow]
E2eNode Suite [k8s.io] Variable Expansion should verify that a failing subpath expansion can be modified during the lifecycle of a container [sig-storage][NodeFeature:VolumeSubpathEnvExpansion][Slow]
E2eNode Suite [k8s.io] [Feature:DynamicKubeletConfig][NodeFeature:DynamicKubeletConfig][Serial][Disruptive] delete and recreate ConfigMap: error while ConfigMap is absent: status and events should match expectations
E2eNode Suite [k8s.io] [Feature:DynamicKubeletConfig][NodeFeature:DynamicKubeletConfig][Serial][Disruptive] delete and recreate ConfigMap: state transitions: status and events should match expectations
E2eNode Suite [k8s.io] [Feature:DynamicKubeletConfig][NodeFeature:DynamicKubeletConfig][Serial][Disruptive] update ConfigMap in-place: recover to last-known-good version: status and events should match expectations
E2eNode Suite [k8s.io] [Feature:DynamicKubeletConfig][NodeFeature:DynamicKubeletConfig][Serial][Disruptive] update ConfigMap in-place: state transitions: status and events should match expectations
E2eNode Suite [k8s.io] [Feature:DynamicKubeletConfig][NodeFeature:DynamicKubeletConfig][Serial][Disruptive] update Node.Spec.ConfigSource: 100 update stress test: status and events should match expectations
E2eNode Suite [k8s.io] [Feature:DynamicKubeletConfig][NodeFeature:DynamicKubeletConfig][Serial][Disruptive] update Node.Spec.ConfigSource: non-nil last-known-good to a new non-nil last-known-good status and events should match expectations
E2eNode Suite [k8s.io] [Feature:DynamicKubeletConfig][NodeFeature:DynamicKubeletConfig][Serial][Disruptive] update Node.Spec.ConfigSource: recover to last-known-good ConfigMap.KubeletConfigKey: status and events should match expectations
E2eNode Suite [k8s.io] [Feature:DynamicKubeletConfig][NodeFeature:DynamicKubeletConfig][Serial][Disruptive] update Node.Spec.ConfigSource: recover to last-known-good ConfigMap: status and events should match expectations
E2eNode Suite [k8s.io] [Feature:DynamicKubeletConfig][NodeFeature:DynamicKubeletConfig][Serial][Disruptive] update Node.Spec.ConfigSource: state transitions: status and events should match expectations
E2eNode Suite [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance]
E2eNode Suite [sig-node] CPU Manager [Serial] [Feature:CPUManager][NodeAlphaFeature:CPUManager] With kubeconfig updated with static CPU Manager policy run the CPU Manager tests should assign CPUs as expected based on the Pod spec
E2eNode Suite [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance]
E2eNode Suite [sig-node] ConfigMap should patch ConfigMap successfully
E2eNode Suite [sig-node] Dockershim [Serial] [Disruptive] [Feature:Docker][Legacy:Docker] When all containers in pod are missing should complete pod sandbox clean up based on the information in sandbox checkpoint
E2eNode Suite [sig-node] Dockershim [Serial] [Disruptive] [Feature:Docker][Legacy:Docker] When checkpoint file is corrupted should complete pod sandbox clean up
E2eNode Suite [sig-node] Dockershim [Serial] [Disruptive] [Feature:Docker][Legacy:Docker] When pod sandbox checkpoint is missing should complete pod sandbox clean up
E2eNode Suite [sig-node] Dockershim [Serial] [Disruptive] [Feature:Docker][Legacy:Docker] should clean up pod sandbox checkpoint after pod deletion
E2eNode Suite [sig-node] Dockershim [Serial] [Disruptive] [Feature:Docker][Legacy:Docker] should remove dangling checkpoint file
E2eNode Suite [sig-node] Downward API should provide host IP and pod IP as an env var if pod uses host network [LinuxOnly]
E2eNode Suite [sig-node] HugePages [Serial] [Feature:HugePages][NodeFeature:HugePages] With config updated with hugepages feature enabled should assign hugepages as expected based on the Pod spec
E2eNode Suite [sig-node] Node Performance Testing [Serial] [Slow] [Flaky] Run node performance testing with pre-defined workloads NAS parallel benchmark (NPB) suite - Embarrassingly Parallel (EP) workload
E2eNode Suite [sig-node] Node Performance Testing [Serial] [Slow] [Flaky] Run node performance testing with pre-defined workloads NAS parallel benchmark (NPB) suite - Integer Sort (IS) workload
E2eNode Suite [sig-node] Node Performance Testing [Serial] [Slow] [Flaky] Run node performance testing with pre-defined workloads TensorFlow workload
E2eNode Suite [sig-node] PodPidsLimit [Serial] [Feature:SupportPodPidsLimit][NodeFeature:SupportPodPidsLimit] With config updated with pids feature enabled should set pids.max for Pod
E2eNode Suite [sig-node] Resource-usage [Serial] [Slow] regular resource usage tracking resource tracking for 0 pods per node [Benchmark]
E2eNode Suite [sig-node] Resource-usage [Serial] [Slow] regular resource usage tracking resource tracking for 10 pods per node
E2eNode Suite [sig-node] Resource-usage [Serial] [Slow] regular resource usage tracking resource tracking for 10 pods per node [Benchmark]
E2eNode Suite [sig-node] Resource-usage [Serial] [Slow] regular resource usage tracking resource tracking for 105 pods per node [Benchmark]
E2eNode Suite [sig-node] Resource-usage [Serial] [Slow] regular resource usage tracking resource tracking for 35 pods per node [Benchmark]
E2eNode Suite [sig-node] RuntimeClass should reject a Pod requesting a RuntimeClass with an unconfigured handler
E2eNode Suite [sig-node] RuntimeClass should reject a Pod requesting a deleted RuntimeClass
E2eNode Suite [sig-node] RuntimeClass should reject a Pod requesting a non-existent RuntimeClass
E2eNode Suite [sig-node] RuntimeClass should run a Pod requesting a RuntimeClass with a configured handler [NodeFeature:RuntimeHandler]
E2eNode Suite [sig-storage] ConfigMap Should fail non-optional pod creation due to configMap object does not exist [Slow]
E2eNode Suite [sig-storage] ConfigMap Should fail non-optional pod creation due to the key in the configMap object does not exist [Slow]
E2eNode Suite [sig-storage] ConfigMap should be consumable from pods in volume as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup]
E2eNode Suite [sig-storage] ConfigMap should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeFeature:FSGroup]
E2eNode Suite [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup]
E2eNode Suite [sig-storage] Downward API volume should provide podname as non-root with fsgroup [LinuxOnly] [NodeFeature:FSGroup]
E2eNode Suite [sig-storage] Downward API volume should provide podname as non-root with fsgroup and defaultMode [LinuxOnly] [NodeFeature:FSGroup]
E2eNode Suite [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance]
E2eNode Suite [sig-storage] EmptyDir volumes when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] files with FSGroup ownership should support (root,0644,tmpfs)
E2eNode Suite [sig-storage] EmptyDir volumes when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] new files should be created with FSGroup ownership when container is non-root
E2eNode Suite [sig-storage] EmptyDir volumes when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] new files should be created with FSGroup ownership when container is root
E2eNode Suite [sig-storage] EmptyDir volumes when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] nonexistent volume subPath should have the correct mode and owner using FSGroup
E2eNode Suite [sig-storage] EmptyDir volumes when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] volume on default medium should have the correct mode using FSGroup
E2eNode Suite [sig-storage] EmptyDir volumes when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] volume on tmpfs should have the correct mode using FSGroup
E2eNode Suite [sig-storage] GCP Volumes GlusterFS should be mountable
E2eNode Suite [sig-storage] GCP Volumes NFSv3 should be mountable for NFSv3
E2eNode Suite [sig-storage] GCP Volumes NFSv4 should be mountable for NFSv4
E2eNode Suite [sig-storage] Projected configMap Should fail non-optional pod creation due to configMap object does not exist [Slow]
E2eNode Suite [sig-storage] Projected configMap Should fail non-optional pod creation due to the key in the configMap object does not exist [Slow]
E2eNode Suite [sig-storage] Projected configMap should be consumable from pods in volume as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup]
E2eNode Suite [sig-storage] Projected configMap should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeFeature:FSGroup]
E2eNode Suite [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup]
E2eNode Suite [sig-storage] Projected downwardAPI should provide podname as non-root with fsgroup [LinuxOnly] [NodeFeature:FSGroup]
E2eNode Suite [sig-storage] Projected downwardAPI should provide podname as non-root with fsgroup and defaultMode [LinuxOnly] [NodeFeature:FSGroup]
E2eNode Suite [sig-storage] Projected secret Should fail non-optional pod creation due to secret object does not exist [Slow]
E2eNode Suite [sig-storage] Projected secret Should fail non-optional pod creation due to the key in the secret object does not exist [Slow]
E2eNode Suite [sig-storage] Secrets Should fail non-optional pod creation due to secret object does not exist [Slow]
E2eNode Suite [sig-storage] Secrets Should fail non-optional pod creation due to the key in the secret object does not exist [Slow]
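The bracketed tags in the spec names above ([Slow], [Serial], [Disruptive], [Feature:...], [NodeFeature:...]) are plain text inside the Ginkgo spec string, and Ginkgo matches its --ginkgo.focus and --ginkgo.skip options as regular expressions against that full string; this is how CI jobs include or exclude whole classes of tests with a single pattern. A minimal sketch of that matching in Go follows — the skip pattern here is hypothetical, chosen for illustration, not this job's actual configuration:

package main

import (
	"fmt"
	"regexp"
)

func main() {
	// Hypothetical skip pattern: exclude any spec carrying one of these tags,
	// the way a conformance-style job typically filters serial/slow tests.
	skip := regexp.MustCompile(`\[(Slow|Serial|Disruptive|Flaky|Benchmark)\]`)

	// One spec name taken verbatim from the listing above.
	spec := "E2eNode Suite [k8s.io] InodeEviction [Slow] [Serial] [Disruptive][NodeFeature:Eviction] when we run containers that should cause DiskPressure should eventually evict all of the correct pods"

	// Prints true: the [Slow] tag alone is enough for this spec to be skipped.
	fmt.Println(skip.MatchString(spec))
}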