Result | FAILURE |
Tests | 9 failed / 634 succeeded |
Started | |
Elapsed | 1h15m |
Builder | gke-prow-ssd-pool-1a225945-p1cl |
pod | 5bf01467-0695-11ea-b5c7-e216f5392f03 |
resultstore | https://source.cloud.google.com/results/invocations/185245c8-540e-459c-834f-5e91a934f73c/targets/test |
infra-commit | 9e15c062b |
job-version | v1.16.4-beta.0.1+d70a3ca08fe72a-dirty |
repo | k8s.io/kubernetes |
repo-commit | d70a3ca08fe72ad8dd0b2d72cf032474ab2ce2a9 |
repos | k8s.io/kubernetes: release-1.16, sigs.k8s.io/cloud-provider-azure: master |
revision | v1.16.4-beta.0.1+d70a3ca08fe72a-dirty |
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[k8s\.io\]\sKubelet\swhen\sscheduling\sa\sbusybox\scommand\sthat\salways\sfails\sin\sa\spod\sshould\shave\san\sterminated\sreason\s\[NodeConformance\]\s\[Conformance\]$'
test/e2e/framework/framework.go:698
Nov 14 04:56:25.085: Timed out after 60.000s.
Expected
    <*errors.errorString | 0xc002169690>: {
        s: "expected state to be terminated. Got pod status: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-11-14 04:55:25 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-11-14 04:55:25 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [bin-falsec4788a1b-c4f8-453e-9f17-2cf69caa89a8]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-11-14 04:55:25 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [bin-falsec4788a1b-c4f8-453e-9f17-2cf69caa89a8]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-11-14 04:55:25 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:10.248.0.5 PodIP: PodIPs:[] StartTime:2019-11-14 04:55:25 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:bin-falsec4788a1b-c4f8-453e-9f17-2cf69caa89a8 State:{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:docker.io/library/busybox:1.29 ImageID: ContainerID: Started:0xc00213934a}] QOSClass:BestEffort EphemeralContainerStatuses:[]}",
    }
to be nil
test/e2e/common/kubelet.go:123
from junit_29.xml
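The "Expected ... to be nil" shape above is Gomega's Eventually output: the test polls the pod until its container reports a Terminated state (with a non-empty reason), giving up after 60s. Here the pod never left ContainerCreating; the events below show the busybox:1.29 pull alone ran from 04:55:34 to 04:56:13, and the container was only created at 04:56:18, about 7s before the deadline. The following is a minimal standalone sketch of the same check, not the e2e framework's actual code at kubelet.go:123; it assumes the pre-1.17, context-free client-go signatures matching this job's release-1.16 branch, and the kubeconfig path is a placeholder:

    // Sketch only: approximates the terminated-state poll behind the failure above.
    package main

    import (
    	"fmt"
    	"time"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/apimachinery/pkg/util/wait"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	// Placeholder path; the job used kubeconfig.westus2.json (see the log below).
    	config, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
    	if err != nil {
    		panic(err)
    	}
    	client := kubernetes.NewForConfigOrDie(config)

    	// Namespace and pod name taken from the failure above.
    	ns := "kubelet-test-1478"
    	name := "bin-falsec4788a1b-c4f8-453e-9f17-2cf69caa89a8"

    	// Poll once a second for the same 60s window the test timed out on.
    	err = wait.PollImmediate(time.Second, 60*time.Second, func() (bool, error) {
    		pod, err := client.CoreV1().Pods(ns).Get(name, metav1.GetOptions{})
    		if err != nil {
    			return false, err
    		}
    		if len(pod.Status.ContainerStatuses) == 0 {
    			return false, nil // status not reported yet
    		}
    		for _, cs := range pod.Status.ContainerStatuses {
    			// A container running /bin/false should end Terminated with a
    			// non-empty Reason (e.g. "Error"); in this run it was still
    			// Waiting with Reason:ContainerCreating when time ran out.
    			if cs.State.Terminated == nil || cs.State.Terminated.Reason == "" {
    				return false, nil
    			}
    		}
    		return true, nil
    	})
    	if err != nil {
    		fmt.Printf("pod never reported a terminated reason: %v\n", err)
    	}
    }

With the pull timings recorded in the events below, a poll like this exhausts its 60s budget before the container ever starts, which matches the FAIL line in the full log that follows.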
[BeforeEach] [k8s.io] Kubelet
  test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Nov 14 04:55:24.574: INFO: >>> kubeConfig: /workspace/aks287781815/kubeconfig/kubeconfig.westus2.json
STEP: Building a namespace api object, basename kubelet-test
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in kubelet-test-1478
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  test/e2e/common/kubelet.go:37
[BeforeEach] when scheduling a busybox command that always fails in a pod
  test/e2e/common/kubelet.go:81
[It] should have an terminated reason [NodeConformance] [Conformance]
  test/e2e/framework/framework.go:698
Nov 14 04:56:25.085: FAIL: Timed out after 60.000s.
Expected
    <*errors.errorString | 0xc002169690>: {
        s: "expected state to be terminated. Got pod status: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-11-14 04:55:25 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-11-14 04:55:25 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [bin-falsec4788a1b-c4f8-453e-9f17-2cf69caa89a8]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-11-14 04:55:25 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [bin-falsec4788a1b-c4f8-453e-9f17-2cf69caa89a8]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-11-14 04:55:25 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:10.248.0.5 PodIP: PodIPs:[] StartTime:2019-11-14 04:55:25 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:bin-falsec4788a1b-c4f8-453e-9f17-2cf69caa89a8 State:{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:docker.io/library/busybox:1.29 ImageID: ContainerID: Started:0xc00213934a}] QOSClass:BestEffort EphemeralContainerStatuses:[]}",
    }
to be nil
[AfterEach] [k8s.io] Kubelet
  test/e2e/framework/framework.go:152
STEP: Collecting events from namespace "kubelet-test-1478".
STEP: Found 4 events.
Nov 14 04:56:25.141: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for bin-falsec4788a1b-c4f8-453e-9f17-2cf69caa89a8: {default-scheduler } Scheduled: Successfully assigned kubelet-test-1478/bin-falsec4788a1b-c4f8-453e-9f17-2cf69caa89a8 to k8s-agentpool-23171212-vmss000001
Nov 14 04:56:25.141: INFO: At 2019-11-14 04:55:34 +0000 UTC - event for bin-falsec4788a1b-c4f8-453e-9f17-2cf69caa89a8: {kubelet k8s-agentpool-23171212-vmss000001} Pulling: Pulling image "docker.io/library/busybox:1.29"
Nov 14 04:56:25.141: INFO: At 2019-11-14 04:56:13 +0000 UTC - event for bin-falsec4788a1b-c4f8-453e-9f17-2cf69caa89a8: {kubelet k8s-agentpool-23171212-vmss000001} Pulled: Successfully pulled image "docker.io/library/busybox:1.29"
Nov 14 04:56:25.141: INFO: At 2019-11-14 04:56:18 +0000 UTC - event for bin-falsec4788a1b-c4f8-453e-9f17-2cf69caa89a8: {kubelet k8s-agentpool-23171212-vmss000001} Created: Created container bin-falsec4788a1b-c4f8-453e-9f17-2cf69caa89a8
Nov 14 04:56:25.197: INFO: POD NODE PHASE GRACE CONDITIONS
Nov 14 04:56:25.197: INFO: bin-falsec4788a1b-c4f8-453e-9f17-2cf69caa89a8 k8s-agentpool-23171212-vmss000001 Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-11-14 04:55:25 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-11-14 04:55:25 +0000 UTC ContainersNotReady containers with unready status: [bin-falsec4788a1b-c4f8-453e-9f17-2cf69caa89a8]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-11-14 04:55:25 +0000 UTC ContainersNotReady containers with unready status: [bin-falsec4788a1b-c4f8-453e-9f17-2cf69caa89a8]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-11-14 04:55:25 +0000 UTC }]
Nov 14 04:56:25.197: INFO:
Nov 14 04:56:25.360: INFO: Logging node info for node k8s-agentpool-23171212-vmss000000
Nov 14 04:56:25.415: INFO: Node Info: &Node{ObjectMeta:{k8s-agentpool-23171212-vmss000000 /api/v1/nodes/k8s-agentpool-23171212-vmss000000 0f3bbebc-9d46-4ddd-a1dc-c93db8b52883 31544 0 2019-11-14 04:40:04 +0000 UTC <nil> <nil> map[agentpool:agentpool beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:Standard_D4s_v3 beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:westus2 failure-domain.beta.kubernetes.io/zone:westus2-1 kubernetes.azure.com/cluster:kubetest-9c63b39e-0695-11ea-a4cc-c60aac250e75 kubernetes.azure.com/role:agent kubernetes.io/arch:amd64 kubernetes.io/hostname:k8s-agentpool-23171212-vmss000000 kubernetes.io/os:linux kubernetes.io/role:agent node-role.kubernetes.io/agent: storageprofile:managed storagetier:Premium_LRS] map[csi.volume.kubernetes.io/nodeid:{"csi-hostpath-provisioning-2202":"k8s-agentpool-23171212-vmss000000","csi-hostpath-provisioning-8364":"k8s-agentpool-23171212-vmss000000","csi-hostpath-volume-8403":"k8s-agentpool-23171212-vmss000000","csi-hostpath-volume-expand-1206":"k8s-agentpool-23171212-vmss000000","csi-hostpath-volume-expand-2585":"k8s-agentpool-23171212-vmss000000","csi-hostpath-volume-expand-5498":"k8s-agentpool-23171212-vmss000000","csi-hostpath-volume-expand-6633":"k8s-agentpool-23171212-vmss000000","csi-mock-csi-mock-volumes-4558":"csi-mock-csi-mock-volumes-4558","csi-mock-csi-mock-volumes-6397":"csi-mock-csi-mock-volumes-6397","csi-mock-csi-mock-volumes-7486":"csi-mock-csi-mock-volumes-7486","csi-mock-csi-mock-volumes-7581":"csi-mock-csi-mock-volumes-7581","csi-mock-csi-mock-volumes-8512":"csi-mock-csi-mock-volumes-8512","csi-mock-csi-mock-volumes-9601":"csi-mock-csi-mock-volumes-9601"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []
[]},Spec:NodeSpec{PodCIDR:10.244.4.0/24,DoNotUse_ExternalID:,ProviderID:azure:///subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-9c63b39e-0695-11ea-a4cc-c60aac250e75/providers/Microsoft.Compute/virtualMachineScaleSets/k8s-agentpool-23171212-vmss/virtualMachines/0,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.4.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{4 0} {<nil>} 4 DecimalSI},ephemeral-storage: {{31036776448 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{16797569024 0} {<nil>} 16403876Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{4 0} {<nil>} 4 DecimalSI},ephemeral-storage: {{27933098757 0} {<nil>} 27933098757 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{16011137024 0} {<nil>} 15635876Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2019-11-14 04:40:48 +0000 UTC,LastTransitionTime:2019-11-14 04:40:48 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2019-11-14 04:56:18 +0000 UTC,LastTransitionTime:2019-11-14 04:39:50 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2019-11-14 04:56:18 +0000 UTC,LastTransitionTime:2019-11-14 04:39:50 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2019-11-14 04:56:18 +0000 UTC,LastTransitionTime:2019-11-14 04:39:50 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2019-11-14 04:56:18 +0000 UTC,LastTransitionTime:2019-11-14 04:40:22 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.248.0.4,},NodeAddress{Type:Hostname,Address:k8s-agentpool-23171212-vmss000000,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:359d6aea81114a07a8070169aad06c4a,SystemUUID:A77EC1C1-102D-514B-A3FC-E5E916EF17BD,BootID:fc99ebb5-9bcd-41e5-aad2-849e47da2eea,KernelVersion:4.15.0-1063-azure,OSImage:Ubuntu 16.04.6 LTS,ContainerRuntimeVersion:docker://3.0.6,KubeletVersion:v1.16.4-beta.0.1+d70a3ca08fe72a,KubeProxyVersion:v1.16.4-beta.0.1+d70a3ca08fe72a,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8sprow.azurecr.io/hyperkube-amd64@sha256:4c04f9ab0fa34bcbcb8ebfbced912f9b998c5d9c090fafdca92911d124fa339b k8sprow.azurecr.io/hyperkube-amd64:azure-e2e-1194831241233305600-197629b6],SizeBytes:604811790,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/volume/nfs@sha256:c2ad734346f608a5f7d69cfded93c4e8094069320657bd372d12ba21dea3ea71 gcr.io/kubernetes-e2e-test-images/volume/nfs:1.0],SizeBytes:225358913,},ContainerImage{Names:[httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 httpd:2.4.38-alpine],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/kubernetes-dashboard-amd64@sha256:0ae6b69432e78069c5ce2bcde0fe409c5c4d6f0f4d9cd50a17974fea38898747 k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.1],SizeBytes:121711221,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/agnhost@sha256:4057a5580c7b59c4fe10d8ab2732c9dec35eea80fd41f7bafc7bd5acc7edf727 gcr.io/kubernetes-e2e-test-images/agnhost:2.6],SizeBytes:57345321,},ContainerImage{Names:[quay.io/k8scsi/csi-provisioner@sha256:0efcb424f1dde9b9fb11a1a14f2e48ab47e1c3f08bc3a929990dcfcb1f7ab34f quay.io/k8scsi/csi-provisioner:v1.4.0-rc1],SizeBytes:54431016,},ContainerImage{Names:[quay.io/k8scsi/csi-snapshotter@sha256:e3d3e742e32d00488fdb401045b9b1d033d7ca0ab6e760f77b24750fc95e5f70 quay.io/k8scsi/csi-snapshotter:v2.0.0-rc1],SizeBytes:51703561,},ContainerImage{Names:[k8s.gcr.io/ip-masq-agent-amd64@sha256:269e0fb9d53fd37f7a135d6a55ea265a67279ba218aa148323f015cf70167340 k8s.gcr.io/ip-masq-agent-amd64:v2.3.0],SizeBytes:50144412,},ContainerImage{Names:[quay.io/k8scsi/csi-attacher@sha256:26fccd7a99d973845df1193b46ebdcc6ab8dc5f6e6be319750c471fce1742d13 quay.io/k8scsi/csi-attacher:v1.2.0],SizeBytes:46226754,},ContainerImage{Names:[quay.io/k8scsi/csi-attacher@sha256:0aba670b4d9d6b2e720bbf575d733156c676b693ca26501235444490300db838 quay.io/k8scsi/csi-attacher:v1.1.0],SizeBytes:42839085,},ContainerImage{Names:[quay.io/k8scsi/csi-resizer@sha256:7d46fb6eb8b890dc546029d1565d502b4a1d974d33625c6ee2bc7991b77fc1a1 quay.io/k8scsi/csi-resizer:v0.2.0],SizeBytes:42817100,},ContainerImage{Names:[quay.io/k8scsi/csi-resizer@sha256:f315c9042e56def3c05c6b04fe79ec9da6d39ddc557ca365a76cf35964ea08b6 quay.io/k8scsi/csi-resizer:v0.1.0],SizeBytes:42623056,},ContainerImage{Names:[k8s.gcr.io/coredns@sha256:e83beb5e43f8513fa735e77ffc5859640baea30a882a11cc75c4c3244a737d3c k8s.gcr.io/coredns:1.5.0],SizeBytes:42488424,},ContainerImage{Names:[quay.io/k8scsi/hostpathplugin@sha256:b4826e492fc1762fceaf9726f41575ca0a4567864d3d235da874818de18039de quay.io/k8scsi/hostpathplugin:v1.2.0-rc5],SizeBytes:28761497,},ContainerImage{Names:[mcr.microsoft.com/k8s/flexvolume/keyvault-flexvolume@sha256:4fd30d43947d4a54fc89ead7985beecfd3c9b2a93a0655a373b1608ab90bd5af 
mcr.microsoft.com/k8s/flexvolume/keyvault-flexvolume:v0.0.7],SizeBytes:22909487,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/echoserver@sha256:e9ba514b896cdf559eef8788b66c2c3ee55f3572df617647b4b0d8b6bf81cf19 gcr.io/kubernetes-e2e-test-images/echoserver:2.2],SizeBytes:21692741,},ContainerImage{Names:[quay.io/k8scsi/mock-driver@sha256:e0eed916b7d970bad2b7d9875f9ad16932f987f0f3d91ec5d86da68b0b5cc9d1 quay.io/k8scsi/mock-driver:v2.1.0],SizeBytes:16226335,},ContainerImage{Names:[nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 nginx:1.14-alpine],SizeBytes:16032814,},ContainerImage{Names:[quay.io/k8scsi/csi-node-driver-registrar@sha256:13daf82fb99e951a4bff8ae5fc7c17c3a8fe7130be6400990d8f6076c32d4599 quay.io/k8scsi/csi-node-driver-registrar:v1.1.0],SizeBytes:15815995,},ContainerImage{Names:[quay.io/k8scsi/livenessprobe@sha256:dde617756e0f602adc566ab71fd885f1dad451ad3fb063ac991c95a2ff47aea5 quay.io/k8scsi/livenessprobe:v1.1.0],SizeBytes:14967303,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/dnsutils@sha256:2abeee84efb79c14d731966e034af33bf324d3b26ca28497555511ff094b3ddd gcr.io/kubernetes-e2e-test-images/dnsutils:1.1],SizeBytes:9349974,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nonewprivs@sha256:10066e9039219449fe3c81f38fe01928f87914150768ab81b62a468e51fa7411 gcr.io/kubernetes-e2e-test-images/nonewprivs:1.0],SizeBytes:6757579,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nautilus@sha256:33a732d4c42a266912a5091598a0f07653c9134db4b8d571690d8afd509e0bfc gcr.io/kubernetes-e2e-test-images/nautilus:1.0],SizeBytes:4753501,},ContainerImage{Names:[gcr.io/authenticated-image-pulling/alpine@sha256:7ff177862cb50c602bfe81f805969412e619c054a2bbead977d0c276988aa4a0 gcr.io/authenticated-image-pulling/alpine:3.7],SizeBytes:4206620,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/mounttest@sha256:c0bd6f0755f42af09a68c9a47fb993136588a76b3200ec305796b60d629d85d2 gcr.io/kubernetes-e2e-test-images/mounttest:1.0],SizeBytes:1563521,},ContainerImage{Names:[mcr.microsoft.com/k8s/flexvolume/blobfuse-flexvolume@sha256:23d8c6033f02a1ecad05127ebdc931bb871264228661bc122704b0974e4d9fdd mcr.microsoft.com/k8s/flexvolume/blobfuse-flexvolume:1.0.8],SizeBytes:1159025,},ContainerImage{Names:[busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 busybox:1.29],SizeBytes:1154361,},ContainerImage{Names:[k8s.gcr.io/pause-amd64@sha256:59eec8837a4d942cc19a52b8c09ea75121acc38114a2c68b98983ce9356b8610 k8s.gcr.io/pause@sha256:f78411e19d84a252e53bff71a4407a5686c46983a2c2eeed83929b888179acea k8s.gcr.io/pause-amd64:3.1 k8s.gcr.io/pause:3.1],SizeBytes:742472,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Nov 14 04:56:25.416: INFO: Logging kubelet events for node k8s-agentpool-23171212-vmss000000 Nov 14 04:56:25.474: INFO: Logging pods the kubelet thinks is on node k8s-agentpool-23171212-vmss000000 Nov 14 04:56:25.587: INFO: blobfuse-flexvol-installer-6xhz6 started at 2019-11-14 04:40:49 +0000 UTC (0+1 container statuses recorded) Nov 14 04:56:25.587: INFO: Container blobfuse-flexvol-installer ready: true, restart count 0 Nov 14 04:56:25.587: INFO: csi-hostpath-resizer-0 started at 2019-11-14 04:53:38 +0000 UTC (0+1 container statuses recorded) Nov 14 04:56:25.587: INFO: Container csi-resizer ready: true, restart count 0 Nov 14 04:56:25.587: INFO: coredns-87f5d796-k7mr9 started at 2019-11-14 04:40:27 +0000 UTC (0+1 container statuses recorded) Nov 14 04:56:25.587: INFO: Container coredns ready: true, restart count 
0 Nov 14 04:56:25.587: INFO: csi-hostpath-provisioner-0 started at 2019-11-14 04:50:53 +0000 UTC (0+1 container statuses recorded) Nov 14 04:56:25.587: INFO: Container csi-provisioner ready: true, restart count 0 Nov 14 04:56:25.587: INFO: keyvault-flexvolume-ljqsq started at 2019-11-14 04:40:49 +0000 UTC (0+1 container statuses recorded) Nov 14 04:56:25.587: INFO: Container keyvault-flexvolume ready: true, restart count 0 Nov 14 04:56:25.587: INFO: kubernetes-dashboard-65966766b9-b8ps7 started at 2019-11-14 04:40:49 +0000 UTC (0+1 container statuses recorded) Nov 14 04:56:25.587: INFO: Container kubernetes-dashboard ready: true, restart count 0 Nov 14 04:56:25.587: INFO: hostexec-k8s-agentpool-23171212-vmss000000 started at 2019-11-14 04:55:49 +0000 UTC (0+1 container statuses recorded) Nov 14 04:56:25.587: INFO: Container agnhost ready: false, restart count 0 Nov 14 04:56:25.587: INFO: csi-snapshotter-0 started at 2019-11-14 04:51:54 +0000 UTC (0+1 container statuses recorded) Nov 14 04:56:25.587: INFO: Container csi-snapshotter ready: true, restart count 0 Nov 14 04:56:25.587: INFO: csi-hostpath-attacher-0 started at 2019-11-14 04:51:53 +0000 UTC (0+1 container statuses recorded) Nov 14 04:56:25.587: INFO: Container csi-attacher ready: true, restart count 0 Nov 14 04:56:25.587: INFO: csi-hostpath-provisioner-0 started at 2019-11-14 04:51:54 +0000 UTC (0+1 container statuses recorded) Nov 14 04:56:25.587: INFO: Container csi-provisioner ready: true, restart count 0 Nov 14 04:56:25.587: INFO: csi-hostpath-resizer-0 started at 2019-11-14 04:51:54 +0000 UTC (0+1 container statuses recorded) Nov 14 04:56:25.587: INFO: Container csi-resizer ready: true, restart count 0 Nov 14 04:56:25.587: INFO: csi-hostpath-attacher-0 started at 2019-11-14 04:53:37 +0000 UTC (0+1 container statuses recorded) Nov 14 04:56:25.587: INFO: Container csi-attacher ready: true, restart count 0 Nov 14 04:56:25.587: INFO: azure-ip-masq-agent-dgg69 started at 2019-11-14 04:40:27 +0000 UTC (0+1 container statuses recorded) Nov 14 04:56:25.587: INFO: Container azure-ip-masq-agent ready: true, restart count 0 Nov 14 04:56:25.587: INFO: csi-hostpath-attacher-0 started at 2019-11-14 04:50:53 +0000 UTC (0+1 container statuses recorded) Nov 14 04:56:25.587: INFO: Container csi-attacher ready: true, restart count 0 Nov 14 04:56:25.587: INFO: csi-hostpath-resizer-0 started at 2019-11-14 04:50:54 +0000 UTC (0+1 container statuses recorded) Nov 14 04:56:25.587: INFO: Container csi-resizer ready: true, restart count 0 Nov 14 04:56:25.587: INFO: pvc-datasource-writer-7rbg4 started at 2019-11-14 04:50:54 +0000 UTC (0+1 container statuses recorded) Nov 14 04:56:25.587: INFO: Container volume-tester ready: false, restart count 0 Nov 14 04:56:25.587: INFO: csi-hostpathplugin-0 started at 2019-11-14 04:56:13 +0000 UTC (0+0 container statuses recorded) Nov 14 04:56:25.587: INFO: rs-gxghl started at 2019-11-14 04:54:52 +0000 UTC (0+1 container statuses recorded) Nov 14 04:56:25.587: INFO: Container busybox ready: true, restart count 0 Nov 14 04:56:25.587: INFO: redis-slave-68cd9c48b4-glss4 started at 2019-11-14 04:55:39 +0000 UTC (0+1 container statuses recorded) Nov 14 04:56:25.587: INFO: Container slave ready: true, restart count 0 Nov 14 04:56:25.587: INFO: kube-proxy-cdq9f started at 2019-11-14 04:40:27 +0000 UTC (0+1 container statuses recorded) Nov 14 04:56:25.587: INFO: Container kube-proxy ready: true, restart count 0 Nov 14 04:56:25.587: INFO: csi-snapshotter-0 started at 2019-11-14 04:53:38 +0000 UTC (0+1 container statuses 
recorded) Nov 14 04:56:25.587: INFO: Container csi-snapshotter ready: true, restart count 0 Nov 14 04:56:25.587: INFO: csi-hostpathplugin-0 started at 2019-11-14 04:55:47 +0000 UTC (0+3 container statuses recorded) Nov 14 04:56:25.587: INFO: Container hostpath ready: false, restart count 0 Nov 14 04:56:25.587: INFO: Container liveness-probe ready: false, restart count 0 Nov 14 04:56:25.587: INFO: Container node-driver-registrar ready: false, restart count 0 Nov 14 04:56:25.587: INFO: exec-volume-test-local-preprovisionedpv-px8m started at 2019-11-14 04:55:40 +0000 UTC (0+1 container statuses recorded) Nov 14 04:56:25.587: INFO: Container exec-container-local-preprovisionedpv-px8m ready: false, restart count 0 Nov 14 04:56:25.587: INFO: csi-hostpathplugin-0 started at <nil> (0+0 container statuses recorded) Nov 14 04:56:25.587: INFO: csi-hostpath-provisioner-0 started at 2019-11-14 04:53:37 +0000 UTC (0+1 container statuses recorded) Nov 14 04:56:25.587: INFO: Container csi-provisioner ready: true, restart count 0 Nov 14 04:56:25.587: INFO: frontend-79ff456bff-9d685 started at 2019-11-14 04:55:38 +0000 UTC (0+1 container statuses recorded) Nov 14 04:56:25.587: INFO: Container php-redis ready: false, restart count 0 Nov 14 04:56:25.587: INFO: netserver-0 started at 2019-11-14 04:55:43 +0000 UTC (0+1 container statuses recorded) Nov 14 04:56:25.587: INFO: Container webserver ready: false, restart count 0 Nov 14 04:56:25.587: INFO: ss2-1 started at 2019-11-14 04:53:07 +0000 UTC (0+1 container statuses recorded) Nov 14 04:56:25.587: INFO: Container webserver ready: true, restart count 0 Nov 14 04:56:25.587: INFO: csi-snapshotter-0 started at 2019-11-14 04:50:54 +0000 UTC (0+1 container statuses recorded) Nov 14 04:56:25.587: INFO: Container csi-snapshotter ready: true, restart count 0 Nov 14 04:56:25.587: INFO: hostexec-k8s-agentpool-23171212-vmss000000 started at 2019-11-14 04:55:08 +0000 UTC (0+1 container statuses recorded) Nov 14 04:56:25.587: INFO: Container agnhost ready: true, restart count 0 W1114 04:56:25.643164 92623 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
Nov 14 04:56:25.833: INFO: Latency metrics for node k8s-agentpool-23171212-vmss000000 Nov 14 04:56:25.833: INFO: Logging node info for node k8s-agentpool-23171212-vmss000001 Nov 14 04:56:25.889: INFO: Node Info: &Node{ObjectMeta:{k8s-agentpool-23171212-vmss000001 /api/v1/nodes/k8s-agentpool-23171212-vmss000001 e9c1f552-b95b-4548-9ecd-37a7f1925e75 31508 0 2019-11-14 04:40:09 +0000 UTC <nil> <nil> map[agentpool:agentpool beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:Standard_D4s_v3 beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:westus2 failure-domain.beta.kubernetes.io/zone:westus2-2 kubernetes.azure.com/cluster:kubetest-9c63b39e-0695-11ea-a4cc-c60aac250e75 kubernetes.azure.com/role:agent kubernetes.io/arch:amd64 kubernetes.io/hostname:k8s-agentpool-23171212-vmss000001 kubernetes.io/os:linux kubernetes.io/role:agent node-role.kubernetes.io/agent: storageprofile:managed storagetier:Premium_LRS] map[csi.volume.kubernetes.io/nodeid:{"csi-hostpath-ephemeral-6971":"k8s-agentpool-23171212-vmss000001","csi-hostpath-provisioning-3033":"k8s-agentpool-23171212-vmss000001","csi-hostpath-provisioning-3310":"k8s-agentpool-23171212-vmss000001","csi-hostpath-provisioning-4400":"k8s-agentpool-23171212-vmss000001","csi-hostpath-volume-expand-2485":"k8s-agentpool-23171212-vmss000001","csi-mock-csi-mock-volumes-3324":"csi-mock-csi-mock-volumes-3324","csi-mock-csi-mock-volumes-3770":"csi-mock-csi-mock-volumes-3770","csi-mock-csi-mock-volumes-9859":"csi-mock-csi-mock-volumes-9859"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] []},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUse_ExternalID:,ProviderID:azure:///subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-9c63b39e-0695-11ea-a4cc-c60aac250e75/providers/Microsoft.Compute/virtualMachineScaleSets/k8s-agentpool-23171212-vmss/virtualMachines/1,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{4 0} {<nil>} 4 DecimalSI},ephemeral-storage: {{31036776448 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{16797569024 0} {<nil>} 16403876Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{4 0} {<nil>} 4 DecimalSI},ephemeral-storage: {{27933098757 0} {<nil>} 27933098757 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{16011137024 0} {<nil>} 15635876Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2019-11-14 04:40:48 +0000 UTC,LastTransitionTime:2019-11-14 04:40:48 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2019-11-14 04:56:13 +0000 UTC,LastTransitionTime:2019-11-14 04:40:09 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2019-11-14 04:56:13 +0000 UTC,LastTransitionTime:2019-11-14 04:40:09 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2019-11-14 04:56:13 +0000 UTC,LastTransitionTime:2019-11-14 04:40:09 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID 
available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2019-11-14 04:56:13 +0000 UTC,LastTransitionTime:2019-11-14 04:40:22 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.248.0.5,},NodeAddress{Type:Hostname,Address:k8s-agentpool-23171212-vmss000001,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:639707efd7a74ac4bca6a608e99a6715,SystemUUID:CACA620B-0C7C-7040-A716-91F766CA5A2F,BootID:9fabe02f-4e56-4162-b5c5-2e2733911b4f,KernelVersion:4.15.0-1063-azure,OSImage:Ubuntu 16.04.6 LTS,ContainerRuntimeVersion:docker://3.0.6,KubeletVersion:v1.16.4-beta.0.1+d70a3ca08fe72a,KubeProxyVersion:v1.16.4-beta.0.1+d70a3ca08fe72a,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8sprow.azurecr.io/hyperkube-amd64@sha256:4c04f9ab0fa34bcbcb8ebfbced912f9b998c5d9c090fafdca92911d124fa339b k8sprow.azurecr.io/hyperkube-amd64:azure-e2e-1194831241233305600-197629b6],SizeBytes:604811790,},ContainerImage{Names:[quay.io/kubernetes_incubator/nfs-provisioner@sha256:df762117e3c891f2d2ddff46ecb0776ba1f9f3c44cfd7739b0683bcd7a7954a8 quay.io/kubernetes_incubator/nfs-provisioner:v2.2.2],SizeBytes:391772778,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa k8s.gcr.io/etcd:3.3.15],SizeBytes:246640776,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/volume/nfs@sha256:c2ad734346f608a5f7d69cfded93c4e8094069320657bd372d12ba21dea3ea71 gcr.io/kubernetes-e2e-test-images/volume/nfs:1.0],SizeBytes:225358913,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0],SizeBytes:195659796,},ContainerImage{Names:[httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 httpd:2.4.38-alpine],SizeBytes:123781643,},ContainerImage{Names:[gcr.io/kubernetes-helm/tiller@sha256:f6d8f4ab9ba993b5f5b60a6edafe86352eabe474ffeb84cb6c79b8866dce45d1 gcr.io/kubernetes-helm/tiller:v2.11.0],SizeBytes:71821984,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/sample-apiserver@sha256:1bafcc6fb1aa990b487850adba9cadc020e42d7905aa8a30481182a477ba24b0 gcr.io/kubernetes-e2e-test-images/sample-apiserver:1.10],SizeBytes:61365829,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/agnhost@sha256:4057a5580c7b59c4fe10d8ab2732c9dec35eea80fd41f7bafc7bd5acc7edf727 gcr.io/kubernetes-e2e-test-images/agnhost:2.6],SizeBytes:57345321,},ContainerImage{Names:[quay.io/k8scsi/csi-provisioner@sha256:0efcb424f1dde9b9fb11a1a14f2e48ab47e1c3f08bc3a929990dcfcb1f7ab34f quay.io/k8scsi/csi-provisioner:v1.4.0-rc1],SizeBytes:54431016,},ContainerImage{Names:[quay.io/k8scsi/csi-snapshotter@sha256:e3d3e742e32d00488fdb401045b9b1d033d7ca0ab6e760f77b24750fc95e5f70 quay.io/k8scsi/csi-snapshotter:v2.0.0-rc1],SizeBytes:51703561,},ContainerImage{Names:[k8s.gcr.io/ip-masq-agent-amd64@sha256:269e0fb9d53fd37f7a135d6a55ea265a67279ba218aa148323f015cf70167340 k8s.gcr.io/ip-masq-agent-amd64:v2.3.0],SizeBytes:50144412,},ContainerImage{Names:[quay.io/k8scsi/csi-attacher@sha256:26fccd7a99d973845df1193b46ebdcc6ab8dc5f6e6be319750c471fce1742d13 quay.io/k8scsi/csi-attacher:v1.2.0],SizeBytes:46226754,},ContainerImage{Names:[quay.io/k8scsi/csi-attacher@sha256:0aba670b4d9d6b2e720bbf575d733156c676b693ca26501235444490300db838 
quay.io/k8scsi/csi-attacher:v1.1.0],SizeBytes:42839085,},ContainerImage{Names:[quay.io/k8scsi/csi-resizer@sha256:7d46fb6eb8b890dc546029d1565d502b4a1d974d33625c6ee2bc7991b77fc1a1 quay.io/k8scsi/csi-resizer:v0.2.0],SizeBytes:42817100,},ContainerImage{Names:[quay.io/k8scsi/csi-resizer@sha256:f315c9042e56def3c05c6b04fe79ec9da6d39ddc557ca365a76cf35964ea08b6 quay.io/k8scsi/csi-resizer:v0.1.0],SizeBytes:42623056,},ContainerImage{Names:[k8s.gcr.io/metrics-server-amd64@sha256:49a9f12f7067d11f42c803dbe61ed2c1299959ad85cb315b25ff7eef8e6b8892 k8s.gcr.io/metrics-server-amd64:v0.2.1],SizeBytes:42541759,},ContainerImage{Names:[gcr.io/google-containers/debian-base@sha256:6966a0aedd7592c18ff2dd803c08bd85780ee19f5e3a2e7cf908a4cd837afcde gcr.io/google-containers/debian-base:0.4.1],SizeBytes:42323657,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nonroot@sha256:d4ede5c74517090b6686219059118ed178cf4620f5db8781b32f806bb1e7395b gcr.io/kubernetes-e2e-test-images/nonroot:1.0],SizeBytes:42321438,},ContainerImage{Names:[redis@sha256:50899ea1ceed33fa03232f3ac57578a424faa1742c1ac9c7a7bdb95cdf19b858 redis:5.0.5-alpine],SizeBytes:29331594,},ContainerImage{Names:[quay.io/k8scsi/hostpathplugin@sha256:b4826e492fc1762fceaf9726f41575ca0a4567864d3d235da874818de18039de quay.io/k8scsi/hostpathplugin:v1.2.0-rc5],SizeBytes:28761497,},ContainerImage{Names:[mcr.microsoft.com/k8s/flexvolume/keyvault-flexvolume@sha256:4fd30d43947d4a54fc89ead7985beecfd3c9b2a93a0655a373b1608ab90bd5af mcr.microsoft.com/k8s/flexvolume/keyvault-flexvolume:v0.0.7],SizeBytes:22909487,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/echoserver@sha256:e9ba514b896cdf559eef8788b66c2c3ee55f3572df617647b4b0d8b6bf81cf19 gcr.io/kubernetes-e2e-test-images/echoserver:2.2],SizeBytes:21692741,},ContainerImage{Names:[quay.io/k8scsi/mock-driver@sha256:e0eed916b7d970bad2b7d9875f9ad16932f987f0f3d91ec5d86da68b0b5cc9d1 quay.io/k8scsi/mock-driver:v2.1.0],SizeBytes:16226335,},ContainerImage{Names:[nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 nginx:1.14-alpine],SizeBytes:16032814,},ContainerImage{Names:[quay.io/k8scsi/csi-node-driver-registrar@sha256:13daf82fb99e951a4bff8ae5fc7c17c3a8fe7130be6400990d8f6076c32d4599 quay.io/k8scsi/csi-node-driver-registrar:v1.1.0],SizeBytes:15815995,},ContainerImage{Names:[quay.io/k8scsi/livenessprobe@sha256:dde617756e0f602adc566ab71fd885f1dad451ad3fb063ac991c95a2ff47aea5 quay.io/k8scsi/livenessprobe:v1.1.0],SizeBytes:14967303,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/dnsutils@sha256:2abeee84efb79c14d731966e034af33bf324d3b26ca28497555511ff094b3ddd gcr.io/kubernetes-e2e-test-images/dnsutils:1.1],SizeBytes:9349974,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nonewprivs@sha256:10066e9039219449fe3c81f38fe01928f87914150768ab81b62a468e51fa7411 gcr.io/kubernetes-e2e-test-images/nonewprivs:1.0],SizeBytes:6757579,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nautilus@sha256:33a732d4c42a266912a5091598a0f07653c9134db4b8d571690d8afd509e0bfc gcr.io/kubernetes-e2e-test-images/nautilus:1.0],SizeBytes:4753501,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/test-webserver@sha256:7f93d6e32798ff28bc6289254d0c2867fe2c849c8e46edc50f8624734309812e gcr.io/kubernetes-e2e-test-images/test-webserver:1.0],SizeBytes:4732240,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/mounttest@sha256:c0bd6f0755f42af09a68c9a47fb993136588a76b3200ec305796b60d629d85d2 
gcr.io/kubernetes-e2e-test-images/mounttest:1.0],SizeBytes:1563521,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/mounttest-user@sha256:17319ca525ee003681fccf7e8c6b1b910ff4f49b653d939ac7f9b6e7c463933d gcr.io/kubernetes-e2e-test-images/mounttest-user:1.0],SizeBytes:1450451,},ContainerImage{Names:[mcr.microsoft.com/k8s/flexvolume/blobfuse-flexvolume@sha256:23d8c6033f02a1ecad05127ebdc931bb871264228661bc122704b0974e4d9fdd mcr.microsoft.com/k8s/flexvolume/blobfuse-flexvolume:1.0.8],SizeBytes:1159025,},ContainerImage{Names:[busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 busybox:1.29],SizeBytes:1154361,},ContainerImage{Names:[k8s.gcr.io/pause-amd64@sha256:59eec8837a4d942cc19a52b8c09ea75121acc38114a2c68b98983ce9356b8610 k8s.gcr.io/pause@sha256:f78411e19d84a252e53bff71a4407a5686c46983a2c2eeed83929b888179acea k8s.gcr.io/pause-amd64:3.1 k8s.gcr.io/pause:3.1],SizeBytes:742472,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Nov 14 04:56:25.890: INFO: Logging kubelet events for node k8s-agentpool-23171212-vmss000001 Nov 14 04:56:25.948: INFO: Logging pods the kubelet thinks is on node k8s-agentpool-23171212-vmss000001 Nov 14 04:56:26.012: INFO: netserver-1 started at 2019-11-14 04:55:44 +0000 UTC (0+1 container statuses recorded) Nov 14 04:56:26.012: INFO: Container webserver ready: false, restart count 0 Nov 14 04:56:26.012: INFO: replace-1573707240-rjr5h started at 2019-11-14 04:54:02 +0000 UTC (0+1 container statuses recorded) Nov 14 04:56:26.012: INFO: Container c ready: false, restart count 0 Nov 14 04:56:26.012: INFO: sample-webhook-deployment-86d95b659d-gpxsq started at 2019-11-14 04:55:56 +0000 UTC (0+1 container statuses recorded) Nov 14 04:56:26.012: INFO: Container sample-webhook ready: false, restart count 0 Nov 14 04:56:26.012: INFO: dns-test-f3d80d91-2590-4870-902c-cc6b474bdbf2 started at 2019-11-14 04:56:16 +0000 UTC (0+3 container statuses recorded) Nov 14 04:56:26.012: INFO: Container jessie-querier ready: false, restart count 0 Nov 14 04:56:26.012: INFO: Container querier ready: false, restart count 0 Nov 14 04:56:26.012: INFO: Container webserver ready: false, restart count 0 Nov 14 04:56:26.012: INFO: metadata-volume-0fda9ab7-be7c-4427-9abd-1f20122fe8f1 started at 2019-11-14 04:55:33 +0000 UTC (0+1 container statuses recorded) Nov 14 04:56:26.012: INFO: Container client-container ready: false, restart count 0 Nov 14 04:56:26.012: INFO: kube-proxy-ng7z8 started at 2019-11-14 04:40:27 +0000 UTC (0+1 container statuses recorded) Nov 14 04:56:26.012: INFO: Container kube-proxy ready: true, restart count 0 Nov 14 04:56:26.012: INFO: redis-slave-68cd9c48b4-pxnkq started at 2019-11-14 04:55:42 +0000 UTC (0+1 container statuses recorded) Nov 14 04:56:26.012: INFO: Container slave ready: false, restart count 0 Nov 14 04:56:26.012: INFO: termination-message-containerc51b5896-be32-487f-bd02-2dc2a1b418e4 started at 2019-11-14 04:55:55 +0000 UTC (0+1 container statuses recorded) Nov 14 04:56:26.012: INFO: Container termination-message-container ready: false, restart count 0 Nov 14 04:56:26.012: INFO: ss2-0 started at 2019-11-14 04:52:44 +0000 UTC (0+1 container statuses recorded) Nov 14 04:56:26.012: INFO: Container webserver ready: true, restart count 0 Nov 14 04:56:26.012: INFO: ss2-0 started at 2019-11-14 04:54:55 +0000 UTC (0+1 container statuses recorded) Nov 14 04:56:26.012: INFO: Container webserver ready: false, restart count 0 Nov 14 04:56:26.012: INFO: frontend-79ff456bff-5dq96 started at 2019-11-14 04:55:39 +0000 UTC 
(0+1 container statuses recorded) Nov 14 04:56:26.012: INFO: Container php-redis ready: false, restart count 0 Nov 14 04:56:26.012: INFO: hostexec-k8s-agentpool-23171212-vmss000001 started at 2019-11-14 04:53:08 +0000 UTC (0+1 container statuses recorded) Nov 14 04:56:26.012: INFO: Container agnhost ready: true, restart count 0 Nov 14 04:56:26.012: INFO: bin-falsec4788a1b-c4f8-453e-9f17-2cf69caa89a8 started at 2019-11-14 04:55:25 +0000 UTC (0+1 container statuses recorded) Nov 14 04:56:26.012: INFO: Container bin-falsec4788a1b-c4f8-453e-9f17-2cf69caa89a8 ready: false, restart count 0 Nov 14 04:56:26.012: INFO: nfs-server started at 2019-11-14 04:56:19 +0000 UTC (0+1 container statuses recorded) Nov 14 04:56:26.012: INFO: Container nfs-server ready: false, restart count 0 Nov 14 04:56:26.012: INFO: pod-with-poststart-http-hook started at 2019-11-14 04:54:19 +0000 UTC (0+1 container statuses recorded) Nov 14 04:56:26.012: INFO: Container pod-with-poststart-http-hook ready: true, restart count 0 Nov 14 04:56:26.012: INFO: azure-ip-masq-agent-mcg7w started at 2019-11-14 04:40:26 +0000 UTC (0+1 container statuses recorded) Nov 14 04:56:26.012: INFO: Container azure-ip-masq-agent ready: true, restart count 0 Nov 14 04:56:26.012: INFO: metrics-server-58ff8c5ddf-h7jqs started at 2019-11-14 04:40:50 +0000 UTC (0+1 container statuses recorded) Nov 14 04:56:26.012: INFO: Container metrics-server ready: true, restart count 0 Nov 14 04:56:26.012: INFO: local-injector started at 2019-11-14 04:53:24 +0000 UTC (0+1 container statuses recorded) Nov 14 04:56:26.012: INFO: Container local-injector ready: false, restart count 0 Nov 14 04:56:26.012: INFO: pod-handle-http-request started at 2019-11-14 04:53:32 +0000 UTC (0+1 container statuses recorded) Nov 14 04:56:26.012: INFO: Container pod-handle-http-request ready: true, restart count 0 Nov 14 04:56:26.012: INFO: external-provisioner-86l4g started at 2019-11-14 04:53:36 +0000 UTC (0+1 container statuses recorded) Nov 14 04:56:26.012: INFO: Container nfs-provisioner ready: true, restart count 0 Nov 14 04:56:26.012: INFO: termination-message-containera42fa0c6-4e89-4db2-befe-0b268bfcea2a started at 2019-11-14 04:55:05 +0000 UTC (0+1 container statuses recorded) Nov 14 04:56:26.012: INFO: Container termination-message-container ready: false, restart count 0 Nov 14 04:56:26.012: INFO: blobfuse-flexvol-installer-ktdjj started at 2019-11-14 04:40:49 +0000 UTC (0+1 container statuses recorded) Nov 14 04:56:26.012: INFO: Container blobfuse-flexvol-installer ready: true, restart count 0 Nov 14 04:56:26.012: INFO: tiller-deploy-7559b6b885-vkxml started at 2019-11-14 04:40:50 +0000 UTC (0+1 container statuses recorded) Nov 14 04:56:26.012: INFO: Container tiller ready: true, restart count 0 Nov 14 04:56:26.012: INFO: ss2-2 started at 2019-11-14 04:53:32 +0000 UTC (0+1 container statuses recorded) Nov 14 04:56:26.012: INFO: Container webserver ready: false, restart count 0 Nov 14 04:56:26.012: INFO: hostexec-k8s-agentpool-23171212-vmss000001 started at 2019-11-14 04:55:51 +0000 UTC (0+1 container statuses recorded) Nov 14 04:56:26.012: INFO: Container agnhost ready: false, restart count 0 Nov 14 04:56:26.012: INFO: rs-jdc4h started at 2019-11-14 04:54:53 +0000 UTC (0+1 container statuses recorded) Nov 14 04:56:26.012: INFO: Container busybox ready: true, restart count 0 Nov 14 04:56:26.012: INFO: pod-configmaps-ffb86827-d2ac-4af7-9284-06e52002c841 started at 2019-11-14 04:55:52 +0000 UTC (0+1 container statuses recorded) Nov 14 04:56:26.012: INFO: Container 
configmap-volume-test ready: false, restart count 0 Nov 14 04:56:26.012: INFO: rs-csbbz started at 2019-11-14 04:54:52 +0000 UTC (0+1 container statuses recorded) Nov 14 04:56:26.012: INFO: Container busybox ready: true, restart count 0 Nov 14 04:56:26.012: INFO: external-provisioner-7pj8z started at 2019-11-14 04:55:50 +0000 UTC (0+1 container statuses recorded) Nov 14 04:56:26.012: INFO: Container nfs-provisioner ready: false, restart count 0 Nov 14 04:56:26.012: INFO: keyvault-flexvolume-2g62m started at 2019-11-14 04:40:49 +0000 UTC (0+1 container statuses recorded) Nov 14 04:56:26.012: INFO: Container keyvault-flexvolume ready: true, restart count 0 Nov 14 04:56:26.012: INFO: redis-master-6ff87f4db7-lf6hr started at 2019-11-14 04:55:41 +0000 UTC (0+1 container statuses recorded) Nov 14 04:56:26.012: INFO: Container master ready: false, restart count 0 Nov 14 04:56:26.012: INFO: downwardapi-volume-02b05637-4cae-4f21-9317-3083e9c1a6af started at 2019-11-14 04:55:46 +0000 UTC (0+1 container statuses recorded) Nov 14 04:56:26.012: INFO: Container client-container ready: false, restart count 0 Nov 14 04:56:26.012: INFO: without-label started at 2019-11-14 04:56:13 +0000 UTC (0+1 container statuses recorded) Nov 14 04:56:26.012: INFO: Container without-label ready: false, restart count 0 Nov 14 04:56:26.012: INFO: pod-submit-remove-950f11c5-b5a7-400b-800b-24c5377040ef started at 2019-11-14 04:56:05 +0000 UTC (0+1 container statuses recorded) Nov 14 04:56:26.012: INFO: Container nginx ready: false, restart count 0 Nov 14 04:56:26.012: INFO: sample-webhook-deployment-86d95b659d-jx6r9 started at 2019-11-14 04:55:48 +0000 UTC (0+1 container statuses recorded) Nov 14 04:56:26.012: INFO: Container sample-webhook ready: false, restart count 0 Nov 14 04:56:26.012: INFO: frontend-79ff456bff-s8p95 started at 2019-11-14 04:55:40 +0000 UTC (0+1 container statuses recorded) Nov 14 04:56:26.012: INFO: Container php-redis ready: false, restart count 0 W1114 04:56:26.068269 92623 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
Nov 14 04:56:37.416: INFO: Latency metrics for node k8s-agentpool-23171212-vmss000001 Nov 14 04:56:37.416: INFO: Logging node info for node k8s-master-23171212-vmss000000 Nov 14 04:56:37.472: INFO: Node Info: &Node{ObjectMeta:{k8s-master-23171212-vmss000000 /api/v1/nodes/k8s-master-23171212-vmss000000 6c9bb7ee-6dcf-4c6d-a8ad-0377f76a60f6 31303 0 2019-11-14 04:40:03 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:Standard_DS2_v2 beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:westus2 failure-domain.beta.kubernetes.io/zone:westus2-1 kubernetes.azure.com/cluster:kubetest-9c63b39e-0695-11ea-a4cc-c60aac250e75 kubernetes.azure.com/role:master kubernetes.io/arch:amd64 kubernetes.io/hostname:k8s-master-23171212-vmss000000 kubernetes.io/os:linux kubernetes.io/role:master node-role.kubernetes.io/master:] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] []},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUse_ExternalID:,ProviderID:azure:///subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-9c63b39e-0695-11ea-a4cc-c60aac250e75/providers/Microsoft.Compute/virtualMachineScaleSets/k8s-master-23171212-vmss/virtualMachines/0,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:true,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{31036776448 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7284887552 0} {<nil>} 7114148Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{27933098757 0} {<nil>} 27933098757 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{6498455552 0} {<nil>} 6346148Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2019-11-14 04:40:48 +0000 UTC,LastTransitionTime:2019-11-14 04:40:48 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2019-11-14 04:55:56 +0000 UTC,LastTransitionTime:2019-11-14 04:39:58 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2019-11-14 04:55:56 +0000 UTC,LastTransitionTime:2019-11-14 04:39:58 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2019-11-14 04:55:56 +0000 UTC,LastTransitionTime:2019-11-14 04:39:58 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2019-11-14 04:55:56 +0000 UTC,LastTransitionTime:2019-11-14 04:40:22 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.240.0.4,},NodeAddress{Type:Hostname,Address:k8s-master-23171212-vmss000000,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:813714caae2d48f4a9036e17505029ae,SystemUUID:A7C76EFE-4E2A-8042-A754-6642A667D859,BootID:245ff6cc-bfb4-4487-ac55-fb3813c9167c,KernelVersion:4.15.0-1063-azure,OSImage:Ubuntu 16.04.6 LTS,ContainerRuntimeVersion:docker://3.0.6,KubeletVersion:v1.16.4-beta.0.1+d70a3ca08fe72a,KubeProxyVersion:v1.16.4-beta.0.1+d70a3ca08fe72a,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8sprow.azurecr.io/hyperkube-amd64@sha256:4c04f9ab0fa34bcbcb8ebfbced912f9b998c5d9c090fafdca92911d124fa339b k8sprow.azurecr.io/hyperkube-amd64:azure-e2e-1194831241233305600-197629b6],SizeBytes:604811790,},ContainerImage{Names:[k8sprow.azurecr.io/azure-cloud-controller-manager@sha256:6fcb752760f3412a2cb10bce535ba4dfa8267081345fa1b5cbc7bb5126ce3437 k8sprow.azurecr.io/azure-cloud-controller-manager:1194831241233305600-d3e4a1cf],SizeBytes:92595467,},ContainerImage{Names:[k8s.gcr.io/kube-addon-manager-amd64@sha256:382c220b3531d9f95bf316a16b7282cc2ef929cd8a89a9dd3f5933edafc41a8e k8s.gcr.io/kube-addon-manager-amd64:v9.0.1],SizeBytes:83076194,},ContainerImage{Names:[k8s.gcr.io/ip-masq-agent-amd64@sha256:269e0fb9d53fd37f7a135d6a55ea265a67279ba218aa148323f015cf70167340 k8s.gcr.io/ip-masq-agent-amd64:v2.3.0],SizeBytes:50144412,},ContainerImage{Names:[k8s.gcr.io/pause-amd64@sha256:59eec8837a4d942cc19a52b8c09ea75121acc38114a2c68b98983ce9356b8610 k8s.gcr.io/pause-amd64:3.1],SizeBytes:742472,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Nov 14 04:56:37.472: INFO: Logging kubelet events for node k8s-master-23171212-vmss000000 Nov 14 04:56:37.531: INFO: Logging pods the kubelet thinks is on node k8s-master-23171212-vmss000000 Nov 14 04:56:37.607: INFO: kube-proxy-cpnbb started at 2019-11-14 04:40:28 +0000 UTC (0+1 container statuses recorded) Nov 14 04:56:37.607: INFO: Container kube-proxy ready: true, restart count 0 Nov 14 04:56:37.607: INFO: kube-scheduler-k8s-master-23171212-vmss000000 started at 2019-11-14 04:39:51 +0000 UTC (0+1 container statuses recorded) Nov 14 04:56:37.607: INFO: Container kube-scheduler ready: true, restart count 0 Nov 14 04:56:37.607: INFO: cloud-controller-manager-k8s-master-23171212-vmss000000 started at 2019-11-14 04:39:51 +0000 UTC (0+1 container statuses recorded) Nov 14 04:56:37.607: INFO: Container cloud-controller-manager ready: true, restart count 0 Nov 14 04:56:37.607: INFO: kube-addon-manager-k8s-master-23171212-vmss000000 started at 2019-11-14 04:39:52 +0000 UTC (0+1 container statuses recorded) Nov 14 04:56:37.607: INFO: Container kube-addon-manager ready: true, restart count 0 Nov 14 04:56:37.607: INFO: kube-apiserver-k8s-master-23171212-vmss000000 started at 2019-11-14 04:39:52 +0000 UTC (0+1 container statuses recorded) Nov 14 04:56:37.607: INFO: Container kube-apiserver ready: true, restart count 0 Nov 14 04:56:37.607: INFO: kube-controller-manager-k8s-master-23171212-vmss000000 started at 2019-11-14 04:39:52 +0000 UTC (0+1 container statuses recorded) Nov 14 04:56:37.607: INFO: Container kube-controller-manager ready: true, restart count 0 Nov 14 04:56:37.607: INFO: azure-ip-masq-agent-q7rgb started at 2019-11-14 04:40:27 +0000 UTC (0+1 container statuses recorded) Nov 14 04:56:37.607: INFO: Container azure-ip-masq-agent ready: true, restart count 0 W1114 
04:56:37.663358 92623 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Nov 14 04:56:37.838: INFO: Latency metrics for node k8s-master-23171212-vmss000000 Nov 14 04:56:37.838: INFO: Logging node info for node k8s-master-23171212-vmss000001 Nov 14 04:56:37.893: INFO: Node Info: &Node{ObjectMeta:{k8s-master-23171212-vmss000001 /api/v1/nodes/k8s-master-23171212-vmss000001 202620f8-2cc3-4eb6-b880-ef6d6d9fbccd 31320 0 2019-11-14 04:40:04 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:Standard_DS2_v2 beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:westus2 failure-domain.beta.kubernetes.io/zone:westus2-2 kubernetes.azure.com/cluster:kubetest-9c63b39e-0695-11ea-a4cc-c60aac250e75 kubernetes.azure.com/role:master kubernetes.io/arch:amd64 kubernetes.io/hostname:k8s-master-23171212-vmss000001 kubernetes.io/os:linux kubernetes.io/role:master node-role.kubernetes.io/master:] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] []},Spec:NodeSpec{PodCIDR:10.244.5.0/24,DoNotUse_ExternalID:,ProviderID:azure:///subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-9c63b39e-0695-11ea-a4cc-c60aac250e75/providers/Microsoft.Compute/virtualMachineScaleSets/k8s-master-23171212-vmss/virtualMachines/1,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:true,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[10.244.5.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{31036776448 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7284887552 0} {<nil>} 7114148Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{27933098757 0} {<nil>} 27933098757 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{6498455552 0} {<nil>} 6346148Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2019-11-14 04:40:48 +0000 UTC,LastTransitionTime:2019-11-14 04:40:48 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2019-11-14 04:55:57 +0000 UTC,LastTransitionTime:2019-11-14 04:39:58 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2019-11-14 04:55:57 +0000 UTC,LastTransitionTime:2019-11-14 04:39:58 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2019-11-14 04:55:57 +0000 UTC,LastTransitionTime:2019-11-14 04:39:58 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2019-11-14 04:55:57 +0000 UTC,LastTransitionTime:2019-11-14 04:40:22 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.240.0.5,},NodeAddress{Type:Hostname,Address:k8s-master-23171212-vmss000001,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:4cafe5635afe4ac8baa078419003bc32,SystemUUID:88981890-9531-334C-9D46-A02D5E4BD18D,BootID:6accdcbe-b0af-4be0-8f82-19833a9a5e2e,KernelVersion:4.15.0-1063-azure,OSImage:Ubuntu 16.04.6 LTS,ContainerRuntimeVersion:docker://3.0.6,KubeletVersion:v1.16.4-beta.0.1+d70a3ca08fe72a,KubeProxyVersion:v1.16.4-beta.0.1+d70a3ca08fe72a,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8sprow.azurecr.io/hyperkube-amd64@sha256:4c04f9ab0fa34bcbcb8ebfbced912f9b998c5d9c090fafdca92911d124fa339b k8sprow.azurecr.io/hyperkube-amd64:azure-e2e-1194831241233305600-197629b6],SizeBytes:604811790,},ContainerImage{Names:[k8sprow.azurecr.io/azure-cloud-controller-manager@sha256:6fcb752760f3412a2cb10bce535ba4dfa8267081345fa1b5cbc7bb5126ce3437 k8sprow.azurecr.io/azure-cloud-controller-manager:1194831241233305600-d3e4a1cf],SizeBytes:92595467,},ContainerImage{Names:[k8s.gcr.io/kube-addon-manager-amd64@sha256:382c220b3531d9f95bf316a16b7282cc2ef929cd8a89a9dd3f5933edafc41a8e k8s.gcr.io/kube-addon-manager-amd64:v9.0.1],SizeBytes:83076194,},ContainerImage{Names:[k8s.gcr.io/ip-masq-agent-amd64@sha256:269e0fb9d53fd37f7a135d6a55ea265a67279ba218aa148323f015cf70167340 k8s.gcr.io/ip-masq-agent-amd64:v2.3.0],SizeBytes:50144412,},ContainerImage{Names:[k8s.gcr.io/pause-amd64@sha256:59eec8837a4d942cc19a52b8c09ea75121acc38114a2c68b98983ce9356b8610 k8s.gcr.io/pause-amd64:3.1],SizeBytes:742472,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Nov 14 04:56:37.893: INFO: Logging kubelet events for node k8s-master-23171212-vmss000001 Nov 14 04:56:37.980: INFO: Logging pods the kubelet thinks is on node k8s-master-23171212-vmss000001 Nov 14 04:56:38.064: INFO: kube-apiserver-k8s-master-23171212-vmss000001 started at 2019-11-14 04:39:52 +0000 UTC (0+1 container statuses recorded) Nov 14 04:56:38.064: INFO: Container kube-apiserver ready: true, restart count 0 Nov 14 04:56:38.064: INFO: kube-controller-manager-k8s-master-23171212-vmss000001 started at 2019-11-14 04:39:52 +0000 UTC (0+1 container statuses recorded) Nov 14 04:56:38.064: INFO: Container kube-controller-manager ready: true, restart count 0 Nov 14 04:56:38.064: INFO: azure-ip-masq-agent-dnl49 started at 2019-11-14 04:40:26 +0000 UTC (0+1 container statuses recorded) Nov 14 04:56:38.064: INFO: Container azure-ip-masq-agent ready: true, restart count 0 Nov 14 04:56:38.064: INFO: kube-proxy-srv2s started at 2019-11-14 04:40:27 +0000 UTC (0+1 container statuses recorded) Nov 14 04:56:38.064: INFO: Container kube-proxy ready: true, restart count 0 Nov 14 04:56:38.064: INFO: kube-scheduler-k8s-master-23171212-vmss000001 started at 2019-11-14 04:39:52 +0000 UTC (0+1 container statuses recorded) Nov 14 04:56:38.064: INFO: Container kube-scheduler ready: true, restart count 0 Nov 14 04:56:38.064: INFO: cloud-controller-manager-k8s-master-23171212-vmss000001 started at 2019-11-14 04:39:52 +0000 UTC (0+1 container statuses recorded) Nov 14 04:56:38.064: INFO: Container cloud-controller-manager ready: true, restart count 0 Nov 14 04:56:38.064: INFO: kube-addon-manager-k8s-master-23171212-vmss000001 started at 2019-11-14 04:39:52 +0000 UTC (0+1 container statuses recorded) Nov 14 04:56:38.064: INFO: Container kube-addon-manager ready: true, restart count 0 W1114 
04:56:38.121627 92623 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Nov 14 04:56:38.244: INFO: Latency metrics for node k8s-master-23171212-vmss000001 Nov 14 04:56:38.244: INFO: Logging node info for node k8s-master-23171212-vmss000002 Nov 14 04:56:38.300: INFO: Node Info: &Node{ObjectMeta:{k8s-master-23171212-vmss000002 /api/v1/nodes/k8s-master-23171212-vmss000002 8eca3a9a-6fd5-4796-82bb-2f37c6fc30b7 31603 0 2019-11-14 04:41:04 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:Standard_DS2_v2 beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:westus2 failure-domain.beta.kubernetes.io/zone:westus2-1 kubernetes.azure.com/cluster:kubetest-9c63b39e-0695-11ea-a4cc-c60aac250e75 kubernetes.azure.com/role:master kubernetes.io/arch:amd64 kubernetes.io/hostname:k8s-master-23171212-vmss000002 kubernetes.io/os:linux kubernetes.io/role:master node-role.kubernetes.io/master:] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] []},Spec:NodeSpec{PodCIDR:10.244.6.0/24,DoNotUse_ExternalID:,ProviderID:azure:///subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-9c63b39e-0695-11ea-a4cc-c60aac250e75/providers/Microsoft.Compute/virtualMachineScaleSets/k8s-master-23171212-vmss/virtualMachines/2,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:true,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[10.244.6.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{31036776448 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7284883456 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{27933098757 0} {<nil>} 27933098757 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{6498451456 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2019-11-14 04:41:18 +0000 UTC,LastTransitionTime:2019-11-14 04:41:18 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2019-11-14 04:56:26 +0000 UTC,LastTransitionTime:2019-11-14 04:40:56 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2019-11-14 04:56:26 +0000 UTC,LastTransitionTime:2019-11-14 04:40:56 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2019-11-14 04:56:26 +0000 UTC,LastTransitionTime:2019-11-14 04:40:56 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2019-11-14 04:56:26 +0000 UTC,LastTransitionTime:2019-11-14 04:41:04 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.240.0.6,},NodeAddress{Type:Hostname,Address:k8s-master-23171212-vmss000002,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:eb5abe50949445b79334d994c94314f8,SystemUUID:E11F8710-4785-DA42-B98E-8E97145F92C7,BootID:8fe9e9b2-2b16-4895-91c7-dc676b577942,KernelVersion:4.15.0-1063-azure,OSImage:Ubuntu 16.04.6 LTS,ContainerRuntimeVersion:docker://3.0.6,KubeletVersion:v1.16.4-beta.0.1+d70a3ca08fe72a,KubeProxyVersion:v1.16.4-beta.0.1+d70a3ca08fe72a,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8sprow.azurecr.io/hyperkube-amd64@sha256:4c04f9ab0fa34bcbcb8ebfbced912f9b998c5d9c090fafdca92911d124fa339b k8sprow.azurecr.io/hyperkube-amd64:azure-e2e-1194831241233305600-197629b6],SizeBytes:604811790,},ContainerImage{Names:[k8sprow.azurecr.io/azure-cloud-controller-manager@sha256:6fcb752760f3412a2cb10bce535ba4dfa8267081345fa1b5cbc7bb5126ce3437 k8sprow.azurecr.io/azure-cloud-controller-manager:1194831241233305600-d3e4a1cf],SizeBytes:92595467,},ContainerImage{Names:[k8s.gcr.io/kube-addon-manager-amd64@sha256:382c220b3531d9f95bf316a16b7282cc2ef929cd8a89a9dd3f5933edafc41a8e k8s.gcr.io/kube-addon-manager-amd64:v9.0.1],SizeBytes:83076194,},ContainerImage{Names:[k8s.gcr.io/ip-masq-agent-amd64@sha256:269e0fb9d53fd37f7a135d6a55ea265a67279ba218aa148323f015cf70167340 k8s.gcr.io/ip-masq-agent-amd64:v2.3.0],SizeBytes:50144412,},ContainerImage{Names:[k8s.gcr.io/pause-amd64@sha256:59eec8837a4d942cc19a52b8c09ea75121acc38114a2c68b98983ce9356b8610 k8s.gcr.io/pause-amd64:3.1],SizeBytes:742472,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Nov 14 04:56:38.300: INFO: Logging kubelet events for node k8s-master-23171212-vmss000002 Nov 14 04:56:38.359: INFO: Logging pods the kubelet thinks is on node k8s-master-23171212-vmss000002 Nov 14 04:56:38.437: INFO: kube-scheduler-k8s-master-23171212-vmss000002 started at 2019-11-14 04:40:52 +0000 UTC (0+1 container statuses recorded) Nov 14 04:56:38.437: INFO: Container kube-scheduler ready: true, restart count 0 Nov 14 04:56:38.437: INFO: cloud-controller-manager-k8s-master-23171212-vmss000002 started at 2019-11-14 04:40:53 +0000 UTC (0+1 container statuses recorded) Nov 14 04:56:38.437: INFO: Container cloud-controller-manager ready: true, restart count 0 Nov 14 04:56:38.437: INFO: azure-ip-masq-agent-mw27f started at 2019-11-14 04:41:05 +0000 UTC (0+1 container statuses recorded) Nov 14 04:56:38.437: INFO: Container azure-ip-masq-agent ready: true, restart count 0 Nov 14 04:56:38.437: INFO: kube-proxy-4vs6q started at 2019-11-14 04:41:06 +0000 UTC (0+1 container statuses recorded) Nov 14 04:56:38.437: INFO: Container kube-proxy ready: true, restart count 0 Nov 14 04:56:38.437: INFO: kube-addon-manager-k8s-master-23171212-vmss000002 started at 2019-11-14 04:40:52 +0000 UTC (0+1 container statuses recorded) Nov 14 04:56:38.437: INFO: Container kube-addon-manager ready: true, restart count 0 Nov 14 04:56:38.437: INFO: kube-apiserver-k8s-master-23171212-vmss000002 started at 2019-11-14 04:40:52 +0000 UTC (0+1 container statuses recorded) Nov 14 04:56:38.438: INFO: Container kube-apiserver ready: true, restart count 0 Nov 14 04:56:38.438: INFO: kube-controller-manager-k8s-master-23171212-vmss000002 started at 2019-11-14 04:40:52 +0000 UTC (0+1 container statuses recorded) Nov 14 04:56:38.438: INFO: Container kube-controller-manager ready: true, restart count 0 W1114 
04:56:38.494092 92623 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Nov 14 04:56:38.627: INFO: Latency metrics for node k8s-master-23171212-vmss000002 Nov 14 04:56:38.627: INFO: Logging node info for node k8s-master-23171212-vmss000003 Nov 14 04:56:38.684: INFO: Node Info: &Node{ObjectMeta:{k8s-master-23171212-vmss000003 /api/v1/nodes/k8s-master-23171212-vmss000003 b1a400e7-f6ff-4241-9175-cd8bd70dd11a 31307 0 2019-11-14 04:40:03 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:Standard_DS2_v2 beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:westus2 failure-domain.beta.kubernetes.io/zone:westus2-2 kubernetes.azure.com/cluster:kubetest-9c63b39e-0695-11ea-a4cc-c60aac250e75 kubernetes.azure.com/role:master kubernetes.io/arch:amd64 kubernetes.io/hostname:k8s-master-23171212-vmss000003 kubernetes.io/os:linux kubernetes.io/role:master node-role.kubernetes.io/master:] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] []},Spec:NodeSpec{PodCIDR:10.244.3.0/24,DoNotUse_ExternalID:,ProviderID:azure:///subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-9c63b39e-0695-11ea-a4cc-c60aac250e75/providers/Microsoft.Compute/virtualMachineScaleSets/k8s-master-23171212-vmss/virtualMachines/3,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:true,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[10.244.3.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{31036776448 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7284887552 0} {<nil>} 7114148Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{27933098757 0} {<nil>} 27933098757 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{6498455552 0} {<nil>} 6346148Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2019-11-14 04:40:48 +0000 UTC,LastTransitionTime:2019-11-14 04:40:48 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2019-11-14 04:55:56 +0000 UTC,LastTransitionTime:2019-11-14 04:39:59 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2019-11-14 04:55:56 +0000 UTC,LastTransitionTime:2019-11-14 04:39:59 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2019-11-14 04:55:56 +0000 UTC,LastTransitionTime:2019-11-14 04:39:59 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2019-11-14 04:55:56 +0000 UTC,LastTransitionTime:2019-11-14 04:40:22 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.240.0.7,},NodeAddress{Type:Hostname,Address:k8s-master-23171212-vmss000003,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:effe7f682034467995d1db3ee85a4a38,SystemUUID:2073A143-352C-D241-B189-4A1DCC64C62C,BootID:6c95e89b-c056-494f-b817-6494fc9fd635,KernelVersion:4.15.0-1063-azure,OSImage:Ubuntu 16.04.6 LTS,ContainerRuntimeVersion:docker://3.0.6,KubeletVersion:v1.16.4-beta.0.1+d70a3ca08fe72a,KubeProxyVersion:v1.16.4-beta.0.1+d70a3ca08fe72a,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8sprow.azurecr.io/hyperkube-amd64@sha256:4c04f9ab0fa34bcbcb8ebfbced912f9b998c5d9c090fafdca92911d124fa339b k8sprow.azurecr.io/hyperkube-amd64:azure-e2e-1194831241233305600-197629b6],SizeBytes:604811790,},ContainerImage{Names:[k8sprow.azurecr.io/azure-cloud-controller-manager@sha256:6fcb752760f3412a2cb10bce535ba4dfa8267081345fa1b5cbc7bb5126ce3437 k8sprow.azurecr.io/azure-cloud-controller-manager:1194831241233305600-d3e4a1cf],SizeBytes:92595467,},ContainerImage{Names:[k8s.gcr.io/kube-addon-manager-amd64@sha256:382c220b3531d9f95bf316a16b7282cc2ef929cd8a89a9dd3f5933edafc41a8e k8s.gcr.io/kube-addon-manager-amd64:v9.0.1],SizeBytes:83076194,},ContainerImage{Names:[k8s.gcr.io/ip-masq-agent-amd64@sha256:269e0fb9d53fd37f7a135d6a55ea265a67279ba218aa148323f015cf70167340 k8s.gcr.io/ip-masq-agent-amd64:v2.3.0],SizeBytes:50144412,},ContainerImage{Names:[k8s.gcr.io/pause-amd64@sha256:59eec8837a4d942cc19a52b8c09ea75121acc38114a2c68b98983ce9356b8610 k8s.gcr.io/pause-amd64:3.1],SizeBytes:742472,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Nov 14 04:56:38.684: INFO: Logging kubelet events for node k8s-master-23171212-vmss000003 Nov 14 04:56:38.743: INFO: Logging pods the kubelet thinks is on node k8s-master-23171212-vmss000003 Nov 14 04:56:38.825: INFO: kube-addon-manager-k8s-master-23171212-vmss000003 started at 2019-11-14 04:39:52 +0000 UTC (0+1 container statuses recorded) Nov 14 04:56:38.825: INFO: Container kube-addon-manager ready: true, restart count 0 Nov 14 04:56:38.825: INFO: kube-apiserver-k8s-master-23171212-vmss000003 started at 2019-11-14 04:39:52 +0000 UTC (0+1 container statuses recorded) Nov 14 04:56:38.825: INFO: Container kube-apiserver ready: true, restart count 0 Nov 14 04:56:38.825: INFO: kube-controller-manager-k8s-master-23171212-vmss000003 started at 2019-11-14 04:39:52 +0000 UTC (0+1 container statuses recorded) Nov 14 04:56:38.825: INFO: Container kube-controller-manager ready: true, restart count 0 Nov 14 04:56:38.825: INFO: kube-scheduler-k8s-master-23171212-vmss000003 started at 2019-11-14 04:39:52 +0000 UTC (0+1 container statuses recorded) Nov 14 04:56:38.825: INFO: Container kube-scheduler ready: true, restart count 0 Nov 14 04:56:38.825: INFO: azure-ip-masq-agent-4s5bk started at 2019-11-14 04:40:27 +0000 UTC (0+1 container statuses recorded) Nov 14 04:56:38.825: INFO: Container azure-ip-masq-agent ready: true, restart count 0 Nov 14 04:56:38.825: INFO: kube-proxy-hrqtx started at 2019-11-14 04:40:27 +0000 UTC (0+1 container statuses recorded) Nov 14 04:56:38.825: INFO: Container kube-proxy ready: true, restart count 0 Nov 14 04:56:38.825: INFO: cloud-controller-manager-k8s-master-23171212-vmss000003 started at 2019-11-14 04:39:52 +0000 UTC (0+1 container statuses recorded) Nov 14 04:56:38.825: INFO: Container cloud-controller-manager ready: true, restart count 0 W1114 
04:56:38.882382 92623 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Nov 14 04:56:39.005: INFO: Latency metrics for node k8s-master-23171212-vmss000003 Nov 14 04:56:39.005: INFO: Logging node info for node k8s-master-23171212-vmss000004 Nov 14 04:56:39.061: INFO: Node Info: &Node{ObjectMeta:{k8s-master-23171212-vmss000004 /api/v1/nodes/k8s-master-23171212-vmss000004 25a9993c-54fa-45cc-9da7-66c66cafa30f 31348 0 2019-11-14 04:40:08 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:Standard_DS2_v2 beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:westus2 failure-domain.beta.kubernetes.io/zone:westus2-1 kubernetes.azure.com/cluster:kubetest-9c63b39e-0695-11ea-a4cc-c60aac250e75 kubernetes.azure.com/role:master kubernetes.io/arch:amd64 kubernetes.io/hostname:k8s-master-23171212-vmss000004 kubernetes.io/os:linux kubernetes.io/role:master node-role.kubernetes.io/master:] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] []},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUse_ExternalID:,ProviderID:azure:///subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-9c63b39e-0695-11ea-a4cc-c60aac250e75/providers/Microsoft.Compute/virtualMachineScaleSets/k8s-master-23171212-vmss/virtualMachines/4,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:true,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{31036776448 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7284887552 0} {<nil>} 7114148Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{27933098757 0} {<nil>} 27933098757 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{6498455552 0} {<nil>} 6346148Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2019-11-14 04:40:48 +0000 UTC,LastTransitionTime:2019-11-14 04:40:48 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2019-11-14 04:56:01 +0000 UTC,LastTransitionTime:2019-11-14 04:40:05 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2019-11-14 04:56:01 +0000 UTC,LastTransitionTime:2019-11-14 04:40:05 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2019-11-14 04:56:01 +0000 UTC,LastTransitionTime:2019-11-14 04:40:05 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2019-11-14 04:56:01 +0000 UTC,LastTransitionTime:2019-11-14 04:40:22 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.240.0.8,},NodeAddress{Type:Hostname,Address:k8s-master-23171212-vmss000004,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:ab6b205a70ea45b1b28b801e68a4ba84,SystemUUID:65406178-5013-644C-AD46-D7BC6F0DD7BF,BootID:e6b05928-9970-49a5-bd51-149982b32750,KernelVersion:4.15.0-1063-azure,OSImage:Ubuntu 16.04.6 LTS,ContainerRuntimeVersion:docker://3.0.6,KubeletVersion:v1.16.4-beta.0.1+d70a3ca08fe72a,KubeProxyVersion:v1.16.4-beta.0.1+d70a3ca08fe72a,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8sprow.azurecr.io/hyperkube-amd64@sha256:4c04f9ab0fa34bcbcb8ebfbced912f9b998c5d9c090fafdca92911d124fa339b k8sprow.azurecr.io/hyperkube-amd64:azure-e2e-1194831241233305600-197629b6],SizeBytes:604811790,},ContainerImage{Names:[k8sprow.azurecr.io/azure-cloud-controller-manager@sha256:6fcb752760f3412a2cb10bce535ba4dfa8267081345fa1b5cbc7bb5126ce3437 k8sprow.azurecr.io/azure-cloud-controller-manager:1194831241233305600-d3e4a1cf],SizeBytes:92595467,},ContainerImage{Names:[k8s.gcr.io/kube-addon-manager-amd64@sha256:382c220b3531d9f95bf316a16b7282cc2ef929cd8a89a9dd3f5933edafc41a8e k8s.gcr.io/kube-addon-manager-amd64:v9.0.1],SizeBytes:83076194,},ContainerImage{Names:[k8s.gcr.io/ip-masq-agent-amd64@sha256:269e0fb9d53fd37f7a135d6a55ea265a67279ba218aa148323f015cf70167340 k8s.gcr.io/ip-masq-agent-amd64:v2.3.0],SizeBytes:50144412,},ContainerImage{Names:[k8s.gcr.io/pause-amd64@sha256:59eec8837a4d942cc19a52b8c09ea75121acc38114a2c68b98983ce9356b8610 k8s.gcr.io/pause-amd64:3.1],SizeBytes:742472,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Nov 14 04:56:39.061: INFO: Logging kubelet events for node k8s-master-23171212-vmss000004 Nov 14 04:56:39.120: INFO: Logging pods the kubelet thinks is on node k8s-master-23171212-vmss000004 Nov 14 04:56:39.205: INFO: kube-proxy-47vmd started at 2019-11-14 04:40:27 +0000 UTC (0+1 container statuses recorded) Nov 14 04:56:39.205: INFO: Container kube-proxy ready: true, restart count 0 Nov 14 04:56:39.205: INFO: kube-scheduler-k8s-master-23171212-vmss000004 started at 2019-11-14 04:39:52 +0000 UTC (0+1 container statuses recorded) Nov 14 04:56:39.205: INFO: Container kube-scheduler ready: true, restart count 0 Nov 14 04:56:39.205: INFO: cloud-controller-manager-k8s-master-23171212-vmss000004 started at 2019-11-14 04:39:52 +0000 UTC (0+1 container statuses recorded) Nov 14 04:56:39.205: INFO: Container cloud-controller-manager ready: true, restart count 0 Nov 14 04:56:39.205: INFO: kube-addon-manager-k8s-master-23171212-vmss000004 started at 2019-11-14 04:39:52 +0000 UTC (0+1 container statuses recorded) Nov 14 04:56:39.205: INFO: Container kube-addon-manager ready: true, restart count 0 Nov 14 04:56:39.205: INFO: kube-apiserver-k8s-master-23171212-vmss000004 started at 2019-11-14 04:39:52 +0000 UTC (0+1 container statuses recorded) Nov 14 04:56:39.205: INFO: Container kube-apiserver ready: true, restart count 0 Nov 14 04:56:39.205: INFO: kube-controller-manager-k8s-master-23171212-vmss000004 started at 2019-11-14 04:39:52 +0000 UTC (0+1 container statuses recorded) Nov 14 04:56:39.205: INFO: Container kube-controller-manager ready: true, restart count 0 Nov 14 04:56:39.205: INFO: azure-ip-masq-agent-47pzk started at 2019-11-14 04:40:26 +0000 UTC (0+1 container statuses recorded) Nov 14 04:56:39.205: INFO: Container azure-ip-masq-agent ready: true, restart count 0 W1114 
04:56:39.261878 92623 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Nov 14 04:56:39.393: INFO: Latency metrics for node k8s-master-23171212-vmss000004 Nov 14 04:56:39.393: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-1478" for this suite. Nov 14 04:56:45.626: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Nov 14 04:56:47.451: INFO: namespace kubelet-test-1478 deletion completed in 8.000896681s
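For context on what this spec exercises: it creates a pod whose only container runs /bin/false, then expects the kubelet to report a terminated container state with a non-empty reason. Below is a minimal sketch of such a pod built with client-go types, assuming the busybox:1.29 image and the "bin-false…" naming seen in the log; the restart policy and exact spec are assumptions for illustration, the real spec lives in test/e2e/common/kubelet.go.

// Sketch: a pod whose container always exits non-zero, so the kubelet
// should eventually report State.Terminated with a reason such as "Error".
// Names and the restart policy are illustrative assumptions, not the
// exact values used by the e2e framework.
package kubeletsketch

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func binFalsePod(name string) *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: name},
		Spec: corev1.PodSpec{
			// OnFailure keeps the terminated status observable instead of
			// the pod being replaced; assumption for this sketch.
			RestartPolicy: corev1.RestartPolicyOnFailure,
			Containers: []corev1.Container{{
				Name:    name,
				Image:   "docker.io/library/busybox:1.29",
				Command: []string{"/bin/false"},
			}},
		},
	}
}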
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[k8s\.io\]\sKubelet\swhen\sscheduling\sa\sbusybox\scommand\sthat\salways\sfails\sin\sa\spod\sshould\shave\san\sterminated\sreason\s\[NodeConformance\]\s\[Conformance\]$'
test/e2e/framework/framework.go:698 Nov 14 04:54:50.138: Timed out after 60.000s. Expected <*errors.errorString | 0xc0003458e0>: { s: "expected state to be terminated. Got pod status: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-11-14 04:53:54 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-11-14 04:53:54 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [bin-falsee557e594-74c9-40fb-bc3d-b0eb2d920c80]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-11-14 04:53:54 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [bin-falsee557e594-74c9-40fb-bc3d-b0eb2d920c80]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-11-14 04:53:50 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:10.248.0.5 PodIP: PodIPs:[] StartTime:2019-11-14 04:53:54 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:bin-falsee557e594-74c9-40fb-bc3d-b0eb2d920c80 State:{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:docker.io/library/busybox:1.29 ImageID: ContainerID: Started:0xc0005b0a0a}] QOSClass:BestEffort EphemeralContainerStatuses:[]}", } to be nil test/e2e/common/kubelet.go:123from junit_29.xml
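The "Timed out after 60.000s … to be nil" shape of this failure is the signature of a Gomega Eventually-style poll: the test repeatedly fetches the pod, returns an error while the first container's state is not yet terminated, and asserts that the error becomes nil within a 60s budget. A hedged sketch of that pattern follows, assuming client-go and Gomega; the helper name, intervals, and the context-taking Get signature are assumptions here, and the real assertion at test/e2e/common/kubelet.go:123 may be structured differently.

// Sketch: poll until the first container status reports a terminated
// state, producing the "expected state to be terminated" error seen
// above if the 60s budget runs out. All names are illustrative.
package kubeletsketch

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"

	"github.com/onsi/gomega"
)

func expectTerminated(g gomega.Gomega, c kubernetes.Interface, ns, name string) {
	g.Eventually(func() error {
		pod, err := c.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return err
		}
		if len(pod.Status.ContainerStatuses) == 0 ||
			pod.Status.ContainerStatuses[0].State.Terminated == nil {
			// This mirrors the error string reported in the failure above.
			return fmt.Errorf("expected state to be terminated. Got pod status: %+v", pod.Status)
		}
		return nil
	}, 60*time.Second, time.Second).Should(gomega.BeNil())
}

Note how this lines up with the run below: the kubelet events show the image pull starting at 04:54:31 and the container starting at 04:54:41, roughly 40–50s into the 60s window, while the pod status at 04:54:50 still reported Waiting/ContainerCreating, so the poll expired before the terminated state was ever published.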
[BeforeEach] [k8s.io] Kubelet test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Nov 14 04:53:49.576: INFO: >>> kubeConfig: /workspace/aks287781815/kubeconfig/kubeconfig.westus2.json STEP: Building a namespace api object, basename kubelet-test STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in kubelet-test-3491 STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet test/e2e/common/kubelet.go:37 [BeforeEach] when scheduling a busybox command that always fails in a pod test/e2e/common/kubelet.go:81 [It] should have an terminated reason [NodeConformance] [Conformance] test/e2e/framework/framework.go:698 Nov 14 04:54:50.138: FAIL: Timed out after 60.000s. Expected <*errors.errorString | 0xc0003458e0>: { s: "expected state to be terminated. Got pod status: {Phase:Pending Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-11-14 04:53:54 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-11-14 04:53:54 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [bin-falsee557e594-74c9-40fb-bc3d-b0eb2d920c80]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-11-14 04:53:54 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [bin-falsee557e594-74c9-40fb-bc3d-b0eb2d920c80]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-11-14 04:53:50 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:10.248.0.5 PodIP: PodIPs:[] StartTime:2019-11-14 04:53:54 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:bin-falsee557e594-74c9-40fb-bc3d-b0eb2d920c80 State:{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,} Running:nil Terminated:nil} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:docker.io/library/busybox:1.29 ImageID: ContainerID: Started:0xc0005b0a0a}] QOSClass:BestEffort EphemeralContainerStatuses:[]}", } to be nil [AfterEach] [k8s.io] Kubelet test/e2e/framework/framework.go:152 STEP: Collecting events from namespace "kubelet-test-3491". STEP: Found 5 events.
Nov 14 04:54:50.193: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for bin-falsee557e594-74c9-40fb-bc3d-b0eb2d920c80: {default-scheduler } Scheduled: Successfully assigned kubelet-test-3491/bin-falsee557e594-74c9-40fb-bc3d-b0eb2d920c80 to k8s-agentpool-23171212-vmss000001 Nov 14 04:54:50.193: INFO: At 2019-11-14 04:54:31 +0000 UTC - event for bin-falsee557e594-74c9-40fb-bc3d-b0eb2d920c80: {kubelet k8s-agentpool-23171212-vmss000001} Pulling: Pulling image "docker.io/library/busybox:1.29" Nov 14 04:54:50.193: INFO: At 2019-11-14 04:54:32 +0000 UTC - event for bin-falsee557e594-74c9-40fb-bc3d-b0eb2d920c80: {kubelet k8s-agentpool-23171212-vmss000001} Pulled: Successfully pulled image "docker.io/library/busybox:1.29" Nov 14 04:54:50.193: INFO: At 2019-11-14 04:54:36 +0000 UTC - event for bin-falsee557e594-74c9-40fb-bc3d-b0eb2d920c80: {kubelet k8s-agentpool-23171212-vmss000001} Created: Created container bin-falsee557e594-74c9-40fb-bc3d-b0eb2d920c80 Nov 14 04:54:50.193: INFO: At 2019-11-14 04:54:41 +0000 UTC - event for bin-falsee557e594-74c9-40fb-bc3d-b0eb2d920c80: {kubelet k8s-agentpool-23171212-vmss000001} Started: Started container bin-falsee557e594-74c9-40fb-bc3d-b0eb2d920c80 Nov 14 04:54:50.249: INFO: POD NODE PHASE GRACE CONDITIONS Nov 14 04:54:50.249: INFO: bin-falsee557e594-74c9-40fb-bc3d-b0eb2d920c80 k8s-agentpool-23171212-vmss000001 Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-11-14 04:53:54 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-11-14 04:53:54 +0000 UTC ContainersNotReady containers with unready status: [bin-falsee557e594-74c9-40fb-bc3d-b0eb2d920c80]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-11-14 04:53:54 +0000 UTC ContainersNotReady containers with unready status: [bin-falsee557e594-74c9-40fb-bc3d-b0eb2d920c80]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-11-14 04:53:50 +0000 UTC }] Nov 14 04:54:50.249: INFO: Nov 14 04:54:50.480: INFO: Logging node info for node k8s-agentpool-23171212-vmss000000 Nov 14 04:54:50.536: INFO: Node Info: &Node{ObjectMeta:{k8s-agentpool-23171212-vmss000000 /api/v1/nodes/k8s-agentpool-23171212-vmss000000 0f3bbebc-9d46-4ddd-a1dc-c93db8b52883 29812 0 2019-11-14 04:40:04 +0000 UTC <nil> <nil> map[agentpool:agentpool beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:Standard_D4s_v3 beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:westus2 failure-domain.beta.kubernetes.io/zone:westus2-1 kubernetes.azure.com/cluster:kubetest-9c63b39e-0695-11ea-a4cc-c60aac250e75 kubernetes.azure.com/role:agent kubernetes.io/arch:amd64 kubernetes.io/hostname:k8s-agentpool-23171212-vmss000000 kubernetes.io/os:linux kubernetes.io/role:agent node-role.kubernetes.io/agent: storageprofile:managed storagetier:Premium_LRS] 
map[csi.volume.kubernetes.io/nodeid:{"csi-hostpath-provisioning-2202":"k8s-agentpool-23171212-vmss000000","csi-hostpath-provisioning-8364":"k8s-agentpool-23171212-vmss000000","csi-hostpath-volume-8403":"k8s-agentpool-23171212-vmss000000","csi-hostpath-volume-expand-1206":"k8s-agentpool-23171212-vmss000000","csi-hostpath-volume-expand-2585":"k8s-agentpool-23171212-vmss000000","csi-hostpath-volume-expand-5498":"k8s-agentpool-23171212-vmss000000","csi-hostpath-volume-expand-6633":"k8s-agentpool-23171212-vmss000000","csi-mock-csi-mock-volumes-4558":"csi-mock-csi-mock-volumes-4558","csi-mock-csi-mock-volumes-6397":"csi-mock-csi-mock-volumes-6397","csi-mock-csi-mock-volumes-7486":"csi-mock-csi-mock-volumes-7486","csi-mock-csi-mock-volumes-7581":"csi-mock-csi-mock-volumes-7581","csi-mock-csi-mock-volumes-8512":"csi-mock-csi-mock-volumes-8512","csi-mock-csi-mock-volumes-9601":"csi-mock-csi-mock-volumes-9601"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] []},Spec:NodeSpec{PodCIDR:10.244.4.0/24,DoNotUse_ExternalID:,ProviderID:azure:///subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-9c63b39e-0695-11ea-a4cc-c60aac250e75/providers/Microsoft.Compute/virtualMachineScaleSets/k8s-agentpool-23171212-vmss/virtualMachines/0,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.4.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{4 0} {<nil>} 4 DecimalSI},ephemeral-storage: {{31036776448 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{16797569024 0} {<nil>} 16403876Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{4 0} {<nil>} 4 DecimalSI},ephemeral-storage: {{27933098757 0} {<nil>} 27933098757 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{16011137024 0} {<nil>} 15635876Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2019-11-14 04:40:48 +0000 UTC,LastTransitionTime:2019-11-14 04:40:48 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2019-11-14 04:54:48 +0000 UTC,LastTransitionTime:2019-11-14 04:39:50 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2019-11-14 04:54:48 +0000 UTC,LastTransitionTime:2019-11-14 04:39:50 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2019-11-14 04:54:48 +0000 UTC,LastTransitionTime:2019-11-14 04:39:50 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2019-11-14 04:54:48 +0000 UTC,LastTransitionTime:2019-11-14 04:40:22 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.248.0.4,},NodeAddress{Type:Hostname,Address:k8s-agentpool-23171212-vmss000000,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:359d6aea81114a07a8070169aad06c4a,SystemUUID:A77EC1C1-102D-514B-A3FC-E5E916EF17BD,BootID:fc99ebb5-9bcd-41e5-aad2-849e47da2eea,KernelVersion:4.15.0-1063-azure,OSImage:Ubuntu 16.04.6 LTS,ContainerRuntimeVersion:docker://3.0.6,KubeletVersion:v1.16.4-beta.0.1+d70a3ca08fe72a,KubeProxyVersion:v1.16.4-beta.0.1+d70a3ca08fe72a,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8sprow.azurecr.io/hyperkube-amd64@sha256:4c04f9ab0fa34bcbcb8ebfbced912f9b998c5d9c090fafdca92911d124fa339b k8sprow.azurecr.io/hyperkube-amd64:azure-e2e-1194831241233305600-197629b6],SizeBytes:604811790,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/volume/nfs@sha256:c2ad734346f608a5f7d69cfded93c4e8094069320657bd372d12ba21dea3ea71 gcr.io/kubernetes-e2e-test-images/volume/nfs:1.0],SizeBytes:225358913,},ContainerImage{Names:[httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 httpd:2.4.38-alpine],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/kubernetes-dashboard-amd64@sha256:0ae6b69432e78069c5ce2bcde0fe409c5c4d6f0f4d9cd50a17974fea38898747 k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.1],SizeBytes:121711221,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/agnhost@sha256:4057a5580c7b59c4fe10d8ab2732c9dec35eea80fd41f7bafc7bd5acc7edf727 gcr.io/kubernetes-e2e-test-images/agnhost:2.6],SizeBytes:57345321,},ContainerImage{Names:[quay.io/k8scsi/csi-provisioner@sha256:0efcb424f1dde9b9fb11a1a14f2e48ab47e1c3f08bc3a929990dcfcb1f7ab34f quay.io/k8scsi/csi-provisioner:v1.4.0-rc1],SizeBytes:54431016,},ContainerImage{Names:[quay.io/k8scsi/csi-snapshotter@sha256:e3d3e742e32d00488fdb401045b9b1d033d7ca0ab6e760f77b24750fc95e5f70 quay.io/k8scsi/csi-snapshotter:v2.0.0-rc1],SizeBytes:51703561,},ContainerImage{Names:[k8s.gcr.io/ip-masq-agent-amd64@sha256:269e0fb9d53fd37f7a135d6a55ea265a67279ba218aa148323f015cf70167340 k8s.gcr.io/ip-masq-agent-amd64:v2.3.0],SizeBytes:50144412,},ContainerImage{Names:[quay.io/k8scsi/csi-attacher@sha256:26fccd7a99d973845df1193b46ebdcc6ab8dc5f6e6be319750c471fce1742d13 quay.io/k8scsi/csi-attacher:v1.2.0],SizeBytes:46226754,},ContainerImage{Names:[quay.io/k8scsi/csi-attacher@sha256:0aba670b4d9d6b2e720bbf575d733156c676b693ca26501235444490300db838 quay.io/k8scsi/csi-attacher:v1.1.0],SizeBytes:42839085,},ContainerImage{Names:[quay.io/k8scsi/csi-resizer@sha256:7d46fb6eb8b890dc546029d1565d502b4a1d974d33625c6ee2bc7991b77fc1a1 quay.io/k8scsi/csi-resizer:v0.2.0],SizeBytes:42817100,},ContainerImage{Names:[quay.io/k8scsi/csi-resizer@sha256:f315c9042e56def3c05c6b04fe79ec9da6d39ddc557ca365a76cf35964ea08b6 quay.io/k8scsi/csi-resizer:v0.1.0],SizeBytes:42623056,},ContainerImage{Names:[k8s.gcr.io/coredns@sha256:e83beb5e43f8513fa735e77ffc5859640baea30a882a11cc75c4c3244a737d3c k8s.gcr.io/coredns:1.5.0],SizeBytes:42488424,},ContainerImage{Names:[quay.io/k8scsi/hostpathplugin@sha256:b4826e492fc1762fceaf9726f41575ca0a4567864d3d235da874818de18039de quay.io/k8scsi/hostpathplugin:v1.2.0-rc5],SizeBytes:28761497,},ContainerImage{Names:[mcr.microsoft.com/k8s/flexvolume/keyvault-flexvolume@sha256:4fd30d43947d4a54fc89ead7985beecfd3c9b2a93a0655a373b1608ab90bd5af 
mcr.microsoft.com/k8s/flexvolume/keyvault-flexvolume:v0.0.7],SizeBytes:22909487,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/echoserver@sha256:e9ba514b896cdf559eef8788b66c2c3ee55f3572df617647b4b0d8b6bf81cf19 gcr.io/kubernetes-e2e-test-images/echoserver:2.2],SizeBytes:21692741,},ContainerImage{Names:[quay.io/k8scsi/mock-driver@sha256:e0eed916b7d970bad2b7d9875f9ad16932f987f0f3d91ec5d86da68b0b5cc9d1 quay.io/k8scsi/mock-driver:v2.1.0],SizeBytes:16226335,},ContainerImage{Names:[nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 nginx:1.14-alpine],SizeBytes:16032814,},ContainerImage{Names:[quay.io/k8scsi/csi-node-driver-registrar@sha256:13daf82fb99e951a4bff8ae5fc7c17c3a8fe7130be6400990d8f6076c32d4599 quay.io/k8scsi/csi-node-driver-registrar:v1.1.0],SizeBytes:15815995,},ContainerImage{Names:[quay.io/k8scsi/livenessprobe@sha256:dde617756e0f602adc566ab71fd885f1dad451ad3fb063ac991c95a2ff47aea5 quay.io/k8scsi/livenessprobe:v1.1.0],SizeBytes:14967303,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/dnsutils@sha256:2abeee84efb79c14d731966e034af33bf324d3b26ca28497555511ff094b3ddd gcr.io/kubernetes-e2e-test-images/dnsutils:1.1],SizeBytes:9349974,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nonewprivs@sha256:10066e9039219449fe3c81f38fe01928f87914150768ab81b62a468e51fa7411 gcr.io/kubernetes-e2e-test-images/nonewprivs:1.0],SizeBytes:6757579,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nautilus@sha256:33a732d4c42a266912a5091598a0f07653c9134db4b8d571690d8afd509e0bfc gcr.io/kubernetes-e2e-test-images/nautilus:1.0],SizeBytes:4753501,},ContainerImage{Names:[gcr.io/authenticated-image-pulling/alpine@sha256:7ff177862cb50c602bfe81f805969412e619c054a2bbead977d0c276988aa4a0 gcr.io/authenticated-image-pulling/alpine:3.7],SizeBytes:4206620,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/mounttest@sha256:c0bd6f0755f42af09a68c9a47fb993136588a76b3200ec305796b60d629d85d2 gcr.io/kubernetes-e2e-test-images/mounttest:1.0],SizeBytes:1563521,},ContainerImage{Names:[mcr.microsoft.com/k8s/flexvolume/blobfuse-flexvolume@sha256:23d8c6033f02a1ecad05127ebdc931bb871264228661bc122704b0974e4d9fdd mcr.microsoft.com/k8s/flexvolume/blobfuse-flexvolume:1.0.8],SizeBytes:1159025,},ContainerImage{Names:[busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 busybox:1.29],SizeBytes:1154361,},ContainerImage{Names:[k8s.gcr.io/pause-amd64@sha256:59eec8837a4d942cc19a52b8c09ea75121acc38114a2c68b98983ce9356b8610 k8s.gcr.io/pause@sha256:f78411e19d84a252e53bff71a4407a5686c46983a2c2eeed83929b888179acea k8s.gcr.io/pause-amd64:3.1 k8s.gcr.io/pause:3.1],SizeBytes:742472,},},VolumesInUse:[kubernetes.io/csi/csi-hostpath-volume-expand-6633^bb8f80e2-069a-11ea-af09-000d3ac2fa68],VolumesAttached:[]AttachedVolume{AttachedVolume{Name:kubernetes.io/csi/csi-hostpath-volume-expand-6633^bb8f80e2-069a-11ea-af09-000d3ac2fa68,DevicePath:,},},Config:nil,},} Nov 14 04:54:50.537: INFO: Logging kubelet events for node k8s-agentpool-23171212-vmss000000 Nov 14 04:54:50.596: INFO: Logging pods the kubelet thinks is on node k8s-agentpool-23171212-vmss000000 Nov 14 04:54:50.717: INFO: csi-hostpath-provisioner-0 started at 2019-11-14 04:53:16 +0000 UTC (0+1 container statuses recorded) Nov 14 04:54:50.717: INFO: Container csi-provisioner ready: true, restart count 0 Nov 14 04:54:50.717: INFO: keyvault-flexvolume-ljqsq started at 2019-11-14 04:40:49 +0000 UTC (0+1 container statuses recorded) Nov 14 04:54:50.717: INFO: Container keyvault-flexvolume ready: true, restart 
count 0 Nov 14 04:54:50.717: INFO: kubernetes-dashboard-65966766b9-b8ps7 started at 2019-11-14 04:40:49 +0000 UTC (0+1 container statuses recorded) Nov 14 04:54:50.717: INFO: Container kubernetes-dashboard ready: true, restart count 0 Nov 14 04:54:50.717: INFO: csi-hostpath-attacher-0 started at 2019-11-14 04:53:16 +0000 UTC (0+1 container statuses recorded) Nov 14 04:54:50.717: INFO: Container csi-attacher ready: true, restart count 0 Nov 14 04:54:50.717: INFO: csi-hostpath-provisioner-0 started at 2019-11-14 04:50:53 +0000 UTC (0+1 container statuses recorded) Nov 14 04:54:50.717: INFO: Container csi-provisioner ready: true, restart count 0 Nov 14 04:54:50.717: INFO: csi-hostpathplugin-0 started at 2019-11-14 04:53:36 +0000 UTC (0+3 container statuses recorded) Nov 14 04:54:50.717: INFO: Container hostpath ready: true, restart count 0 Nov 14 04:54:50.717: INFO: Container liveness-probe ready: true, restart count 0 Nov 14 04:54:50.717: INFO: Container node-driver-registrar ready: true, restart count 0 Nov 14 04:54:50.717: INFO: ss2-1 started at 2019-11-14 04:53:27 +0000 UTC (0+1 container statuses recorded) Nov 14 04:54:50.717: INFO: Container webserver ready: true, restart count 0 Nov 14 04:54:50.717: INFO: csi-snapshotter-0 started at 2019-11-14 04:51:54 +0000 UTC (0+1 container statuses recorded) Nov 14 04:54:50.717: INFO: Container csi-snapshotter ready: true, restart count 0 Nov 14 04:54:50.717: INFO: csi-hostpath-attacher-0 started at 2019-11-14 04:53:37 +0000 UTC (0+1 container statuses recorded) Nov 14 04:54:50.717: INFO: Container csi-attacher ready: true, restart count 0 Nov 14 04:54:50.717: INFO: azure-ip-masq-agent-dgg69 started at 2019-11-14 04:40:27 +0000 UTC (0+1 container statuses recorded) Nov 14 04:54:50.717: INFO: Container azure-ip-masq-agent ready: true, restart count 0 Nov 14 04:54:50.717: INFO: csi-hostpath-attacher-0 started at 2019-11-14 04:50:53 +0000 UTC (0+1 container statuses recorded) Nov 14 04:54:50.717: INFO: Container csi-attacher ready: true, restart count 0 Nov 14 04:54:50.717: INFO: csi-hostpath-resizer-0 started at 2019-11-14 04:50:54 +0000 UTC (0+1 container statuses recorded) Nov 14 04:54:50.717: INFO: Container csi-resizer ready: true, restart count 0 Nov 14 04:54:50.717: INFO: pvc-datasource-writer-7rbg4 started at 2019-11-14 04:50:54 +0000 UTC (0+1 container statuses recorded) Nov 14 04:54:50.717: INFO: Container volume-tester ready: false, restart count 0 Nov 14 04:54:50.717: INFO: csi-hostpath-attacher-0 started at 2019-11-14 04:51:53 +0000 UTC (0+1 container statuses recorded) Nov 14 04:54:50.717: INFO: Container csi-attacher ready: true, restart count 0 Nov 14 04:54:50.717: INFO: csi-hostpath-provisioner-0 started at 2019-11-14 04:51:54 +0000 UTC (0+1 container statuses recorded) Nov 14 04:54:50.717: INFO: Container csi-provisioner ready: true, restart count 0 Nov 14 04:54:50.717: INFO: csi-hostpath-resizer-0 started at 2019-11-14 04:51:54 +0000 UTC (0+1 container statuses recorded) Nov 14 04:54:50.717: INFO: Container csi-resizer ready: true, restart count 0 Nov 14 04:54:50.717: INFO: pod-subpath-test-hostpathsymlink-v8l2 started at 2019-11-14 04:54:34 +0000 UTC (2+2 container statuses recorded) Nov 14 04:54:50.717: INFO: Init container init-volume-hostpathsymlink-v8l2 ready: true, restart count 0 Nov 14 04:54:50.717: INFO: Init container test-init-subpath-hostpathsymlink-v8l2 ready: true, restart count 0 Nov 14 04:54:50.717: INFO: Container test-container-subpath-hostpathsymlink-v8l2 ready: false, restart count 0 Nov 14 04:54:50.717: INFO: 
Container test-container-volume-hostpathsymlink-v8l2 ready: false, restart count 0 Nov 14 04:54:50.717: INFO: kube-proxy-cdq9f started at 2019-11-14 04:40:27 +0000 UTC (0+1 container statuses recorded) Nov 14 04:54:50.717: INFO: Container kube-proxy ready: true, restart count 0 Nov 14 04:54:50.717: INFO: csi-snapshotter-0 started at 2019-11-14 04:53:38 +0000 UTC (0+1 container statuses recorded) Nov 14 04:54:50.717: INFO: Container csi-snapshotter ready: true, restart count 0 Nov 14 04:54:50.717: INFO: security-context-06568e16-f019-4982-a45b-c9957222ee01 started at 2019-11-14 04:53:44 +0000 UTC (0+1 container statuses recorded) Nov 14 04:54:50.717: INFO: Container write-pod ready: true, restart count 0 Nov 14 04:54:50.717: INFO: ss2-1 started at 2019-11-14 04:53:07 +0000 UTC (0+1 container statuses recorded) Nov 14 04:54:50.717: INFO: Container webserver ready: true, restart count 0 Nov 14 04:54:50.717: INFO: csi-snapshotter-0 started at 2019-11-14 04:50:54 +0000 UTC (0+1 container statuses recorded) Nov 14 04:54:50.717: INFO: Container csi-snapshotter ready: true, restart count 0 Nov 14 04:54:50.717: INFO: csi-hostpath-provisioner-0 started at 2019-11-14 04:53:37 +0000 UTC (0+1 container statuses recorded) Nov 14 04:54:50.717: INFO: Container csi-provisioner ready: true, restart count 0 Nov 14 04:54:50.717: INFO: csi-hostpathplugin-0 started at 2019-11-14 04:54:43 +0000 UTC (0+0 container statuses recorded) Nov 14 04:54:50.717: INFO: pod-subpath-test-local-preprovisionedpv-2mrx started at 2019-11-14 04:54:23 +0000 UTC (2+2 container statuses recorded) Nov 14 04:54:50.717: INFO: Init container init-volume-local-preprovisionedpv-2mrx ready: true, restart count 0 Nov 14 04:54:50.717: INFO: Init container test-init-subpath-local-preprovisionedpv-2mrx ready: true, restart count 0 Nov 14 04:54:50.717: INFO: Container test-container-subpath-local-preprovisionedpv-2mrx ready: false, restart count 0 Nov 14 04:54:50.717: INFO: Container test-container-volume-local-preprovisionedpv-2mrx ready: false, restart count 0 Nov 14 04:54:50.717: INFO: blobfuse-flexvol-installer-6xhz6 started at 2019-11-14 04:40:49 +0000 UTC (0+1 container statuses recorded) Nov 14 04:54:50.717: INFO: Container blobfuse-flexvol-installer ready: true, restart count 0 Nov 14 04:54:50.717: INFO: hostexec-k8s-agentpool-23171212-vmss000000 started at 2019-11-14 04:53:47 +0000 UTC (0+1 container statuses recorded) Nov 14 04:54:50.717: INFO: Container agnhost ready: true, restart count 0 Nov 14 04:54:50.717: INFO: coredns-87f5d796-k7mr9 started at 2019-11-14 04:40:27 +0000 UTC (0+1 container statuses recorded) Nov 14 04:54:50.717: INFO: Container coredns ready: true, restart count 0 Nov 14 04:54:50.717: INFO: csi-hostpath-resizer-0 started at 2019-11-14 04:53:17 +0000 UTC (0+1 container statuses recorded) Nov 14 04:54:50.717: INFO: Container csi-resizer ready: true, restart count 0 Nov 14 04:54:50.717: INFO: csi-snapshotter-0 started at 2019-11-14 04:53:17 +0000 UTC (0+1 container statuses recorded) Nov 14 04:54:50.717: INFO: Container csi-snapshotter ready: true, restart count 0 Nov 14 04:54:50.717: INFO: csi-hostpathplugin-0 started at 2019-11-14 04:54:40 +0000 UTC (0+0 container statuses recorded) Nov 14 04:54:50.717: INFO: csi-hostpath-resizer-0 started at 2019-11-14 04:53:38 +0000 UTC (0+1 container statuses recorded) Nov 14 04:54:50.717: INFO: Container csi-resizer ready: true, restart count 0 Nov 14 04:54:50.717: INFO: csi-hostpathplugin-0 started at 2019-11-14 04:54:43 +0000 UTC (0+0 container statuses recorded) W1114 
04:54:50.773762 92623 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Nov 14 04:54:50.922: INFO: Latency metrics for node k8s-agentpool-23171212-vmss000000 Nov 14 04:54:50.922: INFO: Logging node info for node k8s-agentpool-23171212-vmss000001 Nov 14 04:54:50.978: INFO: Node Info: &Node{ObjectMeta:{k8s-agentpool-23171212-vmss000001 /api/v1/nodes/k8s-agentpool-23171212-vmss000001 e9c1f552-b95b-4548-9ecd-37a7f1925e75 29710 0 2019-11-14 04:40:09 +0000 UTC <nil> <nil> map[agentpool:agentpool beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:Standard_D4s_v3 beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:westus2 failure-domain.beta.kubernetes.io/zone:westus2-2 kubernetes.azure.com/cluster:kubetest-9c63b39e-0695-11ea-a4cc-c60aac250e75 kubernetes.azure.com/role:agent kubernetes.io/arch:amd64 kubernetes.io/hostname:k8s-agentpool-23171212-vmss000001 kubernetes.io/os:linux kubernetes.io/role:agent node-role.kubernetes.io/agent: storageprofile:managed storagetier:Premium_LRS] map[csi.volume.kubernetes.io/nodeid:{"csi-hostpath-ephemeral-6971":"k8s-agentpool-23171212-vmss000001","csi-hostpath-provisioning-3033":"k8s-agentpool-23171212-vmss000001","csi-hostpath-provisioning-3310":"k8s-agentpool-23171212-vmss000001","csi-hostpath-provisioning-4400":"k8s-agentpool-23171212-vmss000001","csi-hostpath-volume-expand-2485":"k8s-agentpool-23171212-vmss000001","csi-mock-csi-mock-volumes-3324":"csi-mock-csi-mock-volumes-3324","csi-mock-csi-mock-volumes-3770":"csi-mock-csi-mock-volumes-3770","csi-mock-csi-mock-volumes-9859":"csi-mock-csi-mock-volumes-9859"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] []},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUse_ExternalID:,ProviderID:azure:///subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-9c63b39e-0695-11ea-a4cc-c60aac250e75/providers/Microsoft.Compute/virtualMachineScaleSets/k8s-agentpool-23171212-vmss/virtualMachines/1,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{4 0} {<nil>} 4 DecimalSI},ephemeral-storage: {{31036776448 0} {<nil>} BinarySI},example.com/fakecpu: {{800 0} {<nil>} 800 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{16797569024 0} {<nil>} 16403876Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{4 0} {<nil>} 4 DecimalSI},ephemeral-storage: {{27933098757 0} {<nil>} 27933098757 DecimalSI},example.com/fakecpu: {{800 0} {<nil>} 800 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{16011137024 0} {<nil>} 15635876Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2019-11-14 04:40:48 +0000 UTC,LastTransitionTime:2019-11-14 04:40:48 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2019-11-14 04:54:43 +0000 UTC,LastTransitionTime:2019-11-14 04:40:09 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2019-11-14 04:54:43 +0000 UTC,LastTransitionTime:2019-11-14 04:40:09 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk 
pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2019-11-14 04:54:43 +0000 UTC,LastTransitionTime:2019-11-14 04:40:09 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2019-11-14 04:54:43 +0000 UTC,LastTransitionTime:2019-11-14 04:40:22 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.248.0.5,},NodeAddress{Type:Hostname,Address:k8s-agentpool-23171212-vmss000001,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:639707efd7a74ac4bca6a608e99a6715,SystemUUID:CACA620B-0C7C-7040-A716-91F766CA5A2F,BootID:9fabe02f-4e56-4162-b5c5-2e2733911b4f,KernelVersion:4.15.0-1063-azure,OSImage:Ubuntu 16.04.6 LTS,ContainerRuntimeVersion:docker://3.0.6,KubeletVersion:v1.16.4-beta.0.1+d70a3ca08fe72a,KubeProxyVersion:v1.16.4-beta.0.1+d70a3ca08fe72a,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8sprow.azurecr.io/hyperkube-amd64@sha256:4c04f9ab0fa34bcbcb8ebfbced912f9b998c5d9c090fafdca92911d124fa339b k8sprow.azurecr.io/hyperkube-amd64:azure-e2e-1194831241233305600-197629b6],SizeBytes:604811790,},ContainerImage{Names:[quay.io/kubernetes_incubator/nfs-provisioner@sha256:df762117e3c891f2d2ddff46ecb0776ba1f9f3c44cfd7739b0683bcd7a7954a8 quay.io/kubernetes_incubator/nfs-provisioner:v2.2.2],SizeBytes:391772778,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa k8s.gcr.io/etcd:3.3.15],SizeBytes:246640776,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/volume/nfs@sha256:c2ad734346f608a5f7d69cfded93c4e8094069320657bd372d12ba21dea3ea71 gcr.io/kubernetes-e2e-test-images/volume/nfs:1.0],SizeBytes:225358913,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0],SizeBytes:195659796,},ContainerImage{Names:[httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 httpd:2.4.38-alpine],SizeBytes:123781643,},ContainerImage{Names:[gcr.io/kubernetes-helm/tiller@sha256:f6d8f4ab9ba993b5f5b60a6edafe86352eabe474ffeb84cb6c79b8866dce45d1 gcr.io/kubernetes-helm/tiller:v2.11.0],SizeBytes:71821984,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/sample-apiserver@sha256:1bafcc6fb1aa990b487850adba9cadc020e42d7905aa8a30481182a477ba24b0 gcr.io/kubernetes-e2e-test-images/sample-apiserver:1.10],SizeBytes:61365829,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/agnhost@sha256:4057a5580c7b59c4fe10d8ab2732c9dec35eea80fd41f7bafc7bd5acc7edf727 gcr.io/kubernetes-e2e-test-images/agnhost:2.6],SizeBytes:57345321,},ContainerImage{Names:[quay.io/k8scsi/csi-provisioner@sha256:0efcb424f1dde9b9fb11a1a14f2e48ab47e1c3f08bc3a929990dcfcb1f7ab34f quay.io/k8scsi/csi-provisioner:v1.4.0-rc1],SizeBytes:54431016,},ContainerImage{Names:[quay.io/k8scsi/csi-snapshotter@sha256:e3d3e742e32d00488fdb401045b9b1d033d7ca0ab6e760f77b24750fc95e5f70 quay.io/k8scsi/csi-snapshotter:v2.0.0-rc1],SizeBytes:51703561,},ContainerImage{Names:[k8s.gcr.io/ip-masq-agent-amd64@sha256:269e0fb9d53fd37f7a135d6a55ea265a67279ba218aa148323f015cf70167340 
k8s.gcr.io/ip-masq-agent-amd64:v2.3.0],SizeBytes:50144412,},ContainerImage{Names:[quay.io/k8scsi/csi-attacher@sha256:26fccd7a99d973845df1193b46ebdcc6ab8dc5f6e6be319750c471fce1742d13 quay.io/k8scsi/csi-attacher:v1.2.0],SizeBytes:46226754,},ContainerImage{Names:[quay.io/k8scsi/csi-attacher@sha256:0aba670b4d9d6b2e720bbf575d733156c676b693ca26501235444490300db838 quay.io/k8scsi/csi-attacher:v1.1.0],SizeBytes:42839085,},ContainerImage{Names:[quay.io/k8scsi/csi-resizer@sha256:7d46fb6eb8b890dc546029d1565d502b4a1d974d33625c6ee2bc7991b77fc1a1 quay.io/k8scsi/csi-resizer:v0.2.0],SizeBytes:42817100,},ContainerImage{Names:[quay.io/k8scsi/csi-resizer@sha256:f315c9042e56def3c05c6b04fe79ec9da6d39ddc557ca365a76cf35964ea08b6 quay.io/k8scsi/csi-resizer:v0.1.0],SizeBytes:42623056,},ContainerImage{Names:[k8s.gcr.io/metrics-server-amd64@sha256:49a9f12f7067d11f42c803dbe61ed2c1299959ad85cb315b25ff7eef8e6b8892 k8s.gcr.io/metrics-server-amd64:v0.2.1],SizeBytes:42541759,},ContainerImage{Names:[gcr.io/google-containers/debian-base@sha256:6966a0aedd7592c18ff2dd803c08bd85780ee19f5e3a2e7cf908a4cd837afcde gcr.io/google-containers/debian-base:0.4.1],SizeBytes:42323657,},ContainerImage{Names:[redis@sha256:50899ea1ceed33fa03232f3ac57578a424faa1742c1ac9c7a7bdb95cdf19b858 redis:5.0.5-alpine],SizeBytes:29331594,},ContainerImage{Names:[quay.io/k8scsi/hostpathplugin@sha256:b4826e492fc1762fceaf9726f41575ca0a4567864d3d235da874818de18039de quay.io/k8scsi/hostpathplugin:v1.2.0-rc5],SizeBytes:28761497,},ContainerImage{Names:[mcr.microsoft.com/k8s/flexvolume/keyvault-flexvolume@sha256:4fd30d43947d4a54fc89ead7985beecfd3c9b2a93a0655a373b1608ab90bd5af mcr.microsoft.com/k8s/flexvolume/keyvault-flexvolume:v0.0.7],SizeBytes:22909487,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/echoserver@sha256:e9ba514b896cdf559eef8788b66c2c3ee55f3572df617647b4b0d8b6bf81cf19 gcr.io/kubernetes-e2e-test-images/echoserver:2.2],SizeBytes:21692741,},ContainerImage{Names:[quay.io/k8scsi/mock-driver@sha256:e0eed916b7d970bad2b7d9875f9ad16932f987f0f3d91ec5d86da68b0b5cc9d1 quay.io/k8scsi/mock-driver:v2.1.0],SizeBytes:16226335,},ContainerImage{Names:[nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 nginx:1.14-alpine],SizeBytes:16032814,},ContainerImage{Names:[quay.io/k8scsi/csi-node-driver-registrar@sha256:13daf82fb99e951a4bff8ae5fc7c17c3a8fe7130be6400990d8f6076c32d4599 quay.io/k8scsi/csi-node-driver-registrar:v1.1.0],SizeBytes:15815995,},ContainerImage{Names:[quay.io/k8scsi/livenessprobe@sha256:dde617756e0f602adc566ab71fd885f1dad451ad3fb063ac991c95a2ff47aea5 quay.io/k8scsi/livenessprobe:v1.1.0],SizeBytes:14967303,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/dnsutils@sha256:2abeee84efb79c14d731966e034af33bf324d3b26ca28497555511ff094b3ddd gcr.io/kubernetes-e2e-test-images/dnsutils:1.1],SizeBytes:9349974,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nonewprivs@sha256:10066e9039219449fe3c81f38fe01928f87914150768ab81b62a468e51fa7411 gcr.io/kubernetes-e2e-test-images/nonewprivs:1.0],SizeBytes:6757579,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nautilus@sha256:33a732d4c42a266912a5091598a0f07653c9134db4b8d571690d8afd509e0bfc gcr.io/kubernetes-e2e-test-images/nautilus:1.0],SizeBytes:4753501,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/test-webserver@sha256:7f93d6e32798ff28bc6289254d0c2867fe2c849c8e46edc50f8624734309812e 
gcr.io/kubernetes-e2e-test-images/test-webserver:1.0],SizeBytes:4732240,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/mounttest@sha256:c0bd6f0755f42af09a68c9a47fb993136588a76b3200ec305796b60d629d85d2 gcr.io/kubernetes-e2e-test-images/mounttest:1.0],SizeBytes:1563521,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/mounttest-user@sha256:17319ca525ee003681fccf7e8c6b1b910ff4f49b653d939ac7f9b6e7c463933d gcr.io/kubernetes-e2e-test-images/mounttest-user:1.0],SizeBytes:1450451,},ContainerImage{Names:[mcr.microsoft.com/k8s/flexvolume/blobfuse-flexvolume@sha256:23d8c6033f02a1ecad05127ebdc931bb871264228661bc122704b0974e4d9fdd mcr.microsoft.com/k8s/flexvolume/blobfuse-flexvolume:1.0.8],SizeBytes:1159025,},ContainerImage{Names:[busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 busybox:1.29],SizeBytes:1154361,},ContainerImage{Names:[k8s.gcr.io/pause-amd64@sha256:59eec8837a4d942cc19a52b8c09ea75121acc38114a2c68b98983ce9356b8610 k8s.gcr.io/pause@sha256:f78411e19d84a252e53bff71a4407a5686c46983a2c2eeed83929b888179acea k8s.gcr.io/pause-amd64:3.1 k8s.gcr.io/pause:3.1],SizeBytes:742472,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Nov 14 04:54:50.978: INFO: Logging kubelet events for node k8s-agentpool-23171212-vmss000001 Nov 14 04:54:51.038: INFO: Logging pods the kubelet thinks is on node k8s-agentpool-23171212-vmss000001 Nov 14 04:54:51.125: INFO: downwardapi-volume-aa91b37f-436b-4bfe-9322-393bf1619731 started at 2019-11-14 04:54:21 +0000 UTC (0+1 container statuses recorded) Nov 14 04:54:51.125: INFO: Container client-container ready: false, restart count 0 Nov 14 04:54:51.125: INFO: ss2-0 started at 2019-11-14 04:52:44 +0000 UTC (0+1 container statuses recorded) Nov 14 04:54:51.125: INFO: Container webserver ready: true, restart count 0 Nov 14 04:54:51.125: INFO: ss2-0 started at 2019-11-14 04:53:12 +0000 UTC (0+1 container statuses recorded) Nov 14 04:54:51.125: INFO: Container webserver ready: true, restart count 0 Nov 14 04:54:51.125: INFO: kube-proxy-ng7z8 started at 2019-11-14 04:40:27 +0000 UTC (0+1 container statuses recorded) Nov 14 04:54:51.125: INFO: Container kube-proxy ready: true, restart count 0 Nov 14 04:54:51.125: INFO: downward-api-f7a2bc99-e044-4176-a95e-80890fa852c7 started at 2019-11-14 04:54:18 +0000 UTC (0+1 container statuses recorded) Nov 14 04:54:51.125: INFO: Container dapi-container ready: false, restart count 0 Nov 14 04:54:51.125: INFO: hostexec-k8s-agentpool-23171212-vmss000001 started at 2019-11-14 04:53:08 +0000 UTC (0+1 container statuses recorded) Nov 14 04:54:51.125: INFO: Container agnhost ready: true, restart count 0 Nov 14 04:54:51.125: INFO: pod-subpath-test-configmap-8t8x started at 2019-11-14 04:53:53 +0000 UTC (0+1 container statuses recorded) Nov 14 04:54:51.125: INFO: Container test-container-subpath-configmap-8t8x ready: true, restart count 0 Nov 14 04:54:51.126: INFO: busybox-host-aliasese1468a1f-ed82-40e1-ac46-33c91b10f88b started at 2019-11-14 04:53:23 +0000 UTC (0+1 container statuses recorded) Nov 14 04:54:51.126: INFO: Container busybox-host-aliasese1468a1f-ed82-40e1-ac46-33c91b10f88b ready: true, restart count 0 Nov 14 04:54:51.126: INFO: external-provisioner-psrp2 started at 2019-11-14 04:51:42 +0000 UTC (0+1 container statuses recorded) Nov 14 04:54:51.126: INFO: Container nfs-provisioner ready: false, restart count 0 Nov 14 04:54:51.126: INFO: local-injector started at 2019-11-14 04:53:24 +0000 UTC (0+1 container statuses recorded) Nov 14 04:54:51.126: INFO: Container 
local-injector ready: true, restart count 0 Nov 14 04:54:51.126: INFO: pod-secrets-43072a86-22c1-4f43-af43-52a8e723aac1 started at 2019-11-14 04:52:16 +0000 UTC (0+3 container statuses recorded) Nov 14 04:54:51.126: INFO: Container creates-volume-test ready: true, restart count 0 Nov 14 04:54:51.126: INFO: Container dels-volume-test ready: true, restart count 0 Nov 14 04:54:51.126: INFO: Container upds-volume-test ready: true, restart count 0 Nov 14 04:54:51.126: INFO: pod-with-poststart-http-hook started at 2019-11-14 04:54:19 +0000 UTC (0+1 container statuses recorded) Nov 14 04:54:51.126: INFO: Container pod-with-poststart-http-hook ready: false, restart count 0 Nov 14 04:54:51.126: INFO: azure-ip-masq-agent-mcg7w started at 2019-11-14 04:40:26 +0000 UTC (0+1 container statuses recorded) Nov 14 04:54:51.126: INFO: Container azure-ip-masq-agent ready: true, restart count 0 Nov 14 04:54:51.126: INFO: metrics-server-58ff8c5ddf-h7jqs started at 2019-11-14 04:40:50 +0000 UTC (0+1 container statuses recorded) Nov 14 04:54:51.126: INFO: Container metrics-server ready: true, restart count 0 Nov 14 04:54:51.126: INFO: pod-handle-http-request started at 2019-11-14 04:53:32 +0000 UTC (0+1 container statuses recorded) Nov 14 04:54:51.126: INFO: Container pod-handle-http-request ready: true, restart count 0 Nov 14 04:54:51.126: INFO: external-provisioner-86l4g started at 2019-11-14 04:53:36 +0000 UTC (0+1 container statuses recorded) Nov 14 04:54:51.126: INFO: Container nfs-provisioner ready: true, restart count 0 Nov 14 04:54:51.126: INFO: ss2-2 started at 2019-11-14 04:53:32 +0000 UTC (0+1 container statuses recorded) Nov 14 04:54:51.126: INFO: Container webserver ready: true, restart count 0 Nov 14 04:54:51.126: INFO: pod-subpath-test-local-preprovisionedpv-ptqj started at 2019-11-14 04:53:39 +0000 UTC (2+1 container statuses recorded) Nov 14 04:54:51.126: INFO: Init container init-volume-local-preprovisionedpv-ptqj ready: true, restart count 0 Nov 14 04:54:51.126: INFO: Init container test-init-volume-local-preprovisionedpv-ptqj ready: false, restart count 0 Nov 14 04:54:51.126: INFO: Container test-container-subpath-local-preprovisionedpv-ptqj ready: false, restart count 0 Nov 14 04:54:51.126: INFO: ss2-2 started at 2019-11-14 04:53:41 +0000 UTC (0+1 container statuses recorded) Nov 14 04:54:51.126: INFO: Container webserver ready: true, restart count 0 Nov 14 04:54:51.126: INFO: blobfuse-flexvol-installer-ktdjj started at 2019-11-14 04:40:49 +0000 UTC (0+1 container statuses recorded) Nov 14 04:54:51.126: INFO: Container blobfuse-flexvol-installer ready: true, restart count 0 Nov 14 04:54:51.126: INFO: tiller-deploy-7559b6b885-vkxml started at 2019-11-14 04:40:50 +0000 UTC (0+1 container statuses recorded) Nov 14 04:54:51.126: INFO: Container tiller ready: true, restart count 0 Nov 14 04:54:51.126: INFO: rs-pod1-h6c77 started at 2019-11-14 04:53:48 +0000 UTC (0+1 container statuses recorded) Nov 14 04:54:51.126: INFO: Container pod1 ready: false, restart count 0 Nov 14 04:54:51.126: INFO: rs-pod1-zkjdq started at 2019-11-14 04:53:46 +0000 UTC (0+1 container statuses recorded) Nov 14 04:54:51.126: INFO: Container pod1 ready: false, restart count 0 Nov 14 04:54:51.126: INFO: bin-falsee557e594-74c9-40fb-bc3d-b0eb2d920c80 started at 2019-11-14 04:53:54 +0000 UTC (0+1 container statuses recorded) Nov 14 04:54:51.126: INFO: Container bin-falsee557e594-74c9-40fb-bc3d-b0eb2d920c80 ready: false, restart count 0 Nov 14 04:54:51.126: INFO: metadata-volume-c84ae3d5-97a5-4cb7-8fe3-5d5d666a05da started at 
2019-11-14 04:54:00 +0000 UTC (0+1 container statuses recorded) Nov 14 04:54:51.126: INFO: Container client-container ready: false, restart count 0 Nov 14 04:54:51.126: INFO: keyvault-flexvolume-2g62m started at 2019-11-14 04:40:49 +0000 UTC (0+1 container statuses recorded) Nov 14 04:54:51.126: INFO: Container keyvault-flexvolume ready: true, restart count 0 Nov 14 04:54:51.126: INFO: rs-pod1-qbt2h started at 2019-11-14 04:53:50 +0000 UTC (0+1 container statuses recorded) Nov 14 04:54:51.126: INFO: Container pod1 ready: true, restart count 0 Nov 14 04:54:51.126: INFO: rs-pod1-qvw5b started at 2019-11-14 04:53:51 +0000 UTC (0+1 container statuses recorded) Nov 14 04:54:51.126: INFO: Container pod1 ready: true, restart count 0 Nov 14 04:54:51.126: INFO: metadata-volume-81f3141a-e2db-4574-9386-0df8ae75e38d started at 2019-11-14 04:54:00 +0000 UTC (0+1 container statuses recorded) Nov 14 04:54:51.126: INFO: Container client-container ready: false, restart count 0 Nov 14 04:54:51.126: INFO: rs-pod1-6rq9f started at 2019-11-14 04:53:49 +0000 UTC (0+1 container statuses recorded) Nov 14 04:54:51.126: INFO: Container pod1 ready: true, restart count 0 Nov 14 04:54:51.126: INFO: hostexec-k8s-agentpool-23171212-vmss000001 started at 2019-11-14 04:53:18 +0000 UTC (0+1 container statuses recorded) Nov 14 04:54:51.126: INFO: Container agnhost ready: true, restart count 0 Nov 14 04:54:51.126: INFO: pod-configmaps-0f0e6626-21fa-4202-9d8d-a7085374f1eb started at 2019-11-14 04:54:23 +0000 UTC (0+1 container statuses recorded) Nov 14 04:54:51.126: INFO: Container configmap-volume-test ready: false, restart count 0 Nov 14 04:54:51.126: INFO: exec-volume-test-nfs-dynamicpv-9f4x started at <nil> (0+0 container statuses recorded) Nov 14 04:54:51.126: INFO: pod-1c0b5786-d6cf-411c-b1ec-0ca9fade1994 started at 2019-11-14 04:53:55 +0000 UTC (0+1 container statuses recorded) Nov 14 04:54:51.126: INFO: Container test-container ready: false, restart count 0 Nov 14 04:54:51.126: INFO: replace-1573707240-rjr5h started at 2019-11-14 04:54:02 +0000 UTC (0+1 container statuses recorded) Nov 14 04:54:51.126: INFO: Container c ready: false, restart count 0 W1114 04:54:51.183441 92623 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
Nov 14 04:54:52.492: INFO: Latency metrics for node k8s-agentpool-23171212-vmss000001 Nov 14 04:54:52.492: INFO: Logging node info for node k8s-master-23171212-vmss000000 Nov 14 04:54:52.547: INFO: Node Info: &Node{ObjectMeta:{k8s-master-23171212-vmss000000 /api/v1/nodes/k8s-master-23171212-vmss000000 6c9bb7ee-6dcf-4c6d-a8ad-0377f76a60f6 29063 0 2019-11-14 04:40:03 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:Standard_DS2_v2 beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:westus2 failure-domain.beta.kubernetes.io/zone:westus2-1 kubernetes.azure.com/cluster:kubetest-9c63b39e-0695-11ea-a4cc-c60aac250e75 kubernetes.azure.com/role:master kubernetes.io/arch:amd64 kubernetes.io/hostname:k8s-master-23171212-vmss000000 kubernetes.io/os:linux kubernetes.io/role:master node-role.kubernetes.io/master:] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] []},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUse_ExternalID:,ProviderID:azure:///subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-9c63b39e-0695-11ea-a4cc-c60aac250e75/providers/Microsoft.Compute/virtualMachineScaleSets/k8s-master-23171212-vmss/virtualMachines/0,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:true,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{31036776448 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7284887552 0} {<nil>} 7114148Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{27933098757 0} {<nil>} 27933098757 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{6498455552 0} {<nil>} 6346148Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2019-11-14 04:40:48 +0000 UTC,LastTransitionTime:2019-11-14 04:40:48 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2019-11-14 04:53:55 +0000 UTC,LastTransitionTime:2019-11-14 04:39:58 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2019-11-14 04:53:55 +0000 UTC,LastTransitionTime:2019-11-14 04:39:58 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2019-11-14 04:53:55 +0000 UTC,LastTransitionTime:2019-11-14 04:39:58 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2019-11-14 04:53:55 +0000 UTC,LastTransitionTime:2019-11-14 04:40:22 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.240.0.4,},NodeAddress{Type:Hostname,Address:k8s-master-23171212-vmss000000,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:813714caae2d48f4a9036e17505029ae,SystemUUID:A7C76EFE-4E2A-8042-A754-6642A667D859,BootID:245ff6cc-bfb4-4487-ac55-fb3813c9167c,KernelVersion:4.15.0-1063-azure,OSImage:Ubuntu 16.04.6 LTS,ContainerRuntimeVersion:docker://3.0.6,KubeletVersion:v1.16.4-beta.0.1+d70a3ca08fe72a,KubeProxyVersion:v1.16.4-beta.0.1+d70a3ca08fe72a,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8sprow.azurecr.io/hyperkube-amd64@sha256:4c04f9ab0fa34bcbcb8ebfbced912f9b998c5d9c090fafdca92911d124fa339b k8sprow.azurecr.io/hyperkube-amd64:azure-e2e-1194831241233305600-197629b6],SizeBytes:604811790,},ContainerImage{Names:[k8sprow.azurecr.io/azure-cloud-controller-manager@sha256:6fcb752760f3412a2cb10bce535ba4dfa8267081345fa1b5cbc7bb5126ce3437 k8sprow.azurecr.io/azure-cloud-controller-manager:1194831241233305600-d3e4a1cf],SizeBytes:92595467,},ContainerImage{Names:[k8s.gcr.io/kube-addon-manager-amd64@sha256:382c220b3531d9f95bf316a16b7282cc2ef929cd8a89a9dd3f5933edafc41a8e k8s.gcr.io/kube-addon-manager-amd64:v9.0.1],SizeBytes:83076194,},ContainerImage{Names:[k8s.gcr.io/ip-masq-agent-amd64@sha256:269e0fb9d53fd37f7a135d6a55ea265a67279ba218aa148323f015cf70167340 k8s.gcr.io/ip-masq-agent-amd64:v2.3.0],SizeBytes:50144412,},ContainerImage{Names:[k8s.gcr.io/pause-amd64@sha256:59eec8837a4d942cc19a52b8c09ea75121acc38114a2c68b98983ce9356b8610 k8s.gcr.io/pause-amd64:3.1],SizeBytes:742472,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Nov 14 04:54:52.548: INFO: Logging kubelet events for node k8s-master-23171212-vmss000000 Nov 14 04:54:52.606: INFO: Logging pods the kubelet thinks is on node k8s-master-23171212-vmss000000 Nov 14 04:54:52.683: INFO: kube-addon-manager-k8s-master-23171212-vmss000000 started at 2019-11-14 04:39:52 +0000 UTC (0+1 container statuses recorded) Nov 14 04:54:52.683: INFO: Container kube-addon-manager ready: true, restart count 0 Nov 14 04:54:52.683: INFO: kube-apiserver-k8s-master-23171212-vmss000000 started at 2019-11-14 04:39:52 +0000 UTC (0+1 container statuses recorded) Nov 14 04:54:52.683: INFO: Container kube-apiserver ready: true, restart count 0 Nov 14 04:54:52.683: INFO: kube-controller-manager-k8s-master-23171212-vmss000000 started at 2019-11-14 04:39:52 +0000 UTC (0+1 container statuses recorded) Nov 14 04:54:52.683: INFO: Container kube-controller-manager ready: true, restart count 0 Nov 14 04:54:52.683: INFO: azure-ip-masq-agent-q7rgb started at 2019-11-14 04:40:27 +0000 UTC (0+1 container statuses recorded) Nov 14 04:54:52.683: INFO: Container azure-ip-masq-agent ready: true, restart count 0 Nov 14 04:54:52.683: INFO: kube-proxy-cpnbb started at 2019-11-14 04:40:28 +0000 UTC (0+1 container statuses recorded) Nov 14 04:54:52.683: INFO: Container kube-proxy ready: true, restart count 0 Nov 14 04:54:52.683: INFO: kube-scheduler-k8s-master-23171212-vmss000000 started at 2019-11-14 04:39:51 +0000 UTC (0+1 container statuses recorded) Nov 14 04:54:52.683: INFO: Container kube-scheduler ready: true, restart count 0 Nov 14 04:54:52.683: INFO: cloud-controller-manager-k8s-master-23171212-vmss000000 started at 2019-11-14 04:39:51 +0000 UTC (0+1 container statuses recorded) Nov 14 04:54:52.683: INFO: Container cloud-controller-manager ready: true, restart count 0 W1114 
04:54:52.744538 92623 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Nov 14 04:54:52.868: INFO: Latency metrics for node k8s-master-23171212-vmss000000 Nov 14 04:54:52.869: INFO: Logging node info for node k8s-master-23171212-vmss000001 Nov 14 04:54:52.923: INFO: Node Info: &Node{ObjectMeta:{k8s-master-23171212-vmss000001 /api/v1/nodes/k8s-master-23171212-vmss000001 202620f8-2cc3-4eb6-b880-ef6d6d9fbccd 29086 0 2019-11-14 04:40:04 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:Standard_DS2_v2 beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:westus2 failure-domain.beta.kubernetes.io/zone:westus2-2 kubernetes.azure.com/cluster:kubetest-9c63b39e-0695-11ea-a4cc-c60aac250e75 kubernetes.azure.com/role:master kubernetes.io/arch:amd64 kubernetes.io/hostname:k8s-master-23171212-vmss000001 kubernetes.io/os:linux kubernetes.io/role:master node-role.kubernetes.io/master:] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] []},Spec:NodeSpec{PodCIDR:10.244.5.0/24,DoNotUse_ExternalID:,ProviderID:azure:///subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-9c63b39e-0695-11ea-a4cc-c60aac250e75/providers/Microsoft.Compute/virtualMachineScaleSets/k8s-master-23171212-vmss/virtualMachines/1,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:true,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[10.244.5.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{31036776448 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7284887552 0} {<nil>} 7114148Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{27933098757 0} {<nil>} 27933098757 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{6498455552 0} {<nil>} 6346148Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2019-11-14 04:40:48 +0000 UTC,LastTransitionTime:2019-11-14 04:40:48 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2019-11-14 04:53:57 +0000 UTC,LastTransitionTime:2019-11-14 04:39:58 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2019-11-14 04:53:57 +0000 UTC,LastTransitionTime:2019-11-14 04:39:58 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2019-11-14 04:53:57 +0000 UTC,LastTransitionTime:2019-11-14 04:39:58 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2019-11-14 04:53:57 +0000 UTC,LastTransitionTime:2019-11-14 04:40:22 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.240.0.5,},NodeAddress{Type:Hostname,Address:k8s-master-23171212-vmss000001,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:4cafe5635afe4ac8baa078419003bc32,SystemUUID:88981890-9531-334C-9D46-A02D5E4BD18D,BootID:6accdcbe-b0af-4be0-8f82-19833a9a5e2e,KernelVersion:4.15.0-1063-azure,OSImage:Ubuntu 16.04.6 LTS,ContainerRuntimeVersion:docker://3.0.6,KubeletVersion:v1.16.4-beta.0.1+d70a3ca08fe72a,KubeProxyVersion:v1.16.4-beta.0.1+d70a3ca08fe72a,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8sprow.azurecr.io/hyperkube-amd64@sha256:4c04f9ab0fa34bcbcb8ebfbced912f9b998c5d9c090fafdca92911d124fa339b k8sprow.azurecr.io/hyperkube-amd64:azure-e2e-1194831241233305600-197629b6],SizeBytes:604811790,},ContainerImage{Names:[k8sprow.azurecr.io/azure-cloud-controller-manager@sha256:6fcb752760f3412a2cb10bce535ba4dfa8267081345fa1b5cbc7bb5126ce3437 k8sprow.azurecr.io/azure-cloud-controller-manager:1194831241233305600-d3e4a1cf],SizeBytes:92595467,},ContainerImage{Names:[k8s.gcr.io/kube-addon-manager-amd64@sha256:382c220b3531d9f95bf316a16b7282cc2ef929cd8a89a9dd3f5933edafc41a8e k8s.gcr.io/kube-addon-manager-amd64:v9.0.1],SizeBytes:83076194,},ContainerImage{Names:[k8s.gcr.io/ip-masq-agent-amd64@sha256:269e0fb9d53fd37f7a135d6a55ea265a67279ba218aa148323f015cf70167340 k8s.gcr.io/ip-masq-agent-amd64:v2.3.0],SizeBytes:50144412,},ContainerImage{Names:[k8s.gcr.io/pause-amd64@sha256:59eec8837a4d942cc19a52b8c09ea75121acc38114a2c68b98983ce9356b8610 k8s.gcr.io/pause-amd64:3.1],SizeBytes:742472,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Nov 14 04:54:52.924: INFO: Logging kubelet events for node k8s-master-23171212-vmss000001 Nov 14 04:54:52.984: INFO: Logging pods the kubelet thinks is on node k8s-master-23171212-vmss000001 Nov 14 04:54:53.065: INFO: kube-apiserver-k8s-master-23171212-vmss000001 started at 2019-11-14 04:39:52 +0000 UTC (0+1 container statuses recorded) Nov 14 04:54:53.065: INFO: Container kube-apiserver ready: true, restart count 0 Nov 14 04:54:53.065: INFO: kube-controller-manager-k8s-master-23171212-vmss000001 started at 2019-11-14 04:39:52 +0000 UTC (0+1 container statuses recorded) Nov 14 04:54:53.065: INFO: Container kube-controller-manager ready: true, restart count 0 Nov 14 04:54:53.065: INFO: azure-ip-masq-agent-dnl49 started at 2019-11-14 04:40:26 +0000 UTC (0+1 container statuses recorded) Nov 14 04:54:53.065: INFO: Container azure-ip-masq-agent ready: true, restart count 0 Nov 14 04:54:53.065: INFO: kube-proxy-srv2s started at 2019-11-14 04:40:27 +0000 UTC (0+1 container statuses recorded) Nov 14 04:54:53.065: INFO: Container kube-proxy ready: true, restart count 0 Nov 14 04:54:53.065: INFO: kube-scheduler-k8s-master-23171212-vmss000001 started at 2019-11-14 04:39:52 +0000 UTC (0+1 container statuses recorded) Nov 14 04:54:53.065: INFO: Container kube-scheduler ready: true, restart count 0 Nov 14 04:54:53.065: INFO: cloud-controller-manager-k8s-master-23171212-vmss000001 started at 2019-11-14 04:39:52 +0000 UTC (0+1 container statuses recorded) Nov 14 04:54:53.065: INFO: Container cloud-controller-manager ready: true, restart count 0 Nov 14 04:54:53.065: INFO: kube-addon-manager-k8s-master-23171212-vmss000001 started at 2019-11-14 04:39:52 +0000 UTC (0+1 container statuses recorded) Nov 14 04:54:53.065: INFO: Container kube-addon-manager ready: true, restart count 0 W1114 
04:54:53.122358 92623 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Nov 14 04:54:53.248: INFO: Latency metrics for node k8s-master-23171212-vmss000001 Nov 14 04:54:53.248: INFO: Logging node info for node k8s-master-23171212-vmss000002 Nov 14 04:54:53.303: INFO: Node Info: &Node{ObjectMeta:{k8s-master-23171212-vmss000002 /api/v1/nodes/k8s-master-23171212-vmss000002 8eca3a9a-6fd5-4796-82bb-2f37c6fc30b7 29539 0 2019-11-14 04:41:04 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:Standard_DS2_v2 beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:westus2 failure-domain.beta.kubernetes.io/zone:westus2-1 kubernetes.azure.com/cluster:kubetest-9c63b39e-0695-11ea-a4cc-c60aac250e75 kubernetes.azure.com/role:master kubernetes.io/arch:amd64 kubernetes.io/hostname:k8s-master-23171212-vmss000002 kubernetes.io/os:linux kubernetes.io/role:master node-role.kubernetes.io/master:] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] []},Spec:NodeSpec{PodCIDR:10.244.6.0/24,DoNotUse_ExternalID:,ProviderID:azure:///subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-9c63b39e-0695-11ea-a4cc-c60aac250e75/providers/Microsoft.Compute/virtualMachineScaleSets/k8s-master-23171212-vmss/virtualMachines/2,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:true,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[10.244.6.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{31036776448 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7284883456 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{27933098757 0} {<nil>} 27933098757 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{6498451456 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2019-11-14 04:41:18 +0000 UTC,LastTransitionTime:2019-11-14 04:41:18 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2019-11-14 04:54:26 +0000 UTC,LastTransitionTime:2019-11-14 04:40:56 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2019-11-14 04:54:26 +0000 UTC,LastTransitionTime:2019-11-14 04:40:56 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2019-11-14 04:54:26 +0000 UTC,LastTransitionTime:2019-11-14 04:40:56 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2019-11-14 04:54:26 +0000 UTC,LastTransitionTime:2019-11-14 04:41:04 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.240.0.6,},NodeAddress{Type:Hostname,Address:k8s-master-23171212-vmss000002,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:eb5abe50949445b79334d994c94314f8,SystemUUID:E11F8710-4785-DA42-B98E-8E97145F92C7,BootID:8fe9e9b2-2b16-4895-91c7-dc676b577942,KernelVersion:4.15.0-1063-azure,OSImage:Ubuntu 16.04.6 LTS,ContainerRuntimeVersion:docker://3.0.6,KubeletVersion:v1.16.4-beta.0.1+d70a3ca08fe72a,KubeProxyVersion:v1.16.4-beta.0.1+d70a3ca08fe72a,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8sprow.azurecr.io/hyperkube-amd64@sha256:4c04f9ab0fa34bcbcb8ebfbced912f9b998c5d9c090fafdca92911d124fa339b k8sprow.azurecr.io/hyperkube-amd64:azure-e2e-1194831241233305600-197629b6],SizeBytes:604811790,},ContainerImage{Names:[k8sprow.azurecr.io/azure-cloud-controller-manager@sha256:6fcb752760f3412a2cb10bce535ba4dfa8267081345fa1b5cbc7bb5126ce3437 k8sprow.azurecr.io/azure-cloud-controller-manager:1194831241233305600-d3e4a1cf],SizeBytes:92595467,},ContainerImage{Names:[k8s.gcr.io/kube-addon-manager-amd64@sha256:382c220b3531d9f95bf316a16b7282cc2ef929cd8a89a9dd3f5933edafc41a8e k8s.gcr.io/kube-addon-manager-amd64:v9.0.1],SizeBytes:83076194,},ContainerImage{Names:[k8s.gcr.io/ip-masq-agent-amd64@sha256:269e0fb9d53fd37f7a135d6a55ea265a67279ba218aa148323f015cf70167340 k8s.gcr.io/ip-masq-agent-amd64:v2.3.0],SizeBytes:50144412,},ContainerImage{Names:[k8s.gcr.io/pause-amd64@sha256:59eec8837a4d942cc19a52b8c09ea75121acc38114a2c68b98983ce9356b8610 k8s.gcr.io/pause-amd64:3.1],SizeBytes:742472,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Nov 14 04:54:53.303: INFO: Logging kubelet events for node k8s-master-23171212-vmss000002 Nov 14 04:54:53.368: INFO: Logging pods the kubelet thinks is on node k8s-master-23171212-vmss000002 Nov 14 04:54:53.456: INFO: kube-controller-manager-k8s-master-23171212-vmss000002 started at 2019-11-14 04:40:52 +0000 UTC (0+1 container statuses recorded) Nov 14 04:54:53.456: INFO: Container kube-controller-manager ready: true, restart count 0 Nov 14 04:54:53.456: INFO: kube-scheduler-k8s-master-23171212-vmss000002 started at 2019-11-14 04:40:52 +0000 UTC (0+1 container statuses recorded) Nov 14 04:54:53.456: INFO: Container kube-scheduler ready: true, restart count 0 Nov 14 04:54:53.456: INFO: cloud-controller-manager-k8s-master-23171212-vmss000002 started at 2019-11-14 04:40:53 +0000 UTC (0+1 container statuses recorded) Nov 14 04:54:53.456: INFO: Container cloud-controller-manager ready: true, restart count 0 Nov 14 04:54:53.456: INFO: azure-ip-masq-agent-mw27f started at 2019-11-14 04:41:05 +0000 UTC (0+1 container statuses recorded) Nov 14 04:54:53.456: INFO: Container azure-ip-masq-agent ready: true, restart count 0 Nov 14 04:54:53.456: INFO: kube-proxy-4vs6q started at 2019-11-14 04:41:06 +0000 UTC (0+1 container statuses recorded) Nov 14 04:54:53.456: INFO: Container kube-proxy ready: true, restart count 0 Nov 14 04:54:53.456: INFO: kube-addon-manager-k8s-master-23171212-vmss000002 started at 2019-11-14 04:40:52 +0000 UTC (0+1 container statuses recorded) Nov 14 04:54:53.456: INFO: Container kube-addon-manager ready: true, restart count 0 Nov 14 04:54:53.456: INFO: kube-apiserver-k8s-master-23171212-vmss000002 started at 2019-11-14 04:40:52 +0000 UTC (0+1 container statuses recorded) Nov 14 04:54:53.456: INFO: Container kube-apiserver ready: true, restart count 0 W1114 
04:54:53.522415 92623 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Nov 14 04:54:53.663: INFO: Latency metrics for node k8s-master-23171212-vmss000002 Nov 14 04:54:53.663: INFO: Logging node info for node k8s-master-23171212-vmss000003 Nov 14 04:54:53.719: INFO: Node Info: &Node{ObjectMeta:{k8s-master-23171212-vmss000003 /api/v1/nodes/k8s-master-23171212-vmss000003 b1a400e7-f6ff-4241-9175-cd8bd70dd11a 29068 0 2019-11-14 04:40:03 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:Standard_DS2_v2 beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:westus2 failure-domain.beta.kubernetes.io/zone:westus2-2 kubernetes.azure.com/cluster:kubetest-9c63b39e-0695-11ea-a4cc-c60aac250e75 kubernetes.azure.com/role:master kubernetes.io/arch:amd64 kubernetes.io/hostname:k8s-master-23171212-vmss000003 kubernetes.io/os:linux kubernetes.io/role:master node-role.kubernetes.io/master:] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] []},Spec:NodeSpec{PodCIDR:10.244.3.0/24,DoNotUse_ExternalID:,ProviderID:azure:///subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-9c63b39e-0695-11ea-a4cc-c60aac250e75/providers/Microsoft.Compute/virtualMachineScaleSets/k8s-master-23171212-vmss/virtualMachines/3,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:true,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[10.244.3.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{31036776448 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7284887552 0} {<nil>} 7114148Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{27933098757 0} {<nil>} 27933098757 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{6498455552 0} {<nil>} 6346148Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2019-11-14 04:40:48 +0000 UTC,LastTransitionTime:2019-11-14 04:40:48 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2019-11-14 04:53:55 +0000 UTC,LastTransitionTime:2019-11-14 04:39:59 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2019-11-14 04:53:55 +0000 UTC,LastTransitionTime:2019-11-14 04:39:59 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2019-11-14 04:53:55 +0000 UTC,LastTransitionTime:2019-11-14 04:39:59 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2019-11-14 04:53:55 +0000 UTC,LastTransitionTime:2019-11-14 04:40:22 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.240.0.7,},NodeAddress{Type:Hostname,Address:k8s-master-23171212-vmss000003,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:effe7f682034467995d1db3ee85a4a38,SystemUUID:2073A143-352C-D241-B189-4A1DCC64C62C,BootID:6c95e89b-c056-494f-b817-6494fc9fd635,KernelVersion:4.15.0-1063-azure,OSImage:Ubuntu 16.04.6 LTS,ContainerRuntimeVersion:docker://3.0.6,KubeletVersion:v1.16.4-beta.0.1+d70a3ca08fe72a,KubeProxyVersion:v1.16.4-beta.0.1+d70a3ca08fe72a,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8sprow.azurecr.io/hyperkube-amd64@sha256:4c04f9ab0fa34bcbcb8ebfbced912f9b998c5d9c090fafdca92911d124fa339b k8sprow.azurecr.io/hyperkube-amd64:azure-e2e-1194831241233305600-197629b6],SizeBytes:604811790,},ContainerImage{Names:[k8sprow.azurecr.io/azure-cloud-controller-manager@sha256:6fcb752760f3412a2cb10bce535ba4dfa8267081345fa1b5cbc7bb5126ce3437 k8sprow.azurecr.io/azure-cloud-controller-manager:1194831241233305600-d3e4a1cf],SizeBytes:92595467,},ContainerImage{Names:[k8s.gcr.io/kube-addon-manager-amd64@sha256:382c220b3531d9f95bf316a16b7282cc2ef929cd8a89a9dd3f5933edafc41a8e k8s.gcr.io/kube-addon-manager-amd64:v9.0.1],SizeBytes:83076194,},ContainerImage{Names:[k8s.gcr.io/ip-masq-agent-amd64@sha256:269e0fb9d53fd37f7a135d6a55ea265a67279ba218aa148323f015cf70167340 k8s.gcr.io/ip-masq-agent-amd64:v2.3.0],SizeBytes:50144412,},ContainerImage{Names:[k8s.gcr.io/pause-amd64@sha256:59eec8837a4d942cc19a52b8c09ea75121acc38114a2c68b98983ce9356b8610 k8s.gcr.io/pause-amd64:3.1],SizeBytes:742472,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Nov 14 04:54:53.720: INFO: Logging kubelet events for node k8s-master-23171212-vmss000003 Nov 14 04:54:53.852: INFO: Logging pods the kubelet thinks is on node k8s-master-23171212-vmss000003 Nov 14 04:54:53.941: INFO: kube-apiserver-k8s-master-23171212-vmss000003 started at 2019-11-14 04:39:52 +0000 UTC (0+1 container statuses recorded) Nov 14 04:54:53.941: INFO: Container kube-apiserver ready: true, restart count 0 Nov 14 04:54:53.941: INFO: kube-controller-manager-k8s-master-23171212-vmss000003 started at 2019-11-14 04:39:52 +0000 UTC (0+1 container statuses recorded) Nov 14 04:54:53.942: INFO: Container kube-controller-manager ready: true, restart count 0 Nov 14 04:54:53.942: INFO: kube-scheduler-k8s-master-23171212-vmss000003 started at 2019-11-14 04:39:52 +0000 UTC (0+1 container statuses recorded) Nov 14 04:54:53.942: INFO: Container kube-scheduler ready: true, restart count 0 Nov 14 04:54:53.942: INFO: azure-ip-masq-agent-4s5bk started at 2019-11-14 04:40:27 +0000 UTC (0+1 container statuses recorded) Nov 14 04:54:53.942: INFO: Container azure-ip-masq-agent ready: true, restart count 0 Nov 14 04:54:53.942: INFO: kube-proxy-hrqtx started at 2019-11-14 04:40:27 +0000 UTC (0+1 container statuses recorded) Nov 14 04:54:53.942: INFO: Container kube-proxy ready: true, restart count 0 Nov 14 04:54:53.942: INFO: cloud-controller-manager-k8s-master-23171212-vmss000003 started at 2019-11-14 04:39:52 +0000 UTC (0+1 container statuses recorded) Nov 14 04:54:53.942: INFO: Container cloud-controller-manager ready: true, restart count 0 Nov 14 04:54:53.942: INFO: kube-addon-manager-k8s-master-23171212-vmss000003 started at 2019-11-14 04:39:52 +0000 UTC (0+1 container statuses recorded) Nov 14 04:54:53.942: INFO: Container kube-addon-manager ready: true, restart count 0 W1114 
04:54:53.997958 92623 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Nov 14 04:54:54.119: INFO: Latency metrics for node k8s-master-23171212-vmss000003 Nov 14 04:54:54.119: INFO: Logging node info for node k8s-master-23171212-vmss000004 Nov 14 04:54:54.177: INFO: Node Info: &Node{ObjectMeta:{k8s-master-23171212-vmss000004 /api/v1/nodes/k8s-master-23171212-vmss000004 25a9993c-54fa-45cc-9da7-66c66cafa30f 29165 0 2019-11-14 04:40:08 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:Standard_DS2_v2 beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:westus2 failure-domain.beta.kubernetes.io/zone:westus2-1 kubernetes.azure.com/cluster:kubetest-9c63b39e-0695-11ea-a4cc-c60aac250e75 kubernetes.azure.com/role:master kubernetes.io/arch:amd64 kubernetes.io/hostname:k8s-master-23171212-vmss000004 kubernetes.io/os:linux kubernetes.io/role:master node-role.kubernetes.io/master:] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] []},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUse_ExternalID:,ProviderID:azure:///subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-9c63b39e-0695-11ea-a4cc-c60aac250e75/providers/Microsoft.Compute/virtualMachineScaleSets/k8s-master-23171212-vmss/virtualMachines/4,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:true,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{31036776448 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7284887552 0} {<nil>} 7114148Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{27933098757 0} {<nil>} 27933098757 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{6498455552 0} {<nil>} 6346148Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2019-11-14 04:40:48 +0000 UTC,LastTransitionTime:2019-11-14 04:40:48 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2019-11-14 04:54:01 +0000 UTC,LastTransitionTime:2019-11-14 04:40:05 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2019-11-14 04:54:01 +0000 UTC,LastTransitionTime:2019-11-14 04:40:05 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2019-11-14 04:54:01 +0000 UTC,LastTransitionTime:2019-11-14 04:40:05 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2019-11-14 04:54:01 +0000 UTC,LastTransitionTime:2019-11-14 04:40:22 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.240.0.8,},NodeAddress{Type:Hostname,Address:k8s-master-23171212-vmss000004,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:ab6b205a70ea45b1b28b801e68a4ba84,SystemUUID:65406178-5013-644C-AD46-D7BC6F0DD7BF,BootID:e6b05928-9970-49a5-bd51-149982b32750,KernelVersion:4.15.0-1063-azure,OSImage:Ubuntu 16.04.6 LTS,ContainerRuntimeVersion:docker://3.0.6,KubeletVersion:v1.16.4-beta.0.1+d70a3ca08fe72a,KubeProxyVersion:v1.16.4-beta.0.1+d70a3ca08fe72a,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8sprow.azurecr.io/hyperkube-amd64@sha256:4c04f9ab0fa34bcbcb8ebfbced912f9b998c5d9c090fafdca92911d124fa339b k8sprow.azurecr.io/hyperkube-amd64:azure-e2e-1194831241233305600-197629b6],SizeBytes:604811790,},ContainerImage{Names:[k8sprow.azurecr.io/azure-cloud-controller-manager@sha256:6fcb752760f3412a2cb10bce535ba4dfa8267081345fa1b5cbc7bb5126ce3437 k8sprow.azurecr.io/azure-cloud-controller-manager:1194831241233305600-d3e4a1cf],SizeBytes:92595467,},ContainerImage{Names:[k8s.gcr.io/kube-addon-manager-amd64@sha256:382c220b3531d9f95bf316a16b7282cc2ef929cd8a89a9dd3f5933edafc41a8e k8s.gcr.io/kube-addon-manager-amd64:v9.0.1],SizeBytes:83076194,},ContainerImage{Names:[k8s.gcr.io/ip-masq-agent-amd64@sha256:269e0fb9d53fd37f7a135d6a55ea265a67279ba218aa148323f015cf70167340 k8s.gcr.io/ip-masq-agent-amd64:v2.3.0],SizeBytes:50144412,},ContainerImage{Names:[k8s.gcr.io/pause-amd64@sha256:59eec8837a4d942cc19a52b8c09ea75121acc38114a2c68b98983ce9356b8610 k8s.gcr.io/pause-amd64:3.1],SizeBytes:742472,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Nov 14 04:54:54.178: INFO: Logging kubelet events for node k8s-master-23171212-vmss000004 Nov 14 04:54:54.237: INFO: Logging pods the kubelet thinks is on node k8s-master-23171212-vmss000004 Nov 14 04:54:54.317: INFO: kube-controller-manager-k8s-master-23171212-vmss000004 started at 2019-11-14 04:39:52 +0000 UTC (0+1 container statuses recorded) Nov 14 04:54:54.317: INFO: Container kube-controller-manager ready: true, restart count 0 Nov 14 04:54:54.317: INFO: azure-ip-masq-agent-47pzk started at 2019-11-14 04:40:26 +0000 UTC (0+1 container statuses recorded) Nov 14 04:54:54.317: INFO: Container azure-ip-masq-agent ready: true, restart count 0 Nov 14 04:54:54.317: INFO: kube-proxy-47vmd started at 2019-11-14 04:40:27 +0000 UTC (0+1 container statuses recorded) Nov 14 04:54:54.317: INFO: Container kube-proxy ready: true, restart count 0 Nov 14 04:54:54.317: INFO: kube-scheduler-k8s-master-23171212-vmss000004 started at 2019-11-14 04:39:52 +0000 UTC (0+1 container statuses recorded) Nov 14 04:54:54.317: INFO: Container kube-scheduler ready: true, restart count 0 Nov 14 04:54:54.317: INFO: cloud-controller-manager-k8s-master-23171212-vmss000004 started at 2019-11-14 04:39:52 +0000 UTC (0+1 container statuses recorded) Nov 14 04:54:54.317: INFO: Container cloud-controller-manager ready: true, restart count 0 Nov 14 04:54:54.317: INFO: kube-addon-manager-k8s-master-23171212-vmss000004 started at 2019-11-14 04:39:52 +0000 UTC (0+1 container statuses recorded) Nov 14 04:54:54.317: INFO: Container kube-addon-manager ready: true, restart count 0 Nov 14 04:54:54.317: INFO: kube-apiserver-k8s-master-23171212-vmss000004 started at 2019-11-14 04:39:52 +0000 UTC (0+1 container statuses recorded) Nov 14 04:54:54.317: INFO: Container kube-apiserver ready: true, restart count 0 W1114 
04:54:54.374440 92623 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Nov 14 04:54:54.525: INFO: Latency metrics for node k8s-master-23171212-vmss000004 Nov 14 04:54:54.525: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-3491" for this suite. Nov 14 04:55:22.751: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Nov 14 04:55:24.571: INFO: namespace kubelet-test-3491 deletion completed in 29.990321566s
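The bin-false pods logged above run an always-failing busybox command, and the suite waits for their container status to flip from Waiting to Terminated before the namespace is torn down. A minimal client-go sketch of such a wait loop, assuming an already-configured clientset; waitForTerminated, cs, the poll interval, and the 60-second budget are illustrative assumptions, not the framework's actual helper:

    package sketch

    import (
    	"context"
    	"fmt"
    	"time"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/apimachinery/pkg/util/wait"
    	"k8s.io/client-go/kubernetes"
    )

    // waitForTerminated polls a pod until one of its containers reports a
    // Terminated state. Sketch only: names, interval, and timeout are
    // assumptions for illustration, not the e2e framework's helper.
    func waitForTerminated(cs kubernetes.Interface, ns, name string) error {
    	return wait.PollImmediate(time.Second, 60*time.Second, func() (bool, error) {
    		pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
    		if err != nil {
    			return false, err
    		}
    		for _, st := range pod.Status.ContainerStatuses {
    			if st.State.Terminated != nil {
    				fmt.Printf("container %s terminated, reason %q\n", st.Name, st.State.Terminated.Reason)
    				return true, nil
    			}
    		}
    		return false, nil // still Waiting or Running; keep polling
    	})
    }

If a container never leaves its Waiting state, the condition is never satisfied, the poll exhausts its budget, and the error surfaces in the log as a plain timeout.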
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[k8s\.io\]\s\[sig\-node\]\sPods\sExtended\s\[k8s\.io\]\sDelete\sGrace\sPeriod\sshould\sbe\ssubmitted\sand\sremoved\s\[Conformance\]$'
test/e2e/framework/framework.go:698 Nov 14 04:53:06.718: kubelet never observed the termination notice Unexpected error: <*errors.errorString | 0xc0000d5090>: { s: "timed out waiting for the condition", } timed out waiting for the condition occurred test/e2e/node/pods.go:163 from junit_12.xml
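The step that fails here deletes the pod with a grace period and then waits for the kubelet to report the termination; the log below shows the pod still Pending with a 30s grace value while that wait times out. A minimal client-go sketch of the delete half, assuming a configured clientset; deleteGracefully, cs, ns, and name are illustrative names, not the test's code:

    package sketch

    import (
    	"context"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    )

    // deleteGracefully issues a graceful delete like the "deleting the pod
    // gracefully" step below; the kubelet then has the grace window to stop
    // the container before the API object disappears. Sketch only.
    func deleteGracefully(cs kubernetes.Interface, ns, name string) error {
    	grace := int64(30) // matches the 30s GRACE column reported below
    	return cs.CoreV1().Pods(ns).Delete(context.TODO(), name,
    		metav1.DeleteOptions{GracePeriodSeconds: &grace})
    }

The failure message above indicates the delete itself succeeded but the kubelet's reported pod state never showed the termination within the wait window, which is why the error is a wait timeout rather than an API error.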
[BeforeEach] [k8s.io] [sig-node] Pods Extended test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Nov 14 04:52:21.324: INFO: >>> kubeConfig: /workspace/aks287781815/kubeconfig/kubeconfig.westus2.json STEP: Building a namespace api object, basename pods STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in pods-3367 STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Delete Grace Period test/e2e/node/pods.go:47 [It] should be submitted and removed [Conformance] test/e2e/framework/framework.go:698 STEP: creating the pod STEP: setting up selector STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes Nov 14 04:52:36.147: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --server=https://kubetest-9c63b39e-0695-11ea-a4cc-c60aac250e75.westus2.cloudapp.azure.com --kubeconfig=/workspace/aks287781815/kubeconfig/kubeconfig.westus2.json proxy -p 0' STEP: deleting the pod gracefully STEP: verifying the kubelet observed the termination notice Nov 14 04:53:06.718: FAIL: kubelet never observed the termination notice Unexpected error: <*errors.errorString | 0xc0000d5090>: { s: "timed out waiting for the condition", } timed out waiting for the condition occurred [AfterEach] [k8s.io] [sig-node] Pods Extended test/e2e/framework/framework.go:152 STEP: Collecting events from namespace "pods-3367". STEP: Found 6 events. Nov 14 04:53:06.774: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for pod-submit-remove-0a013bbe-e770-41fd-8009-62316ed5a7ea: {default-scheduler } Scheduled: Successfully assigned pods-3367/pod-submit-remove-0a013bbe-e770-41fd-8009-62316ed5a7ea to k8s-agentpool-23171212-vmss000001 Nov 14 04:53:06.774: INFO: At 2019-11-14 04:52:24 +0000 UTC - event for pod-submit-remove-0a013bbe-e770-41fd-8009-62316ed5a7ea: {kubelet k8s-agentpool-23171212-vmss000001} Pulling: Pulling image "docker.io/library/nginx:1.14-alpine" Nov 14 04:53:06.774: INFO: At 2019-11-14 04:52:27 +0000 UTC - event for pod-submit-remove-0a013bbe-e770-41fd-8009-62316ed5a7ea: {kubelet k8s-agentpool-23171212-vmss000001} Pulled: Successfully pulled image "docker.io/library/nginx:1.14-alpine" Nov 14 04:53:06.774: INFO: At 2019-11-14 04:52:28 +0000 UTC - event for pod-submit-remove-0a013bbe-e770-41fd-8009-62316ed5a7ea: {kubelet k8s-agentpool-23171212-vmss000001} Created: Created container nginx Nov 14 04:53:06.774: INFO: At 2019-11-14 04:52:29 +0000 UTC - event for pod-submit-remove-0a013bbe-e770-41fd-8009-62316ed5a7ea: {kubelet k8s-agentpool-23171212-vmss000001} Started: Started container nginx Nov 14 04:53:06.774: INFO: At 2019-11-14 04:52:37 +0000 UTC - event for pod-submit-remove-0a013bbe-e770-41fd-8009-62316ed5a7ea: {kubelet k8s-agentpool-23171212-vmss000001} Killing: Stopping container nginx Nov 14 04:53:06.829: INFO: POD NODE PHASE GRACE CONDITIONS Nov 14 04:53:06.829: INFO: pod-submit-remove-0a013bbe-e770-41fd-8009-62316ed5a7ea k8s-agentpool-23171212-vmss000001 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-11-14 04:52:22 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-11-14 04:52:51 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-11-14 04:52:51 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-11-14 04:52:21 +0000
UTC }] Nov 14 04:53:06.829: INFO: Nov 14 04:53:06.947: INFO: Logging node info for node k8s-agentpool-23171212-vmss000000 Nov 14 04:53:07.008: INFO: Node Info: &Node{ObjectMeta:{k8s-agentpool-23171212-vmss000000 /api/v1/nodes/k8s-agentpool-23171212-vmss000000 0f3bbebc-9d46-4ddd-a1dc-c93db8b52883 26882 0 2019-11-14 04:40:04 +0000 UTC <nil> <nil> map[agentpool:agentpool beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:Standard_D4s_v3 beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:westus2 failure-domain.beta.kubernetes.io/zone:westus2-1 kubernetes.azure.com/cluster:kubetest-9c63b39e-0695-11ea-a4cc-c60aac250e75 kubernetes.azure.com/role:agent kubernetes.io/arch:amd64 kubernetes.io/hostname:k8s-agentpool-23171212-vmss000000 kubernetes.io/os:linux kubernetes.io/role:agent node-role.kubernetes.io/agent: storageprofile:managed storagetier:Premium_LRS] map[csi.volume.kubernetes.io/nodeid:{"csi-hostpath-provisioning-2202":"k8s-agentpool-23171212-vmss000000","csi-hostpath-provisioning-8364":"k8s-agentpool-23171212-vmss000000","csi-hostpath-volume-8403":"k8s-agentpool-23171212-vmss000000","csi-hostpath-volume-expand-1206":"k8s-agentpool-23171212-vmss000000","csi-hostpath-volume-expand-2585":"k8s-agentpool-23171212-vmss000000","csi-hostpath-volume-expand-5498":"k8s-agentpool-23171212-vmss000000","csi-mock-csi-mock-volumes-4558":"csi-mock-csi-mock-volumes-4558","csi-mock-csi-mock-volumes-6397":"csi-mock-csi-mock-volumes-6397","csi-mock-csi-mock-volumes-7486":"csi-mock-csi-mock-volumes-7486","csi-mock-csi-mock-volumes-7581":"csi-mock-csi-mock-volumes-7581","csi-mock-csi-mock-volumes-8512":"csi-mock-csi-mock-volumes-8512","csi-mock-csi-mock-volumes-9601":"csi-mock-csi-mock-volumes-9601"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] []},Spec:NodeSpec{PodCIDR:10.244.4.0/24,DoNotUse_ExternalID:,ProviderID:azure:///subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-9c63b39e-0695-11ea-a4cc-c60aac250e75/providers/Microsoft.Compute/virtualMachineScaleSets/k8s-agentpool-23171212-vmss/virtualMachines/0,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.4.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{4 0} {<nil>} 4 DecimalSI},ephemeral-storage: {{31036776448 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{16797569024 0} {<nil>} 16403876Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{4 0} {<nil>} 4 DecimalSI},ephemeral-storage: {{27933098757 0} {<nil>} 27933098757 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{16011137024 0} {<nil>} 15635876Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2019-11-14 04:40:48 +0000 UTC,LastTransitionTime:2019-11-14 04:40:48 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2019-11-14 04:52:57 +0000 UTC,LastTransitionTime:2019-11-14 04:39:50 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2019-11-14 04:52:57 +0000 UTC,LastTransitionTime:2019-11-14 04:39:50 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk 
pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2019-11-14 04:52:57 +0000 UTC,LastTransitionTime:2019-11-14 04:39:50 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2019-11-14 04:52:57 +0000 UTC,LastTransitionTime:2019-11-14 04:40:22 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.248.0.4,},NodeAddress{Type:Hostname,Address:k8s-agentpool-23171212-vmss000000,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:359d6aea81114a07a8070169aad06c4a,SystemUUID:A77EC1C1-102D-514B-A3FC-E5E916EF17BD,BootID:fc99ebb5-9bcd-41e5-aad2-849e47da2eea,KernelVersion:4.15.0-1063-azure,OSImage:Ubuntu 16.04.6 LTS,ContainerRuntimeVersion:docker://3.0.6,KubeletVersion:v1.16.4-beta.0.1+d70a3ca08fe72a,KubeProxyVersion:v1.16.4-beta.0.1+d70a3ca08fe72a,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8sprow.azurecr.io/hyperkube-amd64@sha256:4c04f9ab0fa34bcbcb8ebfbced912f9b998c5d9c090fafdca92911d124fa339b k8sprow.azurecr.io/hyperkube-amd64:azure-e2e-1194831241233305600-197629b6],SizeBytes:604811790,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/volume/nfs@sha256:c2ad734346f608a5f7d69cfded93c4e8094069320657bd372d12ba21dea3ea71 gcr.io/kubernetes-e2e-test-images/volume/nfs:1.0],SizeBytes:225358913,},ContainerImage{Names:[httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 httpd:2.4.38-alpine],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/kubernetes-dashboard-amd64@sha256:0ae6b69432e78069c5ce2bcde0fe409c5c4d6f0f4d9cd50a17974fea38898747 k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.1],SizeBytes:121711221,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/agnhost@sha256:4057a5580c7b59c4fe10d8ab2732c9dec35eea80fd41f7bafc7bd5acc7edf727 gcr.io/kubernetes-e2e-test-images/agnhost:2.6],SizeBytes:57345321,},ContainerImage{Names:[quay.io/k8scsi/csi-provisioner@sha256:0efcb424f1dde9b9fb11a1a14f2e48ab47e1c3f08bc3a929990dcfcb1f7ab34f quay.io/k8scsi/csi-provisioner:v1.4.0-rc1],SizeBytes:54431016,},ContainerImage{Names:[quay.io/k8scsi/csi-snapshotter@sha256:e3d3e742e32d00488fdb401045b9b1d033d7ca0ab6e760f77b24750fc95e5f70 quay.io/k8scsi/csi-snapshotter:v2.0.0-rc1],SizeBytes:51703561,},ContainerImage{Names:[k8s.gcr.io/ip-masq-agent-amd64@sha256:269e0fb9d53fd37f7a135d6a55ea265a67279ba218aa148323f015cf70167340 k8s.gcr.io/ip-masq-agent-amd64:v2.3.0],SizeBytes:50144412,},ContainerImage{Names:[quay.io/k8scsi/csi-attacher@sha256:26fccd7a99d973845df1193b46ebdcc6ab8dc5f6e6be319750c471fce1742d13 quay.io/k8scsi/csi-attacher:v1.2.0],SizeBytes:46226754,},ContainerImage{Names:[quay.io/k8scsi/csi-attacher@sha256:0aba670b4d9d6b2e720bbf575d733156c676b693ca26501235444490300db838 quay.io/k8scsi/csi-attacher:v1.1.0],SizeBytes:42839085,},ContainerImage{Names:[quay.io/k8scsi/csi-resizer@sha256:7d46fb6eb8b890dc546029d1565d502b4a1d974d33625c6ee2bc7991b77fc1a1 quay.io/k8scsi/csi-resizer:v0.2.0],SizeBytes:42817100,},ContainerImage{Names:[quay.io/k8scsi/csi-resizer@sha256:f315c9042e56def3c05c6b04fe79ec9da6d39ddc557ca365a76cf35964ea08b6 quay.io/k8scsi/csi-resizer:v0.1.0],SizeBytes:42623056,},ContainerImage{Names:[k8s.gcr.io/coredns@sha256:e83beb5e43f8513fa735e77ffc5859640baea30a882a11cc75c4c3244a737d3c 
k8s.gcr.io/coredns:1.5.0],SizeBytes:42488424,},ContainerImage{Names:[quay.io/k8scsi/hostpathplugin@sha256:b4826e492fc1762fceaf9726f41575ca0a4567864d3d235da874818de18039de quay.io/k8scsi/hostpathplugin:v1.2.0-rc5],SizeBytes:28761497,},ContainerImage{Names:[mcr.microsoft.com/k8s/flexvolume/keyvault-flexvolume@sha256:4fd30d43947d4a54fc89ead7985beecfd3c9b2a93a0655a373b1608ab90bd5af mcr.microsoft.com/k8s/flexvolume/keyvault-flexvolume:v0.0.7],SizeBytes:22909487,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/echoserver@sha256:e9ba514b896cdf559eef8788b66c2c3ee55f3572df617647b4b0d8b6bf81cf19 gcr.io/kubernetes-e2e-test-images/echoserver:2.2],SizeBytes:21692741,},ContainerImage{Names:[quay.io/k8scsi/mock-driver@sha256:e0eed916b7d970bad2b7d9875f9ad16932f987f0f3d91ec5d86da68b0b5cc9d1 quay.io/k8scsi/mock-driver:v2.1.0],SizeBytes:16226335,},ContainerImage{Names:[nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 nginx:1.14-alpine],SizeBytes:16032814,},ContainerImage{Names:[quay.io/k8scsi/csi-node-driver-registrar@sha256:13daf82fb99e951a4bff8ae5fc7c17c3a8fe7130be6400990d8f6076c32d4599 quay.io/k8scsi/csi-node-driver-registrar:v1.1.0],SizeBytes:15815995,},ContainerImage{Names:[quay.io/k8scsi/livenessprobe@sha256:dde617756e0f602adc566ab71fd885f1dad451ad3fb063ac991c95a2ff47aea5 quay.io/k8scsi/livenessprobe:v1.1.0],SizeBytes:14967303,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/dnsutils@sha256:2abeee84efb79c14d731966e034af33bf324d3b26ca28497555511ff094b3ddd gcr.io/kubernetes-e2e-test-images/dnsutils:1.1],SizeBytes:9349974,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nonewprivs@sha256:10066e9039219449fe3c81f38fe01928f87914150768ab81b62a468e51fa7411 gcr.io/kubernetes-e2e-test-images/nonewprivs:1.0],SizeBytes:6757579,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nautilus@sha256:33a732d4c42a266912a5091598a0f07653c9134db4b8d571690d8afd509e0bfc gcr.io/kubernetes-e2e-test-images/nautilus:1.0],SizeBytes:4753501,},ContainerImage{Names:[gcr.io/authenticated-image-pulling/alpine@sha256:7ff177862cb50c602bfe81f805969412e619c054a2bbead977d0c276988aa4a0 gcr.io/authenticated-image-pulling/alpine:3.7],SizeBytes:4206620,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/mounttest@sha256:c0bd6f0755f42af09a68c9a47fb993136588a76b3200ec305796b60d629d85d2 gcr.io/kubernetes-e2e-test-images/mounttest:1.0],SizeBytes:1563521,},ContainerImage{Names:[mcr.microsoft.com/k8s/flexvolume/blobfuse-flexvolume@sha256:23d8c6033f02a1ecad05127ebdc931bb871264228661bc122704b0974e4d9fdd mcr.microsoft.com/k8s/flexvolume/blobfuse-flexvolume:1.0.8],SizeBytes:1159025,},ContainerImage{Names:[busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 busybox:1.29],SizeBytes:1154361,},ContainerImage{Names:[k8s.gcr.io/pause-amd64@sha256:59eec8837a4d942cc19a52b8c09ea75121acc38114a2c68b98983ce9356b8610 k8s.gcr.io/pause@sha256:f78411e19d84a252e53bff71a4407a5686c46983a2c2eeed83929b888179acea k8s.gcr.io/pause-amd64:3.1 k8s.gcr.io/pause:3.1],SizeBytes:742472,},},VolumesInUse:[kubernetes.io/csi/csi-hostpath-volume-expand-5498^75f07ffb-069a-11ea-b1fa-000d3ac2fa68],VolumesAttached:[]AttachedVolume{AttachedVolume{Name:kubernetes.io/csi/csi-hostpath-volume-expand-5498^75f07ffb-069a-11ea-b1fa-000d3ac2fa68,DevicePath:,},},Config:nil,},} Nov 14 04:53:07.009: INFO: Logging kubelet events for node k8s-agentpool-23171212-vmss000000 Nov 14 04:53:07.068: INFO: Logging pods the kubelet thinks is on node k8s-agentpool-23171212-vmss000000 Nov 14 04:53:07.159: INFO: ss2-1 started 
at 2019-11-14 04:52:40 +0000 UTC (0+1 container statuses recorded) Nov 14 04:53:07.159: INFO: Container webserver ready: true, restart count 0 Nov 14 04:53:07.159: INFO: csi-snapshotter-0 started at 2019-11-14 04:51:54 +0000 UTC (0+1 container statuses recorded) Nov 14 04:53:07.159: INFO: Container csi-snapshotter ready: true, restart count 0 Nov 14 04:53:07.159: INFO: security-context-474ad7d7-bb43-4d44-9979-6c9894892a7f started at 2019-11-14 04:52:59 +0000 UTC (0+1 container statuses recorded) Nov 14 04:53:07.159: INFO: Container write-pod ready: true, restart count 0 Nov 14 04:53:07.159: INFO: csi-hostpath-provisioner-0 started at 2019-11-14 04:51:54 +0000 UTC (0+1 container statuses recorded) Nov 14 04:53:07.159: INFO: Container csi-provisioner ready: true, restart count 0 Nov 14 04:53:07.159: INFO: csi-hostpath-resizer-0 started at 2019-11-14 04:51:54 +0000 UTC (0+1 container statuses recorded) Nov 14 04:53:07.159: INFO: Container csi-resizer ready: true, restart count 0 Nov 14 04:53:07.159: INFO: azure-ip-masq-agent-dgg69 started at 2019-11-14 04:40:27 +0000 UTC (0+1 container statuses recorded) Nov 14 04:53:07.159: INFO: Container azure-ip-masq-agent ready: true, restart count 0 Nov 14 04:53:07.159: INFO: csi-hostpath-attacher-0 started at 2019-11-14 04:50:53 +0000 UTC (0+1 container statuses recorded) Nov 14 04:53:07.159: INFO: Container csi-attacher ready: true, restart count 0 Nov 14 04:53:07.159: INFO: csi-hostpath-resizer-0 started at 2019-11-14 04:50:54 +0000 UTC (0+1 container statuses recorded) Nov 14 04:53:07.159: INFO: Container csi-resizer ready: true, restart count 0 Nov 14 04:53:07.159: INFO: pvc-datasource-writer-7rbg4 started at 2019-11-14 04:50:54 +0000 UTC (0+1 container statuses recorded) Nov 14 04:53:07.159: INFO: Container volume-tester ready: false, restart count 0 Nov 14 04:53:07.159: INFO: csi-hostpath-attacher-0 started at 2019-11-14 04:51:53 +0000 UTC (0+1 container statuses recorded) Nov 14 04:53:07.159: INFO: Container csi-attacher ready: true, restart count 0 Nov 14 04:53:07.159: INFO: kube-proxy-cdq9f started at 2019-11-14 04:40:27 +0000 UTC (0+1 container statuses recorded) Nov 14 04:53:07.159: INFO: Container kube-proxy ready: true, restart count 0 Nov 14 04:53:07.159: INFO: local-client started at 2019-11-14 04:52:30 +0000 UTC (0+1 container statuses recorded) Nov 14 04:53:07.159: INFO: Container local-client ready: false, restart count 0 Nov 14 04:53:07.159: INFO: hostexec-k8s-agentpool-23171212-vmss000000 started at 2019-11-14 04:52:41 +0000 UTC (0+1 container statuses recorded) Nov 14 04:53:07.159: INFO: Container agnhost ready: true, restart count 0 Nov 14 04:53:07.159: INFO: csi-hostpathplugin-0 started at 2019-11-14 04:51:38 +0000 UTC (0+3 container statuses recorded) Nov 14 04:53:07.159: INFO: Container hostpath ready: true, restart count 0 Nov 14 04:53:07.159: INFO: Container liveness-probe ready: true, restart count 0 Nov 14 04:53:07.159: INFO: Container node-driver-registrar ready: true, restart count 0 Nov 14 04:53:07.159: INFO: hostexec-k8s-agentpool-23171212-vmss000000 started at 2019-11-14 04:52:43 +0000 UTC (0+1 container statuses recorded) Nov 14 04:53:07.159: INFO: Container agnhost ready: true, restart count 0 Nov 14 04:53:07.159: INFO: csi-snapshotter-0 started at 2019-11-14 04:50:54 +0000 UTC (0+1 container statuses recorded) Nov 14 04:53:07.159: INFO: Container csi-snapshotter ready: true, restart count 0 Nov 14 04:53:07.159: INFO: security-context-85609705-72a9-44b3-a27c-18c88905d90a started at 2019-11-14 04:52:56 +0000 UTC (0+1 
container statuses recorded) Nov 14 04:53:07.159: INFO: Container write-pod ready: true, restart count 0 Nov 14 04:53:07.159: INFO: blobfuse-flexvol-installer-6xhz6 started at 2019-11-14 04:40:49 +0000 UTC (0+1 container statuses recorded) Nov 14 04:53:07.159: INFO: Container blobfuse-flexvol-installer ready: true, restart count 0 Nov 14 04:53:07.159: INFO: security-context-d57992e5-1692-467c-8ed3-5712eaebb33a started at 2019-11-14 04:52:50 +0000 UTC (0+1 container statuses recorded) Nov 14 04:53:07.159: INFO: Container write-pod ready: false, restart count 0 Nov 14 04:53:07.159: INFO: csi-hostpath-provisioner-0 started at 2019-11-14 04:50:39 +0000 UTC (0+1 container statuses recorded) Nov 14 04:53:07.160: INFO: Container csi-provisioner ready: true, restart count 0 Nov 14 04:53:07.160: INFO: coredns-87f5d796-k7mr9 started at 2019-11-14 04:40:27 +0000 UTC (0+1 container statuses recorded) Nov 14 04:53:07.160: INFO: Container coredns ready: true, restart count 0 Nov 14 04:53:07.160: INFO: csi-hostpathplugin-0 started at 2019-11-14 04:52:53 +0000 UTC (0+0 container statuses recorded) Nov 14 04:53:07.160: INFO: hostexec-k8s-agentpool-23171212-vmss000000 started at 2019-11-14 04:51:10 +0000 UTC (0+1 container statuses recorded) Nov 14 04:53:07.160: INFO: Container agnhost ready: true, restart count 0 Nov 14 04:53:07.160: INFO: csi-hostpath-resizer-0 started at 2019-11-14 04:50:39 +0000 UTC (0+1 container statuses recorded) Nov 14 04:53:07.160: INFO: Container csi-resizer ready: true, restart count 0 Nov 14 04:53:07.160: INFO: csi-snapshotter-0 started at 2019-11-14 04:50:39 +0000 UTC (0+1 container statuses recorded) Nov 14 04:53:07.160: INFO: Container csi-snapshotter ready: true, restart count 0 Nov 14 04:53:07.160: INFO: csi-hostpath-attacher-0 started at 2019-11-14 04:50:38 +0000 UTC (0+1 container statuses recorded) Nov 14 04:53:07.160: INFO: Container csi-attacher ready: true, restart count 0 Nov 14 04:53:07.160: INFO: hostexec-k8s-agentpool-23171212-vmss000000 started at 2019-11-14 04:52:44 +0000 UTC (0+1 container statuses recorded) Nov 14 04:53:07.160: INFO: Container agnhost ready: true, restart count 0 Nov 14 04:53:07.160: INFO: csi-hostpathplugin-0 started at 2019-11-14 04:53:02 +0000 UTC (0+0 container statuses recorded) Nov 14 04:53:07.160: INFO: keyvault-flexvolume-ljqsq started at 2019-11-14 04:40:49 +0000 UTC (0+1 container statuses recorded) Nov 14 04:53:07.160: INFO: Container keyvault-flexvolume ready: true, restart count 0 Nov 14 04:53:07.160: INFO: kubernetes-dashboard-65966766b9-b8ps7 started at 2019-11-14 04:40:49 +0000 UTC (0+1 container statuses recorded) Nov 14 04:53:07.160: INFO: Container kubernetes-dashboard ready: true, restart count 0 Nov 14 04:53:07.160: INFO: hostexec-k8s-agentpool-23171212-vmss000000 started at 2019-11-14 04:53:02 +0000 UTC (0+1 container statuses recorded) Nov 14 04:53:07.160: INFO: Container agnhost ready: false, restart count 0 Nov 14 04:53:07.160: INFO: pod-subpath-test-nfs-dynamicpv-h6ws started at 2019-11-14 04:52:24 +0000 UTC (1+1 container statuses recorded) Nov 14 04:53:07.160: INFO: Init container init-volume-nfs-dynamicpv-h6ws ready: true, restart count 0 Nov 14 04:53:07.160: INFO: Container test-container-subpath-nfs-dynamicpv-h6ws ready: false, restart count 0 Nov 14 04:53:07.160: INFO: csi-hostpath-provisioner-0 started at 2019-11-14 04:50:53 +0000 UTC (0+1 container statuses recorded) Nov 14 04:53:07.160: INFO: Container csi-provisioner ready: true, restart count 0 W1114 04:53:07.216142 92624 metrics_grabber.go:79] Master node 
is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Nov 14 04:53:07.950: INFO: Latency metrics for node k8s-agentpool-23171212-vmss000000 Nov 14 04:53:07.950: INFO: Logging node info for node k8s-agentpool-23171212-vmss000001 Nov 14 04:53:08.006: INFO: Node Info: &Node{ObjectMeta:{k8s-agentpool-23171212-vmss000001 /api/v1/nodes/k8s-agentpool-23171212-vmss000001 e9c1f552-b95b-4548-9ecd-37a7f1925e75 26473 0 2019-11-14 04:40:09 +0000 UTC <nil> <nil> map[agentpool:agentpool beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:Standard_D4s_v3 beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:westus2 failure-domain.beta.kubernetes.io/zone:westus2-2 kubernetes.azure.com/cluster:kubetest-9c63b39e-0695-11ea-a4cc-c60aac250e75 kubernetes.azure.com/role:agent kubernetes.io/arch:amd64 kubernetes.io/hostname:k8s-agentpool-23171212-vmss000001 kubernetes.io/os:linux kubernetes.io/role:agent node-role.kubernetes.io/agent: storageprofile:managed storagetier:Premium_LRS] map[csi.volume.kubernetes.io/nodeid:{"csi-hostpath-ephemeral-6971":"k8s-agentpool-23171212-vmss000001","csi-hostpath-provisioning-3033":"k8s-agentpool-23171212-vmss000001","csi-hostpath-provisioning-3310":"k8s-agentpool-23171212-vmss000001","csi-hostpath-provisioning-4400":"k8s-agentpool-23171212-vmss000001","csi-hostpath-volume-expand-2485":"k8s-agentpool-23171212-vmss000001","csi-mock-csi-mock-volumes-3324":"csi-mock-csi-mock-volumes-3324","csi-mock-csi-mock-volumes-3770":"csi-mock-csi-mock-volumes-3770","csi-mock-csi-mock-volumes-9859":"csi-mock-csi-mock-volumes-9859"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] []},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUse_ExternalID:,ProviderID:azure:///subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-9c63b39e-0695-11ea-a4cc-c60aac250e75/providers/Microsoft.Compute/virtualMachineScaleSets/k8s-agentpool-23171212-vmss/virtualMachines/1,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{4 0} {<nil>} 4 DecimalSI},ephemeral-storage: {{31036776448 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{16797569024 0} {<nil>} 16403876Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{4 0} {<nil>} 4 DecimalSI},ephemeral-storage: {{27933098757 0} {<nil>} 27933098757 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{16011137024 0} {<nil>} 15635876Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2019-11-14 04:40:48 +0000 UTC,LastTransitionTime:2019-11-14 04:40:48 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2019-11-14 04:52:43 +0000 UTC,LastTransitionTime:2019-11-14 04:40:09 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2019-11-14 04:52:43 +0000 UTC,LastTransitionTime:2019-11-14 04:40:09 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2019-11-14 04:52:43 +0000 UTC,LastTransitionTime:2019-11-14 04:40:09 +0000 
UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2019-11-14 04:52:43 +0000 UTC,LastTransitionTime:2019-11-14 04:40:22 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.248.0.5,},NodeAddress{Type:Hostname,Address:k8s-agentpool-23171212-vmss000001,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:639707efd7a74ac4bca6a608e99a6715,SystemUUID:CACA620B-0C7C-7040-A716-91F766CA5A2F,BootID:9fabe02f-4e56-4162-b5c5-2e2733911b4f,KernelVersion:4.15.0-1063-azure,OSImage:Ubuntu 16.04.6 LTS,ContainerRuntimeVersion:docker://3.0.6,KubeletVersion:v1.16.4-beta.0.1+d70a3ca08fe72a,KubeProxyVersion:v1.16.4-beta.0.1+d70a3ca08fe72a,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8sprow.azurecr.io/hyperkube-amd64@sha256:4c04f9ab0fa34bcbcb8ebfbced912f9b998c5d9c090fafdca92911d124fa339b k8sprow.azurecr.io/hyperkube-amd64:azure-e2e-1194831241233305600-197629b6],SizeBytes:604811790,},ContainerImage{Names:[quay.io/kubernetes_incubator/nfs-provisioner@sha256:df762117e3c891f2d2ddff46ecb0776ba1f9f3c44cfd7739b0683bcd7a7954a8 quay.io/kubernetes_incubator/nfs-provisioner:v2.2.2],SizeBytes:391772778,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa k8s.gcr.io/etcd:3.3.15],SizeBytes:246640776,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/volume/nfs@sha256:c2ad734346f608a5f7d69cfded93c4e8094069320657bd372d12ba21dea3ea71 gcr.io/kubernetes-e2e-test-images/volume/nfs:1.0],SizeBytes:225358913,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0],SizeBytes:195659796,},ContainerImage{Names:[httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 httpd:2.4.38-alpine],SizeBytes:123781643,},ContainerImage{Names:[gcr.io/kubernetes-helm/tiller@sha256:f6d8f4ab9ba993b5f5b60a6edafe86352eabe474ffeb84cb6c79b8866dce45d1 gcr.io/kubernetes-helm/tiller:v2.11.0],SizeBytes:71821984,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/sample-apiserver@sha256:1bafcc6fb1aa990b487850adba9cadc020e42d7905aa8a30481182a477ba24b0 gcr.io/kubernetes-e2e-test-images/sample-apiserver:1.10],SizeBytes:61365829,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/agnhost@sha256:4057a5580c7b59c4fe10d8ab2732c9dec35eea80fd41f7bafc7bd5acc7edf727 gcr.io/kubernetes-e2e-test-images/agnhost:2.6],SizeBytes:57345321,},ContainerImage{Names:[quay.io/k8scsi/csi-provisioner@sha256:0efcb424f1dde9b9fb11a1a14f2e48ab47e1c3f08bc3a929990dcfcb1f7ab34f quay.io/k8scsi/csi-provisioner:v1.4.0-rc1],SizeBytes:54431016,},ContainerImage{Names:[quay.io/k8scsi/csi-snapshotter@sha256:e3d3e742e32d00488fdb401045b9b1d033d7ca0ab6e760f77b24750fc95e5f70 quay.io/k8scsi/csi-snapshotter:v2.0.0-rc1],SizeBytes:51703561,},ContainerImage{Names:[k8s.gcr.io/ip-masq-agent-amd64@sha256:269e0fb9d53fd37f7a135d6a55ea265a67279ba218aa148323f015cf70167340 k8s.gcr.io/ip-masq-agent-amd64:v2.3.0],SizeBytes:50144412,},ContainerImage{Names:[quay.io/k8scsi/csi-attacher@sha256:26fccd7a99d973845df1193b46ebdcc6ab8dc5f6e6be319750c471fce1742d13 
quay.io/k8scsi/csi-attacher:v1.2.0],SizeBytes:46226754,},ContainerImage{Names:[quay.io/k8scsi/csi-attacher@sha256:0aba670b4d9d6b2e720bbf575d733156c676b693ca26501235444490300db838 quay.io/k8scsi/csi-attacher:v1.1.0],SizeBytes:42839085,},ContainerImage{Names:[quay.io/k8scsi/csi-resizer@sha256:7d46fb6eb8b890dc546029d1565d502b4a1d974d33625c6ee2bc7991b77fc1a1 quay.io/k8scsi/csi-resizer:v0.2.0],SizeBytes:42817100,},ContainerImage{Names:[quay.io/k8scsi/csi-resizer@sha256:f315c9042e56def3c05c6b04fe79ec9da6d39ddc557ca365a76cf35964ea08b6 quay.io/k8scsi/csi-resizer:v0.1.0],SizeBytes:42623056,},ContainerImage{Names:[k8s.gcr.io/metrics-server-amd64@sha256:49a9f12f7067d11f42c803dbe61ed2c1299959ad85cb315b25ff7eef8e6b8892 k8s.gcr.io/metrics-server-amd64:v0.2.1],SizeBytes:42541759,},ContainerImage{Names:[gcr.io/google-containers/debian-base@sha256:6966a0aedd7592c18ff2dd803c08bd85780ee19f5e3a2e7cf908a4cd837afcde gcr.io/google-containers/debian-base:0.4.1],SizeBytes:42323657,},ContainerImage{Names:[redis@sha256:50899ea1ceed33fa03232f3ac57578a424faa1742c1ac9c7a7bdb95cdf19b858 redis:5.0.5-alpine],SizeBytes:29331594,},ContainerImage{Names:[quay.io/k8scsi/hostpathplugin@sha256:b4826e492fc1762fceaf9726f41575ca0a4567864d3d235da874818de18039de quay.io/k8scsi/hostpathplugin:v1.2.0-rc5],SizeBytes:28761497,},ContainerImage{Names:[mcr.microsoft.com/k8s/flexvolume/keyvault-flexvolume@sha256:4fd30d43947d4a54fc89ead7985beecfd3c9b2a93a0655a373b1608ab90bd5af mcr.microsoft.com/k8s/flexvolume/keyvault-flexvolume:v0.0.7],SizeBytes:22909487,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/echoserver@sha256:e9ba514b896cdf559eef8788b66c2c3ee55f3572df617647b4b0d8b6bf81cf19 gcr.io/kubernetes-e2e-test-images/echoserver:2.2],SizeBytes:21692741,},ContainerImage{Names:[quay.io/k8scsi/mock-driver@sha256:e0eed916b7d970bad2b7d9875f9ad16932f987f0f3d91ec5d86da68b0b5cc9d1 quay.io/k8scsi/mock-driver:v2.1.0],SizeBytes:16226335,},ContainerImage{Names:[nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 nginx:1.14-alpine],SizeBytes:16032814,},ContainerImage{Names:[quay.io/k8scsi/csi-node-driver-registrar@sha256:13daf82fb99e951a4bff8ae5fc7c17c3a8fe7130be6400990d8f6076c32d4599 quay.io/k8scsi/csi-node-driver-registrar:v1.1.0],SizeBytes:15815995,},ContainerImage{Names:[quay.io/k8scsi/livenessprobe@sha256:dde617756e0f602adc566ab71fd885f1dad451ad3fb063ac991c95a2ff47aea5 quay.io/k8scsi/livenessprobe:v1.1.0],SizeBytes:14967303,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/dnsutils@sha256:2abeee84efb79c14d731966e034af33bf324d3b26ca28497555511ff094b3ddd gcr.io/kubernetes-e2e-test-images/dnsutils:1.1],SizeBytes:9349974,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nonewprivs@sha256:10066e9039219449fe3c81f38fe01928f87914150768ab81b62a468e51fa7411 gcr.io/kubernetes-e2e-test-images/nonewprivs:1.0],SizeBytes:6757579,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nautilus@sha256:33a732d4c42a266912a5091598a0f07653c9134db4b8d571690d8afd509e0bfc gcr.io/kubernetes-e2e-test-images/nautilus:1.0],SizeBytes:4753501,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/test-webserver@sha256:7f93d6e32798ff28bc6289254d0c2867fe2c849c8e46edc50f8624734309812e gcr.io/kubernetes-e2e-test-images/test-webserver:1.0],SizeBytes:4732240,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/mounttest@sha256:c0bd6f0755f42af09a68c9a47fb993136588a76b3200ec305796b60d629d85d2 
gcr.io/kubernetes-e2e-test-images/mounttest:1.0],SizeBytes:1563521,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/mounttest-user@sha256:17319ca525ee003681fccf7e8c6b1b910ff4f49b653d939ac7f9b6e7c463933d gcr.io/kubernetes-e2e-test-images/mounttest-user:1.0],SizeBytes:1450451,},ContainerImage{Names:[mcr.microsoft.com/k8s/flexvolume/blobfuse-flexvolume@sha256:23d8c6033f02a1ecad05127ebdc931bb871264228661bc122704b0974e4d9fdd mcr.microsoft.com/k8s/flexvolume/blobfuse-flexvolume:1.0.8],SizeBytes:1159025,},ContainerImage{Names:[busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 busybox:1.29],SizeBytes:1154361,},ContainerImage{Names:[k8s.gcr.io/pause-amd64@sha256:59eec8837a4d942cc19a52b8c09ea75121acc38114a2c68b98983ce9356b8610 k8s.gcr.io/pause@sha256:f78411e19d84a252e53bff71a4407a5686c46983a2c2eeed83929b888179acea k8s.gcr.io/pause-amd64:3.1 k8s.gcr.io/pause:3.1],SizeBytes:742472,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Nov 14 04:53:08.007: INFO: Logging kubelet events for node k8s-agentpool-23171212-vmss000001 Nov 14 04:53:08.066: INFO: Logging pods the kubelet thinks is on node k8s-agentpool-23171212-vmss000001 Nov 14 04:53:08.181: INFO: kube-proxy-ng7z8 started at 2019-11-14 04:40:27 +0000 UTC (0+1 container statuses recorded) Nov 14 04:53:08.181: INFO: Container kube-proxy ready: true, restart count 0 Nov 14 04:53:08.181: INFO: e2e-test-httpd-rc-c4z5q started at 2019-11-14 04:52:43 +0000 UTC (0+1 container statuses recorded) Nov 14 04:53:08.181: INFO: Container e2e-test-httpd-rc ready: true, restart count 0 Nov 14 04:53:08.181: INFO: dns-test-34255698-c34e-4080-8adc-754530cc1503 started at 2019-11-14 04:52:57 +0000 UTC (0+3 container statuses recorded) Nov 14 04:53:08.181: INFO: Container jessie-querier ready: true, restart count 0 Nov 14 04:53:08.181: INFO: Container querier ready: true, restart count 0 Nov 14 04:53:08.181: INFO: Container webserver ready: true, restart count 0 Nov 14 04:53:08.181: INFO: ss2-0 started at 2019-11-14 04:52:44 +0000 UTC (0+1 container statuses recorded) Nov 14 04:53:08.181: INFO: Container webserver ready: true, restart count 0 Nov 14 04:53:08.181: INFO: external-provisioner-psrp2 started at 2019-11-14 04:51:42 +0000 UTC (0+1 container statuses recorded) Nov 14 04:53:08.181: INFO: Container nfs-provisioner ready: true, restart count 0 Nov 14 04:53:08.181: INFO: pod-secrets-43072a86-22c1-4f43-af43-52a8e723aac1 started at 2019-11-14 04:52:16 +0000 UTC (0+3 container statuses recorded) Nov 14 04:53:08.181: INFO: Container creates-volume-test ready: true, restart count 0 Nov 14 04:53:08.181: INFO: Container dels-volume-test ready: true, restart count 0 Nov 14 04:53:08.181: INFO: Container upds-volume-test ready: true, restart count 0 Nov 14 04:53:08.181: INFO: nfs-server started at 2019-11-14 04:51:59 +0000 UTC (0+1 container statuses recorded) Nov 14 04:53:08.181: INFO: Container nfs-server ready: true, restart count 0 Nov 14 04:53:08.181: INFO: azure-ip-masq-agent-mcg7w started at 2019-11-14 04:40:26 +0000 UTC (0+1 container statuses recorded) Nov 14 04:53:08.181: INFO: Container azure-ip-masq-agent ready: true, restart count 0 Nov 14 04:53:08.181: INFO: metrics-server-58ff8c5ddf-h7jqs started at 2019-11-14 04:40:50 +0000 UTC (0+1 container statuses recorded) Nov 14 04:53:08.181: INFO: Container metrics-server ready: true, restart count 0 Nov 14 04:53:08.181: INFO: liveness-7bade2fa-8e64-4f6d-9649-51e1d1a6d745 started at 2019-11-14 04:49:59 +0000 UTC (0+1 container statuses recorded) Nov 14 
04:53:08.181: INFO: Container liveness ready: true, restart count 0 Nov 14 04:53:08.181: INFO: blobfuse-flexvol-installer-ktdjj started at 2019-11-14 04:40:49 +0000 UTC (0+1 container statuses recorded) Nov 14 04:53:08.181: INFO: Container blobfuse-flexvol-installer ready: true, restart count 0 Nov 14 04:53:08.182: INFO: tiller-deploy-7559b6b885-vkxml started at 2019-11-14 04:40:50 +0000 UTC (0+1 container statuses recorded) Nov 14 04:53:08.182: INFO: Container tiller ready: true, restart count 0 Nov 14 04:53:08.182: INFO: ss2-2 started at 2019-11-14 04:52:57 +0000 UTC (0+1 container statuses recorded) Nov 14 04:53:08.182: INFO: Container webserver ready: true, restart count 0 Nov 14 04:53:08.182: INFO: external-provisioner-wmbtv started at 2019-11-14 04:53:03 +0000 UTC (0+1 container statuses recorded) Nov 14 04:53:08.182: INFO: Container nfs-provisioner ready: true, restart count 0 Nov 14 04:53:08.182: INFO: pod-subpath-test-hostpath-lbmc started at 2019-11-14 04:52:43 +0000 UTC (2+1 container statuses recorded) Nov 14 04:53:08.182: INFO: Init container init-volume-hostpath-lbmc ready: true, restart count 0 Nov 14 04:53:08.182: INFO: Init container test-init-volume-hostpath-lbmc ready: true, restart count 0 Nov 14 04:53:08.182: INFO: Container test-container-subpath-hostpath-lbmc ready: false, restart count 0 Nov 14 04:53:08.182: INFO: keyvault-flexvolume-2g62m started at 2019-11-14 04:40:49 +0000 UTC (0+1 container statuses recorded) Nov 14 04:53:08.182: INFO: Container keyvault-flexvolume ready: true, restart count 0 Nov 14 04:53:08.182: INFO: e2e-test-httpd-rc-9e04a2c24befd64d3e6c0c5a42b9268f-nq47q started at 2019-11-14 04:52:44 +0000 UTC (0+1 container statuses recorded) Nov 14 04:53:08.182: INFO: Container e2e-test-httpd-rc ready: true, restart count 0 Nov 14 04:53:08.182: INFO: ss2-0 started at 2019-11-14 04:52:22 +0000 UTC (0+1 container statuses recorded) Nov 14 04:53:08.182: INFO: Container webserver ready: true, restart count 0 Nov 14 04:53:08.182: INFO: external-provisioner-w7gjj started at 2019-11-14 04:52:09 +0000 UTC (0+1 container statuses recorded) Nov 14 04:53:08.182: INFO: Container nfs-provisioner ready: true, restart count 0 Nov 14 04:53:08.182: INFO: nfs-client started at 2019-11-14 04:53:00 +0000 UTC (0+1 container statuses recorded) Nov 14 04:53:08.182: INFO: Container nfs-client ready: true, restart count 0 Nov 14 04:53:08.182: INFO: downwardapi-volume-475a08a8-4b24-46b6-a7c8-1e67fcec1ff8 started at 2019-11-14 04:53:07 +0000 UTC (0+1 container statuses recorded) Nov 14 04:53:08.182: INFO: Container client-container ready: false, restart count 0 W1114 04:53:08.238385 92624 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
Nov 14 04:53:09.179: INFO: Latency metrics for node k8s-agentpool-23171212-vmss000001 Nov 14 04:53:09.179: INFO: Logging node info for node k8s-master-23171212-vmss000000 Nov 14 04:53:09.239: INFO: Node Info: &Node{ObjectMeta:{k8s-master-23171212-vmss000000 /api/v1/nodes/k8s-master-23171212-vmss000000 6c9bb7ee-6dcf-4c6d-a8ad-0377f76a60f6 26796 0 2019-11-14 04:40:03 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:Standard_DS2_v2 beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:westus2 failure-domain.beta.kubernetes.io/zone:westus2-1 kubernetes.azure.com/cluster:kubetest-9c63b39e-0695-11ea-a4cc-c60aac250e75 kubernetes.azure.com/role:master kubernetes.io/arch:amd64 kubernetes.io/hostname:k8s-master-23171212-vmss000000 kubernetes.io/os:linux kubernetes.io/role:master node-role.kubernetes.io/master:] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] []},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUse_ExternalID:,ProviderID:azure:///subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-9c63b39e-0695-11ea-a4cc-c60aac250e75/providers/Microsoft.Compute/virtualMachineScaleSets/k8s-master-23171212-vmss/virtualMachines/0,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:true,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{31036776448 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7284887552 0} {<nil>} 7114148Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{27933098757 0} {<nil>} 27933098757 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{6498455552 0} {<nil>} 6346148Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2019-11-14 04:40:48 +0000 UTC,LastTransitionTime:2019-11-14 04:40:48 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2019-11-14 04:52:55 +0000 UTC,LastTransitionTime:2019-11-14 04:39:58 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2019-11-14 04:52:55 +0000 UTC,LastTransitionTime:2019-11-14 04:39:58 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2019-11-14 04:52:55 +0000 UTC,LastTransitionTime:2019-11-14 04:39:58 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2019-11-14 04:52:55 +0000 UTC,LastTransitionTime:2019-11-14 04:40:22 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.240.0.4,},NodeAddress{Type:Hostname,Address:k8s-master-23171212-vmss000000,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:813714caae2d48f4a9036e17505029ae,SystemUUID:A7C76EFE-4E2A-8042-A754-6642A667D859,BootID:245ff6cc-bfb4-4487-ac55-fb3813c9167c,KernelVersion:4.15.0-1063-azure,OSImage:Ubuntu 16.04.6 LTS,ContainerRuntimeVersion:docker://3.0.6,KubeletVersion:v1.16.4-beta.0.1+d70a3ca08fe72a,KubeProxyVersion:v1.16.4-beta.0.1+d70a3ca08fe72a,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8sprow.azurecr.io/hyperkube-amd64@sha256:4c04f9ab0fa34bcbcb8ebfbced912f9b998c5d9c090fafdca92911d124fa339b k8sprow.azurecr.io/hyperkube-amd64:azure-e2e-1194831241233305600-197629b6],SizeBytes:604811790,},ContainerImage{Names:[k8sprow.azurecr.io/azure-cloud-controller-manager@sha256:6fcb752760f3412a2cb10bce535ba4dfa8267081345fa1b5cbc7bb5126ce3437 k8sprow.azurecr.io/azure-cloud-controller-manager:1194831241233305600-d3e4a1cf],SizeBytes:92595467,},ContainerImage{Names:[k8s.gcr.io/kube-addon-manager-amd64@sha256:382c220b3531d9f95bf316a16b7282cc2ef929cd8a89a9dd3f5933edafc41a8e k8s.gcr.io/kube-addon-manager-amd64:v9.0.1],SizeBytes:83076194,},ContainerImage{Names:[k8s.gcr.io/ip-masq-agent-amd64@sha256:269e0fb9d53fd37f7a135d6a55ea265a67279ba218aa148323f015cf70167340 k8s.gcr.io/ip-masq-agent-amd64:v2.3.0],SizeBytes:50144412,},ContainerImage{Names:[k8s.gcr.io/pause-amd64@sha256:59eec8837a4d942cc19a52b8c09ea75121acc38114a2c68b98983ce9356b8610 k8s.gcr.io/pause-amd64:3.1],SizeBytes:742472,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Nov 14 04:53:09.239: INFO: Logging kubelet events for node k8s-master-23171212-vmss000000 Nov 14 04:53:09.298: INFO: Logging pods the kubelet thinks is on node k8s-master-23171212-vmss000000 Nov 14 04:53:09.382: INFO: azure-ip-masq-agent-q7rgb started at 2019-11-14 04:40:27 +0000 UTC (0+1 container statuses recorded) Nov 14 04:53:09.382: INFO: Container azure-ip-masq-agent ready: true, restart count 0 Nov 14 04:53:09.382: INFO: kube-proxy-cpnbb started at 2019-11-14 04:40:28 +0000 UTC (0+1 container statuses recorded) Nov 14 04:53:09.382: INFO: Container kube-proxy ready: true, restart count 0 Nov 14 04:53:09.382: INFO: kube-scheduler-k8s-master-23171212-vmss000000 started at 2019-11-14 04:39:51 +0000 UTC (0+1 container statuses recorded) Nov 14 04:53:09.382: INFO: Container kube-scheduler ready: true, restart count 0 Nov 14 04:53:09.382: INFO: cloud-controller-manager-k8s-master-23171212-vmss000000 started at 2019-11-14 04:39:51 +0000 UTC (0+1 container statuses recorded) Nov 14 04:53:09.382: INFO: Container cloud-controller-manager ready: true, restart count 0 Nov 14 04:53:09.382: INFO: kube-addon-manager-k8s-master-23171212-vmss000000 started at 2019-11-14 04:39:52 +0000 UTC (0+1 container statuses recorded) Nov 14 04:53:09.382: INFO: Container kube-addon-manager ready: true, restart count 0 Nov 14 04:53:09.382: INFO: kube-apiserver-k8s-master-23171212-vmss000000 started at 2019-11-14 04:39:52 +0000 UTC (0+1 container statuses recorded) Nov 14 04:53:09.382: INFO: Container kube-apiserver ready: true, restart count 0 Nov 14 04:53:09.382: INFO: kube-controller-manager-k8s-master-23171212-vmss000000 started at 2019-11-14 04:39:52 +0000 UTC (0+1 container statuses recorded) Nov 14 04:53:09.382: INFO: Container kube-controller-manager ready: true, restart count 0 W1114 
04:53:09.441445 92624 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Nov 14 04:53:09.580: INFO: Latency metrics for node k8s-master-23171212-vmss000000 Nov 14 04:53:09.580: INFO: Logging node info for node k8s-master-23171212-vmss000001 Nov 14 04:53:09.635: INFO: Node Info: &Node{ObjectMeta:{k8s-master-23171212-vmss000001 /api/v1/nodes/k8s-master-23171212-vmss000001 202620f8-2cc3-4eb6-b880-ef6d6d9fbccd 26843 0 2019-11-14 04:40:04 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:Standard_DS2_v2 beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:westus2 failure-domain.beta.kubernetes.io/zone:westus2-2 kubernetes.azure.com/cluster:kubetest-9c63b39e-0695-11ea-a4cc-c60aac250e75 kubernetes.azure.com/role:master kubernetes.io/arch:amd64 kubernetes.io/hostname:k8s-master-23171212-vmss000001 kubernetes.io/os:linux kubernetes.io/role:master node-role.kubernetes.io/master:] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] []},Spec:NodeSpec{PodCIDR:10.244.5.0/24,DoNotUse_ExternalID:,ProviderID:azure:///subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-9c63b39e-0695-11ea-a4cc-c60aac250e75/providers/Microsoft.Compute/virtualMachineScaleSets/k8s-master-23171212-vmss/virtualMachines/1,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:true,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[10.244.5.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{31036776448 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7284887552 0} {<nil>} 7114148Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{27933098757 0} {<nil>} 27933098757 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{6498455552 0} {<nil>} 6346148Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2019-11-14 04:40:48 +0000 UTC,LastTransitionTime:2019-11-14 04:40:48 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2019-11-14 04:52:57 +0000 UTC,LastTransitionTime:2019-11-14 04:39:58 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2019-11-14 04:52:57 +0000 UTC,LastTransitionTime:2019-11-14 04:39:58 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2019-11-14 04:52:57 +0000 UTC,LastTransitionTime:2019-11-14 04:39:58 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2019-11-14 04:52:57 +0000 UTC,LastTransitionTime:2019-11-14 04:40:22 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.240.0.5,},NodeAddress{Type:Hostname,Address:k8s-master-23171212-vmss000001,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:4cafe5635afe4ac8baa078419003bc32,SystemUUID:88981890-9531-334C-9D46-A02D5E4BD18D,BootID:6accdcbe-b0af-4be0-8f82-19833a9a5e2e,KernelVersion:4.15.0-1063-azure,OSImage:Ubuntu 16.04.6 LTS,ContainerRuntimeVersion:docker://3.0.6,KubeletVersion:v1.16.4-beta.0.1+d70a3ca08fe72a,KubeProxyVersion:v1.16.4-beta.0.1+d70a3ca08fe72a,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8sprow.azurecr.io/hyperkube-amd64@sha256:4c04f9ab0fa34bcbcb8ebfbced912f9b998c5d9c090fafdca92911d124fa339b k8sprow.azurecr.io/hyperkube-amd64:azure-e2e-1194831241233305600-197629b6],SizeBytes:604811790,},ContainerImage{Names:[k8sprow.azurecr.io/azure-cloud-controller-manager@sha256:6fcb752760f3412a2cb10bce535ba4dfa8267081345fa1b5cbc7bb5126ce3437 k8sprow.azurecr.io/azure-cloud-controller-manager:1194831241233305600-d3e4a1cf],SizeBytes:92595467,},ContainerImage{Names:[k8s.gcr.io/kube-addon-manager-amd64@sha256:382c220b3531d9f95bf316a16b7282cc2ef929cd8a89a9dd3f5933edafc41a8e k8s.gcr.io/kube-addon-manager-amd64:v9.0.1],SizeBytes:83076194,},ContainerImage{Names:[k8s.gcr.io/ip-masq-agent-amd64@sha256:269e0fb9d53fd37f7a135d6a55ea265a67279ba218aa148323f015cf70167340 k8s.gcr.io/ip-masq-agent-amd64:v2.3.0],SizeBytes:50144412,},ContainerImage{Names:[k8s.gcr.io/pause-amd64@sha256:59eec8837a4d942cc19a52b8c09ea75121acc38114a2c68b98983ce9356b8610 k8s.gcr.io/pause-amd64:3.1],SizeBytes:742472,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Nov 14 04:53:09.635: INFO: Logging kubelet events for node k8s-master-23171212-vmss000001 Nov 14 04:53:09.694: INFO: Logging pods the kubelet thinks is on node k8s-master-23171212-vmss000001 Nov 14 04:53:09.774: INFO: kube-apiserver-k8s-master-23171212-vmss000001 started at 2019-11-14 04:39:52 +0000 UTC (0+1 container statuses recorded) Nov 14 04:53:09.774: INFO: Container kube-apiserver ready: true, restart count 0 Nov 14 04:53:09.774: INFO: kube-controller-manager-k8s-master-23171212-vmss000001 started at 2019-11-14 04:39:52 +0000 UTC (0+1 container statuses recorded) Nov 14 04:53:09.774: INFO: Container kube-controller-manager ready: true, restart count 0 Nov 14 04:53:09.774: INFO: azure-ip-masq-agent-dnl49 started at 2019-11-14 04:40:26 +0000 UTC (0+1 container statuses recorded) Nov 14 04:53:09.774: INFO: Container azure-ip-masq-agent ready: true, restart count 0 Nov 14 04:53:09.774: INFO: kube-proxy-srv2s started at 2019-11-14 04:40:27 +0000 UTC (0+1 container statuses recorded) Nov 14 04:53:09.774: INFO: Container kube-proxy ready: true, restart count 0 Nov 14 04:53:09.774: INFO: kube-scheduler-k8s-master-23171212-vmss000001 started at 2019-11-14 04:39:52 +0000 UTC (0+1 container statuses recorded) Nov 14 04:53:09.774: INFO: Container kube-scheduler ready: true, restart count 0 Nov 14 04:53:09.774: INFO: cloud-controller-manager-k8s-master-23171212-vmss000001 started at 2019-11-14 04:39:52 +0000 UTC (0+1 container statuses recorded) Nov 14 04:53:09.774: INFO: Container cloud-controller-manager ready: true, restart count 0 Nov 14 04:53:09.774: INFO: kube-addon-manager-k8s-master-23171212-vmss000001 started at 2019-11-14 04:39:52 +0000 UTC (0+1 container statuses recorded) Nov 14 04:53:09.774: INFO: Container kube-addon-manager ready: true, restart count 0 W1114 
04:53:09.867371 92624 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Nov 14 04:53:10.000: INFO: Latency metrics for node k8s-master-23171212-vmss000001 Nov 14 04:53:10.000: INFO: Logging node info for node k8s-master-23171212-vmss000002 Nov 14 04:53:10.055: INFO: Node Info: &Node{ObjectMeta:{k8s-master-23171212-vmss000002 /api/v1/nodes/k8s-master-23171212-vmss000002 8eca3a9a-6fd5-4796-82bb-2f37c6fc30b7 25932 0 2019-11-14 04:41:04 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:Standard_DS2_v2 beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:westus2 failure-domain.beta.kubernetes.io/zone:westus2-1 kubernetes.azure.com/cluster:kubetest-9c63b39e-0695-11ea-a4cc-c60aac250e75 kubernetes.azure.com/role:master kubernetes.io/arch:amd64 kubernetes.io/hostname:k8s-master-23171212-vmss000002 kubernetes.io/os:linux kubernetes.io/role:master node-role.kubernetes.io/master:] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] []},Spec:NodeSpec{PodCIDR:10.244.6.0/24,DoNotUse_ExternalID:,ProviderID:azure:///subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-9c63b39e-0695-11ea-a4cc-c60aac250e75/providers/Microsoft.Compute/virtualMachineScaleSets/k8s-master-23171212-vmss/virtualMachines/2,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:true,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[10.244.6.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{31036776448 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7284883456 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{27933098757 0} {<nil>} 27933098757 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{6498451456 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2019-11-14 04:41:18 +0000 UTC,LastTransitionTime:2019-11-14 04:41:18 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2019-11-14 04:52:26 +0000 UTC,LastTransitionTime:2019-11-14 04:40:56 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2019-11-14 04:52:26 +0000 UTC,LastTransitionTime:2019-11-14 04:40:56 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2019-11-14 04:52:26 +0000 UTC,LastTransitionTime:2019-11-14 04:40:56 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2019-11-14 04:52:26 +0000 UTC,LastTransitionTime:2019-11-14 04:41:04 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.240.0.6,},NodeAddress{Type:Hostname,Address:k8s-master-23171212-vmss000002,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:eb5abe50949445b79334d994c94314f8,SystemUUID:E11F8710-4785-DA42-B98E-8E97145F92C7,BootID:8fe9e9b2-2b16-4895-91c7-dc676b577942,KernelVersion:4.15.0-1063-azure,OSImage:Ubuntu 16.04.6 LTS,ContainerRuntimeVersion:docker://3.0.6,KubeletVersion:v1.16.4-beta.0.1+d70a3ca08fe72a,KubeProxyVersion:v1.16.4-beta.0.1+d70a3ca08fe72a,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8sprow.azurecr.io/hyperkube-amd64@sha256:4c04f9ab0fa34bcbcb8ebfbced912f9b998c5d9c090fafdca92911d124fa339b k8sprow.azurecr.io/hyperkube-amd64:azure-e2e-1194831241233305600-197629b6],SizeBytes:604811790,},ContainerImage{Names:[k8sprow.azurecr.io/azure-cloud-controller-manager@sha256:6fcb752760f3412a2cb10bce535ba4dfa8267081345fa1b5cbc7bb5126ce3437 k8sprow.azurecr.io/azure-cloud-controller-manager:1194831241233305600-d3e4a1cf],SizeBytes:92595467,},ContainerImage{Names:[k8s.gcr.io/kube-addon-manager-amd64@sha256:382c220b3531d9f95bf316a16b7282cc2ef929cd8a89a9dd3f5933edafc41a8e k8s.gcr.io/kube-addon-manager-amd64:v9.0.1],SizeBytes:83076194,},ContainerImage{Names:[k8s.gcr.io/ip-masq-agent-amd64@sha256:269e0fb9d53fd37f7a135d6a55ea265a67279ba218aa148323f015cf70167340 k8s.gcr.io/ip-masq-agent-amd64:v2.3.0],SizeBytes:50144412,},ContainerImage{Names:[k8s.gcr.io/pause-amd64@sha256:59eec8837a4d942cc19a52b8c09ea75121acc38114a2c68b98983ce9356b8610 k8s.gcr.io/pause-amd64:3.1],SizeBytes:742472,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Nov 14 04:53:10.055: INFO: Logging kubelet events for node k8s-master-23171212-vmss000002 Nov 14 04:53:10.115: INFO: Logging pods the kubelet thinks is on node k8s-master-23171212-vmss000002 Nov 14 04:53:10.194: INFO: cloud-controller-manager-k8s-master-23171212-vmss000002 started at 2019-11-14 04:40:53 +0000 UTC (0+1 container statuses recorded) Nov 14 04:53:10.194: INFO: Container cloud-controller-manager ready: true, restart count 0 Nov 14 04:53:10.194: INFO: azure-ip-masq-agent-mw27f started at 2019-11-14 04:41:05 +0000 UTC (0+1 container statuses recorded) Nov 14 04:53:10.194: INFO: Container azure-ip-masq-agent ready: true, restart count 0 Nov 14 04:53:10.194: INFO: kube-proxy-4vs6q started at 2019-11-14 04:41:06 +0000 UTC (0+1 container statuses recorded) Nov 14 04:53:10.194: INFO: Container kube-proxy ready: true, restart count 0 Nov 14 04:53:10.194: INFO: kube-addon-manager-k8s-master-23171212-vmss000002 started at 2019-11-14 04:40:52 +0000 UTC (0+1 container statuses recorded) Nov 14 04:53:10.194: INFO: Container kube-addon-manager ready: true, restart count 0 Nov 14 04:53:10.194: INFO: kube-apiserver-k8s-master-23171212-vmss000002 started at 2019-11-14 04:40:52 +0000 UTC (0+1 container statuses recorded) Nov 14 04:53:10.194: INFO: Container kube-apiserver ready: true, restart count 0 Nov 14 04:53:10.194: INFO: kube-controller-manager-k8s-master-23171212-vmss000002 started at 2019-11-14 04:40:52 +0000 UTC (0+1 container statuses recorded) Nov 14 04:53:10.194: INFO: Container kube-controller-manager ready: true, restart count 0 Nov 14 04:53:10.194: INFO: kube-scheduler-k8s-master-23171212-vmss000002 started at 2019-11-14 04:40:52 +0000 UTC (0+1 container statuses recorded) Nov 14 04:53:10.194: INFO: Container kube-scheduler ready: true, restart count 0 W1114 
04:53:10.251406 92624 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Nov 14 04:53:10.382: INFO: Latency metrics for node k8s-master-23171212-vmss000002 Nov 14 04:53:10.382: INFO: Logging node info for node k8s-master-23171212-vmss000003 Nov 14 04:53:10.437: INFO: Node Info: &Node{ObjectMeta:{k8s-master-23171212-vmss000003 /api/v1/nodes/k8s-master-23171212-vmss000003 b1a400e7-f6ff-4241-9175-cd8bd70dd11a 26798 0 2019-11-14 04:40:03 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:Standard_DS2_v2 beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:westus2 failure-domain.beta.kubernetes.io/zone:westus2-2 kubernetes.azure.com/cluster:kubetest-9c63b39e-0695-11ea-a4cc-c60aac250e75 kubernetes.azure.com/role:master kubernetes.io/arch:amd64 kubernetes.io/hostname:k8s-master-23171212-vmss000003 kubernetes.io/os:linux kubernetes.io/role:master node-role.kubernetes.io/master:] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] []},Spec:NodeSpec{PodCIDR:10.244.3.0/24,DoNotUse_ExternalID:,ProviderID:azure:///subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-9c63b39e-0695-11ea-a4cc-c60aac250e75/providers/Microsoft.Compute/virtualMachineScaleSets/k8s-master-23171212-vmss/virtualMachines/3,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:true,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[10.244.3.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{31036776448 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7284887552 0} {<nil>} 7114148Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{27933098757 0} {<nil>} 27933098757 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{6498455552 0} {<nil>} 6346148Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2019-11-14 04:40:48 +0000 UTC,LastTransitionTime:2019-11-14 04:40:48 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2019-11-14 04:52:55 +0000 UTC,LastTransitionTime:2019-11-14 04:39:59 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2019-11-14 04:52:55 +0000 UTC,LastTransitionTime:2019-11-14 04:39:59 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2019-11-14 04:52:55 +0000 UTC,LastTransitionTime:2019-11-14 04:39:59 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2019-11-14 04:52:55 +0000 UTC,LastTransitionTime:2019-11-14 04:40:22 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.240.0.7,},NodeAddress{Type:Hostname,Address:k8s-master-23171212-vmss000003,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:effe7f682034467995d1db3ee85a4a38,SystemUUID:2073A143-352C-D241-B189-4A1DCC64C62C,BootID:6c95e89b-c056-494f-b817-6494fc9fd635,KernelVersion:4.15.0-1063-azure,OSImage:Ubuntu 16.04.6 LTS,ContainerRuntimeVersion:docker://3.0.6,KubeletVersion:v1.16.4-beta.0.1+d70a3ca08fe72a,KubeProxyVersion:v1.16.4-beta.0.1+d70a3ca08fe72a,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8sprow.azurecr.io/hyperkube-amd64@sha256:4c04f9ab0fa34bcbcb8ebfbced912f9b998c5d9c090fafdca92911d124fa339b k8sprow.azurecr.io/hyperkube-amd64:azure-e2e-1194831241233305600-197629b6],SizeBytes:604811790,},ContainerImage{Names:[k8sprow.azurecr.io/azure-cloud-controller-manager@sha256:6fcb752760f3412a2cb10bce535ba4dfa8267081345fa1b5cbc7bb5126ce3437 k8sprow.azurecr.io/azure-cloud-controller-manager:1194831241233305600-d3e4a1cf],SizeBytes:92595467,},ContainerImage{Names:[k8s.gcr.io/kube-addon-manager-amd64@sha256:382c220b3531d9f95bf316a16b7282cc2ef929cd8a89a9dd3f5933edafc41a8e k8s.gcr.io/kube-addon-manager-amd64:v9.0.1],SizeBytes:83076194,},ContainerImage{Names:[k8s.gcr.io/ip-masq-agent-amd64@sha256:269e0fb9d53fd37f7a135d6a55ea265a67279ba218aa148323f015cf70167340 k8s.gcr.io/ip-masq-agent-amd64:v2.3.0],SizeBytes:50144412,},ContainerImage{Names:[k8s.gcr.io/pause-amd64@sha256:59eec8837a4d942cc19a52b8c09ea75121acc38114a2c68b98983ce9356b8610 k8s.gcr.io/pause-amd64:3.1],SizeBytes:742472,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Nov 14 04:53:10.437: INFO: Logging kubelet events for node k8s-master-23171212-vmss000003 Nov 14 04:53:10.495: INFO: Logging pods the kubelet thinks is on node k8s-master-23171212-vmss000003 Nov 14 04:53:10.574: INFO: cloud-controller-manager-k8s-master-23171212-vmss000003 started at 2019-11-14 04:39:52 +0000 UTC (0+1 container statuses recorded) Nov 14 04:53:10.574: INFO: Container cloud-controller-manager ready: true, restart count 0 Nov 14 04:53:10.574: INFO: kube-addon-manager-k8s-master-23171212-vmss000003 started at 2019-11-14 04:39:52 +0000 UTC (0+1 container statuses recorded) Nov 14 04:53:10.574: INFO: Container kube-addon-manager ready: true, restart count 0 Nov 14 04:53:10.574: INFO: kube-apiserver-k8s-master-23171212-vmss000003 started at 2019-11-14 04:39:52 +0000 UTC (0+1 container statuses recorded) Nov 14 04:53:10.574: INFO: Container kube-apiserver ready: true, restart count 0 Nov 14 04:53:10.574: INFO: kube-controller-manager-k8s-master-23171212-vmss000003 started at 2019-11-14 04:39:52 +0000 UTC (0+1 container statuses recorded) Nov 14 04:53:10.574: INFO: Container kube-controller-manager ready: true, restart count 0 Nov 14 04:53:10.574: INFO: kube-scheduler-k8s-master-23171212-vmss000003 started at 2019-11-14 04:39:52 +0000 UTC (0+1 container statuses recorded) Nov 14 04:53:10.574: INFO: Container kube-scheduler ready: true, restart count 0 Nov 14 04:53:10.574: INFO: azure-ip-masq-agent-4s5bk started at 2019-11-14 04:40:27 +0000 UTC (0+1 container statuses recorded) Nov 14 04:53:10.574: INFO: Container azure-ip-masq-agent ready: true, restart count 0 Nov 14 04:53:10.574: INFO: kube-proxy-hrqtx started at 2019-11-14 04:40:27 +0000 UTC (0+1 container statuses recorded) Nov 14 04:53:10.574: INFO: Container kube-proxy ready: true, restart count 0 W1114 
04:53:10.630888 92624 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Nov 14 04:53:10.763: INFO: Latency metrics for node k8s-master-23171212-vmss000003 Nov 14 04:53:10.763: INFO: Logging node info for node k8s-master-23171212-vmss000004 Nov 14 04:53:10.818: INFO: Node Info: &Node{ObjectMeta:{k8s-master-23171212-vmss000004 /api/v1/nodes/k8s-master-23171212-vmss000004 25a9993c-54fa-45cc-9da7-66c66cafa30f 26981 0 2019-11-14 04:40:08 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:Standard_DS2_v2 beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:westus2 failure-domain.beta.kubernetes.io/zone:westus2-1 kubernetes.azure.com/cluster:kubetest-9c63b39e-0695-11ea-a4cc-c60aac250e75 kubernetes.azure.com/role:master kubernetes.io/arch:amd64 kubernetes.io/hostname:k8s-master-23171212-vmss000004 kubernetes.io/os:linux kubernetes.io/role:master node-role.kubernetes.io/master:] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] []},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUse_ExternalID:,ProviderID:azure:///subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-9c63b39e-0695-11ea-a4cc-c60aac250e75/providers/Microsoft.Compute/virtualMachineScaleSets/k8s-master-23171212-vmss/virtualMachines/4,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:true,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{31036776448 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7284887552 0} {<nil>} 7114148Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{27933098757 0} {<nil>} 27933098757 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{6498455552 0} {<nil>} 6346148Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2019-11-14 04:40:48 +0000 UTC,LastTransitionTime:2019-11-14 04:40:48 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2019-11-14 04:53:01 +0000 UTC,LastTransitionTime:2019-11-14 04:40:05 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2019-11-14 04:53:01 +0000 UTC,LastTransitionTime:2019-11-14 04:40:05 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2019-11-14 04:53:01 +0000 UTC,LastTransitionTime:2019-11-14 04:40:05 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2019-11-14 04:53:01 +0000 UTC,LastTransitionTime:2019-11-14 04:40:22 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.240.0.8,},NodeAddress{Type:Hostname,Address:k8s-master-23171212-vmss000004,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:ab6b205a70ea45b1b28b801e68a4ba84,SystemUUID:65406178-5013-644C-AD46-D7BC6F0DD7BF,BootID:e6b05928-9970-49a5-bd51-149982b32750,KernelVersion:4.15.0-1063-azure,OSImage:Ubuntu 16.04.6 LTS,ContainerRuntimeVersion:docker://3.0.6,KubeletVersion:v1.16.4-beta.0.1+d70a3ca08fe72a,KubeProxyVersion:v1.16.4-beta.0.1+d70a3ca08fe72a,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8sprow.azurecr.io/hyperkube-amd64@sha256:4c04f9ab0fa34bcbcb8ebfbced912f9b998c5d9c090fafdca92911d124fa339b k8sprow.azurecr.io/hyperkube-amd64:azure-e2e-1194831241233305600-197629b6],SizeBytes:604811790,},ContainerImage{Names:[k8sprow.azurecr.io/azure-cloud-controller-manager@sha256:6fcb752760f3412a2cb10bce535ba4dfa8267081345fa1b5cbc7bb5126ce3437 k8sprow.azurecr.io/azure-cloud-controller-manager:1194831241233305600-d3e4a1cf],SizeBytes:92595467,},ContainerImage{Names:[k8s.gcr.io/kube-addon-manager-amd64@sha256:382c220b3531d9f95bf316a16b7282cc2ef929cd8a89a9dd3f5933edafc41a8e k8s.gcr.io/kube-addon-manager-amd64:v9.0.1],SizeBytes:83076194,},ContainerImage{Names:[k8s.gcr.io/ip-masq-agent-amd64@sha256:269e0fb9d53fd37f7a135d6a55ea265a67279ba218aa148323f015cf70167340 k8s.gcr.io/ip-masq-agent-amd64:v2.3.0],SizeBytes:50144412,},ContainerImage{Names:[k8s.gcr.io/pause-amd64@sha256:59eec8837a4d942cc19a52b8c09ea75121acc38114a2c68b98983ce9356b8610 k8s.gcr.io/pause-amd64:3.1],SizeBytes:742472,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Nov 14 04:53:10.818: INFO: Logging kubelet events for node k8s-master-23171212-vmss000004 Nov 14 04:53:10.877: INFO: Logging pods the kubelet thinks is on node k8s-master-23171212-vmss000004 Nov 14 04:53:10.965: INFO: cloud-controller-manager-k8s-master-23171212-vmss000004 started at 2019-11-14 04:39:52 +0000 UTC (0+1 container statuses recorded) Nov 14 04:53:10.965: INFO: Container cloud-controller-manager ready: true, restart count 0 Nov 14 04:53:10.965: INFO: kube-addon-manager-k8s-master-23171212-vmss000004 started at 2019-11-14 04:39:52 +0000 UTC (0+1 container statuses recorded) Nov 14 04:53:10.965: INFO: Container kube-addon-manager ready: true, restart count 0 Nov 14 04:53:10.965: INFO: kube-apiserver-k8s-master-23171212-vmss000004 started at 2019-11-14 04:39:52 +0000 UTC (0+1 container statuses recorded) Nov 14 04:53:10.965: INFO: Container kube-apiserver ready: true, restart count 0 Nov 14 04:53:10.965: INFO: kube-controller-manager-k8s-master-23171212-vmss000004 started at 2019-11-14 04:39:52 +0000 UTC (0+1 container statuses recorded) Nov 14 04:53:10.965: INFO: Container kube-controller-manager ready: true, restart count 0 Nov 14 04:53:10.965: INFO: azure-ip-masq-agent-47pzk started at 2019-11-14 04:40:26 +0000 UTC (0+1 container statuses recorded) Nov 14 04:53:10.965: INFO: Container azure-ip-masq-agent ready: true, restart count 0 Nov 14 04:53:10.965: INFO: kube-proxy-47vmd started at 2019-11-14 04:40:27 +0000 UTC (0+1 container statuses recorded) Nov 14 04:53:10.965: INFO: Container kube-proxy ready: true, restart count 0 Nov 14 04:53:10.965: INFO: kube-scheduler-k8s-master-23171212-vmss000004 started at 2019-11-14 04:39:52 +0000 UTC (0+1 container statuses recorded) Nov 14 04:53:10.965: INFO: Container kube-scheduler ready: true, restart count 0 W1114 
04:53:11.022655 92624 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Nov 14 04:53:11.166: INFO: Latency metrics for node k8s-master-23171212-vmss000004 Nov 14 04:53:11.166: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-3367" for this suite. Nov 14 04:53:17.404: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Nov 14 04:53:19.286: INFO: namespace pods-3367 deletion completed in 8.063921893s
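The repeated W1114 metrics_grabber.go:79 warning above ("Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.") means the e2e metrics grabber never identified a registered master node, so control-plane metrics are skipped for the whole run. The k8s-master-*-vmss* VMs are plainly registered in the node dumps, so the warning most likely reflects the grabber's master-detection heuristic not matching aks-engine's node naming — a guess, not a confirmed diagnosis. A hedged sketch of the kind of check involved, using the legacy role label those nodes carry (an assumption, not metrics_grabber.go's actual code):

package metricsdebug

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// hasRegisteredMaster reports whether any registered node carries the
// legacy master role label (node-role.kubernetes.io/master), which every
// k8s-master-* node in the dumps above does. This is an assumption about
// what "master node is registered" could mean, not the framework's real
// detection logic.
func hasRegisteredMaster(ctx context.Context, cs kubernetes.Interface) (bool, error) {
	nodes, err := cs.CoreV1().Nodes().List(ctx, metav1.ListOptions{})
	if err != nil {
		return false, err
	}
	for _, n := range nodes.Items {
		if _, ok := n.Labels["node-role.kubernetes.io/master"]; ok {
			return true, nil
		}
	}
	return false, nil
}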
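Each "Logging pods the kubelet thinks is on node ..." block above is the framework cross-checking the kubelet's own pod list against the API server's view. A minimal sketch of one way to fetch that list through the API server's node proxy — the :10250 port and the /pods suffix are assumptions based on the kubelet's standard endpoints, not a quote of the framework's helper:

package debug

import (
	"context"
	"encoding/json"

	v1 "k8s.io/api/core/v1"
	"k8s.io/client-go/kubernetes"
)

// kubeletPods asks a node's kubelet (via the API-server proxy) which pods
// it believes it is running; the INFO lines above print each entry's
// container readiness and restart count from a list like this one.
func kubeletPods(ctx context.Context, cs kubernetes.Interface, node string) (*v1.PodList, error) {
	raw, err := cs.CoreV1().RESTClient().Get().
		Resource("nodes").
		Name(node + ":10250").
		SubResource("proxy").
		Suffix("pods").
		DoRaw(ctx)
	if err != nil {
		return nil, err
	}
	list := &v1.PodList{}
	if err := json.Unmarshal(raw, list); err != nil {
		return nil, err
	}
	return list, nil
}

The same payload is what kubectl get --raw "/api/v1/nodes/<node>:10250/proxy/pods" returns, which can be handy when reproducing this kind of dump by hand.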
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[k8s\.io\]\s\[sig\-node\]\sPods\sExtended\s\[k8s\.io\]\sDelete\sGrace\sPeriod\sshould\sbe\ssubmitted\sand\sremoved\s\[Conformance\]$'
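This repro command focuses ginkgo on a single spec. The regex matches the spec's full name, which ginkgo assembles by joining the nested Describe/It texts with single spaces (hence the \s separators and the escaped brackets); the leading "Kubernetes e2e suite" is the suite description passed to RunSpecs. A hedged sketch of the shape of such a spec — illustrative nesting only, the real one lives in test/e2e/node/pods.go:

package e2e

import "github.com/onsi/ginkgo"

// Ginkgo concatenates these texts into
// "[k8s.io] [sig-node] Pods Extended [k8s.io] Delete Grace Period
//  should be submitted and removed [Conformance]",
// which --ginkgo.focus then matches as a regular expression.
var _ = ginkgo.Describe("[k8s.io] [sig-node] Pods Extended", func() {
	ginkgo.Describe("[k8s.io] Delete Grace Period", func() {
		ginkgo.It("should be submitted and removed [Conformance]", func() {
			// test body elided
		})
	})
})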
test/e2e/framework/framework.go:698 Nov 14 04:54:04.744: kubelet never observed the termination notice Unexpected error: <*errors.errorString | 0xc0000d5090>: { s: "timed out waiting for the condition", } timed out waiting for the condition occurred test/e2e/node/pods.go:163 from junit_12.xml
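"timed out waiting for the condition" is the stock message of wait.ErrWaitTimeout from k8s.io/apimachinery/pkg/util/wait, returned when a polled condition never becomes true before its deadline; here the condition "kubelet observed the termination notice" never fired. A minimal sketch of that failure mode with a stand-in condition (the real check at test/e2e/node/pods.go:163 consults the kubelet through the proxy started in the step log below):

package main

import (
	"fmt"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

func main() {
	// Stand-in for the real check, which asks the kubelet whether the
	// deleted pod has actually entered its terminating state.
	kubeletObservedTermination := func() (bool, error) {
		return false, nil // simulate a kubelet that never reports it
	}
	// wait.Poll retries every interval until the condition returns true
	// or the timeout elapses; on timeout it yields wait.ErrWaitTimeout,
	// whose Error() text is exactly the message in the failure above.
	err := wait.Poll(2*time.Second, 30*time.Second, kubeletObservedTermination)
	fmt.Println(err) // timed out waiting for the condition
}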
[BeforeEach] [k8s.io] [sig-node] Pods Extended test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Nov 14 04:53:19.290: INFO: >>> kubeConfig: /workspace/aks287781815/kubeconfig/kubeconfig.westus2.json STEP: Building a namespace api object, basename pods STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in pods-3063 STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Delete Grace Period test/e2e/node/pods.go:47 [It] should be submitted and removed [Conformance] test/e2e/framework/framework.go:698 STEP: creating the pod STEP: setting up selector STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes Nov 14 04:53:34.080: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --server=https://kubetest-9c63b39e-0695-11ea-a4cc-c60aac250e75.westus2.cloudapp.azure.com --kubeconfig=/workspace/aks287781815/kubeconfig/kubeconfig.westus2.json proxy -p 0' STEP: deleting the pod gracefully STEP: verifying the kubelet observed the termination notice Nov 14 04:54:04.744: FAIL: kubelet never observed the termination notice Unexpected error: <*errors.errorString | 0xc0000d5090>: { s: "timed out waiting for the condition", } timed out waiting for the condition occurred [AfterEach] [k8s.io] [sig-node] Pods Extended test/e2e/framework/framework.go:152 STEP: Collecting events from namespace "pods-3063". STEP: Found 6 events. Nov 14 04:54:04.802: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for pod-submit-remove-07ef7285-6684-498b-950f-33ea439d63f8: {default-scheduler } Scheduled: Successfully assigned pods-3063/pod-submit-remove-07ef7285-6684-498b-950f-33ea439d63f8 to k8s-agentpool-23171212-vmss000001 Nov 14 04:54:04.802: INFO: At 2019-11-14 04:53:22 +0000 UTC - event for pod-submit-remove-07ef7285-6684-498b-950f-33ea439d63f8: {kubelet k8s-agentpool-23171212-vmss000001} Pulling: Pulling image "docker.io/library/nginx:1.14-alpine" Nov 14 04:54:04.802: INFO: At 2019-11-14 04:53:23 +0000 UTC - event for pod-submit-remove-07ef7285-6684-498b-950f-33ea439d63f8: {kubelet k8s-agentpool-23171212-vmss000001} Pulled: Successfully pulled image "docker.io/library/nginx:1.14-alpine" Nov 14 04:54:04.802: INFO: At 2019-11-14 04:53:24 +0000 UTC - event for pod-submit-remove-07ef7285-6684-498b-950f-33ea439d63f8: {kubelet k8s-agentpool-23171212-vmss000001} Created: Created container nginx Nov 14 04:54:04.802: INFO: At 2019-11-14 04:53:24 +0000 UTC - event for pod-submit-remove-07ef7285-6684-498b-950f-33ea439d63f8: {kubelet k8s-agentpool-23171212-vmss000001} Started: Started container nginx Nov 14 04:54:04.802: INFO: At 2019-11-14 04:53:34 +0000 UTC - event for pod-submit-remove-07ef7285-6684-498b-950f-33ea439d63f8: {kubelet k8s-agentpool-23171212-vmss000001} Killing: Stopping container nginx Nov 14 04:54:04.857: INFO: POD NODE PHASE GRACE CONDITIONS Nov 14 04:54:04.857: INFO: pod-submit-remove-07ef7285-6684-498b-950f-33ea439d63f8 k8s-agentpool-23171212-vmss000001 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-11-14 04:53:19 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-11-14 04:53:55 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-11-14 04:53:55 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-11-14 04:53:19 +0000 
UTC }]
Nov 14 04:54:04.857: INFO:
Nov 14 04:54:04.967: INFO: Logging node info for node k8s-agentpool-23171212-vmss000000
Nov 14 04:54:05.022: INFO: Node Info: &Node{ObjectMeta:{k8s-agentpool-23171212-vmss000000 /api/v1/nodes/k8s-agentpool-23171212-vmss000000 0f3bbebc-9d46-4ddd-a1dc-c93db8b52883 28910 0 2019-11-14 04:40:04 +0000 UTC <nil> <nil> map[agentpool:agentpool beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:Standard_D4s_v3 beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:westus2 failure-domain.beta.kubernetes.io/zone:westus2-1 kubernetes.azure.com/cluster:kubetest-9c63b39e-0695-11ea-a4cc-c60aac250e75 kubernetes.azure.com/role:agent kubernetes.io/arch:amd64 kubernetes.io/hostname:k8s-agentpool-23171212-vmss000000 kubernetes.io/os:linux kubernetes.io/role:agent node-role.kubernetes.io/agent: storageprofile:managed storagetier:Premium_LRS] map[csi.volume.kubernetes.io/nodeid:{"csi-hostpath-provisioning-2202":"k8s-agentpool-23171212-vmss000000","csi-hostpath-provisioning-8364":"k8s-agentpool-23171212-vmss000000","csi-hostpath-volume-8403":"k8s-agentpool-23171212-vmss000000","csi-hostpath-volume-expand-1206":"k8s-agentpool-23171212-vmss000000","csi-hostpath-volume-expand-2585":"k8s-agentpool-23171212-vmss000000","csi-hostpath-volume-expand-5498":"k8s-agentpool-23171212-vmss000000","csi-hostpath-volume-expand-6633":"k8s-agentpool-23171212-vmss000000","csi-mock-csi-mock-volumes-4558":"csi-mock-csi-mock-volumes-4558","csi-mock-csi-mock-volumes-6397":"csi-mock-csi-mock-volumes-6397","csi-mock-csi-mock-volumes-7486":"csi-mock-csi-mock-volumes-7486","csi-mock-csi-mock-volumes-7581":"csi-mock-csi-mock-volumes-7581","csi-mock-csi-mock-volumes-8512":"csi-mock-csi-mock-volumes-8512","csi-mock-csi-mock-volumes-9601":"csi-mock-csi-mock-volumes-9601"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] []},Spec:NodeSpec{PodCIDR:10.244.4.0/24,DoNotUse_ExternalID:,ProviderID:azure:///subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-9c63b39e-0695-11ea-a4cc-c60aac250e75/providers/Microsoft.Compute/virtualMachineScaleSets/k8s-agentpool-23171212-vmss/virtualMachines/0,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.4.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{4 0} {<nil>} 4 DecimalSI},ephemeral-storage: {{31036776448 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{16797569024 0} {<nil>} 16403876Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{4 0} {<nil>} 4 DecimalSI},ephemeral-storage: {{27933098757 0} {<nil>} 27933098757 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{16011137024 0} {<nil>} 15635876Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2019-11-14 04:40:48 +0000 UTC,LastTransitionTime:2019-11-14 04:40:48 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2019-11-14 04:53:48 +0000 UTC,LastTransitionTime:2019-11-14 04:39:50 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2019-11-14 04:53:48 +0000 UTC,LastTransitionTime:2019-11-14 04:39:50 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2019-11-14 04:53:48 +0000 UTC,LastTransitionTime:2019-11-14 04:39:50 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2019-11-14 04:53:48 +0000 UTC,LastTransitionTime:2019-11-14 04:40:22 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.248.0.4,},NodeAddress{Type:Hostname,Address:k8s-agentpool-23171212-vmss000000,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:359d6aea81114a07a8070169aad06c4a,SystemUUID:A77EC1C1-102D-514B-A3FC-E5E916EF17BD,BootID:fc99ebb5-9bcd-41e5-aad2-849e47da2eea,KernelVersion:4.15.0-1063-azure,OSImage:Ubuntu 16.04.6 LTS,ContainerRuntimeVersion:docker://3.0.6,KubeletVersion:v1.16.4-beta.0.1+d70a3ca08fe72a,KubeProxyVersion:v1.16.4-beta.0.1+d70a3ca08fe72a,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8sprow.azurecr.io/hyperkube-amd64@sha256:4c04f9ab0fa34bcbcb8ebfbced912f9b998c5d9c090fafdca92911d124fa339b k8sprow.azurecr.io/hyperkube-amd64:azure-e2e-1194831241233305600-197629b6],SizeBytes:604811790,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/volume/nfs@sha256:c2ad734346f608a5f7d69cfded93c4e8094069320657bd372d12ba21dea3ea71 gcr.io/kubernetes-e2e-test-images/volume/nfs:1.0],SizeBytes:225358913,},ContainerImage{Names:[httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 httpd:2.4.38-alpine],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/kubernetes-dashboard-amd64@sha256:0ae6b69432e78069c5ce2bcde0fe409c5c4d6f0f4d9cd50a17974fea38898747 k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.1],SizeBytes:121711221,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/agnhost@sha256:4057a5580c7b59c4fe10d8ab2732c9dec35eea80fd41f7bafc7bd5acc7edf727 gcr.io/kubernetes-e2e-test-images/agnhost:2.6],SizeBytes:57345321,},ContainerImage{Names:[quay.io/k8scsi/csi-provisioner@sha256:0efcb424f1dde9b9fb11a1a14f2e48ab47e1c3f08bc3a929990dcfcb1f7ab34f quay.io/k8scsi/csi-provisioner:v1.4.0-rc1],SizeBytes:54431016,},ContainerImage{Names:[quay.io/k8scsi/csi-snapshotter@sha256:e3d3e742e32d00488fdb401045b9b1d033d7ca0ab6e760f77b24750fc95e5f70 quay.io/k8scsi/csi-snapshotter:v2.0.0-rc1],SizeBytes:51703561,},ContainerImage{Names:[k8s.gcr.io/ip-masq-agent-amd64@sha256:269e0fb9d53fd37f7a135d6a55ea265a67279ba218aa148323f015cf70167340 k8s.gcr.io/ip-masq-agent-amd64:v2.3.0],SizeBytes:50144412,},ContainerImage{Names:[quay.io/k8scsi/csi-attacher@sha256:26fccd7a99d973845df1193b46ebdcc6ab8dc5f6e6be319750c471fce1742d13 quay.io/k8scsi/csi-attacher:v1.2.0],SizeBytes:46226754,},ContainerImage{Names:[quay.io/k8scsi/csi-attacher@sha256:0aba670b4d9d6b2e720bbf575d733156c676b693ca26501235444490300db838 quay.io/k8scsi/csi-attacher:v1.1.0],SizeBytes:42839085,},ContainerImage{Names:[quay.io/k8scsi/csi-resizer@sha256:7d46fb6eb8b890dc546029d1565d502b4a1d974d33625c6ee2bc7991b77fc1a1 quay.io/k8scsi/csi-resizer:v0.2.0],SizeBytes:42817100,},ContainerImage{Names:[quay.io/k8scsi/csi-resizer@sha256:f315c9042e56def3c05c6b04fe79ec9da6d39ddc557ca365a76cf35964ea08b6 quay.io/k8scsi/csi-resizer:v0.1.0],SizeBytes:42623056,},ContainerImage{Names:[k8s.gcr.io/coredns@sha256:e83beb5e43f8513fa735e77ffc5859640baea30a882a11cc75c4c3244a737d3c k8s.gcr.io/coredns:1.5.0],SizeBytes:42488424,},ContainerImage{Names:[quay.io/k8scsi/hostpathplugin@sha256:b4826e492fc1762fceaf9726f41575ca0a4567864d3d235da874818de18039de quay.io/k8scsi/hostpathplugin:v1.2.0-rc5],SizeBytes:28761497,},ContainerImage{Names:[mcr.microsoft.com/k8s/flexvolume/keyvault-flexvolume@sha256:4fd30d43947d4a54fc89ead7985beecfd3c9b2a93a0655a373b1608ab90bd5af mcr.microsoft.com/k8s/flexvolume/keyvault-flexvolume:v0.0.7],SizeBytes:22909487,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/echoserver@sha256:e9ba514b896cdf559eef8788b66c2c3ee55f3572df617647b4b0d8b6bf81cf19 gcr.io/kubernetes-e2e-test-images/echoserver:2.2],SizeBytes:21692741,},ContainerImage{Names:[quay.io/k8scsi/mock-driver@sha256:e0eed916b7d970bad2b7d9875f9ad16932f987f0f3d91ec5d86da68b0b5cc9d1 quay.io/k8scsi/mock-driver:v2.1.0],SizeBytes:16226335,},ContainerImage{Names:[nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 nginx:1.14-alpine],SizeBytes:16032814,},ContainerImage{Names:[quay.io/k8scsi/csi-node-driver-registrar@sha256:13daf82fb99e951a4bff8ae5fc7c17c3a8fe7130be6400990d8f6076c32d4599 quay.io/k8scsi/csi-node-driver-registrar:v1.1.0],SizeBytes:15815995,},ContainerImage{Names:[quay.io/k8scsi/livenessprobe@sha256:dde617756e0f602adc566ab71fd885f1dad451ad3fb063ac991c95a2ff47aea5 quay.io/k8scsi/livenessprobe:v1.1.0],SizeBytes:14967303,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/dnsutils@sha256:2abeee84efb79c14d731966e034af33bf324d3b26ca28497555511ff094b3ddd gcr.io/kubernetes-e2e-test-images/dnsutils:1.1],SizeBytes:9349974,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nonewprivs@sha256:10066e9039219449fe3c81f38fe01928f87914150768ab81b62a468e51fa7411 gcr.io/kubernetes-e2e-test-images/nonewprivs:1.0],SizeBytes:6757579,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nautilus@sha256:33a732d4c42a266912a5091598a0f07653c9134db4b8d571690d8afd509e0bfc gcr.io/kubernetes-e2e-test-images/nautilus:1.0],SizeBytes:4753501,},ContainerImage{Names:[gcr.io/authenticated-image-pulling/alpine@sha256:7ff177862cb50c602bfe81f805969412e619c054a2bbead977d0c276988aa4a0 gcr.io/authenticated-image-pulling/alpine:3.7],SizeBytes:4206620,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/mounttest@sha256:c0bd6f0755f42af09a68c9a47fb993136588a76b3200ec305796b60d629d85d2 gcr.io/kubernetes-e2e-test-images/mounttest:1.0],SizeBytes:1563521,},ContainerImage{Names:[mcr.microsoft.com/k8s/flexvolume/blobfuse-flexvolume@sha256:23d8c6033f02a1ecad05127ebdc931bb871264228661bc122704b0974e4d9fdd mcr.microsoft.com/k8s/flexvolume/blobfuse-flexvolume:1.0.8],SizeBytes:1159025,},ContainerImage{Names:[busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 busybox:1.29],SizeBytes:1154361,},ContainerImage{Names:[k8s.gcr.io/pause-amd64@sha256:59eec8837a4d942cc19a52b8c09ea75121acc38114a2c68b98983ce9356b8610 k8s.gcr.io/pause@sha256:f78411e19d84a252e53bff71a4407a5686c46983a2c2eeed83929b888179acea k8s.gcr.io/pause-amd64:3.1 k8s.gcr.io/pause:3.1],SizeBytes:742472,},},VolumesInUse:[kubernetes.io/csi/csi-hostpath-volume-expand-6633^bb8f80e2-069a-11ea-af09-000d3ac2fa68],VolumesAttached:[]AttachedVolume{AttachedVolume{Name:kubernetes.io/csi/csi-hostpath-volume-expand-6633^bb8f80e2-069a-11ea-af09-000d3ac2fa68,DevicePath:,},},Config:nil,},}
Nov 14 04:54:05.022: INFO: Logging kubelet events for node k8s-agentpool-23171212-vmss000000
Nov 14 04:54:05.081: INFO: Logging pods the kubelet thinks is on node k8s-agentpool-23171212-vmss000000
Nov 14 04:54:05.164: INFO: blobfuse-flexvol-installer-6xhz6 started at 2019-11-14 04:40:49 +0000 UTC (0+1 container statuses recorded)
Nov 14 04:54:05.165: INFO: Container blobfuse-flexvol-installer ready: true, restart count 0
Nov 14 04:54:05.165: INFO: hostexec-k8s-agentpool-23171212-vmss000000 started at 2019-11-14 04:53:08 +0000 UTC (0+1 container statuses recorded)
Nov 14 04:54:05.165: INFO: Container agnhost ready: true, restart count 0
Nov 14 04:54:05.165: INFO: hostexec-k8s-agentpool-23171212-vmss000000 started at 2019-11-14 04:53:47 +0000 UTC (0+1 container statuses recorded)
Nov 14 04:54:05.165: INFO: Container agnhost ready: true, restart count 0
Nov 14 04:54:05.165: INFO: coredns-87f5d796-k7mr9 started at 2019-11-14 04:40:27 +0000 UTC (0+1 container statuses recorded)
Nov 14 04:54:05.165: INFO: Container coredns ready: true, restart count 0
Nov 14 04:54:05.165: INFO: csi-hostpath-resizer-0 started at 2019-11-14 04:53:17 +0000 UTC (0+1 container statuses recorded)
Nov 14 04:54:05.165: INFO: Container csi-resizer ready: true, restart count 0
Nov 14 04:54:05.165: INFO: csi-snapshotter-0 started at 2019-11-14 04:53:17 +0000 UTC (0+1 container statuses recorded)
Nov 14 04:54:05.165: INFO: Container csi-snapshotter ready: true, restart count 0
Nov 14 04:54:05.165: INFO: csi-hostpath-resizer-0 started at 2019-11-14 04:53:38 +0000 UTC (0+1 container statuses recorded)
Nov 14 04:54:05.165: INFO: Container csi-resizer ready: true, restart count 0
Nov 14 04:54:05.165: INFO: pod-subpath-test-local-preprovisionedpv-dr5j started at 2019-11-14 04:53:54 +0000 UTC (2+1 container statuses recorded)
Nov 14 04:54:05.165: INFO: Init container init-volume-local-preprovisionedpv-dr5j ready: true, restart count 0
Nov 14 04:54:05.165: INFO: Init container test-init-volume-local-preprovisionedpv-dr5j ready: true, restart count 0
Nov 14 04:54:05.165: INFO: Container test-container-subpath-local-preprovisionedpv-dr5j ready: false, restart count 0
Nov 14 04:54:05.165: INFO: keyvault-flexvolume-ljqsq started at 2019-11-14 04:40:49 +0000 UTC (0+1 container statuses recorded)
Nov 14 04:54:05.165: INFO: Container keyvault-flexvolume ready: true, restart count 0
Nov 14 04:54:05.165: INFO: kubernetes-dashboard-65966766b9-b8ps7 started at 2019-11-14 04:40:49 +0000 UTC (0+1 container statuses recorded)
Nov 14 04:54:05.165: INFO: Container kubernetes-dashboard ready: true, restart count 0
Nov 14 04:54:05.165: INFO: hostexec-k8s-agentpool-23171212-vmss000000 started at 2019-11-14 04:53:02 +0000 UTC (0+1 container statuses recorded)
Nov 14 04:54:05.165: INFO: Container agnhost ready: true, restart count 0
Nov 14 04:54:05.165: INFO: csi-hostpath-attacher-0 started at 2019-11-14 04:53:16 +0000 UTC (0+1 container statuses recorded)
Nov 14 04:54:05.165: INFO: Container csi-attacher ready: true, restart count 0
Nov 14 04:54:05.165: INFO: csi-hostpath-provisioner-0 started at 2019-11-14 04:50:53 +0000 UTC (0+1 container statuses recorded)
Nov 14 04:54:05.165: INFO: Container csi-provisioner ready: true, restart count 0
Nov 14 04:54:05.165: INFO: csi-hostpathplugin-0 started at 2019-11-14 04:53:36 +0000 UTC (0+3 container statuses recorded)
Nov 14 04:54:05.165: INFO: Container hostpath ready: true, restart count 0
Nov 14 04:54:05.165: INFO: Container liveness-probe ready: true, restart count 0
Nov 14 04:54:05.165: INFO: Container node-driver-registrar ready: true, restart count 0
Nov 14 04:54:05.165: INFO: csi-hostpath-provisioner-0 started at 2019-11-14 04:53:16 +0000 UTC (0+1 container statuses recorded)
Nov 14 04:54:05.165: INFO: Container csi-provisioner ready: true, restart count 0
Nov 14 04:54:05.165: INFO: ss2-1 started at 2019-11-14 04:53:27 +0000 UTC (0+1 container statuses recorded)
Nov 14 04:54:05.165: INFO: Container webserver ready: true, restart count 0
Nov 14 04:54:05.165: INFO: csi-snapshotter-0 started at 2019-11-14 04:51:54 +0000 UTC (0+1 container statuses recorded)
Nov 14 04:54:05.165: INFO: Container csi-snapshotter ready: true, restart count 0
Nov 14 04:54:05.165: INFO: csi-hostpathplugin-0 started at 2019-11-14 04:53:59 +0000 UTC (0+0 container statuses recorded)
Nov 14 04:54:05.165: INFO: exec-volume-test-local-preprovisionedpv-5wq6 started at 2019-11-14 04:53:38 +0000 UTC (0+1 container statuses recorded)
Nov 14 04:54:05.165: INFO: Container exec-container-local-preprovisionedpv-5wq6 ready: false, restart count 0
Nov 14 04:54:05.165: INFO: azure-ip-masq-agent-dgg69 started at 2019-11-14 04:40:27 +0000 UTC (0+1 container statuses recorded)
Nov 14 04:54:05.165: INFO: Container azure-ip-masq-agent ready: true, restart count 0
Nov 14 04:54:05.165: INFO: csi-hostpath-attacher-0 started at 2019-11-14 04:50:53 +0000 UTC (0+1 container statuses recorded)
Nov 14 04:54:05.165: INFO: Container csi-attacher ready: true, restart count 0
Nov 14 04:54:05.165: INFO: csi-hostpath-resizer-0 started at 2019-11-14 04:50:54 +0000 UTC (0+1 container statuses recorded)
Nov 14 04:54:05.165: INFO: Container csi-resizer ready: true, restart count 0
Nov 14 04:54:05.165: INFO: pvc-datasource-writer-7rbg4 started at 2019-11-14 04:50:54 +0000 UTC (0+1 container statuses recorded)
Nov 14 04:54:05.165: INFO: Container volume-tester ready: false, restart count 0
Nov 14 04:54:05.165: INFO: csi-hostpath-attacher-0 started at 2019-11-14 04:51:53 +0000 UTC (0+1 container statuses recorded)
Nov 14 04:54:05.165: INFO: Container csi-attacher ready: true, restart count 0
Nov 14 04:54:05.165: INFO: csi-hostpath-provisioner-0 started at 2019-11-14 04:51:54 +0000 UTC (0+1 container statuses recorded)
Nov 14 04:54:05.165: INFO: Container csi-provisioner ready: true, restart count 0
Nov 14 04:54:05.165: INFO: csi-hostpath-resizer-0 started at 2019-11-14 04:51:54 +0000 UTC (0+1 container statuses recorded)
Nov 14 04:54:05.165: INFO: Container csi-resizer ready: true, restart count 0
Nov 14 04:54:05.165: INFO: csi-hostpath-attacher-0 started at 2019-11-14 04:53:37 +0000 UTC (0+1 container statuses recorded)
Nov 14 04:54:05.165: INFO: Container csi-attacher ready: true, restart count 0
Nov 14 04:54:05.165: INFO: csi-hostpathplugin-0 started at 2019-11-14 04:54:04 +0000 UTC (0+0 container statuses recorded)
Nov 14 04:54:05.165: INFO: kube-proxy-cdq9f started at 2019-11-14 04:40:27 +0000 UTC (0+1 container statuses recorded)
Nov 14 04:54:05.165: INFO: Container kube-proxy ready: true, restart count 0
Nov 14 04:54:05.165: INFO: csi-snapshotter-0 started at 2019-11-14 04:53:38 +0000 UTC (0+1 container statuses recorded)
Nov 14 04:54:05.165: INFO: Container csi-snapshotter ready: true, restart count 0
Nov 14 04:54:05.165: INFO: security-context-06568e16-f019-4982-a45b-c9957222ee01 started at 2019-11-14 04:53:44 +0000 UTC (0+1 container statuses recorded)
Nov 14 04:54:05.165: INFO: Container write-pod ready: true, restart count 0
Nov 14 04:54:05.165: INFO: ss2-1 started at 2019-11-14 04:53:07 +0000 UTC (0+1 container statuses recorded)
Nov 14 04:54:05.165: INFO: Container webserver ready: true, restart count 0
Nov 14 04:54:05.165: INFO: csi-snapshotter-0 started at 2019-11-14 04:50:54 +0000 UTC (0+1 container statuses recorded)
Nov 14 04:54:05.165: INFO: Container csi-snapshotter ready: true, restart count 0
Nov 14 04:54:05.165: INFO: hostexec-k8s-agentpool-23171212-vmss000000 started at 2019-11-14 04:53:16 +0000 UTC (0+1 container statuses recorded)
Nov 14 04:54:05.165: INFO: Container agnhost ready: true, restart count 0
Nov 14 04:54:05.165: INFO: csi-hostpath-provisioner-0 started at 2019-11-14 04:53:37 +0000 UTC (0+1 container statuses recorded)
Nov 14 04:54:05.165: INFO: Container csi-provisioner ready: true, restart count 0
Nov 14 04:54:05.165: INFO: pod-subpath-test-local-preprovisionedpv-d6hv started at 2019-11-14 04:53:24 +0000 UTC (1+1 container statuses recorded)
Nov 14 04:54:05.165: INFO: Init container init-volume-local-preprovisionedpv-d6hv ready: true, restart count 0
Nov 14 04:54:05.165: INFO: Container test-container-subpath-local-preprovisionedpv-d6hv ready: false, restart count 0
Nov 14 04:54:05.165: INFO: csi-hostpathplugin-0 started at 2019-11-14 04:54:03 +0000 UTC (0+0 container statuses recorded)
W1114 04:54:05.221850 92624 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Nov 14 04:54:05.366: INFO: Latency metrics for node k8s-agentpool-23171212-vmss000000
Nov 14 04:54:05.366: INFO: Logging node info for node k8s-agentpool-23171212-vmss000001
Nov 14 04:54:05.426: INFO: Node Info: &Node{ObjectMeta:{k8s-agentpool-23171212-vmss000001 /api/v1/nodes/k8s-agentpool-23171212-vmss000001 e9c1f552-b95b-4548-9ecd-37a7f1925e75 28774 0 2019-11-14 04:40:09 +0000 UTC <nil> <nil> map[agentpool:agentpool beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:Standard_D4s_v3 beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:westus2 failure-domain.beta.kubernetes.io/zone:westus2-2 kubernetes.azure.com/cluster:kubetest-9c63b39e-0695-11ea-a4cc-c60aac250e75 kubernetes.azure.com/role:agent kubernetes.io/arch:amd64 kubernetes.io/hostname:k8s-agentpool-23171212-vmss000001 kubernetes.io/os:linux kubernetes.io/role:agent node-role.kubernetes.io/agent: storageprofile:managed storagetier:Premium_LRS] map[csi.volume.kubernetes.io/nodeid:{"csi-hostpath-ephemeral-6971":"k8s-agentpool-23171212-vmss000001","csi-hostpath-provisioning-3033":"k8s-agentpool-23171212-vmss000001","csi-hostpath-provisioning-3310":"k8s-agentpool-23171212-vmss000001","csi-hostpath-provisioning-4400":"k8s-agentpool-23171212-vmss000001","csi-hostpath-volume-expand-2485":"k8s-agentpool-23171212-vmss000001","csi-mock-csi-mock-volumes-3324":"csi-mock-csi-mock-volumes-3324","csi-mock-csi-mock-volumes-3770":"csi-mock-csi-mock-volumes-3770","csi-mock-csi-mock-volumes-9859":"csi-mock-csi-mock-volumes-9859"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] []},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUse_ExternalID:,ProviderID:azure:///subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-9c63b39e-0695-11ea-a4cc-c60aac250e75/providers/Microsoft.Compute/virtualMachineScaleSets/k8s-agentpool-23171212-vmss/virtualMachines/1,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{4 0} {<nil>} 4 DecimalSI},ephemeral-storage: {{31036776448 0} {<nil>} BinarySI},example.com/fakecpu: {{800 0} {<nil>} 800 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{16797569024 0} {<nil>} 16403876Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{4 0} {<nil>} 4 DecimalSI},ephemeral-storage: {{27933098757 0} {<nil>} 27933098757 DecimalSI},example.com/fakecpu: {{800 0} {<nil>} 800 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{16011137024 0} {<nil>} 15635876Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2019-11-14 04:40:48 +0000 UTC,LastTransitionTime:2019-11-14 04:40:48 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2019-11-14 04:53:43 +0000 UTC,LastTransitionTime:2019-11-14 04:40:09 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2019-11-14 04:53:43 +0000 UTC,LastTransitionTime:2019-11-14 04:40:09 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2019-11-14 04:53:43 +0000 UTC,LastTransitionTime:2019-11-14 04:40:09 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2019-11-14 04:53:43 +0000 UTC,LastTransitionTime:2019-11-14 04:40:22 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.248.0.5,},NodeAddress{Type:Hostname,Address:k8s-agentpool-23171212-vmss000001,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:639707efd7a74ac4bca6a608e99a6715,SystemUUID:CACA620B-0C7C-7040-A716-91F766CA5A2F,BootID:9fabe02f-4e56-4162-b5c5-2e2733911b4f,KernelVersion:4.15.0-1063-azure,OSImage:Ubuntu 16.04.6 LTS,ContainerRuntimeVersion:docker://3.0.6,KubeletVersion:v1.16.4-beta.0.1+d70a3ca08fe72a,KubeProxyVersion:v1.16.4-beta.0.1+d70a3ca08fe72a,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8sprow.azurecr.io/hyperkube-amd64@sha256:4c04f9ab0fa34bcbcb8ebfbced912f9b998c5d9c090fafdca92911d124fa339b k8sprow.azurecr.io/hyperkube-amd64:azure-e2e-1194831241233305600-197629b6],SizeBytes:604811790,},ContainerImage{Names:[quay.io/kubernetes_incubator/nfs-provisioner@sha256:df762117e3c891f2d2ddff46ecb0776ba1f9f3c44cfd7739b0683bcd7a7954a8 quay.io/kubernetes_incubator/nfs-provisioner:v2.2.2],SizeBytes:391772778,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa k8s.gcr.io/etcd:3.3.15],SizeBytes:246640776,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/volume/nfs@sha256:c2ad734346f608a5f7d69cfded93c4e8094069320657bd372d12ba21dea3ea71 gcr.io/kubernetes-e2e-test-images/volume/nfs:1.0],SizeBytes:225358913,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0],SizeBytes:195659796,},ContainerImage{Names:[httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 httpd:2.4.38-alpine],SizeBytes:123781643,},ContainerImage{Names:[gcr.io/kubernetes-helm/tiller@sha256:f6d8f4ab9ba993b5f5b60a6edafe86352eabe474ffeb84cb6c79b8866dce45d1 gcr.io/kubernetes-helm/tiller:v2.11.0],SizeBytes:71821984,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/sample-apiserver@sha256:1bafcc6fb1aa990b487850adba9cadc020e42d7905aa8a30481182a477ba24b0 gcr.io/kubernetes-e2e-test-images/sample-apiserver:1.10],SizeBytes:61365829,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/agnhost@sha256:4057a5580c7b59c4fe10d8ab2732c9dec35eea80fd41f7bafc7bd5acc7edf727 gcr.io/kubernetes-e2e-test-images/agnhost:2.6],SizeBytes:57345321,},ContainerImage{Names:[quay.io/k8scsi/csi-provisioner@sha256:0efcb424f1dde9b9fb11a1a14f2e48ab47e1c3f08bc3a929990dcfcb1f7ab34f quay.io/k8scsi/csi-provisioner:v1.4.0-rc1],SizeBytes:54431016,},ContainerImage{Names:[quay.io/k8scsi/csi-snapshotter@sha256:e3d3e742e32d00488fdb401045b9b1d033d7ca0ab6e760f77b24750fc95e5f70 quay.io/k8scsi/csi-snapshotter:v2.0.0-rc1],SizeBytes:51703561,},ContainerImage{Names:[k8s.gcr.io/ip-masq-agent-amd64@sha256:269e0fb9d53fd37f7a135d6a55ea265a67279ba218aa148323f015cf70167340 k8s.gcr.io/ip-masq-agent-amd64:v2.3.0],SizeBytes:50144412,},ContainerImage{Names:[quay.io/k8scsi/csi-attacher@sha256:26fccd7a99d973845df1193b46ebdcc6ab8dc5f6e6be319750c471fce1742d13 quay.io/k8scsi/csi-attacher:v1.2.0],SizeBytes:46226754,},ContainerImage{Names:[quay.io/k8scsi/csi-attacher@sha256:0aba670b4d9d6b2e720bbf575d733156c676b693ca26501235444490300db838 quay.io/k8scsi/csi-attacher:v1.1.0],SizeBytes:42839085,},ContainerImage{Names:[quay.io/k8scsi/csi-resizer@sha256:7d46fb6eb8b890dc546029d1565d502b4a1d974d33625c6ee2bc7991b77fc1a1 quay.io/k8scsi/csi-resizer:v0.2.0],SizeBytes:42817100,},ContainerImage{Names:[quay.io/k8scsi/csi-resizer@sha256:f315c9042e56def3c05c6b04fe79ec9da6d39ddc557ca365a76cf35964ea08b6 quay.io/k8scsi/csi-resizer:v0.1.0],SizeBytes:42623056,},ContainerImage{Names:[k8s.gcr.io/metrics-server-amd64@sha256:49a9f12f7067d11f42c803dbe61ed2c1299959ad85cb315b25ff7eef8e6b8892 k8s.gcr.io/metrics-server-amd64:v0.2.1],SizeBytes:42541759,},ContainerImage{Names:[gcr.io/google-containers/debian-base@sha256:6966a0aedd7592c18ff2dd803c08bd85780ee19f5e3a2e7cf908a4cd837afcde gcr.io/google-containers/debian-base:0.4.1],SizeBytes:42323657,},ContainerImage{Names:[redis@sha256:50899ea1ceed33fa03232f3ac57578a424faa1742c1ac9c7a7bdb95cdf19b858 redis:5.0.5-alpine],SizeBytes:29331594,},ContainerImage{Names:[quay.io/k8scsi/hostpathplugin@sha256:b4826e492fc1762fceaf9726f41575ca0a4567864d3d235da874818de18039de quay.io/k8scsi/hostpathplugin:v1.2.0-rc5],SizeBytes:28761497,},ContainerImage{Names:[mcr.microsoft.com/k8s/flexvolume/keyvault-flexvolume@sha256:4fd30d43947d4a54fc89ead7985beecfd3c9b2a93a0655a373b1608ab90bd5af mcr.microsoft.com/k8s/flexvolume/keyvault-flexvolume:v0.0.7],SizeBytes:22909487,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/echoserver@sha256:e9ba514b896cdf559eef8788b66c2c3ee55f3572df617647b4b0d8b6bf81cf19 gcr.io/kubernetes-e2e-test-images/echoserver:2.2],SizeBytes:21692741,},ContainerImage{Names:[quay.io/k8scsi/mock-driver@sha256:e0eed916b7d970bad2b7d9875f9ad16932f987f0f3d91ec5d86da68b0b5cc9d1 quay.io/k8scsi/mock-driver:v2.1.0],SizeBytes:16226335,},ContainerImage{Names:[nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 nginx:1.14-alpine],SizeBytes:16032814,},ContainerImage{Names:[quay.io/k8scsi/csi-node-driver-registrar@sha256:13daf82fb99e951a4bff8ae5fc7c17c3a8fe7130be6400990d8f6076c32d4599 quay.io/k8scsi/csi-node-driver-registrar:v1.1.0],SizeBytes:15815995,},ContainerImage{Names:[quay.io/k8scsi/livenessprobe@sha256:dde617756e0f602adc566ab71fd885f1dad451ad3fb063ac991c95a2ff47aea5 quay.io/k8scsi/livenessprobe:v1.1.0],SizeBytes:14967303,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/dnsutils@sha256:2abeee84efb79c14d731966e034af33bf324d3b26ca28497555511ff094b3ddd gcr.io/kubernetes-e2e-test-images/dnsutils:1.1],SizeBytes:9349974,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nonewprivs@sha256:10066e9039219449fe3c81f38fe01928f87914150768ab81b62a468e51fa7411 gcr.io/kubernetes-e2e-test-images/nonewprivs:1.0],SizeBytes:6757579,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nautilus@sha256:33a732d4c42a266912a5091598a0f07653c9134db4b8d571690d8afd509e0bfc gcr.io/kubernetes-e2e-test-images/nautilus:1.0],SizeBytes:4753501,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/test-webserver@sha256:7f93d6e32798ff28bc6289254d0c2867fe2c849c8e46edc50f8624734309812e gcr.io/kubernetes-e2e-test-images/test-webserver:1.0],SizeBytes:4732240,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/mounttest@sha256:c0bd6f0755f42af09a68c9a47fb993136588a76b3200ec305796b60d629d85d2 gcr.io/kubernetes-e2e-test-images/mounttest:1.0],SizeBytes:1563521,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/mounttest-user@sha256:17319ca525ee003681fccf7e8c6b1b910ff4f49b653d939ac7f9b6e7c463933d gcr.io/kubernetes-e2e-test-images/mounttest-user:1.0],SizeBytes:1450451,},ContainerImage{Names:[mcr.microsoft.com/k8s/flexvolume/blobfuse-flexvolume@sha256:23d8c6033f02a1ecad05127ebdc931bb871264228661bc122704b0974e4d9fdd mcr.microsoft.com/k8s/flexvolume/blobfuse-flexvolume:1.0.8],SizeBytes:1159025,},ContainerImage{Names:[busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 busybox:1.29],SizeBytes:1154361,},ContainerImage{Names:[k8s.gcr.io/pause-amd64@sha256:59eec8837a4d942cc19a52b8c09ea75121acc38114a2c68b98983ce9356b8610 k8s.gcr.io/pause@sha256:f78411e19d84a252e53bff71a4407a5686c46983a2c2eeed83929b888179acea k8s.gcr.io/pause-amd64:3.1 k8s.gcr.io/pause:3.1],SizeBytes:742472,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Nov 14 04:54:05.426: INFO: Logging kubelet events for node k8s-agentpool-23171212-vmss000001
Nov 14 04:54:05.485: INFO: Logging pods the kubelet thinks is on node k8s-agentpool-23171212-vmss000001
Nov 14 04:54:05.548: INFO: rs-pod1-6rq9f started at 2019-11-14 04:53:49 +0000 UTC (0+1 container statuses recorded)
Nov 14 04:54:05.548: INFO: Container pod1 ready: false, restart count 0
Nov 14 04:54:05.548: INFO: hostexec-k8s-agentpool-23171212-vmss000001 started at 2019-11-14 04:53:18 +0000 UTC (0+1 container statuses recorded)
Nov 14 04:54:05.548: INFO: Container agnhost ready: true, restart count 0
Nov 14 04:54:05.548: INFO: rs-pod1-qbt2h started at 2019-11-14 04:53:50 +0000 UTC (0+1 container statuses recorded)
Nov 14 04:54:05.548: INFO: Container pod1 ready: false, restart count 0
Nov 14 04:54:05.548: INFO: rs-pod1-qvw5b started at 2019-11-14 04:53:51 +0000 UTC (0+1 container statuses recorded)
Nov 14 04:54:05.548: INFO: Container pod1 ready: false, restart count 0
Nov 14 04:54:05.548: INFO: metadata-volume-81f3141a-e2db-4574-9386-0df8ae75e38d started at 2019-11-14 04:54:00 +0000 UTC (0+1 container statuses recorded)
Nov 14 04:54:05.548: INFO: Container client-container ready: false, restart count 0
Nov 14 04:54:05.548: INFO: pod-1c0b5786-d6cf-411c-b1ec-0ca9fade1994 started at 2019-11-14 04:53:55 +0000 UTC (0+1 container statuses recorded)
Nov 14 04:54:05.548: INFO: Container test-container ready: false, restart count 0
Nov 14 04:54:05.548: INFO: replace-1573707240-rjr5h started at 2019-11-14 04:54:02 +0000 UTC (0+1 container statuses recorded)
Nov 14 04:54:05.548: INFO: Container c ready: false, restart count 0
Nov 14 04:54:05.548: INFO: kube-proxy-ng7z8 started at 2019-11-14 04:40:27 +0000 UTC (0+1 container statuses recorded)
Nov 14 04:54:05.548: INFO: Container kube-proxy ready: true, restart count 0
Nov 14 04:54:05.548: INFO: ss2-0 started at 2019-11-14 04:52:44 +0000 UTC (0+1 container statuses recorded)
Nov 14 04:54:05.548: INFO: Container webserver ready: true, restart count 0
Nov 14 04:54:05.548: INFO: ss2-0 started at 2019-11-14 04:53:12 +0000 UTC (0+1 container statuses recorded)
Nov 14 04:54:05.548: INFO: Container webserver ready: true, restart count 0
Nov 14 04:54:05.548: INFO: hostexec-k8s-agentpool-23171212-vmss000001 started at 2019-11-14 04:53:08 +0000 UTC (0+1 container statuses recorded)
Nov 14 04:54:05.548: INFO: Container agnhost ready: true, restart count 0
Nov 14 04:54:05.548: INFO: downward-api-84a8a3d2-3d4f-420e-a571-fed734e255e2 started at 2019-11-14 04:53:27 +0000 UTC (0+1 container statuses recorded)
Nov 14 04:54:05.548: INFO: Container dapi-container ready: false, restart count 0
Nov 14 04:54:05.548: INFO: azure-ip-masq-agent-mcg7w started at 2019-11-14 04:40:26 +0000 UTC (0+1 container statuses recorded)
Nov 14 04:54:05.548: INFO: Container azure-ip-masq-agent ready: true, restart count 0
Nov 14 04:54:05.548: INFO: metrics-server-58ff8c5ddf-h7jqs started at 2019-11-14 04:40:50 +0000 UTC (0+1 container statuses recorded)
Nov 14 04:54:05.548: INFO: Container metrics-server ready: true, restart count 0
Nov 14 04:54:05.548: INFO: pod-subpath-test-configmap-8t8x started at 2019-11-14 04:53:53 +0000 UTC (0+1 container statuses recorded)
Nov 14 04:54:05.548: INFO: Container test-container-subpath-configmap-8t8x ready: false, restart count 0
Nov 14 04:54:05.548: INFO: busybox-host-aliasese1468a1f-ed82-40e1-ac46-33c91b10f88b started at 2019-11-14 04:53:23 +0000 UTC (0+1 container statuses recorded)
Nov 14 04:54:05.548: INFO: Container busybox-host-aliasese1468a1f-ed82-40e1-ac46-33c91b10f88b ready: true, restart count 0
Nov 14 04:54:05.548: INFO: external-provisioner-psrp2 started at 2019-11-14 04:51:42 +0000 UTC (0+1 container statuses recorded)
Nov 14 04:54:05.548: INFO: Container nfs-provisioner ready: true, restart count 0
Nov 14 04:54:05.548: INFO: local-injector started at 2019-11-14 04:53:24 +0000 UTC (0+1 container statuses recorded)
Nov 14 04:54:05.548: INFO: Container local-injector ready: true, restart count 0
Nov 14 04:54:05.548: INFO: pod-secrets-43072a86-22c1-4f43-af43-52a8e723aac1 started at 2019-11-14 04:52:16 +0000 UTC (0+3 container statuses recorded)
Nov 14 04:54:05.548: INFO: Container creates-volume-test ready: true, restart count 0
Nov 14 04:54:05.548: INFO: Container dels-volume-test ready: true, restart count 0
Nov 14 04:54:05.548: INFO: Container upds-volume-test ready: true, restart count 0
Nov 14 04:54:05.548: INFO: nfs-server started at 2019-11-14 04:51:59 +0000 UTC (0+1 container statuses recorded)
Nov 14 04:54:05.548: INFO: Container nfs-server ready: false, restart count 0
Nov 14 04:54:05.548: INFO: pod-handle-http-request started at 2019-11-14 04:53:32 +0000 UTC (0+1 container statuses recorded)
Nov 14 04:54:05.548: INFO: Container pod-handle-http-request ready: false, restart count 0
Nov 14 04:54:05.548: INFO: external-provisioner-86l4g started at 2019-11-14 04:53:36 +0000 UTC (0+1 container statuses recorded)
Nov 14 04:54:05.548: INFO: Container nfs-provisioner ready: false, restart count 0
Nov 14 04:54:05.548: INFO: blobfuse-flexvol-installer-ktdjj started at 2019-11-14 04:40:49 +0000 UTC (0+1 container statuses recorded)
Nov 14 04:54:05.548: INFO: Container blobfuse-flexvol-installer ready: true, restart count 0
Nov 14 04:54:05.548: INFO: tiller-deploy-7559b6b885-vkxml started at 2019-11-14 04:40:50 +0000 UTC (0+1 container statuses recorded)
Nov 14 04:54:05.548: INFO: Container tiller ready: true, restart count 0
Nov 14 04:54:05.548: INFO: ss2-2 started at 2019-11-14 04:53:32 +0000 UTC (0+1 container statuses recorded)
Nov 14 04:54:05.548: INFO: Container webserver ready: false, restart count 0
Nov 14 04:54:05.548: INFO: pod-subpath-test-local-preprovisionedpv-ptqj started at 2019-11-14 04:53:39 +0000 UTC (2+1 container statuses recorded)
Nov 14 04:54:05.548: INFO: Init container init-volume-local-preprovisionedpv-ptqj ready: false, restart count 0
Nov 14 04:54:05.548: INFO: Init container test-init-volume-local-preprovisionedpv-ptqj ready: false, restart count 0
Nov 14 04:54:05.548: INFO: Container test-container-subpath-local-preprovisionedpv-ptqj ready: false, restart count 0
Nov 14 04:54:05.548: INFO: liveness-7bade2fa-8e64-4f6d-9649-51e1d1a6d745 started at 2019-11-14 04:49:59 +0000 UTC (0+1 container statuses recorded)
Nov 14 04:54:05.548: INFO: Container liveness ready: true, restart count 0
Nov 14 04:54:05.548: INFO: ss2-2 started at 2019-11-14 04:53:41 +0000 UTC (0+1 container statuses recorded)
Nov 14 04:54:05.548: INFO: Container webserver ready: false, restart count 0
Nov 14 04:54:05.548: INFO: pod-submit-remove-07ef7285-6684-498b-950f-33ea439d63f8 started at 2019-11-14 04:53:19 +0000 UTC (0+1 container statuses recorded)
Nov 14 04:54:05.548: INFO: Container nginx ready: false, restart count 0
Nov 14 04:54:05.548: INFO: rs-pod1-h6c77 started at 2019-11-14 04:53:48 +0000 UTC (0+1 container statuses recorded)
Nov 14 04:54:05.548: INFO: Container pod1 ready: false, restart count 0
Nov 14 04:54:05.548: INFO: external-provisioner-wmbtv started at 2019-11-14 04:53:03 +0000 UTC (0+1 container statuses recorded)
Nov 14 04:54:05.548: INFO: Container nfs-provisioner ready: true, restart count 0
Nov 14 04:54:05.548: INFO: keyvault-flexvolume-2g62m started at 2019-11-14 04:40:49 +0000 UTC (0+1 container statuses recorded)
Nov 14 04:54:05.548: INFO: Container keyvault-flexvolume ready: true, restart count 0
Nov 14 04:54:05.548: INFO: downwardapi-volume-a91c6843-6386-4ee8-8f66-f8b61b038a21 started at 2019-11-14 04:53:42 +0000 UTC (0+1 container statuses recorded)
Nov 14 04:54:05.548: INFO: Container client-container ready: false, restart count 0
Nov 14 04:54:05.548: INFO: rs-pod1-zkjdq started at 2019-11-14 04:53:46 +0000 UTC (0+1 container statuses recorded)
Nov 14 04:54:05.548: INFO: Container pod1 ready: false, restart count 0
Nov 14 04:54:05.548: INFO: bin-falsee557e594-74c9-40fb-bc3d-b0eb2d920c80 started at 2019-11-14 04:53:54 +0000 UTC (0+1 container statuses recorded)
Nov 14 04:54:05.548: INFO: Container bin-falsee557e594-74c9-40fb-bc3d-b0eb2d920c80 ready: false, restart count 0
Nov 14 04:54:05.548: INFO: metadata-volume-c84ae3d5-97a5-4cb7-8fe3-5d5d666a05da started at 2019-11-14 04:54:00 +0000 UTC (0+1 container statuses recorded)
Nov 14 04:54:05.548: INFO: Container client-container ready: false, restart count 0
W1114 04:54:05.603654 92624 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Nov 14 04:54:07.534: INFO: Latency metrics for node k8s-agentpool-23171212-vmss000001
Nov 14 04:54:07.534: INFO: Logging node info for node k8s-master-23171212-vmss000000
Nov 14 04:54:07.590: INFO: Node Info: &Node{ObjectMeta:{k8s-master-23171212-vmss000000 /api/v1/nodes/k8s-master-23171212-vmss000000 6c9bb7ee-6dcf-4c6d-a8ad-0377f76a60f6 29063 0 2019-11-14 04:40:03 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:Standard_DS2_v2 beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:westus2 failure-domain.beta.kubernetes.io/zone:westus2-1 kubernetes.azure.com/cluster:kubetest-9c63b39e-0695-11ea-a4cc-c60aac250e75 kubernetes.azure.com/role:master kubernetes.io/arch:amd64 kubernetes.io/hostname:k8s-master-23171212-vmss000000 kubernetes.io/os:linux kubernetes.io/role:master node-role.kubernetes.io/master:] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] []},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUse_ExternalID:,ProviderID:azure:///subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-9c63b39e-0695-11ea-a4cc-c60aac250e75/providers/Microsoft.Compute/virtualMachineScaleSets/k8s-master-23171212-vmss/virtualMachines/0,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:true,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{31036776448 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7284887552 0} {<nil>} 7114148Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{27933098757 0} {<nil>} 27933098757 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{6498455552 0} {<nil>} 6346148Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2019-11-14 04:40:48 +0000 UTC,LastTransitionTime:2019-11-14 04:40:48 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2019-11-14 04:53:55 +0000 UTC,LastTransitionTime:2019-11-14 04:39:58 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2019-11-14 04:53:55 +0000 UTC,LastTransitionTime:2019-11-14 04:39:58 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2019-11-14 04:53:55 +0000 UTC,LastTransitionTime:2019-11-14 04:39:58 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2019-11-14 04:53:55 +0000 UTC,LastTransitionTime:2019-11-14 04:40:22 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.240.0.4,},NodeAddress{Type:Hostname,Address:k8s-master-23171212-vmss000000,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:813714caae2d48f4a9036e17505029ae,SystemUUID:A7C76EFE-4E2A-8042-A754-6642A667D859,BootID:245ff6cc-bfb4-4487-ac55-fb3813c9167c,KernelVersion:4.15.0-1063-azure,OSImage:Ubuntu 16.04.6 LTS,ContainerRuntimeVersion:docker://3.0.6,KubeletVersion:v1.16.4-beta.0.1+d70a3ca08fe72a,KubeProxyVersion:v1.16.4-beta.0.1+d70a3ca08fe72a,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8sprow.azurecr.io/hyperkube-amd64@sha256:4c04f9ab0fa34bcbcb8ebfbced912f9b998c5d9c090fafdca92911d124fa339b k8sprow.azurecr.io/hyperkube-amd64:azure-e2e-1194831241233305600-197629b6],SizeBytes:604811790,},ContainerImage{Names:[k8sprow.azurecr.io/azure-cloud-controller-manager@sha256:6fcb752760f3412a2cb10bce535ba4dfa8267081345fa1b5cbc7bb5126ce3437 k8sprow.azurecr.io/azure-cloud-controller-manager:1194831241233305600-d3e4a1cf],SizeBytes:92595467,},ContainerImage{Names:[k8s.gcr.io/kube-addon-manager-amd64@sha256:382c220b3531d9f95bf316a16b7282cc2ef929cd8a89a9dd3f5933edafc41a8e k8s.gcr.io/kube-addon-manager-amd64:v9.0.1],SizeBytes:83076194,},ContainerImage{Names:[k8s.gcr.io/ip-masq-agent-amd64@sha256:269e0fb9d53fd37f7a135d6a55ea265a67279ba218aa148323f015cf70167340 k8s.gcr.io/ip-masq-agent-amd64:v2.3.0],SizeBytes:50144412,},ContainerImage{Names:[k8s.gcr.io/pause-amd64@sha256:59eec8837a4d942cc19a52b8c09ea75121acc38114a2c68b98983ce9356b8610 k8s.gcr.io/pause-amd64:3.1],SizeBytes:742472,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Nov 14 04:54:07.590: INFO: Logging kubelet events for node k8s-master-23171212-vmss000000
Nov 14 04:54:07.648: INFO: Logging pods the kubelet thinks is on node k8s-master-23171212-vmss000000
Nov 14 04:54:07.706: INFO: cloud-controller-manager-k8s-master-23171212-vmss000000 started at 2019-11-14 04:39:51 +0000 UTC (0+1 container statuses recorded)
Nov 14 04:54:07.706: INFO: Container cloud-controller-manager ready: true, restart count 0
Nov 14 04:54:07.706: INFO: kube-addon-manager-k8s-master-23171212-vmss000000 started at 2019-11-14 04:39:52 +0000 UTC (0+1 container statuses recorded)
Nov 14 04:54:07.707: INFO: Container kube-addon-manager ready: true, restart count 0
Nov 14 04:54:07.707: INFO: kube-apiserver-k8s-master-23171212-vmss000000 started at 2019-11-14 04:39:52 +0000 UTC (0+1 container statuses recorded)
Nov 14 04:54:07.707: INFO: Container kube-apiserver ready: true, restart count 0
Nov 14 04:54:07.707: INFO: kube-controller-manager-k8s-master-23171212-vmss000000 started at 2019-11-14 04:39:52 +0000 UTC (0+1 container statuses recorded)
Nov 14 04:54:07.707: INFO: Container kube-controller-manager ready: true, restart count 0
Nov 14 04:54:07.707: INFO: azure-ip-masq-agent-q7rgb started at 2019-11-14 04:40:27 +0000 UTC (0+1 container statuses recorded)
Nov 14 04:54:07.707: INFO: Container azure-ip-masq-agent ready: true, restart count 0
Nov 14 04:54:07.707: INFO: kube-proxy-cpnbb started at 2019-11-14 04:40:28 +0000 UTC (0+1 container statuses recorded)
Nov 14 04:54:07.707: INFO: Container kube-proxy ready: true, restart count 0
Nov 14 04:54:07.707: INFO: kube-scheduler-k8s-master-23171212-vmss000000 started at 2019-11-14 04:39:51 +0000 UTC (0+1 container statuses recorded)
Nov 14 04:54:07.707: INFO: Container kube-scheduler ready: true, restart count 0
W1114 04:54:07.766170 92624 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Nov 14 04:54:07.888: INFO: Latency metrics for node k8s-master-23171212-vmss000000
Nov 14 04:54:07.888: INFO: Logging node info for node k8s-master-23171212-vmss000001
Nov 14 04:54:07.943: INFO: Node Info: &Node{ObjectMeta:{k8s-master-23171212-vmss000001 /api/v1/nodes/k8s-master-23171212-vmss000001 202620f8-2cc3-4eb6-b880-ef6d6d9fbccd 29086 0 2019-11-14 04:40:04 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:Standard_DS2_v2 beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:westus2 failure-domain.beta.kubernetes.io/zone:westus2-2 kubernetes.azure.com/cluster:kubetest-9c63b39e-0695-11ea-a4cc-c60aac250e75 kubernetes.azure.com/role:master kubernetes.io/arch:amd64 kubernetes.io/hostname:k8s-master-23171212-vmss000001 kubernetes.io/os:linux kubernetes.io/role:master node-role.kubernetes.io/master:] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] []},Spec:NodeSpec{PodCIDR:10.244.5.0/24,DoNotUse_ExternalID:,ProviderID:azure:///subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-9c63b39e-0695-11ea-a4cc-c60aac250e75/providers/Microsoft.Compute/virtualMachineScaleSets/k8s-master-23171212-vmss/virtualMachines/1,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:true,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[10.244.5.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{31036776448 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7284887552 0} {<nil>} 7114148Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{27933098757 0} {<nil>} 27933098757 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{6498455552 0} {<nil>} 6346148Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2019-11-14 04:40:48 +0000 UTC,LastTransitionTime:2019-11-14 04:40:48 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2019-11-14 04:53:57 +0000 UTC,LastTransitionTime:2019-11-14 04:39:58 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2019-11-14 04:53:57 +0000 UTC,LastTransitionTime:2019-11-14 04:39:58 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2019-11-14 04:53:57 +0000 UTC,LastTransitionTime:2019-11-14 04:39:58 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2019-11-14 04:53:57 +0000 UTC,LastTransitionTime:2019-11-14 04:40:22 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.240.0.5,},NodeAddress{Type:Hostname,Address:k8s-master-23171212-vmss000001,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:4cafe5635afe4ac8baa078419003bc32,SystemUUID:88981890-9531-334C-9D46-A02D5E4BD18D,BootID:6accdcbe-b0af-4be0-8f82-19833a9a5e2e,KernelVersion:4.15.0-1063-azure,OSImage:Ubuntu 16.04.6 LTS,ContainerRuntimeVersion:docker://3.0.6,KubeletVersion:v1.16.4-beta.0.1+d70a3ca08fe72a,KubeProxyVersion:v1.16.4-beta.0.1+d70a3ca08fe72a,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8sprow.azurecr.io/hyperkube-amd64@sha256:4c04f9ab0fa34bcbcb8ebfbced912f9b998c5d9c090fafdca92911d124fa339b k8sprow.azurecr.io/hyperkube-amd64:azure-e2e-1194831241233305600-197629b6],SizeBytes:604811790,},ContainerImage{Names:[k8sprow.azurecr.io/azure-cloud-controller-manager@sha256:6fcb752760f3412a2cb10bce535ba4dfa8267081345fa1b5cbc7bb5126ce3437 k8sprow.azurecr.io/azure-cloud-controller-manager:1194831241233305600-d3e4a1cf],SizeBytes:92595467,},ContainerImage{Names:[k8s.gcr.io/kube-addon-manager-amd64@sha256:382c220b3531d9f95bf316a16b7282cc2ef929cd8a89a9dd3f5933edafc41a8e k8s.gcr.io/kube-addon-manager-amd64:v9.0.1],SizeBytes:83076194,},ContainerImage{Names:[k8s.gcr.io/ip-masq-agent-amd64@sha256:269e0fb9d53fd37f7a135d6a55ea265a67279ba218aa148323f015cf70167340 k8s.gcr.io/ip-masq-agent-amd64:v2.3.0],SizeBytes:50144412,},ContainerImage{Names:[k8s.gcr.io/pause-amd64@sha256:59eec8837a4d942cc19a52b8c09ea75121acc38114a2c68b98983ce9356b8610 k8s.gcr.io/pause-amd64:3.1],SizeBytes:742472,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Nov 14 04:54:07.943: INFO: Logging kubelet events for node k8s-master-23171212-vmss000001
Nov 14 04:54:08.007: INFO: Logging pods the kubelet thinks is on node k8s-master-23171212-vmss000001
Nov 14 04:54:08.070: INFO: kube-scheduler-k8s-master-23171212-vmss000001 started at 2019-11-14 04:39:52 +0000 UTC (0+1 container statuses recorded)
Nov 14 04:54:08.070: INFO: Container kube-scheduler ready: true, restart count 0
Nov 14 04:54:08.070: INFO: cloud-controller-manager-k8s-master-23171212-vmss000001 started at 2019-11-14 04:39:52 +0000 UTC (0+1 container statuses recorded)
Nov 14 04:54:08.070: INFO: Container cloud-controller-manager ready: true, restart count 0
Nov 14 04:54:08.070: INFO: kube-addon-manager-k8s-master-23171212-vmss000001 started at 2019-11-14 04:39:52 +0000 UTC (0+1 container statuses recorded)
Nov 14 04:54:08.070: INFO: Container kube-addon-manager ready: true, restart count 0
Nov 14 04:54:08.070: INFO: kube-apiserver-k8s-master-23171212-vmss000001 started at 2019-11-14 04:39:52 +0000 UTC (0+1 container statuses recorded)
Nov 14 04:54:08.070: INFO: Container kube-apiserver ready: true, restart count 0
Nov 14 04:54:08.070: INFO: kube-controller-manager-k8s-master-23171212-vmss000001 started at 2019-11-14 04:39:52 +0000 UTC (0+1 container statuses recorded)
Nov 14 04:54:08.070: INFO: Container kube-controller-manager ready: true, restart count 0
Nov 14 04:54:08.070: INFO: azure-ip-masq-agent-dnl49 started at 2019-11-14 04:40:26 +0000 UTC (0+1 container statuses recorded)
Nov 14 04:54:08.070: INFO: Container azure-ip-masq-agent ready: true, restart count 0
Nov 14 04:54:08.070: INFO: kube-proxy-srv2s started at 2019-11-14 04:40:27 +0000 UTC (0+1 container statuses recorded)
Nov 14 04:54:08.070: INFO: Container kube-proxy ready: true, restart count 0
W1114 04:54:08.134588 92624 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Nov 14 04:54:08.263: INFO: Latency metrics for node k8s-master-23171212-vmss000001
Nov 14 04:54:08.263: INFO: Logging node info for node k8s-master-23171212-vmss000002
Nov 14 04:54:08.319: INFO: Node Info: &Node{ObjectMeta:{k8s-master-23171212-vmss000002 /api/v1/nodes/k8s-master-23171212-vmss000002 8eca3a9a-6fd5-4796-82bb-2f37c6fc30b7 28036 0 2019-11-14 04:41:04 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:Standard_DS2_v2 beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:westus2 failure-domain.beta.kubernetes.io/zone:westus2-1 kubernetes.azure.com/cluster:kubetest-9c63b39e-0695-11ea-a4cc-c60aac250e75 kubernetes.azure.com/role:master kubernetes.io/arch:amd64 kubernetes.io/hostname:k8s-master-23171212-vmss000002 kubernetes.io/os:linux kubernetes.io/role:master node-role.kubernetes.io/master:] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] []},Spec:NodeSpec{PodCIDR:10.244.6.0/24,DoNotUse_ExternalID:,ProviderID:azure:///subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-9c63b39e-0695-11ea-a4cc-c60aac250e75/providers/Microsoft.Compute/virtualMachineScaleSets/k8s-master-23171212-vmss/virtualMachines/2,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:true,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[10.244.6.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{31036776448 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7284883456 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{27933098757 0} {<nil>} 27933098757 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{6498451456 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2019-11-14 04:41:18 +0000 UTC,LastTransitionTime:2019-11-14 04:41:18 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2019-11-14 04:53:26 +0000 UTC,LastTransitionTime:2019-11-14 04:40:56 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2019-11-14 04:53:26 +0000 UTC,LastTransitionTime:2019-11-14 04:40:56 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2019-11-14 04:53:26 +0000 UTC,LastTransitionTime:2019-11-14 04:40:56 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2019-11-14 04:53:26 +0000 UTC,LastTransitionTime:2019-11-14 04:41:04 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.240.0.6,},NodeAddress{Type:Hostname,Address:k8s-master-23171212-vmss000002,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:eb5abe50949445b79334d994c94314f8,SystemUUID:E11F8710-4785-DA42-B98E-8E97145F92C7,BootID:8fe9e9b2-2b16-4895-91c7-dc676b577942,KernelVersion:4.15.0-1063-azure,OSImage:Ubuntu 16.04.6 LTS,ContainerRuntimeVersion:docker://3.0.6,KubeletVersion:v1.16.4-beta.0.1+d70a3ca08fe72a,KubeProxyVersion:v1.16.4-beta.0.1+d70a3ca08fe72a,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8sprow.azurecr.io/hyperkube-amd64@sha256:4c04f9ab0fa34bcbcb8ebfbced912f9b998c5d9c090fafdca92911d124fa339b k8sprow.azurecr.io/hyperkube-amd64:azure-e2e-1194831241233305600-197629b6],SizeBytes:604811790,},ContainerImage{Names:[k8sprow.azurecr.io/azure-cloud-controller-manager@sha256:6fcb752760f3412a2cb10bce535ba4dfa8267081345fa1b5cbc7bb5126ce3437 k8sprow.azurecr.io/azure-cloud-controller-manager:1194831241233305600-d3e4a1cf],SizeBytes:92595467,},ContainerImage{Names:[k8s.gcr.io/kube-addon-manager-amd64@sha256:382c220b3531d9f95bf316a16b7282cc2ef929cd8a89a9dd3f5933edafc41a8e k8s.gcr.io/kube-addon-manager-amd64:v9.0.1],SizeBytes:83076194,},ContainerImage{Names:[k8s.gcr.io/ip-masq-agent-amd64@sha256:269e0fb9d53fd37f7a135d6a55ea265a67279ba218aa148323f015cf70167340 k8s.gcr.io/ip-masq-agent-amd64:v2.3.0],SizeBytes:50144412,},ContainerImage{Names:[k8s.gcr.io/pause-amd64@sha256:59eec8837a4d942cc19a52b8c09ea75121acc38114a2c68b98983ce9356b8610 k8s.gcr.io/pause-amd64:3.1],SizeBytes:742472,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Nov 14 04:54:08.319: INFO: Logging kubelet events for node k8s-master-23171212-vmss000002
Nov 14 04:54:08.377: INFO: Logging pods the kubelet thinks is on node k8s-master-23171212-vmss000002
Nov 14 04:54:08.434: INFO: azure-ip-masq-agent-mw27f started at 2019-11-14 04:41:05 +0000 UTC (0+1 container statuses recorded)
Nov 14 04:54:08.434: INFO: Container azure-ip-masq-agent ready: true, restart count 0
Nov 14 04:54:08.434: INFO: kube-proxy-4vs6q started at 2019-11-14 04:41:06 +0000 UTC (0+1 container statuses recorded)
Nov 14 04:54:08.434: INFO: Container kube-proxy ready: true, restart count 0
Nov 14 04:54:08.434: INFO: kube-addon-manager-k8s-master-23171212-vmss000002 started at 2019-11-14 04:40:52 +0000 UTC (0+1 container statuses recorded)
Nov 14 04:54:08.434: INFO: Container kube-addon-manager ready: true, restart count 0
Nov 14 04:54:08.434: INFO: kube-apiserver-k8s-master-23171212-vmss000002 started at 2019-11-14 04:40:52 +0000 UTC (0+1 container statuses recorded)
Nov 14 04:54:08.434: INFO: Container kube-apiserver ready: true, restart count 0
Nov 14 04:54:08.434: INFO: kube-controller-manager-k8s-master-23171212-vmss000002 started at 2019-11-14 04:40:52 +0000 UTC (0+1 container statuses recorded)
Nov 14 04:54:08.434: INFO: Container kube-controller-manager ready: true, restart count 0
Nov 14 04:54:08.434: INFO: kube-scheduler-k8s-master-23171212-vmss000002 started at 2019-11-14 04:40:52 +0000 UTC (0+1 container statuses recorded)
Nov 14 04:54:08.434: INFO: Container kube-scheduler ready: true, restart count 0
Nov 14 04:54:08.434: INFO: cloud-controller-manager-k8s-master-23171212-vmss000002 started at 2019-11-14 04:40:53 +0000 UTC (0+1 container statuses recorded)
Nov 14 04:54:08.434: INFO: Container cloud-controller-manager ready: true, restart count 0
W1114 04:54:08.490532 92624 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Nov 14 04:54:08.611: INFO: Latency metrics for node k8s-master-23171212-vmss000002
Nov 14 04:54:08.611: INFO: Logging node info for node k8s-master-23171212-vmss000003
Nov 14 04:54:08.666: INFO: Node Info: &Node{ObjectMeta:{k8s-master-23171212-vmss000003 /api/v1/nodes/k8s-master-23171212-vmss000003 b1a400e7-f6ff-4241-9175-cd8bd70dd11a 29068 0 2019-11-14 04:40:03 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:Standard_DS2_v2 beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:westus2 failure-domain.beta.kubernetes.io/zone:westus2-2 kubernetes.azure.com/cluster:kubetest-9c63b39e-0695-11ea-a4cc-c60aac250e75 kubernetes.azure.com/role:master kubernetes.io/arch:amd64 kubernetes.io/hostname:k8s-master-23171212-vmss000003 kubernetes.io/os:linux kubernetes.io/role:master node-role.kubernetes.io/master:] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] []},Spec:NodeSpec{PodCIDR:10.244.3.0/24,DoNotUse_ExternalID:,ProviderID:azure:///subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-9c63b39e-0695-11ea-a4cc-c60aac250e75/providers/Microsoft.Compute/virtualMachineScaleSets/k8s-master-23171212-vmss/virtualMachines/3,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:true,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[10.244.3.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{31036776448 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7284887552 0} {<nil>} 7114148Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{27933098757 0} {<nil>} 27933098757 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{6498455552 0} {<nil>} 6346148Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2019-11-14 04:40:48 +0000 UTC,LastTransitionTime:2019-11-14 04:40:48 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2019-11-14 04:53:55 +0000 UTC,LastTransitionTime:2019-11-14 04:39:59 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2019-11-14 04:53:55 +0000 UTC,LastTransitionTime:2019-11-14 04:39:59 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2019-11-14 04:53:55 +0000 UTC,LastTransitionTime:2019-11-14 04:39:59 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2019-11-14 04:53:55 +0000 UTC,LastTransitionTime:2019-11-14 04:40:22 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.240.0.7,},NodeAddress{Type:Hostname,Address:k8s-master-23171212-vmss000003,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:effe7f682034467995d1db3ee85a4a38,SystemUUID:2073A143-352C-D241-B189-4A1DCC64C62C,BootID:6c95e89b-c056-494f-b817-6494fc9fd635,KernelVersion:4.15.0-1063-azure,OSImage:Ubuntu 16.04.6 LTS,ContainerRuntimeVersion:docker://3.0.6,KubeletVersion:v1.16.4-beta.0.1+d70a3ca08fe72a,KubeProxyVersion:v1.16.4-beta.0.1+d70a3ca08fe72a,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8sprow.azurecr.io/hyperkube-amd64@sha256:4c04f9ab0fa34bcbcb8ebfbced912f9b998c5d9c090fafdca92911d124fa339b k8sprow.azurecr.io/hyperkube-amd64:azure-e2e-1194831241233305600-197629b6],SizeBytes:604811790,},ContainerImage{Names:[k8sprow.azurecr.io/azure-cloud-controller-manager@sha256:6fcb752760f3412a2cb10bce535ba4dfa8267081345fa1b5cbc7bb5126ce3437 k8sprow.azurecr.io/azure-cloud-controller-manager:1194831241233305600-d3e4a1cf],SizeBytes:92595467,},ContainerImage{Names:[k8s.gcr.io/kube-addon-manager-amd64@sha256:382c220b3531d9f95bf316a16b7282cc2ef929cd8a89a9dd3f5933edafc41a8e k8s.gcr.io/kube-addon-manager-amd64:v9.0.1],SizeBytes:83076194,},ContainerImage{Names:[k8s.gcr.io/ip-masq-agent-amd64@sha256:269e0fb9d53fd37f7a135d6a55ea265a67279ba218aa148323f015cf70167340 k8s.gcr.io/ip-masq-agent-amd64:v2.3.0],SizeBytes:50144412,},ContainerImage{Names:[k8s.gcr.io/pause-amd64@sha256:59eec8837a4d942cc19a52b8c09ea75121acc38114a2c68b98983ce9356b8610 k8s.gcr.io/pause-amd64:3.1],SizeBytes:742472,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Nov 14 04:54:08.666: INFO: Logging kubelet events for node k8s-master-23171212-vmss000003
Nov 14 04:54:08.725: INFO: Logging pods the kubelet thinks is on node k8s-master-23171212-vmss000003
Nov 14 04:54:08.783: INFO: kube-scheduler-k8s-master-23171212-vmss000003 started at 2019-11-14 04:39:52 +0000 UTC (0+1 container statuses recorded)
Nov 14 04:54:08.783: INFO: Container kube-scheduler ready: true, restart count 0
Nov 14 04:54:08.783: INFO: azure-ip-masq-agent-4s5bk started at 2019-11-14 04:40:27 +0000 UTC (0+1 container statuses recorded)
Nov 14 04:54:08.783: INFO: Container azure-ip-masq-agent ready: true, restart count 0
Nov 14 04:54:08.783: INFO: kube-proxy-hrqtx started at 2019-11-14 04:40:27 +0000 UTC (0+1 container statuses recorded)
Nov 14 04:54:08.783: INFO: Container kube-proxy ready: true, restart count 0
Nov 14 04:54:08.783: INFO: cloud-controller-manager-k8s-master-23171212-vmss000003 started at 2019-11-14 04:39:52 +0000 UTC (0+1 container statuses recorded)
Nov 14 04:54:08.783: INFO: Container cloud-controller-manager ready: true, restart count 0
Nov 14 04:54:08.783: INFO: kube-addon-manager-k8s-master-23171212-vmss000003 started at 2019-11-14 04:39:52 +0000 UTC (0+1 container statuses recorded)
Nov 14 04:54:08.783: INFO: Container kube-addon-manager ready: true, restart count 0
Nov 14 04:54:08.783: INFO: kube-apiserver-k8s-master-23171212-vmss000003 started at 2019-11-14 04:39:52 +0000 UTC (0+1 container statuses recorded)
Nov 14 04:54:08.783: INFO: Container kube-apiserver ready: true, restart count 0
Nov 14 04:54:08.783: INFO: kube-controller-manager-k8s-master-23171212-vmss000003 started at 2019-11-14 04:39:52 +0000 UTC (0+1 container statuses recorded)
Nov 14 04:54:08.783: INFO: Container kube-controller-manager ready: true, restart count 0
04:54:08.839888 92624 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Nov 14 04:54:08.970: INFO: Latency metrics for node k8s-master-23171212-vmss000003 Nov 14 04:54:08.970: INFO: Logging node info for node k8s-master-23171212-vmss000004 Nov 14 04:54:09.025: INFO: Node Info: &Node{ObjectMeta:{k8s-master-23171212-vmss000004 /api/v1/nodes/k8s-master-23171212-vmss000004 25a9993c-54fa-45cc-9da7-66c66cafa30f 29165 0 2019-11-14 04:40:08 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:Standard_DS2_v2 beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:westus2 failure-domain.beta.kubernetes.io/zone:westus2-1 kubernetes.azure.com/cluster:kubetest-9c63b39e-0695-11ea-a4cc-c60aac250e75 kubernetes.azure.com/role:master kubernetes.io/arch:amd64 kubernetes.io/hostname:k8s-master-23171212-vmss000004 kubernetes.io/os:linux kubernetes.io/role:master node-role.kubernetes.io/master:] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] []},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUse_ExternalID:,ProviderID:azure:///subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-9c63b39e-0695-11ea-a4cc-c60aac250e75/providers/Microsoft.Compute/virtualMachineScaleSets/k8s-master-23171212-vmss/virtualMachines/4,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:true,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{31036776448 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7284887552 0} {<nil>} 7114148Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{27933098757 0} {<nil>} 27933098757 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{6498455552 0} {<nil>} 6346148Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2019-11-14 04:40:48 +0000 UTC,LastTransitionTime:2019-11-14 04:40:48 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2019-11-14 04:54:01 +0000 UTC,LastTransitionTime:2019-11-14 04:40:05 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2019-11-14 04:54:01 +0000 UTC,LastTransitionTime:2019-11-14 04:40:05 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2019-11-14 04:54:01 +0000 UTC,LastTransitionTime:2019-11-14 04:40:05 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2019-11-14 04:54:01 +0000 UTC,LastTransitionTime:2019-11-14 04:40:22 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.240.0.8,},NodeAddress{Type:Hostname,Address:k8s-master-23171212-vmss000004,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:ab6b205a70ea45b1b28b801e68a4ba84,SystemUUID:65406178-5013-644C-AD46-D7BC6F0DD7BF,BootID:e6b05928-9970-49a5-bd51-149982b32750,KernelVersion:4.15.0-1063-azure,OSImage:Ubuntu 16.04.6 LTS,ContainerRuntimeVersion:docker://3.0.6,KubeletVersion:v1.16.4-beta.0.1+d70a3ca08fe72a,KubeProxyVersion:v1.16.4-beta.0.1+d70a3ca08fe72a,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8sprow.azurecr.io/hyperkube-amd64@sha256:4c04f9ab0fa34bcbcb8ebfbced912f9b998c5d9c090fafdca92911d124fa339b k8sprow.azurecr.io/hyperkube-amd64:azure-e2e-1194831241233305600-197629b6],SizeBytes:604811790,},ContainerImage{Names:[k8sprow.azurecr.io/azure-cloud-controller-manager@sha256:6fcb752760f3412a2cb10bce535ba4dfa8267081345fa1b5cbc7bb5126ce3437 k8sprow.azurecr.io/azure-cloud-controller-manager:1194831241233305600-d3e4a1cf],SizeBytes:92595467,},ContainerImage{Names:[k8s.gcr.io/kube-addon-manager-amd64@sha256:382c220b3531d9f95bf316a16b7282cc2ef929cd8a89a9dd3f5933edafc41a8e k8s.gcr.io/kube-addon-manager-amd64:v9.0.1],SizeBytes:83076194,},ContainerImage{Names:[k8s.gcr.io/ip-masq-agent-amd64@sha256:269e0fb9d53fd37f7a135d6a55ea265a67279ba218aa148323f015cf70167340 k8s.gcr.io/ip-masq-agent-amd64:v2.3.0],SizeBytes:50144412,},ContainerImage{Names:[k8s.gcr.io/pause-amd64@sha256:59eec8837a4d942cc19a52b8c09ea75121acc38114a2c68b98983ce9356b8610 k8s.gcr.io/pause-amd64:3.1],SizeBytes:742472,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Nov 14 04:54:09.025: INFO: Logging kubelet events for node k8s-master-23171212-vmss000004 Nov 14 04:54:09.083: INFO: Logging pods the kubelet thinks is on node k8s-master-23171212-vmss000004 Nov 14 04:54:09.141: INFO: azure-ip-masq-agent-47pzk started at 2019-11-14 04:40:26 +0000 UTC (0+1 container statuses recorded) Nov 14 04:54:09.141: INFO: Container azure-ip-masq-agent ready: true, restart count 0 Nov 14 04:54:09.141: INFO: kube-proxy-47vmd started at 2019-11-14 04:40:27 +0000 UTC (0+1 container statuses recorded) Nov 14 04:54:09.141: INFO: Container kube-proxy ready: true, restart count 0 Nov 14 04:54:09.141: INFO: kube-scheduler-k8s-master-23171212-vmss000004 started at 2019-11-14 04:39:52 +0000 UTC (0+1 container statuses recorded) Nov 14 04:54:09.141: INFO: Container kube-scheduler ready: true, restart count 0 Nov 14 04:54:09.141: INFO: cloud-controller-manager-k8s-master-23171212-vmss000004 started at 2019-11-14 04:39:52 +0000 UTC (0+1 container statuses recorded) Nov 14 04:54:09.141: INFO: Container cloud-controller-manager ready: true, restart count 0 Nov 14 04:54:09.141: INFO: kube-addon-manager-k8s-master-23171212-vmss000004 started at 2019-11-14 04:39:52 +0000 UTC (0+1 container statuses recorded) Nov 14 04:54:09.142: INFO: Container kube-addon-manager ready: true, restart count 0 Nov 14 04:54:09.142: INFO: kube-apiserver-k8s-master-23171212-vmss000004 started at 2019-11-14 04:39:52 +0000 UTC (0+1 container statuses recorded) Nov 14 04:54:09.142: INFO: Container kube-apiserver ready: true, restart count 0 Nov 14 04:54:09.142: INFO: kube-controller-manager-k8s-master-23171212-vmss000004 started at 2019-11-14 04:39:52 +0000 UTC (0+1 container statuses recorded) Nov 14 04:54:09.142: INFO: Container kube-controller-manager ready: true, restart count 0 W1114 
04:54:09.197836 92624 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Nov 14 04:54:09.325: INFO: Latency metrics for node k8s-master-23171212-vmss000004 Nov 14 04:54:09.325: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-3063" for this suite. Nov 14 04:54:53.570: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Nov 14 04:54:55.399: INFO: namespace pods-3063 deletion completed in 46.016748419s
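The per-node dumps above ("Logging pods the kubelet thinks is on node ...") come from the e2e framework's debug helpers, which list every pod bound to a node and print each container's readiness and restart count. The same listing can be reproduced outside the suite with client-go by filtering on spec.nodeName; a minimal sketch, assuming the same kubeconfig path the suite logs and using one of the master nodes above as the target (both are illustrative, not part of the framework's own code):

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a clientset from a kubeconfig (path is illustrative).
	config, err := clientcmd.BuildConfigFromFlags("", "/workspace/aks287781815/kubeconfig/kubeconfig.westus2.json")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(config)

	// List every pod bound to the node, across all namespaces, the way the
	// per-node debug dump does.
	pods, err := client.CoreV1().Pods(metav1.NamespaceAll).List(context.TODO(), metav1.ListOptions{
		FieldSelector: "spec.nodeName=k8s-master-23171212-vmss000002",
	})
	if err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		for _, cs := range p.Status.ContainerStatuses {
			fmt.Printf("%s: Container %s ready: %v, restart count %d\n",
				p.Name, cs.Name, cs.Ready, cs.RestartCount)
		}
	}
}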
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-scheduling\]\sPreemptionExecutionPath\sruns\sReplicaSets\sto\sverify\spreemption\srunning\spath$'
test/e2e/scheduling/preemption.go:345 Nov 14 04:54:36.481: Unexpected error: <*errors.errorString | 0xc002659790>: { s: "replicaset \"rs-pod1\" never had desired number of .status.availableReplicas", } replicaset "rs-pod1" never had desired number of .status.availableReplicas occurred test/e2e/scheduling/preemption.go:510 (from junit_02.xml)
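The FailedScheduling events in the log below cite "Insufficient example.com/fakecpu": PreemptionExecutionPath schedules its ReplicaSet pods against an extended resource that is advertised on a single node (the node info for k8s-agentpool-23171212-vmss000001 further down shows example.com/fakecpu: 800 in both Capacity and Allocatable). Extended resources like this are added by JSON-patching the node's status subresource. A hedged sketch of that step with client-go, assuming the same kubeconfig path and node name seen in this log (this is one way to do it, not claimed to be the test's exact implementation):

package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/workspace/aks287781815/kubeconfig/kubeconfig.westus2.json")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(config)

	// JSON-patch the node's status subresource; "~1" escapes the "/" in the
	// extended-resource name per RFC 6901.
	patch := []byte(`[{"op": "add", "path": "/status/capacity/example.com~1fakecpu", "value": "800"}]`)
	_, err = client.CoreV1().Nodes().Patch(context.TODO(),
		"k8s-agentpool-23171212-vmss000001",
		types.JSONPatchType, patch, metav1.PatchOptions{}, "status")
	if err != nil {
		panic(err)
	}
}

With the resource advertised only on that node, pods requesting example.com/fakecpu can land nowhere else, which matches the events: all five rs-pod1 replicas are assigned to k8s-agentpool-23171212-vmss000001 while the other six nodes fail the node selector and all seven initially report Insufficient example.com/fakecpu.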
[BeforeEach] [sig-scheduling] PreemptionExecutionPath test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Nov 14 04:53:21.069: INFO: >>> kubeConfig: /workspace/aks287781815/kubeconfig/kubeconfig.westus2.json STEP: Building a namespace api object, basename sched-preemption-path STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in sched-preemption-path-2196 STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] PreemptionExecutionPath test/e2e/scheduling/preemption.go:302 STEP: Finding an available node STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. Nov 14 04:53:35.849: INFO: found a healthy node: k8s-agentpool-23171212-vmss000001 [It] runs ReplicaSets to verify preemption running path test/e2e/scheduling/preemption.go:345 Nov 14 04:54:36.481: FAIL: Unexpected error: <*errors.errorString | 0xc002659790>: { s: "replicaset \"rs-pod1\" never had desired number of .status.availableReplicas", } replicaset "rs-pod1" never had desired number of .status.availableReplicas occurred [AfterEach] [sig-scheduling] PreemptionExecutionPath test/e2e/framework/framework.go:152 STEP: Collecting events from namespace "sched-preemption-path-2196". STEP: Found 43 events. Nov 14 04:54:36.590: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for rs-pod1-6rq9f: {default-scheduler } FailedScheduling: 0/7 nodes are available: 6 node(s) didn't match node selector, 7 Insufficient example.com/fakecpu. Nov 14 04:54:36.590: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for rs-pod1-6rq9f: {default-scheduler } Scheduled: Successfully assigned sched-preemption-path-2196/rs-pod1-6rq9f to k8s-agentpool-23171212-vmss000001 Nov 14 04:54:36.590: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for rs-pod1-6rq9f: {default-scheduler } FailedScheduling: 0/7 nodes are available: 6 node(s) didn't match node selector, 7 Insufficient example.com/fakecpu. Nov 14 04:54:36.590: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for rs-pod1-h6c77: {default-scheduler } FailedScheduling: 0/7 nodes are available: 6 node(s) didn't match node selector, 7 Insufficient example.com/fakecpu. Nov 14 04:54:36.590: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for rs-pod1-h6c77: {default-scheduler } Scheduled: Successfully assigned sched-preemption-path-2196/rs-pod1-h6c77 to k8s-agentpool-23171212-vmss000001 Nov 14 04:54:36.590: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for rs-pod1-h6c77: {default-scheduler } FailedScheduling: 0/7 nodes are available: 6 node(s) didn't match node selector, 7 Insufficient example.com/fakecpu. Nov 14 04:54:36.590: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for rs-pod1-qbt2h: {default-scheduler } Scheduled: Successfully assigned sched-preemption-path-2196/rs-pod1-qbt2h to k8s-agentpool-23171212-vmss000001 Nov 14 04:54:36.590: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for rs-pod1-qbt2h: {default-scheduler } FailedScheduling: 0/7 nodes are available: 6 node(s) didn't match node selector, 7 Insufficient example.com/fakecpu. Nov 14 04:54:36.590: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for rs-pod1-qbt2h: {default-scheduler } FailedScheduling: 0/7 nodes are available: 6 node(s) didn't match node selector, 7 Insufficient example.com/fakecpu. 
Nov 14 04:54:36.590: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for rs-pod1-qvw5b: {default-scheduler } FailedScheduling: 0/7 nodes are available: 6 node(s) didn't match node selector, 7 Insufficient example.com/fakecpu. Nov 14 04:54:36.590: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for rs-pod1-qvw5b: {default-scheduler } Scheduled: Successfully assigned sched-preemption-path-2196/rs-pod1-qvw5b to k8s-agentpool-23171212-vmss000001 Nov 14 04:54:36.590: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for rs-pod1-qvw5b: {default-scheduler } FailedScheduling: 0/7 nodes are available: 6 node(s) didn't match node selector, 7 Insufficient example.com/fakecpu. Nov 14 04:54:36.590: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for rs-pod1-zkjdq: {default-scheduler } Scheduled: Successfully assigned sched-preemption-path-2196/rs-pod1-zkjdq to k8s-agentpool-23171212-vmss000001 Nov 14 04:54:36.590: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for rs-pod1-zkjdq: {default-scheduler } FailedScheduling: 0/7 nodes are available: 6 node(s) didn't match node selector, 7 Insufficient example.com/fakecpu. Nov 14 04:54:36.590: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for rs-pod1-zkjdq: {default-scheduler } FailedScheduling: 0/7 nodes are available: 6 node(s) didn't match node selector, 7 Insufficient example.com/fakecpu. Nov 14 04:54:36.590: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for without-label: {default-scheduler } Scheduled: Successfully assigned sched-preemption-path-2196/without-label to k8s-agentpool-23171212-vmss000001 Nov 14 04:54:36.590: INFO: At 2019-11-14 04:53:23 +0000 UTC - event for without-label: {kubelet k8s-agentpool-23171212-vmss000001} Pulling: Pulling image "k8s.gcr.io/pause:3.1" Nov 14 04:54:36.590: INFO: At 2019-11-14 04:53:25 +0000 UTC - event for without-label: {kubelet k8s-agentpool-23171212-vmss000001} Pulled: Successfully pulled image "k8s.gcr.io/pause:3.1" Nov 14 04:54:36.590: INFO: At 2019-11-14 04:53:26 +0000 UTC - event for without-label: {kubelet k8s-agentpool-23171212-vmss000001} Created: Created container without-label Nov 14 04:54:36.590: INFO: At 2019-11-14 04:53:26 +0000 UTC - event for without-label: {kubelet k8s-agentpool-23171212-vmss000001} Started: Started container without-label Nov 14 04:54:36.590: INFO: At 2019-11-14 04:53:36 +0000 UTC - event for rs-pod1: {replicaset-controller } SuccessfulCreate: Created pod: rs-pod1-6rq9f Nov 14 04:54:36.590: INFO: At 2019-11-14 04:53:36 +0000 UTC - event for rs-pod1: {replicaset-controller } SuccessfulCreate: Created pod: rs-pod1-qvw5b Nov 14 04:54:36.590: INFO: At 2019-11-14 04:53:36 +0000 UTC - event for rs-pod1: {replicaset-controller } SuccessfulCreate: Created pod: rs-pod1-h6c77 Nov 14 04:54:36.590: INFO: At 2019-11-14 04:53:36 +0000 UTC - event for rs-pod1: {replicaset-controller } SuccessfulCreate: Created pod: rs-pod1-qbt2h Nov 14 04:54:36.590: INFO: At 2019-11-14 04:53:36 +0000 UTC - event for rs-pod1: {replicaset-controller } SuccessfulCreate: Created pod: rs-pod1-zkjdq Nov 14 04:54:36.590: INFO: At 2019-11-14 04:53:36 +0000 UTC - event for without-label: {kubelet k8s-agentpool-23171212-vmss000001} Killing: Stopping container without-label Nov 14 04:54:36.590: INFO: At 2019-11-14 04:54:15 +0000 UTC - event for rs-pod1-6rq9f: {kubelet k8s-agentpool-23171212-vmss000001} Pulling: Pulling image "k8s.gcr.io/pause:3.1" Nov 14 04:54:36.590: INFO: At 2019-11-14 04:54:16 +0000 UTC - event for rs-pod1-6rq9f: {kubelet k8s-agentpool-23171212-vmss000001} Pulled: Successfully pulled image 
"k8s.gcr.io/pause:3.1" Nov 14 04:54:36.590: INFO: At 2019-11-14 04:54:17 +0000 UTC - event for rs-pod1-h6c77: {kubelet k8s-agentpool-23171212-vmss000001} Pulling: Pulling image "k8s.gcr.io/pause:3.1" Nov 14 04:54:36.590: INFO: At 2019-11-14 04:54:17 +0000 UTC - event for rs-pod1-h6c77: {kubelet k8s-agentpool-23171212-vmss000001} Pulled: Successfully pulled image "k8s.gcr.io/pause:3.1" Nov 14 04:54:36.590: INFO: At 2019-11-14 04:54:17 +0000 UTC - event for rs-pod1-zkjdq: {kubelet k8s-agentpool-23171212-vmss000001} Pulling: Pulling image "k8s.gcr.io/pause:3.1" Nov 14 04:54:36.590: INFO: At 2019-11-14 04:54:18 +0000 UTC - event for rs-pod1-zkjdq: {kubelet k8s-agentpool-23171212-vmss000001} Pulled: Successfully pulled image "k8s.gcr.io/pause:3.1" Nov 14 04:54:36.590: INFO: At 2019-11-14 04:54:22 +0000 UTC - event for rs-pod1-6rq9f: {kubelet k8s-agentpool-23171212-vmss000001} Created: Created container pod1 Nov 14 04:54:36.590: INFO: At 2019-11-14 04:54:23 +0000 UTC - event for rs-pod1-h6c77: {kubelet k8s-agentpool-23171212-vmss000001} Created: Created container pod1 Nov 14 04:54:36.590: INFO: At 2019-11-14 04:54:23 +0000 UTC - event for rs-pod1-zkjdq: {kubelet k8s-agentpool-23171212-vmss000001} Created: Created container pod1 Nov 14 04:54:36.590: INFO: At 2019-11-14 04:54:25 +0000 UTC - event for rs-pod1-qvw5b: {kubelet k8s-agentpool-23171212-vmss000001} Pulling: Pulling image "k8s.gcr.io/pause:3.1" Nov 14 04:54:36.590: INFO: At 2019-11-14 04:54:26 +0000 UTC - event for rs-pod1-qvw5b: {kubelet k8s-agentpool-23171212-vmss000001} Pulled: Successfully pulled image "k8s.gcr.io/pause:3.1" Nov 14 04:54:36.590: INFO: At 2019-11-14 04:54:29 +0000 UTC - event for rs-pod1-qvw5b: {kubelet k8s-agentpool-23171212-vmss000001} Created: Created container pod1 Nov 14 04:54:36.590: INFO: At 2019-11-14 04:54:33 +0000 UTC - event for rs-pod1-6rq9f: {kubelet k8s-agentpool-23171212-vmss000001} Started: Started container pod1 Nov 14 04:54:36.590: INFO: At 2019-11-14 04:54:33 +0000 UTC - event for rs-pod1-qbt2h: {kubelet k8s-agentpool-23171212-vmss000001} Pulling: Pulling image "k8s.gcr.io/pause:3.1" Nov 14 04:54:36.590: INFO: At 2019-11-14 04:54:33 +0000 UTC - event for rs-pod1-zkjdq: {kubelet k8s-agentpool-23171212-vmss000001} Started: Started container pod1 Nov 14 04:54:36.590: INFO: At 2019-11-14 04:54:34 +0000 UTC - event for rs-pod1-qbt2h: {kubelet k8s-agentpool-23171212-vmss000001} Pulled: Successfully pulled image "k8s.gcr.io/pause:3.1" Nov 14 04:54:36.590: INFO: At 2019-11-14 04:54:36 +0000 UTC - event for rs-pod1-h6c77: {kubelet k8s-agentpool-23171212-vmss000001} Started: Started container pod1 Nov 14 04:54:36.647: INFO: POD NODE PHASE GRACE CONDITIONS Nov 14 04:54:36.647: INFO: rs-pod1-6rq9f k8s-agentpool-23171212-vmss000001 Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-11-14 04:53:49 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-11-14 04:53:49 +0000 UTC ContainersNotReady containers with unready status: [pod1]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-11-14 04:53:49 +0000 UTC ContainersNotReady containers with unready status: [pod1]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-11-14 04:53:46 +0000 UTC }] Nov 14 04:54:36.647: INFO: rs-pod1-h6c77 k8s-agentpool-23171212-vmss000001 Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-11-14 04:53:48 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-11-14 04:53:48 +0000 UTC ContainersNotReady containers with unready status: [pod1]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 
2019-11-14 04:53:48 +0000 UTC ContainersNotReady containers with unready status: [pod1]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-11-14 04:53:46 +0000 UTC }] Nov 14 04:54:36.647: INFO: rs-pod1-qbt2h k8s-agentpool-23171212-vmss000001 Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-11-14 04:53:50 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-11-14 04:53:50 +0000 UTC ContainersNotReady containers with unready status: [pod1]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-11-14 04:53:50 +0000 UTC ContainersNotReady containers with unready status: [pod1]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-11-14 04:53:46 +0000 UTC }] Nov 14 04:54:36.647: INFO: rs-pod1-qvw5b k8s-agentpool-23171212-vmss000001 Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-11-14 04:53:51 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-11-14 04:53:51 +0000 UTC ContainersNotReady containers with unready status: [pod1]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-11-14 04:53:51 +0000 UTC ContainersNotReady containers with unready status: [pod1]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-11-14 04:53:46 +0000 UTC }] Nov 14 04:54:36.647: INFO: rs-pod1-zkjdq k8s-agentpool-23171212-vmss000001 Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-11-14 04:53:46 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-11-14 04:53:46 +0000 UTC ContainersNotReady containers with unready status: [pod1]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-11-14 04:53:46 +0000 UTC ContainersNotReady containers with unready status: [pod1]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-11-14 04:53:46 +0000 UTC }] Nov 14 04:54:36.647: INFO: Nov 14 04:54:36.816: INFO: Logging node info for node k8s-agentpool-23171212-vmss000000 Nov 14 04:54:36.872: INFO: Node Info: &Node{ObjectMeta:{k8s-agentpool-23171212-vmss000000 /api/v1/nodes/k8s-agentpool-23171212-vmss000000 0f3bbebc-9d46-4ddd-a1dc-c93db8b52883 28910 0 2019-11-14 04:40:04 +0000 UTC <nil> <nil> map[agentpool:agentpool beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:Standard_D4s_v3 beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:westus2 failure-domain.beta.kubernetes.io/zone:westus2-1 kubernetes.azure.com/cluster:kubetest-9c63b39e-0695-11ea-a4cc-c60aac250e75 kubernetes.azure.com/role:agent kubernetes.io/arch:amd64 kubernetes.io/hostname:k8s-agentpool-23171212-vmss000000 kubernetes.io/os:linux kubernetes.io/role:agent node-role.kubernetes.io/agent: storageprofile:managed storagetier:Premium_LRS] map[csi.volume.kubernetes.io/nodeid:{"csi-hostpath-provisioning-2202":"k8s-agentpool-23171212-vmss000000","csi-hostpath-provisioning-8364":"k8s-agentpool-23171212-vmss000000","csi-hostpath-volume-8403":"k8s-agentpool-23171212-vmss000000","csi-hostpath-volume-expand-1206":"k8s-agentpool-23171212-vmss000000","csi-hostpath-volume-expand-2585":"k8s-agentpool-23171212-vmss000000","csi-hostpath-volume-expand-5498":"k8s-agentpool-23171212-vmss000000","csi-hostpath-volume-expand-6633":"k8s-agentpool-23171212-vmss000000","csi-mock-csi-mock-volumes-4558":"csi-mock-csi-mock-volumes-4558","csi-mock-csi-mock-volumes-6397":"csi-mock-csi-mock-volumes-6397","csi-mock-csi-mock-volumes-7486":"csi-mock-csi-mock-volumes-7486","csi-mock-csi-mock-volumes-7581":"csi-mock-csi-mock-volumes-7581","csi-mock-csi-mock-volumes-8512":"csi-mock-csi-mock-volumes-8512","csi-mock-csi-mock-volumes-9601":"csi-mock-csi-mock-volumes-9601"} 
node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] []},Spec:NodeSpec{PodCIDR:10.244.4.0/24,DoNotUse_ExternalID:,ProviderID:azure:///subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-9c63b39e-0695-11ea-a4cc-c60aac250e75/providers/Microsoft.Compute/virtualMachineScaleSets/k8s-agentpool-23171212-vmss/virtualMachines/0,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.4.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{4 0} {<nil>} 4 DecimalSI},ephemeral-storage: {{31036776448 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{16797569024 0} {<nil>} 16403876Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{4 0} {<nil>} 4 DecimalSI},ephemeral-storage: {{27933098757 0} {<nil>} 27933098757 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{16011137024 0} {<nil>} 15635876Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2019-11-14 04:40:48 +0000 UTC,LastTransitionTime:2019-11-14 04:40:48 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2019-11-14 04:53:48 +0000 UTC,LastTransitionTime:2019-11-14 04:39:50 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2019-11-14 04:53:48 +0000 UTC,LastTransitionTime:2019-11-14 04:39:50 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2019-11-14 04:53:48 +0000 UTC,LastTransitionTime:2019-11-14 04:39:50 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2019-11-14 04:53:48 +0000 UTC,LastTransitionTime:2019-11-14 04:40:22 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.248.0.4,},NodeAddress{Type:Hostname,Address:k8s-agentpool-23171212-vmss000000,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:359d6aea81114a07a8070169aad06c4a,SystemUUID:A77EC1C1-102D-514B-A3FC-E5E916EF17BD,BootID:fc99ebb5-9bcd-41e5-aad2-849e47da2eea,KernelVersion:4.15.0-1063-azure,OSImage:Ubuntu 16.04.6 LTS,ContainerRuntimeVersion:docker://3.0.6,KubeletVersion:v1.16.4-beta.0.1+d70a3ca08fe72a,KubeProxyVersion:v1.16.4-beta.0.1+d70a3ca08fe72a,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8sprow.azurecr.io/hyperkube-amd64@sha256:4c04f9ab0fa34bcbcb8ebfbced912f9b998c5d9c090fafdca92911d124fa339b k8sprow.azurecr.io/hyperkube-amd64:azure-e2e-1194831241233305600-197629b6],SizeBytes:604811790,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/volume/nfs@sha256:c2ad734346f608a5f7d69cfded93c4e8094069320657bd372d12ba21dea3ea71 gcr.io/kubernetes-e2e-test-images/volume/nfs:1.0],SizeBytes:225358913,},ContainerImage{Names:[httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 httpd:2.4.38-alpine],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/kubernetes-dashboard-amd64@sha256:0ae6b69432e78069c5ce2bcde0fe409c5c4d6f0f4d9cd50a17974fea38898747 k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.1],SizeBytes:121711221,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/agnhost@sha256:4057a5580c7b59c4fe10d8ab2732c9dec35eea80fd41f7bafc7bd5acc7edf727 gcr.io/kubernetes-e2e-test-images/agnhost:2.6],SizeBytes:57345321,},ContainerImage{Names:[quay.io/k8scsi/csi-provisioner@sha256:0efcb424f1dde9b9fb11a1a14f2e48ab47e1c3f08bc3a929990dcfcb1f7ab34f quay.io/k8scsi/csi-provisioner:v1.4.0-rc1],SizeBytes:54431016,},ContainerImage{Names:[quay.io/k8scsi/csi-snapshotter@sha256:e3d3e742e32d00488fdb401045b9b1d033d7ca0ab6e760f77b24750fc95e5f70 quay.io/k8scsi/csi-snapshotter:v2.0.0-rc1],SizeBytes:51703561,},ContainerImage{Names:[k8s.gcr.io/ip-masq-agent-amd64@sha256:269e0fb9d53fd37f7a135d6a55ea265a67279ba218aa148323f015cf70167340 k8s.gcr.io/ip-masq-agent-amd64:v2.3.0],SizeBytes:50144412,},ContainerImage{Names:[quay.io/k8scsi/csi-attacher@sha256:26fccd7a99d973845df1193b46ebdcc6ab8dc5f6e6be319750c471fce1742d13 quay.io/k8scsi/csi-attacher:v1.2.0],SizeBytes:46226754,},ContainerImage{Names:[quay.io/k8scsi/csi-attacher@sha256:0aba670b4d9d6b2e720bbf575d733156c676b693ca26501235444490300db838 quay.io/k8scsi/csi-attacher:v1.1.0],SizeBytes:42839085,},ContainerImage{Names:[quay.io/k8scsi/csi-resizer@sha256:7d46fb6eb8b890dc546029d1565d502b4a1d974d33625c6ee2bc7991b77fc1a1 quay.io/k8scsi/csi-resizer:v0.2.0],SizeBytes:42817100,},ContainerImage{Names:[quay.io/k8scsi/csi-resizer@sha256:f315c9042e56def3c05c6b04fe79ec9da6d39ddc557ca365a76cf35964ea08b6 quay.io/k8scsi/csi-resizer:v0.1.0],SizeBytes:42623056,},ContainerImage{Names:[k8s.gcr.io/coredns@sha256:e83beb5e43f8513fa735e77ffc5859640baea30a882a11cc75c4c3244a737d3c k8s.gcr.io/coredns:1.5.0],SizeBytes:42488424,},ContainerImage{Names:[quay.io/k8scsi/hostpathplugin@sha256:b4826e492fc1762fceaf9726f41575ca0a4567864d3d235da874818de18039de quay.io/k8scsi/hostpathplugin:v1.2.0-rc5],SizeBytes:28761497,},ContainerImage{Names:[mcr.microsoft.com/k8s/flexvolume/keyvault-flexvolume@sha256:4fd30d43947d4a54fc89ead7985beecfd3c9b2a93a0655a373b1608ab90bd5af 
mcr.microsoft.com/k8s/flexvolume/keyvault-flexvolume:v0.0.7],SizeBytes:22909487,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/echoserver@sha256:e9ba514b896cdf559eef8788b66c2c3ee55f3572df617647b4b0d8b6bf81cf19 gcr.io/kubernetes-e2e-test-images/echoserver:2.2],SizeBytes:21692741,},ContainerImage{Names:[quay.io/k8scsi/mock-driver@sha256:e0eed916b7d970bad2b7d9875f9ad16932f987f0f3d91ec5d86da68b0b5cc9d1 quay.io/k8scsi/mock-driver:v2.1.0],SizeBytes:16226335,},ContainerImage{Names:[nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 nginx:1.14-alpine],SizeBytes:16032814,},ContainerImage{Names:[quay.io/k8scsi/csi-node-driver-registrar@sha256:13daf82fb99e951a4bff8ae5fc7c17c3a8fe7130be6400990d8f6076c32d4599 quay.io/k8scsi/csi-node-driver-registrar:v1.1.0],SizeBytes:15815995,},ContainerImage{Names:[quay.io/k8scsi/livenessprobe@sha256:dde617756e0f602adc566ab71fd885f1dad451ad3fb063ac991c95a2ff47aea5 quay.io/k8scsi/livenessprobe:v1.1.0],SizeBytes:14967303,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/dnsutils@sha256:2abeee84efb79c14d731966e034af33bf324d3b26ca28497555511ff094b3ddd gcr.io/kubernetes-e2e-test-images/dnsutils:1.1],SizeBytes:9349974,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nonewprivs@sha256:10066e9039219449fe3c81f38fe01928f87914150768ab81b62a468e51fa7411 gcr.io/kubernetes-e2e-test-images/nonewprivs:1.0],SizeBytes:6757579,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nautilus@sha256:33a732d4c42a266912a5091598a0f07653c9134db4b8d571690d8afd509e0bfc gcr.io/kubernetes-e2e-test-images/nautilus:1.0],SizeBytes:4753501,},ContainerImage{Names:[gcr.io/authenticated-image-pulling/alpine@sha256:7ff177862cb50c602bfe81f805969412e619c054a2bbead977d0c276988aa4a0 gcr.io/authenticated-image-pulling/alpine:3.7],SizeBytes:4206620,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/mounttest@sha256:c0bd6f0755f42af09a68c9a47fb993136588a76b3200ec305796b60d629d85d2 gcr.io/kubernetes-e2e-test-images/mounttest:1.0],SizeBytes:1563521,},ContainerImage{Names:[mcr.microsoft.com/k8s/flexvolume/blobfuse-flexvolume@sha256:23d8c6033f02a1ecad05127ebdc931bb871264228661bc122704b0974e4d9fdd mcr.microsoft.com/k8s/flexvolume/blobfuse-flexvolume:1.0.8],SizeBytes:1159025,},ContainerImage{Names:[busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 busybox:1.29],SizeBytes:1154361,},ContainerImage{Names:[k8s.gcr.io/pause-amd64@sha256:59eec8837a4d942cc19a52b8c09ea75121acc38114a2c68b98983ce9356b8610 k8s.gcr.io/pause@sha256:f78411e19d84a252e53bff71a4407a5686c46983a2c2eeed83929b888179acea k8s.gcr.io/pause-amd64:3.1 k8s.gcr.io/pause:3.1],SizeBytes:742472,},},VolumesInUse:[kubernetes.io/csi/csi-hostpath-volume-expand-6633^bb8f80e2-069a-11ea-af09-000d3ac2fa68],VolumesAttached:[]AttachedVolume{AttachedVolume{Name:kubernetes.io/csi/csi-hostpath-volume-expand-6633^bb8f80e2-069a-11ea-af09-000d3ac2fa68,DevicePath:,},},Config:nil,},} Nov 14 04:54:36.872: INFO: Logging kubelet events for node k8s-agentpool-23171212-vmss000000 Nov 14 04:54:36.935: INFO: Logging pods the kubelet thinks is on node k8s-agentpool-23171212-vmss000000 Nov 14 04:54:37.054: INFO: csi-hostpath-provisioner-0 started at 2019-11-14 04:53:37 +0000 UTC (0+1 container statuses recorded) Nov 14 04:54:37.054: INFO: Container csi-provisioner ready: true, restart count 0 Nov 14 04:54:37.054: INFO: ss2-1 started at 2019-11-14 04:53:07 +0000 UTC (0+1 container statuses recorded) Nov 14 04:54:37.054: INFO: Container webserver ready: true, restart count 0 Nov 14 04:54:37.054: INFO: 
csi-snapshotter-0 started at 2019-11-14 04:50:54 +0000 UTC (0+1 container statuses recorded) Nov 14 04:54:37.054: INFO: Container csi-snapshotter ready: true, restart count 0 Nov 14 04:54:37.054: INFO: pod-subpath-test-local-preprovisionedpv-2mrx started at 2019-11-14 04:54:23 +0000 UTC (2+2 container statuses recorded) Nov 14 04:54:37.054: INFO: Init container init-volume-local-preprovisionedpv-2mrx ready: true, restart count 0 Nov 14 04:54:37.054: INFO: Init container test-init-subpath-local-preprovisionedpv-2mrx ready: true, restart count 0 Nov 14 04:54:37.054: INFO: Container test-container-subpath-local-preprovisionedpv-2mrx ready: false, restart count 0 Nov 14 04:54:37.054: INFO: Container test-container-volume-local-preprovisionedpv-2mrx ready: false, restart count 0 Nov 14 04:54:37.054: INFO: blobfuse-flexvol-installer-6xhz6 started at 2019-11-14 04:40:49 +0000 UTC (0+1 container statuses recorded) Nov 14 04:54:37.054: INFO: Container blobfuse-flexvol-installer ready: true, restart count 0 Nov 14 04:54:37.054: INFO: csi-hostpathplugin-0 started at 2019-11-14 04:54:32 +0000 UTC (0+0 container statuses recorded) Nov 14 04:54:37.054: INFO: csi-hostpathplugin-0 started at 2019-11-14 04:54:33 +0000 UTC (0+0 container statuses recorded) Nov 14 04:54:37.054: INFO: hostexec-k8s-agentpool-23171212-vmss000000 started at 2019-11-14 04:53:47 +0000 UTC (0+1 container statuses recorded) Nov 14 04:54:37.054: INFO: Container agnhost ready: true, restart count 0 Nov 14 04:54:37.054: INFO: csi-hostpath-resizer-0 started at 2019-11-14 04:53:38 +0000 UTC (0+1 container statuses recorded) Nov 14 04:54:37.054: INFO: Container csi-resizer ready: true, restart count 0 Nov 14 04:54:37.054: INFO: coredns-87f5d796-k7mr9 started at 2019-11-14 04:40:27 +0000 UTC (0+1 container statuses recorded) Nov 14 04:54:37.054: INFO: Container coredns ready: true, restart count 0 Nov 14 04:54:37.054: INFO: csi-hostpath-resizer-0 started at 2019-11-14 04:53:17 +0000 UTC (0+1 container statuses recorded) Nov 14 04:54:37.054: INFO: Container csi-resizer ready: true, restart count 0 Nov 14 04:54:37.054: INFO: csi-snapshotter-0 started at 2019-11-14 04:53:17 +0000 UTC (0+1 container statuses recorded) Nov 14 04:54:37.054: INFO: Container csi-snapshotter ready: true, restart count 0 Nov 14 04:54:37.054: INFO: csi-hostpath-provisioner-0 started at 2019-11-14 04:50:53 +0000 UTC (0+1 container statuses recorded) Nov 14 04:54:37.054: INFO: Container csi-provisioner ready: true, restart count 0 Nov 14 04:54:37.054: INFO: csi-hostpathplugin-0 started at 2019-11-14 04:53:36 +0000 UTC (0+3 container statuses recorded) Nov 14 04:54:37.054: INFO: Container hostpath ready: true, restart count 0 Nov 14 04:54:37.054: INFO: Container liveness-probe ready: true, restart count 0 Nov 14 04:54:37.054: INFO: Container node-driver-registrar ready: true, restart count 0 Nov 14 04:54:37.054: INFO: csi-hostpath-provisioner-0 started at 2019-11-14 04:53:16 +0000 UTC (0+1 container statuses recorded) Nov 14 04:54:37.054: INFO: Container csi-provisioner ready: true, restart count 0 Nov 14 04:54:37.054: INFO: keyvault-flexvolume-ljqsq started at 2019-11-14 04:40:49 +0000 UTC (0+1 container statuses recorded) Nov 14 04:54:37.054: INFO: Container keyvault-flexvolume ready: true, restart count 0 Nov 14 04:54:37.054: INFO: kubernetes-dashboard-65966766b9-b8ps7 started at 2019-11-14 04:40:49 +0000 UTC (0+1 container statuses recorded) Nov 14 04:54:37.054: INFO: Container kubernetes-dashboard ready: true, restart count 0 Nov 14 04:54:37.054: INFO: 
csi-hostpath-attacher-0 started at 2019-11-14 04:53:16 +0000 UTC (0+1 container statuses recorded) Nov 14 04:54:37.054: INFO: Container csi-attacher ready: true, restart count 0 Nov 14 04:54:37.054: INFO: ss2-1 started at 2019-11-14 04:53:27 +0000 UTC (0+1 container statuses recorded) Nov 14 04:54:37.054: INFO: Container webserver ready: true, restart count 0 Nov 14 04:54:37.054: INFO: csi-snapshotter-0 started at 2019-11-14 04:51:54 +0000 UTC (0+1 container statuses recorded) Nov 14 04:54:37.054: INFO: Container csi-snapshotter ready: true, restart count 0 Nov 14 04:54:37.054: INFO: csi-hostpath-attacher-0 started at 2019-11-14 04:51:53 +0000 UTC (0+1 container statuses recorded) Nov 14 04:54:37.055: INFO: Container csi-attacher ready: true, restart count 0 Nov 14 04:54:37.055: INFO: csi-hostpath-provisioner-0 started at 2019-11-14 04:51:54 +0000 UTC (0+1 container statuses recorded) Nov 14 04:54:37.055: INFO: Container csi-provisioner ready: true, restart count 0 Nov 14 04:54:37.055: INFO: csi-hostpath-resizer-0 started at 2019-11-14 04:51:54 +0000 UTC (0+1 container statuses recorded) Nov 14 04:54:37.055: INFO: Container csi-resizer ready: true, restart count 0 Nov 14 04:54:37.055: INFO: csi-hostpath-attacher-0 started at 2019-11-14 04:53:37 +0000 UTC (0+1 container statuses recorded) Nov 14 04:54:37.055: INFO: Container csi-attacher ready: true, restart count 0 Nov 14 04:54:37.055: INFO: azure-ip-masq-agent-dgg69 started at 2019-11-14 04:40:27 +0000 UTC (0+1 container statuses recorded) Nov 14 04:54:37.055: INFO: Container azure-ip-masq-agent ready: true, restart count 0 Nov 14 04:54:37.055: INFO: csi-hostpath-attacher-0 started at 2019-11-14 04:50:53 +0000 UTC (0+1 container statuses recorded) Nov 14 04:54:37.055: INFO: Container csi-attacher ready: true, restart count 0 Nov 14 04:54:37.055: INFO: csi-hostpath-resizer-0 started at 2019-11-14 04:50:54 +0000 UTC (0+1 container statuses recorded) Nov 14 04:54:37.055: INFO: Container csi-resizer ready: true, restart count 0 Nov 14 04:54:37.055: INFO: pvc-datasource-writer-7rbg4 started at 2019-11-14 04:50:54 +0000 UTC (0+1 container statuses recorded) Nov 14 04:54:37.055: INFO: Container volume-tester ready: false, restart count 0 Nov 14 04:54:37.055: INFO: pod-subpath-test-hostpathsymlink-v8l2 started at 2019-11-14 04:54:34 +0000 UTC (2+2 container statuses recorded) Nov 14 04:54:37.055: INFO: Init container init-volume-hostpathsymlink-v8l2 ready: false, restart count 0 Nov 14 04:54:37.055: INFO: Init container test-init-subpath-hostpathsymlink-v8l2 ready: false, restart count 0 Nov 14 04:54:37.055: INFO: Container test-container-subpath-hostpathsymlink-v8l2 ready: false, restart count 0 Nov 14 04:54:37.055: INFO: Container test-container-volume-hostpathsymlink-v8l2 ready: false, restart count 0 Nov 14 04:54:37.055: INFO: csi-hostpathplugin-0 started at 2019-11-14 04:54:34 +0000 UTC (0+0 container statuses recorded) Nov 14 04:54:37.055: INFO: kube-proxy-cdq9f started at 2019-11-14 04:40:27 +0000 UTC (0+1 container statuses recorded) Nov 14 04:54:37.055: INFO: Container kube-proxy ready: true, restart count 0 Nov 14 04:54:37.055: INFO: csi-snapshotter-0 started at 2019-11-14 04:53:38 +0000 UTC (0+1 container statuses recorded) Nov 14 04:54:37.055: INFO: Container csi-snapshotter ready: true, restart count 0 Nov 14 04:54:37.055: INFO: security-context-06568e16-f019-4982-a45b-c9957222ee01 started at 2019-11-14 04:53:44 +0000 UTC (0+1 container statuses recorded) Nov 14 04:54:37.055: INFO: Container write-pod ready: true, restart count 0 
W1114 04:54:37.112409 92588 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Nov 14 04:54:37.264: INFO: Latency metrics for node k8s-agentpool-23171212-vmss000000 Nov 14 04:54:37.264: INFO: Logging node info for node k8s-agentpool-23171212-vmss000001 Nov 14 04:54:37.321: INFO: Node Info: &Node{ObjectMeta:{k8s-agentpool-23171212-vmss000001 /api/v1/nodes/k8s-agentpool-23171212-vmss000001 e9c1f552-b95b-4548-9ecd-37a7f1925e75 28774 0 2019-11-14 04:40:09 +0000 UTC <nil> <nil> map[agentpool:agentpool beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:Standard_D4s_v3 beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:westus2 failure-domain.beta.kubernetes.io/zone:westus2-2 kubernetes.azure.com/cluster:kubetest-9c63b39e-0695-11ea-a4cc-c60aac250e75 kubernetes.azure.com/role:agent kubernetes.io/arch:amd64 kubernetes.io/hostname:k8s-agentpool-23171212-vmss000001 kubernetes.io/os:linux kubernetes.io/role:agent node-role.kubernetes.io/agent: storageprofile:managed storagetier:Premium_LRS] map[csi.volume.kubernetes.io/nodeid:{"csi-hostpath-ephemeral-6971":"k8s-agentpool-23171212-vmss000001","csi-hostpath-provisioning-3033":"k8s-agentpool-23171212-vmss000001","csi-hostpath-provisioning-3310":"k8s-agentpool-23171212-vmss000001","csi-hostpath-provisioning-4400":"k8s-agentpool-23171212-vmss000001","csi-hostpath-volume-expand-2485":"k8s-agentpool-23171212-vmss000001","csi-mock-csi-mock-volumes-3324":"csi-mock-csi-mock-volumes-3324","csi-mock-csi-mock-volumes-3770":"csi-mock-csi-mock-volumes-3770","csi-mock-csi-mock-volumes-9859":"csi-mock-csi-mock-volumes-9859"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] []},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUse_ExternalID:,ProviderID:azure:///subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-9c63b39e-0695-11ea-a4cc-c60aac250e75/providers/Microsoft.Compute/virtualMachineScaleSets/k8s-agentpool-23171212-vmss/virtualMachines/1,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{4 0} {<nil>} 4 DecimalSI},ephemeral-storage: {{31036776448 0} {<nil>} BinarySI},example.com/fakecpu: {{800 0} {<nil>} 800 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{16797569024 0} {<nil>} 16403876Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{4 0} {<nil>} 4 DecimalSI},ephemeral-storage: {{27933098757 0} {<nil>} 27933098757 DecimalSI},example.com/fakecpu: {{800 0} {<nil>} 800 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{16011137024 0} {<nil>} 15635876Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2019-11-14 04:40:48 +0000 UTC,LastTransitionTime:2019-11-14 04:40:48 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2019-11-14 04:53:43 +0000 UTC,LastTransitionTime:2019-11-14 04:40:09 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2019-11-14 04:53:43 +0000 UTC,LastTransitionTime:2019-11-14 04:40:09 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk 
pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2019-11-14 04:53:43 +0000 UTC,LastTransitionTime:2019-11-14 04:40:09 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2019-11-14 04:53:43 +0000 UTC,LastTransitionTime:2019-11-14 04:40:22 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.248.0.5,},NodeAddress{Type:Hostname,Address:k8s-agentpool-23171212-vmss000001,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:639707efd7a74ac4bca6a608e99a6715,SystemUUID:CACA620B-0C7C-7040-A716-91F766CA5A2F,BootID:9fabe02f-4e56-4162-b5c5-2e2733911b4f,KernelVersion:4.15.0-1063-azure,OSImage:Ubuntu 16.04.6 LTS,ContainerRuntimeVersion:docker://3.0.6,KubeletVersion:v1.16.4-beta.0.1+d70a3ca08fe72a,KubeProxyVersion:v1.16.4-beta.0.1+d70a3ca08fe72a,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8sprow.azurecr.io/hyperkube-amd64@sha256:4c04f9ab0fa34bcbcb8ebfbced912f9b998c5d9c090fafdca92911d124fa339b k8sprow.azurecr.io/hyperkube-amd64:azure-e2e-1194831241233305600-197629b6],SizeBytes:604811790,},ContainerImage{Names:[quay.io/kubernetes_incubator/nfs-provisioner@sha256:df762117e3c891f2d2ddff46ecb0776ba1f9f3c44cfd7739b0683bcd7a7954a8 quay.io/kubernetes_incubator/nfs-provisioner:v2.2.2],SizeBytes:391772778,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa k8s.gcr.io/etcd:3.3.15],SizeBytes:246640776,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/volume/nfs@sha256:c2ad734346f608a5f7d69cfded93c4e8094069320657bd372d12ba21dea3ea71 gcr.io/kubernetes-e2e-test-images/volume/nfs:1.0],SizeBytes:225358913,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0],SizeBytes:195659796,},ContainerImage{Names:[httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 httpd:2.4.38-alpine],SizeBytes:123781643,},ContainerImage{Names:[gcr.io/kubernetes-helm/tiller@sha256:f6d8f4ab9ba993b5f5b60a6edafe86352eabe474ffeb84cb6c79b8866dce45d1 gcr.io/kubernetes-helm/tiller:v2.11.0],SizeBytes:71821984,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/sample-apiserver@sha256:1bafcc6fb1aa990b487850adba9cadc020e42d7905aa8a30481182a477ba24b0 gcr.io/kubernetes-e2e-test-images/sample-apiserver:1.10],SizeBytes:61365829,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/agnhost@sha256:4057a5580c7b59c4fe10d8ab2732c9dec35eea80fd41f7bafc7bd5acc7edf727 gcr.io/kubernetes-e2e-test-images/agnhost:2.6],SizeBytes:57345321,},ContainerImage{Names:[quay.io/k8scsi/csi-provisioner@sha256:0efcb424f1dde9b9fb11a1a14f2e48ab47e1c3f08bc3a929990dcfcb1f7ab34f quay.io/k8scsi/csi-provisioner:v1.4.0-rc1],SizeBytes:54431016,},ContainerImage{Names:[quay.io/k8scsi/csi-snapshotter@sha256:e3d3e742e32d00488fdb401045b9b1d033d7ca0ab6e760f77b24750fc95e5f70 quay.io/k8scsi/csi-snapshotter:v2.0.0-rc1],SizeBytes:51703561,},ContainerImage{Names:[k8s.gcr.io/ip-masq-agent-amd64@sha256:269e0fb9d53fd37f7a135d6a55ea265a67279ba218aa148323f015cf70167340 
k8s.gcr.io/ip-masq-agent-amd64:v2.3.0],SizeBytes:50144412,},ContainerImage{Names:[quay.io/k8scsi/csi-attacher@sha256:26fccd7a99d973845df1193b46ebdcc6ab8dc5f6e6be319750c471fce1742d13 quay.io/k8scsi/csi-attacher:v1.2.0],SizeBytes:46226754,},ContainerImage{Names:[quay.io/k8scsi/csi-attacher@sha256:0aba670b4d9d6b2e720bbf575d733156c676b693ca26501235444490300db838 quay.io/k8scsi/csi-attacher:v1.1.0],SizeBytes:42839085,},ContainerImage{Names:[quay.io/k8scsi/csi-resizer@sha256:7d46fb6eb8b890dc546029d1565d502b4a1d974d33625c6ee2bc7991b77fc1a1 quay.io/k8scsi/csi-resizer:v0.2.0],SizeBytes:42817100,},ContainerImage{Names:[quay.io/k8scsi/csi-resizer@sha256:f315c9042e56def3c05c6b04fe79ec9da6d39ddc557ca365a76cf35964ea08b6 quay.io/k8scsi/csi-resizer:v0.1.0],SizeBytes:42623056,},ContainerImage{Names:[k8s.gcr.io/metrics-server-amd64@sha256:49a9f12f7067d11f42c803dbe61ed2c1299959ad85cb315b25ff7eef8e6b8892 k8s.gcr.io/metrics-server-amd64:v0.2.1],SizeBytes:42541759,},ContainerImage{Names:[gcr.io/google-containers/debian-base@sha256:6966a0aedd7592c18ff2dd803c08bd85780ee19f5e3a2e7cf908a4cd837afcde gcr.io/google-containers/debian-base:0.4.1],SizeBytes:42323657,},ContainerImage{Names:[redis@sha256:50899ea1ceed33fa03232f3ac57578a424faa1742c1ac9c7a7bdb95cdf19b858 redis:5.0.5-alpine],SizeBytes:29331594,},ContainerImage{Names:[quay.io/k8scsi/hostpathplugin@sha256:b4826e492fc1762fceaf9726f41575ca0a4567864d3d235da874818de18039de quay.io/k8scsi/hostpathplugin:v1.2.0-rc5],SizeBytes:28761497,},ContainerImage{Names:[mcr.microsoft.com/k8s/flexvolume/keyvault-flexvolume@sha256:4fd30d43947d4a54fc89ead7985beecfd3c9b2a93a0655a373b1608ab90bd5af mcr.microsoft.com/k8s/flexvolume/keyvault-flexvolume:v0.0.7],SizeBytes:22909487,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/echoserver@sha256:e9ba514b896cdf559eef8788b66c2c3ee55f3572df617647b4b0d8b6bf81cf19 gcr.io/kubernetes-e2e-test-images/echoserver:2.2],SizeBytes:21692741,},ContainerImage{Names:[quay.io/k8scsi/mock-driver@sha256:e0eed916b7d970bad2b7d9875f9ad16932f987f0f3d91ec5d86da68b0b5cc9d1 quay.io/k8scsi/mock-driver:v2.1.0],SizeBytes:16226335,},ContainerImage{Names:[nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 nginx:1.14-alpine],SizeBytes:16032814,},ContainerImage{Names:[quay.io/k8scsi/csi-node-driver-registrar@sha256:13daf82fb99e951a4bff8ae5fc7c17c3a8fe7130be6400990d8f6076c32d4599 quay.io/k8scsi/csi-node-driver-registrar:v1.1.0],SizeBytes:15815995,},ContainerImage{Names:[quay.io/k8scsi/livenessprobe@sha256:dde617756e0f602adc566ab71fd885f1dad451ad3fb063ac991c95a2ff47aea5 quay.io/k8scsi/livenessprobe:v1.1.0],SizeBytes:14967303,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/dnsutils@sha256:2abeee84efb79c14d731966e034af33bf324d3b26ca28497555511ff094b3ddd gcr.io/kubernetes-e2e-test-images/dnsutils:1.1],SizeBytes:9349974,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nonewprivs@sha256:10066e9039219449fe3c81f38fe01928f87914150768ab81b62a468e51fa7411 gcr.io/kubernetes-e2e-test-images/nonewprivs:1.0],SizeBytes:6757579,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nautilus@sha256:33a732d4c42a266912a5091598a0f07653c9134db4b8d571690d8afd509e0bfc gcr.io/kubernetes-e2e-test-images/nautilus:1.0],SizeBytes:4753501,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/test-webserver@sha256:7f93d6e32798ff28bc6289254d0c2867fe2c849c8e46edc50f8624734309812e 
gcr.io/kubernetes-e2e-test-images/test-webserver:1.0],SizeBytes:4732240,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/mounttest@sha256:c0bd6f0755f42af09a68c9a47fb993136588a76b3200ec305796b60d629d85d2 gcr.io/kubernetes-e2e-test-images/mounttest:1.0],SizeBytes:1563521,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/mounttest-user@sha256:17319ca525ee003681fccf7e8c6b1b910ff4f49b653d939ac7f9b6e7c463933d gcr.io/kubernetes-e2e-test-images/mounttest-user:1.0],SizeBytes:1450451,},ContainerImage{Names:[mcr.microsoft.com/k8s/flexvolume/blobfuse-flexvolume@sha256:23d8c6033f02a1ecad05127ebdc931bb871264228661bc122704b0974e4d9fdd mcr.microsoft.com/k8s/flexvolume/blobfuse-flexvolume:1.0.8],SizeBytes:1159025,},ContainerImage{Names:[busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 busybox:1.29],SizeBytes:1154361,},ContainerImage{Names:[k8s.gcr.io/pause-amd64@sha256:59eec8837a4d942cc19a52b8c09ea75121acc38114a2c68b98983ce9356b8610 k8s.gcr.io/pause@sha256:f78411e19d84a252e53bff71a4407a5686c46983a2c2eeed83929b888179acea k8s.gcr.io/pause-amd64:3.1 k8s.gcr.io/pause:3.1],SizeBytes:742472,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Nov 14 04:54:37.321: INFO: Logging kubelet events for node k8s-agentpool-23171212-vmss000001 Nov 14 04:54:37.388: INFO: Logging pods the kubelet thinks is on node k8s-agentpool-23171212-vmss000001 Nov 14 04:54:37.452: INFO: downward-api-84a8a3d2-3d4f-420e-a571-fed734e255e2 started at 2019-11-14 04:53:27 +0000 UTC (0+1 container statuses recorded) Nov 14 04:54:37.452: INFO: Container dapi-container ready: false, restart count 0 Nov 14 04:54:37.452: INFO: hostexec-k8s-agentpool-23171212-vmss000001 started at 2019-11-14 04:53:08 +0000 UTC (0+1 container statuses recorded) Nov 14 04:54:37.452: INFO: Container agnhost ready: true, restart count 0 Nov 14 04:54:37.452: INFO: metrics-server-58ff8c5ddf-h7jqs started at 2019-11-14 04:40:50 +0000 UTC (0+1 container statuses recorded) Nov 14 04:54:37.452: INFO: Container metrics-server ready: true, restart count 0 Nov 14 04:54:37.452: INFO: pod-subpath-test-configmap-8t8x started at 2019-11-14 04:53:53 +0000 UTC (0+1 container statuses recorded) Nov 14 04:54:37.452: INFO: Container test-container-subpath-configmap-8t8x ready: false, restart count 0 Nov 14 04:54:37.452: INFO: busybox-host-aliasese1468a1f-ed82-40e1-ac46-33c91b10f88b started at 2019-11-14 04:53:23 +0000 UTC (0+1 container statuses recorded) Nov 14 04:54:37.452: INFO: Container busybox-host-aliasese1468a1f-ed82-40e1-ac46-33c91b10f88b ready: true, restart count 0 Nov 14 04:54:37.452: INFO: external-provisioner-psrp2 started at 2019-11-14 04:51:42 +0000 UTC (0+1 container statuses recorded) Nov 14 04:54:37.452: INFO: Container nfs-provisioner ready: true, restart count 0 Nov 14 04:54:37.452: INFO: local-injector started at 2019-11-14 04:53:24 +0000 UTC (0+1 container statuses recorded) Nov 14 04:54:37.452: INFO: Container local-injector ready: true, restart count 0 Nov 14 04:54:37.452: INFO: pod-secrets-43072a86-22c1-4f43-af43-52a8e723aac1 started at 2019-11-14 04:52:16 +0000 UTC (0+3 container statuses recorded) Nov 14 04:54:37.452: INFO: Container creates-volume-test ready: true, restart count 0 Nov 14 04:54:37.452: INFO: Container dels-volume-test ready: true, restart count 0 Nov 14 04:54:37.452: INFO: Container upds-volume-test ready: true, restart count 0 Nov 14 04:54:37.452: INFO: pod-with-poststart-http-hook started at 2019-11-14 04:54:19 +0000 UTC (0+1 container statuses recorded) Nov 14 04:54:37.452: 
INFO: Container pod-with-poststart-http-hook ready: false, restart count 0 Nov 14 04:54:37.452: INFO: azure-ip-masq-agent-mcg7w started at 2019-11-14 04:40:26 +0000 UTC (0+1 container statuses recorded) Nov 14 04:54:37.452: INFO: Container azure-ip-masq-agent ready: true, restart count 0 Nov 14 04:54:37.452: INFO: external-provisioner-86l4g started at 2019-11-14 04:53:36 +0000 UTC (0+1 container statuses recorded) Nov 14 04:54:37.452: INFO: Container nfs-provisioner ready: false, restart count 0 Nov 14 04:54:37.452: INFO: pod-handle-http-request started at 2019-11-14 04:53:32 +0000 UTC (0+1 container statuses recorded) Nov 14 04:54:37.452: INFO: Container pod-handle-http-request ready: true, restart count 0 Nov 14 04:54:37.452: INFO: tiller-deploy-7559b6b885-vkxml started at 2019-11-14 04:40:50 +0000 UTC (0+1 container statuses recorded) Nov 14 04:54:37.452: INFO: Container tiller ready: true, restart count 0 Nov 14 04:54:37.452: INFO: ss2-2 started at 2019-11-14 04:53:32 +0000 UTC (0+1 container statuses recorded) Nov 14 04:54:37.452: INFO: Container webserver ready: false, restart count 0 Nov 14 04:54:37.452: INFO: pod-subpath-test-local-preprovisionedpv-ptqj started at 2019-11-14 04:53:39 +0000 UTC (2+1 container statuses recorded) Nov 14 04:54:37.452: INFO: Init container init-volume-local-preprovisionedpv-ptqj ready: false, restart count 0 Nov 14 04:54:37.452: INFO: Init container test-init-volume-local-preprovisionedpv-ptqj ready: false, restart count 0 Nov 14 04:54:37.452: INFO: Container test-container-subpath-local-preprovisionedpv-ptqj ready: false, restart count 0 Nov 14 04:54:37.452: INFO: ss2-2 started at 2019-11-14 04:53:41 +0000 UTC (0+1 container statuses recorded) Nov 14 04:54:37.452: INFO: Container webserver ready: false, restart count 0 Nov 14 04:54:37.452: INFO: blobfuse-flexvol-installer-ktdjj started at 2019-11-14 04:40:49 +0000 UTC (0+1 container statuses recorded) Nov 14 04:54:37.452: INFO: Container blobfuse-flexvol-installer ready: true, restart count 0 Nov 14 04:54:37.452: INFO: pod-submit-remove-07ef7285-6684-498b-950f-33ea439d63f8 started at 2019-11-14 04:53:19 +0000 UTC (0+1 container statuses recorded) Nov 14 04:54:37.452: INFO: Container nginx ready: false, restart count 0 Nov 14 04:54:37.452: INFO: rs-pod1-h6c77 started at 2019-11-14 04:53:48 +0000 UTC (0+1 container statuses recorded) Nov 14 04:54:37.452: INFO: Container pod1 ready: false, restart count 0 Nov 14 04:54:37.452: INFO: external-provisioner-wmbtv started at 2019-11-14 04:53:03 +0000 UTC (0+1 container statuses recorded) Nov 14 04:54:37.452: INFO: Container nfs-provisioner ready: false, restart count 0 Nov 14 04:54:37.452: INFO: downwardapi-volume-a91c6843-6386-4ee8-8f66-f8b61b038a21 started at 2019-11-14 04:53:42 +0000 UTC (0+1 container statuses recorded) Nov 14 04:54:37.452: INFO: Container client-container ready: false, restart count 0 Nov 14 04:54:37.452: INFO: rs-pod1-zkjdq started at 2019-11-14 04:53:46 +0000 UTC (0+1 container statuses recorded) Nov 14 04:54:37.452: INFO: Container pod1 ready: false, restart count 0 Nov 14 04:54:37.452: INFO: bin-falsee557e594-74c9-40fb-bc3d-b0eb2d920c80 started at 2019-11-14 04:53:54 +0000 UTC (0+1 container statuses recorded) Nov 14 04:54:37.452: INFO: Container bin-falsee557e594-74c9-40fb-bc3d-b0eb2d920c80 ready: false, restart count 0 Nov 14 04:54:37.452: INFO: metadata-volume-c84ae3d5-97a5-4cb7-8fe3-5d5d666a05da started at 2019-11-14 04:54:00 +0000 UTC (0+1 container statuses recorded) Nov 14 04:54:37.453: INFO: Container client-container ready: 
false, restart count 0 Nov 14 04:54:37.453: INFO: keyvault-flexvolume-2g62m started at 2019-11-14 04:40:49 +0000 UTC (0+1 container statuses recorded) Nov 14 04:54:37.453: INFO: Container keyvault-flexvolume ready: true, restart count 0 Nov 14 04:54:37.453: INFO: hostexec-k8s-agentpool-23171212-vmss000001 started at 2019-11-14 04:53:18 +0000 UTC (0+1 container statuses recorded) Nov 14 04:54:37.453: INFO: Container agnhost ready: true, restart count 0 Nov 14 04:54:37.453: INFO: rs-pod1-qbt2h started at 2019-11-14 04:53:50 +0000 UTC (0+1 container statuses recorded) Nov 14 04:54:37.453: INFO: Container pod1 ready: false, restart count 0 Nov 14 04:54:37.453: INFO: rs-pod1-qvw5b started at 2019-11-14 04:53:51 +0000 UTC (0+1 container statuses recorded) Nov 14 04:54:37.453: INFO: Container pod1 ready: false, restart count 0 Nov 14 04:54:37.453: INFO: metadata-volume-81f3141a-e2db-4574-9386-0df8ae75e38d started at 2019-11-14 04:54:00 +0000 UTC (0+1 container statuses recorded) Nov 14 04:54:37.453: INFO: Container client-container ready: false, restart count 0 Nov 14 04:54:37.453: INFO: rs-pod1-6rq9f started at 2019-11-14 04:53:49 +0000 UTC (0+1 container statuses recorded) Nov 14 04:54:37.453: INFO: Container pod1 ready: false, restart count 0 Nov 14 04:54:37.453: INFO: replace-1573707240-rjr5h started at 2019-11-14 04:54:02 +0000 UTC (0+1 container statuses recorded) Nov 14 04:54:37.453: INFO: Container c ready: false, restart count 0 Nov 14 04:54:37.453: INFO: pod-configmaps-0f0e6626-21fa-4202-9d8d-a7085374f1eb started at 2019-11-14 04:54:23 +0000 UTC (0+1 container statuses recorded) Nov 14 04:54:37.453: INFO: Container configmap-volume-test ready: false, restart count 0 Nov 14 04:54:37.453: INFO: pod-1c0b5786-d6cf-411c-b1ec-0ca9fade1994 started at 2019-11-14 04:53:55 +0000 UTC (0+1 container statuses recorded) Nov 14 04:54:37.453: INFO: Container test-container ready: false, restart count 0 Nov 14 04:54:37.453: INFO: downward-api-f7a2bc99-e044-4176-a95e-80890fa852c7 started at 2019-11-14 04:54:18 +0000 UTC (0+1 container statuses recorded) Nov 14 04:54:37.453: INFO: Container dapi-container ready: false, restart count 0 Nov 14 04:54:37.453: INFO: downwardapi-volume-aa91b37f-436b-4bfe-9322-393bf1619731 started at 2019-11-14 04:54:21 +0000 UTC (0+1 container statuses recorded) Nov 14 04:54:37.453: INFO: Container client-container ready: false, restart count 0 Nov 14 04:54:37.453: INFO: ss2-0 started at 2019-11-14 04:52:44 +0000 UTC (0+1 container statuses recorded) Nov 14 04:54:37.453: INFO: Container webserver ready: true, restart count 0 Nov 14 04:54:37.453: INFO: ss2-0 started at 2019-11-14 04:53:12 +0000 UTC (0+1 container statuses recorded) Nov 14 04:54:37.453: INFO: Container webserver ready: true, restart count 0 Nov 14 04:54:37.453: INFO: kube-proxy-ng7z8 started at 2019-11-14 04:40:27 +0000 UTC (0+1 container statuses recorded) Nov 14 04:54:37.453: INFO: Container kube-proxy ready: true, restart count 0 W1114 04:54:37.508933 92588 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
Nov 14 04:54:38.197: INFO: Latency metrics for node k8s-agentpool-23171212-vmss000001 Nov 14 04:54:38.197: INFO: Logging node info for node k8s-master-23171212-vmss000000 Nov 14 04:54:38.262: INFO: Node Info: &Node{ObjectMeta:{k8s-master-23171212-vmss000000 /api/v1/nodes/k8s-master-23171212-vmss000000 6c9bb7ee-6dcf-4c6d-a8ad-0377f76a60f6 29063 0 2019-11-14 04:40:03 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:Standard_DS2_v2 beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:westus2 failure-domain.beta.kubernetes.io/zone:westus2-1 kubernetes.azure.com/cluster:kubetest-9c63b39e-0695-11ea-a4cc-c60aac250e75 kubernetes.azure.com/role:master kubernetes.io/arch:amd64 kubernetes.io/hostname:k8s-master-23171212-vmss000000 kubernetes.io/os:linux kubernetes.io/role:master node-role.kubernetes.io/master:] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] []},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUse_ExternalID:,ProviderID:azure:///subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-9c63b39e-0695-11ea-a4cc-c60aac250e75/providers/Microsoft.Compute/virtualMachineScaleSets/k8s-master-23171212-vmss/virtualMachines/0,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:true,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{31036776448 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7284887552 0} {<nil>} 7114148Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{27933098757 0} {<nil>} 27933098757 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{6498455552 0} {<nil>} 6346148Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2019-11-14 04:40:48 +0000 UTC,LastTransitionTime:2019-11-14 04:40:48 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2019-11-14 04:53:55 +0000 UTC,LastTransitionTime:2019-11-14 04:39:58 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2019-11-14 04:53:55 +0000 UTC,LastTransitionTime:2019-11-14 04:39:58 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2019-11-14 04:53:55 +0000 UTC,LastTransitionTime:2019-11-14 04:39:58 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2019-11-14 04:53:55 +0000 UTC,LastTransitionTime:2019-11-14 04:40:22 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.240.0.4,},NodeAddress{Type:Hostname,Address:k8s-master-23171212-vmss000000,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:813714caae2d48f4a9036e17505029ae,SystemUUID:A7C76EFE-4E2A-8042-A754-6642A667D859,BootID:245ff6cc-bfb4-4487-ac55-fb3813c9167c,KernelVersion:4.15.0-1063-azure,OSImage:Ubuntu 16.04.6 LTS,ContainerRuntimeVersion:docker://3.0.6,KubeletVersion:v1.16.4-beta.0.1+d70a3ca08fe72a,KubeProxyVersion:v1.16.4-beta.0.1+d70a3ca08fe72a,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8sprow.azurecr.io/hyperkube-amd64@sha256:4c04f9ab0fa34bcbcb8ebfbced912f9b998c5d9c090fafdca92911d124fa339b k8sprow.azurecr.io/hyperkube-amd64:azure-e2e-1194831241233305600-197629b6],SizeBytes:604811790,},ContainerImage{Names:[k8sprow.azurecr.io/azure-cloud-controller-manager@sha256:6fcb752760f3412a2cb10bce535ba4dfa8267081345fa1b5cbc7bb5126ce3437 k8sprow.azurecr.io/azure-cloud-controller-manager:1194831241233305600-d3e4a1cf],SizeBytes:92595467,},ContainerImage{Names:[k8s.gcr.io/kube-addon-manager-amd64@sha256:382c220b3531d9f95bf316a16b7282cc2ef929cd8a89a9dd3f5933edafc41a8e k8s.gcr.io/kube-addon-manager-amd64:v9.0.1],SizeBytes:83076194,},ContainerImage{Names:[k8s.gcr.io/ip-masq-agent-amd64@sha256:269e0fb9d53fd37f7a135d6a55ea265a67279ba218aa148323f015cf70167340 k8s.gcr.io/ip-masq-agent-amd64:v2.3.0],SizeBytes:50144412,},ContainerImage{Names:[k8s.gcr.io/pause-amd64@sha256:59eec8837a4d942cc19a52b8c09ea75121acc38114a2c68b98983ce9356b8610 k8s.gcr.io/pause-amd64:3.1],SizeBytes:742472,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Nov 14 04:54:38.262: INFO: Logging kubelet events for node k8s-master-23171212-vmss000000 Nov 14 04:54:38.322: INFO: Logging pods the kubelet thinks is on node k8s-master-23171212-vmss000000 Nov 14 04:54:38.403: INFO: kube-addon-manager-k8s-master-23171212-vmss000000 started at 2019-11-14 04:39:52 +0000 UTC (0+1 container statuses recorded) Nov 14 04:54:38.403: INFO: Container kube-addon-manager ready: true, restart count 0 Nov 14 04:54:38.403: INFO: kube-apiserver-k8s-master-23171212-vmss000000 started at 2019-11-14 04:39:52 +0000 UTC (0+1 container statuses recorded) Nov 14 04:54:38.403: INFO: Container kube-apiserver ready: true, restart count 0 Nov 14 04:54:38.403: INFO: kube-controller-manager-k8s-master-23171212-vmss000000 started at 2019-11-14 04:39:52 +0000 UTC (0+1 container statuses recorded) Nov 14 04:54:38.403: INFO: Container kube-controller-manager ready: true, restart count 0 Nov 14 04:54:38.403: INFO: azure-ip-masq-agent-q7rgb started at 2019-11-14 04:40:27 +0000 UTC (0+1 container statuses recorded) Nov 14 04:54:38.403: INFO: Container azure-ip-masq-agent ready: true, restart count 0 Nov 14 04:54:38.403: INFO: kube-proxy-cpnbb started at 2019-11-14 04:40:28 +0000 UTC (0+1 container statuses recorded) Nov 14 04:54:38.403: INFO: Container kube-proxy ready: true, restart count 0 Nov 14 04:54:38.403: INFO: kube-scheduler-k8s-master-23171212-vmss000000 started at 2019-11-14 04:39:51 +0000 UTC (0+1 container statuses recorded) Nov 14 04:54:38.403: INFO: Container kube-scheduler ready: true, restart count 0 Nov 14 04:54:38.403: INFO: cloud-controller-manager-k8s-master-23171212-vmss000000 started at 2019-11-14 04:39:51 +0000 UTC (0+1 container statuses recorded) Nov 14 04:54:38.403: INFO: Container cloud-controller-manager ready: true, restart count 0 W1114 
04:54:38.460973 92588 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Nov 14 04:54:38.585: INFO: Latency metrics for node k8s-master-23171212-vmss000000 Nov 14 04:54:38.585: INFO: Logging node info for node k8s-master-23171212-vmss000001 Nov 14 04:54:38.641: INFO: Node Info: &Node{ObjectMeta:{k8s-master-23171212-vmss000001 /api/v1/nodes/k8s-master-23171212-vmss000001 202620f8-2cc3-4eb6-b880-ef6d6d9fbccd 29086 0 2019-11-14 04:40:04 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:Standard_DS2_v2 beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:westus2 failure-domain.beta.kubernetes.io/zone:westus2-2 kubernetes.azure.com/cluster:kubetest-9c63b39e-0695-11ea-a4cc-c60aac250e75 kubernetes.azure.com/role:master kubernetes.io/arch:amd64 kubernetes.io/hostname:k8s-master-23171212-vmss000001 kubernetes.io/os:linux kubernetes.io/role:master node-role.kubernetes.io/master:] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] []},Spec:NodeSpec{PodCIDR:10.244.5.0/24,DoNotUse_ExternalID:,ProviderID:azure:///subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-9c63b39e-0695-11ea-a4cc-c60aac250e75/providers/Microsoft.Compute/virtualMachineScaleSets/k8s-master-23171212-vmss/virtualMachines/1,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:true,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[10.244.5.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{31036776448 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7284887552 0} {<nil>} 7114148Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{27933098757 0} {<nil>} 27933098757 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{6498455552 0} {<nil>} 6346148Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2019-11-14 04:40:48 +0000 UTC,LastTransitionTime:2019-11-14 04:40:48 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2019-11-14 04:53:57 +0000 UTC,LastTransitionTime:2019-11-14 04:39:58 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2019-11-14 04:53:57 +0000 UTC,LastTransitionTime:2019-11-14 04:39:58 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2019-11-14 04:53:57 +0000 UTC,LastTransitionTime:2019-11-14 04:39:58 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2019-11-14 04:53:57 +0000 UTC,LastTransitionTime:2019-11-14 04:40:22 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.240.0.5,},NodeAddress{Type:Hostname,Address:k8s-master-23171212-vmss000001,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:4cafe5635afe4ac8baa078419003bc32,SystemUUID:88981890-9531-334C-9D46-A02D5E4BD18D,BootID:6accdcbe-b0af-4be0-8f82-19833a9a5e2e,KernelVersion:4.15.0-1063-azure,OSImage:Ubuntu 16.04.6 LTS,ContainerRuntimeVersion:docker://3.0.6,KubeletVersion:v1.16.4-beta.0.1+d70a3ca08fe72a,KubeProxyVersion:v1.16.4-beta.0.1+d70a3ca08fe72a,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8sprow.azurecr.io/hyperkube-amd64@sha256:4c04f9ab0fa34bcbcb8ebfbced912f9b998c5d9c090fafdca92911d124fa339b k8sprow.azurecr.io/hyperkube-amd64:azure-e2e-1194831241233305600-197629b6],SizeBytes:604811790,},ContainerImage{Names:[k8sprow.azurecr.io/azure-cloud-controller-manager@sha256:6fcb752760f3412a2cb10bce535ba4dfa8267081345fa1b5cbc7bb5126ce3437 k8sprow.azurecr.io/azure-cloud-controller-manager:1194831241233305600-d3e4a1cf],SizeBytes:92595467,},ContainerImage{Names:[k8s.gcr.io/kube-addon-manager-amd64@sha256:382c220b3531d9f95bf316a16b7282cc2ef929cd8a89a9dd3f5933edafc41a8e k8s.gcr.io/kube-addon-manager-amd64:v9.0.1],SizeBytes:83076194,},ContainerImage{Names:[k8s.gcr.io/ip-masq-agent-amd64@sha256:269e0fb9d53fd37f7a135d6a55ea265a67279ba218aa148323f015cf70167340 k8s.gcr.io/ip-masq-agent-amd64:v2.3.0],SizeBytes:50144412,},ContainerImage{Names:[k8s.gcr.io/pause-amd64@sha256:59eec8837a4d942cc19a52b8c09ea75121acc38114a2c68b98983ce9356b8610 k8s.gcr.io/pause-amd64:3.1],SizeBytes:742472,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Nov 14 04:54:38.641: INFO: Logging kubelet events for node k8s-master-23171212-vmss000001 Nov 14 04:54:38.705: INFO: Logging pods the kubelet thinks is on node k8s-master-23171212-vmss000001 Nov 14 04:54:38.791: INFO: kube-proxy-srv2s started at 2019-11-14 04:40:27 +0000 UTC (0+1 container statuses recorded) Nov 14 04:54:38.791: INFO: Container kube-proxy ready: true, restart count 0 Nov 14 04:54:38.791: INFO: kube-scheduler-k8s-master-23171212-vmss000001 started at 2019-11-14 04:39:52 +0000 UTC (0+1 container statuses recorded) Nov 14 04:54:38.791: INFO: Container kube-scheduler ready: true, restart count 0 Nov 14 04:54:38.791: INFO: cloud-controller-manager-k8s-master-23171212-vmss000001 started at 2019-11-14 04:39:52 +0000 UTC (0+1 container statuses recorded) Nov 14 04:54:38.791: INFO: Container cloud-controller-manager ready: true, restart count 0 Nov 14 04:54:38.791: INFO: kube-addon-manager-k8s-master-23171212-vmss000001 started at 2019-11-14 04:39:52 +0000 UTC (0+1 container statuses recorded) Nov 14 04:54:38.791: INFO: Container kube-addon-manager ready: true, restart count 0 Nov 14 04:54:38.791: INFO: kube-apiserver-k8s-master-23171212-vmss000001 started at 2019-11-14 04:39:52 +0000 UTC (0+1 container statuses recorded) Nov 14 04:54:38.791: INFO: Container kube-apiserver ready: true, restart count 0 Nov 14 04:54:38.791: INFO: kube-controller-manager-k8s-master-23171212-vmss000001 started at 2019-11-14 04:39:52 +0000 UTC (0+1 container statuses recorded) Nov 14 04:54:38.791: INFO: Container kube-controller-manager ready: true, restart count 0 Nov 14 04:54:38.791: INFO: azure-ip-masq-agent-dnl49 started at 2019-11-14 04:40:26 +0000 UTC (0+1 container statuses recorded) Nov 14 04:54:38.791: INFO: Container azure-ip-masq-agent ready: true, restart count 0 W1114 
04:54:38.849667 92588 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Nov 14 04:54:38.973: INFO: Latency metrics for node k8s-master-23171212-vmss000001 Nov 14 04:54:38.973: INFO: Logging node info for node k8s-master-23171212-vmss000002 Nov 14 04:54:39.028: INFO: Node Info: &Node{ObjectMeta:{k8s-master-23171212-vmss000002 /api/v1/nodes/k8s-master-23171212-vmss000002 8eca3a9a-6fd5-4796-82bb-2f37c6fc30b7 29539 0 2019-11-14 04:41:04 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:Standard_DS2_v2 beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:westus2 failure-domain.beta.kubernetes.io/zone:westus2-1 kubernetes.azure.com/cluster:kubetest-9c63b39e-0695-11ea-a4cc-c60aac250e75 kubernetes.azure.com/role:master kubernetes.io/arch:amd64 kubernetes.io/hostname:k8s-master-23171212-vmss000002 kubernetes.io/os:linux kubernetes.io/role:master node-role.kubernetes.io/master:] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] []},Spec:NodeSpec{PodCIDR:10.244.6.0/24,DoNotUse_ExternalID:,ProviderID:azure:///subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-9c63b39e-0695-11ea-a4cc-c60aac250e75/providers/Microsoft.Compute/virtualMachineScaleSets/k8s-master-23171212-vmss/virtualMachines/2,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:true,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[10.244.6.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{31036776448 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7284883456 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{27933098757 0} {<nil>} 27933098757 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{6498451456 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2019-11-14 04:41:18 +0000 UTC,LastTransitionTime:2019-11-14 04:41:18 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2019-11-14 04:54:26 +0000 UTC,LastTransitionTime:2019-11-14 04:40:56 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2019-11-14 04:54:26 +0000 UTC,LastTransitionTime:2019-11-14 04:40:56 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2019-11-14 04:54:26 +0000 UTC,LastTransitionTime:2019-11-14 04:40:56 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2019-11-14 04:54:26 +0000 UTC,LastTransitionTime:2019-11-14 04:41:04 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.240.0.6,},NodeAddress{Type:Hostname,Address:k8s-master-23171212-vmss000002,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:eb5abe50949445b79334d994c94314f8,SystemUUID:E11F8710-4785-DA42-B98E-8E97145F92C7,BootID:8fe9e9b2-2b16-4895-91c7-dc676b577942,KernelVersion:4.15.0-1063-azure,OSImage:Ubuntu 16.04.6 LTS,ContainerRuntimeVersion:docker://3.0.6,KubeletVersion:v1.16.4-beta.0.1+d70a3ca08fe72a,KubeProxyVersion:v1.16.4-beta.0.1+d70a3ca08fe72a,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8sprow.azurecr.io/hyperkube-amd64@sha256:4c04f9ab0fa34bcbcb8ebfbced912f9b998c5d9c090fafdca92911d124fa339b k8sprow.azurecr.io/hyperkube-amd64:azure-e2e-1194831241233305600-197629b6],SizeBytes:604811790,},ContainerImage{Names:[k8sprow.azurecr.io/azure-cloud-controller-manager@sha256:6fcb752760f3412a2cb10bce535ba4dfa8267081345fa1b5cbc7bb5126ce3437 k8sprow.azurecr.io/azure-cloud-controller-manager:1194831241233305600-d3e4a1cf],SizeBytes:92595467,},ContainerImage{Names:[k8s.gcr.io/kube-addon-manager-amd64@sha256:382c220b3531d9f95bf316a16b7282cc2ef929cd8a89a9dd3f5933edafc41a8e k8s.gcr.io/kube-addon-manager-amd64:v9.0.1],SizeBytes:83076194,},ContainerImage{Names:[k8s.gcr.io/ip-masq-agent-amd64@sha256:269e0fb9d53fd37f7a135d6a55ea265a67279ba218aa148323f015cf70167340 k8s.gcr.io/ip-masq-agent-amd64:v2.3.0],SizeBytes:50144412,},ContainerImage{Names:[k8s.gcr.io/pause-amd64@sha256:59eec8837a4d942cc19a52b8c09ea75121acc38114a2c68b98983ce9356b8610 k8s.gcr.io/pause-amd64:3.1],SizeBytes:742472,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Nov 14 04:54:39.029: INFO: Logging kubelet events for node k8s-master-23171212-vmss000002 Nov 14 04:54:39.089: INFO: Logging pods the kubelet thinks is on node k8s-master-23171212-vmss000002 Nov 14 04:54:39.169: INFO: kube-proxy-4vs6q started at 2019-11-14 04:41:06 +0000 UTC (0+1 container statuses recorded) Nov 14 04:54:39.169: INFO: Container kube-proxy ready: true, restart count 0 Nov 14 04:54:39.169: INFO: kube-addon-manager-k8s-master-23171212-vmss000002 started at 2019-11-14 04:40:52 +0000 UTC (0+1 container statuses recorded) Nov 14 04:54:39.169: INFO: Container kube-addon-manager ready: true, restart count 0 Nov 14 04:54:39.169: INFO: kube-apiserver-k8s-master-23171212-vmss000002 started at 2019-11-14 04:40:52 +0000 UTC (0+1 container statuses recorded) Nov 14 04:54:39.169: INFO: Container kube-apiserver ready: true, restart count 0 Nov 14 04:54:39.169: INFO: kube-controller-manager-k8s-master-23171212-vmss000002 started at 2019-11-14 04:40:52 +0000 UTC (0+1 container statuses recorded) Nov 14 04:54:39.169: INFO: Container kube-controller-manager ready: true, restart count 0 Nov 14 04:54:39.169: INFO: kube-scheduler-k8s-master-23171212-vmss000002 started at 2019-11-14 04:40:52 +0000 UTC (0+1 container statuses recorded) Nov 14 04:54:39.169: INFO: Container kube-scheduler ready: true, restart count 0 Nov 14 04:54:39.169: INFO: cloud-controller-manager-k8s-master-23171212-vmss000002 started at 2019-11-14 04:40:53 +0000 UTC (0+1 container statuses recorded) Nov 14 04:54:39.169: INFO: Container cloud-controller-manager ready: true, restart count 0 Nov 14 04:54:39.169: INFO: azure-ip-masq-agent-mw27f started at 2019-11-14 04:41:05 +0000 UTC (0+1 container statuses recorded) Nov 14 04:54:39.169: INFO: Container azure-ip-masq-agent ready: true, restart count 0 W1114 
04:54:39.226661 92588 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Nov 14 04:54:39.352: INFO: Latency metrics for node k8s-master-23171212-vmss000002 Nov 14 04:54:39.352: INFO: Logging node info for node k8s-master-23171212-vmss000003 Nov 14 04:54:39.407: INFO: Node Info: &Node{ObjectMeta:{k8s-master-23171212-vmss000003 /api/v1/nodes/k8s-master-23171212-vmss000003 b1a400e7-f6ff-4241-9175-cd8bd70dd11a 29068 0 2019-11-14 04:40:03 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:Standard_DS2_v2 beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:westus2 failure-domain.beta.kubernetes.io/zone:westus2-2 kubernetes.azure.com/cluster:kubetest-9c63b39e-0695-11ea-a4cc-c60aac250e75 kubernetes.azure.com/role:master kubernetes.io/arch:amd64 kubernetes.io/hostname:k8s-master-23171212-vmss000003 kubernetes.io/os:linux kubernetes.io/role:master node-role.kubernetes.io/master:] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] []},Spec:NodeSpec{PodCIDR:10.244.3.0/24,DoNotUse_ExternalID:,ProviderID:azure:///subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-9c63b39e-0695-11ea-a4cc-c60aac250e75/providers/Microsoft.Compute/virtualMachineScaleSets/k8s-master-23171212-vmss/virtualMachines/3,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:true,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[10.244.3.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{31036776448 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7284887552 0} {<nil>} 7114148Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{27933098757 0} {<nil>} 27933098757 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{6498455552 0} {<nil>} 6346148Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2019-11-14 04:40:48 +0000 UTC,LastTransitionTime:2019-11-14 04:40:48 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2019-11-14 04:53:55 +0000 UTC,LastTransitionTime:2019-11-14 04:39:59 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2019-11-14 04:53:55 +0000 UTC,LastTransitionTime:2019-11-14 04:39:59 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2019-11-14 04:53:55 +0000 UTC,LastTransitionTime:2019-11-14 04:39:59 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2019-11-14 04:53:55 +0000 UTC,LastTransitionTime:2019-11-14 04:40:22 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.240.0.7,},NodeAddress{Type:Hostname,Address:k8s-master-23171212-vmss000003,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:effe7f682034467995d1db3ee85a4a38,SystemUUID:2073A143-352C-D241-B189-4A1DCC64C62C,BootID:6c95e89b-c056-494f-b817-6494fc9fd635,KernelVersion:4.15.0-1063-azure,OSImage:Ubuntu 16.04.6 LTS,ContainerRuntimeVersion:docker://3.0.6,KubeletVersion:v1.16.4-beta.0.1+d70a3ca08fe72a,KubeProxyVersion:v1.16.4-beta.0.1+d70a3ca08fe72a,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8sprow.azurecr.io/hyperkube-amd64@sha256:4c04f9ab0fa34bcbcb8ebfbced912f9b998c5d9c090fafdca92911d124fa339b k8sprow.azurecr.io/hyperkube-amd64:azure-e2e-1194831241233305600-197629b6],SizeBytes:604811790,},ContainerImage{Names:[k8sprow.azurecr.io/azure-cloud-controller-manager@sha256:6fcb752760f3412a2cb10bce535ba4dfa8267081345fa1b5cbc7bb5126ce3437 k8sprow.azurecr.io/azure-cloud-controller-manager:1194831241233305600-d3e4a1cf],SizeBytes:92595467,},ContainerImage{Names:[k8s.gcr.io/kube-addon-manager-amd64@sha256:382c220b3531d9f95bf316a16b7282cc2ef929cd8a89a9dd3f5933edafc41a8e k8s.gcr.io/kube-addon-manager-amd64:v9.0.1],SizeBytes:83076194,},ContainerImage{Names:[k8s.gcr.io/ip-masq-agent-amd64@sha256:269e0fb9d53fd37f7a135d6a55ea265a67279ba218aa148323f015cf70167340 k8s.gcr.io/ip-masq-agent-amd64:v2.3.0],SizeBytes:50144412,},ContainerImage{Names:[k8s.gcr.io/pause-amd64@sha256:59eec8837a4d942cc19a52b8c09ea75121acc38114a2c68b98983ce9356b8610 k8s.gcr.io/pause-amd64:3.1],SizeBytes:742472,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Nov 14 04:54:39.407: INFO: Logging kubelet events for node k8s-master-23171212-vmss000003 Nov 14 04:54:39.468: INFO: Logging pods the kubelet thinks is on node k8s-master-23171212-vmss000003 Nov 14 04:54:39.549: INFO: kube-scheduler-k8s-master-23171212-vmss000003 started at 2019-11-14 04:39:52 +0000 UTC (0+1 container statuses recorded) Nov 14 04:54:39.549: INFO: Container kube-scheduler ready: true, restart count 0 Nov 14 04:54:39.549: INFO: azure-ip-masq-agent-4s5bk started at 2019-11-14 04:40:27 +0000 UTC (0+1 container statuses recorded) Nov 14 04:54:39.549: INFO: Container azure-ip-masq-agent ready: true, restart count 0 Nov 14 04:54:39.549: INFO: kube-proxy-hrqtx started at 2019-11-14 04:40:27 +0000 UTC (0+1 container statuses recorded) Nov 14 04:54:39.549: INFO: Container kube-proxy ready: true, restart count 0 Nov 14 04:54:39.549: INFO: cloud-controller-manager-k8s-master-23171212-vmss000003 started at 2019-11-14 04:39:52 +0000 UTC (0+1 container statuses recorded) Nov 14 04:54:39.549: INFO: Container cloud-controller-manager ready: true, restart count 0 Nov 14 04:54:39.549: INFO: kube-addon-manager-k8s-master-23171212-vmss000003 started at 2019-11-14 04:39:52 +0000 UTC (0+1 container statuses recorded) Nov 14 04:54:39.549: INFO: Container kube-addon-manager ready: true, restart count 0 Nov 14 04:54:39.549: INFO: kube-apiserver-k8s-master-23171212-vmss000003 started at 2019-11-14 04:39:52 +0000 UTC (0+1 container statuses recorded) Nov 14 04:54:39.549: INFO: Container kube-apiserver ready: true, restart count 0 Nov 14 04:54:39.549: INFO: kube-controller-manager-k8s-master-23171212-vmss000003 started at 2019-11-14 04:39:52 +0000 UTC (0+1 container statuses recorded) Nov 14 04:54:39.549: INFO: Container kube-controller-manager ready: true, restart count 0 W1114 
04:54:39.607205 92588 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Nov 14 04:54:39.737: INFO: Latency metrics for node k8s-master-23171212-vmss000003 Nov 14 04:54:39.737: INFO: Logging node info for node k8s-master-23171212-vmss000004 Nov 14 04:54:39.794: INFO: Node Info: &Node{ObjectMeta:{k8s-master-23171212-vmss000004 /api/v1/nodes/k8s-master-23171212-vmss000004 25a9993c-54fa-45cc-9da7-66c66cafa30f 29165 0 2019-11-14 04:40:08 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:Standard_DS2_v2 beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:westus2 failure-domain.beta.kubernetes.io/zone:westus2-1 kubernetes.azure.com/cluster:kubetest-9c63b39e-0695-11ea-a4cc-c60aac250e75 kubernetes.azure.com/role:master kubernetes.io/arch:amd64 kubernetes.io/hostname:k8s-master-23171212-vmss000004 kubernetes.io/os:linux kubernetes.io/role:master node-role.kubernetes.io/master:] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] []},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUse_ExternalID:,ProviderID:azure:///subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-9c63b39e-0695-11ea-a4cc-c60aac250e75/providers/Microsoft.Compute/virtualMachineScaleSets/k8s-master-23171212-vmss/virtualMachines/4,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:true,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{31036776448 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7284887552 0} {<nil>} 7114148Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{27933098757 0} {<nil>} 27933098757 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{6498455552 0} {<nil>} 6346148Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2019-11-14 04:40:48 +0000 UTC,LastTransitionTime:2019-11-14 04:40:48 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2019-11-14 04:54:01 +0000 UTC,LastTransitionTime:2019-11-14 04:40:05 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2019-11-14 04:54:01 +0000 UTC,LastTransitionTime:2019-11-14 04:40:05 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2019-11-14 04:54:01 +0000 UTC,LastTransitionTime:2019-11-14 04:40:05 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2019-11-14 04:54:01 +0000 UTC,LastTransitionTime:2019-11-14 04:40:22 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.240.0.8,},NodeAddress{Type:Hostname,Address:k8s-master-23171212-vmss000004,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:ab6b205a70ea45b1b28b801e68a4ba84,SystemUUID:65406178-5013-644C-AD46-D7BC6F0DD7BF,BootID:e6b05928-9970-49a5-bd51-149982b32750,KernelVersion:4.15.0-1063-azure,OSImage:Ubuntu 16.04.6 LTS,ContainerRuntimeVersion:docker://3.0.6,KubeletVersion:v1.16.4-beta.0.1+d70a3ca08fe72a,KubeProxyVersion:v1.16.4-beta.0.1+d70a3ca08fe72a,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8sprow.azurecr.io/hyperkube-amd64@sha256:4c04f9ab0fa34bcbcb8ebfbced912f9b998c5d9c090fafdca92911d124fa339b k8sprow.azurecr.io/hyperkube-amd64:azure-e2e-1194831241233305600-197629b6],SizeBytes:604811790,},ContainerImage{Names:[k8sprow.azurecr.io/azure-cloud-controller-manager@sha256:6fcb752760f3412a2cb10bce535ba4dfa8267081345fa1b5cbc7bb5126ce3437 k8sprow.azurecr.io/azure-cloud-controller-manager:1194831241233305600-d3e4a1cf],SizeBytes:92595467,},ContainerImage{Names:[k8s.gcr.io/kube-addon-manager-amd64@sha256:382c220b3531d9f95bf316a16b7282cc2ef929cd8a89a9dd3f5933edafc41a8e k8s.gcr.io/kube-addon-manager-amd64:v9.0.1],SizeBytes:83076194,},ContainerImage{Names:[k8s.gcr.io/ip-masq-agent-amd64@sha256:269e0fb9d53fd37f7a135d6a55ea265a67279ba218aa148323f015cf70167340 k8s.gcr.io/ip-masq-agent-amd64:v2.3.0],SizeBytes:50144412,},ContainerImage{Names:[k8s.gcr.io/pause-amd64@sha256:59eec8837a4d942cc19a52b8c09ea75121acc38114a2c68b98983ce9356b8610 k8s.gcr.io/pause-amd64:3.1],SizeBytes:742472,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Nov 14 04:54:39.794: INFO: Logging kubelet events for node k8s-master-23171212-vmss000004 Nov 14 04:54:39.854: INFO: Logging pods the kubelet thinks is on node k8s-master-23171212-vmss000004 Nov 14 04:54:39.932: INFO: kube-proxy-47vmd started at 2019-11-14 04:40:27 +0000 UTC (0+1 container statuses recorded) Nov 14 04:54:39.932: INFO: Container kube-proxy ready: true, restart count 0 Nov 14 04:54:39.932: INFO: kube-scheduler-k8s-master-23171212-vmss000004 started at 2019-11-14 04:39:52 +0000 UTC (0+1 container statuses recorded) Nov 14 04:54:39.932: INFO: Container kube-scheduler ready: true, restart count 0 Nov 14 04:54:39.932: INFO: cloud-controller-manager-k8s-master-23171212-vmss000004 started at 2019-11-14 04:39:52 +0000 UTC (0+1 container statuses recorded) Nov 14 04:54:39.932: INFO: Container cloud-controller-manager ready: true, restart count 0 Nov 14 04:54:39.932: INFO: kube-addon-manager-k8s-master-23171212-vmss000004 started at 2019-11-14 04:39:52 +0000 UTC (0+1 container statuses recorded) Nov 14 04:54:39.932: INFO: Container kube-addon-manager ready: true, restart count 0 Nov 14 04:54:39.932: INFO: kube-apiserver-k8s-master-23171212-vmss000004 started at 2019-11-14 04:39:52 +0000 UTC (0+1 container statuses recorded) Nov 14 04:54:39.932: INFO: Container kube-apiserver ready: true, restart count 0 Nov 14 04:54:39.932: INFO: kube-controller-manager-k8s-master-23171212-vmss000004 started at 2019-11-14 04:39:52 +0000 UTC (0+1 container statuses recorded) Nov 14 04:54:39.932: INFO: Container kube-controller-manager ready: true, restart count 0 Nov 14 04:54:39.932: INFO: azure-ip-masq-agent-47pzk started at 2019-11-14 04:40:26 +0000 UTC (0+1 container statuses recorded) Nov 14 04:54:39.932: INFO: Container azure-ip-masq-agent ready: true, restart count 0 W1114 
04:54:39.990672 92588 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Nov 14 04:54:40.120: INFO: Latency metrics for node k8s-master-23171212-vmss000004 Nov 14 04:54:40.120: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-preemption-path-2196" for this suite. Nov 14 04:56:10.394: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Nov 14 04:56:12.298: INFO: namespace sched-preemption-path-2196 deletion completed in 1m32.118272821s [AfterEach] [sig-scheduling] PreemptionExecutionPath test/e2e/scheduling/preemption.go:274 Nov 14 04:56:12.354: INFO: List existing priorities: Nov 14 04:56:12.354: INFO: p1/1 created at 2019-11-14 04:53:36 +0000 UTC Nov 14 04:56:12.354: INFO: p2/2 created at 2019-11-14 04:53:36 +0000 UTC Nov 14 04:56:12.354: INFO: p3/3 created at 2019-11-14 04:53:36 +0000 UTC Nov 14 04:56:12.354: INFO: p4/4 created at 2019-11-14 04:53:36 +0000 UTC Nov 14 04:56:12.354: INFO: system-cluster-critical/2000000000 created at 2019-11-14 04:40:04 +0000 UTC Nov 14 04:56:12.354: INFO: system-node-critical/2000001000 created at 2019-11-14 04:40:04 +0000 UTC
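For context on the priority listing just above: the p1/1 through p4/4 entries read as name/value and are PriorityClass objects the PreemptionExecutionPath test creates for itself; the two system-* classes ship with the cluster. A minimal client-go sketch of creating one such class, written against the 1.16-era API this job vendors (the kubeconfig path is illustrative, not the one from the log):

```go
package main

import (
	"fmt"

	schedulingv1 "k8s.io/api/scheduling/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Hypothetical kubeconfig path; substitute your own.
	config, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(config)

	// "p1/1" in the log reads as name/value.
	pc := &schedulingv1.PriorityClass{
		ObjectMeta:    metav1.ObjectMeta{Name: "p1"},
		Value:         1,
		GlobalDefault: false,
	}
	// Create() takes no context in release-1.16 client-go; newer releases
	// use Create(ctx, pc, metav1.CreateOptions{}).
	created, err := cs.SchedulingV1().PriorityClasses().Create(pc)
	if err != nil {
		panic(err)
	}
	fmt.Printf("created PriorityClass %s with value %d\n", created.Name, created.Value)
}
```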
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-scheduling\]\sPreemptionExecutionPath\sruns\sReplicaSets\sto\sverify\spreemption\srunning\spath$'
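Before the failure output below: this test stresses the preemption path against a fake extended resource, example.com/fakecpu, which it patches onto a node's capacity; the repeated "Insufficient example.com/fakecpu" FailedScheduling events further down are that mechanism at work. A hedged sketch of what a pod spec requesting such a resource looks like in Go (the name, image comment, and quantity are illustrative, not taken from the test source):

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// fakeCPUPod builds a pod that asks for the example.com/fakecpu extended
// resource. Extended resources must have requests == limits, so setting
// Limits alone suffices (the request defaults to the limit).
func fakeCPUPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "fakecpu-demo"}, // illustrative name
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "pod1",
				Image: "k8s.gcr.io/pause:3.1", // same image the rs-pod1 pods pull below
				Resources: corev1.ResourceRequirements{
					Limits: corev1.ResourceList{
						corev1.ResourceName("example.com/fakecpu"): resource.MustParse("200"),
					},
				},
			}},
		},
	}
}

func main() {
	p := fakeCPUPod()
	fmt.Printf("pod %s limits: %v\n", p.Name, p.Spec.Containers[0].Resources.Limits)
}
```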
test/e2e/scheduling/preemption.go:345 Nov 14 05:00:12.095: Unexpected error: <*errors.errorString | 0xc0018fa460>: { s: "replicaset \"rs-pod1\" never had desired number of .status.availableReplicas", } replicaset "rs-pod1" never had desired number of .status.availableReplicas occurred test/e2e/scheduling/preemption.go:510 from junit_02.xml
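The error itself means the ReplicaSet's .status.availableReplicas never caught up to the desired replica count before the framework's timeout. Loosely, the check that timed out looks like the poll below; this is an approximation of the e2e framework's ReplicaSet readiness helper, not its exact implementation, and the interval, timeout, and function name are made up:

```go
package main

import (
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitForAvailableReplicas polls until .status.availableReplicas matches
// .spec.replicas, or the timeout expires. Illustrative approximation only.
func waitForAvailableReplicas(cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
	return wait.PollImmediate(2*time.Second, timeout, func() (bool, error) {
		// Get() takes no context in release-1.16 client-go.
		rs, err := cs.AppsV1().ReplicaSets(ns).Get(name, metav1.GetOptions{})
		if err != nil {
			return false, err // this sketch aborts on any Get error; the real helper is more forgiving
		}
		desired := int32(1)
		if rs.Spec.Replicas != nil {
			desired = *rs.Spec.Replicas
		}
		return rs.Status.AvailableReplicas == desired, nil
	})
}

func main() {
	fmt.Println("wire up a clientset as in the earlier PriorityClass sketch, then call waitForAvailableReplicas")
}
```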
[BeforeEach] [sig-scheduling] PreemptionExecutionPath test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Nov 14 04:56:12.692: INFO: >>> kubeConfig: /workspace/aks287781815/kubeconfig/kubeconfig.westus2.json STEP: Building a namespace api object, basename sched-preemption-path STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in sched-preemption-path-8399 STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] PreemptionExecutionPath test/e2e/scheduling/preemption.go:302 STEP: Finding an available node STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. Nov 14 04:59:11.471: INFO: found a healthy node: k8s-agentpool-23171212-vmss000001 [It] runs ReplicaSets to verify preemption running path test/e2e/scheduling/preemption.go:345 Nov 14 05:00:12.095: FAIL: Unexpected error: <*errors.errorString | 0xc0018fa460>: { s: "replicaset \"rs-pod1\" never had desired number of .status.availableReplicas", } replicaset "rs-pod1" never had desired number of .status.availableReplicas occurred [AfterEach] [sig-scheduling] PreemptionExecutionPath test/e2e/framework/framework.go:152 STEP: Collecting events from namespace "sched-preemption-path-8399". STEP: Found 32 events. Nov 14 05:00:12.204: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for rs-pod1-67w6l: {default-scheduler } FailedScheduling: 0/7 nodes are available: 6 node(s) didn't match node selector, 7 Insufficient example.com/fakecpu. Nov 14 05:00:12.204: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for rs-pod1-67w6l: {default-scheduler } Scheduled: Successfully assigned sched-preemption-path-8399/rs-pod1-67w6l to k8s-agentpool-23171212-vmss000001 Nov 14 05:00:12.204: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for rs-pod1-67w6l: {default-scheduler } FailedScheduling: 0/7 nodes are available: 6 node(s) didn't match node selector, 7 Insufficient example.com/fakecpu. Nov 14 05:00:12.204: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for rs-pod1-7zlhd: {default-scheduler } FailedScheduling: 0/7 nodes are available: 6 node(s) didn't match node selector, 7 Insufficient example.com/fakecpu. Nov 14 05:00:12.204: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for rs-pod1-7zlhd: {default-scheduler } FailedScheduling: 0/7 nodes are available: 6 node(s) didn't match node selector, 7 Insufficient example.com/fakecpu. Nov 14 05:00:12.204: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for rs-pod1-7zlhd: {default-scheduler } Scheduled: Successfully assigned sched-preemption-path-8399/rs-pod1-7zlhd to k8s-agentpool-23171212-vmss000001 Nov 14 05:00:12.204: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for rs-pod1-nkr8s: {default-scheduler } FailedScheduling: 0/7 nodes are available: 6 node(s) didn't match node selector, 7 Insufficient example.com/fakecpu. Nov 14 05:00:12.204: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for rs-pod1-nkr8s: {default-scheduler } Scheduled: Successfully assigned sched-preemption-path-8399/rs-pod1-nkr8s to k8s-agentpool-23171212-vmss000001 Nov 14 05:00:12.204: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for rs-pod1-wv2wp: {default-scheduler } FailedScheduling: 0/7 nodes are available: 6 node(s) didn't match node selector, 7 Insufficient example.com/fakecpu.
Nov 14 05:00:12.204: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for rs-pod1-wv2wp: {default-scheduler } Scheduled: Successfully assigned sched-preemption-path-8399/rs-pod1-wv2wp to k8s-agentpool-23171212-vmss000001 Nov 14 05:00:12.204: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for rs-pod1-wv2wp: {default-scheduler } FailedScheduling: 0/7 nodes are available: 6 node(s) didn't match node selector, 7 Insufficient example.com/fakecpu. Nov 14 05:00:12.204: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for rs-pod1-xg7b6: {default-scheduler } FailedScheduling: 0/7 nodes are available: 6 node(s) didn't match node selector, 7 Insufficient example.com/fakecpu. Nov 14 05:00:12.204: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for rs-pod1-xg7b6: {default-scheduler } Scheduled: Successfully assigned sched-preemption-path-8399/rs-pod1-xg7b6 to k8s-agentpool-23171212-vmss000001 Nov 14 05:00:12.204: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for rs-pod1-xg7b6: {default-scheduler } FailedScheduling: 0/7 nodes are available: 6 node(s) didn't match node selector, 7 Insufficient example.com/fakecpu. Nov 14 05:00:12.204: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for without-label: {default-scheduler } Scheduled: Successfully assigned sched-preemption-path-8399/without-label to k8s-agentpool-23171212-vmss000001 Nov 14 05:00:12.204: INFO: At 2019-11-14 04:57:10 +0000 UTC - event for without-label: {kubelet k8s-agentpool-23171212-vmss000001} Pulling: Pulling image "k8s.gcr.io/pause:3.1" Nov 14 05:00:12.204: INFO: At 2019-11-14 04:58:27 +0000 UTC - event for without-label: {kubelet k8s-agentpool-23171212-vmss000001} Pulled: Successfully pulled image "k8s.gcr.io/pause:3.1" Nov 14 05:00:12.204: INFO: At 2019-11-14 04:58:32 +0000 UTC - event for without-label: {kubelet k8s-agentpool-23171212-vmss000001} Created: Created container without-label Nov 14 05:00:12.204: INFO: At 2019-11-14 04:58:48 +0000 UTC - event for without-label: {kubelet k8s-agentpool-23171212-vmss000001} Started: Started container without-label Nov 14 05:00:12.204: INFO: At 2019-11-14 04:59:11 +0000 UTC - event for rs-pod1: {replicaset-controller } SuccessfulCreate: Created pod: rs-pod1-wv2wp Nov 14 05:00:12.204: INFO: At 2019-11-14 04:59:11 +0000 UTC - event for rs-pod1: {replicaset-controller } SuccessfulCreate: Created pod: rs-pod1-xg7b6 Nov 14 05:00:12.204: INFO: At 2019-11-14 04:59:11 +0000 UTC - event for rs-pod1: {replicaset-controller } SuccessfulCreate: Created pod: rs-pod1-67w6l Nov 14 05:00:12.204: INFO: At 2019-11-14 04:59:12 +0000 UTC - event for rs-pod1: {replicaset-controller } SuccessfulCreate: Created pod: rs-pod1-nkr8s Nov 14 05:00:12.204: INFO: At 2019-11-14 04:59:12 +0000 UTC - event for rs-pod1: {replicaset-controller } SuccessfulCreate: Created pod: rs-pod1-7zlhd Nov 14 05:00:12.204: INFO: At 2019-11-14 04:59:13 +0000 UTC - event for without-label: {kubelet k8s-agentpool-23171212-vmss000001} Killing: Stopping container without-label Nov 14 05:00:12.204: INFO: At 2019-11-14 04:59:57 +0000 UTC - event for rs-pod1-nkr8s: {kubelet k8s-agentpool-23171212-vmss000001} Pulling: Pulling image "k8s.gcr.io/pause:3.1" Nov 14 05:00:12.204: INFO: At 2019-11-14 04:59:59 +0000 UTC - event for rs-pod1-nkr8s: {kubelet k8s-agentpool-23171212-vmss000001} Pulled: Successfully pulled image "k8s.gcr.io/pause:3.1" Nov 14 05:00:12.204: INFO: At 2019-11-14 05:00:03 +0000 UTC - event for rs-pod1-7zlhd: {kubelet k8s-agentpool-23171212-vmss000001} Pulled: Successfully pulled image "k8s.gcr.io/pause:3.1" Nov 14 05:00:12.204: INFO: At 
2019-11-14 05:00:03 +0000 UTC - event for rs-pod1-7zlhd: {kubelet k8s-agentpool-23171212-vmss000001} Pulling: Pulling image "k8s.gcr.io/pause:3.1" Nov 14 05:00:12.204: INFO: At 2019-11-14 05:00:04 +0000 UTC - event for rs-pod1-nkr8s: {kubelet k8s-agentpool-23171212-vmss000001} Created: Created container pod1 Nov 14 05:00:12.204: INFO: At 2019-11-14 05:00:06 +0000 UTC - event for rs-pod1-wv2wp: {kubelet k8s-agentpool-23171212-vmss000001} Pulling: Pulling image "k8s.gcr.io/pause:3.1" Nov 14 05:00:12.204: INFO: At 2019-11-14 05:00:06 +0000 UTC - event for rs-pod1-xg7b6: {kubelet k8s-agentpool-23171212-vmss000001} Pulling: Pulling image "k8s.gcr.io/pause:3.1" Nov 14 05:00:12.261: INFO: POD NODE PHASE GRACE CONDITIONS Nov 14 05:00:12.261: INFO: rs-pod1-67w6l k8s-agentpool-23171212-vmss000001 Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-11-14 04:59:22 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-11-14 04:59:22 +0000 UTC ContainersNotReady containers with unready status: [pod1]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-11-14 04:59:22 +0000 UTC ContainersNotReady containers with unready status: [pod1]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-11-14 04:59:16 +0000 UTC }] Nov 14 05:00:12.261: INFO: rs-pod1-7zlhd k8s-agentpool-23171212-vmss000001 Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-11-14 04:59:18 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-11-14 04:59:18 +0000 UTC ContainersNotReady containers with unready status: [pod1]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-11-14 04:59:18 +0000 UTC ContainersNotReady containers with unready status: [pod1]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-11-14 04:59:14 +0000 UTC }] Nov 14 05:00:12.261: INFO: rs-pod1-nkr8s k8s-agentpool-23171212-vmss000001 Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-11-14 04:59:15 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-11-14 04:59:15 +0000 UTC ContainersNotReady containers with unready status: [pod1]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-11-14 04:59:15 +0000 UTC ContainersNotReady containers with unready status: [pod1]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-11-14 04:59:14 +0000 UTC }] Nov 14 05:00:12.261: INFO: rs-pod1-wv2wp k8s-agentpool-23171212-vmss000001 Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-11-14 04:59:21 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-11-14 04:59:21 +0000 UTC ContainersNotReady containers with unready status: [pod1]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-11-14 04:59:21 +0000 UTC ContainersNotReady containers with unready status: [pod1]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-11-14 04:59:16 +0000 UTC }] Nov 14 05:00:12.261: INFO: rs-pod1-xg7b6 k8s-agentpool-23171212-vmss000001 Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-11-14 04:59:17 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-11-14 04:59:17 +0000 UTC ContainersNotReady containers with unready status: [pod1]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-11-14 04:59:17 +0000 UTC ContainersNotReady containers with unready status: [pod1]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-11-14 04:59:14 +0000 UTC }] Nov 14 05:00:12.261: INFO: Nov 14 05:00:12.428: INFO: Logging node info for node k8s-agentpool-23171212-vmss000000 Nov 14 05:00:12.484: INFO: Node Info: &Node{ObjectMeta:{k8s-agentpool-23171212-vmss000000 
/api/v1/nodes/k8s-agentpool-23171212-vmss000000 0f3bbebc-9d46-4ddd-a1dc-c93db8b52883 35066 0 2019-11-14 04:40:04 +0000 UTC <nil> <nil> map[agentpool:agentpool beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:Standard_D4s_v3 beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:westus2 failure-domain.beta.kubernetes.io/zone:westus2-1 kubernetes.azure.com/cluster:kubetest-9c63b39e-0695-11ea-a4cc-c60aac250e75 kubernetes.azure.com/role:agent kubernetes.io/arch:amd64 kubernetes.io/hostname:k8s-agentpool-23171212-vmss000000 kubernetes.io/os:linux kubernetes.io/role:agent node-role.kubernetes.io/agent: storageprofile:managed storagetier:Premium_LRS] map[csi.volume.kubernetes.io/nodeid:{"csi-hostpath-provisioning-2202":"k8s-agentpool-23171212-vmss000000","csi-hostpath-provisioning-5393":"k8s-agentpool-23171212-vmss000000","csi-hostpath-provisioning-6454":"k8s-agentpool-23171212-vmss000000","csi-hostpath-provisioning-6474":"k8s-agentpool-23171212-vmss000000","csi-hostpath-provisioning-8364":"k8s-agentpool-23171212-vmss000000","csi-hostpath-volume-8403":"k8s-agentpool-23171212-vmss000000","csi-hostpath-volume-expand-1206":"k8s-agentpool-23171212-vmss000000","csi-hostpath-volume-expand-2585":"k8s-agentpool-23171212-vmss000000","csi-hostpath-volume-expand-5498":"k8s-agentpool-23171212-vmss000000","csi-hostpath-volume-expand-6633":"k8s-agentpool-23171212-vmss000000","csi-mock-csi-mock-volumes-4558":"csi-mock-csi-mock-volumes-4558","csi-mock-csi-mock-volumes-6397":"csi-mock-csi-mock-volumes-6397","csi-mock-csi-mock-volumes-7486":"csi-mock-csi-mock-volumes-7486","csi-mock-csi-mock-volumes-7581":"csi-mock-csi-mock-volumes-7581","csi-mock-csi-mock-volumes-8512":"csi-mock-csi-mock-volumes-8512","csi-mock-csi-mock-volumes-9601":"csi-mock-csi-mock-volumes-9601"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] []},Spec:NodeSpec{PodCIDR:10.244.4.0/24,DoNotUse_ExternalID:,ProviderID:azure:///subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-9c63b39e-0695-11ea-a4cc-c60aac250e75/providers/Microsoft.Compute/virtualMachineScaleSets/k8s-agentpool-23171212-vmss/virtualMachines/0,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.4.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{4 0} {<nil>} 4 DecimalSI},ephemeral-storage: {{31036776448 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{16797569024 0} {<nil>} 16403876Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{4 0} {<nil>} 4 DecimalSI},ephemeral-storage: {{27933098757 0} {<nil>} 27933098757 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{16011137024 0} {<nil>} 15635876Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2019-11-14 04:40:48 +0000 UTC,LastTransitionTime:2019-11-14 04:40:48 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2019-11-14 04:59:39 +0000 UTC,LastTransitionTime:2019-11-14 04:39:50 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2019-11-14 04:59:39 +0000 UTC,LastTransitionTime:2019-11-14 04:39:50 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no 
disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2019-11-14 04:59:39 +0000 UTC,LastTransitionTime:2019-11-14 04:39:50 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2019-11-14 04:59:39 +0000 UTC,LastTransitionTime:2019-11-14 04:40:22 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.248.0.4,},NodeAddress{Type:Hostname,Address:k8s-agentpool-23171212-vmss000000,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:359d6aea81114a07a8070169aad06c4a,SystemUUID:A77EC1C1-102D-514B-A3FC-E5E916EF17BD,BootID:fc99ebb5-9bcd-41e5-aad2-849e47da2eea,KernelVersion:4.15.0-1063-azure,OSImage:Ubuntu 16.04.6 LTS,ContainerRuntimeVersion:docker://3.0.6,KubeletVersion:v1.16.4-beta.0.1+d70a3ca08fe72a,KubeProxyVersion:v1.16.4-beta.0.1+d70a3ca08fe72a,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8sprow.azurecr.io/hyperkube-amd64@sha256:4c04f9ab0fa34bcbcb8ebfbced912f9b998c5d9c090fafdca92911d124fa339b k8sprow.azurecr.io/hyperkube-amd64:azure-e2e-1194831241233305600-197629b6],SizeBytes:604811790,},ContainerImage{Names:[gcr.io/google-samples/gb-frontend@sha256:35cb427341429fac3df10ff74600ea73e8ec0754d78f9ce89e0b4f3d70d53ba6 gcr.io/google-samples/gb-frontend:v6],SizeBytes:373099368,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/volume/nfs@sha256:c2ad734346f608a5f7d69cfded93c4e8094069320657bd372d12ba21dea3ea71 gcr.io/kubernetes-e2e-test-images/volume/nfs:1.0],SizeBytes:225358913,},ContainerImage{Names:[httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a httpd:2.4.39-alpine],SizeBytes:126894770,},ContainerImage{Names:[httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 httpd:2.4.38-alpine],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/kubernetes-dashboard-amd64@sha256:0ae6b69432e78069c5ce2bcde0fe409c5c4d6f0f4d9cd50a17974fea38898747 k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.1],SizeBytes:121711221,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/agnhost@sha256:4057a5580c7b59c4fe10d8ab2732c9dec35eea80fd41f7bafc7bd5acc7edf727 gcr.io/kubernetes-e2e-test-images/agnhost:2.6],SizeBytes:57345321,},ContainerImage{Names:[quay.io/k8scsi/csi-provisioner@sha256:0efcb424f1dde9b9fb11a1a14f2e48ab47e1c3f08bc3a929990dcfcb1f7ab34f quay.io/k8scsi/csi-provisioner:v1.4.0-rc1],SizeBytes:54431016,},ContainerImage{Names:[quay.io/k8scsi/csi-snapshotter@sha256:e3d3e742e32d00488fdb401045b9b1d033d7ca0ab6e760f77b24750fc95e5f70 quay.io/k8scsi/csi-snapshotter:v2.0.0-rc1],SizeBytes:51703561,},ContainerImage{Names:[k8s.gcr.io/ip-masq-agent-amd64@sha256:269e0fb9d53fd37f7a135d6a55ea265a67279ba218aa148323f015cf70167340 k8s.gcr.io/ip-masq-agent-amd64:v2.3.0],SizeBytes:50144412,},ContainerImage{Names:[quay.io/k8scsi/csi-attacher@sha256:26fccd7a99d973845df1193b46ebdcc6ab8dc5f6e6be319750c471fce1742d13 quay.io/k8scsi/csi-attacher:v1.2.0],SizeBytes:46226754,},ContainerImage{Names:[quay.io/k8scsi/csi-attacher@sha256:0aba670b4d9d6b2e720bbf575d733156c676b693ca26501235444490300db838 quay.io/k8scsi/csi-attacher:v1.1.0],SizeBytes:42839085,},ContainerImage{Names:[quay.io/k8scsi/csi-resizer@sha256:7d46fb6eb8b890dc546029d1565d502b4a1d974d33625c6ee2bc7991b77fc1a1 
quay.io/k8scsi/csi-resizer:v0.2.0],SizeBytes:42817100,},ContainerImage{Names:[quay.io/k8scsi/csi-resizer@sha256:f315c9042e56def3c05c6b04fe79ec9da6d39ddc557ca365a76cf35964ea08b6 quay.io/k8scsi/csi-resizer:v0.1.0],SizeBytes:42623056,},ContainerImage{Names:[k8s.gcr.io/coredns@sha256:e83beb5e43f8513fa735e77ffc5859640baea30a882a11cc75c4c3244a737d3c k8s.gcr.io/coredns:1.5.0],SizeBytes:42488424,},ContainerImage{Names:[redis@sha256:50899ea1ceed33fa03232f3ac57578a424faa1742c1ac9c7a7bdb95cdf19b858 redis:5.0.5-alpine],SizeBytes:29331594,},ContainerImage{Names:[quay.io/k8scsi/hostpathplugin@sha256:b4826e492fc1762fceaf9726f41575ca0a4567864d3d235da874818de18039de quay.io/k8scsi/hostpathplugin:v1.2.0-rc5],SizeBytes:28761497,},ContainerImage{Names:[mcr.microsoft.com/k8s/flexvolume/keyvault-flexvolume@sha256:4fd30d43947d4a54fc89ead7985beecfd3c9b2a93a0655a373b1608ab90bd5af mcr.microsoft.com/k8s/flexvolume/keyvault-flexvolume:v0.0.7],SizeBytes:22909487,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/echoserver@sha256:e9ba514b896cdf559eef8788b66c2c3ee55f3572df617647b4b0d8b6bf81cf19 gcr.io/kubernetes-e2e-test-images/echoserver:2.2],SizeBytes:21692741,},ContainerImage{Names:[quay.io/k8scsi/mock-driver@sha256:e0eed916b7d970bad2b7d9875f9ad16932f987f0f3d91ec5d86da68b0b5cc9d1 quay.io/k8scsi/mock-driver:v2.1.0],SizeBytes:16226335,},ContainerImage{Names:[nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 nginx:1.14-alpine],SizeBytes:16032814,},ContainerImage{Names:[quay.io/k8scsi/csi-node-driver-registrar@sha256:13daf82fb99e951a4bff8ae5fc7c17c3a8fe7130be6400990d8f6076c32d4599 quay.io/k8scsi/csi-node-driver-registrar:v1.1.0],SizeBytes:15815995,},ContainerImage{Names:[quay.io/k8scsi/livenessprobe@sha256:dde617756e0f602adc566ab71fd885f1dad451ad3fb063ac991c95a2ff47aea5 quay.io/k8scsi/livenessprobe:v1.1.0],SizeBytes:14967303,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/dnsutils@sha256:2abeee84efb79c14d731966e034af33bf324d3b26ca28497555511ff094b3ddd gcr.io/kubernetes-e2e-test-images/dnsutils:1.1],SizeBytes:9349974,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nonewprivs@sha256:10066e9039219449fe3c81f38fe01928f87914150768ab81b62a468e51fa7411 gcr.io/kubernetes-e2e-test-images/nonewprivs:1.0],SizeBytes:6757579,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nautilus@sha256:33a732d4c42a266912a5091598a0f07653c9134db4b8d571690d8afd509e0bfc gcr.io/kubernetes-e2e-test-images/nautilus:1.0],SizeBytes:4753501,},ContainerImage{Names:[gcr.io/authenticated-image-pulling/alpine@sha256:7ff177862cb50c602bfe81f805969412e619c054a2bbead977d0c276988aa4a0 gcr.io/authenticated-image-pulling/alpine:3.7],SizeBytes:4206620,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/mounttest@sha256:c0bd6f0755f42af09a68c9a47fb993136588a76b3200ec305796b60d629d85d2 gcr.io/kubernetes-e2e-test-images/mounttest:1.0],SizeBytes:1563521,},ContainerImage{Names:[mcr.microsoft.com/k8s/flexvolume/blobfuse-flexvolume@sha256:23d8c6033f02a1ecad05127ebdc931bb871264228661bc122704b0974e4d9fdd mcr.microsoft.com/k8s/flexvolume/blobfuse-flexvolume:1.0.8],SizeBytes:1159025,},ContainerImage{Names:[busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 busybox:1.29],SizeBytes:1154361,},ContainerImage{Names:[k8s.gcr.io/pause-amd64@sha256:59eec8837a4d942cc19a52b8c09ea75121acc38114a2c68b98983ce9356b8610 k8s.gcr.io/pause@sha256:f78411e19d84a252e53bff71a4407a5686c46983a2c2eeed83929b888179acea k8s.gcr.io/pause-amd64:3.1 
k8s.gcr.io/pause:3.1],SizeBytes:742472,},},VolumesInUse:[kubernetes.io/csi/csi-hostpath-provisioning-5393^8d047f8c-069b-11ea-a372-000d3ac2fa68],VolumesAttached:[]AttachedVolume{AttachedVolume{Name:kubernetes.io/csi/csi-hostpath-provisioning-5393^8d047f8c-069b-11ea-a372-000d3ac2fa68,DevicePath:,},},Config:nil,},}
Nov 14 05:00:12.485: INFO: Logging kubelet events for node k8s-agentpool-23171212-vmss000000
Nov 14 05:00:12.549: INFO: Logging pods the kubelet thinks is on node k8s-agentpool-23171212-vmss000000
Nov 14 05:00:12.686: INFO: webserver-7c69b6748-98qtk started at 2019-11-14 04:58:01 +0000 UTC (0+1 container statuses recorded)
Nov 14 05:00:12.686: INFO: Container httpd ready: true, restart count 0
Nov 14 05:00:12.686: INFO: pvc-tester-8ptvv started at 2019-11-14 04:59:07 +0000 UTC (0+1 container statuses recorded)
Nov 14 05:00:12.686: INFO: Container write-pod ready: false, restart count 0
Nov 14 05:00:12.686: INFO: csi-hostpathplugin-0 started at 2019-11-14 04:58:58 +0000 UTC (0+3 container statuses recorded)
Nov 14 05:00:12.686: INFO: Container hostpath ready: false, restart count 0
Nov 14 05:00:12.686: INFO: Container liveness-probe ready: false, restart count 0
Nov 14 05:00:12.686: INFO: Container node-driver-registrar ready: false, restart count 0
Nov 14 05:00:12.686: INFO: coredns-87f5d796-k7mr9 started at 2019-11-14 04:40:27 +0000 UTC (0+1 container statuses recorded)
Nov 14 05:00:12.686: INFO: Container coredns ready: true, restart count 0
Nov 14 05:00:12.686: INFO: hostexec-k8s-agentpool-23171212-vmss000000 started at 2019-11-14 04:59:43 +0000 UTC (0+1 container statuses recorded)
Nov 14 05:00:12.686: INFO: Container agnhost ready: false, restart count 0
Nov 14 05:00:12.686: INFO: sysctl-8eb313ae-e289-4239-b190-8478c81c01a7 started at 2019-11-14 04:59:33 +0000 UTC (0+1 container statuses recorded)
Nov 14 05:00:12.686: INFO: Container test-container ready: false, restart count 0
Nov 14 05:00:12.686: INFO: pod-4330d15b-11cf-4c7f-bbf1-e7a14f1d8688 started at 2019-11-14 04:59:15 +0000 UTC (0+1 container statuses recorded)
Nov 14 05:00:12.686: INFO: Container test-container ready: false, restart count 0
Nov 14 05:00:12.686: INFO: test-rollover-controller-89wk7 started at 2019-11-14 05:00:02 +0000 UTC (0+1 container statuses recorded)
Nov 14 05:00:12.686: INFO: Container httpd ready: false, restart count 0
Nov 14 05:00:12.686: INFO: csi-hostpath-provisioner-0 started at 2019-11-14 04:50:53 +0000 UTC (0+1 container statuses recorded)
Nov 14 05:00:12.686: INFO: Container csi-provisioner ready: true, restart count 0
Nov 14 05:00:12.686: INFO: pod-subpath-test-emptydir-7j8m started at 2019-11-14 04:59:38 +0000 UTC (1+1 container statuses recorded)
Nov 14 05:00:12.686: INFO: Init container init-volume-emptydir-7j8m ready: false, restart count 0
Nov 14 05:00:12.686: INFO: Container test-container-subpath-emptydir-7j8m ready: false, restart count 0
Nov 14 05:00:12.686: INFO: keyvault-flexvolume-ljqsq started at 2019-11-14 04:40:49 +0000 UTC (0+1 container statuses recorded)
Nov 14 05:00:12.686: INFO: Container keyvault-flexvolume ready: true, restart count 0
Nov 14 05:00:12.686: INFO: kubernetes-dashboard-65966766b9-b8ps7 started at 2019-11-14 04:40:49 +0000 UTC (0+1 container statuses recorded)
Nov 14 05:00:12.686: INFO: Container kubernetes-dashboard ready: true, restart count 0
Nov 14 05:00:12.686: INFO: webserver-7c69b6748-n6bfb started at 2019-11-14 04:58:01 +0000 UTC (0+1 container statuses recorded)
Nov 14 05:00:12.686: INFO: Container httpd ready: true, restart count 0
Nov 14 05:00:12.686: INFO: webserver-7bd9679d84-2jh6p started at 2019-11-14 04:58:10 +0000 UTC (0+1 container statuses recorded)
Nov 14 05:00:12.686: INFO: Container httpd ready: true, restart count 0
Nov 14 05:00:12.686: INFO: webserver-7bd9679d84-nndg5 started at 2019-11-14 04:58:09 +0000 UTC (0+1 container statuses recorded)
Nov 14 05:00:12.686: INFO: Container httpd ready: true, restart count 0
Nov 14 05:00:12.686: INFO: csi-hostpath-resizer-0 started at 2019-11-14 04:50:54 +0000 UTC (0+1 container statuses recorded)
Nov 14 05:00:12.686: INFO: Container csi-resizer ready: true, restart count 0
Nov 14 05:00:12.686: INFO: pvc-datasource-writer-7rbg4 started at 2019-11-14 04:50:54 +0000 UTC (0+1 container statuses recorded)
Nov 14 05:00:12.686: INFO: Container volume-tester ready: false, restart count 0
Nov 14 05:00:12.686: INFO: host-test-container-pod started at 2019-11-14 04:59:11 +0000 UTC (0+1 container statuses recorded)
Nov 14 05:00:12.686: INFO: Container agnhost ready: false, restart count 0
Nov 14 05:00:12.686: INFO: dns-test-843a5223-6303-4b66-9101-667ffc1bd4c5 started at 2019-11-14 04:59:57 +0000 UTC (0+3 container statuses recorded)
Nov 14 05:00:12.686: INFO: Container jessie-querier ready: false, restart count 0
Nov 14 05:00:12.686: INFO: Container querier ready: false, restart count 0
Nov 14 05:00:12.686: INFO: Container webserver ready: false, restart count 0
Nov 14 05:00:12.686: INFO: azure-ip-masq-agent-dgg69 started at 2019-11-14 04:40:27 +0000 UTC (0+1 container statuses recorded)
Nov 14 05:00:12.686: INFO: Container azure-ip-masq-agent ready: true, restart count 0
Nov 14 05:00:12.686: INFO: csi-hostpath-attacher-0 started at 2019-11-14 04:50:53 +0000 UTC (0+1 container statuses recorded)
Nov 14 05:00:12.686: INFO: Container csi-attacher ready: true, restart count 0
Nov 14 05:00:12.686: INFO: redis-slave-68cd9c48b4-glss4 started at 2019-11-14 04:55:39 +0000 UTC (0+1 container statuses recorded)
Nov 14 05:00:12.686: INFO: Container slave ready: false, restart count 0
Nov 14 05:00:12.686: INFO: kube-proxy-cdq9f started at 2019-11-14 04:40:27 +0000 UTC (0+1 container statuses recorded)
Nov 14 05:00:12.686: INFO: Container kube-proxy ready: true, restart count 0
Nov 14 05:00:12.686: INFO: webserver-7bd9679d84-64xnt started at 2019-11-14 04:59:46 +0000 UTC (0+1 container statuses recorded)
Nov 14 05:00:12.686: INFO: Container httpd ready: false, restart count 0
Nov 14 05:00:12.686: INFO: ss2-1 started at 2019-11-14 04:53:07 +0000 UTC (0+1 container statuses recorded)
Nov 14 05:00:12.686: INFO: Container webserver ready: true, restart count 0
Nov 14 05:00:12.686: INFO: csi-snapshotter-0 started at 2019-11-14 04:50:54 +0000 UTC (0+1 container statuses recorded)
Nov 14 05:00:12.686: INFO: Container csi-snapshotter ready: true, restart count 0
Nov 14 05:00:12.686: INFO: configmap-client started at 2019-11-14 04:59:56 +0000 UTC (0+1 container statuses recorded)
Nov 14 05:00:12.686: INFO: Container configmap-client ready: false, restart count 0
Nov 14 05:00:12.686: INFO: ss2-2 started at 2019-11-14 04:59:09 +0000 UTC (0+1 container statuses recorded)
Nov 14 05:00:12.686: INFO: Container webserver ready: false, restart count 0
Nov 14 05:00:12.686: INFO: webserver-7bd9679d84-kmzmc started at 2019-11-14 04:58:58 +0000 UTC (0+1 container statuses recorded)
Nov 14 05:00:12.686: INFO: Container httpd ready: true, restart count 0
Nov 14 05:00:12.686: INFO: frontend-79ff456bff-9d685 started at 2019-11-14 04:55:38 +0000 UTC (0+1 container statuses recorded)
Nov 14 05:00:12.686: INFO: Container php-redis ready: false, restart count 0
Nov 14 05:00:12.686: INFO: netserver-0 started at 2019-11-14 04:55:43 +0000 UTC (0+1 container statuses recorded)
Nov 14 05:00:12.686: INFO: Container webserver ready: true, restart count 0
Nov 14 05:00:12.686: INFO: pod-subpath-test-nfs-dynamicpv-rsx9 started at 2019-11-14 04:59:31 +0000 UTC (0+2 container statuses recorded)
Nov 14 05:00:12.686: INFO: Container test-container-subpath-nfs-dynamicpv-rsx9 ready: false, restart count 0
Nov 14 05:00:12.686: INFO: Container test-container-volume-nfs-dynamicpv-rsx9 ready: false, restart count 0
Nov 14 05:00:12.686: INFO: ss2-1 started at 2019-11-14 04:57:14 +0000 UTC (0+1 container statuses recorded)
Nov 14 05:00:12.686: INFO: Container webserver ready: true, restart count 0
Nov 14 05:00:12.686: INFO: hostexec-k8s-agentpool-23171212-vmss000000 started at 2019-11-14 04:59:40 +0000 UTC (0+1 container statuses recorded)
Nov 14 05:00:12.686: INFO: Container agnhost ready: false, restart count 0
Nov 14 05:00:12.686: INFO: test-container-pod started at 2019-11-14 04:59:10 +0000 UTC (0+1 container statuses recorded)
Nov 14 05:00:12.686: INFO: Container webserver ready: false, restart count 0
Nov 14 05:00:12.686: INFO: pod-subpath-test-hostpath-zw7z started at 2019-11-14 04:59:36 +0000 UTC (1+2 container statuses recorded)
Nov 14 05:00:12.686: INFO: Init container test-init-subpath-hostpath-zw7z ready: false, restart count 0
Nov 14 05:00:12.686: INFO: Container test-container-subpath-hostpath-zw7z ready: false, restart count 0
Nov 14 05:00:12.686: INFO: Container test-container-volume-hostpath-zw7z ready: false, restart count 0
Nov 14 05:00:12.686: INFO: blobfuse-flexvol-installer-6xhz6 started at 2019-11-14 04:40:49 +0000 UTC (0+1 container statuses recorded)
Nov 14 05:00:12.686: INFO: Container blobfuse-flexvol-installer ready: true, restart count 0
Nov 14 05:00:12.686: INFO: dns-test-aec12ec4-b532-496e-bd8e-522c6b47d6de started at 2019-11-14 04:59:59 +0000 UTC (0+3 container statuses recorded)
Nov 14 05:00:12.686: INFO: Container jessie-querier ready: false, restart count 0
Nov 14 05:00:12.686: INFO: Container querier ready: false, restart count 0
Nov 14 05:00:12.686: INFO: Container webserver ready: false, restart count 0
W1114 05:00:12.745209 92588 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Nov 14 05:00:17.741: INFO: Latency metrics for node k8s-agentpool-23171212-vmss000000
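The "Logging pods the kubelet thinks is on node ..." blocks above are produced by asking each node which pods it is running. A minimal client-go sketch that approximates the same listing from the API server side with a spec.nodeName field selector (the framework itself queries the kubelet directly, so this is only an approximation; the kubeconfig path is illustrative, and the context argument assumes client-go v0.18 or newer):

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Illustrative kubeconfig path; the job above pointed at an AKS kubeconfig.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/workspace/kubeconfig")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	// Pods the scheduler has bound to one node, mirroring the listing above.
	pods, err := client.CoreV1().Pods(metav1.NamespaceAll).List(context.TODO(), metav1.ListOptions{
		FieldSelector: "spec.nodeName=k8s-agentpool-23171212-vmss000000",
	})
	if err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		for _, s := range p.Status.ContainerStatuses {
			fmt.Printf("%s: Container %s ready: %v, restart count %d\n", p.Name, s.Name, s.Ready, s.RestartCount)
		}
	}
}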
Nov 14 05:00:17.742: INFO: Logging node info for node k8s-agentpool-23171212-vmss000001
Nov 14 05:00:17.835: INFO: Node Info: &Node{ObjectMeta:{k8s-agentpool-23171212-vmss000001 /api/v1/nodes/k8s-agentpool-23171212-vmss000001 e9c1f552-b95b-4548-9ecd-37a7f1925e75 35391 0 2019-11-14 04:40:09 +0000 UTC <nil> <nil> map[agentpool:agentpool beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:Standard_D4s_v3 beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:westus2 failure-domain.beta.kubernetes.io/zone:westus2-2 kubernetes.azure.com/cluster:kubetest-9c63b39e-0695-11ea-a4cc-c60aac250e75 kubernetes.azure.com/role:agent kubernetes.io/arch:amd64 kubernetes.io/hostname:k8s-agentpool-23171212-vmss000001 kubernetes.io/os:linux kubernetes.io/role:agent node-role.kubernetes.io/agent: storageprofile:managed storagetier:Premium_LRS] map[csi.volume.kubernetes.io/nodeid:{"csi-hostpath-ephemeral-6971":"k8s-agentpool-23171212-vmss000001","csi-hostpath-provisioning-3033":"k8s-agentpool-23171212-vmss000001","csi-hostpath-provisioning-3310":"k8s-agentpool-23171212-vmss000001","csi-hostpath-provisioning-4400":"k8s-agentpool-23171212-vmss000001","csi-hostpath-volume-expand-2485":"k8s-agentpool-23171212-vmss000001","csi-mock-csi-mock-volumes-3324":"csi-mock-csi-mock-volumes-3324","csi-mock-csi-mock-volumes-3770":"csi-mock-csi-mock-volumes-3770","csi-mock-csi-mock-volumes-7845":"csi-mock-csi-mock-volumes-7845","csi-mock-csi-mock-volumes-9859":"csi-mock-csi-mock-volumes-9859"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] []},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUse_ExternalID:,ProviderID:azure:///subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-9c63b39e-0695-11ea-a4cc-c60aac250e75/providers/Microsoft.Compute/virtualMachineScaleSets/k8s-agentpool-23171212-vmss/virtualMachines/1,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{4 0} {<nil>} 4 DecimalSI},ephemeral-storage: {{31036776448 0} {<nil>} BinarySI},example.com/fakecpu: {{800 0} {<nil>} 800 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{16797569024 0} {<nil>} 16403876Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{4 0} {<nil>} 4 DecimalSI},ephemeral-storage: {{27933098757 0} {<nil>} 27933098757 DecimalSI},example.com/fakecpu: {{800 0} {<nil>} 800 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{16011137024 0} {<nil>} 15635876Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2019-11-14 04:40:48 +0000 UTC,LastTransitionTime:2019-11-14 04:40:48 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2019-11-14 05:00:04 +0000 UTC,LastTransitionTime:2019-11-14 04:40:09 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2019-11-14 05:00:04 +0000 UTC,LastTransitionTime:2019-11-14 04:40:09 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2019-11-14 05:00:04 +0000
UTC,LastTransitionTime:2019-11-14 04:40:09 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2019-11-14 05:00:04 +0000 UTC,LastTransitionTime:2019-11-14 04:40:22 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.248.0.5,},NodeAddress{Type:Hostname,Address:k8s-agentpool-23171212-vmss000001,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:639707efd7a74ac4bca6a608e99a6715,SystemUUID:CACA620B-0C7C-7040-A716-91F766CA5A2F,BootID:9fabe02f-4e56-4162-b5c5-2e2733911b4f,KernelVersion:4.15.0-1063-azure,OSImage:Ubuntu 16.04.6 LTS,ContainerRuntimeVersion:docker://3.0.6,KubeletVersion:v1.16.4-beta.0.1+d70a3ca08fe72a,KubeProxyVersion:v1.16.4-beta.0.1+d70a3ca08fe72a,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8sprow.azurecr.io/hyperkube-amd64@sha256:4c04f9ab0fa34bcbcb8ebfbced912f9b998c5d9c090fafdca92911d124fa339b k8sprow.azurecr.io/hyperkube-amd64:azure-e2e-1194831241233305600-197629b6],SizeBytes:604811790,},ContainerImage{Names:[quay.io/kubernetes_incubator/nfs-provisioner@sha256:df762117e3c891f2d2ddff46ecb0776ba1f9f3c44cfd7739b0683bcd7a7954a8 quay.io/kubernetes_incubator/nfs-provisioner:v2.2.2],SizeBytes:391772778,},ContainerImage{Names:[gcr.io/google-samples/gb-frontend@sha256:35cb427341429fac3df10ff74600ea73e8ec0754d78f9ce89e0b4f3d70d53ba6 gcr.io/google-samples/gb-frontend:v6],SizeBytes:373099368,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa k8s.gcr.io/etcd:3.3.15],SizeBytes:246640776,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/volume/nfs@sha256:c2ad734346f608a5f7d69cfded93c4e8094069320657bd372d12ba21dea3ea71 gcr.io/kubernetes-e2e-test-images/volume/nfs:1.0],SizeBytes:225358913,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0],SizeBytes:195659796,},ContainerImage{Names:[httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a httpd:2.4.39-alpine],SizeBytes:126894770,},ContainerImage{Names:[httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 httpd:2.4.38-alpine],SizeBytes:123781643,},ContainerImage{Names:[gcr.io/kubernetes-helm/tiller@sha256:f6d8f4ab9ba993b5f5b60a6edafe86352eabe474ffeb84cb6c79b8866dce45d1 gcr.io/kubernetes-helm/tiller:v2.11.0],SizeBytes:71821984,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/sample-apiserver@sha256:1bafcc6fb1aa990b487850adba9cadc020e42d7905aa8a30481182a477ba24b0 gcr.io/kubernetes-e2e-test-images/sample-apiserver:1.10],SizeBytes:61365829,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/agnhost@sha256:4057a5580c7b59c4fe10d8ab2732c9dec35eea80fd41f7bafc7bd5acc7edf727 gcr.io/kubernetes-e2e-test-images/agnhost:2.6],SizeBytes:57345321,},ContainerImage{Names:[quay.io/k8scsi/csi-provisioner@sha256:0efcb424f1dde9b9fb11a1a14f2e48ab47e1c3f08bc3a929990dcfcb1f7ab34f quay.io/k8scsi/csi-provisioner:v1.4.0-rc1],SizeBytes:54431016,},ContainerImage{Names:[quay.io/k8scsi/csi-snapshotter@sha256:e3d3e742e32d00488fdb401045b9b1d033d7ca0ab6e760f77b24750fc95e5f70 
quay.io/k8scsi/csi-snapshotter:v2.0.0-rc1],SizeBytes:51703561,},ContainerImage{Names:[k8s.gcr.io/ip-masq-agent-amd64@sha256:269e0fb9d53fd37f7a135d6a55ea265a67279ba218aa148323f015cf70167340 k8s.gcr.io/ip-masq-agent-amd64:v2.3.0],SizeBytes:50144412,},ContainerImage{Names:[quay.io/k8scsi/csi-attacher@sha256:26fccd7a99d973845df1193b46ebdcc6ab8dc5f6e6be319750c471fce1742d13 quay.io/k8scsi/csi-attacher:v1.2.0],SizeBytes:46226754,},ContainerImage{Names:[quay.io/k8scsi/csi-attacher@sha256:0aba670b4d9d6b2e720bbf575d733156c676b693ca26501235444490300db838 quay.io/k8scsi/csi-attacher:v1.1.0],SizeBytes:42839085,},ContainerImage{Names:[quay.io/k8scsi/csi-resizer@sha256:7d46fb6eb8b890dc546029d1565d502b4a1d974d33625c6ee2bc7991b77fc1a1 quay.io/k8scsi/csi-resizer:v0.2.0],SizeBytes:42817100,},ContainerImage{Names:[quay.io/k8scsi/csi-resizer@sha256:f315c9042e56def3c05c6b04fe79ec9da6d39ddc557ca365a76cf35964ea08b6 quay.io/k8scsi/csi-resizer:v0.1.0],SizeBytes:42623056,},ContainerImage{Names:[k8s.gcr.io/metrics-server-amd64@sha256:49a9f12f7067d11f42c803dbe61ed2c1299959ad85cb315b25ff7eef8e6b8892 k8s.gcr.io/metrics-server-amd64:v0.2.1],SizeBytes:42541759,},ContainerImage{Names:[gcr.io/google-containers/debian-base@sha256:6966a0aedd7592c18ff2dd803c08bd85780ee19f5e3a2e7cf908a4cd837afcde gcr.io/google-containers/debian-base:0.4.1],SizeBytes:42323657,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nonroot@sha256:d4ede5c74517090b6686219059118ed178cf4620f5db8781b32f806bb1e7395b gcr.io/kubernetes-e2e-test-images/nonroot:1.0],SizeBytes:42321438,},ContainerImage{Names:[redis@sha256:50899ea1ceed33fa03232f3ac57578a424faa1742c1ac9c7a7bdb95cdf19b858 redis:5.0.5-alpine],SizeBytes:29331594,},ContainerImage{Names:[quay.io/k8scsi/hostpathplugin@sha256:b4826e492fc1762fceaf9726f41575ca0a4567864d3d235da874818de18039de quay.io/k8scsi/hostpathplugin:v1.2.0-rc5],SizeBytes:28761497,},ContainerImage{Names:[mcr.microsoft.com/k8s/flexvolume/keyvault-flexvolume@sha256:4fd30d43947d4a54fc89ead7985beecfd3c9b2a93a0655a373b1608ab90bd5af mcr.microsoft.com/k8s/flexvolume/keyvault-flexvolume:v0.0.7],SizeBytes:22909487,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/echoserver@sha256:e9ba514b896cdf559eef8788b66c2c3ee55f3572df617647b4b0d8b6bf81cf19 gcr.io/kubernetes-e2e-test-images/echoserver:2.2],SizeBytes:21692741,},ContainerImage{Names:[quay.io/k8scsi/mock-driver@sha256:e0eed916b7d970bad2b7d9875f9ad16932f987f0f3d91ec5d86da68b0b5cc9d1 quay.io/k8scsi/mock-driver:v2.1.0],SizeBytes:16226335,},ContainerImage{Names:[nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 nginx:1.14-alpine],SizeBytes:16032814,},ContainerImage{Names:[quay.io/k8scsi/csi-node-driver-registrar@sha256:13daf82fb99e951a4bff8ae5fc7c17c3a8fe7130be6400990d8f6076c32d4599 quay.io/k8scsi/csi-node-driver-registrar:v1.1.0],SizeBytes:15815995,},ContainerImage{Names:[quay.io/k8scsi/livenessprobe@sha256:dde617756e0f602adc566ab71fd885f1dad451ad3fb063ac991c95a2ff47aea5 quay.io/k8scsi/livenessprobe:v1.1.0],SizeBytes:14967303,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/dnsutils@sha256:2abeee84efb79c14d731966e034af33bf324d3b26ca28497555511ff094b3ddd gcr.io/kubernetes-e2e-test-images/dnsutils:1.1],SizeBytes:9349974,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nonewprivs@sha256:10066e9039219449fe3c81f38fe01928f87914150768ab81b62a468e51fa7411 
gcr.io/kubernetes-e2e-test-images/nonewprivs:1.0],SizeBytes:6757579,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nautilus@sha256:33a732d4c42a266912a5091598a0f07653c9134db4b8d571690d8afd509e0bfc gcr.io/kubernetes-e2e-test-images/nautilus:1.0],SizeBytes:4753501,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/test-webserver@sha256:7f93d6e32798ff28bc6289254d0c2867fe2c849c8e46edc50f8624734309812e gcr.io/kubernetes-e2e-test-images/test-webserver:1.0],SizeBytes:4732240,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/mounttest@sha256:c0bd6f0755f42af09a68c9a47fb993136588a76b3200ec305796b60d629d85d2 gcr.io/kubernetes-e2e-test-images/mounttest:1.0],SizeBytes:1563521,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/mounttest-user@sha256:17319ca525ee003681fccf7e8c6b1b910ff4f49b653d939ac7f9b6e7c463933d gcr.io/kubernetes-e2e-test-images/mounttest-user:1.0],SizeBytes:1450451,},ContainerImage{Names:[mcr.microsoft.com/k8s/flexvolume/blobfuse-flexvolume@sha256:23d8c6033f02a1ecad05127ebdc931bb871264228661bc122704b0974e4d9fdd mcr.microsoft.com/k8s/flexvolume/blobfuse-flexvolume:1.0.8],SizeBytes:1159025,},ContainerImage{Names:[busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 busybox:1.29],SizeBytes:1154361,},ContainerImage{Names:[k8s.gcr.io/pause-amd64@sha256:59eec8837a4d942cc19a52b8c09ea75121acc38114a2c68b98983ce9356b8610 k8s.gcr.io/pause@sha256:f78411e19d84a252e53bff71a4407a5686c46983a2c2eeed83929b888179acea k8s.gcr.io/pause-amd64:3.1 k8s.gcr.io/pause:3.1],SizeBytes:742472,},},VolumesInUse:[kubernetes.io/csi/csi-mock-csi-mock-volumes-7845^4],VolumesAttached:[]AttachedVolume{AttachedVolume{Name:kubernetes.io/csi/csi-mock-csi-mock-volumes-7845^4,DevicePath:,},},Config:nil,},}
Nov 14 05:00:17.835: INFO: Logging kubelet events for node k8s-agentpool-23171212-vmss000001
Nov 14 05:00:17.905: INFO: Logging pods the kubelet thinks is on node k8s-agentpool-23171212-vmss000001
Nov 14 05:00:18.100: INFO: kube-proxy-ng7z8 started at 2019-11-14 04:40:27 +0000 UTC (0+1 container statuses recorded)
Nov 14 05:00:18.100: INFO: Container kube-proxy ready: true, restart count 0
Nov 14 05:00:18.100: INFO: rs-pod1-wv2wp started at 2019-11-14 04:59:21 +0000 UTC (0+1 container statuses recorded)
Nov 14 05:00:18.100: INFO: Container pod1 ready: false, restart count 0
Nov 14 05:00:18.100: INFO: frontend-79ff456bff-5dq96 started at 2019-11-14 04:55:39 +0000 UTC (0+1 container statuses recorded)
Nov 14 05:00:18.100: INFO: Container php-redis ready: true, restart count 0
Nov 14 05:00:18.100: INFO: ss2-2 started at 2019-11-14 04:58:18 +0000 UTC (0+1 container statuses recorded)
Nov 14 05:00:18.100: INFO: Container webserver ready: true, restart count 0
Nov 14 05:00:18.100: INFO: nfs-server started at 2019-11-14 04:56:19 +0000 UTC (0+1 container statuses recorded)
Nov 14 05:00:18.100: INFO: Container nfs-server ready: true, restart count 0
Nov 14 05:00:18.100: INFO: webserver-7bd9679d84-cdtsh started at 2019-11-14 04:58:47 +0000 UTC (0+1 container statuses recorded)
Nov 14 05:00:18.100: INFO: Container httpd ready: false, restart count 0
Nov 14 05:00:18.100: INFO: csi-mockplugin-resizer-0 started at 2019-11-14 04:58:00 +0000 UTC (0+1 container statuses recorded)
Nov 14 05:00:18.100: INFO: Container csi-resizer ready: true, restart count 0
Nov 14 05:00:18.100: INFO: csi-hostpath-resizer-0 started at 2019-11-14 04:59:46 +0000 UTC (0+1 container statuses recorded)
Nov 14 05:00:18.100: INFO: Container csi-resizer ready: false, restart count 0
Nov 14 05:00:18.100: INFO: csi-snapshotter-0 started at 2019-11-14 04:59:47 +0000 UTC (0+1 container statuses recorded)
Nov 14 05:00:18.100: INFO: Container csi-snapshotter ready: false, restart count 0
Nov 14 05:00:18.100: INFO: tiller-deploy-7559b6b885-vkxml started at 2019-11-14 04:40:50 +0000 UTC (0+1 container statuses recorded)
Nov 14 05:00:18.100: INFO: Container tiller ready: true, restart count 0
Nov 14 05:00:18.100: INFO: pod-handle-http-request started at 2019-11-14 04:59:13 +0000 UTC (0+1 container statuses recorded)
Nov 14 05:00:18.100: INFO: Container pod-handle-http-request ready: false, restart count 0
Nov 14 05:00:18.100: INFO: external-provisioner-mc5xj started at 2019-11-14 04:58:39 +0000 UTC (0+1 container statuses recorded)
Nov 14 05:00:18.100: INFO: Container nfs-provisioner ready: false, restart count 0
Nov 14 05:00:18.100: INFO: csi-mockplugin-0 started at 2019-11-14 04:57:57 +0000 UTC (0+3 container statuses recorded)
Nov 14 05:00:18.100: INFO: Container csi-provisioner ready: true, restart count 0
Nov 14 05:00:18.100: INFO: Container driver-registrar ready: true, restart count 0
Nov 14 05:00:18.100: INFO: Container mock ready: true, restart count 0
Nov 14 05:00:18.100: INFO: frontend-79ff456bff-s8p95 started at 2019-11-14 04:55:40 +0000 UTC (0+1 container statuses recorded)
Nov 14 05:00:18.100: INFO: Container php-redis ready: true, restart count 0
Nov 14 05:00:18.100: INFO: netserver-1 started at 2019-11-14 04:55:44 +0000 UTC (0+1 container statuses recorded)
Nov 14 05:00:18.100: INFO: Container webserver ready: true, restart count 0
Nov 14 05:00:18.100: INFO: ss2-0 started at 2019-11-14 04:56:54 +0000 UTC (0+1 container statuses recorded)
Nov 14 05:00:18.100: INFO: Container webserver ready: true, restart count 0
Nov 14 05:00:18.100: INFO: rs-pod1-nkr8s started at 2019-11-14 04:59:15 +0000 UTC (0+1 container statuses recorded)
Nov 14 05:00:18.100: INFO: Container pod1 ready: true, restart count 0
Nov 14 05:00:18.100: INFO: rs-pod1-7zlhd started at 2019-11-14 04:59:18 +0000 UTC (0+1 container statuses recorded)
Nov 14 05:00:18.100: INFO: Container pod1 ready: false, restart count 0
Nov 14 05:00:18.100: INFO: rs-pod1-67w6l started at 2019-11-14 04:59:22 +0000 UTC (0+1 container statuses recorded)
Nov 14 05:00:18.100: INFO: Container pod1 ready: false, restart count 0
Nov 14 05:00:18.100: INFO: redis-slave-68cd9c48b4-pxnkq started at 2019-11-14 04:55:42 +0000 UTC (0+1 container statuses recorded)
Nov 14 05:00:18.100: INFO: Container slave ready: true, restart count 0
Nov 14 05:00:18.100: INFO: ss2-0 started at 2019-11-14 04:54:55 +0000 UTC (0+1 container statuses recorded)
Nov 14 05:00:18.100: INFO: Container webserver ready: true, restart count 0
Nov 14 05:00:18.100: INFO: hostexec-k8s-agentpool-23171212-vmss000001 started at 2019-11-14 04:53:08 +0000 UTC (0+1 container statuses recorded)
Nov 14 05:00:18.100: INFO: Container agnhost ready: true, restart count 0
Nov 14 05:00:18.100: INFO: rs-pod1-xg7b6 started at 2019-11-14 04:59:17 +0000 UTC (0+1 container statuses recorded)
Nov 14 05:00:18.100: INFO: Container pod1 ready: true, restart count 0
Nov 14 05:00:18.100: INFO: csi-hostpathplugin-0 started at 2019-11-14 04:59:44 +0000 UTC (0+3 container statuses recorded)
Nov 14 05:00:18.100: INFO: Container hostpath ready: false, restart count 0
Nov 14 05:00:18.100: INFO: Container liveness-probe ready: false, restart count 0
Nov 14 05:00:18.100: INFO: Container node-driver-registrar ready: false, restart count 0
Nov 14 05:00:18.100: INFO: azure-ip-masq-agent-mcg7w started at 2019-11-14 04:40:26 +0000 UTC (0+1 container statuses recorded)
Nov 14 05:00:18.100: INFO: Container azure-ip-masq-agent ready: true, restart count 0
Nov 14 05:00:18.100: INFO: metrics-server-58ff8c5ddf-h7jqs started at 2019-11-14 04:40:50 +0000 UTC (0+1 container statuses recorded)
Nov 14 05:00:18.100: INFO: Container metrics-server ready: true, restart count 0
Nov 14 05:00:18.100: INFO: csi-hostpath-attacher-0 started at 2019-11-14 04:59:41 +0000 UTC (0+1 container statuses recorded)
Nov 14 05:00:18.100: INFO: Container csi-attacher ready: false, restart count 0
Nov 14 05:00:18.100: INFO: local-client started at 2019-11-14 04:57:18 +0000 UTC (0+1 container statuses recorded)
Nov 14 05:00:18.100: INFO: Container local-client ready: true, restart count 0
Nov 14 05:00:18.100: INFO: csi-hostpath-provisioner-0 started at 2019-11-14 04:59:45 +0000 UTC (0+1 container statuses recorded)
Nov 14 05:00:18.100: INFO: Container csi-provisioner ready: false, restart count 0
Nov 14 05:00:18.100: INFO: blobfuse-flexvol-installer-ktdjj started at 2019-11-14 04:40:49 +0000 UTC (0+1 container statuses recorded)
Nov 14 05:00:18.100: INFO: Container blobfuse-flexvol-installer ready: true, restart count 0
Nov 14 05:00:18.100: INFO: pod-subpath-test-emptydir-xppt started at 2019-11-14 04:57:24 +0000 UTC (2+1 container statuses recorded)
Nov 14 05:00:18.100: INFO: Init container init-volume-emptydir-xppt ready: true, restart count 0
Nov 14 05:00:18.100: INFO: Init container test-init-volume-emptydir-xppt ready: false, restart count 0
Nov 14 05:00:18.100: INFO: Container test-container-subpath-emptydir-xppt ready: false, restart count 0
Nov 14 05:00:18.100: INFO: hostexec-k8s-agentpool-23171212-vmss000001 started at 2019-11-14 04:55:51 +0000 UTC (0+1 container statuses recorded)
Nov 14 05:00:18.100: INFO: Container agnhost ready: true, restart count 0
Nov 14 05:00:18.100: INFO: webserver-7bd9679d84-4b7f8 started at 2019-11-14 04:58:06 +0000 UTC (0+1 container statuses recorded)
Nov 14 05:00:18.100: INFO: Container httpd ready: true, restart count 0
Nov 14 05:00:18.100: INFO: external-provisioner-7pj8z started at 2019-11-14 04:55:50 +0000 UTC (0+1 container statuses recorded)
Nov 14 05:00:18.100: INFO: Container nfs-provisioner ready: true, restart count 0
Nov 14 05:00:18.100: INFO: webserver-7bd9679d84-wq7sb started at 2019-11-14 04:59:50 +0000 UTC (0+1 container statuses recorded)
Nov 14 05:00:18.100: INFO: Container httpd ready: false, restart count 0
Nov 14 05:00:18.100: INFO: keyvault-flexvolume-2g62m started at 2019-11-14 04:40:49 +0000 UTC (0+1 container statuses recorded)
Nov 14 05:00:18.100: INFO: Container keyvault-flexvolume ready: true, restart count 0
Nov 14 05:00:18.100: INFO: redis-master-6ff87f4db7-lf6hr started at 2019-11-14 04:55:41 +0000 UTC (0+1 container statuses recorded)
Nov 14 05:00:18.100: INFO: Container master ready: true, restart count 0
Nov 14 05:00:18.100: INFO: var-expansion-9ffd9011-059a-4181-a993-56638aeb87e4 started at 2019-11-14 04:58:37 +0000 UTC (0+1 container statuses recorded)
Nov 14 05:00:18.100: INFO: Container dapi-container ready: false, restart count 0
Nov 14 05:00:18.100: INFO: pvc-volume-tester-r6gmn started at 2019-11-14 04:59:59 +0000 UTC (0+1 container statuses recorded)
Nov 14 05:00:18.100: INFO: Container volume-tester ready: false, restart count 0
Nov 14 05:00:18.100: INFO: pod-submit-remove-950f11c5-b5a7-400b-800b-24c5377040ef started at 2019-11-14 04:56:05 +0000 UTC (0+1 container statuses recorded)
Nov 14 05:00:18.100: INFO: Container nginx ready: true, restart count 0
Nov 14 05:00:18.100: INFO: csi-mockplugin-attacher-0 started at 2019-11-14 04:57:59 +0000 UTC (0+1 container statuses recorded)
Nov 14 05:00:18.100: INFO: Container csi-attacher ready: true, restart count 0
Nov 14 05:00:18.100: INFO: pod-subpath-test-local-preprovisionedpv-mqm9 started at 2019-11-14 04:59:48 +0000 UTC (0+2 container statuses recorded)
Nov 14 05:00:18.100: INFO: Container test-container-subpath-local-preprovisionedpv-mqm9 ready: false, restart count 0
Nov 14 05:00:18.100: INFO: Container test-container-volume-local-preprovisionedpv-mqm9 ready: false, restart count 0
Nov 14 05:00:18.100: INFO: pod-subpath-test-hostpathsymlink-xjdx started at 2019-11-14 04:59:12 +0000 UTC (2+1 container statuses recorded)
Nov 14 05:00:18.100: INFO: Init container init-volume-hostpathsymlink-xjdx ready: false, restart count 0
Nov 14 05:00:18.100: INFO: Init container test-init-volume-hostpathsymlink-xjdx ready: false, restart count 0
Nov 14 05:00:18.100: INFO: Container test-container-subpath-hostpathsymlink-xjdx ready: false, restart count 0
Nov 14 05:00:18.100: INFO: busybox-readonly-fs67cd44c6-d7ea-4df7-88d1-daea547ed81a started at 2019-11-14 04:57:39 +0000 UTC (0+1 container statuses recorded)
Nov 14 05:00:18.100: INFO: Container busybox-readonly-fs67cd44c6-d7ea-4df7-88d1-daea547ed81a ready: true, restart count 0
W1114 05:00:18.157810 92588 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Nov 14 05:00:18.775: INFO: Latency metrics for node k8s-agentpool-23171212-vmss000001
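Each Node Info dump above carries the node's condition set (MemoryPressure, DiskPressure, PIDPressure, Ready), its allocatable resources (including extended resources such as the example.com/fakecpu entry injected on the agent nodes), and any volumes still attached. A sketch that pulls the same fields; imports and client construction as in the earlier snippet:

// nodeHealth prints the health signals shown in the dumps above:
// conditions, allocatable resources, and attached-volume counts.
func nodeHealth(ctx context.Context, client kubernetes.Interface) error {
	nodes, err := client.CoreV1().Nodes().List(ctx, metav1.ListOptions{})
	if err != nil {
		return err
	}
	for _, n := range nodes.Items {
		for _, c := range n.Status.Conditions {
			fmt.Printf("%s: %s=%s (%s)\n", n.Name, c.Type, c.Status, c.Reason)
		}
		// Allocatable includes extended resources, e.g. example.com/fakecpu above.
		for name, qty := range n.Status.Allocatable {
			fmt.Printf("%s: allocatable %s=%s\n", n.Name, name, qty.String())
		}
		fmt.Printf("%s: %d volume(s) attached\n", n.Name, len(n.Status.VolumesAttached))
	}
	return nil
}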
Nov 14 05:00:18.775: INFO: Logging node info for node k8s-master-23171212-vmss000000
Nov 14 05:00:18.832: INFO: Node Info: &Node{ObjectMeta:{k8s-master-23171212-vmss000000 /api/v1/nodes/k8s-master-23171212-vmss000000 6c9bb7ee-6dcf-4c6d-a8ad-0377f76a60f6 35308 0 2019-11-14 04:40:03 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:Standard_DS2_v2 beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:westus2 failure-domain.beta.kubernetes.io/zone:westus2-1 kubernetes.azure.com/cluster:kubetest-9c63b39e-0695-11ea-a4cc-c60aac250e75 kubernetes.azure.com/role:master kubernetes.io/arch:amd64 kubernetes.io/hostname:k8s-master-23171212-vmss000000 kubernetes.io/os:linux kubernetes.io/role:master node-role.kubernetes.io/master:] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] []},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUse_ExternalID:,ProviderID:azure:///subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-9c63b39e-0695-11ea-a4cc-c60aac250e75/providers/Microsoft.Compute/virtualMachineScaleSets/k8s-master-23171212-vmss/virtualMachines/0,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:true,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{31036776448 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7284887552 0} {<nil>} 7114148Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{27933098757 0} {<nil>} 27933098757 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{6498455552 0} {<nil>} 6346148Ki BinarySI},pods: {{110 0} {<nil>} 110
DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2019-11-14 04:40:48 +0000 UTC,LastTransitionTime:2019-11-14 04:40:48 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2019-11-14 04:59:56 +0000 UTC,LastTransitionTime:2019-11-14 04:39:58 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2019-11-14 04:59:56 +0000 UTC,LastTransitionTime:2019-11-14 04:39:58 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2019-11-14 04:59:56 +0000 UTC,LastTransitionTime:2019-11-14 04:39:58 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2019-11-14 04:59:56 +0000 UTC,LastTransitionTime:2019-11-14 04:40:22 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.240.0.4,},NodeAddress{Type:Hostname,Address:k8s-master-23171212-vmss000000,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:813714caae2d48f4a9036e17505029ae,SystemUUID:A7C76EFE-4E2A-8042-A754-6642A667D859,BootID:245ff6cc-bfb4-4487-ac55-fb3813c9167c,KernelVersion:4.15.0-1063-azure,OSImage:Ubuntu 16.04.6 LTS,ContainerRuntimeVersion:docker://3.0.6,KubeletVersion:v1.16.4-beta.0.1+d70a3ca08fe72a,KubeProxyVersion:v1.16.4-beta.0.1+d70a3ca08fe72a,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8sprow.azurecr.io/hyperkube-amd64@sha256:4c04f9ab0fa34bcbcb8ebfbced912f9b998c5d9c090fafdca92911d124fa339b k8sprow.azurecr.io/hyperkube-amd64:azure-e2e-1194831241233305600-197629b6],SizeBytes:604811790,},ContainerImage{Names:[k8sprow.azurecr.io/azure-cloud-controller-manager@sha256:6fcb752760f3412a2cb10bce535ba4dfa8267081345fa1b5cbc7bb5126ce3437 k8sprow.azurecr.io/azure-cloud-controller-manager:1194831241233305600-d3e4a1cf],SizeBytes:92595467,},ContainerImage{Names:[k8s.gcr.io/kube-addon-manager-amd64@sha256:382c220b3531d9f95bf316a16b7282cc2ef929cd8a89a9dd3f5933edafc41a8e k8s.gcr.io/kube-addon-manager-amd64:v9.0.1],SizeBytes:83076194,},ContainerImage{Names:[k8s.gcr.io/ip-masq-agent-amd64@sha256:269e0fb9d53fd37f7a135d6a55ea265a67279ba218aa148323f015cf70167340 k8s.gcr.io/ip-masq-agent-amd64:v2.3.0],SizeBytes:50144412,},ContainerImage{Names:[k8s.gcr.io/pause-amd64@sha256:59eec8837a4d942cc19a52b8c09ea75121acc38114a2c68b98983ce9356b8610 k8s.gcr.io/pause-amd64:3.1],SizeBytes:742472,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Nov 14 05:00:18.832: INFO: Logging kubelet events for node k8s-master-23171212-vmss000000
Nov 14 05:00:18.891: INFO: Logging pods the kubelet thinks is on node k8s-master-23171212-vmss000000
Nov 14 05:00:18.971: INFO: kube-proxy-cpnbb started at 2019-11-14 04:40:28 +0000 UTC (0+1 container statuses recorded)
Nov 14 05:00:18.972: INFO: Container kube-proxy ready: true, restart count 0
Nov 14 05:00:18.972: INFO: kube-scheduler-k8s-master-23171212-vmss000000 started at 2019-11-14 04:39:51 +0000 UTC (0+1 container statuses recorded)
Nov 14 05:00:18.972: INFO: Container kube-scheduler ready: true, restart count 0
Nov 14 05:00:18.972: INFO: cloud-controller-manager-k8s-master-23171212-vmss000000 started at 2019-11-14 04:39:51 +0000 UTC (0+1 container statuses recorded)
Nov 14 05:00:18.972: INFO: Container cloud-controller-manager ready: true, restart count 0
Nov 14 05:00:18.972: INFO: kube-addon-manager-k8s-master-23171212-vmss000000 started at 2019-11-14 04:39:52 +0000 UTC (0+1 container statuses recorded)
Nov 14 05:00:18.972: INFO: Container kube-addon-manager ready: true, restart count 0
Nov 14 05:00:18.972: INFO: kube-apiserver-k8s-master-23171212-vmss000000 started at 2019-11-14 04:39:52 +0000 UTC (0+1 container statuses recorded)
Nov 14 05:00:18.972: INFO: Container kube-apiserver ready: true, restart count 0
Nov 14 05:00:18.972: INFO: kube-controller-manager-k8s-master-23171212-vmss000000 started at 2019-11-14 04:39:52 +0000 UTC (0+1 container statuses recorded)
Nov 14 05:00:18.972: INFO: Container kube-controller-manager ready: true, restart count 0
Nov 14 05:00:18.972: INFO: azure-ip-masq-agent-q7rgb started at 2019-11-14 04:40:27 +0000 UTC (0+1 container statuses recorded)
Nov 14 05:00:18.972: INFO: Container azure-ip-masq-agent ready: true, restart count 0
W1114 05:00:19.030550 92588 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Nov 14 05:00:19.158: INFO: Latency metrics for node k8s-master-23171212-vmss000000
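The master Node dumps above all carry Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,...,Effect:NoSchedule,...}}, which is why their pod listings contain only static control-plane pods and daemons. A small sketch of the corresponding check (imports as in the first snippet, plus v1 "k8s.io/api/core/v1"):

// isSchedulableForTests mirrors why ordinary e2e pods never appear in the
// master pod listings above: an explicit Unschedulable flag or any
// NoSchedule taint keeps untolerating pods off the node.
func isSchedulableForTests(n *v1.Node) bool {
	if n.Spec.Unschedulable {
		return false
	}
	for _, t := range n.Spec.Taints {
		if t.Effect == v1.TaintEffectNoSchedule {
			return false // e.g. node-role.kubernetes.io/master:NoSchedule above
		}
	}
	return true
}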
Nov 14 05:00:19.158: INFO: Logging node info for node k8s-master-23171212-vmss000001
Nov 14 05:00:19.214: INFO: Node Info: &Node{ObjectMeta:{k8s-master-23171212-vmss000001 /api/v1/nodes/k8s-master-23171212-vmss000001 202620f8-2cc3-4eb6-b880-ef6d6d9fbccd 35328 0 2019-11-14 04:40:04 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:Standard_DS2_v2 beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:westus2 failure-domain.beta.kubernetes.io/zone:westus2-2 kubernetes.azure.com/cluster:kubetest-9c63b39e-0695-11ea-a4cc-c60aac250e75 kubernetes.azure.com/role:master kubernetes.io/arch:amd64 kubernetes.io/hostname:k8s-master-23171212-vmss000001 kubernetes.io/os:linux kubernetes.io/role:master node-role.kubernetes.io/master:] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] []},Spec:NodeSpec{PodCIDR:10.244.5.0/24,DoNotUse_ExternalID:,ProviderID:azure:///subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-9c63b39e-0695-11ea-a4cc-c60aac250e75/providers/Microsoft.Compute/virtualMachineScaleSets/k8s-master-23171212-vmss/virtualMachines/1,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:true,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[10.244.5.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{31036776448 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7284887552 0} {<nil>} 7114148Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{27933098757 0} {<nil>} 27933098757 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{6498455552 0} {<nil>} 6346148Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2019-11-14 04:40:48 +0000 UTC,LastTransitionTime:2019-11-14 04:40:48 +0000
UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2019-11-14 04:59:58 +0000 UTC,LastTransitionTime:2019-11-14 04:39:58 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2019-11-14 04:59:58 +0000 UTC,LastTransitionTime:2019-11-14 04:39:58 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2019-11-14 04:59:58 +0000 UTC,LastTransitionTime:2019-11-14 04:39:58 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2019-11-14 04:59:58 +0000 UTC,LastTransitionTime:2019-11-14 04:40:22 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.240.0.5,},NodeAddress{Type:Hostname,Address:k8s-master-23171212-vmss000001,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:4cafe5635afe4ac8baa078419003bc32,SystemUUID:88981890-9531-334C-9D46-A02D5E4BD18D,BootID:6accdcbe-b0af-4be0-8f82-19833a9a5e2e,KernelVersion:4.15.0-1063-azure,OSImage:Ubuntu 16.04.6 LTS,ContainerRuntimeVersion:docker://3.0.6,KubeletVersion:v1.16.4-beta.0.1+d70a3ca08fe72a,KubeProxyVersion:v1.16.4-beta.0.1+d70a3ca08fe72a,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8sprow.azurecr.io/hyperkube-amd64@sha256:4c04f9ab0fa34bcbcb8ebfbced912f9b998c5d9c090fafdca92911d124fa339b k8sprow.azurecr.io/hyperkube-amd64:azure-e2e-1194831241233305600-197629b6],SizeBytes:604811790,},ContainerImage{Names:[k8sprow.azurecr.io/azure-cloud-controller-manager@sha256:6fcb752760f3412a2cb10bce535ba4dfa8267081345fa1b5cbc7bb5126ce3437 k8sprow.azurecr.io/azure-cloud-controller-manager:1194831241233305600-d3e4a1cf],SizeBytes:92595467,},ContainerImage{Names:[k8s.gcr.io/kube-addon-manager-amd64@sha256:382c220b3531d9f95bf316a16b7282cc2ef929cd8a89a9dd3f5933edafc41a8e k8s.gcr.io/kube-addon-manager-amd64:v9.0.1],SizeBytes:83076194,},ContainerImage{Names:[k8s.gcr.io/ip-masq-agent-amd64@sha256:269e0fb9d53fd37f7a135d6a55ea265a67279ba218aa148323f015cf70167340 k8s.gcr.io/ip-masq-agent-amd64:v2.3.0],SizeBytes:50144412,},ContainerImage{Names:[k8s.gcr.io/pause-amd64@sha256:59eec8837a4d942cc19a52b8c09ea75121acc38114a2c68b98983ce9356b8610 k8s.gcr.io/pause-amd64:3.1],SizeBytes:742472,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Nov 14 05:00:19.214: INFO: Logging kubelet events for node k8s-master-23171212-vmss000001
Nov 14 05:00:19.278: INFO: Logging pods the kubelet thinks is on node k8s-master-23171212-vmss000001
Nov 14 05:00:19.363: INFO: kube-proxy-srv2s started at 2019-11-14 04:40:27 +0000 UTC (0+1 container statuses recorded)
Nov 14 05:00:19.363: INFO: Container kube-proxy ready: true, restart count 0
Nov 14 05:00:19.363: INFO: kube-scheduler-k8s-master-23171212-vmss000001 started at 2019-11-14 04:39:52 +0000 UTC (0+1 container statuses recorded)
Nov 14 05:00:19.363: INFO: Container kube-scheduler ready: true, restart count 0
Nov 14 05:00:19.363: INFO: cloud-controller-manager-k8s-master-23171212-vmss000001 started at 2019-11-14 04:39:52 +0000 UTC (0+1 container statuses recorded)
Nov 14 05:00:19.363: INFO: Container cloud-controller-manager ready: true, restart count 0
Nov 14 05:00:19.363: INFO: kube-addon-manager-k8s-master-23171212-vmss000001 started at 2019-11-14 04:39:52 +0000 UTC (0+1 container statuses recorded)
Nov 14 05:00:19.363: INFO: Container kube-addon-manager ready: true, restart count 0
Nov 14 05:00:19.363: INFO: kube-apiserver-k8s-master-23171212-vmss000001 started at 2019-11-14 04:39:52 +0000 UTC (0+1 container statuses recorded)
Nov 14 05:00:19.363: INFO: Container kube-apiserver ready: true, restart count 0
Nov 14 05:00:19.363: INFO: kube-controller-manager-k8s-master-23171212-vmss000001 started at 2019-11-14 04:39:52 +0000 UTC (0+1 container statuses recorded)
Nov 14 05:00:19.363: INFO: Container kube-controller-manager ready: true, restart count 0
Nov 14 05:00:19.363: INFO: azure-ip-masq-agent-dnl49 started at 2019-11-14 04:40:26 +0000 UTC (0+1 container statuses recorded)
Nov 14 05:00:19.363: INFO: Container azure-ip-masq-agent ready: true, restart count 0
W1114 05:00:19.422894 92588 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Nov 14 05:00:19.553: INFO: Latency metrics for node k8s-master-23171212-vmss000001
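The "Logging kubelet events for node ..." steps list the events recorded against each node object. A sketch of the same query via a field selector on the involved object (imports as in the first snippet; the framework's exact selector may differ):

// nodeEvents approximates the "Logging kubelet events for node ..." step:
// list events in all namespaces whose involved object is the given node.
func nodeEvents(ctx context.Context, client kubernetes.Interface, nodeName string) error {
	sel := fmt.Sprintf("involvedObject.kind=Node,involvedObject.name=%s", nodeName)
	events, err := client.CoreV1().Events(metav1.NamespaceAll).List(ctx, metav1.ListOptions{
		FieldSelector: sel,
	})
	if err != nil {
		return err
	}
	for _, e := range events.Items {
		fmt.Printf("%s %s: %s\n", e.LastTimestamp, e.Reason, e.Message)
	}
	return nil
}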
Nov 14 05:00:19.553: INFO: Logging node info for node k8s-master-23171212-vmss000002
Nov 14 05:00:19.610: INFO: Node Info: &Node{ObjectMeta:{k8s-master-23171212-vmss000002 /api/v1/nodes/k8s-master-23171212-vmss000002 8eca3a9a-6fd5-4796-82bb-2f37c6fc30b7 34673 0 2019-11-14 04:41:04 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:Standard_DS2_v2 beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:westus2 failure-domain.beta.kubernetes.io/zone:westus2-1 kubernetes.azure.com/cluster:kubetest-9c63b39e-0695-11ea-a4cc-c60aac250e75 kubernetes.azure.com/role:master kubernetes.io/arch:amd64 kubernetes.io/hostname:k8s-master-23171212-vmss000002 kubernetes.io/os:linux kubernetes.io/role:master node-role.kubernetes.io/master:] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] []},Spec:NodeSpec{PodCIDR:10.244.6.0/24,DoNotUse_ExternalID:,ProviderID:azure:///subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-9c63b39e-0695-11ea-a4cc-c60aac250e75/providers/Microsoft.Compute/virtualMachineScaleSets/k8s-master-23171212-vmss/virtualMachines/2,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:true,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[10.244.6.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{31036776448 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7284883456 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{27933098757 0} {<nil>} 27933098757 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{6498451456 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2019-11-14 04:41:18 +0000 UTC,LastTransitionTime:2019-11-14 04:41:18 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2019-11-14 04:59:27 +0000 UTC,LastTransitionTime:2019-11-14 04:40:56 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory
available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2019-11-14 04:59:27 +0000 UTC,LastTransitionTime:2019-11-14 04:40:56 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2019-11-14 04:59:27 +0000 UTC,LastTransitionTime:2019-11-14 04:40:56 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2019-11-14 04:59:27 +0000 UTC,LastTransitionTime:2019-11-14 04:41:04 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.240.0.6,},NodeAddress{Type:Hostname,Address:k8s-master-23171212-vmss000002,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:eb5abe50949445b79334d994c94314f8,SystemUUID:E11F8710-4785-DA42-B98E-8E97145F92C7,BootID:8fe9e9b2-2b16-4895-91c7-dc676b577942,KernelVersion:4.15.0-1063-azure,OSImage:Ubuntu 16.04.6 LTS,ContainerRuntimeVersion:docker://3.0.6,KubeletVersion:v1.16.4-beta.0.1+d70a3ca08fe72a,KubeProxyVersion:v1.16.4-beta.0.1+d70a3ca08fe72a,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8sprow.azurecr.io/hyperkube-amd64@sha256:4c04f9ab0fa34bcbcb8ebfbced912f9b998c5d9c090fafdca92911d124fa339b k8sprow.azurecr.io/hyperkube-amd64:azure-e2e-1194831241233305600-197629b6],SizeBytes:604811790,},ContainerImage{Names:[k8sprow.azurecr.io/azure-cloud-controller-manager@sha256:6fcb752760f3412a2cb10bce535ba4dfa8267081345fa1b5cbc7bb5126ce3437 k8sprow.azurecr.io/azure-cloud-controller-manager:1194831241233305600-d3e4a1cf],SizeBytes:92595467,},ContainerImage{Names:[k8s.gcr.io/kube-addon-manager-amd64@sha256:382c220b3531d9f95bf316a16b7282cc2ef929cd8a89a9dd3f5933edafc41a8e k8s.gcr.io/kube-addon-manager-amd64:v9.0.1],SizeBytes:83076194,},ContainerImage{Names:[k8s.gcr.io/ip-masq-agent-amd64@sha256:269e0fb9d53fd37f7a135d6a55ea265a67279ba218aa148323f015cf70167340 k8s.gcr.io/ip-masq-agent-amd64:v2.3.0],SizeBytes:50144412,},ContainerImage{Names:[k8s.gcr.io/pause-amd64@sha256:59eec8837a4d942cc19a52b8c09ea75121acc38114a2c68b98983ce9356b8610 k8s.gcr.io/pause-amd64:3.1],SizeBytes:742472,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Nov 14 05:00:19.610: INFO: Logging kubelet events for node k8s-master-23171212-vmss000002
Nov 14 05:00:19.672: INFO: Logging pods the kubelet thinks is on node k8s-master-23171212-vmss000002
Nov 14 05:00:19.753: INFO: kube-scheduler-k8s-master-23171212-vmss000002 started at 2019-11-14 04:40:52 +0000 UTC (0+1 container statuses recorded)
Nov 14 05:00:19.753: INFO: Container kube-scheduler ready: true, restart count 0
Nov 14 05:00:19.753: INFO: cloud-controller-manager-k8s-master-23171212-vmss000002 started at 2019-11-14 04:40:53 +0000 UTC (0+1 container statuses recorded)
Nov 14 05:00:19.753: INFO: Container cloud-controller-manager ready: true, restart count 0
Nov 14 05:00:19.753: INFO: azure-ip-masq-agent-mw27f started at 2019-11-14 04:41:05 +0000 UTC (0+1 container statuses recorded)
Nov 14 05:00:19.753: INFO: Container azure-ip-masq-agent ready: true, restart count 0
Nov 14 05:00:19.753: INFO: kube-proxy-4vs6q started at 2019-11-14 04:41:06 +0000 UTC (0+1 container statuses recorded)
Nov 14 05:00:19.753: INFO: Container kube-proxy ready: true, restart count 0
Nov 14 05:00:19.753: INFO: kube-addon-manager-k8s-master-23171212-vmss000002 started at 2019-11-14 04:40:52 +0000 UTC (0+1 container statuses recorded)
Nov 14 05:00:19.753: INFO: Container kube-addon-manager ready: true, restart count 0
Nov 14 05:00:19.753: INFO: kube-apiserver-k8s-master-23171212-vmss000002 started at 2019-11-14 04:40:52 +0000 UTC (0+1 container statuses recorded)
Nov 14 05:00:19.753: INFO: Container kube-apiserver ready: true, restart count 0
Nov 14 05:00:19.753: INFO: kube-controller-manager-k8s-master-23171212-vmss000002 started at 2019-11-14 04:40:52 +0000 UTC (0+1 container statuses recorded)
Nov 14 05:00:19.753: INFO: Container kube-controller-manager ready: true, restart count 0
W1114 05:00:19.817227 92588 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Nov 14 05:00:19.947: INFO: Latency metrics for node k8s-master-23171212-vmss000002
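The W1114 warning repeated throughout this log means the metrics grabber found no node it recognizes as a master to scrape scheduler and controller-manager metrics from; its detection heuristic in this release is its own (name-based), so the label-based lookup below is only an approximation using the kubernetes.io/role=master label the dumps above show (imports as in the first snippet):

// mastersByLabel finds control-plane nodes by the role label visible in the
// Node dumps above. The metrics grabber that logs the W1114 warning applies
// a different, name-based heuristic, so treat this as an approximation.
func mastersByLabel(ctx context.Context, client kubernetes.Interface) ([]string, error) {
	nodes, err := client.CoreV1().Nodes().List(ctx, metav1.ListOptions{
		LabelSelector: "kubernetes.io/role=master",
	})
	if err != nil {
		return nil, err
	}
	var names []string
	for _, n := range nodes.Items {
		names = append(names, n.Name)
	}
	return names, nil
}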
Nov 14 05:00:19.947: INFO: Logging node info for node k8s-master-23171212-vmss000003
Nov 14 05:00:20.004: INFO: Node Info: &Node{ObjectMeta:{k8s-master-23171212-vmss000003 /api/v1/nodes/k8s-master-23171212-vmss000003 b1a400e7-f6ff-4241-9175-cd8bd70dd11a 35311 0 2019-11-14 04:40:03 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:Standard_DS2_v2 beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:westus2 failure-domain.beta.kubernetes.io/zone:westus2-2 kubernetes.azure.com/cluster:kubetest-9c63b39e-0695-11ea-a4cc-c60aac250e75 kubernetes.azure.com/role:master kubernetes.io/arch:amd64 kubernetes.io/hostname:k8s-master-23171212-vmss000003 kubernetes.io/os:linux kubernetes.io/role:master node-role.kubernetes.io/master:] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] []},Spec:NodeSpec{PodCIDR:10.244.3.0/24,DoNotUse_ExternalID:,ProviderID:azure:///subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-9c63b39e-0695-11ea-a4cc-c60aac250e75/providers/Microsoft.Compute/virtualMachineScaleSets/k8s-master-23171212-vmss/virtualMachines/3,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:true,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[10.244.3.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{31036776448 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7284887552 0} {<nil>} 7114148Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{27933098757 0} {<nil>} 27933098757 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{6498455552 0} {<nil>} 6346148Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2019-11-14 04:40:48 +0000 UTC,LastTransitionTime:2019-11-14 04:40:48 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2019-11-14 04:59:56 +0000 UTC,LastTransitionTime:2019-11-14 04:39:59 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2019-11-14 04:59:56 +0000 UTC,LastTransitionTime:2019-11-14 04:39:59 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk
pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2019-11-14 04:59:56 +0000 UTC,LastTransitionTime:2019-11-14 04:39:59 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2019-11-14 04:59:56 +0000 UTC,LastTransitionTime:2019-11-14 04:40:22 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.240.0.7,},NodeAddress{Type:Hostname,Address:k8s-master-23171212-vmss000003,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:effe7f682034467995d1db3ee85a4a38,SystemUUID:2073A143-352C-D241-B189-4A1DCC64C62C,BootID:6c95e89b-c056-494f-b817-6494fc9fd635,KernelVersion:4.15.0-1063-azure,OSImage:Ubuntu 16.04.6 LTS,ContainerRuntimeVersion:docker://3.0.6,KubeletVersion:v1.16.4-beta.0.1+d70a3ca08fe72a,KubeProxyVersion:v1.16.4-beta.0.1+d70a3ca08fe72a,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8sprow.azurecr.io/hyperkube-amd64@sha256:4c04f9ab0fa34bcbcb8ebfbced912f9b998c5d9c090fafdca92911d124fa339b k8sprow.azurecr.io/hyperkube-amd64:azure-e2e-1194831241233305600-197629b6],SizeBytes:604811790,},ContainerImage{Names:[k8sprow.azurecr.io/azure-cloud-controller-manager@sha256:6fcb752760f3412a2cb10bce535ba4dfa8267081345fa1b5cbc7bb5126ce3437 k8sprow.azurecr.io/azure-cloud-controller-manager:1194831241233305600-d3e4a1cf],SizeBytes:92595467,},ContainerImage{Names:[k8s.gcr.io/kube-addon-manager-amd64@sha256:382c220b3531d9f95bf316a16b7282cc2ef929cd8a89a9dd3f5933edafc41a8e k8s.gcr.io/kube-addon-manager-amd64:v9.0.1],SizeBytes:83076194,},ContainerImage{Names:[k8s.gcr.io/ip-masq-agent-amd64@sha256:269e0fb9d53fd37f7a135d6a55ea265a67279ba218aa148323f015cf70167340 k8s.gcr.io/ip-masq-agent-amd64:v2.3.0],SizeBytes:50144412,},ContainerImage{Names:[k8s.gcr.io/pause-amd64@sha256:59eec8837a4d942cc19a52b8c09ea75121acc38114a2c68b98983ce9356b8610 k8s.gcr.io/pause-amd64:3.1],SizeBytes:742472,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Nov 14 05:00:20.004: INFO: Logging kubelet events for node k8s-master-23171212-vmss000003
Nov 14 05:00:20.065: INFO: Logging pods the kubelet thinks is on node k8s-master-23171212-vmss000003
Nov 14 05:00:20.148: INFO: cloud-controller-manager-k8s-master-23171212-vmss000003 started at 2019-11-14 04:39:52 +0000 UTC (0+1 container statuses recorded)
Nov 14 05:00:20.148: INFO: Container cloud-controller-manager ready: true, restart count 0
Nov 14 05:00:20.148: INFO: kube-addon-manager-k8s-master-23171212-vmss000003 started at 2019-11-14 04:39:52 +0000 UTC (0+1 container statuses recorded)
Nov 14 05:00:20.148: INFO: Container kube-addon-manager ready: true, restart count 0
Nov 14 05:00:20.148: INFO: kube-apiserver-k8s-master-23171212-vmss000003 started at 2019-11-14 04:39:52 +0000 UTC (0+1 container statuses recorded)
Nov 14 05:00:20.148: INFO: Container kube-apiserver ready: true, restart count 0
Nov 14 05:00:20.148: INFO: kube-controller-manager-k8s-master-23171212-vmss000003 started at 2019-11-14 04:39:52 +0000 UTC (0+1 container statuses recorded)
Nov 14 05:00:20.148: INFO: Container kube-controller-manager ready: true, restart count 0
Nov 14 05:00:20.148: INFO: kube-scheduler-k8s-master-23171212-vmss000003 started at 2019-11-14 04:39:52 +0000 UTC (0+1 container statuses recorded)
Nov 14 05:00:20.148: INFO: Container kube-scheduler ready: true, restart count 0
Nov 14 05:00:20.148: INFO: azure-ip-masq-agent-4s5bk started at 2019-11-14 04:40:27 +0000 UTC (0+1 container statuses recorded)
Nov 14 05:00:20.148: INFO: Container azure-ip-masq-agent ready: true, restart count 0
Nov 14 05:00:20.148: INFO: kube-proxy-hrqtx started at 2019-11-14 04:40:27 +0000 UTC (0+1 container statuses recorded)
Nov 14 05:00:20.148: INFO: Container kube-proxy ready: true, restart count 0
W1114 05:00:20.208532 92588 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Nov 14 05:00:20.342: INFO: Latency metrics for node k8s-master-23171212-vmss000003
Nov 14 05:00:20.342: INFO: Logging node info for node k8s-master-23171212-vmss000004
Nov 14 05:00:20.399: INFO: Node Info: &Node{ObjectMeta:{k8s-master-23171212-vmss000004 /api/v1/nodes/k8s-master-23171212-vmss000004 25a9993c-54fa-45cc-9da7-66c66cafa30f 35369 0 2019-11-14 04:40:08 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:Standard_DS2_v2 beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:westus2 failure-domain.beta.kubernetes.io/zone:westus2-1 kubernetes.azure.com/cluster:kubetest-9c63b39e-0695-11ea-a4cc-c60aac250e75 kubernetes.azure.com/role:master kubernetes.io/arch:amd64 kubernetes.io/hostname:k8s-master-23171212-vmss000004 kubernetes.io/os:linux kubernetes.io/role:master node-role.kubernetes.io/master:] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] []},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUse_ExternalID:,ProviderID:azure:///subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-9c63b39e-0695-11ea-a4cc-c60aac250e75/providers/Microsoft.Compute/virtualMachineScaleSets/k8s-master-23171212-vmss/virtualMachines/4,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:true,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{31036776448 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7284887552 0} {<nil>} 7114148Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{27933098757 0} {<nil>} 27933098757 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{6498455552 0} {<nil>} 6346148Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2019-11-14 04:40:48 +0000 UTC,LastTransitionTime:2019-11-14 04:40:48 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2019-11-14 05:00:02 +0000 UTC,LastTransitionTime:2019-11-14 04:40:05 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2019-11-14 05:00:02 +0000 UTC,LastTransitionTime:2019-11-14 04:40:05 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2019-11-14 05:00:02 +0000 UTC,LastTransitionTime:2019-11-14 04:40:05 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID
available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2019-11-14 05:00:02 +0000 UTC,LastTransitionTime:2019-11-14 04:40:22 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.240.0.8,},NodeAddress{Type:Hostname,Address:k8s-master-23171212-vmss000004,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:ab6b205a70ea45b1b28b801e68a4ba84,SystemUUID:65406178-5013-644C-AD46-D7BC6F0DD7BF,BootID:e6b05928-9970-49a5-bd51-149982b32750,KernelVersion:4.15.0-1063-azure,OSImage:Ubuntu 16.04.6 LTS,ContainerRuntimeVersion:docker://3.0.6,KubeletVersion:v1.16.4-beta.0.1+d70a3ca08fe72a,KubeProxyVersion:v1.16.4-beta.0.1+d70a3ca08fe72a,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8sprow.azurecr.io/hyperkube-amd64@sha256:4c04f9ab0fa34bcbcb8ebfbced912f9b998c5d9c090fafdca92911d124fa339b k8sprow.azurecr.io/hyperkube-amd64:azure-e2e-1194831241233305600-197629b6],SizeBytes:604811790,},ContainerImage{Names:[k8sprow.azurecr.io/azure-cloud-controller-manager@sha256:6fcb752760f3412a2cb10bce535ba4dfa8267081345fa1b5cbc7bb5126ce3437 k8sprow.azurecr.io/azure-cloud-controller-manager:1194831241233305600-d3e4a1cf],SizeBytes:92595467,},ContainerImage{Names:[k8s.gcr.io/kube-addon-manager-amd64@sha256:382c220b3531d9f95bf316a16b7282cc2ef929cd8a89a9dd3f5933edafc41a8e k8s.gcr.io/kube-addon-manager-amd64:v9.0.1],SizeBytes:83076194,},ContainerImage{Names:[k8s.gcr.io/ip-masq-agent-amd64@sha256:269e0fb9d53fd37f7a135d6a55ea265a67279ba218aa148323f015cf70167340 k8s.gcr.io/ip-masq-agent-amd64:v2.3.0],SizeBytes:50144412,},ContainerImage{Names:[k8s.gcr.io/pause-amd64@sha256:59eec8837a4d942cc19a52b8c09ea75121acc38114a2c68b98983ce9356b8610 k8s.gcr.io/pause-amd64:3.1],SizeBytes:742472,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Nov 14 05:00:20.399: INFO: Logging kubelet events for node k8s-master-23171212-vmss000004
Nov 14 05:00:20.460: INFO: Logging pods the kubelet thinks is on node k8s-master-23171212-vmss000004
Nov 14 05:00:20.541: INFO: kube-scheduler-k8s-master-23171212-vmss000004 started at 2019-11-14 04:39:52 +0000 UTC (0+1 container statuses recorded)
Nov 14 05:00:20.541: INFO: Container kube-scheduler ready: true, restart count 0
Nov 14 05:00:20.541: INFO: cloud-controller-manager-k8s-master-23171212-vmss000004 started at 2019-11-14 04:39:52 +0000 UTC (0+1 container statuses recorded)
Nov 14 05:00:20.541: INFO: Container cloud-controller-manager ready: true, restart count 0
Nov 14 05:00:20.541: INFO: kube-addon-manager-k8s-master-23171212-vmss000004 started at 2019-11-14 04:39:52 +0000 UTC (0+1 container statuses recorded)
Nov 14 05:00:20.541: INFO: Container kube-addon-manager ready: true, restart count 0
Nov 14 05:00:20.541: INFO: kube-apiserver-k8s-master-23171212-vmss000004 started at 2019-11-14 04:39:52 +0000 UTC (0+1 container statuses recorded)
Nov 14 05:00:20.541: INFO: Container kube-apiserver ready: true, restart count 0
Nov 14 05:00:20.541: INFO: kube-controller-manager-k8s-master-23171212-vmss000004 started at 2019-11-14 04:39:52 +0000 UTC (0+1 container statuses recorded)
Nov 14 05:00:20.541: INFO: Container kube-controller-manager ready: true, restart count 0
Nov 14 05:00:20.541: INFO: azure-ip-masq-agent-47pzk started at 2019-11-14 04:40:26 +0000 UTC (0+1 container statuses recorded)
Nov 14 05:00:20.541: INFO: Container azure-ip-masq-agent ready: true, restart count 0
Nov 14 05:00:20.541: INFO: kube-proxy-47vmd started at 2019-11-14 04:40:27 +0000 UTC (0+1 container statuses recorded)
Nov 14 05:00:20.541: INFO: Container kube-proxy ready: true, restart count 0
W1114 05:00:20.599075 92588 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Nov 14 05:00:20.740: INFO: Latency metrics for node k8s-master-23171212-vmss000004
Nov 14 05:00:20.740: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-preemption-path-8399" for this suite.
Nov 14 05:01:54.972: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Nov 14 05:01:56.812: INFO: namespace sched-preemption-path-8399 deletion completed in 1m36.014581122s
[AfterEach] [sig-scheduling] PreemptionExecutionPath
test/e2e/scheduling/preemption.go:274
Nov 14 05:01:56.868: INFO: List existing priorities:
Nov 14 05:01:56.868: INFO: p1/1 created at 2019-11-14 04:59:11 +0000 UTC
Nov 14 05:01:56.868: INFO: p2/2 created at 2019-11-14 04:59:11 +0000 UTC
Nov 14 05:01:56.868: INFO: p3/3 created at 2019-11-14 04:59:11 +0000 UTC
Nov 14 05:01:56.868: INFO: p4/4 created at 2019-11-14 04:59:11 +0000 UTC
Nov 14 05:01:56.868: INFO: system-cluster-critical/2000000000 created at 2019-11-14 04:40:04 +0000 UTC
Nov 14 05:01:56.868: INFO: system-node-critical/2000001000 created at 2019-11-14 04:40:04 +0000 UTC
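For reference, the "name/value created at timestamp" listing above is just a dump of the cluster's PriorityClass objects. A minimal client-go sketch that produces the same shape of output, under the assumptions that a recent (v0.18+, context-aware) client-go is used and that the kubeconfig path from this run is reachable:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Kubeconfig path taken from the run above; adjust for your environment.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/workspace/aks287781815/kubeconfig/kubeconfig.westus2.json")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// List all PriorityClasses, mirroring the AfterEach output above.
	pcs, err := cs.SchedulingV1().PriorityClasses().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, pc := range pcs.Items {
		fmt.Printf("%s/%d created at %s\n", pc.Name, pc.Value, pc.CreationTimestamp)
	}
}

The two system-* entries with values in the billions are the built-in classes every cluster carries; the p1..p4 entries are the ones the preemption test created and is cleaning up here.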
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-storage\]\sCSI\sVolumes\s\[Driver\:\scsi\-hostpath\]\s\[Testpattern\:\sDynamic\sPV\s\(default\sfs\)\]\ssubPath\sshould\sbe\sable\sto\sunmount\safter\sthe\ssubpath\sdirectory\sis\sdeleted$'
test/e2e/storage/testsuites/subpath.go:425 Nov 14 04:58:40.003: Unexpected error: <*errors.errorString | 0xc002ce7420>: { s: "PersistentVolumeClaims [csi-hostpathn7ls6] not all in phase Bound within 5m0s", } PersistentVolumeClaims [csi-hostpathn7ls6] not all in phase Bound within 5m0s occurred test/e2e/storage/testsuites/base.go:366 from junit_10.xml
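The 5m wait that fails here is a plain phase poll on the claim. A minimal sketch of the same check, assuming a recent (v0.18+) client-go and reusing the claim name, namespace, and kubeconfig path from this run:

package main

import (
	"context"
	"fmt"
	"time"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/workspace/aks287781815/kubeconfig/kubeconfig.westus2.json")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Poll every 2s for up to 5m, mirroring the "Waiting up to 5m0s for
	// PersistentVolumeClaims ... to have phase Bound" loop in the log below.
	err = wait.PollImmediate(2*time.Second, 5*time.Minute, func() (bool, error) {
		pvc, err := cs.CoreV1().PersistentVolumeClaims("provisioning-6474").Get(context.TODO(), "csi-hostpathn7ls6", metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		fmt.Printf("PVC %s phase: %s\n", pvc.Name, pvc.Status.Phase)
		return pvc.Status.Phase == v1.ClaimBound, nil
	})
	if err != nil {
		fmt.Println("claim never became Bound:", err)
	}
}

A claim stuck in Pending for the whole window, as here, means the dynamic provisioner never produced a volume; the events collected after the failure show why.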
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
test/e2e/storage/testsuites/base.go:93
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath
test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Nov 14 04:53:35.411: INFO: >>> kubeConfig: /workspace/aks287781815/kubeconfig/kubeconfig.westus2.json
STEP: Building a namespace api object, basename provisioning
STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in provisioning-6474
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to unmount after the subpath directory is deleted
test/e2e/storage/testsuites/subpath.go:425
STEP: deploying csi-hostpath driver
Nov 14 04:53:35.951: INFO: creating *v1.ServiceAccount: provisioning-6474/csi-attacher
Nov 14 04:53:36.006: INFO: creating *v1.ClusterRole: external-attacher-runner-provisioning-6474
Nov 14 04:53:36.006: INFO: Define cluster role external-attacher-runner-provisioning-6474
Nov 14 04:53:36.061: INFO: creating *v1.ClusterRoleBinding: csi-attacher-role-provisioning-6474
Nov 14 04:53:36.119: INFO: creating *v1.Role: provisioning-6474/external-attacher-cfg-provisioning-6474
Nov 14 04:53:36.180: INFO: creating *v1.RoleBinding: provisioning-6474/csi-attacher-role-cfg
Nov 14 04:53:36.238: INFO: creating *v1.ServiceAccount: provisioning-6474/csi-provisioner
Nov 14 04:53:36.295: INFO: creating *v1.ClusterRole: external-provisioner-runner-provisioning-6474
Nov 14 04:53:36.295: INFO: Define cluster role external-provisioner-runner-provisioning-6474
Nov 14 04:53:36.361: INFO: creating *v1.ClusterRoleBinding: csi-provisioner-role-provisioning-6474
Nov 14 04:53:36.434: INFO: creating *v1.Role: provisioning-6474/external-provisioner-cfg-provisioning-6474
Nov 14 04:53:36.499: INFO: creating *v1.RoleBinding: provisioning-6474/csi-provisioner-role-cfg
Nov 14 04:53:36.559: INFO: creating *v1.ServiceAccount: provisioning-6474/csi-snapshotter
Nov 14 04:53:36.615: INFO: creating *v1.ClusterRole: external-snapshotter-runner-provisioning-6474
Nov 14 04:53:36.615: INFO: Define cluster role external-snapshotter-runner-provisioning-6474
Nov 14 04:53:36.670: INFO: creating *v1.ClusterRoleBinding: csi-snapshotter-role-provisioning-6474
Nov 14 04:53:36.735: INFO: creating *v1.Role: provisioning-6474/external-snapshotter-leaderelection-provisioning-6474
Nov 14 04:53:36.794: INFO: creating *v1.RoleBinding: provisioning-6474/external-snapshotter-leaderelection
Nov 14 04:53:36.851: INFO: creating *v1.ServiceAccount: provisioning-6474/csi-resizer
Nov 14 04:53:36.908: INFO: creating *v1.ClusterRole: external-resizer-runner-provisioning-6474
Nov 14 04:53:36.908: INFO: Define cluster role external-resizer-runner-provisioning-6474
Nov 14 04:53:36.963: INFO: creating *v1.ClusterRoleBinding: csi-resizer-role-provisioning-6474
Nov 14 04:53:37.020: INFO: creating *v1.Role: provisioning-6474/external-resizer-cfg-provisioning-6474
Nov 14 04:53:37.082: INFO: creating *v1.RoleBinding: provisioning-6474/csi-resizer-role-cfg
Nov 14 04:53:37.138: INFO: creating *v1.Service: provisioning-6474/csi-hostpath-attacher
Nov 14 04:53:37.208: INFO: creating *v1.StatefulSet: provisioning-6474/csi-hostpath-attacher
Nov 14 04:53:37.266: INFO: creating *v1beta1.CSIDriver: csi-hostpath-provisioning-6474
Nov 14 04:53:37.328: INFO: creating *v1.Service: provisioning-6474/csi-hostpathplugin
Nov 14 04:53:37.401: INFO: creating *v1.StatefulSet: provisioning-6474/csi-hostpathplugin
Nov 14 04:53:37.466: INFO: creating *v1.Service: provisioning-6474/csi-hostpath-provisioner
Nov 14 04:53:37.542: INFO: creating *v1.StatefulSet: provisioning-6474/csi-hostpath-provisioner
Nov 14 04:53:37.603: INFO: creating *v1.Service: provisioning-6474/csi-hostpath-resizer
Nov 14 04:53:37.691: INFO: creating *v1.StatefulSet: provisioning-6474/csi-hostpath-resizer
Nov 14 04:53:37.754: INFO: creating *v1.Service: provisioning-6474/csi-snapshotter
Nov 14 04:53:37.885: INFO: creating *v1.StatefulSet: provisioning-6474/csi-snapshotter
Nov 14 04:53:37.959: INFO: creating *v1.ClusterRoleBinding: psp-csi-hostpath-role-provisioning-6474
Nov 14 04:53:38.024: INFO: Test running for native CSI Driver, not checking metrics
Nov 14 04:53:38.024: INFO: Creating resource for dynamic PV
STEP: creating a StorageClass provisioning-6474-csi-hostpath-provisioning-6474-sc2dhr9
STEP: creating a claim
Nov 14 04:53:38.086: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil
Nov 14 04:53:38.148: INFO: Waiting up to 5m0s for PersistentVolumeClaims [csi-hostpathn7ls6] to have phase Bound
Nov 14 04:53:38.199: INFO: PersistentVolumeClaim csi-hostpathn7ls6 found but phase is Pending instead of Bound.
[... the identical "PersistentVolumeClaim csi-hostpathn7ls6 found but phase is Pending instead of Bound." poll repeats roughly every 2s from 04:53:40 through 04:58:33; individual timestamps are in the raw build log ...]
Nov 14 04:58:35.950: INFO: PersistentVolumeClaim csi-hostpathn7ls6 found but phase is Pending instead of Bound.
Nov 14 04:58:38.003: INFO: PersistentVolumeClaim csi-hostpathn7ls6 found but phase is Pending instead of Bound.
Nov 14 04:58:40.003: FAIL: Unexpected error: <*errors.errorString | 0xc002ce7420>: { s: "PersistentVolumeClaims [csi-hostpathn7ls6] not all in phase Bound within 5m0s", } PersistentVolumeClaims [csi-hostpathn7ls6] not all in phase Bound within 5m0s occurred
[AfterEach] [Testpattern: Dynamic PV (default fs)] subPath
test/e2e/framework/framework.go:152
STEP: Collecting events from namespace "provisioning-6474".
STEP: Found 51 events.
Nov 14 04:58:40.110: INFO: At 2019-11-14 04:53:37 +0000 UTC - event for csi-hostpath-attacher: {statefulset-controller } SuccessfulCreate: create Pod csi-hostpath-attacher-0 in StatefulSet csi-hostpath-attacher successful
Nov 14 04:58:40.110: INFO: At 2019-11-14 04:53:37 +0000 UTC - event for csi-hostpath-provisioner: {statefulset-controller } SuccessfulCreate: create Pod csi-hostpath-provisioner-0 in StatefulSet csi-hostpath-provisioner successful
Nov 14 04:58:40.110: INFO: At 2019-11-14 04:53:37 +0000 UTC - event for csi-hostpath-resizer: {statefulset-controller } SuccessfulCreate: create Pod csi-hostpath-resizer-0 in StatefulSet csi-hostpath-resizer successful
Nov 14 04:58:40.110: INFO: At 2019-11-14 04:53:37 +0000 UTC - event for csi-hostpathplugin: {statefulset-controller } SuccessfulCreate: create Pod csi-hostpathplugin-0 in StatefulSet csi-hostpathplugin successful
Nov 14 04:58:40.110: INFO: At 2019-11-14 04:53:37 +0000 UTC - event for csi-hostpathplugin-0: {kubelet k8s-agentpool-23171212-vmss000000} PodFitsHostPorts: Predicate PodFitsHostPorts failed
Nov 14 04:58:40.110: INFO: At 2019-11-14 04:53:37 +0000 UTC - event for csi-snapshotter: {statefulset-controller } SuccessfulCreate: create Pod csi-snapshotter-0 in StatefulSet csi-snapshotter successful
Nov 14 04:58:40.110: INFO: At 2019-11-14 04:53:38 +0000 UTC - event for csi-hostpathn7ls6: {persistentvolume-controller } ExternalProvisioning: waiting for a volume to be created, either by external provisioner "csi-hostpath-provisioning-6474" or manually created by system administrator
Nov 14 04:58:40.110: INFO: At 2019-11-14 04:53:40 +0000 UTC - event for csi-hostpath-attacher-0: {kubelet k8s-agentpool-23171212-vmss000000} Pulling: Pulling image "quay.io/k8scsi/csi-attacher:v1.2.0"
Nov 14 04:58:40.110: INFO: At 2019-11-14 04:53:41 +0000 UTC - event for csi-hostpath-attacher-0: {kubelet k8s-agentpool-23171212-vmss000000} Pulled: Successfully pulled image "quay.io/k8scsi/csi-attacher:v1.2.0"
Nov 14 04:58:40.110: INFO: At 2019-11-14 04:53:41 +0000 UTC - event for csi-hostpath-attacher-0: {kubelet k8s-agentpool-23171212-vmss000000} Created: Created container csi-attacher
Nov 14 04:58:40.110: INFO: At 2019-11-14 04:53:41 +0000 UTC - event for csi-hostpath-provisioner-0: {kubelet k8s-agentpool-23171212-vmss000000} Pulling: Pulling image "quay.io/k8scsi/csi-provisioner:v1.4.0-rc1"
Nov 14 04:58:40.110: INFO: At 2019-11-14 04:53:41 +0000 UTC - event for csi-hostpath-resizer-0: {kubelet k8s-agentpool-23171212-vmss000000} Pulling: Pulling image "quay.io/k8scsi/csi-resizer:v0.2.0"
Nov 14 04:58:40.110: INFO: At 2019-11-14 04:53:41 +0000 UTC - event for csi-snapshotter-0: {kubelet k8s-agentpool-23171212-vmss000000} Pulling: Pulling image "quay.io/k8scsi/csi-snapshotter:v2.0.0-rc1"
Nov 14 04:58:40.110: INFO: At 2019-11-14 04:53:42 +0000 UTC - event for csi-hostpath-attacher-0: {kubelet k8s-agentpool-23171212-vmss000000} Started: Started container csi-attacher
Nov 14 04:58:40.110: INFO: At 2019-11-14 04:53:43 +0000 UTC - event for csi-snapshotter-0: {kubelet k8s-agentpool-23171212-vmss000000} Pulled: Successfully pulled image "quay.io/k8scsi/csi-snapshotter:v2.0.0-rc1"
Nov 14 04:58:40.110: INFO: At 2019-11-14 04:53:44 +0000 UTC - event for csi-hostpath-resizer-0: {kubelet k8s-agentpool-23171212-vmss000000} Pulled: Successfully pulled image "quay.io/k8scsi/csi-resizer:v0.2.0"
Nov 14 04:58:40.110: INFO: At 2019-11-14 04:53:44 +0000 UTC - event for csi-snapshotter-0: {kubelet k8s-agentpool-23171212-vmss000000} Created: Created container csi-snapshotter
Nov 14 04:58:40.110: INFO: At 2019-11-14 04:53:45 +0000 UTC - event for csi-hostpath-provisioner-0: {kubelet k8s-agentpool-23171212-vmss000000} Pulled: Successfully pulled image "quay.io/k8scsi/csi-provisioner:v1.4.0-rc1"
Nov 14 04:58:40.110: INFO: At 2019-11-14 04:53:45 +0000 UTC - event for csi-hostpath-resizer-0: {kubelet k8s-agentpool-23171212-vmss000000} Started: Started container csi-resizer
Nov 14 04:58:40.110: INFO: At 2019-11-14 04:53:45 +0000 UTC - event for csi-hostpath-resizer-0: {kubelet k8s-agentpool-23171212-vmss000000} Created: Created container csi-resizer
Nov 14 04:58:40.110: INFO: At 2019-11-14 04:53:45 +0000 UTC - event for csi-snapshotter-0: {kubelet k8s-agentpool-23171212-vmss000000} Started: Started container csi-snapshotter
Nov 14 04:58:40.110: INFO: At 2019-11-14 04:53:46 +0000 UTC - event for csi-hostpath-provisioner-0: {kubelet k8s-agentpool-23171212-vmss000000} Started: Started container csi-provisioner
Nov 14 04:58:40.110: INFO: At 2019-11-14 04:53:46 +0000 UTC - event for csi-hostpath-provisioner-0: {kubelet k8s-agentpool-23171212-vmss000000} Created: Created container csi-provisioner
Nov 14 04:58:40.110: INFO: At 2019-11-14 04:53:57 +0000 UTC - event for csi-hostpathplugin: {statefulset-controller } RecreatingFailedPod: StatefulSet provisioning-6474/csi-hostpathplugin is recreating failed Pod csi-hostpathplugin-0
Nov 14 04:58:40.110: INFO: At 2019-11-14 04:53:57 +0000 UTC - event for csi-hostpathplugin: {statefulset-controller } SuccessfulDelete: delete Pod csi-hostpathplugin-0 in StatefulSet csi-hostpathplugin successful
Nov 14 04:58:40.110: INFO: At 2019-11-14 04:53:57 +0000 UTC - event for csi-hostpathplugin-0: {kubelet k8s-agentpool-23171212-vmss000000} PodFitsHostPorts: Predicate PodFitsHostPorts failed
Nov 14 04:58:40.110: INFO: At 2019-11-14 04:54:04 +0000 UTC - event for csi-hostpathplugin: {statefulset-controller } FailedCreate: create Pod csi-hostpathplugin-0 in StatefulSet csi-hostpathplugin failed error: The POST operation against Pod could not be completed at this time, please try again.
Nov 14 04:58:40.110: INFO: At 2019-11-14 04:54:04 +0000 UTC - event for csi-hostpathplugin-0: {kubelet k8s-agentpool-23171212-vmss000000} PodFitsHostPorts: Predicate PodFitsHostPorts failed
[... the identical PodFitsHostPorts event for csi-hostpathplugin-0 recurs at 04:54:14, 04:54:21, 04:54:33, 04:54:43, 04:54:54, 04:55:04, 04:55:15, 04:55:26, 04:55:34, 04:55:46, 04:56:04, 04:56:13, 04:56:40, 04:56:54, 04:57:09, 04:57:14, 04:57:34, 04:57:44, 04:57:50, 04:58:04, 04:58:15, 04:58:23, and 04:58:34 ...]
Nov 14 04:58:40.164: INFO: POD NODE PHASE GRACE CONDITIONS
Nov 14 04:58:40.164: INFO: csi-hostpath-attacher-0 k8s-agentpool-23171212-vmss000000 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-11-14 04:53:37 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-11-14 04:53:42 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-11-14 04:53:42 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-11-14 04:53:37 +0000 UTC }]
Nov 14 04:58:40.164: INFO: csi-hostpath-provisioner-0 k8s-agentpool-23171212-vmss000000 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-11-14 04:53:37 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-11-14 04:53:47 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-11-14 04:53:47 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-11-14 04:53:37 +0000 UTC }]
Nov 14 04:58:40.164: INFO: csi-hostpath-resizer-0 k8s-agentpool-23171212-vmss000000 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-11-14 04:53:38 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-11-14 04:53:46 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-11-14 04:53:46 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-11-14 04:53:38 +0000 UTC }]
Nov 14 04:58:40.164: INFO: csi-hostpathplugin-0 k8s-agentpool-23171212-vmss000000 Pending []
Nov 14 04:58:40.164: INFO: csi-snapshotter-0 k8s-agentpool-23171212-vmss000000 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-11-14 04:53:38 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-11-14 04:53:46 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-11-14 04:53:46 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-11-14 04:53:38 +0000 UTC }]
Nov 14 04:58:40.164: INFO:
Nov 14 04:58:40.276: INFO: Logging node info for node k8s-agentpool-23171212-vmss000000
Nov 14 04:58:40.328: INFO: Node Info: &Node{ObjectMeta:{k8s-agentpool-23171212-vmss000000 /api/v1/nodes/k8s-agentpool-23171212-vmss000000 0f3bbebc-9d46-4ddd-a1dc-c93db8b52883 33115 0 2019-11-14 04:40:04 +0000 UTC <nil> <nil> map[agentpool:agentpool beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:Standard_D4s_v3 beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:westus2 failure-domain.beta.kubernetes.io/zone:westus2-1 kubernetes.azure.com/cluster:kubetest-9c63b39e-0695-11ea-a4cc-c60aac250e75 kubernetes.azure.com/role:agent kubernetes.io/arch:amd64 
kubernetes.io/hostname:k8s-agentpool-23171212-vmss000000 kubernetes.io/os:linux kubernetes.io/role:agent node-role.kubernetes.io/agent: storageprofile:managed storagetier:Premium_LRS] map[csi.volume.kubernetes.io/nodeid:{"csi-hostpath-provisioning-2202":"k8s-agentpool-23171212-vmss000000","csi-hostpath-provisioning-6454":"k8s-agentpool-23171212-vmss000000","csi-hostpath-provisioning-8364":"k8s-agentpool-23171212-vmss000000","csi-hostpath-volume-8403":"k8s-agentpool-23171212-vmss000000","csi-hostpath-volume-expand-1206":"k8s-agentpool-23171212-vmss000000","csi-hostpath-volume-expand-2585":"k8s-agentpool-23171212-vmss000000","csi-hostpath-volume-expand-5498":"k8s-agentpool-23171212-vmss000000","csi-hostpath-volume-expand-6633":"k8s-agentpool-23171212-vmss000000","csi-mock-csi-mock-volumes-4558":"csi-mock-csi-mock-volumes-4558","csi-mock-csi-mock-volumes-6397":"csi-mock-csi-mock-volumes-6397","csi-mock-csi-mock-volumes-7486":"csi-mock-csi-mock-volumes-7486","csi-mock-csi-mock-volumes-7581":"csi-mock-csi-mock-volumes-7581","csi-mock-csi-mock-volumes-8512":"csi-mock-csi-mock-volumes-8512","csi-mock-csi-mock-volumes-9601":"csi-mock-csi-mock-volumes-9601"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] []},Spec:NodeSpec{PodCIDR:10.244.4.0/24,DoNotUse_ExternalID:,ProviderID:azure:///subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-9c63b39e-0695-11ea-a4cc-c60aac250e75/providers/Microsoft.Compute/virtualMachineScaleSets/k8s-agentpool-23171212-vmss/virtualMachines/0,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.4.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{4 0} {<nil>} 4 DecimalSI},ephemeral-storage: {{31036776448 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{16797569024 0} {<nil>} 16403876Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{4 0} {<nil>} 4 DecimalSI},ephemeral-storage: {{27933098757 0} {<nil>} 27933098757 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{16011137024 0} {<nil>} 15635876Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2019-11-14 04:40:48 +0000 UTC,LastTransitionTime:2019-11-14 04:40:48 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2019-11-14 04:58:08 +0000 UTC,LastTransitionTime:2019-11-14 04:39:50 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2019-11-14 04:58:08 +0000 UTC,LastTransitionTime:2019-11-14 04:39:50 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2019-11-14 04:58:08 +0000 UTC,LastTransitionTime:2019-11-14 04:39:50 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2019-11-14 04:58:08 +0000 UTC,LastTransitionTime:2019-11-14 04:40:22 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.248.0.4,},NodeAddress{Type:Hostname,Address:k8s-agentpool-23171212-vmss000000,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:359d6aea81114a07a8070169aad06c4a,SystemUUID:A77EC1C1-102D-514B-A3FC-E5E916EF17BD,BootID:fc99ebb5-9bcd-41e5-aad2-849e47da2eea,KernelVersion:4.15.0-1063-azure,OSImage:Ubuntu 16.04.6 LTS,ContainerRuntimeVersion:docker://3.0.6,KubeletVersion:v1.16.4-beta.0.1+d70a3ca08fe72a,KubeProxyVersion:v1.16.4-beta.0.1+d70a3ca08fe72a,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8sprow.azurecr.io/hyperkube-amd64@sha256:4c04f9ab0fa34bcbcb8ebfbced912f9b998c5d9c090fafdca92911d124fa339b k8sprow.azurecr.io/hyperkube-amd64:azure-e2e-1194831241233305600-197629b6],SizeBytes:604811790,},ContainerImage{Names:[gcr.io/google-samples/gb-frontend@sha256:35cb427341429fac3df10ff74600ea73e8ec0754d78f9ce89e0b4f3d70d53ba6 gcr.io/google-samples/gb-frontend:v6],SizeBytes:373099368,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/volume/nfs@sha256:c2ad734346f608a5f7d69cfded93c4e8094069320657bd372d12ba21dea3ea71 gcr.io/kubernetes-e2e-test-images/volume/nfs:1.0],SizeBytes:225358913,},ContainerImage{Names:[httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a httpd:2.4.39-alpine],SizeBytes:126894770,},ContainerImage{Names:[httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 httpd:2.4.38-alpine],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/kubernetes-dashboard-amd64@sha256:0ae6b69432e78069c5ce2bcde0fe409c5c4d6f0f4d9cd50a17974fea38898747 k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.1],SizeBytes:121711221,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/agnhost@sha256:4057a5580c7b59c4fe10d8ab2732c9dec35eea80fd41f7bafc7bd5acc7edf727 gcr.io/kubernetes-e2e-test-images/agnhost:2.6],SizeBytes:57345321,},ContainerImage{Names:[quay.io/k8scsi/csi-provisioner@sha256:0efcb424f1dde9b9fb11a1a14f2e48ab47e1c3f08bc3a929990dcfcb1f7ab34f quay.io/k8scsi/csi-provisioner:v1.4.0-rc1],SizeBytes:54431016,},ContainerImage{Names:[quay.io/k8scsi/csi-snapshotter@sha256:e3d3e742e32d00488fdb401045b9b1d033d7ca0ab6e760f77b24750fc95e5f70 quay.io/k8scsi/csi-snapshotter:v2.0.0-rc1],SizeBytes:51703561,},ContainerImage{Names:[k8s.gcr.io/ip-masq-agent-amd64@sha256:269e0fb9d53fd37f7a135d6a55ea265a67279ba218aa148323f015cf70167340 k8s.gcr.io/ip-masq-agent-amd64:v2.3.0],SizeBytes:50144412,},ContainerImage{Names:[quay.io/k8scsi/csi-attacher@sha256:26fccd7a99d973845df1193b46ebdcc6ab8dc5f6e6be319750c471fce1742d13 quay.io/k8scsi/csi-attacher:v1.2.0],SizeBytes:46226754,},ContainerImage{Names:[quay.io/k8scsi/csi-attacher@sha256:0aba670b4d9d6b2e720bbf575d733156c676b693ca26501235444490300db838 quay.io/k8scsi/csi-attacher:v1.1.0],SizeBytes:42839085,},ContainerImage{Names:[quay.io/k8scsi/csi-resizer@sha256:7d46fb6eb8b890dc546029d1565d502b4a1d974d33625c6ee2bc7991b77fc1a1 quay.io/k8scsi/csi-resizer:v0.2.0],SizeBytes:42817100,},ContainerImage{Names:[quay.io/k8scsi/csi-resizer@sha256:f315c9042e56def3c05c6b04fe79ec9da6d39ddc557ca365a76cf35964ea08b6 quay.io/k8scsi/csi-resizer:v0.1.0],SizeBytes:42623056,},ContainerImage{Names:[k8s.gcr.io/coredns@sha256:e83beb5e43f8513fa735e77ffc5859640baea30a882a11cc75c4c3244a737d3c k8s.gcr.io/coredns:1.5.0],SizeBytes:42488424,},ContainerImage{Names:[redis@sha256:50899ea1ceed33fa03232f3ac57578a424faa1742c1ac9c7a7bdb95cdf19b858 
redis:5.0.5-alpine],SizeBytes:29331594,},ContainerImage{Names:[quay.io/k8scsi/hostpathplugin@sha256:b4826e492fc1762fceaf9726f41575ca0a4567864d3d235da874818de18039de quay.io/k8scsi/hostpathplugin:v1.2.0-rc5],SizeBytes:28761497,},ContainerImage{Names:[mcr.microsoft.com/k8s/flexvolume/keyvault-flexvolume@sha256:4fd30d43947d4a54fc89ead7985beecfd3c9b2a93a0655a373b1608ab90bd5af mcr.microsoft.com/k8s/flexvolume/keyvault-flexvolume:v0.0.7],SizeBytes:22909487,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/echoserver@sha256:e9ba514b896cdf559eef8788b66c2c3ee55f3572df617647b4b0d8b6bf81cf19 gcr.io/kubernetes-e2e-test-images/echoserver:2.2],SizeBytes:21692741,},ContainerImage{Names:[quay.io/k8scsi/mock-driver@sha256:e0eed916b7d970bad2b7d9875f9ad16932f987f0f3d91ec5d86da68b0b5cc9d1 quay.io/k8scsi/mock-driver:v2.1.0],SizeBytes:16226335,},ContainerImage{Names:[nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 nginx:1.14-alpine],SizeBytes:16032814,},ContainerImage{Names:[quay.io/k8scsi/csi-node-driver-registrar@sha256:13daf82fb99e951a4bff8ae5fc7c17c3a8fe7130be6400990d8f6076c32d4599 quay.io/k8scsi/csi-node-driver-registrar:v1.1.0],SizeBytes:15815995,},ContainerImage{Names:[quay.io/k8scsi/livenessprobe@sha256:dde617756e0f602adc566ab71fd885f1dad451ad3fb063ac991c95a2ff47aea5 quay.io/k8scsi/livenessprobe:v1.1.0],SizeBytes:14967303,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/dnsutils@sha256:2abeee84efb79c14d731966e034af33bf324d3b26ca28497555511ff094b3ddd gcr.io/kubernetes-e2e-test-images/dnsutils:1.1],SizeBytes:9349974,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nonewprivs@sha256:10066e9039219449fe3c81f38fe01928f87914150768ab81b62a468e51fa7411 gcr.io/kubernetes-e2e-test-images/nonewprivs:1.0],SizeBytes:6757579,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nautilus@sha256:33a732d4c42a266912a5091598a0f07653c9134db4b8d571690d8afd509e0bfc gcr.io/kubernetes-e2e-test-images/nautilus:1.0],SizeBytes:4753501,},ContainerImage{Names:[gcr.io/authenticated-image-pulling/alpine@sha256:7ff177862cb50c602bfe81f805969412e619c054a2bbead977d0c276988aa4a0 gcr.io/authenticated-image-pulling/alpine:3.7],SizeBytes:4206620,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/mounttest@sha256:c0bd6f0755f42af09a68c9a47fb993136588a76b3200ec305796b60d629d85d2 gcr.io/kubernetes-e2e-test-images/mounttest:1.0],SizeBytes:1563521,},ContainerImage{Names:[mcr.microsoft.com/k8s/flexvolume/blobfuse-flexvolume@sha256:23d8c6033f02a1ecad05127ebdc931bb871264228661bc122704b0974e4d9fdd mcr.microsoft.com/k8s/flexvolume/blobfuse-flexvolume:1.0.8],SizeBytes:1159025,},ContainerImage{Names:[busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 busybox:1.29],SizeBytes:1154361,},ContainerImage{Names:[k8s.gcr.io/pause-amd64@sha256:59eec8837a4d942cc19a52b8c09ea75121acc38114a2c68b98983ce9356b8610 k8s.gcr.io/pause@sha256:f78411e19d84a252e53bff71a4407a5686c46983a2c2eeed83929b888179acea k8s.gcr.io/pause-amd64:3.1 k8s.gcr.io/pause:3.1],SizeBytes:742472,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Nov 14 04:58:40.329: INFO: Logging kubelet events for node k8s-agentpool-23171212-vmss000000
Nov 14 04:58:40.384: INFO: Logging pods the kubelet thinks is on node k8s-agentpool-23171212-vmss000000
Nov 14 04:58:40.523: INFO: webserver-7c69b6748-n6bfb started at 2019-11-14 04:58:01 +0000 UTC (0+1 container statuses recorded)
Nov 14 04:58:40.523: INFO: Container httpd ready: true, restart count 0
Nov 14 04:58:40.523: INFO: webserver-7bd9679d84-2jh6p started at 2019-11-14 04:58:10 +0000 UTC (0+1 container statuses recorded)
Nov 14 04:58:40.523: INFO: Container httpd ready: true, restart count 0
Nov 14 04:58:40.523: INFO: webserver-7bd9679d84-nndg5 started at 2019-11-14 04:58:09 +0000 UTC (0+1 container statuses recorded)
Nov 14 04:58:40.523: INFO: Container httpd ready: true, restart count 0
Nov 14 04:58:40.523: INFO: csi-snapshotter-0 started at 2019-11-14 04:51:54 +0000 UTC (0+1 container statuses recorded)
Nov 14 04:58:40.523: INFO: Container csi-snapshotter ready: false, restart count 0
Nov 14 04:58:40.523: INFO: csi-hostpath-resizer-0 started at 2019-11-14 04:50:54 +0000 UTC (0+1 container statuses recorded)
Nov 14 04:58:40.523: INFO: Container csi-resizer ready: true, restart count 0
Nov 14 04:58:40.523: INFO: pvc-datasource-writer-7rbg4 started at 2019-11-14 04:50:54 +0000 UTC (0+1 container statuses recorded)
Nov 14 04:58:40.523: INFO: Container volume-tester ready: false, restart count 0
Nov 14 04:58:40.523: INFO: csi-hostpath-attacher-0 started at 2019-11-14 04:51:53 +0000 UTC (0+1 container statuses recorded)
Nov 14 04:58:40.523: INFO: Container csi-attacher ready: false, restart count 0
Nov 14 04:58:40.523: INFO: csi-hostpath-provisioner-0 started at 2019-11-14 04:51:54 +0000 UTC (0+1 container statuses recorded)
Nov 14 04:58:40.523: INFO: Container csi-provisioner ready: true, restart count 0
Nov 14 04:58:40.523: INFO: csi-hostpath-resizer-0 started at 2019-11-14 04:51:54 +0000 UTC (0+1 container statuses recorded)
Nov 14 04:58:40.523: INFO: Container csi-resizer ready: false, restart count 0
Nov 14 04:58:40.523: INFO: csi-hostpath-attacher-0 started at 2019-11-14 04:53:37 +0000 UTC (0+1 container statuses recorded)
Nov 14 04:58:40.523: INFO: Container csi-attacher ready: true, restart count 0
Nov 14 04:58:40.523: INFO: azure-ip-masq-agent-dgg69 started at 2019-11-14 04:40:27 +0000 UTC (0+1 container statuses recorded)
Nov 14 04:58:40.523: INFO: Container azure-ip-masq-agent ready: true, restart count 0
Nov 14 04:58:40.523: INFO: csi-hostpath-attacher-0 started at 2019-11-14 04:50:53 +0000 UTC (0+1 container statuses recorded)
Nov 14 04:58:40.523: INFO: Container csi-attacher ready: true, restart count 0
Nov 14 04:58:40.523: INFO: redis-slave-68cd9c48b4-glss4 started at 2019-11-14 04:55:39 +0000 UTC (0+1 container statuses recorded)
Nov 14 04:58:40.523: INFO: Container slave ready: true, restart count 0
Nov 14 04:58:40.523: INFO: webserver-deployment-595b5b9587-vkppd started at 2019-11-14 04:56:52 +0000 UTC (0+1 container statuses recorded)
Nov 14 04:58:40.523: INFO: Container httpd ready: true, restart count 0
Nov 14 04:58:40.523: INFO: csi-hostpathplugin-0 started at 2019-11-14 04:58:35 +0000 UTC (0+0 container statuses recorded)
Nov 14 04:58:40.523: INFO: csi-hostpathplugin-0 started at 2019-11-14 04:55:47 +0000 UTC (0+3 container statuses recorded)
Nov 14 04:58:40.523: INFO: Container hostpath ready: false, restart count 0
Nov 14 04:58:40.523: INFO: Container liveness-probe ready: false, restart count 0
Nov 14 04:58:40.523: INFO: Container node-driver-registrar ready: false, restart count 0
Nov 14 04:58:40.523: INFO: webserver-79f599c558-2q9b6 started at 2019-11-14 04:57:40 +0000 UTC (0+1 container statuses recorded)
Nov 14 04:58:40.523: INFO: Container httpd ready: true, restart count 0
Nov 14 04:58:40.523: INFO: kube-proxy-cdq9f started at 2019-11-14 04:40:27 +0000 UTC (0+1 container statuses recorded)
Nov 14 04:58:40.523: INFO: Container kube-proxy ready: true, restart count 0
Nov 14 04:58:40.523: INFO: csi-snapshotter-0 started at 2019-11-14 04:53:38 +0000 UTC (0+1 container statuses recorded)
Nov 14 04:58:40.523: INFO: Container csi-snapshotter ready: true, restart count 0
Nov 14 04:58:40.523: INFO: ss2-1 started at 2019-11-14 04:53:07 +0000 UTC (0+1 container statuses recorded)
Nov 14 04:58:40.523: INFO: Container webserver ready: true, restart count 0
Nov 14 04:58:40.523: INFO: csi-snapshotter-0 started at 2019-11-14 04:50:54 +0000 UTC (0+1 container statuses recorded)
Nov 14 04:58:40.523: INFO: Container csi-snapshotter ready: true, restart count 0
Nov 14 04:58:40.523: INFO: webserver-deployment-595b5b9587-tv8l8 started at 2019-11-14 04:56:52 +0000 UTC (0+1 container statuses recorded)
Nov 14 04:58:40.523: INFO: Container httpd ready: true, restart count 0
Nov 14 04:58:40.523: INFO: csi-hostpath-provisioner-0 started at 2019-11-14 04:53:37 +0000 UTC (0+1 container statuses recorded)
Nov 14 04:58:40.523: INFO: Container csi-provisioner ready: true, restart count 0
Nov 14 04:58:40.523: INFO: pod-0 started at 2019-11-14 04:56:54 +0000 UTC (0+1 container statuses recorded)
Nov 14 04:58:40.523: INFO: Container busybox ready: false, restart count 0
Nov 14 04:58:40.523: INFO: csi-hostpathplugin-0 started at 2019-11-14 04:58:34 +0000 UTC (0+0 container statuses recorded)
Nov 14 04:58:40.523: INFO: frontend-79ff456bff-9d685 started at 2019-11-14 04:55:38 +0000 UTC (0+1 container statuses recorded)
Nov 14 04:58:40.523: INFO: Container php-redis ready: true, restart count 0
Nov 14 04:58:40.523: INFO: netserver-0 started at 2019-11-14 04:55:43 +0000 UTC (0+1 container statuses recorded)
Nov 14 04:58:40.523: INFO: Container webserver ready: true, restart count 0
Nov 14 04:58:40.523: INFO: ss2-1 started at 2019-11-14 04:57:14 +0000 UTC (0+1 container statuses recorded)
Nov 14 04:58:40.523: INFO: Container webserver ready: true, restart count 0
Nov 14 04:58:40.523: INFO: blobfuse-flexvol-installer-6xhz6 started at 2019-11-14 04:40:49 +0000 UTC (0+1 container statuses recorded)
Nov 14 04:58:40.523: INFO: Container blobfuse-flexvol-installer ready: true, restart count 0
Nov 14 04:58:40.523: INFO: webserver-deployment-595b5b9587-djv42 started at 2019-11-14 04:56:52 +0000 UTC (0+1 container statuses recorded)
Nov 14 04:58:40.523: INFO: Container httpd ready: true, restart count 0
Nov 14 04:58:40.523: INFO: webserver-7c69b6748-98qtk started at 2019-11-14 04:58:01 +0000 UTC (0+1 container statuses recorded)
Nov 14 04:58:40.523: INFO: Container httpd ready: true, restart count 0
Nov 14 04:58:40.523: INFO: csi-hostpath-resizer-0 started at 2019-11-14 04:53:38 +0000 UTC (0+1 container statuses recorded)
Nov 14 04:58:40.523: INFO: Container csi-resizer ready: true, restart count 0
Nov 14 04:58:40.523: INFO: pod-secrets-a2448efc-78fc-4e2d-8cd5-dce25428dfca started at 2019-11-14 04:58:07 +0000 UTC (0+1 container statuses recorded)
Nov 14 04:58:40.523: INFO: Container secret-volume-test ready: false, restart count 0
Nov 14 04:58:40.523: INFO: coredns-87f5d796-k7mr9 started at 2019-11-14 04:40:27 +0000 UTC (0+1 container statuses recorded)
Nov 14 04:58:40.523: INFO: Container coredns ready: true, restart count 0
Nov 14 04:58:40.523: INFO: webserver-deployment-595b5b9587-r5wzq started at 2019-11-14 04:56:52 +0000 UTC (0+1 container statuses recorded)
Nov 14 04:58:40.523: INFO: Container httpd ready: true, restart count 0
Nov 14 04:58:40.523: INFO: csi-hostpath-provisioner-0 started at 2019-11-14 04:50:53 +0000 UTC (0+1 container statuses recorded)
Nov 14 04:58:40.523: INFO: Container csi-provisioner ready: true,
restart count 0 Nov 14 04:58:40.523: INFO: webserver-deployment-595b5b9587-xm4p8 started at 2019-11-14 04:56:52 +0000 UTC (0+1 container statuses recorded) Nov 14 04:58:40.523: INFO: Container httpd ready: true, restart count 0 Nov 14 04:58:40.523: INFO: keyvault-flexvolume-ljqsq started at 2019-11-14 04:40:49 +0000 UTC (0+1 container statuses recorded) Nov 14 04:58:40.523: INFO: Container keyvault-flexvolume ready: true, restart count 0 Nov 14 04:58:40.523: INFO: kubernetes-dashboard-65966766b9-b8ps7 started at 2019-11-14 04:40:49 +0000 UTC (0+1 container statuses recorded) Nov 14 04:58:40.523: INFO: Container kubernetes-dashboard ready: true, restart count 0 W1114 04:58:40.582086 92619 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Nov 14 04:58:41.147: INFO: Latency metrics for node k8s-agentpool-23171212-vmss000000 Nov 14 04:58:41.147: INFO: Logging node info for node k8s-agentpool-23171212-vmss000001 Nov 14 04:58:41.204: INFO: Node Info: &Node{ObjectMeta:{k8s-agentpool-23171212-vmss000001 /api/v1/nodes/k8s-agentpool-23171212-vmss000001 e9c1f552-b95b-4548-9ecd-37a7f1925e75 32660 0 2019-11-14 04:40:09 +0000 UTC <nil> <nil> map[agentpool:agentpool beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:Standard_D4s_v3 beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:westus2 failure-domain.beta.kubernetes.io/zone:westus2-2 kubernetes.azure.com/cluster:kubetest-9c63b39e-0695-11ea-a4cc-c60aac250e75 kubernetes.azure.com/role:agent kubernetes.io/arch:amd64 kubernetes.io/hostname:k8s-agentpool-23171212-vmss000001 kubernetes.io/os:linux kubernetes.io/role:agent node-role.kubernetes.io/agent: storageprofile:managed storagetier:Premium_LRS] map[csi.volume.kubernetes.io/nodeid:{"csi-hostpath-ephemeral-6971":"k8s-agentpool-23171212-vmss000001","csi-hostpath-provisioning-3033":"k8s-agentpool-23171212-vmss000001","csi-hostpath-provisioning-3310":"k8s-agentpool-23171212-vmss000001","csi-hostpath-provisioning-4400":"k8s-agentpool-23171212-vmss000001","csi-hostpath-volume-expand-2485":"k8s-agentpool-23171212-vmss000001","csi-mock-csi-mock-volumes-3324":"csi-mock-csi-mock-volumes-3324","csi-mock-csi-mock-volumes-3770":"csi-mock-csi-mock-volumes-3770","csi-mock-csi-mock-volumes-9859":"csi-mock-csi-mock-volumes-9859"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] []},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUse_ExternalID:,ProviderID:azure:///subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-9c63b39e-0695-11ea-a4cc-c60aac250e75/providers/Microsoft.Compute/virtualMachineScaleSets/k8s-agentpool-23171212-vmss/virtualMachines/1,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{4 0} {<nil>} 4 DecimalSI},ephemeral-storage: {{31036776448 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{16797569024 0} {<nil>} 16403876Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{4 0} {<nil>} 4 DecimalSI},ephemeral-storage: {{27933098757 0} {<nil>} 27933098757 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{16011137024 0} {<nil>} 15635876Ki BinarySI},pods: {{110 0} {<nil>} 110 
DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2019-11-14 04:40:48 +0000 UTC,LastTransitionTime:2019-11-14 04:40:48 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2019-11-14 04:57:44 +0000 UTC,LastTransitionTime:2019-11-14 04:40:09 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2019-11-14 04:57:44 +0000 UTC,LastTransitionTime:2019-11-14 04:40:09 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2019-11-14 04:57:44 +0000 UTC,LastTransitionTime:2019-11-14 04:40:09 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2019-11-14 04:57:44 +0000 UTC,LastTransitionTime:2019-11-14 04:40:22 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.248.0.5,},NodeAddress{Type:Hostname,Address:k8s-agentpool-23171212-vmss000001,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:639707efd7a74ac4bca6a608e99a6715,SystemUUID:CACA620B-0C7C-7040-A716-91F766CA5A2F,BootID:9fabe02f-4e56-4162-b5c5-2e2733911b4f,KernelVersion:4.15.0-1063-azure,OSImage:Ubuntu 16.04.6 LTS,ContainerRuntimeVersion:docker://3.0.6,KubeletVersion:v1.16.4-beta.0.1+d70a3ca08fe72a,KubeProxyVersion:v1.16.4-beta.0.1+d70a3ca08fe72a,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8sprow.azurecr.io/hyperkube-amd64@sha256:4c04f9ab0fa34bcbcb8ebfbced912f9b998c5d9c090fafdca92911d124fa339b k8sprow.azurecr.io/hyperkube-amd64:azure-e2e-1194831241233305600-197629b6],SizeBytes:604811790,},ContainerImage{Names:[quay.io/kubernetes_incubator/nfs-provisioner@sha256:df762117e3c891f2d2ddff46ecb0776ba1f9f3c44cfd7739b0683bcd7a7954a8 quay.io/kubernetes_incubator/nfs-provisioner:v2.2.2],SizeBytes:391772778,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa k8s.gcr.io/etcd:3.3.15],SizeBytes:246640776,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/volume/nfs@sha256:c2ad734346f608a5f7d69cfded93c4e8094069320657bd372d12ba21dea3ea71 gcr.io/kubernetes-e2e-test-images/volume/nfs:1.0],SizeBytes:225358913,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0],SizeBytes:195659796,},ContainerImage{Names:[httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a httpd:2.4.39-alpine],SizeBytes:126894770,},ContainerImage{Names:[httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 httpd:2.4.38-alpine],SizeBytes:123781643,},ContainerImage{Names:[gcr.io/kubernetes-helm/tiller@sha256:f6d8f4ab9ba993b5f5b60a6edafe86352eabe474ffeb84cb6c79b8866dce45d1 gcr.io/kubernetes-helm/tiller:v2.11.0],SizeBytes:71821984,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/sample-apiserver@sha256:1bafcc6fb1aa990b487850adba9cadc020e42d7905aa8a30481182a477ba24b0 
gcr.io/kubernetes-e2e-test-images/sample-apiserver:1.10],SizeBytes:61365829,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/agnhost@sha256:4057a5580c7b59c4fe10d8ab2732c9dec35eea80fd41f7bafc7bd5acc7edf727 gcr.io/kubernetes-e2e-test-images/agnhost:2.6],SizeBytes:57345321,},ContainerImage{Names:[quay.io/k8scsi/csi-provisioner@sha256:0efcb424f1dde9b9fb11a1a14f2e48ab47e1c3f08bc3a929990dcfcb1f7ab34f quay.io/k8scsi/csi-provisioner:v1.4.0-rc1],SizeBytes:54431016,},ContainerImage{Names:[quay.io/k8scsi/csi-snapshotter@sha256:e3d3e742e32d00488fdb401045b9b1d033d7ca0ab6e760f77b24750fc95e5f70 quay.io/k8scsi/csi-snapshotter:v2.0.0-rc1],SizeBytes:51703561,},ContainerImage{Names:[k8s.gcr.io/ip-masq-agent-amd64@sha256:269e0fb9d53fd37f7a135d6a55ea265a67279ba218aa148323f015cf70167340 k8s.gcr.io/ip-masq-agent-amd64:v2.3.0],SizeBytes:50144412,},ContainerImage{Names:[quay.io/k8scsi/csi-attacher@sha256:26fccd7a99d973845df1193b46ebdcc6ab8dc5f6e6be319750c471fce1742d13 quay.io/k8scsi/csi-attacher:v1.2.0],SizeBytes:46226754,},ContainerImage{Names:[quay.io/k8scsi/csi-attacher@sha256:0aba670b4d9d6b2e720bbf575d733156c676b693ca26501235444490300db838 quay.io/k8scsi/csi-attacher:v1.1.0],SizeBytes:42839085,},ContainerImage{Names:[quay.io/k8scsi/csi-resizer@sha256:7d46fb6eb8b890dc546029d1565d502b4a1d974d33625c6ee2bc7991b77fc1a1 quay.io/k8scsi/csi-resizer:v0.2.0],SizeBytes:42817100,},ContainerImage{Names:[quay.io/k8scsi/csi-resizer@sha256:f315c9042e56def3c05c6b04fe79ec9da6d39ddc557ca365a76cf35964ea08b6 quay.io/k8scsi/csi-resizer:v0.1.0],SizeBytes:42623056,},ContainerImage{Names:[k8s.gcr.io/metrics-server-amd64@sha256:49a9f12f7067d11f42c803dbe61ed2c1299959ad85cb315b25ff7eef8e6b8892 k8s.gcr.io/metrics-server-amd64:v0.2.1],SizeBytes:42541759,},ContainerImage{Names:[gcr.io/google-containers/debian-base@sha256:6966a0aedd7592c18ff2dd803c08bd85780ee19f5e3a2e7cf908a4cd837afcde gcr.io/google-containers/debian-base:0.4.1],SizeBytes:42323657,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nonroot@sha256:d4ede5c74517090b6686219059118ed178cf4620f5db8781b32f806bb1e7395b gcr.io/kubernetes-e2e-test-images/nonroot:1.0],SizeBytes:42321438,},ContainerImage{Names:[redis@sha256:50899ea1ceed33fa03232f3ac57578a424faa1742c1ac9c7a7bdb95cdf19b858 redis:5.0.5-alpine],SizeBytes:29331594,},ContainerImage{Names:[quay.io/k8scsi/hostpathplugin@sha256:b4826e492fc1762fceaf9726f41575ca0a4567864d3d235da874818de18039de quay.io/k8scsi/hostpathplugin:v1.2.0-rc5],SizeBytes:28761497,},ContainerImage{Names:[mcr.microsoft.com/k8s/flexvolume/keyvault-flexvolume@sha256:4fd30d43947d4a54fc89ead7985beecfd3c9b2a93a0655a373b1608ab90bd5af mcr.microsoft.com/k8s/flexvolume/keyvault-flexvolume:v0.0.7],SizeBytes:22909487,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/echoserver@sha256:e9ba514b896cdf559eef8788b66c2c3ee55f3572df617647b4b0d8b6bf81cf19 gcr.io/kubernetes-e2e-test-images/echoserver:2.2],SizeBytes:21692741,},ContainerImage{Names:[quay.io/k8scsi/mock-driver@sha256:e0eed916b7d970bad2b7d9875f9ad16932f987f0f3d91ec5d86da68b0b5cc9d1 quay.io/k8scsi/mock-driver:v2.1.0],SizeBytes:16226335,},ContainerImage{Names:[nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 nginx:1.14-alpine],SizeBytes:16032814,},ContainerImage{Names:[quay.io/k8scsi/csi-node-driver-registrar@sha256:13daf82fb99e951a4bff8ae5fc7c17c3a8fe7130be6400990d8f6076c32d4599 
quay.io/k8scsi/csi-node-driver-registrar:v1.1.0],SizeBytes:15815995,},ContainerImage{Names:[quay.io/k8scsi/livenessprobe@sha256:dde617756e0f602adc566ab71fd885f1dad451ad3fb063ac991c95a2ff47aea5 quay.io/k8scsi/livenessprobe:v1.1.0],SizeBytes:14967303,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/dnsutils@sha256:2abeee84efb79c14d731966e034af33bf324d3b26ca28497555511ff094b3ddd gcr.io/kubernetes-e2e-test-images/dnsutils:1.1],SizeBytes:9349974,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nonewprivs@sha256:10066e9039219449fe3c81f38fe01928f87914150768ab81b62a468e51fa7411 gcr.io/kubernetes-e2e-test-images/nonewprivs:1.0],SizeBytes:6757579,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nautilus@sha256:33a732d4c42a266912a5091598a0f07653c9134db4b8d571690d8afd509e0bfc gcr.io/kubernetes-e2e-test-images/nautilus:1.0],SizeBytes:4753501,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/test-webserver@sha256:7f93d6e32798ff28bc6289254d0c2867fe2c849c8e46edc50f8624734309812e gcr.io/kubernetes-e2e-test-images/test-webserver:1.0],SizeBytes:4732240,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/mounttest@sha256:c0bd6f0755f42af09a68c9a47fb993136588a76b3200ec305796b60d629d85d2 gcr.io/kubernetes-e2e-test-images/mounttest:1.0],SizeBytes:1563521,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/mounttest-user@sha256:17319ca525ee003681fccf7e8c6b1b910ff4f49b653d939ac7f9b6e7c463933d gcr.io/kubernetes-e2e-test-images/mounttest-user:1.0],SizeBytes:1450451,},ContainerImage{Names:[mcr.microsoft.com/k8s/flexvolume/blobfuse-flexvolume@sha256:23d8c6033f02a1ecad05127ebdc931bb871264228661bc122704b0974e4d9fdd mcr.microsoft.com/k8s/flexvolume/blobfuse-flexvolume:1.0.8],SizeBytes:1159025,},ContainerImage{Names:[busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 busybox:1.29],SizeBytes:1154361,},ContainerImage{Names:[k8s.gcr.io/pause-amd64@sha256:59eec8837a4d942cc19a52b8c09ea75121acc38114a2c68b98983ce9356b8610 k8s.gcr.io/pause@sha256:f78411e19d84a252e53bff71a4407a5686c46983a2c2eeed83929b888179acea k8s.gcr.io/pause-amd64:3.1 k8s.gcr.io/pause:3.1],SizeBytes:742472,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Nov 14 04:58:41.205: INFO: Logging kubelet events for node k8s-agentpool-23171212-vmss000001
Nov 14 04:58:41.260: INFO: Logging pods the kubelet thinks is on node k8s-agentpool-23171212-vmss000001
Nov 14 04:58:41.374: INFO: frontend-79ff456bff-5dq96 started at 2019-11-14 04:55:39 +0000 UTC (0+1 container statuses recorded)
Nov 14 04:58:41.374: INFO: Container php-redis ready: false, restart count 0
Nov 14 04:58:41.374: INFO: kube-proxy-ng7z8 started at 2019-11-14 04:40:27 +0000 UTC (0+1 container statuses recorded)
Nov 14 04:58:41.374: INFO: Container kube-proxy ready: true, restart count 0
Nov 14 04:58:41.374: INFO: redis-slave-68cd9c48b4-pxnkq started at 2019-11-14 04:55:42 +0000 UTC (0+1 container statuses recorded)
Nov 14 04:58:41.374: INFO: Container slave ready: false, restart count 0
Nov 14 04:58:41.374: INFO: termination-message-containerc51b5896-be32-487f-bd02-2dc2a1b418e4 started at 2019-11-14 04:55:55 +0000 UTC (0+1 container statuses recorded)
Nov 14 04:58:41.374: INFO: Container termination-message-container ready: false, restart count 0
Nov 14 04:58:41.374: INFO: sample-webhook-deployment-86d95b659d-mh976 started at 2019-11-14 04:56:55 +0000 UTC (0+1 container statuses recorded)
Nov 14 04:58:41.374: INFO: Container sample-webhook ready: false, restart count 0
Nov 14 04:58:41.374: INFO: webserver-deployment-595b5b9587-g89fb started at 2019-11-14 04:57:02 +0000 UTC (0+1 container statuses recorded)
Nov 14 04:58:41.374: INFO: Container httpd ready: false, restart count 0
Nov 14 04:58:41.374: INFO: ss2-0 started at 2019-11-14 04:54:55 +0000 UTC (0+1 container statuses recorded)
Nov 14 04:58:41.374: INFO: Container webserver ready: true, restart count 0
Nov 14 04:58:41.374: INFO: hostexec-k8s-agentpool-23171212-vmss000001 started at 2019-11-14 04:53:08 +0000 UTC (0+1 container statuses recorded)
Nov 14 04:58:41.374: INFO: Container agnhost ready: true, restart count 0
Nov 14 04:58:41.374: INFO: ss2-2 started at 2019-11-14 04:58:18 +0000 UTC (0+1 container statuses recorded)
Nov 14 04:58:41.374: INFO: Container webserver ready: false, restart count 0
Nov 14 04:58:41.374: INFO: nfs-server started at 2019-11-14 04:56:19 +0000 UTC (0+1 container statuses recorded)
Nov 14 04:58:41.374: INFO: Container nfs-server ready: false, restart count 0
Nov 14 04:58:41.374: INFO: azure-ip-masq-agent-mcg7w started at 2019-11-14 04:40:26 +0000 UTC (0+1 container statuses recorded)
Nov 14 04:58:41.374: INFO: Container azure-ip-masq-agent ready: true, restart count 0
Nov 14 04:58:41.374: INFO: metrics-server-58ff8c5ddf-h7jqs started at 2019-11-14 04:40:50 +0000 UTC (0+1 container statuses recorded)
Nov 14 04:58:41.374: INFO: Container metrics-server ready: true, restart count 0
Nov 14 04:58:41.374: INFO: webserver-deployment-595b5b9587-5hh96 started at 2019-11-14 04:56:59 +0000 UTC (0+1 container statuses recorded)
Nov 14 04:58:41.374: INFO: Container httpd ready: false, restart count 0
Nov 14 04:58:41.374: INFO: local-client started at 2019-11-14 04:57:18 +0000 UTC (0+1 container statuses recorded)
Nov 14 04:58:41.374: INFO: Container local-client ready: false, restart count 0
Nov 14 04:58:41.374: INFO: csi-mockplugin-resizer-0 started at 2019-11-14 04:58:00 +0000 UTC (0+1 container statuses recorded)
Nov 14 04:58:41.374: INFO: Container csi-resizer ready: false, restart count 0
Nov 14 04:58:41.374: INFO: webserver-79f599c558-jt99l started at 2019-11-14 04:57:44 +0000 UTC (0+1 container statuses recorded)
Nov 14 04:58:41.374: INFO: Container httpd ready: false, restart count 0
Nov 14 04:58:41.374: INFO: blobfuse-flexvol-installer-ktdjj started at 2019-11-14 04:40:49 +0000 UTC (0+1 container statuses recorded)
Nov 14 04:58:41.374: INFO: Container blobfuse-flexvol-installer ready: true, restart count 0
Nov 14 04:58:41.374: INFO: tiller-deploy-7559b6b885-vkxml started at 2019-11-14 04:40:50 +0000 UTC (0+1 container statuses recorded)
Nov 14 04:58:41.374: INFO: Container tiller ready: true, restart count 0
Nov 14 04:58:41.374: INFO: pod-subpath-test-emptydir-xppt started at 2019-11-14 04:57:24 +0000 UTC (2+1 container statuses recorded)
Nov 14 04:58:41.374: INFO: Init container init-volume-emptydir-xppt ready: false, restart count 0
Nov 14 04:58:41.374: INFO: Init container test-init-volume-emptydir-xppt ready: false, restart count 0
Nov 14 04:58:41.374: INFO: Container test-container-subpath-emptydir-xppt ready: false, restart count 0
Nov 14 04:58:41.374: INFO: hostexec-k8s-agentpool-23171212-vmss000001 started at 2019-11-14 04:55:51 +0000 UTC (0+1 container statuses recorded)
Nov 14 04:58:41.374: INFO: Container agnhost ready: false, restart count 0
Nov 14 04:58:41.374: INFO: webserver-7bd9679d84-4b7f8 started at 2019-11-14 04:58:06 +0000 UTC (0+1 container statuses recorded)
Nov 14 04:58:41.374: INFO: Container httpd ready: false, restart count 0
Nov 14 04:58:41.374: INFO: pod-configmaps-ffb86827-d2ac-4af7-9284-06e52002c841 started at 2019-11-14 04:55:52 +0000 UTC (0+1 container statuses recorded)
Nov 14 04:58:41.374: INFO: Container configmap-volume-test ready: false, restart count 0
Nov 14 04:58:41.374: INFO: webserver-7c69b6748-75twh started at 2019-11-14 04:58:05 +0000 UTC (0+1 container statuses recorded)
Nov 14 04:58:41.374: INFO: Container httpd ready: false, restart count 0
Nov 14 04:58:41.374: INFO: external-provisioner-7pj8z started at 2019-11-14 04:55:50 +0000 UTC (0+1 container statuses recorded)
Nov 14 04:58:41.374: INFO: Container nfs-provisioner ready: false, restart count 0
Nov 14 04:58:41.374: INFO: webserver-deployment-595b5b9587-s5hd4 started at 2019-11-14 04:56:58 +0000 UTC (0+1 container statuses recorded)
Nov 14 04:58:41.374: INFO: Container httpd ready: false, restart count 0
Nov 14 04:58:41.374: INFO: webserver-deployment-595b5b9587-nfvk4 started at 2019-11-14 04:57:01 +0000 UTC (0+1 container statuses recorded)
Nov 14 04:58:41.374: INFO: Container httpd ready: false, restart count 0
Nov 14 04:58:41.374: INFO: external-provisioner-mc5xj started at 2019-11-14 04:58:39 +0000 UTC (0+1 container statuses recorded)
Nov 14 04:58:41.374: INFO: Container nfs-provisioner ready: false, restart count 0
Nov 14 04:58:41.374: INFO: var-expansion-9ffd9011-059a-4181-a993-56638aeb87e4 started at 2019-11-14 04:58:37 +0000 UTC (0+1 container statuses recorded)
Nov 14 04:58:41.374: INFO: Container dapi-container ready: false, restart count 0
Nov 14 04:58:41.374: INFO: keyvault-flexvolume-2g62m started at 2019-11-14 04:40:49 +0000 UTC (0+1 container statuses recorded)
Nov 14 04:58:41.374: INFO: Container keyvault-flexvolume ready: true, restart count 0
Nov 14 04:58:41.374: INFO: redis-master-6ff87f4db7-lf6hr started at 2019-11-14 04:55:41 +0000 UTC (0+1 container statuses recorded)
Nov 14 04:58:41.374: INFO: Container master ready: false, restart count 0
Nov 14 04:58:41.374: INFO: downwardapi-volume-02b05637-4cae-4f21-9317-3083e9c1a6af started at 2019-11-14 04:55:46 +0000 UTC (0+1 container statuses recorded)
Nov 14 04:58:41.374: INFO: Container client-container ready: false, restart count 0
Nov 14 04:58:41.374: INFO: without-label started at 2019-11-14 04:56:13 +0000 UTC (0+1 container statuses recorded)
Nov 14 04:58:41.374: INFO: Container without-label ready: false, restart count 0
Nov 14 04:58:41.374: INFO: webserver-deployment-595b5b9587-lrhtq started at 2019-11-14 04:56:56 +0000 UTC (0+1 container statuses recorded)
Nov 14 04:58:41.374: INFO: Container httpd ready: false, restart count 0
Nov 14 04:58:41.374: INFO: csi-mockplugin-0 started at 2019-11-14 04:57:57 +0000 UTC (0+3 container statuses recorded)
Nov 14 04:58:41.374: INFO: Container csi-provisioner ready: false, restart count 0
Nov 14 04:58:41.374: INFO: Container driver-registrar ready: false, restart count 0
Nov 14 04:58:41.374: INFO: Container mock ready: false, restart count 0
Nov 14 04:58:41.374: INFO: pod-submit-remove-950f11c5-b5a7-400b-800b-24c5377040ef started at 2019-11-14 04:56:05 +0000 UTC (0+1 container statuses recorded)
Nov 14 04:58:41.374: INFO: Container nginx ready: false, restart count 0
Nov 14 04:58:41.374: INFO: sample-webhook-deployment-86d95b659d-jx6r9 started at 2019-11-14 04:55:48 +0000 UTC (0+1 container statuses recorded)
Nov 14 04:58:41.374: INFO: Container sample-webhook ready: false, restart count 0
Nov 14 04:58:41.374: INFO: csi-mockplugin-attacher-0 started at 2019-11-14 04:57:59 +0000 UTC (0+1 container statuses recorded)
Nov 14 04:58:41.374: INFO: Container csi-attacher ready: false, restart count 0
Nov 14 04:58:41.374: INFO: frontend-79ff456bff-s8p95 started at 2019-11-14 04:55:40 +0000 UTC (0+1 container statuses recorded)
Nov 14 04:58:41.374: INFO: Container php-redis ready: false, restart count 0
Nov 14 04:58:41.374: INFO: webserver-7c69b6748-j7nw9 started at 2019-11-14 04:58:01 +0000 UTC (0+1 container statuses recorded)
Nov 14 04:58:41.374: INFO: Container httpd ready: false, restart count 0
Nov 14 04:58:41.374: INFO: netserver-1 started at 2019-11-14 04:55:44 +0000 UTC (0+1 container statuses recorded)
Nov 14 04:58:41.374: INFO: Container webserver ready: false, restart count 0
Nov 14 04:58:41.374: INFO: hostpath-symlink-prep-provisioning-7577 started at 2019-11-14 04:57:22 +0000 UTC (0+1 container statuses recorded)
Nov 14 04:58:41.374: INFO: Container init-volume-provisioning-7577 ready: false, restart count 0
Nov 14 04:58:41.375: INFO: sample-webhook-deployment-86d95b659d-gpxsq started at 2019-11-14 04:55:56 +0000 UTC (0+1 container statuses recorded)
Nov 14 04:58:41.375: INFO: Container sample-webhook ready: false, restart count 0
Nov 14 04:58:41.375: INFO: dns-test-f3d80d91-2590-4870-902c-cc6b474bdbf2 started at 2019-11-14 04:56:16 +0000 UTC (0+3 container statuses recorded)
Nov 14 04:58:41.375: INFO: Container jessie-querier ready: false, restart count 0
Nov 14 04:58:41.375: INFO: Container querier ready: false, restart count 0
Nov 14 04:58:41.375: INFO: Container webserver ready: false, restart count 0
Nov 14 04:58:41.375: INFO: busybox-readonly-fs67cd44c6-d7ea-4df7-88d1-daea547ed81a started at 2019-11-14 04:57:39 +0000 UTC (0+1 container statuses recorded)
Nov 14 04:58:41.375: INFO: Container busybox-readonly-fs67cd44c6-d7ea-4df7-88d1-daea547ed81a ready: false, restart count 0
Nov 14 04:58:41.375: INFO: ss2-0 started at 2019-11-14 04:56:54 +0000 UTC (0+1 container statuses recorded)
Nov 14 04:58:41.375: INFO: Container webserver ready: false, restart count 0
W1114 04:58:41.427741 92619 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Nov 14 04:58:43.688: INFO: Latency metrics for node k8s-agentpool-23171212-vmss000001
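The per-node pod dump above is emitted by the e2e framework's teardown logging. For readers who want the same view outside the suite, a minimal client-go sketch follows; it is not the framework's own code, the kubeconfig path is a placeholder, and it assumes a recent client-go where List takes a context (the v1.16-era vendored client did not):

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Placeholder kubeconfig path; the job above used a per-run file under /workspace.
	config, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	// List pods bound to one node across all namespaces, the same per-node
	// view the dump above prints.
	pods, err := client.CoreV1().Pods(metav1.NamespaceAll).List(context.TODO(),
		metav1.ListOptions{FieldSelector: "spec.nodeName=k8s-agentpool-23171212-vmss000001"})
	if err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		for _, cs := range p.Status.ContainerStatuses {
			fmt.Printf("%s/%s: container %s ready: %v, restart count %d\n",
				p.Namespace, p.Name, cs.Name, cs.Ready, cs.RestartCount)
		}
	}
}

The spec.nodeName field selector asks the API server to filter server-side, which is also how the kubelet's own pod watch is scoped to its node.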
Nov 14 04:58:43.688: INFO: Logging node info for node k8s-master-23171212-vmss000000
Nov 14 04:58:43.744: INFO: Node Info: &Node{ObjectMeta:{k8s-master-23171212-vmss000000 /api/v1/nodes/k8s-master-23171212-vmss000000 6c9bb7ee-6dcf-4c6d-a8ad-0377f76a60f6 32853 0 2019-11-14 04:40:03 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:Standard_DS2_v2 beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:westus2 failure-domain.beta.kubernetes.io/zone:westus2-1 kubernetes.azure.com/cluster:kubetest-9c63b39e-0695-11ea-a4cc-c60aac250e75 kubernetes.azure.com/role:master kubernetes.io/arch:amd64 kubernetes.io/hostname:k8s-master-23171212-vmss000000 kubernetes.io/os:linux kubernetes.io/role:master node-role.kubernetes.io/master:] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] []},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUse_ExternalID:,ProviderID:azure:///subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-9c63b39e-0695-11ea-a4cc-c60aac250e75/providers/Microsoft.Compute/virtualMachineScaleSets/k8s-master-23171212-vmss/virtualMachines/0,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:true,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{31036776448 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7284887552 0} {<nil>} 7114148Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{27933098757 0} {<nil>} 27933098757 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{6498455552 0} {<nil>} 6346148Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2019-11-14 04:40:48 +0000 UTC,LastTransitionTime:2019-11-14 04:40:48 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2019-11-14 04:57:56 +0000 UTC,LastTransitionTime:2019-11-14 04:39:58 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2019-11-14 04:57:56 +0000 UTC,LastTransitionTime:2019-11-14 04:39:58 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2019-11-14 04:57:56 +0000 UTC,LastTransitionTime:2019-11-14 04:39:58 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2019-11-14 04:57:56 +0000 UTC,LastTransitionTime:2019-11-14 04:40:22 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.240.0.4,},NodeAddress{Type:Hostname,Address:k8s-master-23171212-vmss000000,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:813714caae2d48f4a9036e17505029ae,SystemUUID:A7C76EFE-4E2A-8042-A754-6642A667D859,BootID:245ff6cc-bfb4-4487-ac55-fb3813c9167c,KernelVersion:4.15.0-1063-azure,OSImage:Ubuntu 16.04.6 LTS,ContainerRuntimeVersion:docker://3.0.6,KubeletVersion:v1.16.4-beta.0.1+d70a3ca08fe72a,KubeProxyVersion:v1.16.4-beta.0.1+d70a3ca08fe72a,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8sprow.azurecr.io/hyperkube-amd64@sha256:4c04f9ab0fa34bcbcb8ebfbced912f9b998c5d9c090fafdca92911d124fa339b k8sprow.azurecr.io/hyperkube-amd64:azure-e2e-1194831241233305600-197629b6],SizeBytes:604811790,},ContainerImage{Names:[k8sprow.azurecr.io/azure-cloud-controller-manager@sha256:6fcb752760f3412a2cb10bce535ba4dfa8267081345fa1b5cbc7bb5126ce3437 k8sprow.azurecr.io/azure-cloud-controller-manager:1194831241233305600-d3e4a1cf],SizeBytes:92595467,},ContainerImage{Names:[k8s.gcr.io/kube-addon-manager-amd64@sha256:382c220b3531d9f95bf316a16b7282cc2ef929cd8a89a9dd3f5933edafc41a8e k8s.gcr.io/kube-addon-manager-amd64:v9.0.1],SizeBytes:83076194,},ContainerImage{Names:[k8s.gcr.io/ip-masq-agent-amd64@sha256:269e0fb9d53fd37f7a135d6a55ea265a67279ba218aa148323f015cf70167340 k8s.gcr.io/ip-masq-agent-amd64:v2.3.0],SizeBytes:50144412,},ContainerImage{Names:[k8s.gcr.io/pause-amd64@sha256:59eec8837a4d942cc19a52b8c09ea75121acc38114a2c68b98983ce9356b8610 k8s.gcr.io/pause-amd64:3.1],SizeBytes:742472,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Nov 14 04:58:43.744: INFO: Logging kubelet events for node k8s-master-23171212-vmss000000
Nov 14 04:58:43.809: INFO: Logging pods the kubelet thinks is on node k8s-master-23171212-vmss000000
Nov 14 04:58:43.887: INFO: azure-ip-masq-agent-q7rgb started at 2019-11-14 04:40:27 +0000 UTC (0+1 container statuses recorded)
Nov 14 04:58:43.887: INFO: Container azure-ip-masq-agent ready: true, restart count 0
Nov 14 04:58:43.887: INFO: kube-proxy-cpnbb started at 2019-11-14 04:40:28 +0000 UTC (0+1 container statuses recorded)
Nov 14 04:58:43.887: INFO: Container kube-proxy ready: true, restart count 0
Nov 14 04:58:43.887: INFO: kube-scheduler-k8s-master-23171212-vmss000000 started at 2019-11-14 04:39:51 +0000 UTC (0+1 container statuses recorded)
Nov 14 04:58:43.887: INFO: Container kube-scheduler ready: true, restart count 0
Nov 14 04:58:43.887: INFO: cloud-controller-manager-k8s-master-23171212-vmss000000 started at 2019-11-14 04:39:51 +0000 UTC (0+1 container statuses recorded)
Nov 14 04:58:43.887: INFO: Container cloud-controller-manager ready: true, restart count 0
Nov 14 04:58:43.887: INFO: kube-addon-manager-k8s-master-23171212-vmss000000 started at 2019-11-14 04:39:52 +0000 UTC (0+1 container statuses recorded)
Nov 14 04:58:43.887: INFO: Container kube-addon-manager ready: true, restart count 0
Nov 14 04:58:43.887: INFO: kube-apiserver-k8s-master-23171212-vmss000000 started at 2019-11-14 04:39:52 +0000 UTC (0+1 container statuses recorded)
Nov 14 04:58:43.887: INFO: Container kube-apiserver ready: true, restart count 0
Nov 14 04:58:43.887: INFO: kube-controller-manager-k8s-master-23171212-vmss000000 started at 2019-11-14 04:39:52 +0000 UTC (0+1 container statuses recorded)
Nov 14 04:58:43.887: INFO: Container kube-controller-manager ready: true, restart count 0
W1114 04:58:43.940829 92619 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Nov 14 04:58:44.072: INFO: Latency metrics for node k8s-master-23171212-vmss000000
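Each Node Info block above serializes node.Status.Conditions; the Ready condition with Reason:KubeletReady is what the suite's "Waiting up to 3m0s for all (but 0) nodes to be ready" step keys off. A hedged sketch of reading it with client-go follows; the helper name and kubeconfig path are illustrative, not taken from the suite:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isNodeReady reports whether the node's Ready condition is True, the field
// the dumps print as Type:Ready,Status:True,...,Reason:KubeletReady.
func isNodeReady(node *corev1.Node) bool {
	for _, cond := range node.Status.Conditions {
		if cond.Type == corev1.NodeReady {
			return cond.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	node, err := client.CoreV1().Nodes().Get(context.TODO(), "k8s-master-23171212-vmss000000", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("%s ready: %v\n", node.Name, isNodeReady(node))
}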
Nov 14 04:58:44.072: INFO: Logging node info for node k8s-master-23171212-vmss000001
Nov 14 04:58:44.130: INFO: Node Info: &Node{ObjectMeta:{k8s-master-23171212-vmss000001 /api/v1/nodes/k8s-master-23171212-vmss000001 202620f8-2cc3-4eb6-b880-ef6d6d9fbccd 32916 0 2019-11-14 04:40:04 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:Standard_DS2_v2 beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:westus2 failure-domain.beta.kubernetes.io/zone:westus2-2 kubernetes.azure.com/cluster:kubetest-9c63b39e-0695-11ea-a4cc-c60aac250e75 kubernetes.azure.com/role:master kubernetes.io/arch:amd64 kubernetes.io/hostname:k8s-master-23171212-vmss000001 kubernetes.io/os:linux kubernetes.io/role:master node-role.kubernetes.io/master:] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] []},Spec:NodeSpec{PodCIDR:10.244.5.0/24,DoNotUse_ExternalID:,ProviderID:azure:///subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-9c63b39e-0695-11ea-a4cc-c60aac250e75/providers/Microsoft.Compute/virtualMachineScaleSets/k8s-master-23171212-vmss/virtualMachines/1,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:true,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[10.244.5.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{31036776448 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7284887552 0} {<nil>} 7114148Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{27933098757 0} {<nil>} 27933098757 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{6498455552 0} {<nil>} 6346148Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2019-11-14 04:40:48 +0000 UTC,LastTransitionTime:2019-11-14 04:40:48 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2019-11-14 04:57:57 +0000 UTC,LastTransitionTime:2019-11-14 04:39:58 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2019-11-14 04:57:57 +0000 UTC,LastTransitionTime:2019-11-14 04:39:58 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2019-11-14 04:57:57 +0000 UTC,LastTransitionTime:2019-11-14 04:39:58 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2019-11-14 04:57:57 +0000 UTC,LastTransitionTime:2019-11-14 04:40:22 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.240.0.5,},NodeAddress{Type:Hostname,Address:k8s-master-23171212-vmss000001,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:4cafe5635afe4ac8baa078419003bc32,SystemUUID:88981890-9531-334C-9D46-A02D5E4BD18D,BootID:6accdcbe-b0af-4be0-8f82-19833a9a5e2e,KernelVersion:4.15.0-1063-azure,OSImage:Ubuntu 16.04.6 LTS,ContainerRuntimeVersion:docker://3.0.6,KubeletVersion:v1.16.4-beta.0.1+d70a3ca08fe72a,KubeProxyVersion:v1.16.4-beta.0.1+d70a3ca08fe72a,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8sprow.azurecr.io/hyperkube-amd64@sha256:4c04f9ab0fa34bcbcb8ebfbced912f9b998c5d9c090fafdca92911d124fa339b k8sprow.azurecr.io/hyperkube-amd64:azure-e2e-1194831241233305600-197629b6],SizeBytes:604811790,},ContainerImage{Names:[k8sprow.azurecr.io/azure-cloud-controller-manager@sha256:6fcb752760f3412a2cb10bce535ba4dfa8267081345fa1b5cbc7bb5126ce3437 k8sprow.azurecr.io/azure-cloud-controller-manager:1194831241233305600-d3e4a1cf],SizeBytes:92595467,},ContainerImage{Names:[k8s.gcr.io/kube-addon-manager-amd64@sha256:382c220b3531d9f95bf316a16b7282cc2ef929cd8a89a9dd3f5933edafc41a8e k8s.gcr.io/kube-addon-manager-amd64:v9.0.1],SizeBytes:83076194,},ContainerImage{Names:[k8s.gcr.io/ip-masq-agent-amd64@sha256:269e0fb9d53fd37f7a135d6a55ea265a67279ba218aa148323f015cf70167340 k8s.gcr.io/ip-masq-agent-amd64:v2.3.0],SizeBytes:50144412,},ContainerImage{Names:[k8s.gcr.io/pause-amd64@sha256:59eec8837a4d942cc19a52b8c09ea75121acc38114a2c68b98983ce9356b8610 k8s.gcr.io/pause-amd64:3.1],SizeBytes:742472,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Nov 14 04:58:44.130: INFO: Logging kubelet events for node k8s-master-23171212-vmss000001
Nov 14 04:58:44.186: INFO: Logging pods the kubelet thinks is on node k8s-master-23171212-vmss000001
Nov 14 04:58:44.263: INFO: azure-ip-masq-agent-dnl49 started at 2019-11-14 04:40:26 +0000 UTC (0+1 container statuses recorded)
Nov 14 04:58:44.263: INFO: Container azure-ip-masq-agent ready: true, restart count 0
Nov 14 04:58:44.263: INFO: kube-proxy-srv2s started at 2019-11-14 04:40:27 +0000 UTC (0+1 container statuses recorded)
Nov 14 04:58:44.263: INFO: Container kube-proxy ready: true, restart count 0
Nov 14 04:58:44.263: INFO: kube-scheduler-k8s-master-23171212-vmss000001 started at 2019-11-14 04:39:52 +0000 UTC (0+1 container statuses recorded)
Nov 14 04:58:44.263: INFO: Container kube-scheduler ready: true, restart count 0
Nov 14 04:58:44.263: INFO: cloud-controller-manager-k8s-master-23171212-vmss000001 started at 2019-11-14 04:39:52 +0000 UTC (0+1 container statuses recorded)
Nov 14 04:58:44.264: INFO: Container cloud-controller-manager ready: true, restart count 0
Nov 14 04:58:44.264: INFO: kube-addon-manager-k8s-master-23171212-vmss000001 started at 2019-11-14 04:39:52 +0000 UTC (0+1 container statuses recorded)
Nov 14 04:58:44.264: INFO: Container kube-addon-manager ready: true, restart count 0
Nov 14 04:58:44.264: INFO: kube-apiserver-k8s-master-23171212-vmss000001 started at 2019-11-14 04:39:52 +0000 UTC (0+1 container statuses recorded)
Nov 14 04:58:44.264: INFO: Container kube-apiserver ready: true, restart count 0
Nov 14 04:58:44.264: INFO: kube-controller-manager-k8s-master-23171212-vmss000001 started at 2019-11-14 04:39:52 +0000 UTC (0+1 container statuses recorded)
Nov 14 04:58:44.264: INFO: Container kube-controller-manager ready: true, restart count 0
W1114 04:58:44.322072 92619 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Nov 14 04:58:44.452: INFO: Latency metrics for node k8s-master-23171212-vmss000001
Nov 14 04:58:44.452: INFO: Logging node info for node k8s-master-23171212-vmss000002
Nov 14 04:58:44.504: INFO: Node Info: &Node{ObjectMeta:{k8s-master-23171212-vmss000002 /api/v1/nodes/k8s-master-23171212-vmss000002 8eca3a9a-6fd5-4796-82bb-2f37c6fc30b7 33309 0 2019-11-14 04:41:04 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:Standard_DS2_v2 beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:westus2 failure-domain.beta.kubernetes.io/zone:westus2-1 kubernetes.azure.com/cluster:kubetest-9c63b39e-0695-11ea-a4cc-c60aac250e75 kubernetes.azure.com/role:master kubernetes.io/arch:amd64 kubernetes.io/hostname:k8s-master-23171212-vmss000002 kubernetes.io/os:linux kubernetes.io/role:master node-role.kubernetes.io/master:] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] []},Spec:NodeSpec{PodCIDR:10.244.6.0/24,DoNotUse_ExternalID:,ProviderID:azure:///subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-9c63b39e-0695-11ea-a4cc-c60aac250e75/providers/Microsoft.Compute/virtualMachineScaleSets/k8s-master-23171212-vmss/virtualMachines/2,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:true,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[10.244.6.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{31036776448 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7284883456 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{27933098757 0} {<nil>} 27933098757 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{6498451456 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2019-11-14 04:41:18 +0000 UTC,LastTransitionTime:2019-11-14 04:41:18 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2019-11-14 04:58:26 +0000 UTC,LastTransitionTime:2019-11-14 04:40:56 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2019-11-14 04:58:26 +0000 UTC,LastTransitionTime:2019-11-14 04:40:56 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2019-11-14 04:58:26 +0000 UTC,LastTransitionTime:2019-11-14 04:40:56 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2019-11-14 04:58:26 +0000 UTC,LastTransitionTime:2019-11-14 04:41:04 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.240.0.6,},NodeAddress{Type:Hostname,Address:k8s-master-23171212-vmss000002,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:eb5abe50949445b79334d994c94314f8,SystemUUID:E11F8710-4785-DA42-B98E-8E97145F92C7,BootID:8fe9e9b2-2b16-4895-91c7-dc676b577942,KernelVersion:4.15.0-1063-azure,OSImage:Ubuntu 16.04.6 LTS,ContainerRuntimeVersion:docker://3.0.6,KubeletVersion:v1.16.4-beta.0.1+d70a3ca08fe72a,KubeProxyVersion:v1.16.4-beta.0.1+d70a3ca08fe72a,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8sprow.azurecr.io/hyperkube-amd64@sha256:4c04f9ab0fa34bcbcb8ebfbced912f9b998c5d9c090fafdca92911d124fa339b k8sprow.azurecr.io/hyperkube-amd64:azure-e2e-1194831241233305600-197629b6],SizeBytes:604811790,},ContainerImage{Names:[k8sprow.azurecr.io/azure-cloud-controller-manager@sha256:6fcb752760f3412a2cb10bce535ba4dfa8267081345fa1b5cbc7bb5126ce3437 k8sprow.azurecr.io/azure-cloud-controller-manager:1194831241233305600-d3e4a1cf],SizeBytes:92595467,},ContainerImage{Names:[k8s.gcr.io/kube-addon-manager-amd64@sha256:382c220b3531d9f95bf316a16b7282cc2ef929cd8a89a9dd3f5933edafc41a8e k8s.gcr.io/kube-addon-manager-amd64:v9.0.1],SizeBytes:83076194,},ContainerImage{Names:[k8s.gcr.io/ip-masq-agent-amd64@sha256:269e0fb9d53fd37f7a135d6a55ea265a67279ba218aa148323f015cf70167340 k8s.gcr.io/ip-masq-agent-amd64:v2.3.0],SizeBytes:50144412,},ContainerImage{Names:[k8s.gcr.io/pause-amd64@sha256:59eec8837a4d942cc19a52b8c09ea75121acc38114a2c68b98983ce9356b8610 k8s.gcr.io/pause-amd64:3.1],SizeBytes:742472,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Nov 14 04:58:44.505: INFO: Logging kubelet events for node k8s-master-23171212-vmss000002
Nov 14 04:58:44.560: INFO: Logging pods the kubelet thinks is on node k8s-master-23171212-vmss000002
Nov 14 04:58:44.633: INFO: azure-ip-masq-agent-mw27f started at 2019-11-14 04:41:05 +0000 UTC (0+1 container statuses recorded)
Nov 14 04:58:44.633: INFO: Container azure-ip-masq-agent ready: true, restart count 0
Nov 14 04:58:44.633: INFO: kube-proxy-4vs6q started at 2019-11-14 04:41:06 +0000 UTC (0+1 container statuses recorded)
Nov 14 04:58:44.634: INFO: Container kube-proxy ready: true, restart count 0
Nov 14 04:58:44.634: INFO: kube-addon-manager-k8s-master-23171212-vmss000002 started at 2019-11-14 04:40:52 +0000 UTC (0+1 container statuses recorded)
Nov 14 04:58:44.634: INFO: Container kube-addon-manager ready: true, restart count 0
Nov 14 04:58:44.634: INFO: kube-apiserver-k8s-master-23171212-vmss000002 started at 2019-11-14 04:40:52 +0000 UTC (0+1 container statuses recorded)
Nov 14 04:58:44.634: INFO: Container kube-apiserver ready: true, restart count 0
Nov 14 04:58:44.634: INFO: kube-controller-manager-k8s-master-23171212-vmss000002 started at 2019-11-14 04:40:52 +0000 UTC (0+1 container statuses recorded)
Nov 14 04:58:44.634: INFO: Container kube-controller-manager ready: true, restart count 0
Nov 14 04:58:44.634: INFO: kube-scheduler-k8s-master-23171212-vmss000002 started at 2019-11-14 04:40:52 +0000 UTC (0+1 container statuses recorded)
Nov 14 04:58:44.634: INFO: Container kube-scheduler ready: true, restart count 0
Nov 14 04:58:44.634: INFO: cloud-controller-manager-k8s-master-23171212-vmss000002 started at 2019-11-14 04:40:53 +0000 UTC (0+1 container statuses recorded)
Nov 14 04:58:44.634: INFO: Container cloud-controller-manager ready: true, restart count 0
W1114 04:58:44.688297 92619 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Nov 14 04:58:44.811: INFO: Latency metrics for node k8s-master-23171212-vmss000002
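The Capacity and Allocatable maps in these dumps are corev1.ResourceList values; the gap between them is what the kubelet reserves for system daemons and eviction headroom, so pods can only request up to the allocatable amount. A small sketch using the numbers from the k8s-master-23171212-vmss000002 dump above; the arithmetic helpers come from apimachinery's resource package, and the hard-coded values are only for illustration:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
)

func main() {
	// Memory values copied from the k8s-master-23171212-vmss000002 dump above.
	capacity := corev1.ResourceList{
		corev1.ResourceMemory: resource.MustParse("7284883456"),
	}
	allocatable := corev1.ResourceList{
		corev1.ResourceMemory: resource.MustParse("6498451456"),
	}
	capMem := capacity[corev1.ResourceMemory]
	allocMem := allocatable[corev1.ResourceMemory]
	// reserved = capacity - allocatable: the kubelet's system/eviction hold-back.
	reserved := capMem.DeepCopy()
	reserved.Sub(allocMem)
	fmt.Printf("memory capacity=%s allocatable=%s reserved=%s\n",
		capMem.String(), allocMem.String(), reserved.String())
}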
Nov 14 04:58:44.811: INFO: Logging node info for node k8s-master-23171212-vmss000003
Nov 14 04:58:44.863: INFO: Node Info: &Node{ObjectMeta:{k8s-master-23171212-vmss000003 /api/v1/nodes/k8s-master-23171212-vmss000003 b1a400e7-f6ff-4241-9175-cd8bd70dd11a 32862 0 2019-11-14 04:40:03 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:Standard_DS2_v2 beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:westus2 failure-domain.beta.kubernetes.io/zone:westus2-2 kubernetes.azure.com/cluster:kubetest-9c63b39e-0695-11ea-a4cc-c60aac250e75 kubernetes.azure.com/role:master kubernetes.io/arch:amd64 kubernetes.io/hostname:k8s-master-23171212-vmss000003 kubernetes.io/os:linux kubernetes.io/role:master node-role.kubernetes.io/master:] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] []},Spec:NodeSpec{PodCIDR:10.244.3.0/24,DoNotUse_ExternalID:,ProviderID:azure:///subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-9c63b39e-0695-11ea-a4cc-c60aac250e75/providers/Microsoft.Compute/virtualMachineScaleSets/k8s-master-23171212-vmss/virtualMachines/3,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:true,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[10.244.3.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{31036776448 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7284887552 0} {<nil>} 7114148Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{27933098757 0} {<nil>} 27933098757 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{6498455552 0} {<nil>} 6346148Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2019-11-14 04:40:48 +0000 UTC,LastTransitionTime:2019-11-14 04:40:48 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2019-11-14 04:57:56 +0000 UTC,LastTransitionTime:2019-11-14 04:39:59 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2019-11-14 04:57:56 +0000 UTC,LastTransitionTime:2019-11-14 04:39:59 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2019-11-14 04:57:56 +0000 UTC,LastTransitionTime:2019-11-14 04:39:59 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2019-11-14 04:57:56 +0000 UTC,LastTransitionTime:2019-11-14 04:40:22 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.240.0.7,},NodeAddress{Type:Hostname,Address:k8s-master-23171212-vmss000003,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:effe7f682034467995d1db3ee85a4a38,SystemUUID:2073A143-352C-D241-B189-4A1DCC64C62C,BootID:6c95e89b-c056-494f-b817-6494fc9fd635,KernelVersion:4.15.0-1063-azure,OSImage:Ubuntu 16.04.6 LTS,ContainerRuntimeVersion:docker://3.0.6,KubeletVersion:v1.16.4-beta.0.1+d70a3ca08fe72a,KubeProxyVersion:v1.16.4-beta.0.1+d70a3ca08fe72a,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8sprow.azurecr.io/hyperkube-amd64@sha256:4c04f9ab0fa34bcbcb8ebfbced912f9b998c5d9c090fafdca92911d124fa339b k8sprow.azurecr.io/hyperkube-amd64:azure-e2e-1194831241233305600-197629b6],SizeBytes:604811790,},ContainerImage{Names:[k8sprow.azurecr.io/azure-cloud-controller-manager@sha256:6fcb752760f3412a2cb10bce535ba4dfa8267081345fa1b5cbc7bb5126ce3437 k8sprow.azurecr.io/azure-cloud-controller-manager:1194831241233305600-d3e4a1cf],SizeBytes:92595467,},ContainerImage{Names:[k8s.gcr.io/kube-addon-manager-amd64@sha256:382c220b3531d9f95bf316a16b7282cc2ef929cd8a89a9dd3f5933edafc41a8e k8s.gcr.io/kube-addon-manager-amd64:v9.0.1],SizeBytes:83076194,},ContainerImage{Names:[k8s.gcr.io/ip-masq-agent-amd64@sha256:269e0fb9d53fd37f7a135d6a55ea265a67279ba218aa148323f015cf70167340 k8s.gcr.io/ip-masq-agent-amd64:v2.3.0],SizeBytes:50144412,},ContainerImage{Names:[k8s.gcr.io/pause-amd64@sha256:59eec8837a4d942cc19a52b8c09ea75121acc38114a2c68b98983ce9356b8610 k8s.gcr.io/pause-amd64:3.1],SizeBytes:742472,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Nov 14 04:58:44.863: INFO: Logging kubelet events for node k8s-master-23171212-vmss000003
Nov 14 04:58:44.918: INFO: Logging pods the kubelet thinks is on node k8s-master-23171212-vmss000003
Nov 14 04:58:44.995: INFO: kube-controller-manager-k8s-master-23171212-vmss000003 started at 2019-11-14 04:39:52 +0000 UTC (0+1 container statuses recorded)
Nov 14 04:58:44.995: INFO: Container kube-controller-manager ready: true, restart count 0
Nov 14 04:58:44.995: INFO: kube-scheduler-k8s-master-23171212-vmss000003 started at 2019-11-14 04:39:52 +0000 UTC (0+1 container statuses recorded)
Nov 14 04:58:44.995: INFO: Container kube-scheduler ready: true, restart count 0
Nov 14 04:58:44.995: INFO: azure-ip-masq-agent-4s5bk started at 2019-11-14 04:40:27 +0000 UTC (0+1 container statuses recorded)
Nov 14 04:58:44.995: INFO: Container azure-ip-masq-agent ready: true, restart count 0
Nov 14 04:58:44.995: INFO: kube-proxy-hrqtx started at 2019-11-14 04:40:27 +0000 UTC (0+1 container statuses recorded)
Nov 14 04:58:44.995: INFO: Container kube-proxy ready: true, restart count 0
Nov 14 04:58:44.995: INFO: cloud-controller-manager-k8s-master-23171212-vmss000003 started at 2019-11-14 04:39:52 +0000 UTC (0+1 container statuses recorded)
Nov 14 04:58:44.995: INFO: Container cloud-controller-manager ready: true, restart count 0
Nov 14 04:58:44.995: INFO: kube-addon-manager-k8s-master-23171212-vmss000003 started at 2019-11-14 04:39:52 +0000 UTC (0+1 container statuses recorded)
Nov 14 04:58:44.995: INFO: Container kube-addon-manager ready: true, restart count 0
Nov 14 04:58:44.995: INFO: kube-apiserver-k8s-master-23171212-vmss000003 started at 2019-11-14 04:39:52 +0000 UTC (0+1 container statuses recorded)
Nov 14 04:58:44.995: INFO: Container kube-apiserver ready: true, restart count 0
W1114 04:58:45.049369 92619 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Nov 14 04:58:45.167: INFO: Latency metrics for node k8s-master-23171212-vmss000003
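All five master dumps carry Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:true,Effect:NoSchedule,...}}, while the agentpool nodes show Taints:[]Taint{}; that taint is why the e2e workloads above land only on the agent nodes. A sketch of testing for it follows; the helper is illustrative, not taken from the suite:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// hasNoScheduleTaint reports whether a node carries a NoSchedule taint with
// the given key, such as the node-role.kubernetes.io/master taint above.
func hasNoScheduleTaint(node *corev1.Node, key string) bool {
	for _, t := range node.Spec.Taints {
		if t.Key == key && t.Effect == corev1.TaintEffectNoSchedule {
			return true
		}
	}
	return false
}

func main() {
	// In practice the node would be fetched via client-go as in the earlier sketches.
	node := &corev1.Node{}
	node.Spec.Taints = []corev1.Taint{{Key: "node-role.kubernetes.io/master", Value: "true", Effect: corev1.TaintEffectNoSchedule}}
	fmt.Println(hasNoScheduleTaint(node, "node-role.kubernetes.io/master")) // true
}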
Nov 14 04:58:45.167: INFO: Logging node info for node k8s-master-23171212-vmss000004
Nov 14 04:58:45.219: INFO: Node Info: &Node{ObjectMeta:{k8s-master-23171212-vmss000004 /api/v1/nodes/k8s-master-23171212-vmss000004 25a9993c-54fa-45cc-9da7-66c66cafa30f 33009 0 2019-11-14 04:40:08 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:Standard_DS2_v2 beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:westus2 failure-domain.beta.kubernetes.io/zone:westus2-1 kubernetes.azure.com/cluster:kubetest-9c63b39e-0695-11ea-a4cc-c60aac250e75 kubernetes.azure.com/role:master kubernetes.io/arch:amd64 kubernetes.io/hostname:k8s-master-23171212-vmss000004 kubernetes.io/os:linux kubernetes.io/role:master node-role.kubernetes.io/master:] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] []},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUse_ExternalID:,ProviderID:azure:///subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-9c63b39e-0695-11ea-a4cc-c60aac250e75/providers/Microsoft.Compute/virtualMachineScaleSets/k8s-master-23171212-vmss/virtualMachines/4,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:true,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{31036776448 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7284887552 0} {<nil>} 7114148Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{27933098757 0} {<nil>} 27933098757 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{6498455552 0} {<nil>} 6346148Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2019-11-14 04:40:48 +0000 UTC,LastTransitionTime:2019-11-14 04:40:48 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2019-11-14 04:58:01 +0000 UTC,LastTransitionTime:2019-11-14 04:40:05 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2019-11-14 04:58:01 +0000 UTC,LastTransitionTime:2019-11-14 04:40:05 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2019-11-14 04:58:01 +0000 UTC,LastTransitionTime:2019-11-14 04:40:05 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2019-11-14 04:58:01 +0000 UTC,LastTransitionTime:2019-11-14 04:40:22 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.240.0.8,},NodeAddress{Type:Hostname,Address:k8s-master-23171212-vmss000004,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:ab6b205a70ea45b1b28b801e68a4ba84,SystemUUID:65406178-5013-644C-AD46-D7BC6F0DD7BF,BootID:e6b05928-9970-49a5-bd51-149982b32750,KernelVersion:4.15.0-1063-azure,OSImage:Ubuntu 16.04.6 LTS,ContainerRuntimeVersion:docker://3.0.6,KubeletVersion:v1.16.4-beta.0.1+d70a3ca08fe72a,KubeProxyVersion:v1.16.4-beta.0.1+d70a3ca08fe72a,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8sprow.azurecr.io/hyperkube-amd64@sha256:4c04f9ab0fa34bcbcb8ebfbced912f9b998c5d9c090fafdca92911d124fa339b k8sprow.azurecr.io/hyperkube-amd64:azure-e2e-1194831241233305600-197629b6],SizeBytes:604811790,},ContainerImage{Names:[k8sprow.azurecr.io/azure-cloud-controller-manager@sha256:6fcb752760f3412a2cb10bce535ba4dfa8267081345fa1b5cbc7bb5126ce3437 k8sprow.azurecr.io/azure-cloud-controller-manager:1194831241233305600-d3e4a1cf],SizeBytes:92595467,},ContainerImage{Names:[k8s.gcr.io/kube-addon-manager-amd64@sha256:382c220b3531d9f95bf316a16b7282cc2ef929cd8a89a9dd3f5933edafc41a8e k8s.gcr.io/kube-addon-manager-amd64:v9.0.1],SizeBytes:83076194,},ContainerImage{Names:[k8s.gcr.io/ip-masq-agent-amd64@sha256:269e0fb9d53fd37f7a135d6a55ea265a67279ba218aa148323f015cf70167340 k8s.gcr.io/ip-masq-agent-amd64:v2.3.0],SizeBytes:50144412,},ContainerImage{Names:[k8s.gcr.io/pause-amd64@sha256:59eec8837a4d942cc19a52b8c09ea75121acc38114a2c68b98983ce9356b8610 k8s.gcr.io/pause-amd64:3.1],SizeBytes:742472,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Nov 14 04:58:45.219: INFO: Logging kubelet events for node k8s-master-23171212-vmss000004
Nov 14 04:58:45.274: INFO: Logging pods the kubelet thinks is on node k8s-master-23171212-vmss000004
Nov 14 04:58:45.355: INFO: kube-apiserver-k8s-master-23171212-vmss000004 started at 2019-11-14 04:39:52 +0000 UTC (0+1 container statuses recorded)
Nov 14 04:58:45.355: INFO: Container kube-apiserver ready: true, restart count 0
Nov 14 04:58:45.355: INFO: kube-controller-manager-k8s-master-23171212-vmss000004 started at 2019-11-14 04:39:52 +0000 UTC (0+1 container statuses recorded)
Nov 14 04:58:45.355: INFO: Container kube-controller-manager ready: true, restart count 0
Nov 14 04:58:45.355: INFO: azure-ip-masq-agent-47pzk started at 2019-11-14 04:40:26 +0000 UTC (0+1 container statuses recorded)
Nov 14 04:58:45.355: INFO: Container azure-ip-masq-agent ready: true, restart count 0
Nov 14 04:58:45.355: INFO: kube-proxy-47vmd started at 2019-11-14 04:40:27 +0000 UTC (0+1 container statuses recorded)
Nov 14 04:58:45.355: INFO: Container kube-proxy ready: true, restart count 0
Nov 14 04:58:45.355: INFO: kube-scheduler-k8s-master-23171212-vmss000004 started at 2019-11-14 04:39:52 +0000 UTC (0+1 container statuses recorded)
Nov 14 04:58:45.355: INFO: Container kube-scheduler ready: true, restart count 0
Nov 14 04:58:45.355: INFO: cloud-controller-manager-k8s-master-23171212-vmss000004 started at 2019-11-14 04:39:52 +0000 UTC (0+1 container statuses recorded)
Nov 14 04:58:45.355: INFO: Container cloud-controller-manager ready: true, restart count 0
Nov 14 04:58:45.355: INFO: kube-addon-manager-k8s-master-23171212-vmss000004 started at 2019-11-14 04:39:52 +0000 UTC (0+1 container statuses recorded)
Nov 14 04:58:45.355: INFO: Container kube-addon-manager ready: true, restart count 0
W1114 04:58:45.408826 92619 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Nov 14 04:58:45.527: INFO: Latency metrics for node k8s-master-23171212-vmss000004
04:58:45.408826 92619 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Nov 14 04:58:45.527: INFO: Latency metrics for node k8s-master-23171212-vmss000004 Nov 14 04:58:45.528: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "provisioning-6474" for this suite. Nov 14 04:59:31.745: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Nov 14 04:59:33.459: INFO: namespace provisioning-6474 deletion completed in 47.877923128s
go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[sig\-storage\]\sCSI\sVolumes\s\[Driver\:\scsi\-hostpath\]\s\[Testpattern\:\sDynamic\sPV\s\(default\sfs\)\]\ssubPath\sshould\ssupport\sreadOnly\sdirectory\sspecified\sin\sthe\svolumeMount$'
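The --ginkgo.focus argument above is a regular expression matched against the full Ginkgo test name, which is why every space, bracket, and hyphen is escaped. A small standalone Go check of the pattern (the unescaped test name below is reconstructed from the focus string; it is an illustration, not part of this log):

package main

import (
	"fmt"
	"regexp"
)

func main() {
	// Focus pattern copied from the command above; RE2 treats "\s" as
	// whitespace and backslash-escaped punctuation as literal characters.
	focus := regexp.MustCompile(`Kubernetes\se2e\ssuite\s\[sig\-storage\]\sCSI\sVolumes\s\[Driver\:\scsi\-hostpath\]\s\[Testpattern\:\sDynamic\sPV\s\(default\sfs\)\]\ssubPath\sshould\ssupport\sreadOnly\sdirectory\sspecified\sin\sthe\svolumeMount$`)
	// Full test name, reconstructed by unescaping the pattern above.
	name := "Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should support readOnly directory specified in the volumeMount"
	fmt.Println(focus.MatchString(name)) // prints: true
}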
test/e2e/storage/testsuites/subpath.go:347 Nov 14 05:19:19.433: Unexpected error: <*errors.errorString | 0xc0021711e0>: { s: "PersistentVolumeClaims [csi-hostpathndprd] not all in phase Bound within 5m0s", } PersistentVolumeClaims [csi-hostpathndprd] not all in phase Bound within 5m0s occurred test/e2e/storage/testsuites/base.go:366 from junit_11.xml
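The 5m0s wait that fails here polls the claim roughly every 2 seconds, as the elided log below shows. For reference, a minimal client-go sketch of an equivalent wait loop, assuming a recent client-go where Get takes a context (this is not the framework's actual helper); the namespace, claim name, and kubeconfig path are the ones that appear in this log:

package main

import (
	"context"
	"fmt"
	"time"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Kubeconfig path as printed at the top of this test's log.
	config, err := clientcmd.BuildConfigFromFlags("", "/workspace/aks287781815/kubeconfig/kubeconfig.westus2.json")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(config)

	// Poll every 2s for up to 5m, mirroring the cadence visible in the log.
	err = wait.PollImmediate(2*time.Second, 5*time.Minute, func() (bool, error) {
		pvc, err := cs.CoreV1().PersistentVolumeClaims("provisioning-9472").
			Get(context.TODO(), "csi-hostpathndprd", metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		if pvc.Status.Phase != v1.ClaimBound {
			fmt.Printf("PersistentVolumeClaim %s found but phase is %s instead of Bound.\n", pvc.Name, pvc.Status.Phase)
			return false, nil
		}
		return true, nil
	})
	if err != nil {
		fmt.Println("PersistentVolumeClaims [csi-hostpathndprd] not all in phase Bound within 5m0s")
	}
}

In this run the loop could never succeed: the events collected below show the external provisioner's node plugin pod repeatedly failing admission, so no volume was ever created for the claim.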
[BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath test/e2e/storage/testsuites/base.go:93 [BeforeEach] [Testpattern: Dynamic PV (default fs)] subPath test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Nov 14 05:14:16.215: INFO: >>> kubeConfig: /workspace/aks287781815/kubeconfig/kubeconfig.westus2.json STEP: Building a namespace api object, basename provisioning STEP: Binding the e2e-test-privileged-psp PodSecurityPolicy to the default service account in provisioning-9472 STEP: Waiting for a default service account to be provisioned in namespace [It] should support readOnly directory specified in the volumeMount test/e2e/storage/testsuites/subpath.go:347 STEP: deploying csi-hostpath driver Nov 14 05:14:16.777: INFO: creating *v1.ServiceAccount: provisioning-9472/csi-attacher Nov 14 05:14:16.845: INFO: creating *v1.ClusterRole: external-attacher-runner-provisioning-9472 Nov 14 05:14:16.845: INFO: Define cluster role external-attacher-runner-provisioning-9472 Nov 14 05:14:16.906: INFO: creating *v1.ClusterRoleBinding: csi-attacher-role-provisioning-9472 Nov 14 05:14:16.961: INFO: creating *v1.Role: provisioning-9472/external-attacher-cfg-provisioning-9472 Nov 14 05:14:17.018: INFO: creating *v1.RoleBinding: provisioning-9472/csi-attacher-role-cfg Nov 14 05:14:17.081: INFO: creating *v1.ServiceAccount: provisioning-9472/csi-provisioner Nov 14 05:14:17.137: INFO: creating *v1.ClusterRole: external-provisioner-runner-provisioning-9472 Nov 14 05:14:17.137: INFO: Define cluster role external-provisioner-runner-provisioning-9472 Nov 14 05:14:17.200: INFO: creating *v1.ClusterRoleBinding: csi-provisioner-role-provisioning-9472 Nov 14 05:14:17.259: INFO: creating *v1.Role: provisioning-9472/external-provisioner-cfg-provisioning-9472 Nov 14 05:14:17.325: INFO: creating *v1.RoleBinding: provisioning-9472/csi-provisioner-role-cfg Nov 14 05:14:17.395: INFO: creating *v1.ServiceAccount: provisioning-9472/csi-snapshotter Nov 14 05:14:17.460: INFO: creating *v1.ClusterRole: external-snapshotter-runner-provisioning-9472 Nov 14 05:14:17.460: INFO: Define cluster role external-snapshotter-runner-provisioning-9472 Nov 14 05:14:17.529: INFO: creating *v1.ClusterRoleBinding: csi-snapshotter-role-provisioning-9472 Nov 14 05:14:17.590: INFO: creating *v1.Role: provisioning-9472/external-snapshotter-leaderelection-provisioning-9472 Nov 14 05:14:17.670: INFO: creating *v1.RoleBinding: provisioning-9472/external-snapshotter-leaderelection Nov 14 05:14:17.781: INFO: creating *v1.ServiceAccount: provisioning-9472/csi-resizer Nov 14 05:14:17.860: INFO: creating *v1.ClusterRole: external-resizer-runner-provisioning-9472 Nov 14 05:14:17.860: INFO: Define cluster role external-resizer-runner-provisioning-9472 Nov 14 05:14:17.933: INFO: creating *v1.ClusterRoleBinding: csi-resizer-role-provisioning-9472 Nov 14 05:14:17.989: INFO: creating *v1.Role: provisioning-9472/external-resizer-cfg-provisioning-9472 Nov 14 05:14:18.056: INFO: creating *v1.RoleBinding: provisioning-9472/csi-resizer-role-cfg Nov 14 05:14:18.167: INFO: creating *v1.Service: provisioning-9472/csi-hostpath-attacher Nov 14 05:14:18.266: INFO: creating *v1.StatefulSet: provisioning-9472/csi-hostpath-attacher Nov 14 05:14:18.331: INFO: creating *v1beta1.CSIDriver: csi-hostpath-provisioning-9472 Nov 14 05:14:18.399: INFO: creating *v1.Service: provisioning-9472/csi-hostpathplugin Nov 14 05:14:18.481: INFO: creating *v1.StatefulSet: provisioning-9472/csi-hostpathplugin Nov 14 05:14:18.544: INFO:
creating *v1.Service: provisioning-9472/csi-hostpath-provisioner Nov 14 05:14:18.701: INFO: creating *v1.StatefulSet: provisioning-9472/csi-hostpath-provisioner Nov 14 05:14:18.759: INFO: creating *v1.Service: provisioning-9472/csi-hostpath-resizer Nov 14 05:14:18.854: INFO: creating *v1.StatefulSet: provisioning-9472/csi-hostpath-resizer Nov 14 05:14:18.915: INFO: creating *v1.Service: provisioning-9472/csi-snapshotter Nov 14 05:14:19.041: INFO: creating *v1.StatefulSet: provisioning-9472/csi-snapshotter Nov 14 05:14:19.110: INFO: creating *v1.ClusterRoleBinding: psp-csi-hostpath-role-provisioning-9472 Nov 14 05:14:19.205: INFO: Test running for native CSI Driver, not checking metrics Nov 14 05:14:19.205: INFO: Creating resource for dynamic PV STEP: creating a StorageClass provisioning-9472-csi-hostpath-provisioning-9472-scccjmt STEP: creating a claim Nov 14 05:14:19.270: INFO: Warning: Making PVC: VolumeMode specified as invalid empty string, treating as nil Nov 14 05:14:19.356: INFO: Waiting up to 5m0s for PersistentVolumeClaims [csi-hostpathndprd] to have phase Bound Nov 14 05:14:19.407: INFO: PersistentVolumeClaim csi-hostpathndprd found but phase is Pending instead of Bound. [this identical message repeats at roughly 2-second intervals through Nov 14 05:19:17.433, with the claim Pending for the entire 5m0s wait; the intermediate entries are elided]
Nov 14 05:19:19.433: FAIL: Unexpected error: <*errors.errorString | 0xc0021711e0>: { s: "PersistentVolumeClaims [csi-hostpathndprd] not all in phase Bound within 5m0s", } PersistentVolumeClaims [csi-hostpathndprd] not all in phase Bound within 5m0s occurred [AfterEach] [Testpattern: Dynamic PV (default fs)] subPath test/e2e/framework/framework.go:152 STEP: Collecting events from namespace "provisioning-9472". STEP: Found 120 events. Nov 14 05:19:19.535: INFO: At 2019-11-14 05:14:18 +0000 UTC - event for csi-hostpath-attacher: {statefulset-controller } SuccessfulCreate: create Pod csi-hostpath-attacher-0 in StatefulSet csi-hostpath-attacher successful Nov 14 05:19:19.535: INFO: At 2019-11-14 05:14:18 +0000 UTC - event for csi-hostpath-provisioner: {statefulset-controller } SuccessfulCreate: create Pod csi-hostpath-provisioner-0 in StatefulSet csi-hostpath-provisioner successful Nov 14 05:19:19.535: INFO: At 2019-11-14 05:14:18 +0000 UTC - event for csi-hostpath-resizer: {statefulset-controller } SuccessfulCreate: create Pod csi-hostpath-resizer-0 in StatefulSet csi-hostpath-resizer successful Nov 14 05:19:19.535: INFO: At 2019-11-14 05:14:18 +0000 UTC - event for csi-hostpathplugin: {statefulset-controller } SuccessfulCreate: create Pod csi-hostpathplugin-0 in StatefulSet csi-hostpathplugin successful Nov 14 05:19:19.535: INFO: At 2019-11-14 05:14:19 +0000 UTC - event for csi-hostpathndprd: {persistentvolume-controller } ExternalProvisioning: waiting for a volume to be created, either by external provisioner "csi-hostpath-provisioning-9472" or manually created by system administrator Nov 14 05:19:19.535: INFO: At 2019-11-14 05:14:19 +0000 UTC - event for csi-snapshotter: {statefulset-controller } SuccessfulCreate: create Pod csi-snapshotter-0 in StatefulSet csi-snapshotter successful Nov 14 05:19:19.535: INFO: At 2019-11-14 05:14:21 +0000 UTC - event for csi-hostpathplugin: {statefulset-controller } RecreatingFailedPod: StatefulSet provisioning-9472/csi-hostpathplugin is recreating failed Pod csi-hostpathplugin-0 Nov 14 05:19:19.535: INFO: At 2019-11-14 05:14:21 +0000 UTC - event for csi-hostpathplugin: {statefulset-controller } SuccessfulDelete: delete Pod csi-hostpathplugin-0 in StatefulSet csi-hostpathplugin successful Nov 14 05:19:19.535: INFO: At 2019-11-14 05:14:21 +0000 UTC - event for csi-hostpathplugin-0: {kubelet k8s-agentpool-23171212-vmss000001} PodFitsHostPorts: Predicate PodFitsHostPorts failed Nov 14 05:19:19.535: INFO: At 2019-11-14 05:14:29 +0000 UTC - event for csi-hostpathplugin: {statefulset-controller } FailedCreate: create Pod csi-hostpathplugin-0 in StatefulSet csi-hostpathplugin failed error: The POST operation against Pod could not be completed at this time, please try again.
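The PodFitsHostPorts events here, and repeated through the rest of this listing, mean the recreated csi-hostpathplugin-0 pod asks for a hostPort that is still in use on k8s-agentpool-23171212-vmss000001, plausibly held by the failed incarnation of the same StatefulSet pod that the controller had just deleted. A self-contained sketch of what that predicate checks (the port number below is hypothetical; this log never shows which hostPorts the plugin requests):

package main

import "fmt"

type portKey struct {
	ip       string
	protocol string
	port     int32
}

// hostPortsInUse collects the hostPorts of pods already present on the node.
func hostPortsInUse(existing [][]portKey) map[portKey]bool {
	used := map[portKey]bool{}
	for _, pod := range existing {
		for _, p := range pod {
			used[p] = true
		}
	}
	return used
}

// fitsHostPorts reports whether every hostPort the incoming pod wants is free.
func fitsHostPorts(incoming []portKey, used map[portKey]bool) bool {
	for _, p := range incoming {
		if used[p] {
			return false
		}
	}
	return true
}

func main() {
	// The old csi-hostpathplugin-0 still holds its hostPort while being torn
	// down, so the recreated pod fails the check, as in the events here.
	old := []portKey{{"0.0.0.0", "TCP", 9898}} // hypothetical port value
	next := []portKey{{"0.0.0.0", "TCP", 9898}}
	used := hostPortsInUse([][]portKey{old})
	fmt.Println("PodFitsHostPorts:", fitsHostPorts(next, used)) // false, i.e. "Predicate PodFitsHostPorts failed"
}

Until the old pod's ports are released, every admission retry fails the same way, which is why the event recurs until the test's 5m0s wait expires.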
Nov 14 05:19:19.535: INFO: At 2019-11-14 05:14:29 +0000 UTC - event for csi-hostpathplugin-0: {kubelet k8s-agentpool-23171212-vmss000001} PodFitsHostPorts: Predicate PodFitsHostPorts failed Nov 14 05:19:19.535: INFO: At 2019-11-14 05:14:34 +0000 UTC - event for csi-hostpathplugin-0: {kubelet k8s-agentpool-23171212-vmss000001} PodFitsHostPorts: Predicate PodFitsHostPorts failed Nov 14 05:19:19.535: INFO: At 2019-11-14 05:14:42 +0000 UTC - event for csi-hostpathplugin-0: {kubelet k8s-agentpool-23171212-vmss000001} PodFitsHostPorts: Predicate PodFitsHostPorts failed Nov 14 05:19:19.535: INFO: At 2019-11-14 05:14:44 +0000 UTC - event for csi-snapshotter-0: {kubelet k8s-agentpool-23171212-vmss000001} Pulling: Pulling image "quay.io/k8scsi/csi-snapshotter:v2.0.0-rc1" Nov 14 05:19:19.535: INFO: At 2019-11-14 05:14:47 +0000 UTC - event for csi-hostpathplugin-0: {kubelet k8s-agentpool-23171212-vmss000001} PodFitsHostPorts: Predicate PodFitsHostPorts failed Nov 14 05:19:19.535: INFO: At 2019-11-14 05:14:51 +0000 UTC - event for csi-hostpathplugin-0: {kubelet k8s-agentpool-23171212-vmss000001} PodFitsHostPorts: Predicate PodFitsHostPorts failed Nov 14 05:19:19.535: INFO: At 2019-11-14 05:14:52 +0000 UTC - event for csi-hostpath-provisioner-0: {kubelet k8s-agentpool-23171212-vmss000001} Pulling: Pulling image "quay.io/k8scsi/csi-provisioner:v1.4.0-rc1" Nov 14 05:19:19.535: INFO: At 2019-11-14 05:14:53 +0000 UTC - event for csi-hostpath-attacher-0: {kubelet k8s-agentpool-23171212-vmss000001} Pulling: Pulling image "quay.io/k8scsi/csi-attacher:v1.2.0" Nov 14 05:19:19.535: INFO: At 2019-11-14 05:14:55 +0000 UTC - event for csi-hostpath-resizer-0: {kubelet k8s-agentpool-23171212-vmss000001} Pulling: Pulling image "quay.io/k8scsi/csi-resizer:v0.2.0" Nov 14 05:19:19.535: INFO: At 2019-11-14 05:14:56 +0000 UTC - event for csi-snapshotter-0: {kubelet k8s-agentpool-23171212-vmss000001} Pulled: Successfully pulled image "quay.io/k8scsi/csi-snapshotter:v2.0.0-rc1" Nov 14 05:19:19.535: INFO: At 2019-11-14 05:14:57 +0000 UTC - event for csi-hostpathplugin-0: {kubelet k8s-agentpool-23171212-vmss000001} PodFitsHostPorts: Predicate PodFitsHostPorts failed Nov 14 05:19:19.535: INFO: At 2019-11-14 05:15:01 +0000 UTC - event for csi-hostpath-provisioner-0: {kubelet k8s-agentpool-23171212-vmss000001} Pulled: Successfully pulled image "quay.io/k8scsi/csi-provisioner:v1.4.0-rc1" Nov 14 05:19:19.535: INFO: At 2019-11-14 05:15:03 +0000 UTC - event for csi-snapshotter-0: {kubelet k8s-agentpool-23171212-vmss000001} Created: Created container csi-snapshotter Nov 14 05:19:19.535: INFO: At 2019-11-14 05:15:04 +0000 UTC - event for csi-hostpathplugin-0: {kubelet k8s-agentpool-23171212-vmss000001} PodFitsHostPorts: Predicate PodFitsHostPorts failed Nov 14 05:19:19.535: INFO: At 2019-11-14 05:15:05 +0000 UTC - event for csi-hostpath-attacher-0: {kubelet k8s-agentpool-23171212-vmss000001} Pulled: Successfully pulled image "quay.io/k8scsi/csi-attacher:v1.2.0" Nov 14 05:19:19.535: INFO: At 2019-11-14 05:15:07 +0000 UTC - event for csi-hostpath-provisioner-0: {kubelet k8s-agentpool-23171212-vmss000001} Created: Created container csi-provisioner Nov 14 05:19:19.535: INFO: At 2019-11-14 05:15:07 +0000 UTC - event for csi-hostpathplugin-0: {kubelet k8s-agentpool-23171212-vmss000001} FailedMount: MountVolume.SetUp failed for volume "default-token-zvw8k" : object "provisioning-9472"/"default-token-zvw8k" not registered Nov 14 05:19:19.535: INFO: At 2019-11-14 05:15:11 +0000 UTC - event for csi-hostpath-resizer-0: {kubelet 
k8s-agentpool-23171212-vmss000001} Pulled: Successfully pulled image "quay.io/k8scsi/csi-resizer:v0.2.0" Nov 14 05:19:19.535: INFO: At 2019-11-14 05:15:11 +0000 UTC - event for csi-hostpathplugin-0: {kubelet k8s-agentpool-23171212-vmss000001} PodFitsHostPorts: Predicate PodFitsHostPorts failed Nov 14 05:19:19.535: INFO: At 2019-11-14 05:15:13 +0000 UTC - event for csi-hostpath-attacher-0: {kubelet k8s-agentpool-23171212-vmss000001} Created: Created container csi-attacher Nov 14 05:19:19.535: INFO: At 2019-11-14 05:15:13 +0000 UTC - event for csi-snapshotter-0: {kubelet k8s-agentpool-23171212-vmss000001} Started: Started container csi-snapshotter Nov 14 05:19:19.535: INFO: At 2019-11-14 05:15:14 +0000 UTC - event for csi-hostpath-provisioner-0: {kubelet k8s-agentpool-23171212-vmss000001} Started: Started container csi-provisioner Nov 14 05:19:19.535: INFO: At 2019-11-14 05:15:18 +0000 UTC - event for csi-hostpath-attacher-0: {kubelet k8s-agentpool-23171212-vmss000001} Started: Started container csi-attacher Nov 14 05:19:19.535: INFO: At 2019-11-14 05:15:18 +0000 UTC - event for csi-hostpath-resizer-0: {kubelet k8s-agentpool-23171212-vmss000001} Created: Created container csi-resizer Nov 14 05:19:19.535: INFO: At 2019-11-14 05:15:19 +0000 UTC - event for csi-hostpathplugin-0: {kubelet k8s-agentpool-23171212-vmss000001} PodFitsHostPorts: Predicate PodFitsHostPorts failed Nov 14 05:19:19.535: INFO: At 2019-11-14 05:15:23 +0000 UTC - event for csi-hostpath-resizer-0: {kubelet k8s-agentpool-23171212-vmss000001} Started: Started container csi-resizer Nov 14 05:19:19.535: INFO: At 2019-11-14 05:15:23 +0000 UTC - event for csi-hostpathplugin-0: {kubelet k8s-agentpool-23171212-vmss000001} PodFitsHostPorts: Predicate PodFitsHostPorts failed Nov 14 05:19:19.535: INFO: At 2019-11-14 05:15:28 +0000 UTC - event for csi-hostpathplugin-0: {kubelet k8s-agentpool-23171212-vmss000001} PodFitsHostPorts: Predicate PodFitsHostPorts failed Nov 14 05:19:19.535: INFO: At 2019-11-14 05:15:34 +0000 UTC - event for csi-hostpathplugin-0: {kubelet k8s-agentpool-23171212-vmss000001} PodFitsHostPorts: Predicate PodFitsHostPorts failed Nov 14 05:19:19.535: INFO: At 2019-11-14 05:15:38 +0000 UTC - event for csi-hostpathplugin-0: {kubelet k8s-agentpool-23171212-vmss000001} PodFitsHostPorts: Predicate PodFitsHostPorts failed Nov 14 05:19:19.535: INFO: At 2019-11-14 05:15:40 +0000 UTC - event for csi-hostpathplugin-0: {kubelet k8s-agentpool-23171212-vmss000001} PodFitsHostPorts: Predicate PodFitsHostPorts failed Nov 14 05:19:19.535: INFO: At 2019-11-14 05:15:44 +0000 UTC - event for csi-hostpathplugin-0: {kubelet k8s-agentpool-23171212-vmss000001} PodFitsHostPorts: Predicate PodFitsHostPorts failed Nov 14 05:19:19.535: INFO: At 2019-11-14 05:15:49 +0000 UTC - event for csi-hostpathplugin-0: {kubelet k8s-agentpool-23171212-vmss000001} PodFitsHostPorts: Predicate PodFitsHostPorts failed Nov 14 05:19:19.535: INFO: At 2019-11-14 05:15:54 +0000 UTC - event for csi-hostpathplugin-0: {kubelet k8s-agentpool-23171212-vmss000001} PodFitsHostPorts: Predicate PodFitsHostPorts failed Nov 14 05:19:19.535: INFO: At 2019-11-14 05:15:56 +0000 UTC - event for csi-hostpathplugin-0: {kubelet k8s-agentpool-23171212-vmss000001} PodFitsHostPorts: Predicate PodFitsHostPorts failed Nov 14 05:19:19.535: INFO: At 2019-11-14 05:16:01 +0000 UTC - event for csi-hostpathplugin-0: {kubelet k8s-agentpool-23171212-vmss000001} PodFitsHostPorts: Predicate PodFitsHostPorts failed Nov 14 05:19:19.535: INFO: At 2019-11-14 05:16:02 +0000 UTC - event for 
csi-hostpathplugin-0: {kubelet k8s-agentpool-23171212-vmss000001} PodFitsHostPorts: Predicate PodFitsHostPorts failed [the kubelet records this identical event for csi-hostpathplugin-0 every few seconds through 2019-11-14 05:19:18 +0000 UTC; the repeated entries are elided] Nov 14 05:19:19.593: INFO: POD NODE PHASE GRACE CONDITIONS Nov 14 05:19:19.594: INFO: csi-hostpath-attacher-0 k8s-agentpool-23171212-vmss000001 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC
2019-11-14 05:14:19 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-11-14 05:15:25 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-11-14 05:15:25 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-11-14 05:14:19 +0000 UTC }] Nov 14 05:19:19.594: INFO: csi-hostpath-provisioner-0 k8s-agentpool-23171212-vmss000001 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-11-14 05:14:21 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-11-14 05:15:15 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-11-14 05:15:15 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-11-14 05:14:21 +0000 UTC }] Nov 14 05:19:19.594: INFO: csi-hostpath-resizer-0 k8s-agentpool-23171212-vmss000001 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-11-14 05:14:22 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-11-14 05:15:25 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-11-14 05:15:25 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-11-14 05:14:22 +0000 UTC }] Nov 14 05:19:19.594: INFO: csi-hostpathplugin-0 k8s-agentpool-23171212-vmss000001 Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-11-14 05:19:19 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-11-14 05:19:19 +0000 UTC ContainersNotReady containers with unready status: [node-driver-registrar hostpath liveness-probe]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-11-14 05:19:19 +0000 UTC ContainersNotReady containers with unready status: [node-driver-registrar hostpath liveness-probe]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-11-14 05:19:19 +0000 UTC }] Nov 14 05:19:19.594: INFO: csi-snapshotter-0 k8s-agentpool-23171212-vmss000001 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-11-14 05:14:23 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2019-11-14 05:15:27 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2019-11-14 05:15:27 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-11-14 05:14:23 +0000 UTC }] Nov 14 05:19:19.594: INFO: Nov 14 05:19:19.709: INFO: Logging node info for node k8s-agentpool-23171212-vmss000000 Nov 14 05:19:19.761: INFO: Node Info: &Node{ObjectMeta:{k8s-agentpool-23171212-vmss000000 /api/v1/nodes/k8s-agentpool-23171212-vmss000000 0f3bbebc-9d46-4ddd-a1dc-c93db8b52883 55620 0 2019-11-14 04:40:04 +0000 UTC <nil> <nil> map[agentpool:agentpool beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:Standard_D4s_v3 beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:westus2 failure-domain.beta.kubernetes.io/zone:westus2-1 kubernetes.azure.com/cluster:kubetest-9c63b39e-0695-11ea-a4cc-c60aac250e75 kubernetes.azure.com/role:agent kubernetes.io/arch:amd64 kubernetes.io/hostname:k8s-agentpool-23171212-vmss000000 kubernetes.io/os:linux kubernetes.io/role:agent node-role.kubernetes.io/agent: storageprofile:managed storagetier:Premium_LRS] 
map[csi.volume.kubernetes.io/nodeid:{"csi-hostpath-ephemeral-5093":"k8s-agentpool-23171212-vmss000000","csi-hostpath-ephemeral-6209":"k8s-agentpool-23171212-vmss000000","csi-hostpath-ephemeral-7919":"k8s-agentpool-23171212-vmss000000","csi-hostpath-provisioning-2202":"k8s-agentpool-23171212-vmss000000","csi-hostpath-provisioning-5393":"k8s-agentpool-23171212-vmss000000","csi-hostpath-provisioning-6454":"k8s-agentpool-23171212-vmss000000","csi-hostpath-provisioning-6474":"k8s-agentpool-23171212-vmss000000","csi-hostpath-provisioning-8364":"k8s-agentpool-23171212-vmss000000","csi-hostpath-volume-8403":"k8s-agentpool-23171212-vmss000000","csi-hostpath-volume-expand-1206":"k8s-agentpool-23171212-vmss000000","csi-hostpath-volume-expand-2585":"k8s-agentpool-23171212-vmss000000","csi-hostpath-volume-expand-5498":"k8s-agentpool-23171212-vmss000000","csi-hostpath-volume-expand-6633":"k8s-agentpool-23171212-vmss000000","csi-mock-csi-mock-volumes-4558":"csi-mock-csi-mock-volumes-4558","csi-mock-csi-mock-volumes-5498":"csi-mock-csi-mock-volumes-5498","csi-mock-csi-mock-volumes-6397":"csi-mock-csi-mock-volumes-6397","csi-mock-csi-mock-volumes-7486":"csi-mock-csi-mock-volumes-7486","csi-mock-csi-mock-volumes-7581":"csi-mock-csi-mock-volumes-7581","csi-mock-csi-mock-volumes-7883":"csi-mock-csi-mock-volumes-7883","csi-mock-csi-mock-volumes-8512":"csi-mock-csi-mock-volumes-8512","csi-mock-csi-mock-volumes-8729":"csi-mock-csi-mock-volumes-8729","csi-mock-csi-mock-volumes-9601":"csi-mock-csi-mock-volumes-9601"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] []},Spec:NodeSpec{PodCIDR:10.244.4.0/24,DoNotUse_ExternalID:,ProviderID:azure:///subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-9c63b39e-0695-11ea-a4cc-c60aac250e75/providers/Microsoft.Compute/virtualMachineScaleSets/k8s-agentpool-23171212-vmss/virtualMachines/0,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.4.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{4 0} {<nil>} 4 DecimalSI},ephemeral-storage: {{31036776448 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{16797569024 0} {<nil>} 16403876Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{4 0} {<nil>} 4 DecimalSI},ephemeral-storage: {{27933098757 0} {<nil>} 27933098757 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{16011137024 0} {<nil>} 15635876Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2019-11-14 04:40:48 +0000 UTC,LastTransitionTime:2019-11-14 04:40:48 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2019-11-14 05:18:52 +0000 UTC,LastTransitionTime:2019-11-14 04:39:50 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2019-11-14 05:18:52 +0000 UTC,LastTransitionTime:2019-11-14 04:39:50 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2019-11-14 05:18:52 +0000 UTC,LastTransitionTime:2019-11-14 04:39:50 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID 
available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2019-11-14 05:18:52 +0000 UTC,LastTransitionTime:2019-11-14 04:40:22 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.248.0.4,},NodeAddress{Type:Hostname,Address:k8s-agentpool-23171212-vmss000000,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:359d6aea81114a07a8070169aad06c4a,SystemUUID:A77EC1C1-102D-514B-A3FC-E5E916EF17BD,BootID:fc99ebb5-9bcd-41e5-aad2-849e47da2eea,KernelVersion:4.15.0-1063-azure,OSImage:Ubuntu 16.04.6 LTS,ContainerRuntimeVersion:docker://3.0.6,KubeletVersion:v1.16.4-beta.0.1+d70a3ca08fe72a,KubeProxyVersion:v1.16.4-beta.0.1+d70a3ca08fe72a,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8sprow.azurecr.io/hyperkube-amd64@sha256:4c04f9ab0fa34bcbcb8ebfbced912f9b998c5d9c090fafdca92911d124fa339b k8sprow.azurecr.io/hyperkube-amd64:azure-e2e-1194831241233305600-197629b6],SizeBytes:604811790,},ContainerImage{Names:[quay.io/kubernetes_incubator/nfs-provisioner@sha256:df762117e3c891f2d2ddff46ecb0776ba1f9f3c44cfd7739b0683bcd7a7954a8 quay.io/kubernetes_incubator/nfs-provisioner:v2.2.2],SizeBytes:391772778,},ContainerImage{Names:[gluster/glusterdynamic-provisioner@sha256:90067cb05a7d217651e84576935934fd9dcff8e6bcdcbaa416bbf36fcd09dbd1 gluster/glusterdynamic-provisioner:v1.0],SizeBytes:373281573,},ContainerImage{Names:[gcr.io/google-samples/gb-frontend@sha256:35cb427341429fac3df10ff74600ea73e8ec0754d78f9ce89e0b4f3d70d53ba6 gcr.io/google-samples/gb-frontend:v6],SizeBytes:373099368,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/volume/nfs@sha256:c2ad734346f608a5f7d69cfded93c4e8094069320657bd372d12ba21dea3ea71 gcr.io/kubernetes-e2e-test-images/volume/nfs:1.0],SizeBytes:225358913,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0],SizeBytes:195659796,},ContainerImage{Names:[httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a httpd:2.4.39-alpine],SizeBytes:126894770,},ContainerImage{Names:[httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 httpd:2.4.38-alpine],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/kubernetes-dashboard-amd64@sha256:0ae6b69432e78069c5ce2bcde0fe409c5c4d6f0f4d9cd50a17974fea38898747 k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.1],SizeBytes:121711221,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/agnhost@sha256:4057a5580c7b59c4fe10d8ab2732c9dec35eea80fd41f7bafc7bd5acc7edf727 gcr.io/kubernetes-e2e-test-images/agnhost:2.6],SizeBytes:57345321,},ContainerImage{Names:[quay.io/k8scsi/csi-provisioner@sha256:0efcb424f1dde9b9fb11a1a14f2e48ab47e1c3f08bc3a929990dcfcb1f7ab34f quay.io/k8scsi/csi-provisioner:v1.4.0-rc1],SizeBytes:54431016,},ContainerImage{Names:[quay.io/k8scsi/csi-snapshotter@sha256:e3d3e742e32d00488fdb401045b9b1d033d7ca0ab6e760f77b24750fc95e5f70 quay.io/k8scsi/csi-snapshotter:v2.0.0-rc1],SizeBytes:51703561,},ContainerImage{Names:[k8s.gcr.io/ip-masq-agent-amd64@sha256:269e0fb9d53fd37f7a135d6a55ea265a67279ba218aa148323f015cf70167340 k8s.gcr.io/ip-masq-agent-amd64:v2.3.0],SizeBytes:50144412,},ContainerImage{Names:[quay.io/k8scsi/csi-attacher@sha256:26fccd7a99d973845df1193b46ebdcc6ab8dc5f6e6be319750c471fce1742d13 
quay.io/k8scsi/csi-attacher:v1.2.0],SizeBytes:46226754,},ContainerImage{Names:[quay.io/k8scsi/csi-attacher@sha256:0aba670b4d9d6b2e720bbf575d733156c676b693ca26501235444490300db838 quay.io/k8scsi/csi-attacher:v1.1.0],SizeBytes:42839085,},ContainerImage{Names:[quay.io/k8scsi/csi-resizer@sha256:7d46fb6eb8b890dc546029d1565d502b4a1d974d33625c6ee2bc7991b77fc1a1 quay.io/k8scsi/csi-resizer:v0.2.0],SizeBytes:42817100,},ContainerImage{Names:[quay.io/k8scsi/csi-resizer@sha256:f315c9042e56def3c05c6b04fe79ec9da6d39ddc557ca365a76cf35964ea08b6 quay.io/k8scsi/csi-resizer:v0.1.0],SizeBytes:42623056,},ContainerImage{Names:[k8s.gcr.io/coredns@sha256:e83beb5e43f8513fa735e77ffc5859640baea30a882a11cc75c4c3244a737d3c k8s.gcr.io/coredns:1.5.0],SizeBytes:42488424,},ContainerImage{Names:[redis@sha256:50899ea1ceed33fa03232f3ac57578a424faa1742c1ac9c7a7bdb95cdf19b858 redis:5.0.5-alpine],SizeBytes:29331594,},ContainerImage{Names:[quay.io/k8scsi/hostpathplugin@sha256:b4826e492fc1762fceaf9726f41575ca0a4567864d3d235da874818de18039de quay.io/k8scsi/hostpathplugin:v1.2.0-rc5],SizeBytes:28761497,},ContainerImage{Names:[mcr.microsoft.com/k8s/flexvolume/keyvault-flexvolume@sha256:4fd30d43947d4a54fc89ead7985beecfd3c9b2a93a0655a373b1608ab90bd5af mcr.microsoft.com/k8s/flexvolume/keyvault-flexvolume:v0.0.7],SizeBytes:22909487,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/echoserver@sha256:e9ba514b896cdf559eef8788b66c2c3ee55f3572df617647b4b0d8b6bf81cf19 gcr.io/kubernetes-e2e-test-images/echoserver:2.2],SizeBytes:21692741,},ContainerImage{Names:[quay.io/k8scsi/mock-driver@sha256:e0eed916b7d970bad2b7d9875f9ad16932f987f0f3d91ec5d86da68b0b5cc9d1 quay.io/k8scsi/mock-driver:v2.1.0],SizeBytes:16226335,},ContainerImage{Names:[nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 nginx:1.14-alpine],SizeBytes:16032814,},ContainerImage{Names:[quay.io/k8scsi/csi-node-driver-registrar@sha256:13daf82fb99e951a4bff8ae5fc7c17c3a8fe7130be6400990d8f6076c32d4599 quay.io/k8scsi/csi-node-driver-registrar:v1.1.0],SizeBytes:15815995,},ContainerImage{Names:[quay.io/k8scsi/livenessprobe@sha256:dde617756e0f602adc566ab71fd885f1dad451ad3fb063ac991c95a2ff47aea5 quay.io/k8scsi/livenessprobe:v1.1.0],SizeBytes:14967303,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/dnsutils@sha256:2abeee84efb79c14d731966e034af33bf324d3b26ca28497555511ff094b3ddd gcr.io/kubernetes-e2e-test-images/dnsutils:1.1],SizeBytes:9349974,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nonewprivs@sha256:10066e9039219449fe3c81f38fe01928f87914150768ab81b62a468e51fa7411 gcr.io/kubernetes-e2e-test-images/nonewprivs:1.0],SizeBytes:6757579,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nautilus@sha256:33a732d4c42a266912a5091598a0f07653c9134db4b8d571690d8afd509e0bfc gcr.io/kubernetes-e2e-test-images/nautilus:1.0],SizeBytes:4753501,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/kitten@sha256:bcbc4875c982ab39aa7c4f6acf4a287f604e996d9f34a3fbda8c3d1a7457d1f6 gcr.io/kubernetes-e2e-test-images/kitten:1.0],SizeBytes:4747037,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/test-webserver@sha256:7f93d6e32798ff28bc6289254d0c2867fe2c849c8e46edc50f8624734309812e gcr.io/kubernetes-e2e-test-images/test-webserver:1.0],SizeBytes:4732240,},ContainerImage{Names:[gcr.io/authenticated-image-pulling/alpine@sha256:7ff177862cb50c602bfe81f805969412e619c054a2bbead977d0c276988aa4a0 
gcr.io/authenticated-image-pulling/alpine:3.7],SizeBytes:4206620,},ContainerImage{Names:[alpine@sha256:8421d9a84432575381bfabd248f1eb56f3aa21d9d7cd2511583c68c9b7511d10 alpine:3.7],SizeBytes:4206494,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/mounttest@sha256:c0bd6f0755f42af09a68c9a47fb993136588a76b3200ec305796b60d629d85d2 gcr.io/kubernetes-e2e-test-images/mounttest:1.0],SizeBytes:1563521,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/mounttest-user@sha256:17319ca525ee003681fccf7e8c6b1b910ff4f49b653d939ac7f9b6e7c463933d gcr.io/kubernetes-e2e-test-images/mounttest-user:1.0],SizeBytes:1450451,},ContainerImage{Names:[mcr.microsoft.com/k8s/flexvolume/blobfuse-flexvolume@sha256:23d8c6033f02a1ecad05127ebdc931bb871264228661bc122704b0974e4d9fdd mcr.microsoft.com/k8s/flexvolume/blobfuse-flexvolume:1.0.8],SizeBytes:1159025,},ContainerImage{Names:[busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 busybox:1.29],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:bbc3a03235220b170ba48a157dd097dd1379299370e1ed99ce976df0355d24f0 busybox:1.27],SizeBytes:1129289,},ContainerImage{Names:[k8s.gcr.io/pause-amd64@sha256:59eec8837a4d942cc19a52b8c09ea75121acc38114a2c68b98983ce9356b8610 k8s.gcr.io/pause@sha256:f78411e19d84a252e53bff71a4407a5686c46983a2c2eeed83929b888179acea k8s.gcr.io/pause-amd64:3.1 k8s.gcr.io/pause:3.1],SizeBytes:742472,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Nov 14 05:19:19.761: INFO: Logging kubelet events for node k8s-agentpool-23171212-vmss000000 Nov 14 05:19:19.817: INFO: Logging pods the kubelet thinks is on node k8s-agentpool-23171212-vmss000000 Nov 14 05:19:19.874: INFO: azure-ip-masq-agent-dgg69 started at 2019-11-14 04:40:27 +0000 UTC (0+1 container statuses recorded) Nov 14 05:19:19.874: INFO: Container azure-ip-masq-agent ready: true, restart count 0 Nov 14 05:19:19.874: INFO: security-context-17a29556-ee37-4a95-8287-90c32b9ca9bf started at 2019-11-14 05:18:26 +0000 UTC (0+1 container statuses recorded) Nov 14 05:19:19.874: INFO: Container write-pod ready: false, restart count 0 Nov 14 05:19:19.874: INFO: hostexec-k8s-agentpool-23171212-vmss000000 started at 2019-11-14 05:17:40 +0000 UTC (0+1 container statuses recorded) Nov 14 05:19:19.874: INFO: Container agnhost ready: true, restart count 0 Nov 14 05:19:19.874: INFO: kube-proxy-cdq9f started at 2019-11-14 04:40:27 +0000 UTC (0+1 container statuses recorded) Nov 14 05:19:19.874: INFO: Container kube-proxy ready: true, restart count 0 Nov 14 05:19:19.874: INFO: affinity-clusterip-btsn8 started at 2019-11-14 05:16:13 +0000 UTC (0+1 container statuses recorded) Nov 14 05:19:19.874: INFO: Container affinity-clusterip ready: false, restart count 0 Nov 14 05:19:19.874: INFO: local-injector started at 2019-11-14 05:18:25 +0000 UTC (0+1 container statuses recorded) Nov 14 05:19:19.874: INFO: Container local-injector ready: false, restart count 0 Nov 14 05:19:19.874: INFO: hostexec-k8s-agentpool-23171212-vmss000000 started at 2019-11-14 05:17:03 +0000 UTC (0+1 container statuses recorded) Nov 14 05:19:19.874: INFO: Container agnhost ready: true, restart count 0 Nov 14 05:19:19.874: INFO: hostexec-k8s-agentpool-23171212-vmss000000 started at 2019-11-14 05:16:15 +0000 UTC (0+1 container statuses recorded) Nov 14 05:19:19.874: INFO: Container agnhost ready: true, restart count 0 Nov 14 05:19:19.874: INFO: security-context-6aa3aa94-93ac-451e-8e69-d19bd23ac8bd started at 2019-11-14 05:19:01 +0000 UTC (0+1 container statuses recorded) Nov 14 05:19:19.874: 
INFO: Container write-pod ready: false, restart count 0 Nov 14 05:19:19.874: INFO: blobfuse-flexvol-installer-6xhz6 started at 2019-11-14 04:40:49 +0000 UTC (0+1 container statuses recorded) Nov 14 05:19:19.874: INFO: Container blobfuse-flexvol-installer ready: true, restart count 0 Nov 14 05:19:19.874: INFO: coredns-87f5d796-k7mr9 started at 2019-11-14 04:40:27 +0000 UTC (0+1 container statuses recorded) Nov 14 05:19:19.874: INFO: Container coredns ready: true, restart count 0 Nov 14 05:19:19.874: INFO: hostexec-k8s-agentpool-23171212-vmss000000 started at 2019-11-14 05:17:20 +0000 UTC (0+1 container statuses recorded) Nov 14 05:19:19.874: INFO: Container agnhost ready: true, restart count 0 Nov 14 05:19:19.874: INFO: hostexec-k8s-agentpool-23171212-vmss000000 started at 2019-11-14 05:16:14 +0000 UTC (0+1 container statuses recorded) Nov 14 05:19:19.874: INFO: Container agnhost ready: true, restart count 0 Nov 14 05:19:19.874: INFO: keyvault-flexvolume-ljqsq started at 2019-11-14 04:40:49 +0000 UTC (0+1 container statuses recorded) Nov 14 05:19:19.874: INFO: Container keyvault-flexvolume ready: true, restart count 0 Nov 14 05:19:19.874: INFO: kubernetes-dashboard-65966766b9-b8ps7 started at 2019-11-14 04:40:49 +0000 UTC (0+1 container statuses recorded) Nov 14 05:19:19.874: INFO: Container kubernetes-dashboard ready: true, restart count 0 Nov 14 05:19:19.874: INFO: pod-subpath-test-local-preprovisionedpv-62cl started at 2019-11-14 05:18:24 +0000 UTC (1+1 container statuses recorded) Nov 14 05:19:19.874: INFO: Init container init-volume-local-preprovisionedpv-62cl ready: true, restart count 0 Nov 14 05:19:19.874: INFO: Container test-container-subpath-local-preprovisionedpv-62cl ready: false, restart count 0 W1114 05:19:19.928999 92573 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
Nov 14 05:19:20.461: INFO: Latency metrics for node k8s-agentpool-23171212-vmss000000 Nov 14 05:19:20.461: INFO: Logging node info for node k8s-agentpool-23171212-vmss000001 Nov 14 05:19:20.513: INFO: Node Info: &Node{ObjectMeta:{k8s-agentpool-23171212-vmss000001 /api/v1/nodes/k8s-agentpool-23171212-vmss000001 e9c1f552-b95b-4548-9ecd-37a7f1925e75 55963 0 2019-11-14 04:40:09 +0000 UTC <nil> <nil> map[agentpool:agentpool beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:Standard_D4s_v3 beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:westus2 failure-domain.beta.kubernetes.io/zone:westus2-2 kubernetes.azure.com/cluster:kubetest-9c63b39e-0695-11ea-a4cc-c60aac250e75 kubernetes.azure.com/role:agent kubernetes.io/arch:amd64 kubernetes.io/hostname:k8s-agentpool-23171212-vmss000001 kubernetes.io/os:linux kubernetes.io/role:agent node-role.kubernetes.io/agent: storageprofile:managed storagetier:Premium_LRS] map[csi.volume.kubernetes.io/nodeid:{"csi-hostpath-ephemeral-6971":"k8s-agentpool-23171212-vmss000001","csi-hostpath-provisioning-3033":"k8s-agentpool-23171212-vmss000001","csi-hostpath-provisioning-3310":"k8s-agentpool-23171212-vmss000001","csi-hostpath-provisioning-4400":"k8s-agentpool-23171212-vmss000001","csi-hostpath-volume-1175":"k8s-agentpool-23171212-vmss000001","csi-hostpath-volume-expand-2485":"k8s-agentpool-23171212-vmss000001","csi-hostpath-volume-expand-8426":"k8s-agentpool-23171212-vmss000001","csi-mock-csi-mock-volumes-3324":"csi-mock-csi-mock-volumes-3324","csi-mock-csi-mock-volumes-3770":"csi-mock-csi-mock-volumes-3770","csi-mock-csi-mock-volumes-5234":"csi-mock-csi-mock-volumes-5234","csi-mock-csi-mock-volumes-9859":"csi-mock-csi-mock-volumes-9859"} node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] []},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUse_ExternalID:,ProviderID:azure:///subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-9c63b39e-0695-11ea-a4cc-c60aac250e75/providers/Microsoft.Compute/virtualMachineScaleSets/k8s-agentpool-23171212-vmss/virtualMachines/1,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.244.1.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{4 0} {<nil>} 4 DecimalSI},ephemeral-storage: {{31036776448 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{16797569024 0} {<nil>} 16403876Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{4 0} {<nil>} 4 DecimalSI},ephemeral-storage: {{27933098757 0} {<nil>} 27933098757 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{16011137024 0} {<nil>} 15635876Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2019-11-14 04:40:48 +0000 UTC,LastTransitionTime:2019-11-14 04:40:48 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2019-11-14 05:19:09 +0000 UTC,LastTransitionTime:2019-11-14 04:40:09 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2019-11-14 05:19:09 +0000 UTC,LastTransitionTime:2019-11-14 04:40:09 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk 
pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2019-11-14 05:19:09 +0000 UTC,LastTransitionTime:2019-11-14 04:40:09 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2019-11-14 05:19:09 +0000 UTC,LastTransitionTime:2019-11-14 04:40:22 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.248.0.5,},NodeAddress{Type:Hostname,Address:k8s-agentpool-23171212-vmss000001,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:639707efd7a74ac4bca6a608e99a6715,SystemUUID:CACA620B-0C7C-7040-A716-91F766CA5A2F,BootID:9fabe02f-4e56-4162-b5c5-2e2733911b4f,KernelVersion:4.15.0-1063-azure,OSImage:Ubuntu 16.04.6 LTS,ContainerRuntimeVersion:docker://3.0.6,KubeletVersion:v1.16.4-beta.0.1+d70a3ca08fe72a,KubeProxyVersion:v1.16.4-beta.0.1+d70a3ca08fe72a,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8sprow.azurecr.io/hyperkube-amd64@sha256:4c04f9ab0fa34bcbcb8ebfbced912f9b998c5d9c090fafdca92911d124fa339b k8sprow.azurecr.io/hyperkube-amd64:azure-e2e-1194831241233305600-197629b6],SizeBytes:604811790,},ContainerImage{Names:[quay.io/kubernetes_incubator/nfs-provisioner@sha256:df762117e3c891f2d2ddff46ecb0776ba1f9f3c44cfd7739b0683bcd7a7954a8 quay.io/kubernetes_incubator/nfs-provisioner:v2.2.2],SizeBytes:391772778,},ContainerImage{Names:[gcr.io/google-samples/gb-frontend@sha256:35cb427341429fac3df10ff74600ea73e8ec0754d78f9ce89e0b4f3d70d53ba6 gcr.io/google-samples/gb-frontend:v6],SizeBytes:373099368,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa k8s.gcr.io/etcd:3.3.15],SizeBytes:246640776,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/volume/nfs@sha256:c2ad734346f608a5f7d69cfded93c4e8094069320657bd372d12ba21dea3ea71 gcr.io/kubernetes-e2e-test-images/volume/nfs:1.0],SizeBytes:225358913,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0],SizeBytes:195659796,},ContainerImage{Names:[httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a httpd:2.4.39-alpine],SizeBytes:126894770,},ContainerImage{Names:[httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 httpd:2.4.38-alpine],SizeBytes:123781643,},ContainerImage{Names:[gcr.io/kubernetes-helm/tiller@sha256:f6d8f4ab9ba993b5f5b60a6edafe86352eabe474ffeb84cb6c79b8866dce45d1 gcr.io/kubernetes-helm/tiller:v2.11.0],SizeBytes:71821984,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/sample-apiserver@sha256:1bafcc6fb1aa990b487850adba9cadc020e42d7905aa8a30481182a477ba24b0 gcr.io/kubernetes-e2e-test-images/sample-apiserver:1.10],SizeBytes:61365829,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/agnhost@sha256:4057a5580c7b59c4fe10d8ab2732c9dec35eea80fd41f7bafc7bd5acc7edf727 gcr.io/kubernetes-e2e-test-images/agnhost:2.6],SizeBytes:57345321,},ContainerImage{Names:[quay.io/k8scsi/csi-provisioner@sha256:0efcb424f1dde9b9fb11a1a14f2e48ab47e1c3f08bc3a929990dcfcb1f7ab34f quay.io/k8scsi/csi-provisioner:v1.4.0-rc1],SizeBytes:54431016,},ContainerImage{Names:[quay.io/k8scsi/csi-snapshotter@sha256:e3d3e742e32d00488fdb401045b9b1d033d7ca0ab6e760f77b24750fc95e5f70 
quay.io/k8scsi/csi-snapshotter:v2.0.0-rc1],SizeBytes:51703561,},ContainerImage{Names:[k8s.gcr.io/ip-masq-agent-amd64@sha256:269e0fb9d53fd37f7a135d6a55ea265a67279ba218aa148323f015cf70167340 k8s.gcr.io/ip-masq-agent-amd64:v2.3.0],SizeBytes:50144412,},ContainerImage{Names:[quay.io/k8scsi/csi-attacher@sha256:26fccd7a99d973845df1193b46ebdcc6ab8dc5f6e6be319750c471fce1742d13 quay.io/k8scsi/csi-attacher:v1.2.0],SizeBytes:46226754,},ContainerImage{Names:[quay.io/k8scsi/csi-attacher@sha256:0aba670b4d9d6b2e720bbf575d733156c676b693ca26501235444490300db838 quay.io/k8scsi/csi-attacher:v1.1.0],SizeBytes:42839085,},ContainerImage{Names:[quay.io/k8scsi/csi-resizer@sha256:7d46fb6eb8b890dc546029d1565d502b4a1d974d33625c6ee2bc7991b77fc1a1 quay.io/k8scsi/csi-resizer:v0.2.0],SizeBytes:42817100,},ContainerImage{Names:[quay.io/k8scsi/csi-resizer@sha256:f315c9042e56def3c05c6b04fe79ec9da6d39ddc557ca365a76cf35964ea08b6 quay.io/k8scsi/csi-resizer:v0.1.0],SizeBytes:42623056,},ContainerImage{Names:[k8s.gcr.io/metrics-server-amd64@sha256:49a9f12f7067d11f42c803dbe61ed2c1299959ad85cb315b25ff7eef8e6b8892 k8s.gcr.io/metrics-server-amd64:v0.2.1],SizeBytes:42541759,},ContainerImage{Names:[gcr.io/google-containers/debian-base@sha256:6966a0aedd7592c18ff2dd803c08bd85780ee19f5e3a2e7cf908a4cd837afcde gcr.io/google-containers/debian-base:0.4.1],SizeBytes:42323657,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nonroot@sha256:d4ede5c74517090b6686219059118ed178cf4620f5db8781b32f806bb1e7395b gcr.io/kubernetes-e2e-test-images/nonroot:1.0],SizeBytes:42321438,},ContainerImage{Names:[redis@sha256:50899ea1ceed33fa03232f3ac57578a424faa1742c1ac9c7a7bdb95cdf19b858 redis:5.0.5-alpine],SizeBytes:29331594,},ContainerImage{Names:[quay.io/k8scsi/hostpathplugin@sha256:b4826e492fc1762fceaf9726f41575ca0a4567864d3d235da874818de18039de quay.io/k8scsi/hostpathplugin:v1.2.0-rc5],SizeBytes:28761497,},ContainerImage{Names:[mcr.microsoft.com/k8s/flexvolume/keyvault-flexvolume@sha256:4fd30d43947d4a54fc89ead7985beecfd3c9b2a93a0655a373b1608ab90bd5af mcr.microsoft.com/k8s/flexvolume/keyvault-flexvolume:v0.0.7],SizeBytes:22909487,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/echoserver@sha256:e9ba514b896cdf559eef8788b66c2c3ee55f3572df617647b4b0d8b6bf81cf19 gcr.io/kubernetes-e2e-test-images/echoserver:2.2],SizeBytes:21692741,},ContainerImage{Names:[quay.io/k8scsi/mock-driver@sha256:e0eed916b7d970bad2b7d9875f9ad16932f987f0f3d91ec5d86da68b0b5cc9d1 quay.io/k8scsi/mock-driver:v2.1.0],SizeBytes:16226335,},ContainerImage{Names:[nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 nginx:1.14-alpine],SizeBytes:16032814,},ContainerImage{Names:[quay.io/k8scsi/csi-node-driver-registrar@sha256:13daf82fb99e951a4bff8ae5fc7c17c3a8fe7130be6400990d8f6076c32d4599 quay.io/k8scsi/csi-node-driver-registrar:v1.1.0],SizeBytes:15815995,},ContainerImage{Names:[quay.io/k8scsi/livenessprobe@sha256:dde617756e0f602adc566ab71fd885f1dad451ad3fb063ac991c95a2ff47aea5 quay.io/k8scsi/livenessprobe:v1.1.0],SizeBytes:14967303,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/dnsutils@sha256:2abeee84efb79c14d731966e034af33bf324d3b26ca28497555511ff094b3ddd gcr.io/kubernetes-e2e-test-images/dnsutils:1.1],SizeBytes:9349974,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nonewprivs@sha256:10066e9039219449fe3c81f38fe01928f87914150768ab81b62a468e51fa7411 
gcr.io/kubernetes-e2e-test-images/nonewprivs:1.0],SizeBytes:6757579,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nautilus@sha256:33a732d4c42a266912a5091598a0f07653c9134db4b8d571690d8afd509e0bfc gcr.io/kubernetes-e2e-test-images/nautilus:1.0],SizeBytes:4753501,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/kitten@sha256:bcbc4875c982ab39aa7c4f6acf4a287f604e996d9f34a3fbda8c3d1a7457d1f6 gcr.io/kubernetes-e2e-test-images/kitten:1.0],SizeBytes:4747037,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/test-webserver@sha256:7f93d6e32798ff28bc6289254d0c2867fe2c849c8e46edc50f8624734309812e gcr.io/kubernetes-e2e-test-images/test-webserver:1.0],SizeBytes:4732240,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/mounttest@sha256:c0bd6f0755f42af09a68c9a47fb993136588a76b3200ec305796b60d629d85d2 gcr.io/kubernetes-e2e-test-images/mounttest:1.0],SizeBytes:1563521,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/mounttest-user@sha256:17319ca525ee003681fccf7e8c6b1b910ff4f49b653d939ac7f9b6e7c463933d gcr.io/kubernetes-e2e-test-images/mounttest-user:1.0],SizeBytes:1450451,},ContainerImage{Names:[busybox@sha256:1303dbf110c57f3edf68d9f5a16c082ec06c4cf7604831669faf2c712260b5a0 busybox:latest],SizeBytes:1219790,},ContainerImage{Names:[mcr.microsoft.com/k8s/flexvolume/blobfuse-flexvolume@sha256:23d8c6033f02a1ecad05127ebdc931bb871264228661bc122704b0974e4d9fdd mcr.microsoft.com/k8s/flexvolume/blobfuse-flexvolume:1.0.8],SizeBytes:1159025,},ContainerImage{Names:[busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 busybox:1.29],SizeBytes:1154361,},ContainerImage{Names:[busybox@sha256:bbc3a03235220b170ba48a157dd097dd1379299370e1ed99ce976df0355d24f0 busybox:1.27],SizeBytes:1129289,},ContainerImage{Names:[k8s.gcr.io/pause-amd64@sha256:59eec8837a4d942cc19a52b8c09ea75121acc38114a2c68b98983ce9356b8610 k8s.gcr.io/pause@sha256:f78411e19d84a252e53bff71a4407a5686c46983a2c2eeed83929b888179acea k8s.gcr.io/pause-amd64:3.1 k8s.gcr.io/pause:3.1],SizeBytes:742472,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Nov 14 05:19:20.514: INFO: Logging kubelet events for node k8s-agentpool-23171212-vmss000001 Nov 14 05:19:20.568: INFO: Logging pods the kubelet thinks is on node k8s-agentpool-23171212-vmss000001 Nov 14 05:19:20.680: INFO: csi-hostpath-provisioner-0 started at 2019-11-14 05:14:04 +0000 UTC (0+1 container statuses recorded) Nov 14 05:19:20.680: INFO: Container csi-provisioner ready: false, restart count 0 Nov 14 05:19:20.680: INFO: csi-hostpath-attacher-0 started at 2019-11-14 05:14:19 +0000 UTC (0+1 container statuses recorded) Nov 14 05:19:20.680: INFO: Container csi-attacher ready: true, restart count 0 Nov 14 05:19:20.680: INFO: tiller-deploy-7559b6b885-vkxml started at 2019-11-14 04:40:50 +0000 UTC (0+1 container statuses recorded) Nov 14 05:19:20.680: INFO: Container tiller ready: true, restart count 0 Nov 14 05:19:20.680: INFO: csi-snapshotter-0 started at 2019-11-14 05:14:06 +0000 UTC (0+1 container statuses recorded) Nov 14 05:19:20.680: INFO: Container csi-snapshotter ready: false, restart count 0 Nov 14 05:19:20.680: INFO: csi-snapshotter-0 started at 2019-11-14 05:14:23 +0000 UTC (0+1 container statuses recorded) Nov 14 05:19:20.680: INFO: Container csi-snapshotter ready: true, restart count 0 Nov 14 05:19:20.680: INFO: csi-hostpath-resizer-0 started at 2019-11-14 05:14:22 +0000 UTC (0+1 container statuses recorded) Nov 14 05:19:20.680: INFO: Container csi-resizer ready: true, restart count 0 Nov 14 05:19:20.680: INFO: 
csi-hostpathplugin-0 started at 2019-11-14 05:17:31 +0000 UTC (0+3 container statuses recorded) Nov 14 05:19:20.680: INFO: Container hostpath ready: false, restart count 0 Nov 14 05:19:20.680: INFO: Container liveness-probe ready: false, restart count 0 Nov 14 05:19:20.680: INFO: Container node-driver-registrar ready: false, restart count 0 Nov 14 05:19:20.680: INFO: csi-hostpath-attacher-0 started at 2019-11-14 05:14:01 +0000 UTC (0+1 container statuses recorded) Nov 14 05:19:20.680: INFO: Container csi-attacher ready: false, restart count 0 Nov 14 05:19:20.680: INFO: azure-ip-masq-agent-mcg7w started at 2019-11-14 04:40:26 +0000 UTC (0+1 container statuses recorded) Nov 14 05:19:20.680: INFO: Container azure-ip-masq-agent ready: true, restart count 0 Nov 14 05:19:20.680: INFO: metrics-server-58ff8c5ddf-h7jqs started at 2019-11-14 04:40:50 +0000 UTC (0+1 container statuses recorded) Nov 14 05:19:20.680: INFO: Container metrics-server ready: true, restart count 0 Nov 14 05:19:20.680: INFO: csi-hostpathplugin-0 started at 2019-11-14 05:19:19 +0000 UTC (0+3 container statuses recorded) Nov 14 05:19:20.680: INFO: Container hostpath ready: false, restart count 0 Nov 14 05:19:20.680: INFO: Container liveness-probe ready: false, restart count 0 Nov 14 05:19:20.680: INFO: Container node-driver-registrar ready: false, restart count 0 Nov 14 05:19:20.680: INFO: csi-hostpath-provisioner-0 started at 2019-11-14 05:14:21 +0000 UTC (0+1 container statuses recorded) Nov 14 05:19:20.680: INFO: Container csi-provisioner ready: true, restart count 0 Nov 14 05:19:20.680: INFO: blobfuse-flexvol-installer-ktdjj started at 2019-11-14 04:40:49 +0000 UTC (0+1 container statuses recorded) Nov 14 05:19:20.680: INFO: Container blobfuse-flexvol-installer ready: true, restart count 0 Nov 14 05:19:20.680: INFO: keyvault-flexvolume-2g62m started at 2019-11-14 04:40:49 +0000 UTC (0+1 container statuses recorded) Nov 14 05:19:20.680: INFO: Container keyvault-flexvolume ready: true, restart count 0 Nov 14 05:19:20.680: INFO: csi-hostpath-resizer-0 started at 2019-11-14 05:14:05 +0000 UTC (0+1 container statuses recorded) Nov 14 05:19:20.680: INFO: Container csi-resizer ready: false, restart count 0 Nov 14 05:19:20.680: INFO: kube-proxy-ng7z8 started at 2019-11-14 04:40:27 +0000 UTC (0+1 container statuses recorded) Nov 14 05:19:20.680: INFO: Container kube-proxy ready: true, restart count 0 W1114 05:19:20.739055 92573 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
Nov 14 05:19:21.136: INFO: Latency metrics for node k8s-agentpool-23171212-vmss000001 Nov 14 05:19:21.136: INFO: Logging node info for node k8s-master-23171212-vmss000000 Nov 14 05:19:21.190: INFO: Node Info: &Node{ObjectMeta:{k8s-master-23171212-vmss000000 /api/v1/nodes/k8s-master-23171212-vmss000000 6c9bb7ee-6dcf-4c6d-a8ad-0377f76a60f6 55744 0 2019-11-14 04:40:03 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:Standard_DS2_v2 beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:westus2 failure-domain.beta.kubernetes.io/zone:westus2-1 kubernetes.azure.com/cluster:kubetest-9c63b39e-0695-11ea-a4cc-c60aac250e75 kubernetes.azure.com/role:master kubernetes.io/arch:amd64 kubernetes.io/hostname:k8s-master-23171212-vmss000000 kubernetes.io/os:linux kubernetes.io/role:master node-role.kubernetes.io/master:] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] []},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUse_ExternalID:,ProviderID:azure:///subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-9c63b39e-0695-11ea-a4cc-c60aac250e75/providers/Microsoft.Compute/virtualMachineScaleSets/k8s-master-23171212-vmss/virtualMachines/0,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:true,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[10.244.2.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{31036776448 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7284887552 0} {<nil>} 7114148Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{27933098757 0} {<nil>} 27933098757 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{6498455552 0} {<nil>} 6346148Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2019-11-14 04:40:48 +0000 UTC,LastTransitionTime:2019-11-14 04:40:48 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2019-11-14 05:18:59 +0000 UTC,LastTransitionTime:2019-11-14 04:39:58 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2019-11-14 05:18:59 +0000 UTC,LastTransitionTime:2019-11-14 04:39:58 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2019-11-14 05:18:59 +0000 UTC,LastTransitionTime:2019-11-14 04:39:58 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2019-11-14 05:18:59 +0000 UTC,LastTransitionTime:2019-11-14 04:40:22 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.240.0.4,},NodeAddress{Type:Hostname,Address:k8s-master-23171212-vmss000000,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:813714caae2d48f4a9036e17505029ae,SystemUUID:A7C76EFE-4E2A-8042-A754-6642A667D859,BootID:245ff6cc-bfb4-4487-ac55-fb3813c9167c,KernelVersion:4.15.0-1063-azure,OSImage:Ubuntu 16.04.6 LTS,ContainerRuntimeVersion:docker://3.0.6,KubeletVersion:v1.16.4-beta.0.1+d70a3ca08fe72a,KubeProxyVersion:v1.16.4-beta.0.1+d70a3ca08fe72a,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8sprow.azurecr.io/hyperkube-amd64@sha256:4c04f9ab0fa34bcbcb8ebfbced912f9b998c5d9c090fafdca92911d124fa339b k8sprow.azurecr.io/hyperkube-amd64:azure-e2e-1194831241233305600-197629b6],SizeBytes:604811790,},ContainerImage{Names:[k8sprow.azurecr.io/azure-cloud-controller-manager@sha256:6fcb752760f3412a2cb10bce535ba4dfa8267081345fa1b5cbc7bb5126ce3437 k8sprow.azurecr.io/azure-cloud-controller-manager:1194831241233305600-d3e4a1cf],SizeBytes:92595467,},ContainerImage{Names:[k8s.gcr.io/kube-addon-manager-amd64@sha256:382c220b3531d9f95bf316a16b7282cc2ef929cd8a89a9dd3f5933edafc41a8e k8s.gcr.io/kube-addon-manager-amd64:v9.0.1],SizeBytes:83076194,},ContainerImage{Names:[k8s.gcr.io/ip-masq-agent-amd64@sha256:269e0fb9d53fd37f7a135d6a55ea265a67279ba218aa148323f015cf70167340 k8s.gcr.io/ip-masq-agent-amd64:v2.3.0],SizeBytes:50144412,},ContainerImage{Names:[k8s.gcr.io/pause-amd64@sha256:59eec8837a4d942cc19a52b8c09ea75121acc38114a2c68b98983ce9356b8610 k8s.gcr.io/pause-amd64:3.1],SizeBytes:742472,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Nov 14 05:19:21.191: INFO: Logging kubelet events for node k8s-master-23171212-vmss000000 Nov 14 05:19:21.255: INFO: Logging pods the kubelet thinks is on node k8s-master-23171212-vmss000000 Nov 14 05:19:21.332: INFO: kube-controller-manager-k8s-master-23171212-vmss000000 started at 2019-11-14 04:39:52 +0000 UTC (0+1 container statuses recorded) Nov 14 05:19:21.333: INFO: Container kube-controller-manager ready: true, restart count 0 Nov 14 05:19:21.333: INFO: azure-ip-masq-agent-q7rgb started at 2019-11-14 04:40:27 +0000 UTC (0+1 container statuses recorded) Nov 14 05:19:21.333: INFO: Container azure-ip-masq-agent ready: true, restart count 0 Nov 14 05:19:21.333: INFO: kube-proxy-cpnbb started at 2019-11-14 04:40:28 +0000 UTC (0+1 container statuses recorded) Nov 14 05:19:21.333: INFO: Container kube-proxy ready: true, restart count 0 Nov 14 05:19:21.333: INFO: kube-scheduler-k8s-master-23171212-vmss000000 started at 2019-11-14 04:39:51 +0000 UTC (0+1 container statuses recorded) Nov 14 05:19:21.333: INFO: Container kube-scheduler ready: true, restart count 0 Nov 14 05:19:21.333: INFO: cloud-controller-manager-k8s-master-23171212-vmss000000 started at 2019-11-14 04:39:51 +0000 UTC (0+1 container statuses recorded) Nov 14 05:19:21.333: INFO: Container cloud-controller-manager ready: true, restart count 0 Nov 14 05:19:21.333: INFO: kube-addon-manager-k8s-master-23171212-vmss000000 started at 2019-11-14 04:39:52 +0000 UTC (0+1 container statuses recorded) Nov 14 05:19:21.333: INFO: Container kube-addon-manager ready: true, restart count 0 Nov 14 05:19:21.333: INFO: kube-apiserver-k8s-master-23171212-vmss000000 started at 2019-11-14 04:39:52 +0000 UTC (0+1 container statuses recorded) Nov 14 05:19:21.333: INFO: Container kube-apiserver ready: true, restart count 0 W1114 
05:19:21.387222 92573 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Nov 14 05:19:21.507: INFO: Latency metrics for node k8s-master-23171212-vmss000000 Nov 14 05:19:21.507: INFO: Logging node info for node k8s-master-23171212-vmss000001 Nov 14 05:19:21.560: INFO: Node Info: &Node{ObjectMeta:{k8s-master-23171212-vmss000001 /api/v1/nodes/k8s-master-23171212-vmss000001 202620f8-2cc3-4eb6-b880-ef6d6d9fbccd 55780 0 2019-11-14 04:40:04 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:Standard_DS2_v2 beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:westus2 failure-domain.beta.kubernetes.io/zone:westus2-2 kubernetes.azure.com/cluster:kubetest-9c63b39e-0695-11ea-a4cc-c60aac250e75 kubernetes.azure.com/role:master kubernetes.io/arch:amd64 kubernetes.io/hostname:k8s-master-23171212-vmss000001 kubernetes.io/os:linux kubernetes.io/role:master node-role.kubernetes.io/master:] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] []},Spec:NodeSpec{PodCIDR:10.244.5.0/24,DoNotUse_ExternalID:,ProviderID:azure:///subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-9c63b39e-0695-11ea-a4cc-c60aac250e75/providers/Microsoft.Compute/virtualMachineScaleSets/k8s-master-23171212-vmss/virtualMachines/1,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:true,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[10.244.5.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{31036776448 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7284887552 0} {<nil>} 7114148Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{27933098757 0} {<nil>} 27933098757 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{6498455552 0} {<nil>} 6346148Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2019-11-14 04:40:48 +0000 UTC,LastTransitionTime:2019-11-14 04:40:48 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2019-11-14 05:19:01 +0000 UTC,LastTransitionTime:2019-11-14 04:39:58 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2019-11-14 05:19:01 +0000 UTC,LastTransitionTime:2019-11-14 04:39:58 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2019-11-14 05:19:01 +0000 UTC,LastTransitionTime:2019-11-14 04:39:58 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2019-11-14 05:19:01 +0000 UTC,LastTransitionTime:2019-11-14 04:40:22 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.240.0.5,},NodeAddress{Type:Hostname,Address:k8s-master-23171212-vmss000001,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:4cafe5635afe4ac8baa078419003bc32,SystemUUID:88981890-9531-334C-9D46-A02D5E4BD18D,BootID:6accdcbe-b0af-4be0-8f82-19833a9a5e2e,KernelVersion:4.15.0-1063-azure,OSImage:Ubuntu 16.04.6 LTS,ContainerRuntimeVersion:docker://3.0.6,KubeletVersion:v1.16.4-beta.0.1+d70a3ca08fe72a,KubeProxyVersion:v1.16.4-beta.0.1+d70a3ca08fe72a,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8sprow.azurecr.io/hyperkube-amd64@sha256:4c04f9ab0fa34bcbcb8ebfbced912f9b998c5d9c090fafdca92911d124fa339b k8sprow.azurecr.io/hyperkube-amd64:azure-e2e-1194831241233305600-197629b6],SizeBytes:604811790,},ContainerImage{Names:[k8sprow.azurecr.io/azure-cloud-controller-manager@sha256:6fcb752760f3412a2cb10bce535ba4dfa8267081345fa1b5cbc7bb5126ce3437 k8sprow.azurecr.io/azure-cloud-controller-manager:1194831241233305600-d3e4a1cf],SizeBytes:92595467,},ContainerImage{Names:[k8s.gcr.io/kube-addon-manager-amd64@sha256:382c220b3531d9f95bf316a16b7282cc2ef929cd8a89a9dd3f5933edafc41a8e k8s.gcr.io/kube-addon-manager-amd64:v9.0.1],SizeBytes:83076194,},ContainerImage{Names:[k8s.gcr.io/ip-masq-agent-amd64@sha256:269e0fb9d53fd37f7a135d6a55ea265a67279ba218aa148323f015cf70167340 k8s.gcr.io/ip-masq-agent-amd64:v2.3.0],SizeBytes:50144412,},ContainerImage{Names:[k8s.gcr.io/pause-amd64@sha256:59eec8837a4d942cc19a52b8c09ea75121acc38114a2c68b98983ce9356b8610 k8s.gcr.io/pause-amd64:3.1],SizeBytes:742472,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Nov 14 05:19:21.560: INFO: Logging kubelet events for node k8s-master-23171212-vmss000001 Nov 14 05:19:21.623: INFO: Logging pods the kubelet thinks is on node k8s-master-23171212-vmss000001 Nov 14 05:19:21.707: INFO: cloud-controller-manager-k8s-master-23171212-vmss000001 started at 2019-11-14 04:39:52 +0000 UTC (0+1 container statuses recorded) Nov 14 05:19:21.707: INFO: Container cloud-controller-manager ready: true, restart count 0 Nov 14 05:19:21.707: INFO: kube-addon-manager-k8s-master-23171212-vmss000001 started at 2019-11-14 04:39:52 +0000 UTC (0+1 container statuses recorded) Nov 14 05:19:21.707: INFO: Container kube-addon-manager ready: true, restart count 0 Nov 14 05:19:21.707: INFO: kube-apiserver-k8s-master-23171212-vmss000001 started at 2019-11-14 04:39:52 +0000 UTC (0+1 container statuses recorded) Nov 14 05:19:21.707: INFO: Container kube-apiserver ready: true, restart count 0 Nov 14 05:19:21.707: INFO: kube-controller-manager-k8s-master-23171212-vmss000001 started at 2019-11-14 04:39:52 +0000 UTC (0+1 container statuses recorded) Nov 14 05:19:21.707: INFO: Container kube-controller-manager ready: true, restart count 0 Nov 14 05:19:21.707: INFO: azure-ip-masq-agent-dnl49 started at 2019-11-14 04:40:26 +0000 UTC (0+1 container statuses recorded) Nov 14 05:19:21.707: INFO: Container azure-ip-masq-agent ready: true, restart count 0 Nov 14 05:19:21.707: INFO: kube-proxy-srv2s started at 2019-11-14 04:40:27 +0000 UTC (0+1 container statuses recorded) Nov 14 05:19:21.707: INFO: Container kube-proxy ready: true, restart count 0 Nov 14 05:19:21.707: INFO: kube-scheduler-k8s-master-23171212-vmss000001 started at 2019-11-14 04:39:52 +0000 UTC (0+1 container statuses recorded) Nov 14 05:19:21.707: INFO: Container kube-scheduler ready: true, restart count 0 W1114 
05:19:21.811608 92573 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Nov 14 05:19:21.931: INFO: Latency metrics for node k8s-master-23171212-vmss000001 Nov 14 05:19:21.931: INFO: Logging node info for node k8s-master-23171212-vmss000002 Nov 14 05:19:21.983: INFO: Node Info: &Node{ObjectMeta:{k8s-master-23171212-vmss000002 /api/v1/nodes/k8s-master-23171212-vmss000002 8eca3a9a-6fd5-4796-82bb-2f37c6fc30b7 55414 0 2019-11-14 04:41:04 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:Standard_DS2_v2 beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:westus2 failure-domain.beta.kubernetes.io/zone:westus2-1 kubernetes.azure.com/cluster:kubetest-9c63b39e-0695-11ea-a4cc-c60aac250e75 kubernetes.azure.com/role:master kubernetes.io/arch:amd64 kubernetes.io/hostname:k8s-master-23171212-vmss000002 kubernetes.io/os:linux kubernetes.io/role:master node-role.kubernetes.io/master:] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] []},Spec:NodeSpec{PodCIDR:10.244.6.0/24,DoNotUse_ExternalID:,ProviderID:azure:///subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-9c63b39e-0695-11ea-a4cc-c60aac250e75/providers/Microsoft.Compute/virtualMachineScaleSets/k8s-master-23171212-vmss/virtualMachines/2,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:true,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[10.244.6.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{31036776448 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7284883456 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{27933098757 0} {<nil>} 27933098757 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{6498451456 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2019-11-14 04:41:18 +0000 UTC,LastTransitionTime:2019-11-14 04:41:18 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2019-11-14 05:18:29 +0000 UTC,LastTransitionTime:2019-11-14 04:40:56 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2019-11-14 05:18:29 +0000 UTC,LastTransitionTime:2019-11-14 04:40:56 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2019-11-14 05:18:29 +0000 UTC,LastTransitionTime:2019-11-14 04:40:56 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2019-11-14 05:18:29 +0000 UTC,LastTransitionTime:2019-11-14 04:41:04 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.240.0.6,},NodeAddress{Type:Hostname,Address:k8s-master-23171212-vmss000002,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:eb5abe50949445b79334d994c94314f8,SystemUUID:E11F8710-4785-DA42-B98E-8E97145F92C7,BootID:8fe9e9b2-2b16-4895-91c7-dc676b577942,KernelVersion:4.15.0-1063-azure,OSImage:Ubuntu 16.04.6 LTS,ContainerRuntimeVersion:docker://3.0.6,KubeletVersion:v1.16.4-beta.0.1+d70a3ca08fe72a,KubeProxyVersion:v1.16.4-beta.0.1+d70a3ca08fe72a,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8sprow.azurecr.io/hyperkube-amd64@sha256:4c04f9ab0fa34bcbcb8ebfbced912f9b998c5d9c090fafdca92911d124fa339b k8sprow.azurecr.io/hyperkube-amd64:azure-e2e-1194831241233305600-197629b6],SizeBytes:604811790,},ContainerImage{Names:[k8sprow.azurecr.io/azure-cloud-controller-manager@sha256:6fcb752760f3412a2cb10bce535ba4dfa8267081345fa1b5cbc7bb5126ce3437 k8sprow.azurecr.io/azure-cloud-controller-manager:1194831241233305600-d3e4a1cf],SizeBytes:92595467,},ContainerImage{Names:[k8s.gcr.io/kube-addon-manager-amd64@sha256:382c220b3531d9f95bf316a16b7282cc2ef929cd8a89a9dd3f5933edafc41a8e k8s.gcr.io/kube-addon-manager-amd64:v9.0.1],SizeBytes:83076194,},ContainerImage{Names:[k8s.gcr.io/ip-masq-agent-amd64@sha256:269e0fb9d53fd37f7a135d6a55ea265a67279ba218aa148323f015cf70167340 k8s.gcr.io/ip-masq-agent-amd64:v2.3.0],SizeBytes:50144412,},ContainerImage{Names:[k8s.gcr.io/pause-amd64@sha256:59eec8837a4d942cc19a52b8c09ea75121acc38114a2c68b98983ce9356b8610 k8s.gcr.io/pause-amd64:3.1],SizeBytes:742472,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Nov 14 05:19:21.983: INFO: Logging kubelet events for node k8s-master-23171212-vmss000002 Nov 14 05:19:22.041: INFO: Logging pods the kubelet thinks is on node k8s-master-23171212-vmss000002 Nov 14 05:19:22.121: INFO: kube-proxy-4vs6q started at 2019-11-14 04:41:06 +0000 UTC (0+1 container statuses recorded) Nov 14 05:19:22.121: INFO: Container kube-proxy ready: true, restart count 0 Nov 14 05:19:22.121: INFO: kube-addon-manager-k8s-master-23171212-vmss000002 started at 2019-11-14 04:40:52 +0000 UTC (0+1 container statuses recorded) Nov 14 05:19:22.121: INFO: Container kube-addon-manager ready: true, restart count 0 Nov 14 05:19:22.121: INFO: kube-apiserver-k8s-master-23171212-vmss000002 started at 2019-11-14 04:40:52 +0000 UTC (0+1 container statuses recorded) Nov 14 05:19:22.121: INFO: Container kube-apiserver ready: true, restart count 0 Nov 14 05:19:22.121: INFO: kube-controller-manager-k8s-master-23171212-vmss000002 started at 2019-11-14 04:40:52 +0000 UTC (0+1 container statuses recorded) Nov 14 05:19:22.121: INFO: Container kube-controller-manager ready: true, restart count 0 Nov 14 05:19:22.121: INFO: kube-scheduler-k8s-master-23171212-vmss000002 started at 2019-11-14 04:40:52 +0000 UTC (0+1 container statuses recorded) Nov 14 05:19:22.121: INFO: Container kube-scheduler ready: true, restart count 0 Nov 14 05:19:22.121: INFO: cloud-controller-manager-k8s-master-23171212-vmss000002 started at 2019-11-14 04:40:53 +0000 UTC (0+1 container statuses recorded) Nov 14 05:19:22.121: INFO: Container cloud-controller-manager ready: true, restart count 0 Nov 14 05:19:22.121: INFO: azure-ip-masq-agent-mw27f started at 2019-11-14 04:41:05 +0000 UTC (0+1 container statuses recorded) Nov 14 05:19:22.121: INFO: Container azure-ip-masq-agent ready: true, restart count 0 W1114 
05:19:22.174688 92573 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Nov 14 05:19:22.296: INFO: Latency metrics for node k8s-master-23171212-vmss000002 Nov 14 05:19:22.296: INFO: Logging node info for node k8s-master-23171212-vmss000003 Nov 14 05:19:22.348: INFO: Node Info: &Node{ObjectMeta:{k8s-master-23171212-vmss000003 /api/v1/nodes/k8s-master-23171212-vmss000003 b1a400e7-f6ff-4241-9175-cd8bd70dd11a 55747 0 2019-11-14 04:40:03 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:Standard_DS2_v2 beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:westus2 failure-domain.beta.kubernetes.io/zone:westus2-2 kubernetes.azure.com/cluster:kubetest-9c63b39e-0695-11ea-a4cc-c60aac250e75 kubernetes.azure.com/role:master kubernetes.io/arch:amd64 kubernetes.io/hostname:k8s-master-23171212-vmss000003 kubernetes.io/os:linux kubernetes.io/role:master node-role.kubernetes.io/master:] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] []},Spec:NodeSpec{PodCIDR:10.244.3.0/24,DoNotUse_ExternalID:,ProviderID:azure:///subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-9c63b39e-0695-11ea-a4cc-c60aac250e75/providers/Microsoft.Compute/virtualMachineScaleSets/k8s-master-23171212-vmss/virtualMachines/3,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:true,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[10.244.3.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{31036776448 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7284887552 0} {<nil>} 7114148Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{27933098757 0} {<nil>} 27933098757 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{6498455552 0} {<nil>} 6346148Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2019-11-14 04:40:48 +0000 UTC,LastTransitionTime:2019-11-14 04:40:48 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2019-11-14 05:18:59 +0000 UTC,LastTransitionTime:2019-11-14 04:39:59 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2019-11-14 05:18:59 +0000 UTC,LastTransitionTime:2019-11-14 04:39:59 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2019-11-14 05:18:59 +0000 UTC,LastTransitionTime:2019-11-14 04:39:59 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2019-11-14 05:18:59 +0000 UTC,LastTransitionTime:2019-11-14 04:40:22 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.240.0.7,},NodeAddress{Type:Hostname,Address:k8s-master-23171212-vmss000003,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:effe7f682034467995d1db3ee85a4a38,SystemUUID:2073A143-352C-D241-B189-4A1DCC64C62C,BootID:6c95e89b-c056-494f-b817-6494fc9fd635,KernelVersion:4.15.0-1063-azure,OSImage:Ubuntu 16.04.6 LTS,ContainerRuntimeVersion:docker://3.0.6,KubeletVersion:v1.16.4-beta.0.1+d70a3ca08fe72a,KubeProxyVersion:v1.16.4-beta.0.1+d70a3ca08fe72a,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8sprow.azurecr.io/hyperkube-amd64@sha256:4c04f9ab0fa34bcbcb8ebfbced912f9b998c5d9c090fafdca92911d124fa339b k8sprow.azurecr.io/hyperkube-amd64:azure-e2e-1194831241233305600-197629b6],SizeBytes:604811790,},ContainerImage{Names:[k8sprow.azurecr.io/azure-cloud-controller-manager@sha256:6fcb752760f3412a2cb10bce535ba4dfa8267081345fa1b5cbc7bb5126ce3437 k8sprow.azurecr.io/azure-cloud-controller-manager:1194831241233305600-d3e4a1cf],SizeBytes:92595467,},ContainerImage{Names:[k8s.gcr.io/kube-addon-manager-amd64@sha256:382c220b3531d9f95bf316a16b7282cc2ef929cd8a89a9dd3f5933edafc41a8e k8s.gcr.io/kube-addon-manager-amd64:v9.0.1],SizeBytes:83076194,},ContainerImage{Names:[k8s.gcr.io/ip-masq-agent-amd64@sha256:269e0fb9d53fd37f7a135d6a55ea265a67279ba218aa148323f015cf70167340 k8s.gcr.io/ip-masq-agent-amd64:v2.3.0],SizeBytes:50144412,},ContainerImage{Names:[k8s.gcr.io/pause-amd64@sha256:59eec8837a4d942cc19a52b8c09ea75121acc38114a2c68b98983ce9356b8610 k8s.gcr.io/pause-amd64:3.1],SizeBytes:742472,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Nov 14 05:19:22.348: INFO: Logging kubelet events for node k8s-master-23171212-vmss000003 Nov 14 05:19:22.404: INFO: Logging pods the kubelet thinks is on node k8s-master-23171212-vmss000003 Nov 14 05:19:22.480: INFO: kube-controller-manager-k8s-master-23171212-vmss000003 started at 2019-11-14 04:39:52 +0000 UTC (0+1 container statuses recorded) Nov 14 05:19:22.480: INFO: Container kube-controller-manager ready: true, restart count 0 Nov 14 05:19:22.480: INFO: kube-scheduler-k8s-master-23171212-vmss000003 started at 2019-11-14 04:39:52 +0000 UTC (0+1 container statuses recorded) Nov 14 05:19:22.480: INFO: Container kube-scheduler ready: true, restart count 0 Nov 14 05:19:22.480: INFO: azure-ip-masq-agent-4s5bk started at 2019-11-14 04:40:27 +0000 UTC (0+1 container statuses recorded) Nov 14 05:19:22.480: INFO: Container azure-ip-masq-agent ready: true, restart count 0 Nov 14 05:19:22.480: INFO: kube-proxy-hrqtx started at 2019-11-14 04:40:27 +0000 UTC (0+1 container statuses recorded) Nov 14 05:19:22.480: INFO: Container kube-proxy ready: true, restart count 0 Nov 14 05:19:22.480: INFO: cloud-controller-manager-k8s-master-23171212-vmss000003 started at 2019-11-14 04:39:52 +0000 UTC (0+1 container statuses recorded) Nov 14 05:19:22.480: INFO: Container cloud-controller-manager ready: true, restart count 0 Nov 14 05:19:22.480: INFO: kube-addon-manager-k8s-master-23171212-vmss000003 started at 2019-11-14 04:39:52 +0000 UTC (0+1 container statuses recorded) Nov 14 05:19:22.480: INFO: Container kube-addon-manager ready: true, restart count 0 Nov 14 05:19:22.480: INFO: kube-apiserver-k8s-master-23171212-vmss000003 started at 2019-11-14 04:39:52 +0000 UTC (0+1 container statuses recorded) Nov 14 05:19:22.480: INFO: Container kube-apiserver ready: true, restart count 0 W1114 
05:19:22.533646 92573 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Nov 14 05:19:22.659: INFO: Latency metrics for node k8s-master-23171212-vmss000003
Nov 14 05:19:22.659: INFO: Logging node info for node k8s-master-23171212-vmss000004
Nov 14 05:19:22.717: INFO: Node Info: &Node{ObjectMeta:{k8s-master-23171212-vmss000004 /api/v1/nodes/k8s-master-23171212-vmss000004 25a9993c-54fa-45cc-9da7-66c66cafa30f 55826 0 2019-11-14 04:40:08 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:Standard_DS2_v2 beta.kubernetes.io/os:linux failure-domain.beta.kubernetes.io/region:westus2 failure-domain.beta.kubernetes.io/zone:westus2-1 kubernetes.azure.com/cluster:kubetest-9c63b39e-0695-11ea-a4cc-c60aac250e75 kubernetes.azure.com/role:master kubernetes.io/arch:amd64 kubernetes.io/hostname:k8s-master-23171212-vmss000004 kubernetes.io/os:linux kubernetes.io/role:master node-role.kubernetes.io/master:] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] []},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUse_ExternalID:,ProviderID:azure:///subscriptions/0e46bd28-a80f-4d3a-8200-d9eb8d80cb2e/resourceGroups/kubetest-9c63b39e-0695-11ea-a4cc-c60aac250e75/providers/Microsoft.Compute/virtualMachineScaleSets/k8s-master-23171212-vmss/virtualMachines/4,Unschedulable:false,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:true,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[10.244.0.0/24],},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{31036776448 0} {<nil>} BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7284887552 0} {<nil>} 7114148Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{27933098757 0} {<nil>} 27933098757 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{6498455552 0} {<nil>} 6346148Ki BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2019-11-14 04:40:48 +0000 UTC,LastTransitionTime:2019-11-14 04:40:48 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2019-11-14 05:19:04 +0000 UTC,LastTransitionTime:2019-11-14 04:40:05 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2019-11-14 05:19:04 +0000 UTC,LastTransitionTime:2019-11-14 04:40:05 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2019-11-14 05:19:04 +0000 UTC,LastTransitionTime:2019-11-14 04:40:05 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2019-11-14 05:19:04 +0000 UTC,LastTransitionTime:2019-11-14 04:40:22 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.240.0.8,},NodeAddress{Type:Hostname,Address:k8s-master-23171212-vmss000004,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:ab6b205a70ea45b1b28b801e68a4ba84,SystemUUID:65406178-5013-644C-AD46-D7BC6F0DD7BF,BootID:e6b05928-9970-49a5-bd51-149982b32750,KernelVersion:4.15.0-1063-azure,OSImage:Ubuntu 16.04.6 LTS,ContainerRuntimeVersion:docker://3.0.6,KubeletVersion:v1.16.4-beta.0.1+d70a3ca08fe72a,KubeProxyVersion:v1.16.4-beta.0.1+d70a3ca08fe72a,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8sprow.azurecr.io/hyperkube-amd64@sha256:4c04f9ab0fa34bcbcb8ebfbced912f9b998c5d9c090fafdca92911d124fa339b k8sprow.azurecr.io/hyperkube-amd64:azure-e2e-1194831241233305600-197629b6],SizeBytes:604811790,},ContainerImage{Names:[k8sprow.azurecr.io/azure-cloud-controller-manager@sha256:6fcb752760f3412a2cb10bce535ba4dfa8267081345fa1b5cbc7bb5126ce3437 k8sprow.azurecr.io/azure-cloud-controller-manager:1194831241233305600-d3e4a1cf],SizeBytes:92595467,},ContainerImage{Names:[k8s.gcr.io/kube-addon-manager-amd64@sha256:382c220b3531d9f95bf316a16b7282cc2ef929cd8a89a9dd3f5933edafc41a8e k8s.gcr.io/kube-addon-manager-amd64:v9.0.1],SizeBytes:83076194,},ContainerImage{Names:[k8s.gcr.io/ip-masq-agent-amd64@sha256:269e0fb9d53fd37f7a135d6a55ea265a67279ba218aa148323f015cf70167340 k8s.gcr.io/ip-masq-agent-amd64:v2.3.0],SizeBytes:50144412,},ContainerImage{Names:[k8s.gcr.io/pause-amd64@sha256:59eec8837a4d942cc19a52b8c09ea75121acc38114a2c68b98983ce9356b8610 k8s.gcr.io/pause-amd64:3.1],SizeBytes:742472,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Nov 14 05:19:22.717: INFO: Logging kubelet events for node k8s-master-23171212-vmss000004
Nov 14 05:19:22.773: INFO: Logging pods the kubelet thinks is on node k8s-master-23171212-vmss000004
Nov 14 05:19:22.850: INFO: azure-ip-masq-agent-47pzk started at 2019-11-14 04:40:26 +0000 UTC (0+1 container statuses recorded)
Nov 14 05:19:22.850: INFO: Container azure-ip-masq-agent ready: true, restart count 0
Nov 14 05:19:22.850: INFO: kube-proxy-47vmd started at 2019-11-14 04:40:27 +0000 UTC (0+1 container statuses recorded)
Nov 14 05:19:22.850: INFO: Container kube-proxy ready: true, restart count 0
Nov 14 05:19:22.850: INFO: kube-scheduler-k8s-master-23171212-vmss000004 started at 2019-11-14 04:39:52 +0000 UTC (0+1 container statuses recorded)
Nov 14 05:19:22.850: INFO: Container kube-scheduler ready: true, restart count 0
Nov 14 05:19:22.850: INFO: cloud-controller-manager-k8s-master-23171212-vmss000004 started at 2019-11-14 04:39:52 +0000 UTC (0+1 container statuses recorded)
Nov 14 05:19:22.850: INFO: Container cloud-controller-manager ready: true, restart count 0
Nov 14 05:19:22.850: INFO: kube-addon-manager-k8s-master-23171212-vmss000004 started at 2019-11-14 04:39:52 +0000 UTC (0+1 container statuses recorded)
Nov 14 05:19:22.850: INFO: Container kube-addon-manager ready: true, restart count 0
Nov 14 05:19:22.850: INFO: kube-apiserver-k8s-master-23171212-vmss000004 started at 2019-11-14 04:39:52 +0000 UTC (0+1 container statuses recorded)
Nov 14 05:19:22.850: INFO: Container kube-apiserver ready: true, restart count 0
Nov 14 05:19:22.850: INFO: kube-controller-manager-k8s-master-23171212-vmss000004 started at 2019-11-14 04:39:52 +0000 UTC (0+1 container statuses recorded)
Nov 14 05:19:22.850: INFO: Container kube-controller-manager ready: true, restart count 0
W1114 05:19:22.911451 92573 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Nov 14 05:19:23.043: INFO: Latency metrics for node k8s-master-23171212-vmss000004
Nov 14 05:19:23.043: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "provisioning-9472" for this suite.
WARNING: pod log: csi-hostpath-provisioner-0/csi-provisioner: pods "csi-hostpath-provisioner-0" not found
Nov 14 05:21:31.260: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Nov 14 05:21:32.989: INFO: namespace provisioning-9472 deletion completed in 2m9.892915909s
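For anyone triaging from these dumps, the same node state can be re-checked directly against the cluster. A minimal kubectl sketch (not part of the job output; assumes KUBECONFIG points at the test cluster, with the node name taken from the log above):

  # Summarize the node conditions the framework prints in the Node Info dump.
  kubectl get node k8s-master-23171212-vmss000004 \
    -o jsonpath='{range .status.conditions[*]}{.type}={.status}{"\n"}{end}'

  # List the pods running on that node, mirroring the "Logging pods ..." lines.
  kubectl get pods --all-namespaces \
    --field-selector spec.nodeName=k8s-master-23171212-vmss000004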
error during ./hack/ginkgo-e2e.sh --ginkgo.flakeAttempts=2 --num-nodes=2 --ginkgo.skip=\[Slow\]|\[Serial\]|\[Disruptive\]|\[Flaky\]|\[Feature:.+\]|Network\sshould\sset\sTCP\sCLOSE_WAIT\stimeout|Mount\spropagation\sshould\spropagate\smounts\sto\sthe\shost|PodSecurityPolicy|PVC\sProtection\sVerify|should\sprovide\sbasic\sidentity|should\sadopt\smatching\sorphans\sand\srelease|should\snot\sdeadlock\swhen\sa\spod's\spredecessor\sfails|should\sperform\srolling\supdates\sand\sroll\sbacks\sof\stemplate\smodifications\swith\sPVCs|should\sperform\srolling\supdates\sand\sroll\sbacks\sof\stemplate\smodifications|Services\sshould\sbe\sable\sto\screate\sa\sfunctioning\sNodePort\sservice$|volumeMode\sshould\snot\smount\s/\smap\sunused\svolumes\sin\sa\spod --report-dir=/workspace/_artifacts --disable-log-dump=true: exit status 1
from junit_runner.xml
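The exit status 1 above is the suite-level verdict recorded in junit_runner.xml rather than a harness crash: the run already used --ginkgo.flakeAttempts=2 plus the skip regex shown, and failures remained. To iterate on a single failing spec instead of repeating the whole run, hack/ginkgo-e2e.sh passes Ginkgo flags through, so a focus regex works; a minimal sketch (the regex placeholder is illustrative, not taken from this job):

  # Re-run one spec against the same cluster; spaces in spec names must be
  # escaped as \s, as in the skip list above.
  ./hack/ginkgo-e2e.sh \
    --ginkgo.focus='<regex matching the failing spec name>' \
    --report-dir=/tmp/artifacts --disable-log-dump=true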
Build
Check APIReachability
Deferred TearDown
DumpClusterLogs
IsUp
Kubernetes e2e suite [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]
Kubernetes e2e suite [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance]
Kubernetes e2e suite [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]
Kubernetes e2e suite [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]
Kubernetes e2e suite [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
Kubernetes e2e suite [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
Kubernetes e2e suite [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
Kubernetes e2e suite [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set [NodeConformance]
Kubernetes e2e suite [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
Kubernetes e2e suite [k8s.io] Container Runtime blackbox test when running a container with a new image should be able to pull from private registry with secret [NodeConformance]
Kubernetes e2e suite [k8s.io] Container Runtime blackbox test when running a container with a new image should be able to pull image from docker hub [NodeConformance]
Kubernetes e2e suite [k8s.io] Container Runtime blackbox test when running a container with a new image should be able to pull image from gcr.io [NodeConformance]
Kubernetes e2e suite [k8s.io] Container Runtime blackbox test when running a container with a new image should not be able to pull from private registry without secret [NodeConformance]
Kubernetes e2e suite [k8s.io] Container Runtime blackbox test when running a container with a new image should not be able to pull image from invalid registry [NodeConformance]
Kubernetes e2e suite [k8s.io] Container Runtime blackbox test when running a container with a new image should not be able to pull non-existing image from gcr.io [NodeConformance]
Kubernetes e2e suite [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance]
Kubernetes e2e suite [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
Kubernetes e2e suite [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
Kubernetes e2e suite [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance]
Kubernetes e2e suite [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance]
Kubernetes e2e suite [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance]
Kubernetes e2e suite [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance]
Kubernetes e2e suite [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
Kubernetes e2e suite [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance]
Kubernetes e2e suite [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance]
Kubernetes e2e suite [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance]
Kubernetes e2e suite [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [k8s.io] Lease API should be available
Kubernetes e2e suite [k8s.io] NodeLease when the NodeLease feature is enabled the kubelet should create and update a lease in the kube-node-lease namespace
Kubernetes e2e suite [k8s.io] NodeLease when the NodeLease feature is enabled the kubelet should report node status infrequently
Kubernetes e2e suite [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
Kubernetes e2e suite [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance]
Kubernetes e2e suite [k8s.io] Pods should be updated [NodeConformance] [Conformance]
Kubernetes e2e suite [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]
Kubernetes e2e suite [k8s.io] Pods should get a host IP [NodeConformance] [Conformance]
Kubernetes e2e suite [k8s.io] Pods should support pod readiness gates [NodeFeature:PodReadinessGate]
Kubernetes e2e suite [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance]
Kubernetes e2e suite [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
Kubernetes e2e suite [k8s.io] PrivilegedPod [NodeConformance] should enable privileged commands [LinuxOnly]
Kubernetes e2e suite [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
Kubernetes e2e suite [k8s.io] Probing container should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
Kubernetes e2e suite [k8s.io] Probing container should *not* be restarted with a non-local redirect http liveness probe
Kubernetes e2e suite [k8s.io] Probing container should *not* be restarted with a tcp:8080 liveness probe [NodeConformance]
Kubernetes e2e suite [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
Kubernetes e2e suite [k8s.io] Probing container should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
Kubernetes e2e suite [k8s.io] Probing container should be restarted with a local redirect http liveness probe
Kubernetes e2e suite [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]
Kubernetes e2e suite [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
Kubernetes e2e suite [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
Kubernetes e2e suite [k8s.io] Security Context When creating a container with runAsNonRoot should not run with an explicit root user ID [LinuxOnly]
Kubernetes e2e suite [k8s.io] Security Context When creating a container with runAsNonRoot should not run without a specified user ID
Kubernetes e2e suite [k8s.io] Security Context When creating a container with runAsNonRoot should run with an explicit non-root user ID [LinuxOnly]
Kubernetes e2e suite [k8s.io] Security Context When creating a container with runAsNonRoot should run with an image specified user ID
Kubernetes e2e suite [k8s.io] Security Context When creating a container with runAsUser should run the container with uid 0 [LinuxOnly] [NodeConformance]
Kubernetes e2e suite [k8s.io] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [k8s.io] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with readonly rootfs when readOnlyRootFilesystem=true [LinuxOnly] [NodeConformance]
Kubernetes e2e suite [k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]
Kubernetes e2e suite [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should allow privilege escalation when not explicitly set and uid != 0 [LinuxOnly] [NodeConformance]
Kubernetes e2e suite [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should allow privilege escalation when true [LinuxOnly] [NodeConformance]
Kubernetes e2e suite [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [k8s.io] Sysctls [LinuxOnly] [NodeFeature:Sysctls] should not launch unsafe, but not explicitly enabled sysctls on the node
Kubernetes e2e suite [k8s.io] Sysctls [LinuxOnly] [NodeFeature:Sysctls] should reject invalid sysctls
Kubernetes e2e suite [k8s.io] Sysctls [LinuxOnly] [NodeFeature:Sysctls] should support sysctls
Kubernetes e2e suite [k8s.io] Sysctls [LinuxOnly] [NodeFeature:Sysctls] should support unsafe sysctls which are actually whitelisted
Kubernetes e2e suite [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance]
Kubernetes e2e suite [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance]
Kubernetes e2e suite [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance]
Kubernetes e2e suite [k8s.io] Variable Expansion should allow substituting values in a volume subpath [sig-storage][NodeFeature:VolumeSubpathEnvExpansion]
Kubernetes e2e suite [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance]
Kubernetes e2e suite [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]
Kubernetes e2e suite [k8s.io] [sig-node] PreStop graceful pod terminated should wait until preStop hook completes the process
Kubernetes e2e suite [k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance]
Kubernetes e2e suite [k8s.io] [sig-node] Security Context should support container.SecurityContext.RunAsUser And container.SecurityContext.RunAsGroup [LinuxOnly]
Kubernetes e2e suite [k8s.io] [sig-node] Security Context should support container.SecurityContext.RunAsUser [LinuxOnly]
Kubernetes e2e suite [k8s.io] [sig-node] Security Context should support pod.Spec.SecurityContext.RunAsUser And pod.Spec.SecurityContext.RunAsGroup [LinuxOnly]
Kubernetes e2e suite [k8s.io] [sig-node] Security Context should support pod.Spec.SecurityContext.RunAsUser [LinuxOnly]
Kubernetes e2e suite [k8s.io] [sig-node] Security Context should support pod.Spec.SecurityContext.SupplementalGroups [LinuxOnly]
Kubernetes e2e suite [k8s.io] [sig-node] kubelet [k8s.io] [sig-node] Clean up pods on node kubelet should be able to delete 10 pods per node in 1m0s.
Kubernetes e2e suite [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]
Kubernetes e2e suite [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]
Kubernetes e2e suite [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]
Kubernetes e2e suite [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]
Kubernetes e2e suite [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]
Kubernetes e2e suite [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]
Kubernetes e2e suite [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]
Kubernetes e2e suite [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]
Kubernetes e2e suite [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]
Kubernetes e2e suite [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance]
Kubernetes e2e suite [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance]
Kubernetes e2e suite [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]
Kubernetes e2e suite [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]
Kubernetes e2e suite [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]
Kubernetes e2e suite [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]
Kubernetes e2e suite [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]
Kubernetes e2e suite [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]
Kubernetes e2e suite [sig-api-machinery] Aggregator Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]
Kubernetes e2e suite [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]
Kubernetes e2e suite [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]
Kubernetes e2e suite [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance]
Kubernetes e2e suite [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance]
Kubernetes e2e suite [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works [Conformance]
Kubernetes e2e suite [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works [Conformance]
Kubernetes e2e suite [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance]
Kubernetes e2e suite [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance]
Kubernetes e2e suite [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance]
Kubernetes e2e suite [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance]
Kubernetes e2e suite [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance]
Kubernetes e2e suite [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance]
Kubernetes e2e suite [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance]
Kubernetes e2e suite [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]
Kubernetes e2e suite [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance]
Kubernetes e2e suite [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance]
Kubernetes e2e suite [sig-api-machinery] Discovery Custom resource should have storage version hash
Kubernetes e2e suite [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance]
Kubernetes e2e suite [sig-api-machinery] Garbage collector should delete jobs and pods created by cronjob
Kubernetes e2e suite [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance]
Kubernetes e2e suite [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
Kubernetes e2e suite [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance]
Kubernetes e2e suite [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
Kubernetes e2e suite [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
Kubernetes e2e suite [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance]
Kubernetes e2e suite [sig-api-machinery] Garbage collector should orphan pods created by rc if deleteOptions.OrphanDependents is nil
Kubernetes e2e suite [sig-api-machinery] Garbage collector should support cascading deletion of custom resources
Kubernetes e2e suite [sig-api-machinery] Garbage collector should support orphan deletion of custom resources
Kubernetes e2e suite [sig-api-machinery] Generated clientset should create pods, set the deletionTimestamp and deletionGracePeriodSeconds of the pod
Kubernetes e2e suite [sig-api-machinery] Generated clientset should create v1beta1 cronJobs, delete cronJobs, watch cronJobs
Kubernetes e2e suite [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. [Conformance]
Kubernetes e2e suite [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. [Conformance]
Kubernetes e2e suite [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a custom resource.
Kubernetes e2e suite [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a persistent volume claim with a storage class. [sig-storage]
Kubernetes e2e suite [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a persistent volume claim. [sig-storage]
Kubernetes e2e suite [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. [Conformance]
Kubernetes e2e suite [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. [Conformance]
Kubernetes e2e suite [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. [Conformance]
Kubernetes e2e suite [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. [Conformance]
Kubernetes e2e suite [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. [Conformance]
Kubernetes e2e suite [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]
Kubernetes e2e suite [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. [Conformance]
Kubernetes e2e suite [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. [Conformance]
Kubernetes e2e suite [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance]
Kubernetes e2e suite [sig-api-machinery] Servers with support for API chunking should return chunks of results for list calls
Kubernetes e2e suite [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance]
Kubernetes e2e suite [sig-api-machinery] Servers with support for Table transformation should return chunks of table results for list calls
Kubernetes e2e suite [sig-api-machinery] Servers with support for Table transformation should return generic metadata details across all namespaces for nodes
Kubernetes e2e suite [sig-api-machinery] Servers with support for Table transformation should return pod details
Kubernetes e2e suite [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance]
Kubernetes e2e suite [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance]
Kubernetes e2e suite [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance]
Kubernetes e2e suite [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
Kubernetes e2e suite [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance]
Kubernetes e2e suite [sig-apps] CronJob should delete successful/failed finished jobs with limit of one job
Kubernetes e2e suite [sig-apps] CronJob should not emit unexpected warnings
Kubernetes e2e suite [sig-apps] CronJob should remove from active list jobs that have been deleted
Kubernetes e2e suite [sig-apps] CronJob should replace jobs when ReplaceConcurrent
Kubernetes e2e suite [sig-apps] CronJob should schedule multiple jobs concurrently
Kubernetes e2e suite [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance]
Kubernetes e2e suite [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance]
Kubernetes e2e suite [sig-apps] Deployment deployment reaping should cascade to its replica sets and pods
Kubernetes e2e suite [sig-apps] Deployment deployment should delete old replica sets [Conformance]
Kubernetes e2e suite [sig-apps] Deployment deployment should support proportional scaling [Conformance]
Kubernetes e2e suite [sig-apps] Deployment deployment should support rollover [Conformance]
Kubernetes e2e suite [sig-apps] Deployment iterative rollouts should eventually progress
Kubernetes e2e suite [sig-apps] Deployment test Deployment ReplicaSet orphaning and adoption regarding controllerRef
Kubernetes e2e suite [sig-apps] DisruptionController evictions: enough pods, absolute => should allow an eviction
Kubernetes e2e suite [sig-apps] DisruptionController evictions: enough pods, replicaSet, percentage => should allow an eviction
Kubernetes e2e suite [sig-apps] DisruptionController evictions: maxUnavailable allow single eviction, percentage => should allow an eviction
Kubernetes e2e suite [sig-apps] DisruptionController evictions: maxUnavailable deny evictions, integer => should not allow an eviction
Kubernetes e2e suite [sig-apps] DisruptionController evictions: no PDB => should allow an eviction
Kubernetes e2e suite [sig-apps] DisruptionController evictions: too few pods, absolute => should not allow an eviction
Kubernetes e2e suite [sig-apps] DisruptionController evictions: too few pods, replicaSet, percentage => should not allow an eviction
Kubernetes e2e suite [sig-apps] DisruptionController should block an eviction until the PDB is updated to allow it
Kubernetes e2e suite [sig-apps] DisruptionController should create a PodDisruptionBudget
Kubernetes e2e suite [sig-apps] DisruptionController should update PodDisruptionBudget status
Kubernetes e2e suite [sig-apps] Job should delete a job [Conformance]
Kubernetes e2e suite [sig-apps] Job should fail to exceed backoffLimit
Kubernetes e2e suite [sig-apps] Job should fail when exceeds active deadline
Kubernetes e2e suite [sig-apps] Job should remove pods when job is deleted
Kubernetes e2e suite [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]
Kubernetes e2e suite [sig-apps] Job should run a job to completion when tasks sometimes fail and are not locally restarted
Kubernetes e2e suite [sig-apps] Job should run a job to completion when tasks succeed
Kubernetes e2e suite [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance]
Kubernetes e2e suite [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance]
Kubernetes e2e suite [sig-apps] ReplicaSet should surface a failure condition on a common issue like exceeded quota
Kubernetes e2e suite [sig-apps] ReplicationController should adopt matching pods on creation [Conformance]
Kubernetes e2e suite [sig-apps] ReplicationController should release no longer matching pods [Conformance]
Kubernetes e2e suite [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance]
Kubernetes e2e suite [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance]
Kubernetes e2e suite [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]
Kubernetes e2e suite [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance]
Kubernetes e2e suite [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should implement legacy replacement when the update strategy is OnDelete
Kubernetes e2e suite [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]
Kubernetes e2e suite [sig-auth] Certificates API should support building a client with a CSR
Kubernetes e2e suite [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance]
Kubernetes e2e suite [sig-auth] ServiceAccounts should ensure a single API token exists
Kubernetes e2e suite [sig-auth] ServiceAccounts should mount an API token into pods [Conformance]
Kubernetes e2e suite [sig-cli] Kubectl Port forwarding With a server listening on 0.0.0.0 should support forwarding over websockets
Kubernetes e2e suite [sig-cli] Kubectl Port forwarding With a server listening on 0.0.0.0 that expects NO client request should support a client that connects, sends DATA, and disconnects
Kubernetes e2e suite [sig-cli] Kubectl Port forwarding With a server listening on 0.0.0.0 that expects a client request should support a client that connects, sends DATA, and disconnects
Kubernetes e2e suite [sig-cli] Kubectl Port forwarding With a server listening on 0.0.0.0 that expects a client request should support a client that connects, sends NO DATA, and disconnects
Kubernetes e2e suite [sig-cli] Kubectl Port forwarding With a server listening on localhost should support forwarding over websockets
Kubernetes e2e suite [sig-cli] Kubectl Port forwarding With a server listening on localhost that expects NO client request should support a client that connects, sends DATA, and disconnects
Kubernetes e2e suite [sig-cli] Kubectl Port forwarding With a server listening on localhost that expects a client request should support a client that connects, sends DATA, and disconnects
Kubernetes e2e suite [sig-cli] Kubectl Port forwarding With a server listening on localhost that expects a client request should support a client that connects, sends NO DATA, and disconnects
Kubernetes e2e suite [sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]
Kubernetes e2e suite [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions [Conformance]
Kubernetes e2e suite [sig-cli] Kubectl client Kubectl apply apply set/view last-applied
Kubernetes e2e suite [sig-cli] Kubectl client Kubectl apply should apply a new configuration to an existing RC
Kubernetes e2e suite [sig-cli] Kubectl client Kubectl apply should reuse port when apply to an existing SVC
Kubernetes e2e suite [sig-cli] Kubectl client Kubectl client-side validation should create/apply a CR with unknown fields for CRD with no validation schema
Kubernetes e2e suite [sig-cli] Kubectl client Kubectl client-side validation should create/apply a valid CR for CRD with validation schema
Kubernetes e2e suite [sig-cli] Kubectl client Kubectl client-side validation should create/apply a valid CR with arbitrary-extra properties for CRD with partially-specified validation schema
Kubernetes e2e suite [sig-cli] Kubectl client Kubectl cluster-info dump should check if cluster-info dump succeeds
Kubernetes e2e suite [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance]
Kubernetes e2e suite [sig-cli] Kubectl client Kubectl copy should copy a file from a running Pod
Kubernetes e2e suite [sig-cli] Kubectl client Kubectl create quota should create a quota with scopes
Kubernetes e2e suite [sig-cli] Kubectl client Kubectl create quota should create a quota without scopes
Kubernetes e2e suite [sig-cli] Kubectl client Kubectl create quota should reject quota with invalid scopes
Kubernetes e2e suite [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance]
Kubernetes e2e suite [sig-cli] Kubectl client Kubectl expose should create services for rc [Conformance]
Kubernetes e2e suite [sig-cli] Kubectl client Kubectl get componentstatuses should get componentstatuses
Kubernetes e2e suite [sig-cli] Kubectl client Kubectl label should update the label on a resource [Conformance]
Kubernetes e2e suite [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]
Kubernetes e2e suite [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc [Conformance]
Kubernetes e2e suite [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image [Conformance]
Kubernetes e2e suite [sig-cli] Kubectl client Kubectl rolling-update should support rolling-update to same image [Conformance]
Kubernetes e2e suite [sig-cli] Kubectl client Kubectl run --rm job should create a job from an image, then delete the job [Conformance]
Kubernetes e2e suite [sig-cli] Kubectl client Kubectl run CronJob should create a CronJob
Kubernetes e2e suite [sig-cli] Kubectl client Kubectl run default should create an rc or deployment from an image [Conformance]
Kubernetes e2e suite [sig-cli] Kubectl client Kubectl run deployment should create a deployment from an image [Conformance]
Kubernetes e2e suite [sig-cli] Kubectl client Kubectl run job should create a job from an image when restart is OnFailure [Conformance]
Kubernetes e2e suite [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never [Conformance]
Kubernetes e2e suite [sig-cli] Kubectl client Kubectl run rc should create an rc from an image [Conformance]
Kubernetes e2e suite [sig-cli] Kubectl client Kubectl version should check is all data is printed [Conformance]
Kubernetes e2e suite [sig-cli] Kubectl client Proxy server should support --unix-socket=/path [Conformance]
Kubernetes e2e suite [sig-cli] Kubectl client Proxy server should support proxy with --port 0 [Conformance]
Kubernetes e2e suite [sig-cli] Kubectl client Simple pod should contain last line of the log
Kubernetes e2e suite [sig-cli] Kubectl client Simple pod should handle in-cluster config
Kubernetes e2e suite [sig-cli] Kubectl client Simple pod should return command exit codes
Kubernetes e2e suite [sig-cli] Kubectl client Simple pod should support exec
Kubernetes e2e suite [sig-cli] Kubectl client Simple pod should support exec through an HTTP proxy
Kubernetes e2e suite [sig-cli] Kubectl client Simple pod should support exec through kubectl proxy
Kubernetes e2e suite [sig-cli] Kubectl client Simple pod should support exec using resource/name
Kubernetes e2e suite [sig-cli] Kubectl client Simple pod should support inline execution and attach
Kubernetes e2e suite [sig-cli] Kubectl client Simple pod should support port-forward
Kubernetes e2e suite [sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]
Kubernetes e2e suite [sig-cli] Kubectl client Update Demo should do a rolling update of a replication controller [Conformance]
Kubernetes e2e suite [sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance]
Kubernetes e2e suite [sig-instrumentation] Cadvisor should be healthy on every node.
Kubernetes e2e suite [sig-instrumentation] MetricsGrabber should grab all metrics from API server.
Kubernetes e2e suite [sig-instrumentation] MetricsGrabber should grab all metrics from a ControllerManager.
Kubernetes e2e suite [sig-instrumentation] MetricsGrabber should grab all metrics from a Kubelet.
Kubernetes e2e suite [sig-instrumentation] MetricsGrabber should grab all metrics from a Scheduler.
Kubernetes e2e suite [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
Kubernetes e2e suite [sig-network] DNS should provide DNS for ExternalName services [Conformance]
Kubernetes e2e suite [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]
Kubernetes e2e suite [sig-network] DNS should provide DNS for pods for Subdomain [Conformance]
Kubernetes e2e suite [sig-network] DNS should provide DNS for services [Conformance]
Kubernetes e2e suite [sig-network] DNS should provide DNS for the cluster [Conformance]
Kubernetes e2e suite [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly]
Kubernetes e2e suite [sig-network] DNS should resolve DNS of partial qualified names for the cluster [LinuxOnly]
Kubernetes e2e suite [sig-network] DNS should support configurable pod DNS nameservers
Kubernetes e2e suite [sig-network] DNS should support configurable pod resolv.conf
Kubernetes e2e suite [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-network] Networking Granular Checks: Services should function for client IP based session affinity: http
Kubernetes e2e suite [sig-network] Networking Granular Checks: Services should function for client IP based session affinity: udp
Kubernetes e2e suite [sig-network] Networking Granular Checks: Services should function for endpoint-Service: http
Kubernetes e2e suite [sig-network] Networking Granular Checks: Services should function for endpoint-Service: udp
Kubernetes e2e suite [sig-network] Networking Granular Checks: Services should function for node-Service: http
Kubernetes e2e suite [sig-network] Networking Granular Checks: Services should function for node-Service: udp
Kubernetes e2e suite [sig-network] Networking Granular Checks: Services should function for pod-Service: http
Kubernetes e2e suite [sig-network] Networking Granular Checks: Services should function for pod-Service: udp
Kubernetes e2e suite [sig-network] Networking Granular Checks: Services should update endpoints: http
Kubernetes e2e suite [sig-network] Networking Granular Checks: Services should update endpoints: udp
Kubernetes e2e suite [sig-network] Networking should check kube-proxy urls
Kubernetes e2e suite [sig-network] Networking should provide unchanging, static URL paths for kubernetes api services
Kubernetes e2e suite [sig-network] Proxy version v1 should proxy logs on node using proxy subresource [Conformance]
Kubernetes e2e suite [sig-network] Proxy version v1 should proxy logs on node with explicit kubelet port using proxy subresource [Conformance]
Kubernetes e2e suite [sig-network] Proxy version v1 should proxy through a service and a pod [Conformance]
Kubernetes e2e suite [sig-network] Service endpoints latency should not be very high [Conformance]
Kubernetes e2e suite [sig-network] Services should allow pods to hairpin back to themselves through services
Kubernetes e2e suite [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance]
Kubernetes e2e suite [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]
Kubernetes e2e suite [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]
Kubernetes e2e suite [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance]
Kubernetes e2e suite [sig-network] Services should be able to create a functioning NodePort service [Conformance]
Kubernetes e2e suite [sig-network] Services should be able to switch session affinity for NodePort service
Kubernetes e2e suite [sig-network] Services should be able to switch session affinity for service with type clusterIP
Kubernetes e2e suite [sig-network] Services should be able to update NodePorts with two same port numbers but different protocols
Kubernetes e2e suite [sig-network] Services should be rejected when no endpoints exist
Kubernetes e2e suite [sig-network] Services should check NodePort out-of-range
Kubernetes e2e suite [sig-network] Services should create endpoints for unready pods
Kubernetes e2e suite [sig-network] Services should have session affinity work for NodePort service
Kubernetes e2e suite [sig-network] Services should have session affinity work for service with type clusterIP
Kubernetes e2e suite [sig-network] Services should preserve source pod IP for traffic thru service cluster IP [LinuxOnly]
Kubernetes e2e suite [sig-network] Services should prevent NodePort collisions
Kubernetes e2e suite [sig-network] Services should provide secure master service [Conformance]
Kubernetes e2e suite [sig-network] Services should release NodePorts on delete
Kubernetes e2e suite [sig-network] Services should serve a basic endpoint from pods [Conformance]
Kubernetes e2e suite [sig-network] Services should serve multiport endpoints from pods [Conformance]
Kubernetes e2e suite [sig-network] Services should use same NodePort with same port but different protocols
Kubernetes e2e suite [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance]
Kubernetes e2e suite [sig-node] ConfigMap should patch ConfigMap successfully
Kubernetes e2e suite [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-node] Downward API should provide host IP and pod IP as an env var if pod uses host network [LinuxOnly]
Kubernetes e2e suite [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-node] RuntimeClass should reject a Pod requesting a RuntimeClass with an unconfigured handler
Kubernetes e2e suite [sig-node] RuntimeClass should reject a Pod requesting a RuntimeClass with conflicting node selector
Kubernetes e2e suite [sig-node] RuntimeClass should reject a Pod requesting a deleted RuntimeClass
Kubernetes e2e suite [sig-node] RuntimeClass should reject a Pod requesting a non-existent RuntimeClass
Kubernetes e2e suite [sig-node] RuntimeClass should run a Pod requesting a RuntimeClass with a configured handler [NodeFeature:RuntimeHandler]
Kubernetes e2e suite [sig-node] RuntimeClass should run a Pod requesting a RuntimeClass with scheduling [NodeFeature:RuntimeHandler]
Kubernetes e2e suite [sig-scheduling] LimitRange should create a LimitRange with defaults and ensure pod has those defaults applied.
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand Verify if offline PVC expansion works
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand should resize volume when PVC is edited while pod is using it
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)] volume-expand should not allow expansion of pvcs without AllowVolumeExpansion property
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)] volumes should store data
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand Verify if offline PVC expansion works
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand should resize volume when PVC is edited while pod is using it
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with pvc data source
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should be able to unmount after the subpath directory is deleted
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should support existing directory
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should support non-existent path
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] volume-expand should not allow expansion of pvcs without AllowVolumeExpansion property
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] volumes should store data
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: inline ephemeral CSI volume] ephemeral should create read-only inline ephemeral volume
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: inline ephemeral CSI volume] ephemeral should create read/write inline ephemeral volume
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: inline ephemeral CSI volume] ephemeral should support multiple inline ephemeral volumes
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: inline ephemeral CSI volume] ephemeral should support two pods which share the same volume
Kubernetes e2e suite [sig-storage] CSI mock volume CSI Volume expansion should expand volume by restarting pod if attach=off, nodeExpansion=on
Kubernetes e2e suite [sig-storage] CSI mock volume CSI Volume expansion should expand volume by restarting pod if attach=on, nodeExpansion=on
Kubernetes e2e suite [sig-storage] CSI mock volume CSI Volume expansion should expand volume without restarting pod if nodeExpansion=off
Kubernetes e2e suite [sig-storage] CSI mock volume CSI Volume expansion should not expand volume if resizingOnDriver=off, resizingOnSC=on
Kubernetes e2e suite [sig-storage] CSI mock volume CSI attach test using mock driver should not require VolumeAttach for drivers without attachment
Kubernetes e2e suite [sig-storage] CSI mock volume CSI attach test using mock driver should preserve attachment policy when no CSIDriver present
Kubernetes e2e suite [sig-storage] CSI mock volume CSI attach test using mock driver should require VolumeAttach for drivers with attachment
Kubernetes e2e suite [sig-storage] CSI mock volume CSI online volume expansion should expand volume without restarting pod if attach=off, nodeExpansion=on
Kubernetes e2e suite [sig-storage] CSI mock volume CSI online volume expansion should expand volume without restarting pod if attach=on, nodeExpansion=on
Kubernetes e2e suite [sig-storage] CSI mock volume CSI workload information using mock driver contain ephemeral=true when using inline volume
Kubernetes e2e suite [sig-storage] CSI mock volume CSI workload information using mock driver should be passed when podInfoOnMount=true
Kubernetes e2e suite [sig-storage] CSI mock volume CSI workload information using mock driver should not be passed when CSIDriver does not exist
Kubernetes e2e suite [sig-storage] CSI mock volume CSI workload information using mock driver should not be passed when podInfoOnMount=false
Kubernetes e2e suite [sig-storage] CSI mock volume CSI workload information using mock driver should not be passed when podInfoOnMount=nil
Kubernetes e2e suite [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] ConfigMap should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] ConfigMap should be consumable from pods in volume as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup]
Kubernetes e2e suite [sig-storage] ConfigMap should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeFeature:FSGroup]
Kubernetes e2e suite [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup]
Kubernetes e2e suite [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] Downward API volume should provide podname as non-root with fsgroup [LinuxOnly] [NodeFeature:FSGroup]
Kubernetes e2e suite [sig-storage] Downward API volume should provide podname as non-root with fsgroup and defaultMode [LinuxOnly] [NodeFeature:FSGroup]
Kubernetes e2e suite [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] Dynamic Provisioning DynamicProvisioner allowedTopologies should create persistent volume in the zone specified in allowedTopologies of storageclass
Kubernetes e2e suite [sig-storage] Dynamic Provisioning [k8s.io] GlusterDynamicProvisioner should create and delete persistent volumes [fast]
Kubernetes e2e suite [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance]
Kubernetes e2e suite [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] EmptyDir volumes when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] files with FSGroup ownership should support (root,0644,tmpfs)
Kubernetes e2e suite [sig-storage] EmptyDir volumes when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] new files should be created with FSGroup ownership when container is non-root
Kubernetes e2e suite [sig-storage] EmptyDir volumes when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] new files should be created with FSGroup ownership when container is root
Kubernetes e2e suite [sig-storage] EmptyDir volumes when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] nonexistent volume subPath should have the correct mode and owner using FSGroup
Kubernetes e2e suite [sig-storage] EmptyDir volumes when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] volume on default medium should have the correct mode using FSGroup
Kubernetes e2e suite [sig-storage] EmptyDir volumes when FSGroup is specified [LinuxOnly] [NodeFeature:FSGroup] volume on tmpfs should have the correct mode using FSGroup
Kubernetes e2e suite [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance]
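The EmptyDir cases above all create a pod of roughly one shape: a container writing into an emptyDir volume, on the default medium or tmpfs, optionally with an fsGroup set. As a minimal Go sketch using the k8s.io/api types the suite itself is built on (the pod/volume names and the fsGroup value are hypothetical, not taken from the suite; the busybox:1.29 image matches the one the run used):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	fsGroup := int64(1234) // hypothetical, as in the [NodeFeature:FSGroup] cases
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "emptydir-demo"},
		Spec: corev1.PodSpec{
			SecurityContext: &corev1.PodSecurityContext{FSGroup: &fsGroup},
			Containers: []corev1.Container{{
				Name:         "test",
				Image:        "busybox:1.29",
				Command:      []string{"sh", "-c", "ls -l /mnt && sleep 3600"},
				VolumeMounts: []corev1.VolumeMount{{Name: "scratch", MountPath: "/mnt"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "scratch",
				VolumeSource: corev1.VolumeSource{
					// Medium: Memory is the "tmpfs" variant; leave it unset
					// for the "default" medium variant of the same cases.
					EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumMemory},
				},
			}},
		},
	}
	fmt.Printf("%+v\n", pod.Spec.Volumes[0])
}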
Kubernetes e2e suite [sig-storage] Ephemeralstorage When pod refers to non-existent ephemeral storage should allow deletion of pod with invalid volume : configmap
Kubernetes e2e suite [sig-storage] Ephemeralstorage When pod refers to non-existent ephemeral storage should allow deletion of pod with invalid volume : projected
Kubernetes e2e suite [sig-storage] Ephemeralstorage When pod refers to non-existent ephemeral storage should allow deletion of pod with invalid volume : secret
Kubernetes e2e suite [sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] HostPath should support r/w [NodeConformance]
Kubernetes e2e suite [sig-storage] HostPath should support subPath [NodeConformance]
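The HostPath cases differ from the EmptyDir sketch above only in the volume source. A minimal sketch of that source, with a hypothetical path and type:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// DirectoryOrCreate tells the kubelet to create the host directory if absent.
	hostPathType := corev1.HostPathDirectoryOrCreate
	vol := corev1.Volume{
		Name: "host-dir", // hypothetical volume name
		VolumeSource: corev1.VolumeSource{
			HostPath: &corev1.HostPathVolumeSource{
				Path: "/tmp/hostpath-demo", // hypothetical host path
				Type: &hostPathType,
			},
		},
	}
	fmt.Printf("%s -> %s\n", vol.Name, vol.VolumeSource.HostPath.Path)
}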
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Inline-volume (default fs)] subPath should be able to unmount after the subpath directory is deleted
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Inline-volume (default fs)] subPath should support existing directory
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Inline-volume (default fs)] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Inline-volume (default fs)] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Inline-volume (default fs)] subPath should support non-existent path
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Inline-volume (default fs)] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Inline-volume (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Inline-volume (default fs)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: emptydir] [Testpattern: Inline-volume (default fs)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Inline-volume (default fs)] subPath should be able to unmount after the subpath directory is deleted
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Inline-volume (default fs)] subPath should support existing directory
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Inline-volume (default fs)] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Inline-volume (default fs)] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Inline-volume (default fs)] subPath should support non-existent path
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Inline-volume (default fs)] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Inline-volume (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPathSymlink] [Testpattern: Inline-volume (default fs)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Inline-volume (default fs)] subPath should be able to unmount after the subpath directory is deleted
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Inline-volume (default fs)] subPath should support existing directory
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Inline-volume (default fs)] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Inline-volume (default fs)] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Inline-volume (default fs)] subPath should support non-existent path
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Inline-volume (default fs)] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Inline-volume (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: hostPath] [Testpattern: Inline-volume (default fs)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (block volmode)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (default fs)] subPath should be able to unmount after the subpath directory is deleted
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directory
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (default fs)] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (default fs)] subPath should support non-existent path
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (default fs)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (default fs)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (ext3)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (ext3)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (ext4)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: block] [Testpattern: Pre-provisioned PV (ext4)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should be able to unmount after the subpath directory is deleted
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directory
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support non-existent path
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (default fs)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: blockfs] [Testpattern: Pre-provisioned PV (default fs)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should be able to unmount after the subpath directory is deleted
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directory
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support non-existent path
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should be able to unmount after the subpath directory is deleted
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directory
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support non-existent path
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link-bindmounted] [Testpattern: Pre-provisioned PV (default fs)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Pre-provisioned PV (default fs)] subPath should be able to unmount after the subpath directory is deleted
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directory
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Pre-provisioned PV (default fs)] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Pre-provisioned PV (default fs)] subPath should support non-existent path
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Pre-provisioned PV (default fs)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir-link] [Testpattern: Pre-provisioned PV (default fs)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Pre-provisioned PV (default fs)] subPath should be able to unmount after the subpath directory is deleted
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directory
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Pre-provisioned PV (default fs)] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Pre-provisioned PV (default fs)] subPath should support non-existent path
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Pre-provisioned PV (default fs)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: dir] [Testpattern: Pre-provisioned PV (default fs)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should be able to unmount after the subpath directory is deleted
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directory
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support non-existent path
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (default fs)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: local][LocalVolumeType: tmpfs] [Testpattern: Pre-provisioned PV (default fs)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: nfs] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with mount options
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: nfs] [Testpattern: Dynamic PV (default fs)] subPath should be able to unmount after the subpath directory is deleted
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: nfs] [Testpattern: Dynamic PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: nfs] [Testpattern: Dynamic PV (default fs)] subPath should support existing directory
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: nfs] [Testpattern: Dynamic PV (default fs)] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: nfs] [Testpattern: Dynamic PV (default fs)] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: nfs] [Testpattern: Dynamic PV (default fs)] subPath should support non-existent path
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: nfs] [Testpattern: Dynamic PV (default fs)] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: nfs] [Testpattern: Dynamic PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: nfs] [Testpattern: Dynamic PV (default fs)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: nfs] [Testpattern: Dynamic PV (default fs)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: nfs] [Testpattern: Inline-volume (default fs)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: nfs] [Testpattern: Inline-volume (default fs)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: nfs] [Testpattern: Pre-provisioned PV (default fs)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: nfs] [Testpattern: Pre-provisioned PV (default fs)] volumes should store data
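The subPath cases repeated across every driver above (emptydir, hostPath, hostPathSymlink, local, nfs) all hinge on the same two fields of the container's volumeMount: SubPath selects a path inside the volume, and ReadOnly covers the "readOnly directory/file specified in the volumeMount" variants. A minimal sketch with hypothetical names:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	mount := corev1.VolumeMount{
		Name:      "data",      // hypothetical volume name
		MountPath: "/data/sub", // where the subdirectory appears in the container
		SubPath:   "subdir",    // path inside the volume, relative to its root
		ReadOnly:  true,
	}
	fmt.Printf("%+v\n", mount)
}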
Kubernetes e2e suite [sig-storage] PV Protection Verify "immediate" deletion of a PV that is not bound to a PVC
Kubernetes e2e suite [sig-storage] PV Protection Verify that PV bound to a PVC is not removed immediately
Kubernetes e2e suite [sig-storage] PersistentVolumes NFS when invoking the Recycle reclaim policy should test that a PV becomes Available and is clean after the PVC is deleted.
Kubernetes e2e suite [sig-storage] PersistentVolumes NFS with Single PV - PVC pairs create a PV and a pre-bound PVC: test write access
Kubernetes e2e suite [sig-storage] PersistentVolumes NFS with Single PV - PVC pairs create a PVC and a pre-bound PV: test write access
Kubernetes e2e suite [sig-storage] PersistentVolumes NFS with Single PV - PVC pairs create a PVC and non-pre-bound PV: test write access
Kubernetes e2e suite [sig-storage] PersistentVolumes NFS with Single PV - PVC pairs should create a non-pre-bound PV and PVC: test write access
Kubernetes e2e suite [sig-storage] PersistentVolumes NFS with multiple PVs and PVCs all in same ns should create 2 PVs and 4 PVCs: test write access
Kubernetes e2e suite [sig-storage] PersistentVolumes NFS with multiple PVs and PVCs all in same ns should create 3 PVs and 3 PVCs: test write access
Kubernetes e2e suite [sig-storage] PersistentVolumes-local Pod with node different from PV's NodeAffinity should fail scheduling due to different NodeAffinity
Kubernetes e2e suite [sig-storage] PersistentVolumes-local Pod with node different from PV's NodeAffinity should fail scheduling due to different NodeSelector
Kubernetes e2e suite [sig-storage] PersistentVolumes-local [Volume type: block] One pod requesting one prebound PVC should be able to mount volume and read from pod1
Kubernetes e2e suite [sig-storage] PersistentVolumes-local [Volume type: block] One pod requesting one prebound PVC should be able to mount volume and write from pod1
Kubernetes e2e suite [sig-storage] PersistentVolumes-local [Volume type: block] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2
Kubernetes e2e suite [sig-storage] PersistentVolumes-local [Volume type: block] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2
Kubernetes e2e suite [sig-storage] PersistentVolumes-local [Volume type: blockfswithformat] One pod requesting one prebound PVC should be able to mount volume and read from pod1
Kubernetes e2e suite [sig-storage] PersistentVolumes-local [Volume type: blockfswithformat] One pod requesting one prebound PVC should be able to mount volume and write from pod1
Kubernetes e2e suite [sig-storage] PersistentVolumes-local [Volume type: blockfswithformat] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2
Kubernetes e2e suite [sig-storage] PersistentVolumes-local [Volume type: blockfswithformat] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2
Kubernetes e2e suite [sig-storage] PersistentVolumes-local [Volume type: blockfswithoutformat] One pod requesting one prebound PVC should be able to mount volume and read from pod1
Kubernetes e2e suite [sig-storage] PersistentVolumes-local [Volume type: blockfswithoutformat] One pod requesting one prebound PVC should be able to mount volume and write from pod1
Kubernetes e2e suite [sig-storage] PersistentVolumes-local [Volume type: blockfswithoutformat] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2
Kubernetes e2e suite [sig-storage] PersistentVolumes-local [Volume type: blockfswithoutformat] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2
Kubernetes e2e suite [sig-storage] PersistentVolumes-local [Volume type: dir-bindmounted] One pod requesting one prebound PVC should be able to mount volume and read from pod1
Kubernetes e2e suite [sig-storage] PersistentVolumes-local [Volume type: dir-bindmounted] One pod requesting one prebound PVC should be able to mount volume and write from pod1
Kubernetes e2e suite [sig-storage] PersistentVolumes-local [Volume type: dir-bindmounted] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2
Kubernetes e2e suite [sig-storage] PersistentVolumes-local [Volume type: dir-bindmounted] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2
Kubernetes e2e suite [sig-storage] PersistentVolumes-local [Volume type: dir-link-bindmounted] One pod requesting one prebound PVC should be able to mount volume and read from pod1
Kubernetes e2e suite [sig-storage] PersistentVolumes-local [Volume type: dir-link-bindmounted] One pod requesting one prebound PVC should be able to mount volume and write from pod1
Kubernetes e2e suite [sig-storage] PersistentVolumes-local [Volume type: dir-link-bindmounted] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2
Kubernetes e2e suite [sig-storage] PersistentVolumes-local [Volume type: dir-link-bindmounted] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2
Kubernetes e2e suite [sig-storage] PersistentVolumes-local [Volume type: dir-link] One pod requesting one prebound PVC should be able to mount volume and read from pod1
Kubernetes e2e suite [sig-storage] PersistentVolumes-local [Volume type: dir-link] One pod requesting one prebound PVC should be able to mount volume and write from pod1
Kubernetes e2e suite [sig-storage] PersistentVolumes-local [Volume type: dir-link] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2
Kubernetes e2e suite [sig-storage] PersistentVolumes-local [Volume type: dir-link] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2
Kubernetes e2e suite [sig-storage] PersistentVolumes-local [Volume type: dir] One pod requesting one prebound PVC should be able to mount volume and read from pod1
Kubernetes e2e suite [sig-storage] PersistentVolumes-local [Volume type: dir] One pod requesting one prebound PVC should be able to mount volume and write from pod1
Kubernetes e2e suite [sig-storage] PersistentVolumes-local [Volume type: dir] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2
Kubernetes e2e suite [sig-storage] PersistentVolumes-local [Volume type: dir] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2
Kubernetes e2e suite [sig-storage] PersistentVolumes-local [Volume type: tmpfs] One pod requesting one prebound PVC should be able to mount volume and read from pod1
Kubernetes e2e suite [sig-storage] PersistentVolumes-local [Volume type: tmpfs] One pod requesting one prebound PVC should be able to mount volume and write from pod1
Kubernetes e2e suite [sig-storage] PersistentVolumes-local [Volume type: tmpfs] Two pods mounting a local volume at the same time should be able to write from pod1 and read from pod2
Kubernetes e2e suite [sig-storage] PersistentVolumes-local [Volume type: tmpfs] Two pods mounting a local volume one after the other should be able to write from pod1 and read from pod2
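The PersistentVolumes-local cases above pre-provision a PV backed by a path on one node and pin it there with volume node affinity; the "should fail scheduling due to different NodeAffinity/NodeSelector" cases are the ones that deliberately violate that pin. A minimal sketch of such a PV, with hypothetical name, path, and node:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pv := &corev1.PersistentVolume{
		ObjectMeta: metav1.ObjectMeta{Name: "local-pv-demo"},
		Spec: corev1.PersistentVolumeSpec{
			Capacity: corev1.ResourceList{
				corev1.ResourceStorage: resource.MustParse("1Gi"),
			},
			AccessModes: []corev1.PersistentVolumeAccessMode{corev1.ReadWriteOnce},
			PersistentVolumeSource: corev1.PersistentVolumeSource{
				Local: &corev1.LocalVolumeSource{Path: "/mnt/disks/vol1"}, // hypothetical path
			},
			// Pins the PV to one node; pods using it must land there.
			NodeAffinity: &corev1.VolumeNodeAffinity{
				Required: &corev1.NodeSelector{
					NodeSelectorTerms: []corev1.NodeSelectorTerm{{
						MatchExpressions: []corev1.NodeSelectorRequirement{{
							Key:      "kubernetes.io/hostname",
							Operator: corev1.NodeSelectorOpIn,
							Values:   []string{"node-1"}, // hypothetical node name
						}},
					}},
				},
			},
		},
	}
	fmt.Println(pv.Name, pv.Spec.NodeAffinity.Required.NodeSelectorTerms[0].MatchExpressions[0].Values)
}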
Kubernetes e2e suite [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] Projected configMap should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] Projected configMap should be consumable from pods in volume as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup]
Kubernetes e2e suite [sig-storage] Projected configMap should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeFeature:FSGroup]
Kubernetes e2e suite [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root with FSGroup [LinuxOnly] [NodeFeature:FSGroup]
Kubernetes e2e suite [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] Projected downwardAPI should provide podname as non-root with fsgroup [LinuxOnly] [NodeFeature:FSGroup]
Kubernetes e2e suite [sig-storage] Projected downwardAPI should provide podname as non-root with fsgroup and defaultMode [LinuxOnly] [NodeFeature:FSGroup]
Kubernetes e2e suite [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] Projected secret should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance]
Kubernetes e2e suite [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
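The Projected cases above (and the "project all components" combined case in particular) exercise a single volume that merges configMap, secret, and downwardAPI sources. A minimal sketch of that volume source, with hypothetical object names and mode:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	mode := int32(0444) // hypothetical defaultMode, as in the defaultMode variants
	vol := corev1.Volume{
		Name: "all-in-one", // hypothetical volume name
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				DefaultMode: &mode,
				Sources: []corev1.VolumeProjection{
					{ConfigMap: &corev1.ConfigMapProjection{
						LocalObjectReference: corev1.LocalObjectReference{Name: "demo-cm"},
					}},
					{Secret: &corev1.SecretProjection{
						LocalObjectReference: corev1.LocalObjectReference{Name: "demo-secret"},
					}},
					{DownwardAPI: &corev1.DownwardAPIProjection{
						Items: []corev1.DownwardAPIVolumeFile{{
							Path:     "podname", // the "should provide podname" cases read a file like this
							FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.name"},
						}},
					}},
				},
			},
		},
	}
	fmt.Printf("%d projected sources\n", len(vol.VolumeSource.Projected.Sources))
}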
Kubernetes e2e suite [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
Kubernetes e2e suite [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance]
Kubernetes e2e suite [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
Kubernetes e2e suite [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance]
Kubernetes e2e suite [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance]
Kubernetes e2e suite [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance]
Kubernetes e2e suite [sig-storage] Volumes ConfigMap should be mountable
TearDown
TearDown Previous
Timeout
Up
kubectl version
list nodes
test setup
Kubernetes e2e suite Recreate [Feature:Recreate] recreate nodes and ensure they function upon restart
Kubernetes e2e suite [k8s.io] Cluster size autoscaler scalability [Slow] CA ignores unschedulable pods while scheduling schedulable pods [Feature:ClusterAutoscalerScalability6]
Kubernetes e2e suite [k8s.io] Cluster size autoscaler scalability [Slow] should scale down empty nodes [Feature:ClusterAutoscalerScalability3]
Kubernetes e2e suite [k8s.io] Cluster size autoscaler scalability [Slow] should scale down underutilized nodes [Feature:ClusterAutoscalerScalability4]
Kubernetes e2e suite [k8s.io] Cluster size autoscaler scalability [Slow] should scale up at all [Feature:ClusterAutoscalerScalability1]
Kubernetes e2e suite [k8s.io] Cluster size autoscaler scalability [Slow] should scale up twice [Feature:ClusterAutoscalerScalability2]
Kubernetes e2e suite [k8s.io] Cluster size autoscaler scalability [Slow] shouldn't scale down with underutilized nodes due to host port conflicts [Feature:ClusterAutoscalerScalability5]
Kubernetes e2e suite [k8s.io] Downward API [Serial] [Disruptive] [NodeFeature:EphemeralStorage] Downward API tests for local ephemeral storage should provide container's limits.ephemeral-storage and requests.ephemeral-storage as env vars
Kubernetes e2e suite [k8s.io] Downward API [Serial] [Disruptive] [NodeFeature:EphemeralStorage] Downward API tests for local ephemeral storage should provide default limits.ephemeral-storage from node allocatable
Kubernetes e2e suite [k8s.io] GKE local SSD [Feature:GKELocalSSD] should write and read from node local SSD [Feature:GKELocalSSD]
Kubernetes e2e suite [k8s.io] GKE node pools [Feature:GKENodePool] should create a cluster with multiple node pools [Feature:GKENodePool]
Kubernetes e2e suite [k8s.io] Pods should cap back-off at MaxContainerBackOff [Slow][NodeConformance]
Kubernetes e2e suite [k8s.io] Pods should have their auto-restart back-off timer reset on image update [Slow][NodeConformance]
Kubernetes e2e suite [k8s.io] Probing container should be restarted with a docker exec liveness probe with timeout
Kubernetes e2e suite [k8s.io] Variable Expansion should fail substituting values in a volume subpath with absolute path [sig-storage][NodeFeature:VolumeSubpathEnvExpansion][Slow]
Kubernetes e2e suite [k8s.io] Variable Expansion should fail substituting values in a volume subpath with backticks [sig-storage][NodeFeature:VolumeSubpathEnvExpansion][Slow]
Kubernetes e2e suite [k8s.io] Variable Expansion should not change the subpath mount on a container restart if the environment variable changes [sig-storage][NodeFeature:VolumeSubpathEnvExpansion][Slow]
Kubernetes e2e suite [k8s.io] Variable Expansion should succeed in writing subpaths in container [sig-storage][NodeFeature:VolumeSubpathEnvExpansion][Slow]
Kubernetes e2e suite [k8s.io] Variable Expansion should verify that a failing subpath expansion can be modified during the lifecycle of a container [sig-storage][NodeFeature:VolumeSubpathEnvExpansion][Slow]
Kubernetes e2e suite [k8s.io] [Feature:Example] [k8s.io] Downward API should create a pod that prints his name and namespace
Kubernetes e2e suite [k8s.io] [Feature:Example] [k8s.io] Liveness liveness pods should be automatically restarted
Kubernetes e2e suite [k8s.io] [Feature:Example] [k8s.io] Secret should create a pod that reads a secret
Kubernetes e2e suite [k8s.io] [Feature:TTLAfterFinished][NodeAlphaFeature:TTLAfterFinished] job should be deleted once it finishes after TTL seconds
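The TTLAfterFinished case above (alpha at this release, hence the [NodeAlphaFeature] tag) drives one field on the Job spec: once the Job finishes, the TTL controller deletes it after the given number of seconds. A minimal sketch with hypothetical name and TTL:

package main

import (
	"fmt"

	batchv1 "k8s.io/api/batch/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	ttl := int32(60) // hypothetical TTL in seconds
	job := &batchv1.Job{
		ObjectMeta: metav1.ObjectMeta{Name: "ttl-demo"},
		Spec: batchv1.JobSpec{
			TTLSecondsAfterFinished: &ttl,
			Template: corev1.PodTemplateSpec{
				Spec: corev1.PodSpec{
					RestartPolicy: corev1.RestartPolicyNever,
					Containers: []corev1.Container{{
						Name:    "work",
						Image:   "busybox:1.29",
						Command: []string{"true"}, // exits immediately, so the TTL clock starts
					}},
				},
			},
		},
	}
	fmt.Println(job.Name, *job.Spec.TTLSecondsAfterFinished)
}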
Kubernetes e2e suite [k8s.io] [sig-cloud-provider] [Feature:CloudProvider][Disruptive] Nodes should be deleted on API server if it doesn't exist in the cloud provider
Kubernetes e2e suite [k8s.io] [sig-node] AppArmor load AppArmor profiles can disable an AppArmor profile, using unconfined
Kubernetes e2e suite [k8s.io] [sig-node] AppArmor load AppArmor profiles should enforce an AppArmor profile
Kubernetes e2e suite [k8s.io] [sig-node] Kubelet [Serial] [Slow] [k8s.io] [sig-node] experimental resource usage tracking [Feature:ExperimentalResourceUsageTracking] resource tracking for 100 pods per node
Kubernetes e2e suite [k8s.io] [sig-node] Kubelet [Serial] [Slow] [k8s.io] [sig-node] regular resource usage tracking [Feature:RegularResourceUsageTracking] resource tracking for 0 pods per node
Kubernetes e2e suite [k8s.io] [sig-node] Kubelet [Serial] [Slow] [k8s.io] [sig-node] regular resource usage tracking [Feature:RegularResourceUsageTracking] resource tracking for 100 pods per node
Kubernetes e2e suite [k8s.io] [sig-node] Mount propagation should propagate mounts to the host
Kubernetes e2e suite [k8s.io] [sig-node] NodeProblemDetector [DisabledForLargeClusters] should run without error
Kubernetes e2e suite [k8s.io] [sig-node] Pod garbage collector [Feature:PodGarbageCollector] [Slow] should handle the creation of 1000 pods
Kubernetes e2e suite [k8s.io] [sig-node] SSH should SSH to all nodes and run commands
Kubernetes e2e suite [k8s.io] [sig-node] Security Context should support seccomp alpha runtime/default annotation [Feature:Seccomp] [LinuxOnly]
Kubernetes e2e suite [k8s.io] [sig-node] Security Context should support seccomp alpha unconfined annotation on the container [Feature:Seccomp] [LinuxOnly]
Kubernetes e2e suite [k8s.io] [sig-node] Security Context should support seccomp alpha unconfined annotation on the pod [Feature:Seccomp] [LinuxOnly]
Kubernetes e2e suite [k8s.io] [sig-node] Security Context should support seccomp default which is unconfined [Feature:Seccomp] [LinuxOnly]
Kubernetes e2e suite [k8s.io] [sig-node] Security Context should support volume SELinux relabeling [Flaky] [LinuxOnly]
Kubernetes e2e suite [k8s.io] [sig-node] Security Context should support volume SELinux relabeling when using hostIPC [Flaky] [LinuxOnly]
Kubernetes e2e suite [k8s.io] [sig-node] Security Context should support volume SELinux relabeling when using hostPID [Flaky] [LinuxOnly]
Kubernetes e2e suite [k8s.io] [sig-node] crictl should be able to run crictl on the node
Kubernetes e2e suite [k8s.io] [sig-node] kubelet [k8s.io] [sig-node] host cleanup with volume mounts [sig-storage][HostCleanup][Flaky] Host cleanup after disrupting NFS volume [NFS] after stopping the nfs-server and deleting the (active) client pod, the NFS mount and the pod's UID directory should be removed.
Kubernetes e2e suite [k8s.io] [sig-node] kubelet [k8s.io] [sig-node] host cleanup with volume mounts [sig-storage][HostCleanup][Flaky] Host cleanup after disrupting NFS volume [NFS] after stopping the nfs-server and deleting the (sleeping) client pod, the NFS mount and the pod's UID directory should be removed.
Kubernetes e2e suite [sig-api-machinery] Etcd failure [Disruptive] should recover from SIGKILL
Kubernetes e2e suite [sig-api-machinery] Etcd failure [Disruptive] should recover from network partition with master
Kubernetes e2e suite [sig-api-machinery] Namespaces [Serial] should always delete fast (ALL of 100 namespaces in 150 seconds) [Feature:ComprehensiveNamespaceDraining]
Kubernetes e2e suite [sig-api-machinery] Namespaces [Serial] should delete fast enough (90 percent of 100 namespaces in 150 seconds)
Kubernetes e2e suite [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance]
Kubernetes e2e suite [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]
Kubernetes e2e suite [sig-api-machinery] ResourceQuota [Feature:PodPriority] should verify ResourceQuota's multiple priority class scope (quota set to pod count: 2) against 2 pods with same priority classes.
Kubernetes e2e suite [sig-api-machinery] ResourceQuota [Feature:PodPriority] should verify ResourceQuota's priority class scope (cpu, memory quota set) against a pod with same priority class.
Kubernetes e2e suite [sig-api-machinery] ResourceQuota [Feature:PodPriority] should verify ResourceQuota's priority class scope (quota set to pod count: 1) against 2 pods with different priority class.
Kubernetes e2e suite [sig-api-machinery] ResourceQuota [Feature:PodPriority] should verify ResourceQuota's priority class scope (quota set to pod count: 1) against 2 pods with same priority class.
Kubernetes e2e suite [sig-api-machinery] ResourceQuota [Feature:PodPriority] should verify ResourceQuota's priority class scope (quota set to pod count: 1) against a pod with different priority class (ScopeSelectorOpExists).
Kubernetes e2e suite [sig-api-machinery] ResourceQuota [Feature:PodPriority] should verify ResourceQuota's priority class scope (quota set to pod count: 1) against a pod with different priority class (ScopeSelectorOpNotIn).
Kubernetes e2e suite [sig-api-machinery] ResourceQuota [Feature:PodPriority] should verify ResourceQuota's priority class scope (quota set to pod count: 1) against a pod with same priority class.
Kubernetes e2e suite [sig-api-machinery] ResourceQuota [Feature:ScopeSelectors] should verify ResourceQuota with best effort scope using scope-selectors.
Kubernetes e2e suite [sig-api-machinery] ResourceQuota [Feature:ScopeSelectors] should verify ResourceQuota with terminating scopes through scope selectors.
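The ResourceQuota [Feature:PodPriority] and [Feature:ScopeSelectors] cases above verify quotas that apply only to pods matching a scope selector, e.g. a given priority class. A minimal sketch of the "quota set to pod count: 2 against same priority class" shape, with hypothetical names:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	quota := &corev1.ResourceQuota{
		ObjectMeta: metav1.ObjectMeta{Name: "quota-priority-demo"},
		Spec: corev1.ResourceQuotaSpec{
			Hard: corev1.ResourceList{
				corev1.ResourcePods: resource.MustParse("2"),
			},
			// Only pods whose priorityClassName is "high" count against this quota.
			ScopeSelector: &corev1.ScopeSelector{
				MatchExpressions: []corev1.ScopedResourceSelectorRequirement{{
					ScopeName: corev1.ResourceQuotaScopePriorityClass,
					Operator:  corev1.ScopeSelectorOpIn,
					Values:    []string{"high"}, // hypothetical priority class name
				}},
			},
		},
	}
	hard := quota.Spec.Hard[corev1.ResourcePods]
	fmt.Println(quota.Name, hard.String())
}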
Kubernetes e2e suite [sig-api-machinery] Servers with support for API chunking should support continue listing from the last key if the original version has been compacted away, though the list is inconsistent [Slow]
Kubernetes e2e suite [sig-apps] CronJob should not schedule jobs when suspended [Slow]
Kubernetes e2e suite [sig-apps] CronJob should not schedule new jobs when ForbidConcurrent [Slow]
Kubernetes e2e suite [sig-apps] Daemon set [Serial] should not update pod when spec was updated and update strategy is OnDelete
Kubernetes e2e suite [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]
Kubernetes e2e suite [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]
Kubernetes e2e suite [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance]
Kubernetes e2e suite [sig-apps] Daemon set [Serial] should run and stop complex daemon with node affinity
Kubernetes e2e suite [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance]
Kubernetes e2e suite [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
Kubernetes e2e suite [sig-apps] DaemonRestart [Disruptive] Controller Manager should not create/delete replicas across restart
Kubernetes e2e suite [sig-apps] DaemonRestart [Disruptive] Kubelet should not restart containers across restart
Kubernetes e2e suite [sig-apps] DaemonRestart [Disruptive] Scheduler should continue assigning pods to nodes across restart
Kubernetes e2e suite [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance]
Kubernetes e2e suite [sig-apps] Network Partition [Disruptive] [Slow] [k8s.io] Pods should be evicted from unready Node [Feature:TaintEviction] All pods on the unreachable node should be marked as NotReady upon the node turn NotReady AND all pods should be evicted after eviction timeout passes
Kubernetes e2e suite [sig-apps] Network Partition [Disruptive] [Slow] [k8s.io] Pods should return to running and ready state after network partition is healed All pods on the unreachable node should be marked as NotReady upon the node turn NotReady AND all pods should be mark back to Ready when the node get back to Ready before pod eviction timeout
Kubernetes e2e suite [sig-apps] Network Partition [Disruptive] [Slow] [k8s.io] [Job] should create new pods when node is partitioned
Kubernetes e2e suite [sig-apps] Network Partition [Disruptive] [Slow] [k8s.io] [ReplicationController] should eagerly create replacement pod during network partition when termination grace is non-zero
Kubernetes e2e suite [sig-apps] Network Partition [Disruptive] [Slow] [k8s.io] [ReplicationController] should recreate pods scheduled on the unreachable node AND allow scheduling of pods on a node after it rejoins the cluster
Kubernetes e2e suite [sig-apps] Network Partition [Disruptive] [Slow] [k8s.io] [StatefulSet] should come back up if node goes down [Slow] [Disruptive]
Kubernetes e2e suite [sig-apps] Network Partition [Disruptive] [Slow] [k8s.io] [StatefulSet] should not reschedule stateful pods if there is a network partition [Slow] [Disruptive]
Kubernetes e2e suite [sig-apps] ReplicaSet should serve a basic image on each replica with a private image
Kubernetes e2e suite [sig-apps] ReplicationController should serve a basic image on each replica with a private image
Kubernetes e2e suite [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]
Kubernetes e2e suite [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]
Kubernetes e2e suite [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should adopt matching orphans and release non-matching pods
Kubernetes e2e suite [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should not deadlock when a pod's predecessor fails
Kubernetes e2e suite [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]
Kubernetes e2e suite [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications with PVCs
Kubernetes e2e suite [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should provide basic identity
Kubernetes e2e suite [sig-apps] StatefulSet [k8s.io] Deploy clustered applications [Feature:StatefulSet] [Slow] should creating a working CockroachDB cluster
Kubernetes e2e suite [sig-apps] StatefulSet [k8s.io] Deploy clustered applications [Feature:StatefulSet] [Slow] should creating a working mysql cluster
Kubernetes e2e suite [sig-apps] StatefulSet [k8s.io] Deploy clustered applications [Feature:StatefulSet] [Slow] should creating a working redis cluster
Kubernetes e2e suite [sig-apps] StatefulSet [k8s.io] Deploy clustered applications [Feature:StatefulSet] [Slow] should creating a working zookeeper cluster
Kubernetes e2e suite [sig-apps] stateful Upgrade [Feature:StatefulUpgrade] [k8s.io] stateful upgrade should maintain a functioning cluster
Kubernetes e2e suite [sig-auth] Advanced Audit [DisabledForLargeClusters][Flaky] should audit API calls to create and delete custom resource definition.
Kubernetes e2e suite [sig-auth] Advanced Audit [DisabledForLargeClusters][Flaky] should audit API calls to create, get, update, patch, delete, list, watch configmaps.
Kubernetes e2e suite [sig-auth] Advanced Audit [DisabledForLargeClusters][Flaky] should audit API calls to create, get, update, patch, delete, list, watch deployments.
Kubernetes e2e suite [sig-auth] Advanced Audit [DisabledForLargeClusters][Flaky] should audit API calls to create, get, update, patch, delete, list, watch pods.
Kubernetes e2e suite [sig-auth] Advanced Audit [DisabledForLargeClusters][Flaky] should audit API calls to create, get, update, patch, delete, list, watch secrets.
Kubernetes e2e suite [sig-auth] Advanced Audit [DisabledForLargeClusters][Flaky] should audit API calls to get a pod with unauthorized user.
Kubernetes e2e suite [sig-auth] Advanced Audit [DisabledForLargeClusters][Flaky] should list pods as impersonated user.
Kubernetes e2e suite [sig-auth] Metadata Concealment should run a check-metadata-concealment job to completion
Kubernetes e2e suite [sig-auth] PodSecurityPolicy should allow pods under the privileged policy.PodSecurityPolicy
Kubernetes e2e suite [sig-auth] PodSecurityPolicy should enforce the restricted policy.PodSecurityPolicy
Kubernetes e2e suite [sig-auth] PodSecurityPolicy should forbid pod creation when no PSP is available
Kubernetes e2e suite [sig-auth] ServiceAccounts should support InClusterConfig with token rotation [Slow] [Feature:TokenRequestProjection]
Kubernetes e2e suite [sig-auth] [Feature:DynamicAudit] should dynamically audit API calls
Kubernetes e2e suite [sig-auth] [Feature:NodeAuthenticator] The kubelet can delegate ServiceAccount tokens to the API server
Kubernetes e2e suite [sig-auth] [Feature:NodeAuthenticator] The kubelet's main port 10250 should reject requests with no credentials
Kubernetes e2e suite [sig-auth] [Feature:NodeAuthorizer] A node shouldn't be able to create another node
Kubernetes e2e suite [sig-auth] [Feature:NodeAuthorizer] A node shouldn't be able to delete another node
Kubernetes e2e suite [sig-auth] [Feature:NodeAuthorizer] Getting a non-existent configmap should exit with the Forbidden error, not a NotFound error
Kubernetes e2e suite [sig-auth] [Feature:NodeAuthorizer] Getting a non-existent secret should exit with the Forbidden error, not a NotFound error
Kubernetes e2e suite [sig-auth] [Feature:NodeAuthorizer] Getting a secret for a workload the node has access to should succeed
Kubernetes e2e suite [sig-auth] [Feature:NodeAuthorizer] Getting an existing configmap should exit with the Forbidden error
Kubernetes e2e suite [sig-auth] [Feature:NodeAuthorizer] Getting an existing secret should exit with the Forbidden error
Kubernetes e2e suite [sig-autoscaling] Cluster size autoscaling [Slow] Should be able to scale a node group down to 0[Feature:ClusterSizeAutoscalingScaleDown]
Kubernetes e2e suite [sig-autoscaling] Cluster size autoscaling [Slow] Should be able to scale a node group up from 0[Feature:ClusterSizeAutoscalingScaleUp]
Kubernetes e2e suite [sig-autoscaling] Cluster size autoscaling [Slow] Should not scale GPU pool up if pod does not require GPUs [GpuType:] [Feature:ClusterSizeAutoscalingGpu]
Kubernetes e2e suite [sig-autoscaling] Cluster size autoscaling [Slow] Should scale down GPU pool from 1 [GpuType:] [Feature:ClusterSizeAutoscalingGpu]
Kubernetes e2e suite [sig-autoscaling] Cluster size autoscaling [Slow] Should scale up GPU pool from 0 [GpuType:] [Feature:ClusterSizeAutoscalingGpu]
Kubernetes e2e suite [sig-autoscaling] Cluster size autoscaling [Slow] Should scale up GPU pool from 1 [GpuType:] [Feature:ClusterSizeAutoscalingGpu]
Kubernetes e2e suite [sig-autoscaling] Cluster size autoscaling [Slow] Shouldn't perform scale up operation and should list unhealthy status if most of the cluster is broken[Feature:ClusterSizeAutoscalingScaleUp]
Kubernetes e2e suite [sig-autoscaling] Cluster size autoscaling [Slow] should add node to the particular mig [Feature:ClusterSizeAutoscalingScaleUp]
Kubernetes e2e suite [sig-autoscaling] Cluster size autoscaling [Slow] should be able to scale down by draining multiple pods one by one as dictated by pdb[Feature:ClusterSizeAutoscalingScaleDown]
Kubernetes e2e suite [sig-autoscaling] Cluster size autoscaling [Slow] should be able to scale down by draining system pods with pdb[Feature:ClusterSizeAutoscalingScaleDown]
Kubernetes e2e suite [sig-autoscaling] Cluster size autoscaling [Slow] should be able to scale down when rescheduling a pod is required and pdb allows for it[Feature:ClusterSizeAutoscalingScaleDown]
Kubernetes e2e suite [sig-autoscaling] Cluster size autoscaling [Slow] should correctly scale down after a node is not needed [Feature:ClusterSizeAutoscalingScaleDown]
Kubernetes e2e suite [sig-autoscaling] Cluster size autoscaling [Slow] should correctly scale down after a node is not needed and one node is broken [Feature:ClusterSizeAutoscalingScaleDown]
Kubernetes e2e suite [sig-autoscaling] Cluster size autoscaling [Slow] should correctly scale down after a node is not needed when there is non autoscaled pool[Feature:ClusterSizeAutoscalingScaleDown]
Kubernetes e2e suite [sig-autoscaling] Cluster size autoscaling [Slow] should disable node pool autoscaling [Feature:ClusterSizeAutoscalingScaleUp]
Kubernetes e2e suite [sig-autoscaling] Cluster size autoscaling [Slow] should increase cluster size if pending pods are small [Feature:ClusterSizeAutoscalingScaleUp]
Kubernetes e2e suite [sig-autoscaling] Cluster size autoscaling [Slow] should increase cluster size if pending pods are small and one node is broken [Feature:ClusterSizeAutoscalingScaleUp]
Kubernetes e2e suite [sig-autoscaling] Cluster size autoscaling [Slow] should increase cluster size if pending pods are small and there is another node pool that is not autoscaled [Feature:ClusterSizeAutoscalingScaleUp]
Kubernetes e2e suite [sig-autoscaling] Cluster size autoscaling [Slow] should increase cluster size if pod requesting EmptyDir volume is pending [Feature:ClusterSizeAutoscalingScaleUp]
Kubernetes e2e suite [sig-autoscaling] Cluster size autoscaling [Slow] should increase cluster size if pod requesting volume is pending [Feature:ClusterSizeAutoscalingScaleUp]
Kubernetes e2e suite [sig-autoscaling] Cluster size autoscaling [Slow] should increase cluster size if pods are pending due to host port conflict [Feature:ClusterSizeAutoscalingScaleUp]
Kubernetes e2e suite [sig-autoscaling] Cluster size autoscaling [Slow] should increase cluster size if pods are pending due to pod anti-affinity [Feature:ClusterSizeAutoscalingScaleUp]
Kubernetes e2e suite [sig-autoscaling] Cluster size autoscaling [Slow] should scale down when expendable pod is running [Feature:ClusterSizeAutoscalingScaleDown]
Kubernetes e2e suite [sig-autoscaling] Cluster size autoscaling [Slow] should scale up correct target pool [Feature:ClusterSizeAutoscalingScaleUp]
Kubernetes e2e suite [sig-autoscaling] Cluster size autoscaling [Slow] should scale up when non-expendable pod is created [Feature:ClusterSizeAutoscalingScaleUp]
Kubernetes e2e suite [sig-autoscaling] Cluster size autoscaling [Slow] shouldn't be able to scale down when rescheduling a pod is required, but pdb doesn't allow drain [Feature:ClusterSizeAutoscalingScaleDown]
Kubernetes e2e suite [sig-autoscaling] Cluster size autoscaling [Slow] shouldn't increase cluster size if pending pod is too large [Feature:ClusterSizeAutoscalingScaleUp]
Kubernetes e2e suite [sig-autoscaling] Cluster size autoscaling [Slow] shouldn't scale down when non-expendable pod is running [Feature:ClusterSizeAutoscalingScaleDown]
Kubernetes e2e suite [sig-autoscaling] Cluster size autoscaling [Slow] shouldn't scale up when expendable pod is created [Feature:ClusterSizeAutoscalingScaleUp]
Kubernetes e2e suite [sig-autoscaling] Cluster size autoscaling [Slow] shouldn't scale up when expendable pod is preempted [Feature:ClusterSizeAutoscalingScaleUp]
Kubernetes e2e suite [sig-autoscaling] Cluster size autoscaling [Slow] shouldn't trigger additional scale-ups while processing a scale-up [Feature:ClusterSizeAutoscalingScaleUp]
Kubernetes e2e suite [sig-autoscaling] DNS horizontal autoscaling [DisabledForLargeClusters] kube-dns-autoscaler should scale kube-dns pods in both nonfaulty and faulty scenarios
Kubernetes e2e suite [sig-autoscaling] DNS horizontal autoscaling [Serial] [Slow] kube-dns-autoscaler should scale kube-dns pods when the cluster size changes
Kubernetes e2e suite [sig-autoscaling] [Feature:ClusterSizeAutoscalingScaleUp] [Slow] Autoscaling [sig-autoscaling] Autoscaling a service from 1 pod and 3 nodes to 8 pods and >=4 nodes takes less than 15 minutes
Kubernetes e2e suite [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) [sig-autoscaling] ReplicationController light Should scale from 1 pod to 2 pods
Kubernetes e2e suite [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) [sig-autoscaling] ReplicationController light Should scale from 2 pods to 1 pod [Slow]
Kubernetes e2e suite [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) [sig-autoscaling] [Serial] [Slow] Deployment Should scale from 1 pod to 3 pods and from 3 to 5
Kubernetes e2e suite [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) [sig-autoscaling] [Serial] [Slow] Deployment Should scale from 5 pods to 3 pods and from 3 to 1
Kubernetes e2e suite [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) [sig-autoscaling] [Serial] [Slow] ReplicaSet Should scale from 1 pod to 3 pods and from 3 to 5
Kubernetes e2e suite [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) [sig-autoscaling] [Serial] [Slow] ReplicaSet Should scale from 5 pods to 3 pods and from 3 to 1
Kubernetes e2e suite [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) [sig-autoscaling] [Serial] [Slow] ReplicationController Should scale from 1 pod to 3 pods and from 3 to 5 and verify decision stability
Kubernetes e2e suite [sig-autoscaling] [Feature:HPA] Horizontal pod autoscaling (scale resource: CPU) [sig-autoscaling] [Serial] [Slow] ReplicationController Should scale from 5 pods to 3 pods and from 3 to 1 and verify decision stability
Kubernetes e2e suite [sig-autoscaling] [HPA] Horizontal pod autoscaling (scale resource: Custom Metrics from Stackdriver) should scale down with Custom Metric of type Object from Stackdriver [Feature:CustomMetricsAutoscaling]
Kubernetes e2e suite [sig-autoscaling] [HPA] Horizontal pod autoscaling (scale resource: Custom Metrics from Stackdriver) should scale down with Custom Metric of type Pod from Stackdriver [Feature:CustomMetricsAutoscaling]
Kubernetes e2e suite [sig-autoscaling] [HPA] Horizontal pod autoscaling (scale resource: Custom Metrics from Stackdriver) should scale down with Custom Metric of type Pod from Stackdriver with Prometheus [Feature:CustomMetricsAutoscaling]
Kubernetes e2e suite [sig-autoscaling] [HPA] Horizontal pod autoscaling (scale resource: Custom Metrics from Stackdriver) should scale down with External Metric with target average value from Stackdriver [Feature:CustomMetricsAutoscaling]
Kubernetes e2e suite [sig-autoscaling] [HPA] Horizontal pod autoscaling (scale resource: Custom Metrics from Stackdriver) should scale down with External Metric with target value from Stackdriver [Feature:CustomMetricsAutoscaling]
Kubernetes e2e suite [sig-autoscaling] [HPA] Horizontal pod autoscaling (scale resource: Custom Metrics from Stackdriver) should scale up with two External metrics from Stackdriver [Feature:CustomMetricsAutoscaling]
Kubernetes e2e suite [sig-autoscaling] [HPA] Horizontal pod autoscaling (scale resource: Custom Metrics from Stackdriver) should scale up with two metrics of type Pod from Stackdriver [Feature:CustomMetricsAutoscaling]
Kubernetes e2e suite [sig-cli] Kubectl alpha client Kubectl run CronJob should create a CronJob
Kubernetes e2e suite [sig-cli] Kubectl client Kubectl taint [Serial] should remove all the taints with the same key off a node
Kubernetes e2e suite [sig-cli] Kubectl client Kubectl taint [Serial] should update the taint on a node
Kubernetes e2e suite [sig-cluster-lifecycle] Addon update should propagate add-on file changes [Slow]
Kubernetes e2e suite [sig-cluster-lifecycle] Downgrade [Feature:Downgrade] cluster downgrade should maintain a functioning cluster [Feature:ClusterDowngrade]
Kubernetes e2e suite [sig-cluster-lifecycle] HA-master [Feature:HAMaster] survive addition/removal replicas different zones [Serial][Disruptive]
Kubernetes e2e suite [sig-cluster-lifecycle] HA-master [Feature:HAMaster] survive addition/removal replicas multizone workers [Serial][Disruptive]
Kubernetes e2e suite [sig-cluster-lifecycle] HA-master [Feature:HAMaster] survive addition/removal replicas same zone [Serial][Disruptive]
Kubernetes e2e suite [sig-cluster-lifecycle] Nodes [Disruptive] Resize [Slow] should be able to add nodes
Kubernetes e2e suite [sig-cluster-lifecycle] Nodes [Disruptive] Resize [Slow] should be able to delete nodes
Kubernetes e2e suite [sig-cluster-lifecycle] Ports Security Check [Feature:KubeletSecurity] should not be able to proxy to cadvisor port 4194 using proxy subresource
Kubernetes e2e suite [sig-cluster-lifecycle] Ports Security Check [Feature:KubeletSecurity] should not be able to proxy to the readonly kubelet port 10255 using proxy subresource
Kubernetes e2e suite [sig-cluster-lifecycle] Ports Security Check [Feature:KubeletSecurity] should not have port 10255 open on all its public IP addresses
Kubernetes e2e suite [sig-cluster-lifecycle] Ports Security Check [Feature:KubeletSecurity] should not have port 4194 open on all its public IP addresses
Kubernetes e2e suite [sig-cluster-lifecycle] Reboot [Disruptive] [Feature:Reboot] each node by dropping all inbound packets for a while and ensure they function afterwards
Kubernetes e2e suite [sig-cluster-lifecycle] Reboot [Disruptive] [Feature:Reboot] each node by dropping all outbound packets for a while and ensure they function afterwards
Kubernetes e2e suite [sig-cluster-lifecycle] Reboot [Disruptive] [Feature:Reboot] each node by ordering clean reboot and ensure they function upon restart
Kubernetes e2e suite [sig-cluster-lifecycle] Reboot [Disruptive] [Feature:Reboot] each node by ordering unclean reboot and ensure they function upon restart
Kubernetes e2e suite [sig-cluster-lifecycle] Reboot [Disruptive] [Feature:Reboot] each node by switching off the network interface and ensure they function upon switch on
Kubernetes e2e suite [sig-cluster-lifecycle] Reboot [Disruptive] [Feature:Reboot] each node by triggering kernel panic and ensure they function upon restart
Kubernetes e2e suite [sig-cluster-lifecycle] Restart [Disruptive] should restart all nodes and ensure all nodes and pods recover
Kubernetes e2e suite [sig-cluster-lifecycle] Upgrade [Feature:Upgrade] cluster upgrade should maintain a functioning cluster [Feature:ClusterUpgrade]
Kubernetes e2e suite [sig-cluster-lifecycle] Upgrade [Feature:Upgrade] master upgrade should maintain a functioning cluster [Feature:MasterUpgrade]
Kubernetes e2e suite [sig-cluster-lifecycle] Upgrade [Feature:Upgrade] node upgrade should maintain a functioning cluster [Feature:NodeUpgrade]
Kubernetes e2e suite [sig-cluster-lifecycle] [Disruptive] NodeLease NodeLease deletion node lease should be deleted when corresponding node is deleted
Kubernetes e2e suite [sig-cluster-lifecycle] [Feature:BootstrapTokens] should delete the signed bootstrap tokens from clusterInfo ConfigMap when bootstrap token is deleted
Kubernetes e2e suite [sig-cluster-lifecycle] [Feature:BootstrapTokens] should delete the token secret when the secret expired
Kubernetes e2e suite [sig-cluster-lifecycle] [Feature:BootstrapTokens] should not delete the token secret when the secret is not expired
Kubernetes e2e suite [sig-cluster-lifecycle] [Feature:BootstrapTokens] should re-sign the bootstrap tokens when the clusterInfo ConfigMap is updated [Serial][Disruptive]
Kubernetes e2e suite [sig-cluster-lifecycle] [Feature:BootstrapTokens] should sign the newly added bootstrap tokens
Kubernetes e2e suite [sig-cluster-lifecycle] etcd Upgrade [Feature:EtcdUpgrade] etcd upgrade should maintain a functioning cluster
Kubernetes e2e suite [sig-cluster-lifecycle] gpu Upgrade [Feature:GPUUpgrade] cluster downgrade should be able to run gpu pod after downgrade [Feature:GPUClusterDowngrade]
Kubernetes e2e suite [sig-cluster-lifecycle] gpu Upgrade [Feature:GPUUpgrade] cluster upgrade should be able to run gpu pod after upgrade [Feature:GPUClusterUpgrade]
Kubernetes e2e suite [sig-cluster-lifecycle] gpu Upgrade [Feature:GPUUpgrade] master upgrade should NOT disrupt gpu pod [Feature:GPUMasterUpgrade]
Kubernetes e2e suite [sig-cluster-lifecycle] kube-proxy migration [Feature:KubeProxyDaemonSetMigration] Downgrade kube-proxy from a DaemonSet to static pods should maintain a functioning cluster [Feature:KubeProxyDaemonSetDowngrade]
Kubernetes e2e suite [sig-cluster-lifecycle] kube-proxy migration [Feature:KubeProxyDaemonSetMigration] Upgrade kube-proxy from static pods to a DaemonSet should maintain a functioning cluster [Feature:KubeProxyDaemonSetUpgrade]
Kubernetes e2e suite [sig-instrumentation] Cluster level logging implemented by Stackdriver [Feature:StackdriverLogging] [Soak] should ingest logs from applications running for a prolonged amount of time
Kubernetes e2e suite [sig-instrumentation] Cluster level logging implemented by Stackdriver should ingest events [Feature:StackdriverLogging]
Kubernetes e2e suite [sig-instrumentation] Cluster level logging implemented by Stackdriver should ingest logs [Feature:StackdriverLogging]
Kubernetes e2e suite [sig-instrumentation] Cluster level logging implemented by Stackdriver should ingest system logs from all nodes [Feature:StackdriverLogging]
Kubernetes e2e suite [sig-instrumentation] Cluster level logging using Elasticsearch [Feature:Elasticsearch] should check that logs from containers are ingested into Elasticsearch
Kubernetes e2e suite [sig-instrumentation] Kibana Logging Instances Is Alive [Feature:Elasticsearch] should check that the Kibana logging instance is alive
Kubernetes e2e suite [sig-instrumentation] Logging soak [Performance] [Slow] [Disruptive] should survive logging 1KB every 1s, for a duration of 2m0s
Kubernetes e2e suite [sig-instrumentation] Stackdriver Monitoring should have accelerator metrics [Feature:StackdriverAcceleratorMonitoring]
Kubernetes e2e suite [sig-instrumentation] Stackdriver Monitoring should have cluster metrics [Feature:StackdriverMonitoring]
Kubernetes e2e suite [sig-instrumentation] Stackdriver Monitoring should run Custom Metrics - Stackdriver Adapter for external metrics [Feature:StackdriverExternalMetrics]
Kubernetes e2e suite [sig-instrumentation] Stackdriver Monitoring should run Custom Metrics - Stackdriver Adapter for new resource model [Feature:StackdriverCustomMetrics]
Kubernetes e2e suite [sig-instrumentation] Stackdriver Monitoring should run Custom Metrics - Stackdriver Adapter for old resource model [Feature:StackdriverCustomMetrics]
Kubernetes e2e suite [sig-instrumentation] Stackdriver Monitoring should run Stackdriver Metadata Agent [Feature:StackdriverMetadataAgent]
Kubernetes e2e suite [sig-instrumentation] [Feature:PrometheusMonitoring] Prometheus should contain correct container CPU metric
Kubernetes e2e suite [sig-instrumentation] [Feature:PrometheusMonitoring] Prometheus should scrape container metrics from all nodes
Kubernetes e2e suite [sig-instrumentation] [Feature:PrometheusMonitoring] Prometheus should scrape metrics from annotated pods
Kubernetes e2e suite [sig-instrumentation] [Feature:PrometheusMonitoring] Prometheus should scrape metrics from annotated services
Kubernetes e2e suite [sig-instrumentation] [Feature:PrometheusMonitoring] Prometheus should successfully scrape all targets
Kubernetes e2e suite [sig-network] ClusterDns [Feature:Example] should create pod that uses dns
Kubernetes e2e suite [sig-network] DNS configMap federations [Feature:Federation] should be able to change federation configuration [Slow][Serial]
Kubernetes e2e suite [sig-network] DNS configMap nameserver [Feature:Networking-IPv6] [LinuxOnly] Change stubDomain should be able to change stubDomain configuration [Slow][Serial]
Kubernetes e2e suite [sig-network] DNS configMap nameserver [Feature:Networking-IPv6] [LinuxOnly] Forward PTR lookup should forward PTR records lookup to upstream nameserver [Slow][Serial]
Kubernetes e2e suite [sig-network] DNS configMap nameserver [Feature:Networking-IPv6] [LinuxOnly] Forward external name lookup should forward externalname lookup to upstream nameserver [Slow][Serial]
Kubernetes e2e suite [sig-network] DNS configMap nameserver [IPv4] Change stubDomain should be able to change stubDomain configuration [Slow][Serial]
Kubernetes e2e suite [sig-network] DNS configMap nameserver [IPv4] Forward PTR lookup should forward PTR records lookup to upstream nameserver [Slow][Serial]
Kubernetes e2e suite [sig-network] DNS configMap nameserver [IPv4] Forward external name lookup should forward externalname lookup to upstream nameserver [Slow][Serial]
Kubernetes e2e suite [sig-network] DNS should provide DNS for the cluster [Provider:GCE]
Kubernetes e2e suite [sig-network] ESIPP [Slow] [DisabledForLargeClusters] should handle updates to ExternalTrafficPolicy field
Kubernetes e2e suite [sig-network] ESIPP [Slow] [DisabledForLargeClusters] should only target nodes with endpoints
Kubernetes e2e suite [sig-network] ESIPP [Slow] [DisabledForLargeClusters] should work for type=LoadBalancer
Kubernetes e2e suite [sig-network] ESIPP [Slow] [DisabledForLargeClusters] should work for type=NodePort
Kubernetes e2e suite [sig-network] ESIPP [Slow] [DisabledForLargeClusters] should work from pods
Kubernetes e2e suite [sig-network] Firewall rule [Slow] [Serial] should create valid firewall rules for LoadBalancer type service
Kubernetes e2e suite [sig-network] Firewall rule should have correct firewall rules for e2e cluster
Kubernetes e2e suite [sig-network] Loadbalancing: L7 GCE [Slow] [Feature:Ingress] multicluster ingress should get instance group annotation
Kubernetes e2e suite [sig-network] Loadbalancing: L7 GCE [Slow] [Feature:Ingress] should conform to Ingress spec
Kubernetes e2e suite [sig-network] Loadbalancing: L7 GCE [Slow] [Feature:Ingress] should create ingress with pre-shared certificate
Kubernetes e2e suite [sig-network] Loadbalancing: L7 GCE [Slow] [Feature:Ingress] should support multiple TLS certs
Kubernetes e2e suite [sig-network] Loadbalancing: L7 GCE [Slow] [Feature:NEG] rolling update backend pods should not cause service disruption
Kubernetes e2e suite [sig-network] Loadbalancing: L7 GCE [Slow] [Feature:NEG] should be able to create a ClusterIP service
Kubernetes e2e suite [sig-network] Loadbalancing: L7 GCE [Slow] [Feature:NEG] should be able to switch between IG and NEG modes
Kubernetes e2e suite [sig-network] Loadbalancing: L7 GCE [Slow] [Feature:NEG] should conform to Ingress spec
Kubernetes e2e suite [sig-network] Loadbalancing: L7 GCE [Slow] [Feature:NEG] should create NEGs for all ports with the Ingress annotation, and NEGs for the standalone annotation otherwise
Kubernetes e2e suite [sig-network] Loadbalancing: L7 GCE [Slow] [Feature:NEG] should sync endpoints for both Ingress-referenced NEG and standalone NEG
Kubernetes e2e suite [sig-network] Loadbalancing: L7 GCE [Slow] [Feature:NEG] should sync endpoints to NEG
Kubernetes e2e suite [sig-network] Loadbalancing: L7 GCE [Slow] [Feature:kubemci] should conform to Ingress spec
Kubernetes e2e suite [sig-network] Loadbalancing: L7 GCE [Slow] [Feature:kubemci] should create ingress with backend HTTPS
Kubernetes e2e suite [sig-network] Loadbalancing: L7 GCE [Slow] [Feature:kubemci] should create ingress with pre-shared certificate
Kubernetes e2e suite [sig-network] Loadbalancing: L7 GCE [Slow] [Feature:kubemci] should remove clusters as expected
Kubernetes e2e suite [sig-network] Loadbalancing: L7 GCE [Slow] [Feature:kubemci] should support https-only annotation
Kubernetes e2e suite [sig-network] Loadbalancing: L7 GCE [Slow] [Feature:kubemci] single and multi-cluster ingresses should be able to exist together
Kubernetes e2e suite [sig-network] Loadbalancing: L7 Scalability GCE [Slow] [Serial] [Feature:IngressScale] Creating and updating ingresses should happen promptly with small/medium/large amount of ingresses
Kubernetes e2e suite [sig-network] Loadbalancing: L7 [Slow] Nginx should conform to Ingress spec
Kubernetes e2e suite [sig-network] Network should resolve connection reset issue #74839 [Slow]
Kubernetes e2e suite [sig-network] Network should set TCP CLOSE_WAIT timeout
Kubernetes e2e suite [sig-network] NetworkPolicy [LinuxOnly] NetworkPolicy between server and client should allow egress access on one named port [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] NetworkPolicy [LinuxOnly] NetworkPolicy between server and client should allow egress access to server in CIDR block [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] NetworkPolicy [LinuxOnly] NetworkPolicy between server and client should allow ingress access from namespace on one named port [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] NetworkPolicy [LinuxOnly] NetworkPolicy between server and client should allow ingress access from updated namespace [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] NetworkPolicy [LinuxOnly] NetworkPolicy between server and client should allow ingress access from updated pod [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] NetworkPolicy [LinuxOnly] NetworkPolicy between server and client should allow ingress access on one named port [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] NetworkPolicy [LinuxOnly] NetworkPolicy between server and client should enforce egress policy allowing traffic to a server in a different namespace based on PodSelector and NamespaceSelector [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] NetworkPolicy [LinuxOnly] NetworkPolicy between server and client should enforce multiple egress policies with egress allow-all policy taking precedence [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] NetworkPolicy [LinuxOnly] NetworkPolicy between server and client should enforce multiple ingress policies with ingress allow-all policy taking precedence [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] NetworkPolicy [LinuxOnly] NetworkPolicy between server and client should enforce multiple, stacked policies with overlapping podSelectors [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] NetworkPolicy [LinuxOnly] NetworkPolicy between server and client should enforce policies to check ingress and egress policies can be controlled independently based on PodSelector [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] NetworkPolicy [LinuxOnly] NetworkPolicy between server and client should enforce policy based on NamespaceSelector with MatchExpressions [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] NetworkPolicy [LinuxOnly] NetworkPolicy between server and client should enforce policy based on PodSelector and NamespaceSelector [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] NetworkPolicy [LinuxOnly] NetworkPolicy between server and client should enforce policy based on PodSelector or NamespaceSelector [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] NetworkPolicy [LinuxOnly] NetworkPolicy between server and client should enforce policy based on PodSelector with MatchExpressions [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] NetworkPolicy [LinuxOnly] NetworkPolicy between server and client should enforce policy based on Ports [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] NetworkPolicy [LinuxOnly] NetworkPolicy between server and client should enforce policy to allow traffic from pods within server namespace based on PodSelector [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] NetworkPolicy [LinuxOnly] NetworkPolicy between server and client should enforce policy to allow traffic only from a different namespace, based on NamespaceSelector [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] NetworkPolicy [LinuxOnly] NetworkPolicy between server and client should enforce policy to allow traffic only from a pod in a different namespace based on PodSelector and NamespaceSelector [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] NetworkPolicy [LinuxOnly] NetworkPolicy between server and client should enforce updated policy [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] NetworkPolicy [LinuxOnly] NetworkPolicy between server and client should stop enforcing policies after they are deleted [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] NetworkPolicy [LinuxOnly] NetworkPolicy between server and client should support a 'default-deny' policy [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] NetworkPolicy [LinuxOnly] NetworkPolicy between server and client should support allow-all policy [Feature:NetworkPolicy]
Kubernetes e2e suite [sig-network] Networking Granular Checks: Services should update nodePort: http [Slow]
Kubernetes e2e suite [sig-network] Networking Granular Checks: Services should update nodePort: udp [Slow]
Kubernetes e2e suite [sig-network] Networking IPerf IPv4 [Experimental] [Feature:Networking-IPv4] [Slow] [Feature:Networking-Performance] should transfer ~1GB onto the service endpoint with 1 server (maximum of 1 client)
Kubernetes e2e suite [sig-network] Networking IPerf IPv6 [Experimental] [Feature:Networking-IPv6] [Slow] [Feature:Networking-Performance] [LinuxOnly] should transfer ~1GB onto the service endpoint with 1 server (maximum of 1 client)
Kubernetes e2e suite [sig-network] Networking should provide Internet connection for containers [Feature:Networking-IPv4]
Kubernetes e2e suite [sig-network] Networking should provide Internet connection for containers [Feature:Networking-IPv6][Experimental][LinuxOnly]
Kubernetes e2e suite [sig-network] NoSNAT [Feature:NoSNAT] [Slow] Should be able to send traffic between Pods without SNAT
Kubernetes e2e suite [sig-network] Services [Feature:GCEAlphaFeature][Slow] should be able to create and tear down a standard-tier load balancer
Kubernetes e2e suite [sig-network] Services should be able to change the type and ports of a service [Slow] [DisabledForLargeClusters]
Kubernetes e2e suite [sig-network] Services should be able to create an internal type load balancer [Slow] [DisabledForLargeClusters]
Kubernetes e2e suite [sig-network] Services should be able to switch session affinity for LoadBalancer service with ESIPP off [Slow] [DisabledForLargeClusters]
Kubernetes e2e suite [sig-network] Services should be able to switch session affinity for LoadBalancer service with ESIPP on [Slow] [DisabledForLargeClusters]
Kubernetes e2e suite [sig-network] Services should be able to up and down services
Kubernetes e2e suite [sig-network] Services should handle load balancer cleanup finalizer for service [Slow]
Kubernetes e2e suite [sig-network] Services should have session affinity work for LoadBalancer service with ESIPP off [Slow] [DisabledForLargeClusters]
Kubernetes e2e suite [sig-network] Services should have session affinity work for LoadBalancer service with ESIPP on [Slow] [DisabledForLargeClusters]
Kubernetes e2e suite [sig-network] Services should implement service.kubernetes.io/headless
Kubernetes e2e suite [sig-network] Services should implement service.kubernetes.io/service-proxy-name
Kubernetes e2e suite [sig-network] Services should only allow access from service loadbalancer source ranges [Slow]
Kubernetes e2e suite [sig-network] Services should reconcile LB health check interval [Slow][Serial]
Kubernetes e2e suite [sig-network] Services should work after restarting apiserver [Disruptive]
Kubernetes e2e suite [sig-network] Services should work after restarting kube-proxy [Disruptive]
Kubernetes e2e suite [sig-network] [Feature:IPv6DualStackAlphaFeature] [LinuxOnly] should be able to reach pod on ipv4 and ipv6 ip [Feature:IPv6DualStackAlphaFeature:Phase2]
Kubernetes e2e suite [sig-network] [Feature:IPv6DualStackAlphaFeature] [LinuxOnly] should create pod, add ipv6 and ipv4 ip to pod ips
Kubernetes e2e suite [sig-network] [Feature:IPv6DualStackAlphaFeature] [LinuxOnly] should have ipv4 and ipv6 internal node ip
Kubernetes e2e suite [sig-network] [Feature:IPv6DualStackAlphaFeature] [LinuxOnly] should have ipv4 and ipv6 node podCIDRs
Kubernetes e2e suite [sig-network] [Feature:PerformanceDNS][Serial] Should answer DNS query for maximum number of services per cluster
Kubernetes e2e suite [sig-network] [sig-windows] Networking Granular Checks: Pods should function for intra-pod communication: http
Kubernetes e2e suite [sig-network] [sig-windows] Networking Granular Checks: Pods should function for intra-pod communication: udp
Kubernetes e2e suite [sig-network] [sig-windows] Networking Granular Checks: Pods should function for node-pod communication: http
Kubernetes e2e suite [sig-network] [sig-windows] Networking Granular Checks: Pods should function for node-pod communication: udp
Kubernetes e2e suite [sig-scalability] Density [Feature:HighDensityPerformance] should allow starting 95 pods per node using ReplicationController with 0 secrets, 0 configmaps, 0 token projections, and 0 daemons
Kubernetes e2e suite [sig-scalability] Density [Feature:ManualPerformance] should allow starting 100 pods per node using ReplicationController with 0 secrets, 0 configmaps, 0 token projections, and 0 daemons
Kubernetes e2e suite [sig-scalability] Density [Feature:ManualPerformance] should allow starting 3 pods per node using ReplicationController with 0 secrets, 0 configmaps, 0 token projections, and 0 daemons
Kubernetes e2e suite [sig-scalability] Density [Feature:ManualPerformance] should allow starting 30 pods per node using Deployment.extensions with 0 secrets, 0 configmaps, 0 token projections, and 0 daemons
Kubernetes e2e suite [sig-scalability] Density [Feature:ManualPerformance] should allow starting 30 pods per node using Deployment.extensions with 0 secrets, 0 configmaps, 2 token projections, and 0 daemons
Kubernetes e2e suite [sig-scalability] Density [Feature:ManualPerformance] should allow starting 30 pods per node using Deployment.extensions with 0 secrets, 2 configmaps, 0 token projections, and 0 daemons
Kubernetes e2e suite [sig-scalability] Density [Feature:ManualPerformance] should allow starting 30 pods per node using Deployment.extensions with 2 secrets, 0 configmaps, 0 token projections, and 0 daemons
Kubernetes e2e suite [sig-scalability] Density [Feature:ManualPerformance] should allow starting 30 pods per node using Job.batch with 0 secrets, 0 configmaps, 0 token projections, and 0 daemons
Kubernetes e2e suite [sig-scalability] Density [Feature:ManualPerformance] should allow starting 30 pods per node using ReplicationController with 0 secrets, 0 configmaps, 0 token projections, and 0 daemons with quotas
Kubernetes e2e suite [sig-scalability] Density [Feature:ManualPerformance] should allow starting 30 pods per node using ReplicationController with 0 secrets, 0 configmaps, 0 token projections, and 2 daemons
Kubernetes e2e suite [sig-scalability] Density [Feature:ManualPerformance] should allow starting 50 pods per node using ReplicationController with 0 secrets, 0 configmaps, 0 token projections, and 0 daemons
Kubernetes e2e suite [sig-scalability] Density [Feature:Performance] should allow starting 30 pods per node using ReplicationController with 0 secrets, 0 configmaps, 0 token projections, and 0 daemons
Kubernetes e2e suite [sig-scalability] Load capacity [Feature:ManualPerformance] should be able to handle 3 pods per node ReplicationController with 0 secrets, 0 configmaps and 0 daemons
Kubernetes e2e suite [sig-scalability] Load capacity [Feature:ManualPerformance] should be able to handle 30 pods per node Deployment.extensions with 0 secrets, 0 configmaps and 0 daemons
Kubernetes e2e suite [sig-scalability] Load capacity [Feature:ManualPerformance] should be able to handle 30 pods per node Deployment.extensions with 0 secrets, 2 configmaps and 0 daemons
Kubernetes e2e suite [sig-scalability] Load capacity [Feature:ManualPerformance] should be able to handle 30 pods per node Deployment.extensions with 2 secrets, 0 configmaps and 0 daemons
Kubernetes e2e suite [sig-scalability] Load capacity [Feature:ManualPerformance] should be able to handle 30 pods per node Job.batch with 0 secrets, 0 configmaps and 0 daemons
Kubernetes e2e suite [sig-scalability] Load capacity [Feature:ManualPerformance] should be able to handle 30 pods per node Random with 0 secrets, 0 configmaps and 0 daemons
Kubernetes e2e suite [sig-scalability] Load capacity [Feature:ManualPerformance] should be able to handle 30 pods per node Random with 0 secrets, 0 configmaps and 0 daemons with quotas
Kubernetes e2e suite [sig-scalability] Load capacity [Feature:ManualPerformance] should be able to handle 30 pods per node ReplicationController with 0 secrets, 0 configmaps and 0 daemons with quotas
Kubernetes e2e suite [sig-scalability] Load capacity [Feature:ManualPerformance] should be able to handle 30 pods per node ReplicationController with 0 secrets, 0 configmaps and 2 daemons
Kubernetes e2e suite [sig-scalability] Load capacity [Feature:Performance] should be able to handle 30 pods per node ReplicationController with 0 secrets, 0 configmaps and 0 daemons
Kubernetes e2e suite [sig-scheduling] GPUDevicePluginAcrossRecreate [Feature:Recreate] run Nvidia GPU Device Plugin tests with a recreation
Kubernetes e2e suite [sig-scheduling] Multi-AZ Cluster Volumes [sig-storage] should only be allowed to provision PDs in zones where nodes exist
Kubernetes e2e suite [sig-scheduling] Multi-AZ Cluster Volumes [sig-storage] should schedule pods in the same zones as statically provisioned PVs
Kubernetes e2e suite [sig-scheduling] Multi-AZ Clusters should spread the pods of a replication controller across zones
Kubernetes e2e suite [sig-scheduling] Multi-AZ Clusters should spread the pods of a service across zones
Kubernetes e2e suite [sig-scheduling] NoExecuteTaintManager Multiple Pods [Serial] evicts pods with minTolerationSeconds [Conformance]
Kubernetes e2e suite [sig-scheduling] NoExecuteTaintManager Multiple Pods [Serial] only evicts pods without tolerations from tainted nodes
Kubernetes e2e suite [sig-scheduling] NoExecuteTaintManager Single Pod [Serial] doesn't evict pod with tolerations from tainted nodes
Kubernetes e2e suite [sig-scheduling] NoExecuteTaintManager Single Pod [Serial] eventually evicts pod with finite tolerations from tainted nodes
Kubernetes e2e suite [sig-scheduling] NoExecuteTaintManager Single Pod [Serial] evicts pods from tainted nodes
Kubernetes e2e suite [sig-scheduling] NoExecuteTaintManager Single Pod [Serial] removing taint cancels eviction [Conformance]
Kubernetes e2e suite [sig-scheduling] PodPriorityResolution [Serial] validates critical system priorities are created and resolved
Kubernetes e2e suite [sig-scheduling] SchedulerPredicates [Serial] validates that MaxPods limits the number of pods that are allowed to run [Slow]
Kubernetes e2e suite [sig-scheduling] SchedulerPredicates [Serial] validates local ephemeral storage resource limits of pods that are allowed to run [Feature:LocalStorageCapacityIsolation]
Kubernetes e2e suite [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance]
Kubernetes e2e suite [sig-scheduling] SchedulerPredicates [Serial] validates that NodeAffinity is respected if not matching
Kubernetes e2e suite [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance]
Kubernetes e2e suite [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance]
Kubernetes e2e suite [sig-scheduling] SchedulerPredicates [Serial] validates that required NodeAffinity setting is respected if matching
Kubernetes e2e suite [sig-scheduling] SchedulerPredicates [Serial] validates that taints-tolerations is respected if matching
Kubernetes e2e suite [sig-scheduling] SchedulerPredicates [Serial] validates that taints-tolerations is respected if not matching
Kubernetes e2e suite [sig-scheduling] SchedulerPredicates [Serial] validates that there exists a conflict between pods with the same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]
Kubernetes e2e suite [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]
Kubernetes e2e suite [sig-scheduling] SchedulerPreemption [Serial] validates basic preemption works
Kubernetes e2e suite [sig-scheduling] SchedulerPreemption [Serial] validates lower priority pod preemption by critical pod
Kubernetes e2e suite [sig-scheduling] SchedulerPriorities [Serial] Pod should avoid nodes that have avoidPod annotation
Kubernetes e2e suite [sig-scheduling] SchedulerPriorities [Serial] Pod should be preferably scheduled to nodes pod can tolerate
Kubernetes e2e suite [sig-scheduling] SchedulerPriorities [Serial] Pod should be scheduled to a node that doesn't match the PodAntiAffinity terms
Kubernetes e2e suite [sig-scheduling] TaintBasedEvictions [Serial] Checks that the node becomes unreachable
Kubernetes e2e suite [sig-scheduling] [Feature:GPUDevicePlugin] run Nvidia GPU Device Plugin tests
Kubernetes e2e suite [sig-service-catalog] [Feature:PodPreset] PodPreset should create a pod preset
Kubernetes e2e suite [sig-service-catalog] [Feature:PodPreset] PodPreset should not modify the pod on conflict
Kubernetes e2e suite [sig-storage] CSI Volumes CSI Topology test using GCE PD driver [Serial] should fail to schedule a pod with a zone missing from AllowedTopologies; PD is provisioned with delayed volume binding
Kubernetes e2e suite [sig-storage] CSI Volumes CSI Topology test using GCE PD driver [Serial] should fail to schedule a pod with a zone missing from AllowedTopologies; PD is provisioned with immediate volume binding
Kubernetes e2e suite [sig-storage] CSI Volumes CSI Topology test using GCE PD driver [Serial] should provision zonal PD with delayed volume binding and AllowedTopologies set and mount the volume to a pod
Kubernetes e2e suite [sig-storage] CSI Volumes CSI Topology test using GCE PD driver [Serial] should provision zonal PD with delayed volume binding and mount the volume to a pod
Kubernetes e2e suite [sig-storage] CSI Volumes CSI Topology test using GCE PD driver [Serial] should provision zonal PD with immediate volume binding and AllowedTopologies set and mount the volume to a pod
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)] disruptive [Disruptive] should test that a PV written before kubelet restart is readable after restart
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access two volumes with different volume modes and retain data across pod recreation on a different node
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access two volumes with different volume modes and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access two volumes with the same volume mode and retain data across pod recreation on a different node
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should concurrently access the single volume from pods on different nodes
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)] volumeMode should fail to use a volume in a pod with mismatched mode [Slow]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (block volmode)] volumeMode should not mount/map unused volumes in a pod
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with mount options
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with snapshot data source [Feature:VolumeSnapshotDataSource]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should fail if non-existent subpath is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should fail if subpath directory is outside the volume [Slow]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should fail if subpath file is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should fail if subpath with backstepping is outside the volume [Slow]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should support creating multiple subpaths from the same volume [Slow]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should support restarting containers using directory as subpath [Slow]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should support restarting containers using file as subpath [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should unmount if pod is force deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should unmount if pod is gracefully deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] subPath should verify that a container cannot write to read-only subpath volumes [Slow]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] volumeIO should write files of various sizes, verify size, validate content [Slow]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (default fs)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ext3)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ext3)] volumes should store data
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ext4)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ext4)] volumes should store data
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (filesystem volmode)] disruptive [Disruptive] should test that a PV written before kubelet restart is readable after restart
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should access two volumes with different volume modes and retain data across pod recreation on a different node
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should access two volumes with different volume modes and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should access two volumes with the same volume mode and retain data across pod recreation on a different node
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should access two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single volume from pods on different nodes
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (filesystem volmode)] volumeLimits should support volume limits [Slow][Serial]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode should fail to use a volume in a pod with mismatched mode [Slow]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode should not mount/map unused volumes in a pod
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][sig-windows] provisioning should provision storage with mount options
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][sig-windows] provisioning should provision storage with pvc data source
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][sig-windows] provisioning should provision storage with snapshot data source [Feature:VolumeSnapshotDataSource]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should be able to unmount after the subpath directory is deleted
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should fail if non-existent subpath is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should fail if subpath directory is outside the volume [Slow]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should fail if subpath file is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should fail if subpath with backstepping is outside the volume [Slow]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should support creating multiple subpaths from the same volume [Slow]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should support existing directories when readOnly specified in the volumeSource
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should support existing directory
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should support non-existent path
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should support restarting containers using directory as subpath [Slow]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should support restarting containers using file as subpath [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should unmount if pod is force deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should unmount if pod is gracefully deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should verify that a container cannot write to read-only subpath volumes [Slow]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][sig-windows] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (ntfs)][sig-windows] volumes should store data
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (xfs)][Slow] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic PV (xfs)][Slow] volumes should store data
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Dynamic Snapshot] snapshottable should create snapshot with defaults [Feature:VolumeSnapshotDataSource]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Inline-volume (default fs)] disruptive [Disruptive] should test that a PV written before kubelet restart is readable after restart
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Inline-volume (default fs)] subPath should be able to unmount after the subpath directory is deleted
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Inline-volume (default fs)] subPath should fail if non-existent subpath is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Inline-volume (default fs)] subPath should fail if subpath directory is outside the volume [Slow]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Inline-volume (default fs)] subPath should fail if subpath file is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Inline-volume (default fs)] subPath should fail if subpath with backstepping is outside the volume [Slow]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Inline-volume (default fs)] subPath should support creating multiple subpaths from the same volume [Slow]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Inline-volume (default fs)] subPath should support existing directories when readOnly specified in the volumeSource
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Inline-volume (default fs)] subPath should support existing directory
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Inline-volume (default fs)] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Inline-volume (default fs)] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Inline-volume (default fs)] subPath should support non-existent path
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Inline-volume (default fs)] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Inline-volume (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Inline-volume (default fs)] subPath should support restarting containers using directory as subpath [Slow]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Inline-volume (default fs)] subPath should support restarting containers using file as subpath [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Inline-volume (default fs)] subPath should unmount if pod is force deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Inline-volume (default fs)] subPath should unmount if pod is gracefully deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Inline-volume (default fs)] subPath should verify that a container cannot write to read-only subpath volumes [Slow]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Inline-volume (default fs)] volumeIO should write files of various sizes, verify size, validate content [Slow]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Inline-volume (default fs)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Inline-volume (default fs)] volumes should store data
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Inline-volume (ext3)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Inline-volume (ext3)] volumes should store data
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Inline-volume (ext4)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Inline-volume (ext4)] volumes should store data
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Inline-volume (ntfs)][sig-windows] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Inline-volume (ntfs)][sig-windows] volumes should store data
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Inline-volume (xfs)][Slow] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Inline-volume (xfs)][Slow] volumes should store data
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (block volmode)] disruptive [Disruptive] should test that a PV written before kubelet restart is readable after restart
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should access two volumes with different volume modes and retain data across pod recreation on a different node
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should access two volumes with different volume modes and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should access two volumes with the same volume mode and retain data across pod recreation on a different node
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should access two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should concurrently access the single volume from pods on different nodes
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode should fail to use a volume in a pod with mismatched mode [Slow]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode should not mount / map unused volumes in a pod
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (block volmode)] volumes should store data
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (default fs)] subPath should be able to unmount after the subpath directory is deleted
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (default fs)] subPath should fail if non-existent subpath is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (default fs)] subPath should fail if subpath directory is outside the volume [Slow]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (default fs)] subPath should fail if subpath file is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (default fs)] subPath should fail if subpath with backstepping is outside the volume [Slow]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (default fs)] subPath should support creating multiple subpath from same volumes [Slow]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directory
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (default fs)] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (default fs)] subPath should support non-existent path
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (default fs)] subPath should support restarting containers using directory as subpath [Slow]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (default fs)] subPath should support restarting containers using file as subpath [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (default fs)] subPath should unmount if pod is force deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (default fs)] subPath should unmount if pod is gracefully deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (default fs)] subPath should verify container cannot write to subpath readonly volumes [Slow]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (default fs)] volumeIO should write files of various sizes, verify size, validate content [Slow]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (default fs)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (default fs)] volumes should store data
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (ext3)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (ext3)] volumes should store data
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (ext4)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (ext4)] volumes should store data
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (filesystem volmode)] disruptive[Disruptive] Should test that pv written before kubelet restart is readable after restart.
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should fail to use a volume in a pod with mismatched mode [Slow]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (ntfs)][sig-windows] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (ntfs)][sig-windows] volumes should store data
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (xfs)][Slow] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: csi-hostpath] [Testpattern: Pre-provisioned PV (xfs)][Slow] volumes should store data
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand Verify if offline PVC expansion works
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand should resize volume when PVC is edited while pod is using it
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (block volmode)] disruptive[Disruptive] Should test that pv written before kubelet restart is readable after restart.
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (block volmode)] volume-expand should not allow expansion of pvcs without AllowVolumeExpansion property
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (block volmode)] volumeMode should fail in binding dynamic provisioned PV to PVC [Slow]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (block volmode)] volumeMode should fail to use a volume in a pod with mismatched mode [Slow]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (block volmode)] volumeMode should not mount / map unused volumes in a pod
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (block volmode)] volumes should store data
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand Verify if offline PVC expansion works
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand should resize volume when PVC is edited while pod is using it
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with mount options
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with pvc data source
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with snapshot data source [Feature:VolumeSnapshotDataSource]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (default fs)] subPath should be able to unmount after the subpath directory is deleted
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (default fs)] subPath should fail if non-existent subpath is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (default fs)] subPath should fail if subpath directory is outside the volume [Slow]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (default fs)] subPath should fail if subpath file is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (default fs)] subPath should fail if subpath with backstepping is outside the volume [Slow]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (default fs)] subPath should support creating multiple subpath from same volumes [Slow]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (default fs)] subPath should support existing directory
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (default fs)] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (default fs)] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (default fs)] subPath should support non-existent path
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (default fs)] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (default fs)] subPath should support restarting containers using directory as subpath [Slow]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (default fs)] subPath should support restarting containers using file as subpath [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (default fs)] subPath should unmount if pod is force deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (default fs)] subPath should unmount if pod is gracefully deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (default fs)] subPath should verify container cannot write to subpath readonly volumes [Slow]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (default fs)] volume-expand should not allow expansion of pvcs without AllowVolumeExpansion property
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (default fs)] volumeIO should write files of various sizes, verify size, validate content [Slow]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (default fs)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (default fs)] volumes should store data
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ext3)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ext3)] volumes should store data
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ext4)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ext4)] volumes should store data
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (filesystem volmode)] disruptive[Disruptive] Should test that pv written before kubelet restart is readable after restart.
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (filesystem volmode)] volumeLimits should support volume limits [Slow][Serial]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode should fail to use a volume in a pod with mismatched mode [Slow]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ntfs)][sig-windows] provisioning should provision storage with mount options
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ntfs)][sig-windows] provisioning should provision storage with pvc data source
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ntfs)][sig-windows] provisioning should provision storage with snapshot data source [Feature:VolumeSnapshotDataSource]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should be able to unmount after the subpath directory is deleted
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should fail if non-existent subpath is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should fail if subpath directory is outside the volume [Slow]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should fail if subpath file is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should fail if subpath with backstepping is outside the volume [Slow]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should support creating multiple subpath from same volumes [Slow]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should support existing directories when readOnly specified in the volumeSource
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should support existing directory
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should support non-existent path
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should support restarting containers using directory as subpath [Slow]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should support restarting containers using file as subpath [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should unmount if pod is force deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should unmount if pod is gracefully deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should verify container cannot write to subpath readonly volumes [Slow]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ntfs)][sig-windows] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (ntfs)][sig-windows] volumes should store data
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (xfs)][Slow] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic PV (xfs)][Slow] volumes should store data
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Dynamic Snapshot] snapshottable should create snapshot with defaults [Feature:VolumeSnapshotDataSource]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Inline-volume (default fs)] disruptive[Disruptive] Should test that pv written before kubelet restart is readable after restart.
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Inline-volume (default fs)] subPath should be able to unmount after the subpath directory is deleted
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Inline-volume (default fs)] subPath should fail if non-existent subpath is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Inline-volume (default fs)] subPath should fail if subpath directory is outside the volume [Slow]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Inline-volume (default fs)] subPath should fail if subpath file is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Inline-volume (default fs)] subPath should fail if subpath with backstepping is outside the volume [Slow]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Inline-volume (default fs)] subPath should support creating multiple subpath from same volumes [Slow]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Inline-volume (default fs)] subPath should support existing directories when readOnly specified in the volumeSource
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Inline-volume (default fs)] subPath should support existing directory
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Inline-volume (default fs)] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Inline-volume (default fs)] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Inline-volume (default fs)] subPath should support non-existent path
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Inline-volume (default fs)] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Inline-volume (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Inline-volume (default fs)] subPath should support restarting containers using directory as subpath [Slow]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Inline-volume (default fs)] subPath should support restarting containers using file as subpath [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Inline-volume (default fs)] subPath should unmount if pod is force deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Inline-volume (default fs)] subPath should unmount if pod is gracefully deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Inline-volume (default fs)] subPath should verify container cannot write to subpath readonly volumes [Slow]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Inline-volume (default fs)] volumeIO should write files of various sizes, verify size, validate content [Slow]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Inline-volume (default fs)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Inline-volume (default fs)] volumes should store data
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Inline-volume (ext3)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Inline-volume (ext3)] volumes should store data
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Inline-volume (ext4)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Inline-volume (ext4)] volumes should store data
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Inline-volume (ntfs)][sig-windows] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Inline-volume (ntfs)][sig-windows] volumes should store data
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Inline-volume (xfs)][Slow] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Inline-volume (xfs)][Slow] volumes should store data
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (block volmode)] disruptive[Disruptive] Should test that pv written before kubelet restart is readable after restart.
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode should fail to create pod by failing to mount volume [Slow]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode should fail to use a volume in a pod with mismatched mode [Slow]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode should not mount / map unused volumes in a pod
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (block volmode)] volumes should store data
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (default fs)] subPath should be able to unmount after the subpath directory is deleted
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (default fs)] subPath should fail if non-existent subpath is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (default fs)] subPath should fail if subpath directory is outside the volume [Slow]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (default fs)] subPath should fail if subpath file is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (default fs)] subPath should fail if subpath with backstepping is outside the volume [Slow]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (default fs)] subPath should support creating multiple subpath from same volumes [Slow]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directory
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (default fs)] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (default fs)] subPath should support non-existent path
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (default fs)] subPath should support restarting containers using directory as subpath [Slow]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (default fs)] subPath should support restarting containers using file as subpath [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (default fs)] subPath should unmount if pod is force deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (default fs)] subPath should unmount if pod is gracefully deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (default fs)] subPath should verify container cannot write to subpath readonly volumes [Slow]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (default fs)] volumeIO should write files of various sizes, verify size, validate content [Slow]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (default fs)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (default fs)] volumes should store data
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (ext3)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (ext3)] volumes should store data
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (ext4)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (ext4)] volumes should store data
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (filesystem volmode)] disruptive[Disruptive] Should test that pv written before kubelet restart is readable after restart.
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should fail to use a volume in a pod with mismatched mode [Slow]
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (ntfs)][sig-windows] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (ntfs)][sig-windows] volumes should store data
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (xfs)][Slow] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: Pre-provisioned PV (xfs)][Slow] volumes should store data
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: inline ephemeral CSI volume] ephemeral should create read-only inline ephemeral volume
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: inline ephemeral CSI volume] ephemeral should create read/write inline ephemeral volume
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: inline ephemeral CSI volume] ephemeral should support multiple inline ephemeral volumes
Kubernetes e2e suite [sig-storage] CSI Volumes [Driver: pd.csi.storage.gke.io][Serial] [Testpattern: inline ephemeral CSI volume] ephemeral should support two pods which share the same volume
Kubernetes e2e suite [sig-storage] CSI mock volume CSI volume limit information using mock driver should report attach limit when limit is bigger than 0 [Slow]
Kubernetes e2e suite [sig-storage] ConfigMap Should fail non-optional pod creation due to configMap object does not exist [Slow]
Kubernetes e2e suite [sig-storage] ConfigMap Should fail non-optional pod creation due to the key in the configMap object does not exist [Slow]
Kubernetes e2e suite [sig-storage] Detaching volumes should not work when mount is in progress [Slow]
Kubernetes e2e suite [sig-storage] Dynamic Provisioning DynamicProvisioner Default should be disabled by changing the default annotation [Serial] [Disruptive]
Kubernetes e2e suite [sig-storage] Dynamic Provisioning DynamicProvisioner Default should be disabled by removing the default annotation [Serial] [Disruptive]
Kubernetes e2e suite [sig-storage] Dynamic Provisioning DynamicProvisioner Default should create and delete default persistent volumes [Slow]
Kubernetes e2e suite [sig-storage] Dynamic Provisioning DynamicProvisioner External should let an external dynamic provisioner create and delete persistent volumes [Slow]
Kubernetes e2e suite [sig-storage] Dynamic Provisioning DynamicProvisioner [Slow] deletion should be idempotent
Kubernetes e2e suite [sig-storage] Dynamic Provisioning DynamicProvisioner [Slow] should not provision a volume in an unmanaged GCE zone.
Kubernetes e2e suite [sig-storage] Dynamic Provisioning DynamicProvisioner [Slow] should provision storage with different parameters
Kubernetes e2e suite [sig-storage] Dynamic Provisioning DynamicProvisioner [Slow] should provision storage with non-default reclaim policy Retain
Kubernetes e2e suite [sig-storage] Dynamic Provisioning DynamicProvisioner [Slow] should test that deleting a claim before the volume is provisioned deletes the volume.
Kubernetes e2e suite [sig-storage] Dynamic Provisioning DynamicProvisioner delayed binding [Slow] should create persistent volumes in the same zone as node after a pod mounting the claims is started
Kubernetes e2e suite [sig-storage] Dynamic Provisioning DynamicProvisioner delayed binding with allowedTopologies [Slow] should create persistent volumes in the same zone as specified in allowedTopologies after a pod mounting the claims is started
Kubernetes e2e suite [sig-storage] Dynamic Provisioning Invalid AWS KMS key should report an error and create no PV
Kubernetes e2e suite [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance]
Kubernetes e2e suite [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for git_repo [Serial] [Slow]
Kubernetes e2e suite [sig-storage] Flexvolumes should be mountable when attachable
Kubernetes e2e suite [sig-storage] Flexvolumes should be mountable when non-attachable
Kubernetes e2e suite [sig-storage] GCP Volumes GlusterFS should be mountable
Kubernetes e2e suite [sig-storage] GCP Volumes NFSv3 should be mountable for NFSv3
Kubernetes e2e suite [sig-storage] GCP Volumes NFSv4 should be mountable for NFSv4
Kubernetes e2e suite [sig-storage] GenericPersistentVolume[Disruptive] When kubelet restarts Should test that a file written to the mount before kubelet restart is readable after restart.
Kubernetes e2e suite [sig-storage] GenericPersistentVolume[Disruptive] When kubelet restarts Should test that a volume mounted to a pod that is deleted while the kubelet is down unmounts when the kubelet returns.
Kubernetes e2e suite [sig-storage] GenericPersistentVolume[Disruptive] When kubelet restarts Should test that a volume mounted to a pod that is force deleted while the kubelet is down unmounts when the kubelet returns.
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand Verify if offline PVC expansion works
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand should resize volume when PVC is edited while pod is using it
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (block volmode)] disruptive[Disruptive] Should test that pv written before kubelet restart is readable after restart.
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (block volmode)] volume-expand should not allow expansion of pvcs without AllowVolumeExpansion property
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (block volmode)] volumeMode should fail to use a volume in a pod with mismatched mode [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (block volmode)] volumeMode should not mount / map unused volumes in a pod
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (block volmode)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand Verify if offline PVC expansion works
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand should resize volume when PVC is edited while pod is using it
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with mount options
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with pvc data source
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with snapshot data source [Feature:VolumeSnapshotDataSource]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] subPath should be able to unmount after the subpath directory is deleted
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] subPath should fail if non-existent subpath is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] subPath should fail if subpath directory is outside the volume [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] subPath should fail if subpath file is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] subPath should fail if subpath with backstepping is outside the volume [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] subPath should support creating multiple subpath from same volumes [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] subPath should support existing directory
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] subPath should support non-existent path
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] subPath should support restarting containers using directory as subpath [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] subPath should support restarting containers using file as subpath [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] subPath should unmount if pod is force deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] subPath should unmount if pod is gracefully deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] subPath should verify container cannot write to subpath readonly volumes [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] volume-expand should not allow expansion of pvcs without AllowVolumeExpansion property
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] volumeIO should write files of various sizes, verify size, validate content [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (default fs)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ext3)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ext3)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ext4)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ext4)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (filesystem volmode)] disruptive[Disruptive] Should test that pv written before kubelet restart is readable after restart.
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (filesystem volmode)] volumeLimits should support volume limits [Slow][Serial]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode should fail to use a volume in a pod with mismatched mode [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ntfs)][sig-windows] provisioning should provision storage with mount options
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ntfs)][sig-windows] provisioning should provision storage with pvc data source
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ntfs)][sig-windows] provisioning should provision storage with snapshot data source [Feature:VolumeSnapshotDataSource]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should be able to unmount after the subpath directory is deleted
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should fail if non-existent subpath is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should fail if subpath directory is outside the volume [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should fail if subpath file is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should fail if subpath with backstepping is outside the volume [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should support creating multiple subpath from same volumes [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should support existing directories when readOnly specified in the volumeSource
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should support existing directory
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should support non-existent path
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should support restarting containers using directory as subpath [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should support restarting containers using file as subpath [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should unmount if pod is force deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should unmount if pod is gracefully deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should verify container cannot write to subpath readonly volumes [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ntfs)][sig-windows] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (ntfs)][sig-windows] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (xfs)][Slow] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Dynamic PV (xfs)][Slow] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Inline-volume (default fs)] disruptive[Disruptive] Should test that pv written before kubelet restart is readable after restart.
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Inline-volume (default fs)] subPath should be able to unmount after the subpath directory is deleted
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Inline-volume (default fs)] subPath should fail if non-existent subpath is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Inline-volume (default fs)] subPath should fail if subpath directory is outside the volume [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Inline-volume (default fs)] subPath should fail if subpath file is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Inline-volume (default fs)] subPath should fail if subpath with backstepping is outside the volume [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Inline-volume (default fs)] subPath should support creating multiple subpath from same volumes [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Inline-volume (default fs)] subPath should support existing directories when readOnly specified in the volumeSource
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Inline-volume (default fs)] subPath should support existing directory
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Inline-volume (default fs)] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Inline-volume (default fs)] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Inline-volume (default fs)] subPath should support non-existent path
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Inline-volume (default fs)] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Inline-volume (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Inline-volume (default fs)] subPath should support restarting containers using directory as subpath [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Inline-volume (default fs)] subPath should support restarting containers using file as subpath [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Inline-volume (default fs)] subPath should unmount if pod is force deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Inline-volume (default fs)] subPath should unmount if pod is gracefully deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Inline-volume (default fs)] subPath should verify container cannot write to subpath readonly volumes [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Inline-volume (default fs)] volumeIO should write files of various sizes, verify size, validate content [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Inline-volume (default fs)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Inline-volume (default fs)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Inline-volume (ext3)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Inline-volume (ext3)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Inline-volume (ext4)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Inline-volume (ext4)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Inline-volume (ntfs)][sig-windows] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Inline-volume (ntfs)][sig-windows] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Inline-volume (xfs)][Slow] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Inline-volume (xfs)][Slow] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (block volmode)] disruptive[Disruptive] Should test that pv written before kubelet restart is readable after restart.
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode should fail to use a volume in a pod with mismatched mode [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode should not mount / map unused volumes in a pod
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (block volmode)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (default fs)] subPath should be able to unmount after the subpath directory is deleted
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (default fs)] subPath should fail if non-existent subpath is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (default fs)] subPath should fail if subpath directory is outside the volume [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (default fs)] subPath should fail if subpath file is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (default fs)] subPath should fail if subpath with backstepping is outside the volume [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (default fs)] subPath should support creating multiple subpath from same volumes [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directory
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (default fs)] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (default fs)] subPath should support non-existent path
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (default fs)] subPath should support restarting containers using directory as subpath [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (default fs)] subPath should support restarting containers using file as subpath [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (default fs)] subPath should unmount if pod is force deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (default fs)] subPath should unmount if pod is gracefully deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (default fs)] subPath should verify container cannot write to subpath readonly volumes [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (default fs)] volumeIO should write files of various sizes, verify size, validate content [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (default fs)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (default fs)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (ext3)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (ext3)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (ext4)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (ext4)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (filesystem volmode)] disruptive[Disruptive] Should test that pv written before kubelet restart is readable after restart.
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should fail to use a volume in a pod with mismatched mode [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (ntfs)][sig-windows] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (ntfs)][sig-windows] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (xfs)][Slow] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: aws] [Testpattern: Pre-provisioned PV (xfs)][Slow] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand Verify if offline PVC expansion works
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand should resize volume when PVC is edited while pod is using it
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure] [Testpattern: Dynamic PV (block volmode)] disruptive[Disruptive] Should test that pv written before kubelet restart is readable after restart.
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure] [Testpattern: Dynamic PV (block volmode)] volume-expand should not allow expansion of pvcs without AllowVolumeExpansion property
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure] [Testpattern: Dynamic PV (block volmode)] volumeMode should fail to use a volume in a pod with mismatched mode [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure] [Testpattern: Dynamic PV (block volmode)] volumeMode should not mount / map unused volumes in a pod
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure] [Testpattern: Dynamic PV (block volmode)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand Verify if offline PVC expansion works
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure] [Testpattern: Dynamic PV (default fs)(allowExpansion)] volume-expand should resize volume when PVC is edited while pod is using it
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with mount options
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with pvc data source
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure] [Testpattern: Dynamic PV (default fs)] provisioning should provision storage with snapshot data source [Feature:VolumeSnapshotDataSource]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure] [Testpattern: Dynamic PV (default fs)] subPath should be able to unmount after the subpath directory is deleted
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure] [Testpattern: Dynamic PV (default fs)] subPath should fail if non-existent subpath is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure] [Testpattern: Dynamic PV (default fs)] subPath should fail if subpath directory is outside the volume [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure] [Testpattern: Dynamic PV (default fs)] subPath should fail if subpath file is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure] [Testpattern: Dynamic PV (default fs)] subPath should fail if subpath with backstepping is outside the volume [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure] [Testpattern: Dynamic PV (default fs)] subPath should support creating multiple subpath from same volumes [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure] [Testpattern: Dynamic PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure] [Testpattern: Dynamic PV (default fs)] subPath should support existing directory
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure] [Testpattern: Dynamic PV (default fs)] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure] [Testpattern: Dynamic PV (default fs)] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure] [Testpattern: Dynamic PV (default fs)] subPath should support non-existent path
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure] [Testpattern: Dynamic PV (default fs)] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure] [Testpattern: Dynamic PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure] [Testpattern: Dynamic PV (default fs)] subPath should support restarting containers using directory as subpath [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure] [Testpattern: Dynamic PV (default fs)] subPath should support restarting containers using file as subpath [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure] [Testpattern: Dynamic PV (default fs)] subPath should unmount if pod is force deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure] [Testpattern: Dynamic PV (default fs)] subPath should unmount if pod is gracefully deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure] [Testpattern: Dynamic PV (default fs)] subPath should verify container cannot write to subpath readonly volumes [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure] [Testpattern: Dynamic PV (default fs)] volume-expand should not allow expansion of pvcs without AllowVolumeExpansion property
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure] [Testpattern: Dynamic PV (default fs)] volumeIO should write files of various sizes, verify size, validate content [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure] [Testpattern: Dynamic PV (default fs)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure] [Testpattern: Dynamic PV (default fs)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure] [Testpattern: Dynamic PV (ext3)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure] [Testpattern: Dynamic PV (ext3)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure] [Testpattern: Dynamic PV (ext4)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure] [Testpattern: Dynamic PV (ext4)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure] [Testpattern: Dynamic PV (filesystem volmode)] disruptive[Disruptive] Should test that pv written before kubelet restart is readable after restart.
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure] [Testpattern: Dynamic PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure] [Testpattern: Dynamic PV (filesystem volmode)] volumeLimits should support volume limits [Slow][Serial]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode should fail to use a volume in a pod with mismatched mode [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure] [Testpattern: Dynamic PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure] [Testpattern: Dynamic PV (ntfs)][sig-windows] provisioning should provision storage with mount options
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure] [Testpattern: Dynamic PV (ntfs)][sig-windows] provisioning should provision storage with pvc data source
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure] [Testpattern: Dynamic PV (ntfs)][sig-windows] provisioning should provision storage with snapshot data source [Feature:VolumeSnapshotDataSource]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should be able to unmount after the subpath directory is deleted
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should fail if non-existent subpath is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should fail if subpath directory is outside the volume [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should fail if subpath file is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should fail if subpath with backstepping is outside the volume [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should support creating multiple subpath from same volumes [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should support existing directories when readOnly specified in the volumeSource
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should support existing directory
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should support non-existent path
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should support restarting containers using directory as subpath [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should support restarting containers using file as subpath [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should unmount if pod is force deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should unmount if pod is gracefully deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure] [Testpattern: Dynamic PV (ntfs)][sig-windows] subPath should verify container cannot write to subpath readonly volumes [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure] [Testpattern: Dynamic PV (ntfs)][sig-windows] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure] [Testpattern: Dynamic PV (ntfs)][sig-windows] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure] [Testpattern: Dynamic PV (xfs)][Slow] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure] [Testpattern: Dynamic PV (xfs)][Slow] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure] [Testpattern: Inline-volume (default fs)] disruptive[Disruptive] Should test that pv written before kubelet restart is readable after restart.
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure] [Testpattern: Inline-volume (default fs)] subPath should be able to unmount after the subpath directory is deleted
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure] [Testpattern: Inline-volume (default fs)] subPath should fail if non-existent subpath is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure] [Testpattern: Inline-volume (default fs)] subPath should fail if subpath directory is outside the volume [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure] [Testpattern: Inline-volume (default fs)] subPath should fail if subpath file is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure] [Testpattern: Inline-volume (default fs)] subPath should fail if subpath with backstepping is outside the volume [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure] [Testpattern: Inline-volume (default fs)] subPath should support creating multiple subpath from same volumes [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure] [Testpattern: Inline-volume (default fs)] subPath should support existing directories when readOnly specified in the volumeSource
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure] [Testpattern: Inline-volume (default fs)] subPath should support existing directory
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure] [Testpattern: Inline-volume (default fs)] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure] [Testpattern: Inline-volume (default fs)] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure] [Testpattern: Inline-volume (default fs)] subPath should support non-existent path
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure] [Testpattern: Inline-volume (default fs)] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure] [Testpattern: Inline-volume (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure] [Testpattern: Inline-volume (default fs)] subPath should support restarting containers using directory as subpath [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure] [Testpattern: Inline-volume (default fs)] subPath should support restarting containers using file as subpath [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure] [Testpattern: Inline-volume (default fs)] subPath should unmount if pod is force deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure] [Testpattern: Inline-volume (default fs)] subPath should unmount if pod is gracefully deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure] [Testpattern: Inline-volume (default fs)] subPath should verify container cannot write to subpath readonly volumes [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure] [Testpattern: Inline-volume (default fs)] volumeIO should write files of various sizes, verify size, validate content [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure] [Testpattern: Inline-volume (default fs)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure] [Testpattern: Inline-volume (default fs)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure] [Testpattern: Inline-volume (ext3)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure] [Testpattern: Inline-volume (ext3)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure] [Testpattern: Inline-volume (ext4)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure] [Testpattern: Inline-volume (ext4)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure] [Testpattern: Inline-volume (ntfs)][sig-windows] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure] [Testpattern: Inline-volume (ntfs)][sig-windows] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure] [Testpattern: Inline-volume (xfs)][Slow] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure] [Testpattern: Inline-volume (xfs)][Slow] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure] [Testpattern: Pre-provisioned PV (block volmode)] disruptive[Disruptive] Should test that pv written before kubelet restart is readable after restart.
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure] [Testpattern: Pre-provisioned PV (block volmode)] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode should fail to use a volume in a pod with mismatched mode [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure] [Testpattern: Pre-provisioned PV (block volmode)] volumeMode should not mount / map unused volumes in a pod
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure] [Testpattern: Pre-provisioned PV (block volmode)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure] [Testpattern: Pre-provisioned PV (default fs)] subPath should be able to unmount after the subpath directory is deleted
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure] [Testpattern: Pre-provisioned PV (default fs)] subPath should fail if non-existent subpath is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure] [Testpattern: Pre-provisioned PV (default fs)] subPath should fail if subpath directory is outside the volume [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure] [Testpattern: Pre-provisioned PV (default fs)] subPath should fail if subpath file is outside the volume [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure] [Testpattern: Pre-provisioned PV (default fs)] subPath should fail if subpath with backstepping is outside the volume [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure] [Testpattern: Pre-provisioned PV (default fs)] subPath should support creating multiple subpath from same volumes [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directories when readOnly specified in the volumeSource
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing directory
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure] [Testpattern: Pre-provisioned PV (default fs)] subPath should support existing single file [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure] [Testpattern: Pre-provisioned PV (default fs)] subPath should support file as subpath [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure] [Testpattern: Pre-provisioned PV (default fs)] subPath should support non-existent path
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly directory specified in the volumeMount
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure] [Testpattern: Pre-provisioned PV (default fs)] subPath should support readOnly file specified in the volumeMount [LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure] [Testpattern: Pre-provisioned PV (default fs)] subPath should support restarting containers using directory as subpath [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure] [Testpattern: Pre-provisioned PV (default fs)] subPath should support restarting containers using file as subpath [Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure] [Testpattern: Pre-provisioned PV (default fs)] subPath should unmount if pod is force deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure] [Testpattern: Pre-provisioned PV (default fs)] subPath should unmount if pod is gracefully deleted while kubelet is down [Disruptive][Slow][LinuxOnly]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure] [Testpattern: Pre-provisioned PV (default fs)] subPath should verify container cannot write to subpath readonly volumes [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure] [Testpattern: Pre-provisioned PV (default fs)] volumeIO should write files of various sizes, verify size, validate content [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure] [Testpattern: Pre-provisioned PV (default fs)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure] [Testpattern: Pre-provisioned PV (default fs)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure] [Testpattern: Pre-provisioned PV (ext3)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure] [Testpattern: Pre-provisioned PV (ext3)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure] [Testpattern: Pre-provisioned PV (ext4)] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure] [Testpattern: Pre-provisioned PV (ext4)] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure] [Testpattern: Pre-provisioned PV (filesystem volmode)] disruptive[Disruptive] Should test that pv written before kubelet restart is readable after restart.
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure] [Testpattern: Pre-provisioned PV (filesystem volmode)] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should fail to use a volume in a pod with mismatched mode [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure] [Testpattern: Pre-provisioned PV (filesystem volmode)] volumeMode should not mount / map unused volumes in a pod
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure] [Testpattern: Pre-provisioned PV (ntfs)][sig-windows] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure] [Testpattern: Pre-provisioned PV (ntfs)][sig-windows] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure] [Testpattern: Pre-provisioned PV (xfs)][Slow] volumes should allow exec of files on the volume
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: azure] [Testpattern: Pre-provisioned PV (xfs)][Slow] volumes should store data
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand Verify if offline PVC expansion works
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes] [Testpattern: Dynamic PV (block volmode)(allowExpansion)] volume-expand should resize volume when PVC is edited while pod is using it
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes] [Testpattern: Dynamic PV (block volmode)] disruptive[Disruptive] Should test that pv written before kubelet restart is readable after restart.
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with different volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should access to two volumes with the same volume mode and retain data across pod recreation on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should concurrently access the single volume from pods on different node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes] [Testpattern: Dynamic PV (block volmode)] multiVolume [Slow] should concurrently access the single volume from pods on the same node
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes] [Testpattern: Dynamic PV (block volmode)] volume-expand should not allow expansion of pvcs without AllowVolumeExpansion property
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes] [Testpattern: Dynamic PV (block volmode)] volumeMode should fail in binding dynamic provisioned PV to PVC [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes] [Testpattern: Dynamic PV (block volmode)] volumeMode should fail to use a volume in a pod with mismatched mode [Slow]
Kubernetes e2e suite [sig-storage] In-tree Volumes [Driver: ceph][Feature:Volumes] [Testpattern: Dynamic PV (block volmode)] volumeMode should not mount / map unused volumes in a pod