go run hack/e2e.go -v --test --test_args='--ginkgo.focus=Kubernetes\se2e\ssuite\s\[SynchronizedBeforeSuite\]$'
test/e2e/e2e.go:77
from junit_01.xml
test/e2e/e2e.go:249
k8s.io/kubernetes/test/e2e.setupSuite()
	test/e2e/e2e.go:249 +0x4de
k8s.io/kubernetes/test/e2e.glob..func1()
	test/e2e/e2e.go:81 +0x8f
reflect.Value.call({0x66a9bc0?, 0x78952d0?, 0x13?}, {0x75b6e72, 0x4}, {0xc0000c8f20, 0x0, 0x0?})
	/usr/local/go/src/reflect/value.go:584 +0x8c5
reflect.Value.Call({0x66a9bc0?, 0x78952d0?, 0x26cd5ed?}, {0xc0004b6f20?, 0x265bb67?, 0xc0004b6f20?})
	/usr/local/go/src/reflect/value.go:368 +0xbc
from junit_01.xml
[SynchronizedBeforeSuite] TOP-LEVEL
test/e2e/e2e.go:77
Nov 25 19:57:24.655: INFO: cluster-control-plane-node-image: cos-97-16919-103-16
Nov 25 19:57:24.655: INFO: cluster-worker-node-image: cos-97-16919-103-16
Nov 25 19:57:24.656: INFO: >>> kubeConfig: /workspace/.kube/config
Nov 25 19:57:24.660: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Nov 25 19:57:24.832: INFO: Waiting up to 10m0s for all pods (need at least 8) in namespace 'kube-system' to be running and ready
Nov 25 19:58:19.550: INFO: Encountered non-retryable error while listing replication controllers in namespace kube-system: Get "https://34.127.41.66/api/v1/namespaces/kube-system/replicationcontrollers": net/http: TLS handshake timeout - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=3, ErrCode=NO_ERROR, debug=""
STEP: Collecting events from namespace "kube-system". 11/25/22 19:58:19.55
STEP: Found 183 events. 11/25/22 19:58:26.701
Nov 25 19:58:26.701: INFO: At 2022-11-25 19:54:40 +0000 UTC - event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/kube-controller-manager-amd64:v1.27.0-alpha.0.48_6bdda2da160043" already present on machine
Nov 25 19:58:26.701: INFO: At 2022-11-25 19:54:40 +0000 UTC - event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container kube-controller-manager
Nov 25 19:58:26.701: INFO: At 2022-11-25 19:54:40 +0000 UTC - event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/kube-scheduler-amd64:v1.27.0-alpha.0.48_6bdda2da160043" already present on machine
Nov 25 19:58:26.701: INFO: At 2022-11-25 19:54:40 +0000 UTC - event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container kube-scheduler
Nov 25 19:58:26.701: INFO: At 2022-11-25 19:54:44 +0000 UTC - event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container kube-controller-manager
Nov 25 19:58:26.701: INFO: At 2022-11-25 19:54:46 +0000 UTC - event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container etcd-container
Nov 25 19:58:26.701: INFO: At 2022-11-25 19:54:47 +0000 UTC - event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container etcd-container
Nov 25 19:58:26.701: INFO: At 2022-11-25 19:54:47 +0000 UTC - event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container kube-scheduler
Nov 25 19:58:26.701: INFO: At 2022-11-25 19:54:56 +0000 UTC - event for kube-controller-manager: {kube-controller-manager } LeaderElection: bootstrap-e2e-master_606b3498-09bf-4ebf-b049-c9dfc544f180 became leader
Nov 25 19:58:26.701: INFO: At 2022-11-25 19:54:57 +0000 UTC - event for kube-scheduler: {default-scheduler } LeaderElection: bootstrap-e2e-master_39dd874c-c447-4c78-a338-f8b147d8fee8 became leader
Nov 25 19:58:26.701: INFO: At 2022-11-25 19:54:59 +0000 UTC - event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Started: Started container l7-lb-controller
Nov 25 19:58:26.701: INFO: At 2022-11-25 19:54:59 +0000 UTC - event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Created: Created container l7-lb-controller
Nov 25 19:58:26.701: INFO: At 2022-11-25 19:55:02 +0000 UTC - event for l7-lb-controller-bootstrap-e2e-master: {kubelet
bootstrap-e2e-master} Pulled: Container image "gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64:v1.20.1" already present on machine Nov 25 19:58:26.701: INFO: At 2022-11-25 19:55:03 +0000 UTC - event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} BackOff: Back-off restarting failed container l7-lb-controller in pod l7-lb-controller-bootstrap-e2e-master_kube-system(f922c87738da4b787ed79e8be7ae0573) Nov 25 19:58:26.701: INFO: At 2022-11-25 19:55:13 +0000 UTC - event for coredns: {deployment-controller } ScalingReplicaSet: Scaled up replica set coredns-6d97d5ddb to 1 Nov 25 19:58:26.701: INFO: At 2022-11-25 19:55:13 +0000 UTC - event for coredns-6d97d5ddb: {replicaset-controller } FailedCreate: Error creating: insufficient quota to match these scopes: [{PriorityClass In [system-node-critical system-cluster-critical]}] Nov 25 19:58:26.701: INFO: At 2022-11-25 19:55:13 +0000 UTC - event for kube-dns-autoscaler: {deployment-controller } ScalingReplicaSet: Scaled up replica set kube-dns-autoscaler-5f6455f985 to 1 Nov 25 19:58:26.701: INFO: At 2022-11-25 19:55:13 +0000 UTC - event for kube-dns-autoscaler-5f6455f985: {replicaset-controller } FailedCreate: Error creating: pods "kube-dns-autoscaler-5f6455f985-" is forbidden: error looking up service account kube-system/kube-dns-autoscaler: serviceaccount "kube-dns-autoscaler" not found Nov 25 19:58:26.701: INFO: At 2022-11-25 19:55:13 +0000 UTC - event for l7-default-backend: {deployment-controller } ScalingReplicaSet: Scaled up replica set l7-default-backend-8549d69d99 to 1 Nov 25 19:58:26.701: INFO: At 2022-11-25 19:55:13 +0000 UTC - event for l7-default-backend-8549d69d99: {replicaset-controller } SuccessfulCreate: Created pod: l7-default-backend-8549d69d99-nvg98 Nov 25 19:58:26.701: INFO: At 2022-11-25 19:55:13 +0000 UTC - event for l7-default-backend-8549d69d99-nvg98: {default-scheduler } FailedScheduling: no nodes available to schedule pods Nov 25 19:58:26.701: INFO: At 2022-11-25 19:55:16 +0000 UTC - event for kube-dns-autoscaler-5f6455f985: {replicaset-controller } SuccessfulCreate: Created pod: kube-dns-autoscaler-5f6455f985-s5vgm Nov 25 19:58:26.701: INFO: At 2022-11-25 19:55:16 +0000 UTC - event for kube-dns-autoscaler-5f6455f985-s5vgm: {default-scheduler } FailedScheduling: no nodes available to schedule pods Nov 25 19:58:26.701: INFO: At 2022-11-25 19:55:16 +0000 UTC - event for metrics-server-v0.5.2: {deployment-controller } ScalingReplicaSet: Scaled up replica set metrics-server-v0.5.2-6764bf875c to 1 Nov 25 19:58:26.701: INFO: At 2022-11-25 19:55:16 +0000 UTC - event for metrics-server-v0.5.2-6764bf875c: {replicaset-controller } SuccessfulCreate: Created pod: metrics-server-v0.5.2-6764bf875c-fr576 Nov 25 19:58:26.701: INFO: At 2022-11-25 19:55:16 +0000 UTC - event for metrics-server-v0.5.2-6764bf875c-fr576: {default-scheduler } FailedScheduling: no nodes available to schedule pods Nov 25 19:58:26.701: INFO: At 2022-11-25 19:55:16 +0000 UTC - event for volume-snapshot-controller: {statefulset-controller } SuccessfulCreate: create Pod volume-snapshot-controller-0 in StatefulSet volume-snapshot-controller successful Nov 25 19:58:26.701: INFO: At 2022-11-25 19:55:16 +0000 UTC - event for volume-snapshot-controller-0: {default-scheduler } FailedScheduling: no nodes available to schedule pods Nov 25 19:58:26.701: INFO: At 2022-11-25 19:55:18 +0000 UTC - event for coredns-6d97d5ddb: {replicaset-controller } SuccessfulCreate: Created pod: coredns-6d97d5ddb-j7sb8 Nov 25 19:58:26.701: INFO: At 2022-11-25 19:55:18 
+0000 UTC - event for coredns-6d97d5ddb-j7sb8: {default-scheduler } FailedScheduling: no nodes available to schedule pods Nov 25 19:58:26.701: INFO: At 2022-11-25 19:55:19 +0000 UTC - event for metadata-proxy-v0.1: {daemonset-controller } SuccessfulCreate: Created pod: metadata-proxy-v0.1-zl6p2 Nov 25 19:58:26.701: INFO: At 2022-11-25 19:55:19 +0000 UTC - event for metadata-proxy-v0.1: {daemonset-controller } SuccessfulCreate: Created pod: metadata-proxy-v0.1-prrpq Nov 25 19:58:26.701: INFO: At 2022-11-25 19:55:19 +0000 UTC - event for metadata-proxy-v0.1-prrpq: {default-scheduler } Scheduled: Successfully assigned kube-system/metadata-proxy-v0.1-prrpq to bootstrap-e2e-minion-group-blng Nov 25 19:58:26.701: INFO: At 2022-11-25 19:55:19 +0000 UTC - event for metadata-proxy-v0.1-zl6p2: {default-scheduler } Scheduled: Successfully assigned kube-system/metadata-proxy-v0.1-zl6p2 to bootstrap-e2e-master Nov 25 19:58:26.701: INFO: At 2022-11-25 19:55:20 +0000 UTC - event for coredns-6d97d5ddb-j7sb8: {default-scheduler } FailedScheduling: 0/2 nodes are available: 1 node(s) had untolerated taint {node-role.kubernetes.io/master: }, 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }, 1 node(s) were unschedulable. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling.. Nov 25 19:58:26.701: INFO: At 2022-11-25 19:55:20 +0000 UTC - event for kube-dns-autoscaler-5f6455f985-s5vgm: {default-scheduler } FailedScheduling: 0/2 nodes are available: 1 node(s) had untolerated taint {node-role.kubernetes.io/master: }, 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }, 1 node(s) were unschedulable. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling.. Nov 25 19:58:26.701: INFO: At 2022-11-25 19:55:20 +0000 UTC - event for kube-proxy-bootstrap-e2e-minion-group-blng: {kubelet bootstrap-e2e-minion-group-blng} Started: Started container kube-proxy Nov 25 19:58:26.701: INFO: At 2022-11-25 19:55:20 +0000 UTC - event for kube-proxy-bootstrap-e2e-minion-group-blng: {kubelet bootstrap-e2e-minion-group-blng} Created: Created container kube-proxy Nov 25 19:58:26.701: INFO: At 2022-11-25 19:55:20 +0000 UTC - event for kube-proxy-bootstrap-e2e-minion-group-blng: {kubelet bootstrap-e2e-minion-group-blng} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.0.48_6bdda2da160043" already present on machine Nov 25 19:58:26.701: INFO: At 2022-11-25 19:55:20 +0000 UTC - event for l7-default-backend-8549d69d99-nvg98: {default-scheduler } FailedScheduling: 0/2 nodes are available: 1 node(s) had untolerated taint {node-role.kubernetes.io/master: }, 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }, 1 node(s) were unschedulable. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling.. 
Nov 25 19:58:26.701: INFO: At 2022-11-25 19:55:20 +0000 UTC - event for metadata-proxy-v0.1: {daemonset-controller } SuccessfulCreate: Created pod: metadata-proxy-v0.1-vmkwn Nov 25 19:58:26.701: INFO: At 2022-11-25 19:55:20 +0000 UTC - event for metadata-proxy-v0.1: {daemonset-controller } SuccessfulCreate: Created pod: metadata-proxy-v0.1-cdhvh Nov 25 19:58:26.701: INFO: At 2022-11-25 19:55:20 +0000 UTC - event for metadata-proxy-v0.1-cdhvh: {default-scheduler } Scheduled: Successfully assigned kube-system/metadata-proxy-v0.1-cdhvh to bootstrap-e2e-minion-group-lmt7 Nov 25 19:58:26.701: INFO: At 2022-11-25 19:55:20 +0000 UTC - event for metadata-proxy-v0.1-vmkwn: {default-scheduler } Scheduled: Successfully assigned kube-system/metadata-proxy-v0.1-vmkwn to bootstrap-e2e-minion-group-2zvh Nov 25 19:58:26.701: INFO: At 2022-11-25 19:55:20 +0000 UTC - event for metrics-server-v0.5.2-6764bf875c-fr576: {default-scheduler } FailedScheduling: 0/2 nodes are available: 1 node(s) had untolerated taint {node-role.kubernetes.io/master: }, 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }, 1 node(s) were unschedulable. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling.. Nov 25 19:58:26.701: INFO: At 2022-11-25 19:55:20 +0000 UTC - event for volume-snapshot-controller-0: {default-scheduler } FailedScheduling: 0/2 nodes are available: 1 node(s) had untolerated taint {node-role.kubernetes.io/master: }, 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }, 1 node(s) were unschedulable. preemption: 0/2 nodes are available: 2 Preemption is not helpful for scheduling.. Nov 25 19:58:26.701: INFO: At 2022-11-25 19:55:21 +0000 UTC - event for ingress-gce-lock: {loadbalancer-controller } LeaderElection: bootstrap-e2e-master_7b49e became leader Nov 25 19:58:26.701: INFO: At 2022-11-25 19:55:21 +0000 UTC - event for kube-proxy-bootstrap-e2e-minion-group-blng: {kubelet bootstrap-e2e-minion-group-blng} Killing: Stopping container kube-proxy Nov 25 19:58:26.701: INFO: At 2022-11-25 19:55:21 +0000 UTC - event for metadata-proxy-v0.1-prrpq: {kubelet bootstrap-e2e-minion-group-blng} Pulling: Pulling image "registry.k8s.io/metadata-proxy:v0.1.12" Nov 25 19:58:26.701: INFO: At 2022-11-25 19:55:22 +0000 UTC - event for kube-proxy-bootstrap-e2e-minion-group-2zvh: {kubelet bootstrap-e2e-minion-group-2zvh} Killing: Stopping container kube-proxy Nov 25 19:58:26.701: INFO: At 2022-11-25 19:55:22 +0000 UTC - event for kube-proxy-bootstrap-e2e-minion-group-2zvh: {kubelet bootstrap-e2e-minion-group-2zvh} Started: Started container kube-proxy Nov 25 19:58:26.701: INFO: At 2022-11-25 19:55:22 +0000 UTC - event for kube-proxy-bootstrap-e2e-minion-group-2zvh: {kubelet bootstrap-e2e-minion-group-2zvh} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.0.48_6bdda2da160043" already present on machine Nov 25 19:58:26.701: INFO: At 2022-11-25 19:55:22 +0000 UTC - event for kube-proxy-bootstrap-e2e-minion-group-2zvh: {kubelet bootstrap-e2e-minion-group-2zvh} Created: Created container kube-proxy Nov 25 19:58:26.701: INFO: At 2022-11-25 19:55:22 +0000 UTC - event for kube-proxy-bootstrap-e2e-minion-group-blng: {kubelet bootstrap-e2e-minion-group-blng} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Nov 25 19:58:26.701: INFO: At 2022-11-25 19:55:22 +0000 UTC - event for kube-proxy-bootstrap-e2e-minion-group-lmt7: {kubelet bootstrap-e2e-minion-group-lmt7} Started: Started container kube-proxy Nov 25 19:58:26.701: INFO: At 2022-11-25 19:55:22 +0000 UTC - event for kube-proxy-bootstrap-e2e-minion-group-lmt7: {kubelet bootstrap-e2e-minion-group-lmt7} Killing: Stopping container kube-proxy Nov 25 19:58:26.701: INFO: At 2022-11-25 19:55:22 +0000 UTC - event for kube-proxy-bootstrap-e2e-minion-group-lmt7: {kubelet bootstrap-e2e-minion-group-lmt7} Pulled: Container image "registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.0.48_6bdda2da160043" already present on machine Nov 25 19:58:26.701: INFO: At 2022-11-25 19:55:22 +0000 UTC - event for kube-proxy-bootstrap-e2e-minion-group-lmt7: {kubelet bootstrap-e2e-minion-group-lmt7} Created: Created container kube-proxy Nov 25 19:58:26.701: INFO: At 2022-11-25 19:55:22 +0000 UTC - event for metadata-proxy-v0.1-cdhvh: {kubelet bootstrap-e2e-minion-group-lmt7} Pulling: Pulling image "registry.k8s.io/metadata-proxy:v0.1.12" Nov 25 19:58:26.701: INFO: At 2022-11-25 19:55:22 +0000 UTC - event for metadata-proxy-v0.1-prrpq: {kubelet bootstrap-e2e-minion-group-blng} Pulling: Pulling image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" Nov 25 19:58:26.701: INFO: At 2022-11-25 19:55:22 +0000 UTC - event for metadata-proxy-v0.1-prrpq: {kubelet bootstrap-e2e-minion-group-blng} Pulled: Successfully pulled image "registry.k8s.io/metadata-proxy:v0.1.12" in 667.398506ms (667.438218ms including waiting) Nov 25 19:58:26.701: INFO: At 2022-11-25 19:55:22 +0000 UTC - event for metadata-proxy-v0.1-prrpq: {kubelet bootstrap-e2e-minion-group-blng} Created: Created container metadata-proxy Nov 25 19:58:26.701: INFO: At 2022-11-25 19:55:22 +0000 UTC - event for metadata-proxy-v0.1-prrpq: {kubelet bootstrap-e2e-minion-group-blng} Started: Started container metadata-proxy Nov 25 19:58:26.701: INFO: At 2022-11-25 19:55:22 +0000 UTC - event for metadata-proxy-v0.1-vmkwn: {kubelet bootstrap-e2e-minion-group-2zvh} Pulling: Pulling image "registry.k8s.io/metadata-proxy:v0.1.12" Nov 25 19:58:26.701: INFO: At 2022-11-25 19:55:23 +0000 UTC - event for kube-proxy-bootstrap-e2e-minion-group-2zvh: {kubelet bootstrap-e2e-minion-group-2zvh} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Nov 25 19:58:26.701: INFO: At 2022-11-25 19:55:23 +0000 UTC - event for kube-proxy-bootstrap-e2e-minion-group-lmt7: {kubelet bootstrap-e2e-minion-group-lmt7} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Nov 25 19:58:26.701: INFO: At 2022-11-25 19:55:23 +0000 UTC - event for metadata-proxy-v0.1-cdhvh: {kubelet bootstrap-e2e-minion-group-lmt7} Pulling: Pulling image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" Nov 25 19:58:26.701: INFO: At 2022-11-25 19:55:23 +0000 UTC - event for metadata-proxy-v0.1-cdhvh: {kubelet bootstrap-e2e-minion-group-lmt7} Started: Started container metadata-proxy Nov 25 19:58:26.701: INFO: At 2022-11-25 19:55:23 +0000 UTC - event for metadata-proxy-v0.1-cdhvh: {kubelet bootstrap-e2e-minion-group-lmt7} Created: Created container metadata-proxy Nov 25 19:58:26.701: INFO: At 2022-11-25 19:55:23 +0000 UTC - event for metadata-proxy-v0.1-cdhvh: {kubelet bootstrap-e2e-minion-group-lmt7} Pulled: Successfully pulled image "registry.k8s.io/metadata-proxy:v0.1.12" in 710.711929ms (710.721225ms including waiting) Nov 25 19:58:26.701: INFO: At 2022-11-25 19:55:23 +0000 UTC - event for metadata-proxy-v0.1-vmkwn: {kubelet bootstrap-e2e-minion-group-2zvh} Created: Created container metadata-proxy Nov 25 19:58:26.701: INFO: At 2022-11-25 19:55:23 +0000 UTC - event for metadata-proxy-v0.1-vmkwn: {kubelet bootstrap-e2e-minion-group-2zvh} Pulled: Successfully pulled image "registry.k8s.io/metadata-proxy:v0.1.12" in 706.984038ms (706.995376ms including waiting) Nov 25 19:58:26.701: INFO: At 2022-11-25 19:55:23 +0000 UTC - event for metadata-proxy-v0.1-vmkwn: {kubelet bootstrap-e2e-minion-group-2zvh} Pulling: Pulling image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" Nov 25 19:58:26.701: INFO: At 2022-11-25 19:55:23 +0000 UTC - event for metadata-proxy-v0.1-vmkwn: {kubelet bootstrap-e2e-minion-group-2zvh} Started: Started container metadata-proxy Nov 25 19:58:26.701: INFO: At 2022-11-25 19:55:24 +0000 UTC - event for metadata-proxy-v0.1-prrpq: {kubelet bootstrap-e2e-minion-group-blng} Created: Created container prometheus-to-sd-exporter Nov 25 19:58:26.701: INFO: At 2022-11-25 19:55:24 +0000 UTC - event for metadata-proxy-v0.1-prrpq: {kubelet bootstrap-e2e-minion-group-blng} Pulled: Successfully pulled image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" in 1.769803068s (1.769823298s including waiting) Nov 25 19:58:26.701: INFO: At 2022-11-25 19:55:24 +0000 UTC - event for metadata-proxy-v0.1-prrpq: {kubelet bootstrap-e2e-minion-group-blng} Started: Started container prometheus-to-sd-exporter Nov 25 19:58:26.701: INFO: At 2022-11-25 19:55:25 +0000 UTC - event for coredns-6d97d5ddb-j7sb8: {default-scheduler } Scheduled: Successfully assigned kube-system/coredns-6d97d5ddb-j7sb8 to bootstrap-e2e-minion-group-blng Nov 25 19:58:26.701: INFO: At 2022-11-25 19:55:25 +0000 UTC - event for konnectivity-agent: {daemonset-controller } SuccessfulCreate: Created pod: konnectivity-agent-qv9km Nov 25 19:58:26.701: INFO: At 2022-11-25 19:55:25 +0000 UTC - event for konnectivity-agent-qv9km: {default-scheduler } Scheduled: Successfully assigned kube-system/konnectivity-agent-qv9km to bootstrap-e2e-minion-group-blng Nov 25 19:58:26.701: INFO: At 2022-11-25 19:55:25 +0000 UTC - event for kube-dns-autoscaler-5f6455f985-s5vgm: {default-scheduler } Scheduled: Successfully assigned kube-system/kube-dns-autoscaler-5f6455f985-s5vgm to bootstrap-e2e-minion-group-blng Nov 25 19:58:26.701: INFO: At 2022-11-25 19:55:25 +0000 UTC - event for l7-default-backend-8549d69d99-nvg98: {default-scheduler } Scheduled: Successfully assigned kube-system/l7-default-backend-8549d69d99-nvg98 to bootstrap-e2e-minion-group-blng Nov 25 19:58:26.701: INFO: At 2022-11-25 19:55:25 +0000 UTC - event for metadata-proxy-v0.1-cdhvh: {kubelet 
bootstrap-e2e-minion-group-lmt7} Created: Created container prometheus-to-sd-exporter Nov 25 19:58:26.701: INFO: At 2022-11-25 19:55:25 +0000 UTC - event for metadata-proxy-v0.1-cdhvh: {kubelet bootstrap-e2e-minion-group-lmt7} Started: Started container prometheus-to-sd-exporter Nov 25 19:58:26.701: INFO: At 2022-11-25 19:55:25 +0000 UTC - event for metadata-proxy-v0.1-cdhvh: {kubelet bootstrap-e2e-minion-group-lmt7} Pulled: Successfully pulled image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" in 1.939401033s (1.9394112s including waiting) Nov 25 19:58:26.701: INFO: At 2022-11-25 19:55:25 +0000 UTC - event for metadata-proxy-v0.1-vmkwn: {kubelet bootstrap-e2e-minion-group-2zvh} Created: Created container prometheus-to-sd-exporter Nov 25 19:58:26.701: INFO: At 2022-11-25 19:55:25 +0000 UTC - event for metadata-proxy-v0.1-vmkwn: {kubelet bootstrap-e2e-minion-group-2zvh} Started: Started container prometheus-to-sd-exporter Nov 25 19:58:26.701: INFO: At 2022-11-25 19:55:25 +0000 UTC - event for metadata-proxy-v0.1-vmkwn: {kubelet bootstrap-e2e-minion-group-2zvh} Pulled: Successfully pulled image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" in 1.909547394s (1.909565047s including waiting) Nov 25 19:58:26.701: INFO: At 2022-11-25 19:55:25 +0000 UTC - event for metrics-server-v0.5.2-6764bf875c-fr576: {default-scheduler } Scheduled: Successfully assigned kube-system/metrics-server-v0.5.2-6764bf875c-fr576 to bootstrap-e2e-minion-group-blng Nov 25 19:58:26.701: INFO: At 2022-11-25 19:55:25 +0000 UTC - event for volume-snapshot-controller-0: {default-scheduler } Scheduled: Successfully assigned kube-system/volume-snapshot-controller-0 to bootstrap-e2e-minion-group-blng Nov 25 19:58:26.701: INFO: At 2022-11-25 19:55:26 +0000 UTC - event for metrics-server-v0.5.2-6764bf875c-fr576: {kubelet bootstrap-e2e-minion-group-blng} Pulling: Pulling image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" Nov 25 19:58:26.701: INFO: At 2022-11-25 19:55:27 +0000 UTC - event for l7-default-backend-8549d69d99-nvg98: {kubelet bootstrap-e2e-minion-group-blng} Pulling: Pulling image "registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11" Nov 25 19:58:26.701: INFO: At 2022-11-25 19:55:28 +0000 UTC - event for coredns-6d97d5ddb-j7sb8: {kubelet bootstrap-e2e-minion-group-blng} Pulling: Pulling image "registry.k8s.io/coredns/coredns:v1.9.3" Nov 25 19:58:26.701: INFO: At 2022-11-25 19:55:28 +0000 UTC - event for kube-dns-autoscaler-5f6455f985-s5vgm: {kubelet bootstrap-e2e-minion-group-blng} Pulling: Pulling image "registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4" Nov 25 19:58:26.701: INFO: At 2022-11-25 19:55:28 +0000 UTC - event for l7-default-backend-8549d69d99-nvg98: {kubelet bootstrap-e2e-minion-group-blng} Pulled: Successfully pulled image "registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11" in 879.455039ms (879.464536ms including waiting) Nov 25 19:58:26.701: INFO: At 2022-11-25 19:55:28 +0000 UTC - event for l7-default-backend-8549d69d99-nvg98: {kubelet bootstrap-e2e-minion-group-blng} Created: Created container default-http-backend Nov 25 19:58:26.701: INFO: At 2022-11-25 19:55:28 +0000 UTC - event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-blng} Pulling: Pulling image "registry.k8s.io/sig-storage/snapshot-controller:v6.1.0" Nov 25 19:58:26.701: INFO: At 2022-11-25 19:55:29 +0000 UTC - event for konnectivity-agent-qv9km: {kubelet bootstrap-e2e-minion-group-blng} Pulling: Pulling image 
"registry.k8s.io/kas-network-proxy/proxy-agent:v0.0.33" Nov 25 19:58:26.701: INFO: At 2022-11-25 19:55:29 +0000 UTC - event for metrics-server-v0.5.2-6764bf875c-fr576: {kubelet bootstrap-e2e-minion-group-blng} Created: Created container metrics-server Nov 25 19:58:26.701: INFO: At 2022-11-25 19:55:29 +0000 UTC - event for metrics-server-v0.5.2-6764bf875c-fr576: {kubelet bootstrap-e2e-minion-group-blng} Pulled: Successfully pulled image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" in 2.208602016s (2.208611809s including waiting) Nov 25 19:58:26.701: INFO: At 2022-11-25 19:55:30 +0000 UTC - event for konnectivity-agent-qv9km: {kubelet bootstrap-e2e-minion-group-blng} Created: Created container konnectivity-agent Nov 25 19:58:26.702: INFO: At 2022-11-25 19:55:30 +0000 UTC - event for konnectivity-agent-qv9km: {kubelet bootstrap-e2e-minion-group-blng} Pulled: Successfully pulled image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.0.33" in 1.504831703s (1.504841085s including waiting) Nov 25 19:58:26.702: INFO: At 2022-11-25 19:55:30 +0000 UTC - event for konnectivity-agent-qv9km: {kubelet bootstrap-e2e-minion-group-blng} Started: Started container konnectivity-agent Nov 25 19:58:26.702: INFO: At 2022-11-25 19:55:31 +0000 UTC - event for coredns-6d97d5ddb-j7sb8: {kubelet bootstrap-e2e-minion-group-blng} Pulled: Successfully pulled image "registry.k8s.io/coredns/coredns:v1.9.3" in 2.495644409s (2.495653774s including waiting) Nov 25 19:58:26.702: INFO: At 2022-11-25 19:55:31 +0000 UTC - event for coredns-6d97d5ddb-j7sb8: {kubelet bootstrap-e2e-minion-group-blng} Created: Created container coredns Nov 25 19:58:26.702: INFO: At 2022-11-25 19:55:31 +0000 UTC - event for kube-dns-autoscaler-5f6455f985-s5vgm: {kubelet bootstrap-e2e-minion-group-blng} Created: Created container autoscaler Nov 25 19:58:26.702: INFO: At 2022-11-25 19:55:31 +0000 UTC - event for kube-dns-autoscaler-5f6455f985-s5vgm: {kubelet bootstrap-e2e-minion-group-blng} Pulled: Successfully pulled image "registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4" in 2.635501424s (2.6355103s including waiting) Nov 25 19:58:26.702: INFO: At 2022-11-25 19:55:31 +0000 UTC - event for metrics-server-v0.5.2-6764bf875c-fr576: {kubelet bootstrap-e2e-minion-group-blng} Started: Started container metrics-server Nov 25 19:58:26.702: INFO: At 2022-11-25 19:55:31 +0000 UTC - event for metrics-server-v0.5.2-6764bf875c-fr576: {kubelet bootstrap-e2e-minion-group-blng} Pulling: Pulling image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" Nov 25 19:58:26.702: INFO: At 2022-11-25 19:55:31 +0000 UTC - event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-blng} Started: Started container volume-snapshot-controller Nov 25 19:58:26.702: INFO: At 2022-11-25 19:55:31 +0000 UTC - event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-blng} Pulled: Successfully pulled image "registry.k8s.io/sig-storage/snapshot-controller:v6.1.0" in 2.747592235s (2.747603558s including waiting) Nov 25 19:58:26.702: INFO: At 2022-11-25 19:55:31 +0000 UTC - event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-blng} Created: Created container volume-snapshot-controller Nov 25 19:58:26.702: INFO: At 2022-11-25 19:55:32 +0000 UTC - event for coredns-6d97d5ddb-j7sb8: {kubelet bootstrap-e2e-minion-group-blng} Unhealthy: Readiness probe failed: HTTP probe failed with statuscode: 503 Nov 25 19:58:26.702: INFO: At 2022-11-25 19:55:32 +0000 UTC - event for coredns-6d97d5ddb-j7sb8: {kubelet 
bootstrap-e2e-minion-group-blng} Started: Started container coredns Nov 25 19:58:26.702: INFO: At 2022-11-25 19:55:32 +0000 UTC - event for metrics-server-v0.5.2-6764bf875c-fr576: {kubelet bootstrap-e2e-minion-group-blng} Created: Created container metrics-server-nanny Nov 25 19:58:26.702: INFO: At 2022-11-25 19:55:32 +0000 UTC - event for metrics-server-v0.5.2-6764bf875c-fr576: {kubelet bootstrap-e2e-minion-group-blng} Pulled: Successfully pulled image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" in 1.040928381s (1.040939928s including waiting) Nov 25 19:58:26.702: INFO: At 2022-11-25 19:55:32 +0000 UTC - event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-blng} Killing: Stopping container volume-snapshot-controller Nov 25 19:58:26.702: INFO: At 2022-11-25 19:55:33 +0000 UTC - event for coredns: {deployment-controller } ScalingReplicaSet: Scaled up replica set coredns-6d97d5ddb to 2 from 1 Nov 25 19:58:26.702: INFO: At 2022-11-25 19:55:33 +0000 UTC - event for coredns-6d97d5ddb: {replicaset-controller } SuccessfulCreate: Created pod: coredns-6d97d5ddb-g2nmr Nov 25 19:58:26.702: INFO: At 2022-11-25 19:55:33 +0000 UTC - event for coredns-6d97d5ddb-g2nmr: {default-scheduler } Scheduled: Successfully assigned kube-system/coredns-6d97d5ddb-g2nmr to bootstrap-e2e-minion-group-blng Nov 25 19:58:26.702: INFO: At 2022-11-25 19:55:33 +0000 UTC - event for kube-dns-autoscaler-5f6455f985-s5vgm: {kubelet bootstrap-e2e-minion-group-blng} Started: Started container autoscaler Nov 25 19:58:26.702: INFO: At 2022-11-25 19:55:35 +0000 UTC - event for coredns-6d97d5ddb-g2nmr: {kubelet bootstrap-e2e-minion-group-blng} Pulled: Container image "registry.k8s.io/coredns/coredns:v1.9.3" already present on machine Nov 25 19:58:26.702: INFO: At 2022-11-25 19:55:36 +0000 UTC - event for coredns-6d97d5ddb-g2nmr: {kubelet bootstrap-e2e-minion-group-blng} Unhealthy: Readiness probe failed: HTTP probe failed with statuscode: 503 Nov 25 19:58:26.702: INFO: At 2022-11-25 19:55:36 +0000 UTC - event for coredns-6d97d5ddb-g2nmr: {kubelet bootstrap-e2e-minion-group-blng} Killing: Stopping container coredns Nov 25 19:58:26.702: INFO: At 2022-11-25 19:55:36 +0000 UTC - event for coredns-6d97d5ddb-g2nmr: {kubelet bootstrap-e2e-minion-group-blng} Created: Created container coredns Nov 25 19:58:26.702: INFO: At 2022-11-25 19:55:36 +0000 UTC - event for coredns-6d97d5ddb-g2nmr: {kubelet bootstrap-e2e-minion-group-blng} Started: Started container coredns Nov 25 19:58:26.702: INFO: At 2022-11-25 19:55:36 +0000 UTC - event for metrics-server-v0.5.2-6764bf875c-fr576: {kubelet bootstrap-e2e-minion-group-blng} Killing: Stopping container metrics-server Nov 25 19:58:26.702: INFO: At 2022-11-25 19:55:36 +0000 UTC - event for metrics-server-v0.5.2-6764bf875c-fr576: {kubelet bootstrap-e2e-minion-group-blng} Killing: Stopping container metrics-server-nanny Nov 25 19:58:26.702: INFO: At 2022-11-25 19:55:36 +0000 UTC - event for metrics-server-v0.5.2-6764bf875c-fr576: {kubelet bootstrap-e2e-minion-group-blng} Unhealthy: Readiness probe failed: HTTP probe failed with statuscode: 500 Nov 25 19:58:26.702: INFO: At 2022-11-25 19:55:36 +0000 UTC - event for metrics-server-v0.5.2-6764bf875c-fr576: {kubelet bootstrap-e2e-minion-group-blng} Started: Started container metrics-server-nanny Nov 25 19:58:26.702: INFO: At 2022-11-25 19:55:36 +0000 UTC - event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-blng} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Nov 25 19:58:26.702: INFO: At 2022-11-25 19:55:37 +0000 UTC - event for metrics-server-v0.5.2-6764bf875c-fr576: {kubelet bootstrap-e2e-minion-group-blng} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Nov 25 19:58:26.702: INFO: At 2022-11-25 19:55:37 +0000 UTC - event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-blng} Pulled: Container image "registry.k8s.io/sig-storage/snapshot-controller:v6.1.0" already present on machine Nov 25 19:58:26.702: INFO: At 2022-11-25 19:55:38 +0000 UTC - event for metrics-server-v0.5.2-6764bf875c-fr576: {kubelet bootstrap-e2e-minion-group-blng} Pulled: Container image "registry.k8s.io/metrics-server/metrics-server:v0.5.2" already present on machine Nov 25 19:58:26.702: INFO: At 2022-11-25 19:55:38 +0000 UTC - event for metrics-server-v0.5.2-6764bf875c-fr576: {kubelet bootstrap-e2e-minion-group-blng} Pulled: Container image "registry.k8s.io/autoscaling/addon-resizer:1.8.14" already present on machine Nov 25 19:58:26.702: INFO: At 2022-11-25 19:55:39 +0000 UTC - event for volume-snapshot-controller-0: {kubelet bootstrap-e2e-minion-group-blng} BackOff: Back-off restarting failed container volume-snapshot-controller in pod volume-snapshot-controller-0_kube-system(14d48aba-68fe-4e13-8c5d-6d55b5291908) Nov 25 19:58:26.702: INFO: At 2022-11-25 19:55:40 +0000 UTC - event for konnectivity-agent: {daemonset-controller } SuccessfulCreate: Created pod: konnectivity-agent-vkqgg Nov 25 19:58:26.702: INFO: At 2022-11-25 19:55:40 +0000 UTC - event for konnectivity-agent: {daemonset-controller } SuccessfulCreate: Created pod: konnectivity-agent-zqf7j Nov 25 19:58:26.702: INFO: At 2022-11-25 19:55:40 +0000 UTC - event for konnectivity-agent-vkqgg: {default-scheduler } Scheduled: Successfully assigned kube-system/konnectivity-agent-vkqgg to bootstrap-e2e-minion-group-lmt7 Nov 25 19:58:26.702: INFO: At 2022-11-25 19:55:40 +0000 UTC - event for konnectivity-agent-zqf7j: {default-scheduler } Scheduled: Successfully assigned kube-system/konnectivity-agent-zqf7j to bootstrap-e2e-minion-group-2zvh Nov 25 19:58:26.702: INFO: At 2022-11-25 19:55:41 +0000 UTC - event for konnectivity-agent-vkqgg: {kubelet bootstrap-e2e-minion-group-lmt7} Created: Created container konnectivity-agent Nov 25 19:58:26.702: INFO: At 2022-11-25 19:55:41 +0000 UTC - event for konnectivity-agent-vkqgg: {kubelet bootstrap-e2e-minion-group-lmt7} Pulled: Successfully pulled image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.0.33" in 586.457844ms (586.473279ms including waiting) Nov 25 19:58:26.702: INFO: At 2022-11-25 19:55:41 +0000 UTC - event for konnectivity-agent-vkqgg: {kubelet bootstrap-e2e-minion-group-lmt7} Pulling: Pulling image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.0.33" Nov 25 19:58:26.702: INFO: At 2022-11-25 19:55:41 +0000 UTC - event for konnectivity-agent-vkqgg: {kubelet bootstrap-e2e-minion-group-lmt7} Started: Started container konnectivity-agent Nov 25 19:58:26.702: INFO: At 2022-11-25 19:55:41 +0000 UTC - event for konnectivity-agent-zqf7j: {kubelet bootstrap-e2e-minion-group-2zvh} Started: Started container konnectivity-agent Nov 25 19:58:26.702: INFO: At 2022-11-25 19:55:41 +0000 UTC - event for konnectivity-agent-zqf7j: {kubelet bootstrap-e2e-minion-group-2zvh} Pulling: Pulling image "registry.k8s.io/kas-network-proxy/proxy-agent:v0.0.33" Nov 25 19:58:26.702: INFO: At 2022-11-25 19:55:41 +0000 UTC - event for konnectivity-agent-zqf7j: {kubelet bootstrap-e2e-minion-group-2zvh} Pulled: Successfully pulled image 
"registry.k8s.io/kas-network-proxy/proxy-agent:v0.0.33" in 587.282402ms (587.303359ms including waiting) Nov 25 19:58:26.702: INFO: At 2022-11-25 19:55:41 +0000 UTC - event for konnectivity-agent-zqf7j: {kubelet bootstrap-e2e-minion-group-2zvh} Created: Created container konnectivity-agent Nov 25 19:58:26.702: INFO: At 2022-11-25 19:55:43 +0000 UTC - event for coredns-6d97d5ddb-g2nmr: {kubelet bootstrap-e2e-minion-group-blng} Unhealthy: Readiness probe failed: Get "http://10.64.0.8:8181/ready": dial tcp 10.64.0.8:8181: connect: connection refused Nov 25 19:58:26.702: INFO: At 2022-11-25 19:55:44 +0000 UTC - event for metrics-server-v0.5.2-6764bf875c-fr576: {kubelet bootstrap-e2e-minion-group-blng} BackOff: Back-off restarting failed container metrics-server in pod metrics-server-v0.5.2-6764bf875c-fr576_kube-system(0c711710-c66c-4b36-be5e-2c594ec20293) Nov 25 19:58:26.702: INFO: At 2022-11-25 19:55:44 +0000 UTC - event for metrics-server-v0.5.2-6764bf875c-fr576: {kubelet bootstrap-e2e-minion-group-blng} BackOff: Back-off restarting failed container metrics-server-nanny in pod metrics-server-v0.5.2-6764bf875c-fr576_kube-system(0c711710-c66c-4b36-be5e-2c594ec20293) Nov 25 19:58:26.702: INFO: At 2022-11-25 19:55:46 +0000 UTC - event for coredns-6d97d5ddb-g2nmr: {kubelet bootstrap-e2e-minion-group-blng} SandboxChanged: Pod sandbox changed, it will be killed and re-created. Nov 25 19:58:26.702: INFO: At 2022-11-25 19:55:49 +0000 UTC - event for metadata-proxy-v0.1-zl6p2: {kubelet bootstrap-e2e-master} Pulling: Pulling image "registry.k8s.io/metadata-proxy:v0.1.12" Nov 25 19:58:26.702: INFO: At 2022-11-25 19:55:50 +0000 UTC - event for metadata-proxy-v0.1-zl6p2: {kubelet bootstrap-e2e-master} Pulled: Successfully pulled image "registry.k8s.io/metadata-proxy:v0.1.12" in 651.734467ms (651.740936ms including waiting) Nov 25 19:58:26.702: INFO: At 2022-11-25 19:55:50 +0000 UTC - event for metadata-proxy-v0.1-zl6p2: {kubelet bootstrap-e2e-master} Failed: Error: services have not yet been read at least once, cannot construct envvars Nov 25 19:58:26.702: INFO: At 2022-11-25 19:55:50 +0000 UTC - event for metadata-proxy-v0.1-zl6p2: {kubelet bootstrap-e2e-master} Pulling: Pulling image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" Nov 25 19:58:26.702: INFO: At 2022-11-25 19:55:52 +0000 UTC - event for metadata-proxy-v0.1-zl6p2: {kubelet bootstrap-e2e-master} Pulled: Successfully pulled image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" in 2.083427424s (2.083448902s including waiting) Nov 25 19:58:26.702: INFO: At 2022-11-25 19:55:52 +0000 UTC - event for metadata-proxy-v0.1-zl6p2: {kubelet bootstrap-e2e-master} Failed: Error: services have not yet been read at least once, cannot construct envvars Nov 25 19:58:26.702: INFO: At 2022-11-25 19:55:53 +0000 UTC - event for metadata-proxy-v0.1-zl6p2: {kubelet bootstrap-e2e-master} Pulled: Container image "gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1" already present on machine Nov 25 19:58:26.702: INFO: At 2022-11-25 19:55:53 +0000 UTC - event for metadata-proxy-v0.1-zl6p2: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/metadata-proxy:v0.1.12" already present on machine Nov 25 19:58:26.702: INFO: At 2022-11-25 19:55:59 +0000 UTC - event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container etcd-container Nov 25 19:58:26.702: INFO: At 2022-11-25 19:55:59 +0000 UTC - event for kube-apiserver-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Unhealthy: Readiness probe failed: HTTP probe failed with 
statuscode: 500 Nov 25 19:58:26.702: INFO: At 2022-11-25 19:55:59 +0000 UTC - event for kube-apiserver-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container kube-apiserver Nov 25 19:58:26.702: INFO: At 2022-11-25 19:56:02 +0000 UTC - event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Unhealthy: Liveness probe errored: rpc error: code = Unknown desc = failed to exec in container: container is in CONTAINER_EXITED state Nov 25 19:58:26.702: INFO: At 2022-11-25 19:56:05 +0000 UTC - event for metadata-proxy-v0.1-zl6p2: {kubelet bootstrap-e2e-master} Started: Started container metadata-proxy Nov 25 19:58:26.702: INFO: At 2022-11-25 19:56:05 +0000 UTC - event for metadata-proxy-v0.1-zl6p2: {kubelet bootstrap-e2e-master} Created: Created container prometheus-to-sd-exporter Nov 25 19:58:26.702: INFO: At 2022-11-25 19:56:05 +0000 UTC - event for metadata-proxy-v0.1-zl6p2: {kubelet bootstrap-e2e-master} Created: Created container metadata-proxy Nov 25 19:58:26.702: INFO: At 2022-11-25 19:56:06 +0000 UTC - event for metadata-proxy-v0.1-zl6p2: {kubelet bootstrap-e2e-master} Started: Started container prometheus-to-sd-exporter Nov 25 19:58:26.702: INFO: At 2022-11-25 19:56:10 +0000 UTC - event for kube-scheduler: {default-scheduler } LeaderElection: bootstrap-e2e-master_39dd874c-c447-4c78-a338-f8b147d8fee8 stopped leading Nov 25 19:58:26.702: INFO: At 2022-11-25 19:56:13 +0000 UTC - event for l7-default-backend-8549d69d99-nvg98: {kubelet bootstrap-e2e-minion-group-blng} Started: Started container default-http-backend Nov 25 19:58:26.702: INFO: At 2022-11-25 19:56:19 +0000 UTC - event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Unhealthy: Liveness probe failed: Get "https://127.0.0.1:10257/healthz": dial tcp 127.0.0.1:10257: connect: connection refused Nov 25 19:58:26.702: INFO: At 2022-11-25 19:56:20 +0000 UTC - event for kube-apiserver-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Unhealthy: Readiness probe failed: Get "https://127.0.0.1:443/readyz": dial tcp 127.0.0.1:443: connect: connection refused Nov 25 19:58:26.702: INFO: At 2022-11-25 19:56:21 +0000 UTC - event for kube-scheduler-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Unhealthy: Liveness probe failed: Get "https://127.0.0.1:10259/healthz": dial tcp 127.0.0.1:10259: connect: connection refused Nov 25 19:58:26.702: INFO: At 2022-11-25 19:56:30 +0000 UTC - event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} SandboxChanged: Pod sandbox changed, it will be killed and re-created. 
Nov 25 19:58:26.702: INFO: At 2022-11-25 19:56:32 +0000 UTC - event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} BackOff: Back-off restarting failed container kube-controller-manager in pod kube-controller-manager-bootstrap-e2e-master_kube-system(810dc9396ddb75d9f933d35b5d652dd7) Nov 25 19:58:26.702: INFO: At 2022-11-25 19:56:33 +0000 UTC - event for etcd-server-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Pulled: Container image "registry.k8s.io/etcd:3.5.6-0" already present on machine Nov 25 19:58:26.702: INFO: At 2022-11-25 19:56:34 +0000 UTC - event for kube-proxy-bootstrap-e2e-minion-group-2zvh: {kubelet bootstrap-e2e-minion-group-2zvh} BackOff: Back-off restarting failed container kube-proxy in pod kube-proxy-bootstrap-e2e-minion-group-2zvh_kube-system(49dfa7e32d052363582f3c5867843e68) Nov 25 19:58:26.702: INFO: At 2022-11-25 19:56:48 +0000 UTC - event for kube-proxy-bootstrap-e2e-minion-group-lmt7: {kubelet bootstrap-e2e-minion-group-lmt7} BackOff: Back-off restarting failed container kube-proxy in pod kube-proxy-bootstrap-e2e-minion-group-lmt7_kube-system(faba9a8f8fc513df1f66aa3e14f745a8) Nov 25 19:58:26.702: INFO: At 2022-11-25 19:57:23 +0000 UTC - event for coredns-6d97d5ddb-g2nmr: {kubelet bootstrap-e2e-minion-group-blng} Unhealthy: Readiness probe failed: Get "http://10.64.0.13:8181/ready": dial tcp 10.64.0.13:8181: connect: connection refused Nov 25 19:58:26.702: INFO: At 2022-11-25 19:57:25 +0000 UTC - event for coredns-6d97d5ddb-g2nmr: {kubelet bootstrap-e2e-minion-group-blng} BackOff: Back-off restarting failed container coredns in pod coredns-6d97d5ddb-g2nmr_kube-system(c73e51df-e568-475b-a36c-af4274b250b9) Nov 25 19:58:26.702: INFO: At 2022-11-25 19:57:39 +0000 UTC - event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Killing: Stopping container kube-controller-manager Nov 25 19:58:26.702: INFO: At 2022-11-25 19:57:39 +0000 UTC - event for kube-controller-manager-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Unhealthy: Liveness probe failed: Get "https://127.0.0.1:10257/healthz": read tcp 127.0.0.1:58584->127.0.0.1:10257: read: connection reset by peer Nov 25 19:58:26.702: INFO: At 2022-11-25 19:57:50 +0000 UTC - event for l7-lb-controller-bootstrap-e2e-master: {kubelet bootstrap-e2e-master} Unhealthy: Liveness probe failed: Get "http://10.138.0.2:8086/healthz": dial tcp 10.138.0.2:8086: connect: connection refused Nov 25 19:58:27.003: INFO: POD NODE PHASE GRACE CONDITIONS Nov 25 19:58:27.003: INFO: coredns-6d97d5ddb-g2nmr bootstrap-e2e-minion-group-blng Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 19:55:33 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 19:55:47 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 19:55:47 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 19:55:33 +0000 UTC }] Nov 25 19:58:27.003: INFO: coredns-6d97d5ddb-j7sb8 bootstrap-e2e-minion-group-blng Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 19:55:25 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 19:55:33 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 19:55:33 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 19:55:25 +0000 UTC }] Nov 25 19:58:27.003: INFO: konnectivity-agent-qv9km bootstrap-e2e-minion-group-blng Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 19:55:25 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 
UTC 2022-11-25 19:55:31 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 19:55:31 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 19:55:25 +0000 UTC }] Nov 25 19:58:27.003: INFO: konnectivity-agent-vkqgg bootstrap-e2e-minion-group-lmt7 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 19:55:40 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 19:55:42 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 19:55:42 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 19:55:40 +0000 UTC }] Nov 25 19:58:27.003: INFO: konnectivity-agent-zqf7j bootstrap-e2e-minion-group-2zvh Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 19:55:40 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 19:55:42 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 19:55:42 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 19:55:40 +0000 UTC }] Nov 25 19:58:27.003: INFO: kube-dns-autoscaler-5f6455f985-s5vgm bootstrap-e2e-minion-group-blng Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 19:55:25 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 19:55:33 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 19:55:33 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 19:55:25 +0000 UTC }] Nov 25 19:58:27.003: INFO: kube-proxy-bootstrap-e2e-minion-group-2zvh bootstrap-e2e-minion-group-2zvh Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 19:55:20 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 19:56:50 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 19:56:50 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 19:55:20 +0000 UTC }] Nov 25 19:58:27.003: INFO: kube-proxy-bootstrap-e2e-minion-group-blng bootstrap-e2e-minion-group-blng Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 19:55:19 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 19:55:23 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 19:55:23 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 19:55:19 +0000 UTC }] Nov 25 19:58:27.003: INFO: kube-proxy-bootstrap-e2e-minion-group-lmt7 bootstrap-e2e-minion-group-lmt7 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 19:55:21 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 19:57:03 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 19:57:03 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 19:55:21 +0000 UTC }] Nov 25 19:58:27.003: INFO: l7-default-backend-8549d69d99-nvg98 bootstrap-e2e-minion-group-blng Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 19:55:25 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 19:55:25 +0000 UTC ContainersNotReady containers with unready status: [default-http-backend]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 19:55:25 +0000 UTC ContainersNotReady containers with unready status: [default-http-backend]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 19:55:25 +0000 UTC }] Nov 25 19:58:27.003: INFO: metadata-proxy-v0.1-cdhvh bootstrap-e2e-minion-group-lmt7 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 19:55:21 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 19:55:25 
+0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 19:55:25 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 19:55:20 +0000 UTC }] Nov 25 19:58:27.003: INFO: metadata-proxy-v0.1-prrpq bootstrap-e2e-minion-group-blng Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 19:55:20 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 19:55:24 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 19:55:24 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 19:55:19 +0000 UTC }] Nov 25 19:58:27.003: INFO: metadata-proxy-v0.1-vmkwn bootstrap-e2e-minion-group-2zvh Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 19:55:21 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 19:55:25 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 19:55:25 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 19:55:20 +0000 UTC }] Nov 25 19:58:27.003: INFO: metadata-proxy-v0.1-zl6p2 bootstrap-e2e-master Pending [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 19:55:49 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 19:55:49 +0000 UTC ContainersNotReady containers with unready status: [metadata-proxy prometheus-to-sd-exporter]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 19:55:49 +0000 UTC ContainersNotReady containers with unready status: [metadata-proxy prometheus-to-sd-exporter]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 19:55:19 +0000 UTC }] Nov 25 19:58:27.003: INFO: metrics-server-v0.5.2-6764bf875c-fr576 bootstrap-e2e-minion-group-blng Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 19:55:25 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 19:56:21 +0000 UTC ContainersNotReady containers with unready status: [metrics-server]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2022-11-25 19:56:21 +0000 UTC ContainersNotReady containers with unready status: [metrics-server]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 19:55:25 +0000 UTC }] Nov 25 19:58:27.003: INFO: volume-snapshot-controller-0 bootstrap-e2e-minion-group-blng Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 19:55:25 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 19:55:54 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 19:55:54 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2022-11-25 19:55:25 +0000 UTC }] Nov 25 19:58:27.003: INFO: Nov 25 19:58:27.572: INFO: Logging node info for node bootstrap-e2e-master Nov 25 19:58:27.615: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-master f7112e69-7237-4219-b541-900c23e54bcc 700 0 2022-11-25 19:55:19 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-1 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-master kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-1 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2022-11-25 19:55:19 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{},"f:unschedulable":{}}} } {kube-controller-manager Update v1 2022-11-25 19:55:40 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kube-controller-manager Update v1 2022-11-25 19:55:41 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.1.0/24\"":{}},"f:taints":{}}} } {kubelet Update v1 2022-11-25 19:58:26 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:10.64.1.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-jkns-e2e-gce-serial-1-2/us-west1-b/bootstrap-e2e-master,Unschedulable:true,Taints:[]Taint{Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:<nil>,},Taint{Key:node.kubernetes.io/unschedulable,Value:,Effect:NoSchedule,TimeAdded:<nil>,},},ConfigSource:nil,PodCIDRs:[10.64.1.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{16656896000 0} {<nil>} 16266500Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3858374656 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{1 0} {<nil>} 1 DecimalSI},ephemeral-storage: {{14991206376 0} {<nil>} 14991206376 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{3596230656 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-11-25 19:55:40 +0000 UTC,LastTransitionTime:2022-11-25 19:55:40 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-11-25 19:58:26 +0000 UTC,LastTransitionTime:2022-11-25 19:55:19 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-11-25 19:58:26 +0000 UTC,LastTransitionTime:2022-11-25 19:55:19 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-11-25 19:58:26 +0000 UTC,LastTransitionTime:2022-11-25 19:55:19 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID 
available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-11-25 19:58:26 +0000 UTC,LastTransitionTime:2022-11-25 19:55:40 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.2,},NodeAddress{Type:ExternalIP,Address:34.127.41.66,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-master.c.k8s-jkns-e2e-gce-serial-1-2.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-master.c.k8s-jkns-e2e-gce-serial-1-2.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:ceaf667f6b5e1324cd116eb2db802512,SystemUUID:ceaf667f-6b5e-1324-cd11-6eb2db802512,BootID:98b83aa6-923e-41d0-9725-b0c25fff03d2,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.0-149-gd06318622,KubeletVersion:v1.27.0-alpha.0.48+6bdda2da160043,KubeProxyVersion:v1.27.0-alpha.0.48+6bdda2da160043,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-apiserver-amd64:v1.27.0-alpha.0.48_6bdda2da160043],SizeBytes:135160275,},ContainerImage{Names:[registry.k8s.io/kube-controller-manager-amd64:v1.27.0-alpha.0.48_6bdda2da160043],SizeBytes:124989749,},ContainerImage{Names:[registry.k8s.io/etcd@sha256:dd75ec974b0a2a6f6bb47001ba09207976e625db898d1b16735528c009cb171c registry.k8s.io/etcd:3.5.6-0],SizeBytes:102542580,},ContainerImage{Names:[registry.k8s.io/kube-scheduler-amd64:v1.27.0-alpha.0.48_6bdda2da160043],SizeBytes:57659704,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64@sha256:5db27383add6d9f4ebdf0286409ac31f7f5d273690204b341a4e37998917693b gcr.io/k8s-ingress-image-push/ingress-gce-glbc-amd64:v1.20.1],SizeBytes:36598135,},ContainerImage{Names:[registry.k8s.io/addon-manager/kube-addon-manager@sha256:49cc4e6e4a3745b427ce14b0141476ab339bb65c6bc05033019e046c8727dcb0 registry.k8s.io/addon-manager/kube-addon-manager:v9.1.6],SizeBytes:30464183,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-server@sha256:2c111f004bec24888d8cfa2a812a38fb8341350abac67dcd0ac64e709dfe389c registry.k8s.io/kas-network-proxy/proxy-server:v0.0.33],SizeBytes:22020129,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Nov 25 19:58:27.615: INFO: Logging kubelet events for node bootstrap-e2e-master Nov 25 19:58:27.660: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-master Nov 25 19:58:27.745: INFO: etcd-server-events-bootstrap-e2e-master started at 2022-11-25 19:54:36 +0000 UTC (0+1 container statuses recorded) Nov 25 19:58:27.745: INFO: Container etcd-container ready: true, restart count 0 Nov 25 19:58:27.745: INFO: etcd-server-bootstrap-e2e-master started at 2022-11-25 19:54:36 +0000 UTC (0+1 container statuses recorded) Nov 25 19:58:27.745: INFO: Container etcd-container ready: true, restart count 3 Nov 25 19:58:27.745: INFO: 
kube-controller-manager-bootstrap-e2e-master started at 2022-11-25 19:54:36 +0000 UTC (0+1 container statuses recorded) Nov 25 19:58:27.745: INFO: Container kube-controller-manager ready: false, restart count 3 Nov 25 19:58:27.745: INFO: kube-addon-manager-bootstrap-e2e-master started at 2022-11-25 19:54:52 +0000 UTC (0+1 container statuses recorded) Nov 25 19:58:27.745: INFO: Container kube-addon-manager ready: true, restart count 0 Nov 25 19:58:27.745: INFO: konnectivity-server-bootstrap-e2e-master started at 2022-11-25 19:54:36 +0000 UTC (0+1 container statuses recorded) Nov 25 19:58:27.745: INFO: Container konnectivity-server-container ready: true, restart count 0 Nov 25 19:58:27.745: INFO: kube-apiserver-bootstrap-e2e-master started at 2022-11-25 19:54:36 +0000 UTC (0+1 container statuses recorded) Nov 25 19:58:27.745: INFO: Container kube-apiserver ready: false, restart count 2 Nov 25 19:58:27.745: INFO: kube-scheduler-bootstrap-e2e-master started at 2022-11-25 19:54:36 +0000 UTC (0+1 container statuses recorded) Nov 25 19:58:27.745: INFO: Container kube-scheduler ready: true, restart count 1 Nov 25 19:58:27.745: INFO: l7-lb-controller-bootstrap-e2e-master started at 2022-11-25 19:54:52 +0000 UTC (0+1 container statuses recorded) Nov 25 19:58:27.745: INFO: Container l7-lb-controller ready: false, restart count 3 Nov 25 19:58:27.745: INFO: metadata-proxy-v0.1-zl6p2 started at 2022-11-25 19:55:49 +0000 UTC (0+2 container statuses recorded) Nov 25 19:58:27.745: INFO: Container metadata-proxy ready: false, restart count 0 Nov 25 19:58:27.745: INFO: Container prometheus-to-sd-exporter ready: false, restart count 0 Nov 25 19:58:27.951: INFO: Latency metrics for node bootstrap-e2e-master Nov 25 19:58:27.951: INFO: Logging node info for node bootstrap-e2e-minion-group-2zvh Nov 25 19:58:28.091: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-2zvh bbd49c2b-24f2-443c-9bf6-09c641f3cc1d 665 0 2022-11-25 19:55:20 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-2zvh kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2022-11-25 19:55:20 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2022-11-25 19:55:21 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.3.0/24\"":{}}}} } {node-problem-detector Update v1 2022-11-25 19:55:25 +0000 UTC FieldsV1 
{"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kube-controller-manager Update v1 2022-11-25 19:55:40 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kubelet Update v1 2022-11-25 19:55:51 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:10.64.3.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-jkns-e2e-gce-serial-1-2/us-west1-b/bootstrap-e2e-minion-group-2zvh,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.3.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815438336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553294336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2022-11-25 19:55:25 +0000 UTC,LastTransitionTime:2022-11-25 19:55:24 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2022-11-25 19:55:25 +0000 UTC,LastTransitionTime:2022-11-25 19:55:24 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2022-11-25 19:55:25 +0000 UTC,LastTransitionTime:2022-11-25 19:55:24 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2022-11-25 19:55:25 +0000 
UTC,LastTransitionTime:2022-11-25 19:55:24 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2022-11-25 19:55:25 +0000 UTC,LastTransitionTime:2022-11-25 19:55:24 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2022-11-25 19:55:25 +0000 UTC,LastTransitionTime:2022-11-25 19:55:24 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2022-11-25 19:55:25 +0000 UTC,LastTransitionTime:2022-11-25 19:55:24 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-11-25 19:55:40 +0000 UTC,LastTransitionTime:2022-11-25 19:55:40 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-11-25 19:55:51 +0000 UTC,LastTransitionTime:2022-11-25 19:55:20 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-11-25 19:55:51 +0000 UTC,LastTransitionTime:2022-11-25 19:55:20 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-11-25 19:55:51 +0000 UTC,LastTransitionTime:2022-11-25 19:55:20 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-11-25 19:55:51 +0000 UTC,LastTransitionTime:2022-11-25 19:55:21 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.4,},NodeAddress{Type:ExternalIP,Address:34.168.48.57,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-2zvh.c.k8s-jkns-e2e-gce-serial-1-2.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-2zvh.c.k8s-jkns-e2e-gce-serial-1-2.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:999d83fb5997d909dbf4e49780178930,SystemUUID:999d83fb-5997-d909-dbf4-e49780178930,BootID:29fd1288-d3ad-4a7e-9eee-e70b386604bd,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.0-149-gd06318622,KubeletVersion:v1.27.0-alpha.0.48+6bdda2da160043,KubeProxyVersion:v1.27.0-alpha.0.48+6bdda2da160043,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.0.48_6bdda2da160043],SizeBytes:67201224,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:48f2a4ec3e10553a81b8dd1c6fa5fe4bcc9617f78e71c1ca89c6921335e2d7da registry.k8s.io/kas-network-proxy/proxy-agent:v0.0.33],SizeBytes:8512162,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Nov 25 19:58:28.092: INFO: Logging kubelet events for node bootstrap-e2e-minion-group-2zvh Nov 25 19:58:28.222: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-2zvh Nov 25 19:58:28.425: INFO: kube-proxy-bootstrap-e2e-minion-group-2zvh started at 2022-11-25 19:55:20 +0000 UTC (0+1 container statuses recorded) Nov 25 19:58:28.425: INFO: Container kube-proxy ready: true, restart count 2 Nov 25 19:58:28.425: INFO: metadata-proxy-v0.1-vmkwn started at 2022-11-25 19:55:21 +0000 UTC (0+2 container statuses recorded) Nov 25 19:58:28.425: INFO: Container metadata-proxy ready: true, restart count 0 Nov 25 19:58:28.425: INFO: Container prometheus-to-sd-exporter ready: true, restart count 0 Nov 25 19:58:28.425: INFO: konnectivity-agent-zqf7j started at 2022-11-25 19:55:40 +0000 UTC (0+1 container statuses recorded) Nov 25 19:58:28.425: INFO: Container konnectivity-agent ready: true, restart count 1 Nov 25 19:58:28.762: INFO: Latency metrics for node bootstrap-e2e-minion-group-2zvh Nov 25 19:58:28.762: INFO: Logging node info for node bootstrap-e2e-minion-group-blng Nov 25 19:58:28.876: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-blng 316911a7-b01e-457d-b1be-9bf90e9c2ba1 656 0 2022-11-25 19:55:19 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-blng kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.kubernetes.io/region:us-west1 
topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2022-11-25 19:55:19 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2022-11-25 19:55:20 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.0.0/24\"":{}}}} } {node-problem-detector Update v1 2022-11-25 19:55:23 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kube-controller-manager Update v1 2022-11-25 19:55:25 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kubelet Update v1 2022-11-25 19:55:49 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:10.64.0.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-jkns-e2e-gce-serial-1-2/us-west1-b/bootstrap-e2e-minion-group-blng,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.0.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815438336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 
0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553294336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2022-11-25 19:55:23 +0000 UTC,LastTransitionTime:2022-11-25 19:55:22 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2022-11-25 19:55:23 +0000 UTC,LastTransitionTime:2022-11-25 19:55:22 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2022-11-25 19:55:23 +0000 UTC,LastTransitionTime:2022-11-25 19:55:22 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2022-11-25 19:55:23 +0000 UTC,LastTransitionTime:2022-11-25 19:55:22 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2022-11-25 19:55:23 +0000 UTC,LastTransitionTime:2022-11-25 19:55:22 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2022-11-25 19:55:23 +0000 UTC,LastTransitionTime:2022-11-25 19:55:22 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2022-11-25 19:55:23 +0000 UTC,LastTransitionTime:2022-11-25 19:55:22 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-11-25 19:55:25 +0000 UTC,LastTransitionTime:2022-11-25 19:55:25 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-11-25 19:55:49 +0000 UTC,LastTransitionTime:2022-11-25 19:55:19 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-11-25 19:55:49 +0000 UTC,LastTransitionTime:2022-11-25 19:55:19 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-11-25 19:55:49 +0000 UTC,LastTransitionTime:2022-11-25 19:55:19 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-11-25 19:55:49 +0000 UTC,LastTransitionTime:2022-11-25 19:55:20 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.5,},NodeAddress{Type:ExternalIP,Address:35.203.149.250,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-blng.c.k8s-jkns-e2e-gce-serial-1-2.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-blng.c.k8s-jkns-e2e-gce-serial-1-2.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:e70187ce3b5feec7f25489fd3f9573af,SystemUUID:e70187ce-3b5f-eec7-f254-89fd3f9573af,BootID:46b31dcb-9921-407c-93f6-58baec741ba2,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.0-149-gd06318622,KubeletVersion:v1.27.0-alpha.0.48+6bdda2da160043,KubeProxyVersion:v1.27.0-alpha.0.48+6bdda2da160043,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.0.48_6bdda2da160043],SizeBytes:67201224,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/metrics-server/metrics-server@sha256:6385aec64bb97040a5e692947107b81e178555c7a5b71caa90d733e4130efc10 registry.k8s.io/metrics-server/metrics-server:v0.5.2],SizeBytes:26023008,},ContainerImage{Names:[registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280 registry.k8s.io/sig-storage/snapshot-controller:v6.1.0],SizeBytes:22620891,},ContainerImage{Names:[registry.k8s.io/cpa/cluster-proportional-autoscaler@sha256:fd636b33485c7826fb20ef0688a83ee0910317dbb6c0c6f3ad14661c1db25def registry.k8s.io/cpa/cluster-proportional-autoscaler:1.8.4],SizeBytes:15209393,},ContainerImage{Names:[registry.k8s.io/coredns/coredns@sha256:8e352a029d304ca7431c6507b56800636c321cb52289686a581ab70aaa8a2e2a registry.k8s.io/coredns/coredns:v1.9.3],SizeBytes:14837849,},ContainerImage{Names:[registry.k8s.io/autoscaling/addon-resizer@sha256:43f129b81d28f0fdd54de6d8e7eacd5728030782e03db16087fc241ad747d3d6 registry.k8s.io/autoscaling/addon-resizer:1.8.14],SizeBytes:10153852,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:48f2a4ec3e10553a81b8dd1c6fa5fe4bcc9617f78e71c1ca89c6921335e2d7da registry.k8s.io/kas-network-proxy/proxy-agent:v0.0.33],SizeBytes:8512162,},ContainerImage{Names:[registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64@sha256:7eb7b3cee4d33c10c49893ad3c386232b86d4067de5251294d4c620d6e072b93 registry.k8s.io/networking/ingress-gce-404-server-with-metrics-amd64:v1.10.11],SizeBytes:6463068,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Nov 25 19:58:28.877: INFO: Logging kubelet events for node bootstrap-e2e-minion-group-blng Nov 25 19:58:29.078: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-blng Nov 25 19:58:29.297: INFO: kube-dns-autoscaler-5f6455f985-s5vgm started at 2022-11-25 19:55:25 +0000 UTC (0+1 container statuses recorded) Nov 25 19:58:29.297: INFO: Container autoscaler ready: true, restart 
count 0 Nov 25 19:58:29.297: INFO: metrics-server-v0.5.2-6764bf875c-fr576 started at 2022-11-25 19:55:25 +0000 UTC (0+2 container statuses recorded) Nov 25 19:58:29.297: INFO: Container metrics-server ready: false, restart count 3 Nov 25 19:58:29.297: INFO: Container metrics-server-nanny ready: true, restart count 4 Nov 25 19:58:29.297: INFO: konnectivity-agent-qv9km started at 2022-11-25 19:55:25 +0000 UTC (0+1 container statuses recorded) Nov 25 19:58:29.297: INFO: Container konnectivity-agent ready: true, restart count 0 Nov 25 19:58:29.297: INFO: kube-proxy-bootstrap-e2e-minion-group-blng started at 2022-11-25 19:55:19 +0000 UTC (0+1 container statuses recorded) Nov 25 19:58:29.297: INFO: Container kube-proxy ready: true, restart count 1 Nov 25 19:58:29.297: INFO: metadata-proxy-v0.1-prrpq started at 2022-11-25 19:55:20 +0000 UTC (0+2 container statuses recorded) Nov 25 19:58:29.297: INFO: Container metadata-proxy ready: true, restart count 0 Nov 25 19:58:29.297: INFO: Container prometheus-to-sd-exporter ready: true, restart count 0 Nov 25 19:58:29.297: INFO: volume-snapshot-controller-0 started at 2022-11-25 19:55:25 +0000 UTC (0+1 container statuses recorded) Nov 25 19:58:29.297: INFO: Container volume-snapshot-controller ready: true, restart count 3 Nov 25 19:58:29.297: INFO: coredns-6d97d5ddb-g2nmr started at 2022-11-25 19:55:33 +0000 UTC (0+1 container statuses recorded) Nov 25 19:58:29.297: INFO: Container coredns ready: true, restart count 2 Nov 25 19:58:29.297: INFO: coredns-6d97d5ddb-j7sb8 started at 2022-11-25 19:55:25 +0000 UTC (0+1 container statuses recorded) Nov 25 19:58:29.297: INFO: Container coredns ready: true, restart count 0 Nov 25 19:58:29.297: INFO: l7-default-backend-8549d69d99-nvg98 started at 2022-11-25 19:55:25 +0000 UTC (0+1 container statuses recorded) Nov 25 19:58:29.297: INFO: Container default-http-backend ready: true, restart count 0 Nov 25 19:58:29.914: INFO: Latency metrics for node bootstrap-e2e-minion-group-blng Nov 25 19:58:29.914: INFO: Logging node info for node bootstrap-e2e-minion-group-lmt7 Nov 25 19:58:30.077: INFO: Node Info: &Node{ObjectMeta:{bootstrap-e2e-minion-group-lmt7 2ec4ad7e-2efd-432a-a9d7-0d36ca3030cc 664 0 2022-11-25 19:55:20 +0000 UTC <nil> <nil> map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/instance-type:n1-standard-2 beta.kubernetes.io/os:linux cloud.google.com/metadata-proxy-ready:true failure-domain.beta.kubernetes.io/region:us-west1 failure-domain.beta.kubernetes.io/zone:us-west1-b kubernetes.io/arch:amd64 kubernetes.io/hostname:bootstrap-e2e-minion-group-lmt7 kubernetes.io/os:linux node.kubernetes.io/instance-type:n1-standard-2 topology.kubernetes.io/region:us-west1 topology.kubernetes.io/zone:us-west1-b] map[node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] [] [{kubelet Update v1 2022-11-25 19:55:20 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/instance-type":{},"f:beta.kubernetes.io/os":{},"f:cloud.google.com/metadata-proxy-ready":{},"f:failure-domain.beta.kubernetes.io/region":{},"f:failure-domain.beta.kubernetes.io/zone":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{},"f:node.kubernetes.io/instance-type":{},"f:topology.kubernetes.io/region":{},"f:topology.kubernetes.io/zone":{}}},"f:spec":{"f:providerID":{}}} } {kube-controller-manager Update v1 2022-11-25 19:55:21 +0000 UTC FieldsV1 
{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.64.2.0/24\"":{}}}} } {node-problem-detector Update v1 2022-11-25 19:55:25 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"CorruptDockerOverlay2\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentContainerdRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentDockerRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentKubeletRestart\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"FrequentUnregisterNetDevice\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"KernelDeadlock\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"ReadonlyFilesystem\"}":{".":{},"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}}}} status} {kube-controller-manager Update v1 2022-11-25 19:55:40 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"NetworkUnavailable\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}}}} status} {kubelet Update v1 2022-11-25 19:55:51 +0000 UTC FieldsV1 {"f:status":{"f:conditions":{"k:{\"type\":\"DiskPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"MemoryPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"PIDPressure\"}":{"f:lastHeartbeatTime":{}},"k:{\"type\":\"Ready\"}":{"f:lastHeartbeatTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{}}},"f:images":{}}} status}]},Spec:NodeSpec{PodCIDR:10.64.2.0/24,DoNotUseExternalID:,ProviderID:gce://k8s-jkns-e2e-gce-serial-1-2/us-west1-b/bootstrap-e2e-minion-group-lmt7,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[10.64.2.0/24],},Status:NodeStatus{Capacity:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{101203873792 0} {<nil>} 98831908Ki BinarySI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7815438336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Allocatable:ResourceList{attachable-volumes-gce-pd: {{127 0} {<nil>} 127 DecimalSI},cpu: {{2 0} {<nil>} 2 DecimalSI},ephemeral-storage: {{91083486262 0} {<nil>} 91083486262 DecimalSI},hugepages-1Gi: {{0 0} {<nil>} 0 DecimalSI},hugepages-2Mi: {{0 0} {<nil>} 0 DecimalSI},memory: {{7553294336 0} {<nil>} BinarySI},pods: {{110 0} {<nil>} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:FrequentKubeletRestart,Status:False,LastHeartbeatTime:2022-11-25 19:55:25 +0000 UTC,LastTransitionTime:2022-11-25 19:55:24 +0000 UTC,Reason:NoFrequentKubeletRestart,Message:kubelet is functioning properly,},NodeCondition{Type:FrequentDockerRestart,Status:False,LastHeartbeatTime:2022-11-25 19:55:25 +0000 UTC,LastTransitionTime:2022-11-25 19:55:24 +0000 UTC,Reason:NoFrequentDockerRestart,Message:docker is functioning properly,},NodeCondition{Type:FrequentContainerdRestart,Status:False,LastHeartbeatTime:2022-11-25 19:55:25 +0000 
UTC,LastTransitionTime:2022-11-25 19:55:24 +0000 UTC,Reason:NoFrequentContainerdRestart,Message:containerd is functioning properly,},NodeCondition{Type:KernelDeadlock,Status:False,LastHeartbeatTime:2022-11-25 19:55:25 +0000 UTC,LastTransitionTime:2022-11-25 19:55:24 +0000 UTC,Reason:KernelHasNoDeadlock,Message:kernel has no deadlock,},NodeCondition{Type:ReadonlyFilesystem,Status:False,LastHeartbeatTime:2022-11-25 19:55:25 +0000 UTC,LastTransitionTime:2022-11-25 19:55:24 +0000 UTC,Reason:FilesystemIsNotReadOnly,Message:Filesystem is not read-only,},NodeCondition{Type:CorruptDockerOverlay2,Status:False,LastHeartbeatTime:2022-11-25 19:55:25 +0000 UTC,LastTransitionTime:2022-11-25 19:55:24 +0000 UTC,Reason:NoCorruptDockerOverlay2,Message:docker overlay2 is functioning properly,},NodeCondition{Type:FrequentUnregisterNetDevice,Status:False,LastHeartbeatTime:2022-11-25 19:55:25 +0000 UTC,LastTransitionTime:2022-11-25 19:55:24 +0000 UTC,Reason:NoFrequentUnregisterNetDevice,Message:node is functioning properly,},NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2022-11-25 19:55:40 +0000 UTC,LastTransitionTime:2022-11-25 19:55:40 +0000 UTC,Reason:RouteCreated,Message:RouteController created a route,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2022-11-25 19:55:51 +0000 UTC,LastTransitionTime:2022-11-25 19:55:20 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2022-11-25 19:55:51 +0000 UTC,LastTransitionTime:2022-11-25 19:55:20 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2022-11-25 19:55:51 +0000 UTC,LastTransitionTime:2022-11-25 19:55:20 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2022-11-25 19:55:51 +0000 UTC,LastTransitionTime:2022-11-25 19:55:21 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.138.0.3,},NodeAddress{Type:ExternalIP,Address:34.145.65.21,},NodeAddress{Type:InternalDNS,Address:bootstrap-e2e-minion-group-lmt7.c.k8s-jkns-e2e-gce-serial-1-2.internal,},NodeAddress{Type:Hostname,Address:bootstrap-e2e-minion-group-lmt7.c.k8s-jkns-e2e-gce-serial-1-2.internal,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:ccebcc399520c40edcaf4d966093a26d,SystemUUID:ccebcc39-9520-c40e-dcaf-4d966093a26d,BootID:9c47d05b-3472-437e-8398-7a2f5ac36d56,KernelVersion:5.10.123+,OSImage:Container-Optimized OS from Google,ContainerRuntimeVersion:containerd://1.7.0-beta.0-149-gd06318622,KubeletVersion:v1.27.0-alpha.0.48+6bdda2da160043,KubeProxyVersion:v1.27.0-alpha.0.48+6bdda2da160043,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[registry.k8s.io/kube-proxy-amd64:v1.27.0-alpha.0.48_6bdda2da160043],SizeBytes:67201224,},ContainerImage{Names:[gke.gcr.io/prometheus-to-sd@sha256:e739643c3939ba0b161425f45a1989eedfc4a3b166db9a7100863296b4c70510 gke.gcr.io/prometheus-to-sd:v0.11.1-gke.1],SizeBytes:48742566,},ContainerImage{Names:[registry.k8s.io/kas-network-proxy/proxy-agent@sha256:48f2a4ec3e10553a81b8dd1c6fa5fe4bcc9617f78e71c1ca89c6921335e2d7da registry.k8s.io/kas-network-proxy/proxy-agent:v0.0.33],SizeBytes:8512162,},ContainerImage{Names:[registry.k8s.io/metadata-proxy@sha256:e914645f22e946bce5165737e1b244e0a296ad1f0f81a9531adc57af2780978a registry.k8s.io/metadata-proxy:v0.1.12],SizeBytes:5301657,},ContainerImage{Names:[registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d registry.k8s.io/pause:3.8],SizeBytes:311286,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},} Nov 25 19:58:30.077: INFO: Logging kubelet events for node bootstrap-e2e-minion-group-lmt7 Nov 25 19:58:30.277: INFO: Logging pods the kubelet thinks is on node bootstrap-e2e-minion-group-lmt7 Nov 25 19:58:30.481: INFO: kube-proxy-bootstrap-e2e-minion-group-lmt7 started at 2022-11-25 19:55:21 +0000 UTC (0+1 container statuses recorded) Nov 25 19:58:30.481: INFO: Container kube-proxy ready: false, restart count 2 Nov 25 19:58:30.481: INFO: metadata-proxy-v0.1-cdhvh started at 2022-11-25 19:55:21 +0000 UTC (0+2 container statuses recorded) Nov 25 19:58:30.481: INFO: Container metadata-proxy ready: true, restart count 0 Nov 25 19:58:30.481: INFO: Container prometheus-to-sd-exporter ready: true, restart count 0 Nov 25 19:58:30.481: INFO: konnectivity-agent-vkqgg started at 2022-11-25 19:55:40 +0000 UTC (0+1 container statuses recorded) Nov 25 19:58:30.481: INFO: Container konnectivity-agent ready: true, restart count 0 Nov 25 19:58:31.100: INFO: Latency metrics for node bootstrap-e2e-minion-group-lmt7 Nov 25 19:58:31.289: INFO: Running kubectl logs on non-ready containers in kube-system Nov 25 19:58:31.479: INFO: Logs of kube-system/kube-apiserver-bootstrap-e2e-master:kube-apiserver on node bootstrap-e2e-master Nov 25 19:58:31.479: INFO: : STARTLOG 2022/11/25 19:58:08 Running command: Command env: (log-file=/var/log/kube-apiserver.log, also-stdout=false, redirect-stderr=true) Run from directory: Executable path: /usr/local/bin/kube-apiserver Args (comma-delimited): 
/usr/local/bin/kube-apiserver,--allow-privileged=true,--v=4,--runtime-config=extensions/v1beta1,scheduling.k8s.io/v1alpha1,--delete-collection-workers=1,--cloud-config=/etc/gce.conf,--allow-privileged=true,--cloud-provider=gce,--client-ca-file=/etc/srv/kubernetes/pki/ca-certificates.crt,--etcd-servers=https://127.0.0.1:2379,--etcd-cafile=/etc/srv/kubernetes/pki/etcd-apiserver-ca.crt,--etcd-certfile=/etc/srv/kubernetes/pki/etcd-apiserver-client.crt,--etcd-keyfile=/etc/srv/kubernetes/pki/etcd-apiserver-client.key,--etcd-servers-overrides=/events#http://127.0.0.1:4002,--storage-backend=etcd3,--secure-port=443,--tls-cert-file=/etc/srv/kubernetes/pki/apiserver.crt,--tls-private-key-file=/etc/srv/kubernetes/pki/apiserver.key,--kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname,--requestheader-client-ca-file=/etc/srv/kubernetes/pki/aggr_ca.crt,--requestheader-allowed-names=aggregator,--requestheader-extra-headers-prefix=X-Remote-Extra-,--requestheader-group-headers=X-Remote-Group,--requestheader-username-headers=X-Remote-User,--proxy-client-cert-file=/etc/srv/kubernetes/pki/proxy_client.crt,--proxy-client-key-file=/etc/srv/kubernetes/pki/proxy_client.key,--enable-aggregator-routing=true,--kubelet-client-certificate=/etc/srv/kubernetes/pki/apiserver-client.crt,--kubelet-client-key=/etc/srv/kubernetes/pki/apiserver-client.key,--service-account-key-file=/etc/srv/kubernetes/pki/serviceaccount.crt,--token-auth-file=/etc/srv/kubernetes/known_tokens.csv,--service-cluster-ip-range=10.0.0.0/16,--service-account-issuer=https://kubernetes.default.svc.cluster.local,--api-audiences=https://kubernetes.default.svc.cluster.local,--service-account-signing-key-file=/etc/srv/kubernetes/pki/serviceaccount.key,--audit-policy-file=/etc/audit_policy.config,--audit-log-path=/var/log/kube-apiserver-audit.log,--audit-log-maxage=0,--audit-log-maxbackup=0,--audit-log-maxsize=2000000000,--audit-log-mode=batch,--audit-log-truncate-enabled=true,--enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,PersistentVolumeLabel,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,Priority,StorageObjectInUseProtection,PersistentVolumeClaimResize,RuntimeClass,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,--admission-control-config-file=/etc/srv/kubernetes/admission_controller_config.yaml,--min-request-timeout=300,--advertise-address=34.127.41.66,--authorization-mode=Node,RBAC,--egress-selector-config-file=/etc/srv/kubernetes/egress_selector_configuration.yaml 2022/11/25 19:58:08 Now listening for interrupts ENDLOG for container kube-system:kube-apiserver-bootstrap-e2e-master:kube-apiserver Nov 25 19:58:31.680: INFO: Logs of kube-system/kube-proxy-bootstrap-e2e-minion-group-lmt7:kube-proxy on node bootstrap-e2e-minion-group-lmt7 Nov 25 19:58:31.680: INFO: : STARTLOG ENDLOG for container kube-system:kube-proxy-bootstrap-e2e-minion-group-lmt7:kube-proxy Nov 25 19:58:31.881: INFO: Logs of kube-system/metrics-server-v0.5.2-6764bf875c-fr576:metrics-server on node bootstrap-e2e-minion-group-blng Nov 25 19:58:31.881: INFO: : STARTLOG Error: unable to load configmap based request-header-client-ca-file: Get "https://10.0.0.1:443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": stream error: stream ID 1; INTERNAL_ERROR Usage: [flags] Metrics server flags: --kubeconfig string The path to the kubeconfig used to connect to the Kubernetes API server and the Kubelets (defaults to in-cluster config) --metric-resolution duration The resolution at which metrics-server 
will retain metrics, must set value at least 10s. (default 1m0s) --version Show version Kubelet client flags: --deprecated-kubelet-completely-insecure DEPRECATED: Do not use any encryption, authorization, or authentication when communicating with the Kubelet. This is rarely the right option, since it leaves kubelet communication completely insecure. If you encounter auth errors, make sure you've enabled token webhook auth on the Kubelet, and if you're in a test cluster with self-signed Kubelet certificates, consider using kubelet-insecure-tls instead. --kubelet-certificate-authority string Path to the CA to use to validate the Kubelet's serving certificates. --kubelet-client-certificate string Path to a client cert file for TLS. --kubelet-client-key string Path to a client key file for TLS. --kubelet-insecure-tls Do not verify CA of serving certificates presented by Kubelets. For testing purposes only. --kubelet-port int The port to use to connect to Kubelets. (default 10250) --kubelet-preferred-address-types strings The priority of node address types to use when determining which address to use to connect to a particular node (default [Hostname,InternalDNS,InternalIP,ExternalDNS,ExternalIP]) --kubelet-use-node-status-port Use the port in the node status. Takes precedence over --kubelet-port flag. Apiserver secure serving flags: --bind-address ip The IP address on which to listen for the --secure-port port. The associated interface(s) must be reachable by the rest of the cluster, and by CLI/web clients. If blank or an unspecified address (0.0.0.0 or ::), all interfaces will be used. (default 0.0.0.0) --cert-dir string The directory where the TLS certs are located. If --tls-cert-file and --tls-private-key-file are provided, this flag will be ignored. (default "apiserver.local.config/certificates") --http2-max-streams-per-connection int The limit that the server gives to clients for the maximum number of streams in an HTTP/2 connection. Zero means to use golang's default. --permit-address-sharing If true, SO_REUSEADDR will be used when binding the port. This allows binding to wildcard IPs like 0.0.0.0 and specific IPs in parallel, and it avoids waiting for the kernel to release sockets in TIME_WAIT state. [default=false] --permit-port-sharing If true, SO_REUSEPORT will be used when binding the port, which allows more than one instance to bind on the same address and port. [default=false] --secure-port int The port on which to serve HTTPS with authentication and authorization. If 0, don't serve HTTPS at all. (default 443) --tls-cert-file string File containing the default x509 Certificate for HTTPS. (CA cert, if any, concatenated after server cert). If HTTPS serving is enabled, and --tls-cert-file and --tls-private-key-file are not provided, a self-signed certificate and key are generated for the public address and saved to the directory specified by --cert-dir. --tls-cipher-suites strings Comma-separated list of cipher suites for the server. If omitted, the default Go cipher suites will be used. 
Preferred values: TLS_AES_128_GCM_SHA256, TLS_AES_256_GCM_SHA384, TLS_CHACHA20_POLY1305_SHA256, TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA, TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256, TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA, TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384, TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305, TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256, TLS_ECDHE_RSA_WITH_3DES_EDE_CBC_SHA, TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA, TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256, TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA, TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384, TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305, TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256, TLS_RSA_WITH_3DES_EDE_CBC_SHA, TLS_RSA_WITH_AES_128_CBC_SHA, TLS_RSA_WITH_AES_128_GCM_SHA256, TLS_RSA_WITH_AES_256_CBC_SHA, TLS_RSA_WITH_AES_256_GCM_SHA384. Insecure values: TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256, TLS_ECDHE_ECDSA_WITH_RC4_128_SHA, TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256, TLS_ECDHE_RSA_WITH_RC4_128_SHA, TLS_RSA_WITH_AES_128_CBC_SHA256, TLS_RSA_WITH_RC4_128_SHA. --tls-min-version string Minimum TLS version supported. Possible values: VersionTLS10, VersionTLS11, VersionTLS12, VersionTLS13 --tls-private-key-file string File containing the default x509 private key matching --tls-cert-file. --tls-sni-cert-key namedCertKey A pair of x509 certificate and private key file paths, optionally suffixed with a list of domain patterns which are fully qualified domain names, possibly with prefixed wildcard segments. The domain patterns also allow IP addresses, but IPs should only be used if the apiserver has visibility to the IP address requested by a client. If no domain patterns are provided, the names of the certificate are extracted. Non-wildcard matches trump over wildcard matches, explicit domain patterns trump over extracted names. For multiple key/certificate pairs, use the --tls-sni-cert-key multiple times. Examples: "example.crt,example.key" or "foo.crt,foo.key:*.foo.com,foo.com". (default []) Apiserver authentication flags: --authentication-kubeconfig string kubeconfig file pointing at the 'core' kubernetes server with enough rights to create tokenreviews.authentication.k8s.io. --authentication-skip-lookup If false, the authentication-kubeconfig will be used to lookup missing authentication configuration from the cluster. --authentication-token-webhook-cache-ttl duration The duration to cache responses from the webhook token authenticator. (default 10s) --authentication-tolerate-lookup-failure If true, failures to look up missing authentication configuration from the cluster are not considered fatal. Note that this can result in authentication that treats all requests as anonymous. --client-ca-file string If set, any request presenting a client certificate signed by one of the authorities in the client-ca-file is authenticated with an identity corresponding to the CommonName of the client certificate. --requestheader-allowed-names strings List of client certificate common names to allow to provide usernames in headers specified by --requestheader-username-headers. If empty, any client certificate validated by the authorities in --requestheader-client-ca-file is allowed. --requestheader-client-ca-file string Root certificate bundle to use to verify client certificates on incoming requests before trusting usernames in headers specified by --requestheader-username-headers. WARNING: generally do not depend on authorization being already done for incoming requests. --requestheader-extra-headers-prefix strings List of request header prefixes to inspect. X-Remote-Extra- is suggested. 
(default [x-remote-extra-]) --requestheader-group-headers strings List of request headers to inspect for groups. X-Remote-Group is suggested. (default [x-remote-group]) --requestheader-username-headers strings List of request headers to inspect for usernames. X-Remote-User is common. (default [x-remote-user]) Apiserver authorization flags: --authorization-always-allow-paths strings A list of HTTP paths to skip during authorization, i.e. these are authorized without contacting the 'core' kubernetes server. (default [/healthz,/readyz,/livez]) --authorization-kubeconfig string kubeconfig file pointing at the 'core' kubernetes server with enough rights to create subjectaccessreviews.authorization.k8s.io. --authorization-webhook-cache-authorized-ttl duration The duration to cache 'authorized' responses from the webhook authorizer. (default 10s) --authorization-webhook-cache-unauthorized-ttl duration The duration to cache 'unauthorized' responses from the webhook authorizer. (default 10s) Features flags: --contention-profiling Enable lock contention profiling, if profiling is enabled --profiling Enable profiling via web interface host:port/debug/pprof/ (default true) Logging flags: --add_dir_header If true, adds the file directory to the header of the log messages --alsologtostderr log to standard error as well as files --log_backtrace_at traceLocation when logging hits line file:N, emit a stack trace (default :0) --log_dir string If non-empty, write log files in this directory --log_file string If non-empty, use this log file --log_file_max_size uint Defines the maximum size a log file can grow to. Unit is megabytes. If the value is 0, the maximum file size is unlimited. (default 1800) --logtostderr log to standard error instead of files (default true) --one_output If true, only write logs to their native severity level (vs also writing to each lower severity level) --skip_headers If true, avoid header prefixes in the log messages --skip_log_headers If true, avoid headers when opening log files --stderrthreshold severity logs at or above this threshold go to stderr (default 2) -v, --v Level number for the log level verbosity --vmodule moduleSpec comma-separated list of pattern=N settings for file-filtered logging panic: unable to load configmap based request-header-client-ca-file: Get "https://10.0.0.1:443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": stream error: stream ID 1; INTERNAL_ERROR goroutine 1 [running]: main.main() /go/src/sigs.k8s.io/metrics-server/cmd/metrics-server/metrics-server.go:39 +0x105 ENDLOG for container kube-system:metrics-server-v0.5.2-6764bf875c-fr576:metrics-server Nov 25 19:58:32.081: INFO: Logs of kube-system/metrics-server-v0.5.2-6764bf875c-fr576:metrics-server-nanny on node bootstrap-e2e-minion-group-blng Nov 25 19:58:32.081: INFO: : STARTLOG ERROR: logging before flag.Parse: I1125 19:58:03.391765 1 pod_nanny.go:68] Invoked by [/pod_nanny --config-dir=/etc/config --cpu=40m --extra-cpu=0.5m --memory=40Mi --extra-memory=4Mi --threshold=5 --deployment=metrics-server-v0.5.2 --container=metrics-server --poll-period=30000 --estimator=exponential --minClusterSize=16 --use-metrics=true] ERROR: logging before flag.Parse: I1125 19:58:03.391827 1 pod_nanny.go:69] Version: 1.8.14 ERROR: logging before flag.Parse: I1125 19:58:03.391846 1 pod_nanny.go:85] Watching namespace: kube-system, pod: metrics-server-v0.5.2-6764bf875c-fr576, container: metrics-server. 
ERROR: logging before flag.Parse: I1125 19:58:03.391883 1 pod_nanny.go:86] storage: MISSING, extra_storage: 0Gi ERROR: logging before flag.Parse: I1125 19:58:03.483078 1 pod_nanny.go:116] cpu: 40m, extra_cpu: 0.5m, memory: 40Mi, extra_memory: 4Mi ERROR: logging before flag.Parse: I1125 19:58:03.483111 1 pod_nanny.go:145] Resources: [{Base:{i:{value:40 scale:-3} d:{Dec:<nil>} s:40m Format:DecimalSI} ExtraPerNode:{i:{value:5 scale:-4} d:{Dec:<nil>} s: Format:DecimalSI} Name:cpu} {Base:{i:{value:41943040 scale:0} d:{Dec:<nil>} s: Format:BinarySI} ExtraPerNode:{i:{value:4194304 scale:0} d:{Dec:<nil>} s:4Mi Format:BinarySI} Name:memory}] ERROR: logging before flag.Parse: E1125 19:58:03.484184 1 nanny_lib.go:128] Get "https://10.0.0.1:443/metrics": dial tcp 10.0.0.1:443: connect: connection refused ENDLOG for container kube-system:metrics-server-v0.5.2-6764bf875c-fr576:metrics-server-nanny Nov 25 19:58:32.081: FAIL: Error waiting for all pods to be running and ready: 0 / 0 pods in namespace kube-system are NOT in RUNNING and READY state in 10m0s POD NODE PHASE GRACE CONDITIONS Last error: Get "https://34.127.41.66/api/v1/namespaces/kube-system/replicationcontrollers": net/http: TLS handshake timeout - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=3, ErrCode=NO_ERROR, debug="" Full Stack Trace k8s.io/kubernetes/test/e2e.setupSuite() test/e2e/e2e.go:249 +0x4de k8s.io/kubernetes/test/e2e.glob..func1() test/e2e/e2e.go:81 +0x8f reflect.Value.call({0x66a9bc0?, 0x78952d0?, 0x13?}, {0x75b6e72, 0x4}, {0xc0000c8f20, 0x0, 0x0?}) /usr/local/go/src/reflect/value.go:584 +0x8c5 reflect.Value.Call({0x66a9bc0?, 0x78952d0?, 0x26cd5ed?}, {0xc0004b6f20?, 0x265bb67?, 0xc0004b6f20?}) /usr/local/go/src/reflect/value.go:368 +0xbc
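The failure above is the suite-level readiness gate in test/e2e/e2e.go: by the 10m0s deadline the metrics-server pod and the master's metadata-proxy pod were still not ready, the kube-apiserver container showed restart count 2, and the final list of kube-system replicationcontrollers failed with a TLS handshake timeout after the server sent GOAWAY. A minimal triage sketch against the same cluster could look like the commands below (a sketch only: it assumes a kubeconfig pointing at the bootstrap-e2e cluster, and the pod names are copied from this run's log, so they will differ on other runs):
# check addon pod state and restart counts in kube-system
kubectl -n kube-system get pods -o wide
# inspect the apiserver static pod for restarts and recent events
kubectl -n kube-system describe pod kube-apiserver-bootstrap-e2e-master
# pull the previous (crashed) metrics-server container log
kubectl -n kube-system logs metrics-server-v0.5.2-6764bf875c-fr576 -c metrics-server --previous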
error during ./hack/ginkgo-e2e.sh --ginkgo.focus=\[Slow\] --ginkgo.skip=\[Driver:.gcepd\]|\[Serial\]|\[Disruptive\]|\[Flaky\]|\[Feature:.+\] --minStartupPods=8 --report-dir=/workspace/_artifacts --disable-log-dump=true: exit status 1
from junit_runner.xml
error during ./cluster/kubectl.sh --match-server-version=false get nodes -oyaml: exit status 1
from junit_runner.xml
Kubernetes e2e suite [ReportAfterSuite] Kubernetes e2e suite report
Kubernetes e2e suite [SynchronizedAfterSuite]
Kubernetes e2e suite [SynchronizedAfterSuite]
Kubernetes e2e suite [SynchronizedAfterSuite]
Kubernetes e2e suite [SynchronizedAfterSuite]
Kubernetes e2e suite [SynchronizedAfterSuite]
Kubernetes e2e suite [SynchronizedAfterSuite]
Kubernetes e2e suite [SynchronizedAfterSuite]
Kubernetes e2e suite [SynchronizedAfterSuite]
Kubernetes e2e suite [SynchronizedAfterSuite]
Kubernetes e2e suite [SynchronizedAfterSuite]
Kubernetes e2e suite [SynchronizedAfterSuite]
Kubernetes e2e suite [SynchronizedAfterSuite]
Kubernetes e2e suite [SynchronizedAfterSuite]
Kubernetes e2e suite [SynchronizedAfterSuite]
Kubernetes e2e suite [SynchronizedAfterSuite]
Kubernetes e2e suite [SynchronizedAfterSuite]
Kubernetes e2e suite [SynchronizedAfterSuite]
Kubernetes e2e suite [SynchronizedAfterSuite]
Kubernetes e2e suite [SynchronizedAfterSuite]
Kubernetes e2e suite [SynchronizedAfterSuite]
Kubernetes e2e suite [SynchronizedAfterSuite]
Kubernetes e2e suite [SynchronizedAfterSuite]
Kubernetes e2e suite [SynchronizedAfterSuite]
Kubernetes e2e suite [SynchronizedAfterSuite]
Kubernetes e2e suite [SynchronizedAfterSuite]
kubetest Check APIReachability
kubetest Deferred TearDown
kubetest DumpClusterLogs
kubetest Extract
kubetest GetDeployer
kubetest IsUp
kubetest Prepare
kubetest TearDown
kubetest TearDown Previous
kubetest Timeout
kubetest Up
kubetest diffResources
kubetest kubectl version
kubetest listResources After
kubetest listResources Before
kubetest listResources Down
kubetest listResources Up
kubetest test setup