Facing issue with Logs ingestion from fluent-bit to Elasticsearch #10205
Unanswered
mohijeet-changejar asked this question in Q&A
Replies: 0 comments
Hi everyone, we are using Fluent Bit and Elasticsearch for our logging pipeline. We ingest roughly 200k-300k logs per second, which is handled by a multi-node Elasticsearch cluster with 5 nodes in total.

I opened a discussion about 429 errors from Elasticsearch when Fluent Bit sends logs, and from that discussion it seems the way Fluent Bit sends logs is the issue. Previously I was using a Flush interval of 1 second, which was overwhelming Elasticsearch. I changed it to 2 seconds, which sounds like a small change, but I saw the thread_pool queue in Elasticsearch shrink significantly.

The current configuration is fine for Elasticsearch, but it has started causing issues on the Fluent Bit side: frequent pauses of the tail plugin, delayed logs, and increased pod memory usage. Is there anything I can do? Increasing the memory buffer is not an option for me; it is already very high (Mem_Buf_Limit 1024MB).

Is there any way to increase the amount of data Fluent Bit sends in each bulk request? And what is the default amount of data sent per bulk request? Any other recommendations are also welcome.
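For reference, here is a minimal sketch of the kind of configuration described above. The host, path, and tag values are placeholders, not our actual production values:

```ini
[SERVICE]
    # Flush interval raised from 1s to 2s to reduce pressure on Elasticsearch
    Flush            2

[INPUT]
    Name             tail
    Path             /var/log/containers/*.log   # placeholder path
    Tag              app.*                       # placeholder tag
    # Already very high; increasing it further is not an option for us
    Mem_Buf_Limit    1024MB

[OUTPUT]
    Name             es
    Match            *
    Host             elasticsearch.example.internal   # placeholder host
    Port             9200
```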