Add batch limitation for Kinesis Firehose #29123

Merged · 2 commits · May 2, 2025

@@ -1,6 +1,5 @@
---
title: Log Collection Troubleshooting Guide

aliases:
- /logs/faq/log-collection-troubleshooting-guide
further_reading:
@@ -223,6 +222,11 @@ When collecting logs from Journald, make sure that the Datadog Agent user is add

**Note**: Journald sends an empty payload if the file permissions are incorrect. Accordingly, it is not possible to raise or send an explicit error message in this case.

## Batch limitation in Kinesis Firehose

Datadog has an intake limit of 65,536 events per batch and recommends setting the Kinesis buffer size to 2 MiB. If a batch exceeds this limit, some logs may be dropped. To reduce the number of events per batch, lower the buffer size.
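
If you want to lower the buffer size on an existing stream programmatically rather than through the console, a minimal boto3 sketch might look like the following. The stream name is a hypothetical placeholder, and the sketch assumes the stream already uses an HTTP endpoint (Datadog) destination.

```python
import boto3

firehose = boto3.client("firehose")

# Fetch the current version and destination IDs, which
# UpdateDestination requires for optimistic locking.
desc = firehose.describe_delivery_stream(
    DeliveryStreamName="DatadogLogStream"  # hypothetical stream name
)["DeliveryStreamDescription"]

# Lower the buffer to 2 MiB so each batch stays under the
# 65,536-events-per-batch intake limit.
firehose.update_destination(
    DeliveryStreamName="DatadogLogStream",
    CurrentDeliveryStreamVersionId=desc["VersionId"],
    DestinationId=desc["Destinations"][0]["DestinationId"],
    HttpEndpointDestinationUpdate={
        "BufferingHints": {"SizeInMBs": 2, "IntervalInSeconds": 60}
    },
)
```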


## Configuration issues

These are a few of the common configuration issues that are worth triple-checking in your `datadog-agent` setup:
@@ -29,6 +29,8 @@ AWS fully manages Amazon Data Firehose, so you don't need to maintain any additi
{{< tabs >}}
{{% tab "Amazon Data Firehose Delivery stream" %}}

<div class="alert alert-info">Datadog has an intake limit of <strong>65,536 events per batch</strong> and recommends setting the Kinesis buffer size to <strong>2 MiB</strong>. If your system exceeds this limit, some logs may be dropped.</div>

Datadog recommends using a Kinesis Data Stream as input when using the Datadog destination with Amazon Data Firehose. This lets you forward your logs to multiple destinations, in case Datadog is not the only consumer for those logs. If Datadog is the only destination for your logs, or if you already have a Kinesis Data Stream that contains your logs, you can skip step one. An API-level sketch of these steps follows the list below.

1. Optionally, use the [Create a Data Stream][1] section of the Amazon Kinesis Data Streams developer guide in AWS to create a new Kinesis data stream. Name the stream something descriptive, like `DatadogLogStream`.
@@ -42,7 +44,7 @@ Datadog recommends using a Kinesis Data Stream as input when using the Datadog d
1. In the **Destination settings**, choose the `Datadog logs` HTTP endpoint URL that corresponds to your [Datadog site][5].
1. Paste your API key into the **API key** field. You can get or create an API key from the [Datadog API Keys page][3]. If you prefer to use Secrets Manager authentication, add your Datadog API key in full JSON format in the value field, as follows: `{"api_key":"<YOUR_API_KEY>"}`.
1. Optionally, configure the **Retry duration**, the buffer settings, or add **Parameters**, which are attached as tags to your logs.
**Note**: Datadog recommends setting the **Buffer size** to `2 MiB` if the logs are single-line messages.
**Note**: Datadog has an intake limit of 65,536 events per batch and recommends setting the **Buffer size** to `2 MiB` if the logs are single-line messages.
1. In the **Backup settings**, select an S3 backup bucket to receive any failed events that exceed the retry duration.
**Note**: To ensure any logs that fail through the delivery stream are still sent to Datadog, set the Datadog Forwarder Lambda function to [forward logs][4] from this S3 bucket.
1. Click **Create Firehose stream**.
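
As referenced above, these console steps map onto the Firehose `CreateDeliveryStream` API. The following is a minimal boto3 sketch under stated assumptions: the stream name, account IDs, role and bucket ARNs, and the example tag are hypothetical placeholders, and the endpoint URL shown is the one for the US1 Datadog site; check the endpoint list for your own site.

```python
import boto3

firehose = boto3.client("firehose")

firehose.create_delivery_stream(
    DeliveryStreamName="DatadogLogStream",  # hypothetical name
    # Step 1: read from an existing Kinesis data stream.
    DeliveryStreamType="KinesisStreamAsSource",
    KinesisStreamSourceConfiguration={
        "KinesisStreamARN": "arn:aws:kinesis:us-east-1:123456789012:stream/DatadogLogStream",
        "RoleARN": "arn:aws:iam::123456789012:role/firehose-kinesis-read",
    },
    HttpEndpointDestinationConfiguration={
        # Destination settings: the Datadog logs endpoint for your site.
        "EndpointConfiguration": {
            "Url": "https://aws-kinesis-http-intake.logs.datadoghq.com/v1/input",
            "Name": "Datadog",
            "AccessKey": "<YOUR_API_KEY>",
        },
        # Buffer settings: 2 MiB keeps each batch under the intake limit.
        "BufferingHints": {"SizeInMBs": 2, "IntervalInSeconds": 60},
        "RetryOptions": {"DurationInSeconds": 60},
        # Parameters attached as tags to your logs.
        "RequestConfiguration": {
            "CommonAttributes": [{"AttributeName": "env", "AttributeValue": "prod"}]
        },
        # Backup settings: send failed events to an S3 bucket.
        "S3BackupMode": "FailedDataOnly",
        "S3Configuration": {
            "RoleARN": "arn:aws:iam::123456789012:role/firehose-s3-backup",
            "BucketARN": "arn:aws:s3:::my-firehose-backup-bucket",
        },
    },
)
```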