diff --git a/content/en/logs/guide/log-collection-troubleshooting-guide.md b/content/en/logs/guide/log-collection-troubleshooting-guide.md
index cba0ac10161d4..8f40bcaa06ee0 100644
--- a/content/en/logs/guide/log-collection-troubleshooting-guide.md
+++ b/content/en/logs/guide/log-collection-troubleshooting-guide.md
@@ -1,6 +1,5 @@
 ---
 title: Log Collection Troubleshooting Guide
-
 aliases:
 - /logs/faq/log-collection-troubleshooting-guide
 further_reading:
@@ -223,6 +222,11 @@ When collecting logs from Journald, make sure that the Datadog Agent user is add
 
 **Note**: Journald sends an empty payload if the file permissions are incorrect. Accordingly, it is not possible to raise or send an explicit error message in this case.
 
+## Batch limitation in Kinesis Firehose
+
+Datadog has an intake limit of 65,536 events per batch and recommends setting the Kinesis buffer size to 2 MiB. If your system exceeds this limit, some logs may be dropped. To reduce the number of events per batch, consider lowering the buffer size.
+
+
 ## Configuration issues
 
 These are a few of the common configuration issues that are worth triple-checking in your `datadog-agent` setup:
diff --git a/content/en/logs/guide/send-aws-services-logs-with-the-datadog-kinesis-firehose-destination.md b/content/en/logs/guide/send-aws-services-logs-with-the-datadog-kinesis-firehose-destination.md
index f22606d89f01f..77c6ab3e16623 100644
--- a/content/en/logs/guide/send-aws-services-logs-with-the-datadog-kinesis-firehose-destination.md
+++ b/content/en/logs/guide/send-aws-services-logs-with-the-datadog-kinesis-firehose-destination.md
@@ -29,6 +29,8 @@ AWS fully manages Amazon Data Firehose, so you don't need to maintain any additi
 
 {{< tabs >}}
 {{% tab "Amazon Data Firehose Delivery stream" %}}
+<div class="alert alert-warning">Datadog has an intake limit of 65,536 events per batch and recommends setting the Kinesis buffer size to 2 MiB. If your system exceeds this limit, some logs may be dropped.</div>
+
 Datadog recommends using a Kinesis Data Stream as input when using the Datadog destination with Amazon Data Firehose. It gives you the ability to forward your logs to multiple destinations, in case Datadog is not the only consumer for those logs. If Datadog is the only destination for your logs, or if you already have a Kinesis Data Stream with your logs, you can skip step one.
 
 1. Optionally, use the [Create a Data Stream][1] section of the Amazon Kinesis Data Streams developer guide in AWS to create a new Kinesis data stream. Name the stream something descriptive, like `DatadogLogStream`.
@@ -42,7 +44,7 @@ Datadog recommends using a Kinesis Data Stream as input when using the Datadog d
 1. In the **Destination settings**, choose the `Datadog logs` HTTP endpoint URL that corresponds to your [Datadog site][5].
 1. Paste your API key into the **API key** field. You can get or create an API key from the [Datadog API Keys page][3]. If you prefer to use Secrets Manager authentication, enter your Datadog API key in full JSON format in the value field, as follows: `{"api_key":"<YOUR_API_KEY>"}`.
 1. Optionally, configure the **Retry duration**, the buffer settings, or add **Parameters**, which are attached as tags to your logs.
-   **Note**: Datadog recommends setting the **Buffer size** to `2 MiB` if the logs are single line messages.
+   **Note**: Datadog has an intake limit of 65,536 events per batch and recommends setting the **Buffer size** to `2 MiB` if the logs are single line messages.
 1. In the **Backup settings**, select an S3 backup bucket to receive any failed events that exceed the retry duration.
    **Note**: To ensure any logs that fail through the delivery stream are still sent to Datadog, set the Datadog Forwarder Lambda function to [forward logs][4] from this S3 bucket.
 1. Click **Create Firehose stream**.
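
As a supplement to the console steps this patch documents, below is a minimal boto3 sketch of the same delivery stream setup, showing where the recommended `2 MiB` buffer size lands programmatically. The stream name, ARNs, and API key are placeholders, and the endpoint URL shown is the US1 `Datadog logs` endpoint; substitute the values for your account and Datadog site. The 2 MiB figure pairs with the 65,536-events-per-batch intake limit: 2 MiB ÷ 65,536 events is an average of 32 bytes per event, so batches of single line logs (which are almost always larger than that) fill the buffer before they hit the event cap.

```python
# Sketch: create an Amazon Data Firehose stream that reads from a Kinesis
# Data Stream and delivers to the Datadog logs HTTP endpoint, mirroring the
# console steps above. All names, ARNs, and the API key are placeholders.
import boto3

firehose = boto3.client("firehose", region_name="us-east-1")

firehose.create_delivery_stream(
    DeliveryStreamName="DatadogLogsDelivery",  # hypothetical stream name
    DeliveryStreamType="KinesisStreamAsSource",
    KinesisStreamSourceConfiguration={
        "KinesisStreamARN": "arn:aws:kinesis:us-east-1:123456789012:stream/DatadogLogStream",
        "RoleARN": "arn:aws:iam::123456789012:role/firehose-kinesis-read",
    },
    HttpEndpointDestinationConfiguration={
        # Datadog logs endpoint for the US1 site; use the URL for your site.
        "EndpointConfiguration": {
            "Url": "https://aws-kinesis-http-intake.logs.datadoghq.com/v1/input",
            "Name": "Datadog logs",
            "AccessKey": "<YOUR_API_KEY>",
        },
        # A 2 MiB buffer keeps single line log batches under the
        # 65,536-events-per-batch intake limit.
        "BufferingHints": {"SizeInMBs": 2, "IntervalInSeconds": 60},
        "RetryOptions": {"DurationInSeconds": 60},
        # Events that exhaust the retry duration are backed up to S3, where
        # the Datadog Forwarder Lambda can pick them up.
        "S3BackupMode": "FailedDataOnly",
        "S3Configuration": {
            "RoleARN": "arn:aws:iam::123456789012:role/firehose-s3-backup",
            "BucketARN": "arn:aws:s3:::datadog-firehose-failed-events",
        },
    },
)
```

If your events average under 32 bytes, lowering `SizeInMBs` below 2 reduces the number of events per batch further, matching the guidance in the troubleshooting section above.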