Writes a single data record into a Firehose stream. To write multiple data records into a Firehose stream, use PutRecordBatch. Applications using these operations are referred to as producers.
By default, each Firehose stream can take in up to 2,000 transactions per second, 5,000 records per second, or 5 MB per second. If you use PutRecord and PutRecordBatch, the limits are an aggregate across these two operations for each Firehose stream. For more information about limits and how to request an increase, see Amazon Firehose Limits.
Firehose accumulates and publishes a particular metric for a customer account in one-minute intervals. Bursts of incoming bytes or records ingested into a Firehose stream might last only a few seconds, so the actual spikes in traffic might not be fully visible in the customer's one-minute CloudWatch metrics.
You must specify the name of the Firehose stream and the data record when using PutRecord. The data record consists of a data blob that can be up to 1,000 KiB in size and can contain any kind of data. For example, it can be a segment from a log file, geographic location data, website clickstream data, and so on.
For multi-record de-aggregation, you cannot put more than 500 records even if the data blob length is less than 1,000 KiB. If you include more than 500 records, the request succeeds, but the record de-aggregation doesn't work as expected and the transformation Lambda function is invoked with the complete base64-encoded data blob instead of de-aggregated base64-decoded records.
Firehose buffers records before delivering them to the destination. To disambiguate the data blobs at the destination, a common solution is to use delimiters in the data, such as a newline (\n) or some other character unique within the data. This allows the consumer application to parse individual data items when reading the data from the destination.
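For example, a producer can append a newline delimiter to each record before base64-encoding it. The following is a minimal sketch, assuming a unix-like shell and reusing the my-stream name from the example later on this page:

aws firehose put-record \
    --delivery-stream-name my-stream \
    --record "{\"Data\":\"$(printf 'first log line\n' | base64)\"}"

The consumer application can then split the delivered data on newline boundaries after decoding it.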
The PutRecord operation returns a RecordId, which is a unique string assigned to each record. Producer applications can use this ID for purposes such as auditability and investigation.
If the PutRecord operation throws a ServiceUnavailableException, the API is automatically reinvoked (retried) 3 times. If the exception persists, it is possible that the throughput limits have been exceeded for the Firehose stream.
Re-invoking the Put API operations (for example, PutRecord and PutRecordBatch) can result in data duplicates. For larger data assets, allow for a longer timeout before retrying Put API operations.
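From the CLI, one way to allow for a longer timeout is the global --cli-read-timeout option described below. A sketch, again reusing the my-stream name from the example later on this page:

aws firehose put-record \
    --cli-read-timeout 120 \
    --delivery-stream-name my-stream \
    --record '{"Data":"SGVsbG8gd29ybGQ="}'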
Data records sent to Firehose are stored for 24 hours from the time they are added to a Firehose stream as it tries to send the records to the destination. If the destination is unreachable for more than 24 hours, the data is no longer available.
See also: AWS API Documentation
put-record
--delivery-stream-name <value>
--record <value>
[--cli-input-json | --cli-input-yaml]
[--generate-cli-skeleton <value>]
[--debug]
[--endpoint-url <value>]
[--no-verify-ssl]
[--no-paginate]
[--output <value>]
[--query <value>]
[--profile <value>]
[--region <value>]
[--version <value>]
[--color <value>]
[--no-sign-request]
[--ca-bundle <value>]
[--cli-read-timeout <value>]
[--cli-connect-timeout <value>]
[--cli-binary-format <value>]
[--no-cli-pager]
[--cli-auto-prompt]
[--no-cli-auto-prompt]
--delivery-stream-name
(string)
The name of the Firehose stream.
--record
(structure)
The record.
Data -> (blob)
The data blob, which is base64-encoded when the blob is serialized. The maximum size of the data blob, before base64-encoding, is 1,000 KiB.
Shorthand Syntax:
Data=blob
JSON Syntax:
{
"Data": blob
}
--cli-input-json | --cli-input-yaml
(string)
Reads arguments from the JSON string provided. The JSON string follows the format provided by --generate-cli-skeleton. If other arguments are provided on the command line, those values will override the JSON-provided values. It is not possible to pass arbitrary binary values using a JSON-provided value as the string will be taken literally. This may not be specified along with --cli-input-yaml.
--generate-cli-skeleton
(string)
Prints a JSON skeleton to standard output without sending an API request. If provided with no value or the value input, prints a sample input JSON that can be used as an argument for --cli-input-json. Similarly, if provided yaml-input it will print a sample input YAML that can be used with --cli-input-yaml. If provided with the value output, it validates the command inputs and returns a sample output JSON for that command. The generated JSON skeleton is not stable between versions of the AWS CLI and there are no backwards compatibility guarantees in the JSON skeleton generated.
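For example, a typical skeleton workflow is sketched below; put-record.json is a hypothetical file name:

aws firehose put-record --generate-cli-skeleton input > put-record.json
# Fill in DeliveryStreamName and Record.Data in put-record.json, then:
aws firehose put-record --cli-input-json file://put-record.json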
--debug
(boolean)
Turn on debug logging.
--endpoint-url
(string)
Override command’s default URL with the given URL.
--no-verify-ssl
(boolean)
By default, the AWS CLI uses SSL when communicating with AWS services. For each SSL connection, the AWS CLI will verify SSL certificates. This option overrides the default behavior of verifying SSL certificates.
--no-paginate
(boolean)
Disable automatic pagination. If automatic pagination is disabled, the AWS CLI will only make one call, for the first page of results.
--output
(string)
The formatting style for command output.
--query
(string)
A JMESPath query to use in filtering the response data.
--profile
(string)
Use a specific profile from your credential file.
--region
(string)
The region to use. Overrides config/env settings.
--version
(string)
Display the version of this tool.
--color
(string)
Turn on/off color output.
--no-sign-request
(boolean)
Do not sign requests. Credentials will not be loaded if this argument is provided.
--ca-bundle
(string)
The CA certificate bundle to use when verifying SSL certificates. Overrides config/env settings.
--cli-read-timeout
(int)
The maximum socket read time in seconds. If the value is set to 0, the socket read will be blocking and not timeout. The default value is 60 seconds.
--cli-connect-timeout
(int)
The maximum socket connect time in seconds. If the value is set to 0, the socket connect will be blocking and not timeout. The default value is 60 seconds.
--cli-binary-format
(string)
The formatting style to be used for binary blobs. The default format is base64. The base64 format expects binary blobs to be provided as a base64-encoded string. The raw-in-base64-out format preserves compatibility with AWS CLI V1 behavior and binary values must be passed literally. When providing contents from a file that map to a binary blob, fileb:// will always be treated as binary and use the file contents directly regardless of the cli-binary-format setting. When using file://, the file contents will need to be properly formatted for the configured cli-binary-format.
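For instance, with raw-in-base64-out the record data can be passed literally and the CLI performs the base64 encoding. A sketch, reusing the my-stream name from the example below:

aws firehose put-record \
    --cli-binary-format raw-in-base64-out \
    --delivery-stream-name my-stream \
    --record '{"Data":"Hello world"}'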
--no-cli-pager
(boolean)
Disable cli pager for output.
--cli-auto-prompt
(boolean)
Automatically prompt for CLI input parameters.
--no-cli-auto-prompt
(boolean)
Disable the automatic prompt for CLI input parameters.
To use the following examples, you must have the AWS CLI installed and configured. See the Getting started guide in the AWS CLI User Guide for more information.
Unless otherwise stated, all examples have unix-like quotation rules. These examples will need to be adapted to your terminal’s quoting rules. See Using quotation marks with strings in the AWS CLI User Guide.
To write a record to a stream
The following put-record example writes data to a stream. The data is encoded in Base64 format.
aws firehose put-record \
--delivery-stream-name my-stream \
--record '{"Data":"SGVsbG8gd29ybGQ="}'
Output:
{
"RecordId": "RjB5K/nnoGFHqwTsZlNd/TTqvjE8V5dsyXZTQn2JXrdpMTOwssyEb6nfC8fwf1whhwnItt4mvrn+gsqeK5jB7QjuLg283+Ps4Sz/j1Xujv31iDhnPdaLw4BOyM9Amv7PcCuB2079RuM0NhoakbyUymlwY8yt20G8X2420wu1jlFafhci4erAt7QhDEvpwuK8N1uOQ1EuaKZWxQHDzcG6tk1E49IPeD9k",
"Encrypted": false
}
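To capture just the returned ID in a script, the same call can be combined with the global --query and --output options described above. A sketch:

RECORD_ID=$(aws firehose put-record \
    --delivery-stream-name my-stream \
    --record '{"Data":"SGVsbG8gd29ybGQ="}' \
    --query 'RecordId' \
    --output text)
echo "$RECORD_ID"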
For more information, see Sending Data to an Amazon Kinesis Data Firehose Delivery Stream in the Amazon Kinesis Data Firehose Developer Guide.
RecordId -> (string)
The ID of the record.
Encrypted -> (boolean)
Indicates whether server-side encryption (SSE) was enabled during this operation.