We recommend new projects start with resources from the AWS provider.
aws-native.kinesisfirehose.DeliveryStream
Resource Type definition for AWS::KinesisFirehose::DeliveryStream
Create DeliveryStream Resource
Resources are created with functions called constructors. To learn more about declaring and configuring resources, see Resources.
Constructor syntax
new DeliveryStream(name: string, args?: DeliveryStreamArgs, opts?: CustomResourceOptions);
@overload
def DeliveryStream(resource_name: str,
                   args: Optional[DeliveryStreamArgs] = None,
                   opts: Optional[ResourceOptions] = None)
@overload
def DeliveryStream(resource_name: str,
                   opts: Optional[ResourceOptions] = None,
                   amazon_open_search_serverless_destination_configuration: Optional[DeliveryStreamAmazonOpenSearchServerlessDestinationConfigurationArgs] = None,
                   amazonopensearchservice_destination_configuration: Optional[DeliveryStreamAmazonopensearchserviceDestinationConfigurationArgs] = None,
                   delivery_stream_encryption_configuration_input: Optional[DeliveryStreamEncryptionConfigurationInputArgs] = None,
                   delivery_stream_name: Optional[str] = None,
                   delivery_stream_type: Optional[DeliveryStreamType] = None,
                   elasticsearch_destination_configuration: Optional[DeliveryStreamElasticsearchDestinationConfigurationArgs] = None,
                   extended_s3_destination_configuration: Optional[DeliveryStreamExtendedS3DestinationConfigurationArgs] = None,
                   http_endpoint_destination_configuration: Optional[DeliveryStreamHttpEndpointDestinationConfigurationArgs] = None,
                   iceberg_destination_configuration: Optional[DeliveryStreamIcebergDestinationConfigurationArgs] = None,
                   kinesis_stream_source_configuration: Optional[DeliveryStreamKinesisStreamSourceConfigurationArgs] = None,
                   msk_source_configuration: Optional[DeliveryStreamMskSourceConfigurationArgs] = None,
                   redshift_destination_configuration: Optional[DeliveryStreamRedshiftDestinationConfigurationArgs] = None,
                   s3_destination_configuration: Optional[DeliveryStreamS3DestinationConfigurationArgs] = None,
                   snowflake_destination_configuration: Optional[DeliveryStreamSnowflakeDestinationConfigurationArgs] = None,
                   splunk_destination_configuration: Optional[DeliveryStreamSplunkDestinationConfigurationArgs] = None,
                   tags: Optional[Sequence[_root_inputs.TagArgs]] = None)
func NewDeliveryStream(ctx *Context, name string, args *DeliveryStreamArgs, opts ...ResourceOption) (*DeliveryStream, error)
public DeliveryStream(string name, DeliveryStreamArgs? args = null, CustomResourceOptions? opts = null)
public DeliveryStream(String name, DeliveryStreamArgs args)
public DeliveryStream(String name, DeliveryStreamArgs args, CustomResourceOptions options)
type: aws-native:kinesisfirehose:DeliveryStream
properties: # The arguments to resource properties.
options: # Bag of options to control resource's behavior.
Parameters
- name string
- The unique name of the resource.
- args DeliveryStreamArgs
- The arguments to resource properties.
- opts CustomResourceOptions
- Bag of options to control resource's behavior.
- resource_name str
- The unique name of the resource.
- args DeliveryStreamArgs
- The arguments to resource properties.
- opts ResourceOptions
- Bag of options to control resource's behavior.
- ctx Context
- Context object for the current deployment.
- name string
- The unique name of the resource.
- args DeliveryStreamArgs
- The arguments to resource properties.
- opts ResourceOption
- Bag of options to control resource's behavior.
- name string
- The unique name of the resource.
- args DeliveryStreamArgs
- The arguments to resource properties.
- opts CustomResourceOptions
- Bag of options to control resource's behavior.
- name String
- The unique name of the resource.
- args DeliveryStreamArgs
- The arguments to resource properties.
- options CustomResourceOptions
- Bag of options to control resource's behavior.
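The constructor syntax above can be illustrated with a minimal Pulumi YAML program. This is a sketch only: the stream name, bucket ARN, and role ARN are hypothetical placeholders, and the referenced S3 bucket and IAM role must already exist with appropriate permissions.

```yaml
resources:
  exampleStream:
    type: aws-native:kinesisfirehose:DeliveryStream
    properties:
      deliveryStreamName: example-stream       # placeholder name
      deliveryStreamType: DirectPut            # producers write directly to the stream
      s3DestinationConfiguration:
        bucketArn: arn:aws:s3:::my-firehose-bucket                    # hypothetical bucket
        roleArn: arn:aws:iam::123456789012:role/firehose-delivery-role # hypothetical role
```

Exactly one destination configuration should be set; the S3 destination is used here because it is the simplest of the Conditional destination properties listed below.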
DeliveryStream Resource Properties
To learn more about resource properties and how to use them, see Inputs and Outputs in the Architecture and Concepts docs.
Inputs
The DeliveryStream resource accepts the following input properties:
C#

- AmazonOpenSearchServerlessDestinationConfiguration (Pulumi.AwsNative.KinesisFirehose.Inputs.DeliveryStreamAmazonOpenSearchServerlessDestinationConfiguration) - Describes the configuration of a destination in the Serverless offering for Amazon OpenSearch Service.
- AmazonopensearchserviceDestinationConfiguration (Pulumi.AwsNative.KinesisFirehose.Inputs.DeliveryStreamAmazonopensearchserviceDestinationConfiguration) - The destination in Amazon OpenSearch Service. You can specify only one destination.
- DeliveryStreamEncryptionConfigurationInput (Pulumi.AwsNative.KinesisFirehose.Inputs.DeliveryStreamEncryptionConfigurationInput) - Specifies the type and Amazon Resource Name (ARN) of the CMK to use for Server-Side Encryption (SSE).
- DeliveryStreamName (string) - The name of the delivery stream.
- DeliveryStreamType (Pulumi.AwsNative.KinesisFirehose.DeliveryStreamType) - The delivery stream type. This can be one of the following values:
  - DirectPut: Provider applications access the delivery stream directly.
  - KinesisStreamAsSource: The delivery stream uses a Kinesis data stream as a source.
- ElasticsearchDestinationConfiguration (Pulumi.AwsNative.KinesisFirehose.Inputs.DeliveryStreamElasticsearchDestinationConfiguration) - An Amazon ES destination for the delivery stream. Conditional. You must specify only one destination configuration. If you change the delivery stream destination from an Amazon ES destination to an Amazon S3 or Amazon Redshift destination, update requires some interruptions.
- ExtendedS3DestinationConfiguration (Pulumi.AwsNative.KinesisFirehose.Inputs.DeliveryStreamExtendedS3DestinationConfiguration) - An Amazon S3 destination for the delivery stream. Conditional. You must specify only one destination configuration. If you change the delivery stream destination from an Amazon Extended S3 destination to an Amazon ES destination, update requires some interruptions.
- HttpEndpointDestinationConfiguration (Pulumi.AwsNative.KinesisFirehose.Inputs.DeliveryStreamHttpEndpointDestinationConfiguration) - Enables configuring Kinesis Firehose to deliver data to any HTTP endpoint destination. You can specify only one destination.
- IcebergDestinationConfiguration (Pulumi.AwsNative.KinesisFirehose.Inputs.DeliveryStreamIcebergDestinationConfiguration) - Specifies the destination settings for Apache Iceberg Tables. Amazon Data Firehose is in preview release and is subject to change.
- KinesisStreamSourceConfiguration (Pulumi.AwsNative.KinesisFirehose.Inputs.DeliveryStreamKinesisStreamSourceConfiguration) - When a Kinesis stream is used as the source for the delivery stream, a KinesisStreamSourceConfiguration containing the Kinesis stream ARN and the role ARN for the source stream.
- MskSourceConfiguration (Pulumi.AwsNative.KinesisFirehose.Inputs.DeliveryStreamMskSourceConfiguration) - The configuration for the Amazon MSK cluster to be used as the source for a delivery stream.
- RedshiftDestinationConfiguration (Pulumi.AwsNative.KinesisFirehose.Inputs.DeliveryStreamRedshiftDestinationConfiguration) - An Amazon Redshift destination for the delivery stream. Conditional. You must specify only one destination configuration. If you change the delivery stream destination from an Amazon Redshift destination to an Amazon ES destination, update requires some interruptions.
- S3DestinationConfiguration (Pulumi.AwsNative.KinesisFirehose.Inputs.DeliveryStreamS3DestinationConfiguration) - The S3DestinationConfiguration property type specifies an Amazon Simple Storage Service (Amazon S3) destination to which Amazon Kinesis Data Firehose (Kinesis Data Firehose) delivers data. Conditional. You must specify only one destination configuration. If you change the delivery stream destination from an Amazon S3 destination to an Amazon ES destination, update requires some interruptions.
- SnowflakeDestinationConfiguration (Pulumi.AwsNative.KinesisFirehose.Inputs.DeliveryStreamSnowflakeDestinationConfiguration) - Configures the Snowflake destination.
- SplunkDestinationConfiguration (Pulumi.AwsNative.KinesisFirehose.Inputs.DeliveryStreamSplunkDestinationConfiguration) - The configuration of a destination in Splunk for the delivery stream.
- Tags (List<Pulumi.AwsNative.Inputs.Tag>) - A set of tags to assign to the delivery stream. A tag is a key-value pair that you can define and assign to AWS resources. Tags are metadata. For example, you can add friendly names and descriptions or other types of information that can help you distinguish the delivery stream. For more information about tags, see Using Cost Allocation Tags in the AWS Billing and Cost Management User Guide. You can specify up to 50 tags when creating a delivery stream. If you specify tags in the CreateDeliveryStream action, Amazon Data Firehose performs an additional authorization on the firehose:TagDeliveryStream action to verify whether users have permission to create tags. If you do not provide this permission, requests to create new Firehose delivery streams with IAM resource tags will fail with an AccessDeniedException such as the following: "User: arn:aws:sts::x:assumed-role/x/x is not authorized to perform: firehose:TagDeliveryStream on resource: arn:aws:firehose:us-east-1:x:deliverystream/x with an explicit deny in an identity-based policy." For an example IAM policy, see Tag example.
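Following the note above, a caller that passes tags to CreateDeliveryStream needs firehose:TagDeliveryStream in addition to firehose:CreateDeliveryStream. A minimal identity-based policy document can be sketched as a plain Python dict; the account ID, region, and resource scope below are placeholders, not values from this page.

```python
import json

# Hypothetical account/region; scope the Resource ARN to your own streams.
firehose_tagging_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "firehose:CreateDeliveryStream",
                # Required when IAM resource tags are passed to CreateDeliveryStream.
                "firehose:TagDeliveryStream",
            ],
            "Resource": "arn:aws:firehose:us-east-1:123456789012:deliverystream/*",
        }
    ],
}

print(json.dumps(firehose_tagging_policy, indent=2))
```

Without the TagDeliveryStream action, a tagged create request is denied with the AccessDeniedException shown above even though the create action itself is allowed.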
Go

Property descriptions match the C# listing above; only the property and type names differ.

- AmazonOpenSearchServerlessDestinationConfiguration: DeliveryStreamAmazonOpenSearchServerlessDestinationConfigurationArgs
- AmazonopensearchserviceDestinationConfiguration: DeliveryStreamAmazonopensearchserviceDestinationConfigurationArgs
- DeliveryStreamEncryptionConfigurationInput: DeliveryStreamEncryptionConfigurationInputTypeArgs
- DeliveryStreamName: string
- DeliveryStreamType: DeliveryStreamType
- ElasticsearchDestinationConfiguration: DeliveryStreamElasticsearchDestinationConfigurationArgs
- ExtendedS3DestinationConfiguration: DeliveryStreamExtendedS3DestinationConfigurationArgs
- HttpEndpointDestinationConfiguration: DeliveryStreamHttpEndpointDestinationConfigurationArgs
- IcebergDestinationConfiguration: DeliveryStreamIcebergDestinationConfigurationArgs
- KinesisStreamSourceConfiguration: DeliveryStreamKinesisStreamSourceConfigurationArgs
- MskSourceConfiguration: DeliveryStreamMskSourceConfigurationArgs
- RedshiftDestinationConfiguration: DeliveryStreamRedshiftDestinationConfigurationArgs
- S3DestinationConfiguration: DeliveryStreamS3DestinationConfigurationArgs
- SnowflakeDestinationConfiguration: DeliveryStreamSnowflakeDestinationConfigurationArgs
- SplunkDestinationConfiguration: DeliveryStreamSplunkDestinationConfigurationArgs
- Tags: []TagArgs
Java

Property descriptions match the C# listing above; only the property and type names differ.

- amazonOpenSearchServerlessDestinationConfiguration: DeliveryStreamAmazonOpenSearchServerlessDestinationConfiguration
- amazonopensearchserviceDestinationConfiguration: DeliveryStreamAmazonopensearchserviceDestinationConfiguration
- deliveryStreamEncryptionConfigurationInput: DeliveryStreamEncryptionConfigurationInput
- deliveryStreamName: String
- deliveryStreamType: DeliveryStreamType
- elasticsearchDestinationConfiguration: DeliveryStreamElasticsearchDestinationConfiguration
- extendedS3DestinationConfiguration: DeliveryStreamExtendedS3DestinationConfiguration
- httpEndpointDestinationConfiguration: DeliveryStreamHttpEndpointDestinationConfiguration
- icebergDestinationConfiguration: DeliveryStreamIcebergDestinationConfiguration
- kinesisStreamSourceConfiguration: DeliveryStreamKinesisStreamSourceConfiguration
- mskSourceConfiguration: DeliveryStreamMskSourceConfiguration
- redshiftDestinationConfiguration: DeliveryStreamRedshiftDestinationConfiguration
- s3DestinationConfiguration: DeliveryStreamS3DestinationConfiguration
- snowflakeDestinationConfiguration: DeliveryStreamSnowflakeDestinationConfiguration
- splunkDestinationConfiguration: DeliveryStreamSplunkDestinationConfiguration
- tags: List<Tag>
JavaScript/TypeScript

Property descriptions match the C# listing above; only the property and type names differ.

- amazonOpenSearchServerlessDestinationConfiguration: DeliveryStreamAmazonOpenSearchServerlessDestinationConfiguration
- amazonopensearchserviceDestinationConfiguration: DeliveryStreamAmazonopensearchserviceDestinationConfiguration
- deliveryStreamEncryptionConfigurationInput: DeliveryStreamEncryptionConfigurationInput
- deliveryStreamName: string
- deliveryStreamType: DeliveryStreamType
- elasticsearchDestinationConfiguration: DeliveryStreamElasticsearchDestinationConfiguration
- extendedS3DestinationConfiguration: DeliveryStreamExtendedS3DestinationConfiguration
- httpEndpointDestinationConfiguration: DeliveryStreamHttpEndpointDestinationConfiguration
- icebergDestinationConfiguration: DeliveryStreamIcebergDestinationConfiguration
- kinesisStreamSourceConfiguration: DeliveryStreamKinesisStreamSourceConfiguration
- mskSourceConfiguration: DeliveryStreamMskSourceConfiguration
- redshiftDestinationConfiguration: DeliveryStreamRedshiftDestinationConfiguration
- s3DestinationConfiguration: DeliveryStreamS3DestinationConfiguration
- snowflakeDestinationConfiguration: DeliveryStreamSnowflakeDestinationConfiguration
- splunkDestinationConfiguration: DeliveryStreamSplunkDestinationConfiguration
- tags: Tag[]
Python

Property descriptions match the C# listing above; only the property and type names differ.

- amazon_open_search_serverless_destination_configuration: DeliveryStreamAmazonOpenSearchServerlessDestinationConfigurationArgs
- amazonopensearchservice_destination_configuration: DeliveryStreamAmazonopensearchserviceDestinationConfigurationArgs
- delivery_stream_encryption_configuration_input: DeliveryStreamEncryptionConfigurationInputArgs
- delivery_stream_name: str
- delivery_stream_type: DeliveryStreamType
- elasticsearch_destination_configuration: DeliveryStreamElasticsearchDestinationConfigurationArgs
- extended_s3_destination_configuration: DeliveryStreamExtendedS3DestinationConfigurationArgs
- http_endpoint_destination_configuration: DeliveryStreamHttpEndpointDestinationConfigurationArgs
- iceberg_destination_configuration: DeliveryStreamIcebergDestinationConfigurationArgs
- kinesis_stream_source_configuration: DeliveryStreamKinesisStreamSourceConfigurationArgs
- msk_source_configuration: DeliveryStreamMskSourceConfigurationArgs
- redshift_destination_configuration: DeliveryStreamRedshiftDestinationConfigurationArgs
- s3_destination_configuration: DeliveryStreamS3DestinationConfigurationArgs
- snowflake_destination_configuration: DeliveryStreamSnowflakeDestinationConfigurationArgs
- splunk_destination_configuration: DeliveryStreamSplunkDestinationConfigurationArgs
- tags: Sequence[TagArgs]
- amazon
Open Property MapSearch Serverless Destination Configuration - Describes the configuration of a destination in the Serverless offering for Amazon OpenSearch Service.
- amazonopensearchservice
Destination Property MapConfiguration - The destination in Amazon OpenSearch Service. You can specify only one destination.
- delivery
Stream Property MapEncryption Configuration Input - Specifies the type and Amazon Resource Name (ARN) of the CMK to use for Server-Side Encryption (SSE).
- delivery
Stream StringName - The name of the delivery stream.
- delivery
Stream "DirectType Put" | "Kinesis Stream As Source" | "MSKAs Source" - The delivery stream type. This can be one of the following values:
DirectPut
: Provider applications access the delivery stream directly.KinesisStreamAsSource
: The delivery stream uses a Kinesis data stream as a source.
- elasticsearch
Destination Property MapConfiguration An Amazon ES destination for the delivery stream.
Conditional. You must specify only one destination configuration.
If you change the delivery stream destination from an Amazon ES destination to an Amazon S3 or Amazon Redshift destination, update requires some interruptions .
- extendedS3DestinationConfiguration Property Map - An Amazon S3 destination for the delivery stream.
Conditional. You must specify only one destination configuration.
If you change the delivery stream destination from an Amazon Extended S3 destination to an Amazon ES destination, update requires some interruptions.
- httpEndpointDestinationConfiguration Property Map - Enables configuring Kinesis Firehose to deliver data to any HTTP endpoint destination. You can specify only one destination.
- icebergDestinationConfiguration Property Map - Specifies the destination configuration settings for Apache Iceberg Tables.
Amazon Data Firehose is in preview release and is subject to change.
- kinesisStreamSourceConfiguration Property Map - When a Kinesis stream is used as the source for the delivery stream, a KinesisStreamSourceConfiguration containing the Kinesis stream ARN and the role ARN for the source stream.
- mskSourceConfiguration Property Map - The configuration for the Amazon MSK cluster to be used as the source for a delivery stream.
- redshiftDestinationConfiguration Property Map - An Amazon Redshift destination for the delivery stream.
Conditional. You must specify only one destination configuration.
If you change the delivery stream destination from an Amazon Redshift destination to an Amazon ES destination, update requires some interruptions.
- s3DestinationConfiguration Property Map - The S3DestinationConfiguration property type specifies an Amazon Simple Storage Service (Amazon S3) destination to which Amazon Kinesis Data Firehose (Kinesis Data Firehose) delivers data.
Conditional. You must specify only one destination configuration.
If you change the delivery stream destination from an Amazon S3 destination to an Amazon ES destination, update requires some interruptions.
- snowflakeDestinationConfiguration Property Map - Configure Snowflake destination.
- splunkDestinationConfiguration Property Map - The configuration of a destination in Splunk for the delivery stream.
- tags List<Property Map>
A set of tags to assign to the delivery stream. A tag is a key-value pair that you can define and assign to AWS resources. Tags are metadata. For example, you can add friendly names and descriptions or other types of information that can help you distinguish the delivery stream. For more information about tags, see Using Cost Allocation Tags in the AWS Billing and Cost Management User Guide.
You can specify up to 50 tags when creating a delivery stream.
If you specify tags in the CreateDeliveryStream action, Amazon Data Firehose performs an additional authorization on the firehose:TagDeliveryStream action to verify if users have permissions to create tags. If you do not provide this permission, requests to create new Firehose delivery streams with IAM resource tags will fail with an AccessDeniedException such as the following:
AccessDeniedException
User: arn:aws:sts::x:assumed-role/x/x is not authorized to perform: firehose:TagDeliveryStream on resource: arn:aws:firehose:us-east-1:x:deliverystream/x with an explicit deny in an identity-based policy.
For an example IAM policy, see Tag example.
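The error above can be avoided by granting the tagging permission alongside stream creation. Below is a minimal sketch of an identity-based policy statement; the account ID and the wildcard stream name in the Resource ARN are illustrative placeholders, not values from this page:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "firehose:CreateDeliveryStream",
        "firehose:TagDeliveryStream"
      ],
      "Resource": "arn:aws:firehose:us-east-1:111122223333:deliverystream/*"
    }
  ]
}
```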
Outputs
All input properties are implicitly available as output properties. Additionally, the DeliveryStream resource produces the following output properties:
Supporting Types
DeliveryStreamAmazonOpenSearchServerlessBufferingHints, DeliveryStreamAmazonOpenSearchServerlessBufferingHintsArgs
- IntervalInSeconds int - Buffer incoming data for the specified period of time, in seconds, before delivering it to the destination. The default value is 300 (5 minutes).
- SizeInMbs int - Buffer incoming data to the specified size, in MBs, before delivering it to the destination. The default value is 5.
We recommend setting this parameter to a value greater than the amount of data you typically ingest into the delivery stream in 10 seconds. For example, if you typically ingest data at 1 MB/sec, the value should be 10 MB or higher.
- IntervalInSeconds int - Buffer incoming data for the specified period of time, in seconds, before delivering it to the destination. The default value is 300 (5 minutes).
- SizeInMbs int - Buffer incoming data to the specified size, in MBs, before delivering it to the destination. The default value is 5.
We recommend setting this parameter to a value greater than the amount of data you typically ingest into the delivery stream in 10 seconds. For example, if you typically ingest data at 1 MB/sec, the value should be 10 MB or higher.
- intervalInSeconds Integer - Buffer incoming data for the specified period of time, in seconds, before delivering it to the destination. The default value is 300 (5 minutes).
- sizeInMbs Integer - Buffer incoming data to the specified size, in MBs, before delivering it to the destination. The default value is 5.
We recommend setting this parameter to a value greater than the amount of data you typically ingest into the delivery stream in 10 seconds. For example, if you typically ingest data at 1 MB/sec, the value should be 10 MB or higher.
- intervalInSeconds number - Buffer incoming data for the specified period of time, in seconds, before delivering it to the destination. The default value is 300 (5 minutes).
- sizeInMbs number - Buffer incoming data to the specified size, in MBs, before delivering it to the destination. The default value is 5.
We recommend setting this parameter to a value greater than the amount of data you typically ingest into the delivery stream in 10 seconds. For example, if you typically ingest data at 1 MB/sec, the value should be 10 MB or higher.
- interval_in_seconds int - Buffer incoming data for the specified period of time, in seconds, before delivering it to the destination. The default value is 300 (5 minutes).
- size_in_mbs int - Buffer incoming data to the specified size, in MBs, before delivering it to the destination. The default value is 5.
We recommend setting this parameter to a value greater than the amount of data you typically ingest into the delivery stream in 10 seconds. For example, if you typically ingest data at 1 MB/sec, the value should be 10 MB or higher.
- intervalInSeconds Number - Buffer incoming data for the specified period of time, in seconds, before delivering it to the destination. The default value is 300 (5 minutes).
- sizeInMbs Number - Buffer incoming data to the specified size, in MBs, before delivering it to the destination. The default value is 5.
We recommend setting this parameter to a value greater than the amount of data you typically ingest into the delivery stream in 10 seconds. For example, if you typically ingest data at 1 MB/sec, the value should be 10 MB or higher.
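The sizing guidance above can be expressed as a small calculation. This is an illustrative helper only, not part of the Firehose API; `recommended_size_in_mbs` is a hypothetical name:

```python
import math

def recommended_size_in_mbs(ingest_mb_per_sec: float, default_mbs: int = 5) -> int:
    """Suggest a SizeInMBs value: at least the 5 MB default, and at least
    10 seconds' worth of typical ingest, per the guidance above."""
    ten_seconds_of_data = math.ceil(ingest_mb_per_sec * 10)
    return max(default_mbs, ten_seconds_of_data)

# At 1 MB/sec the guidance yields 10 MB; at low rates the 5 MB default stands.
print(recommended_size_in_mbs(1.0))  # 10
print(recommended_size_in_mbs(0.2))  # 5
```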
DeliveryStreamAmazonOpenSearchServerlessDestinationConfiguration, DeliveryStreamAmazonOpenSearchServerlessDestinationConfigurationArgs
- IndexName string - The Serverless offering for Amazon OpenSearch Service index name.
- RoleArn string - The Amazon Resource Name (ARN) of the IAM role to be assumed by Firehose for calling the Serverless offering for Amazon OpenSearch Service Configuration API and for indexing documents.
- S3Configuration Pulumi.AwsNative.KinesisFirehose.Inputs.DeliveryStreamS3DestinationConfiguration
- BufferingHints Pulumi.AwsNative.KinesisFirehose.Inputs.DeliveryStreamAmazonOpenSearchServerlessBufferingHints - The buffering options. If no value is specified, the default values for AmazonopensearchserviceBufferingHints are used.
- CloudWatchLoggingOptions Pulumi.AwsNative.KinesisFirehose.Inputs.DeliveryStreamCloudWatchLoggingOptions
- CollectionEndpoint string - The endpoint to use when communicating with the collection in the Serverless offering for Amazon OpenSearch Service.
- ProcessingConfiguration Pulumi.AwsNative.KinesisFirehose.Inputs.DeliveryStreamProcessingConfiguration
- RetryOptions Pulumi.AwsNative.KinesisFirehose.Inputs.DeliveryStreamAmazonOpenSearchServerlessRetryOptions - The retry behavior in case Firehose is unable to deliver documents to the Serverless offering for Amazon OpenSearch Service. The default value is 300 (5 minutes).
- S3BackupMode Pulumi.AwsNative.KinesisFirehose.DeliveryStreamAmazonOpenSearchServerlessDestinationConfigurationS3BackupMode - Defines how documents should be delivered to Amazon S3. When it is set to FailedDocumentsOnly, Firehose writes any documents that could not be indexed to the configured Amazon S3 destination, with AmazonOpenSearchService-failed/ appended to the key prefix. When set to AllDocuments, Firehose delivers all incoming records to Amazon S3, and also writes failed documents with AmazonOpenSearchService-failed/ appended to the prefix.
- VpcConfiguration Pulumi.AwsNative.KinesisFirehose.Inputs.DeliveryStreamVpcConfiguration
- IndexName string - The Serverless offering for Amazon OpenSearch Service index name.
- RoleArn string - The Amazon Resource Name (ARN) of the IAM role to be assumed by Firehose for calling the Serverless offering for Amazon OpenSearch Service Configuration API and for indexing documents.
- S3Configuration DeliveryStreamS3DestinationConfiguration
- BufferingHints DeliveryStreamAmazonOpenSearchServerlessBufferingHints - The buffering options. If no value is specified, the default values for AmazonopensearchserviceBufferingHints are used.
- CloudWatchLoggingOptions DeliveryStreamCloudWatchLoggingOptions
- CollectionEndpoint string - The endpoint to use when communicating with the collection in the Serverless offering for Amazon OpenSearch Service.
- ProcessingConfiguration DeliveryStreamProcessingConfiguration
- RetryOptions DeliveryStreamAmazonOpenSearchServerlessRetryOptions - The retry behavior in case Firehose is unable to deliver documents to the Serverless offering for Amazon OpenSearch Service. The default value is 300 (5 minutes).
- S3BackupMode DeliveryStreamAmazonOpenSearchServerlessDestinationConfigurationS3BackupMode - Defines how documents should be delivered to Amazon S3. When it is set to FailedDocumentsOnly, Firehose writes any documents that could not be indexed to the configured Amazon S3 destination, with AmazonOpenSearchService-failed/ appended to the key prefix. When set to AllDocuments, Firehose delivers all incoming records to Amazon S3, and also writes failed documents with AmazonOpenSearchService-failed/ appended to the prefix.
- VpcConfiguration DeliveryStreamVpcConfiguration
- indexName String - The Serverless offering for Amazon OpenSearch Service index name.
- roleArn String - The Amazon Resource Name (ARN) of the IAM role to be assumed by Firehose for calling the Serverless offering for Amazon OpenSearch Service Configuration API and for indexing documents.
- s3Configuration DeliveryStreamS3DestinationConfiguration
- bufferingHints DeliveryStreamAmazonOpenSearchServerlessBufferingHints - The buffering options. If no value is specified, the default values for AmazonopensearchserviceBufferingHints are used.
- cloudWatchLoggingOptions DeliveryStreamCloudWatchLoggingOptions
- collectionEndpoint String - The endpoint to use when communicating with the collection in the Serverless offering for Amazon OpenSearch Service.
- processingConfiguration DeliveryStreamProcessingConfiguration
- retryOptions DeliveryStreamAmazonOpenSearchServerlessRetryOptions - The retry behavior in case Firehose is unable to deliver documents to the Serverless offering for Amazon OpenSearch Service. The default value is 300 (5 minutes).
- s3BackupMode DeliveryStreamAmazonOpenSearchServerlessDestinationConfigurationS3BackupMode - Defines how documents should be delivered to Amazon S3. When it is set to FailedDocumentsOnly, Firehose writes any documents that could not be indexed to the configured Amazon S3 destination, with AmazonOpenSearchService-failed/ appended to the key prefix. When set to AllDocuments, Firehose delivers all incoming records to Amazon S3, and also writes failed documents with AmazonOpenSearchService-failed/ appended to the prefix.
- vpcConfiguration DeliveryStreamVpcConfiguration
- indexName string - The Serverless offering for Amazon OpenSearch Service index name.
- roleArn string - The Amazon Resource Name (ARN) of the IAM role to be assumed by Firehose for calling the Serverless offering for Amazon OpenSearch Service Configuration API and for indexing documents.
- s3Configuration DeliveryStreamS3DestinationConfiguration
- bufferingHints DeliveryStreamAmazonOpenSearchServerlessBufferingHints - The buffering options. If no value is specified, the default values for AmazonopensearchserviceBufferingHints are used.
- cloudWatchLoggingOptions DeliveryStreamCloudWatchLoggingOptions
- collectionEndpoint string - The endpoint to use when communicating with the collection in the Serverless offering for Amazon OpenSearch Service.
- processingConfiguration DeliveryStreamProcessingConfiguration
- retryOptions DeliveryStreamAmazonOpenSearchServerlessRetryOptions - The retry behavior in case Firehose is unable to deliver documents to the Serverless offering for Amazon OpenSearch Service. The default value is 300 (5 minutes).
- s3BackupMode DeliveryStreamAmazonOpenSearchServerlessDestinationConfigurationS3BackupMode - Defines how documents should be delivered to Amazon S3. When it is set to FailedDocumentsOnly, Firehose writes any documents that could not be indexed to the configured Amazon S3 destination, with AmazonOpenSearchService-failed/ appended to the key prefix. When set to AllDocuments, Firehose delivers all incoming records to Amazon S3, and also writes failed documents with AmazonOpenSearchService-failed/ appended to the prefix.
- vpcConfiguration DeliveryStreamVpcConfiguration
- index_name str - The Serverless offering for Amazon OpenSearch Service index name.
- role_arn str - The Amazon Resource Name (ARN) of the IAM role to be assumed by Firehose for calling the Serverless offering for Amazon OpenSearch Service Configuration API and for indexing documents.
- s3_configuration DeliveryStreamS3DestinationConfiguration
- buffering_hints DeliveryStreamAmazonOpenSearchServerlessBufferingHints - The buffering options. If no value is specified, the default values for AmazonopensearchserviceBufferingHints are used.
- cloud_watch_logging_options DeliveryStreamCloudWatchLoggingOptions
- collection_endpoint str - The endpoint to use when communicating with the collection in the Serverless offering for Amazon OpenSearch Service.
- processing_configuration DeliveryStreamProcessingConfiguration
- retry_options DeliveryStreamAmazonOpenSearchServerlessRetryOptions - The retry behavior in case Firehose is unable to deliver documents to the Serverless offering for Amazon OpenSearch Service. The default value is 300 (5 minutes).
- s3_backup_mode DeliveryStreamAmazonOpenSearchServerlessDestinationConfigurationS3BackupMode - Defines how documents should be delivered to Amazon S3. When it is set to FailedDocumentsOnly, Firehose writes any documents that could not be indexed to the configured Amazon S3 destination, with AmazonOpenSearchService-failed/ appended to the key prefix. When set to AllDocuments, Firehose delivers all incoming records to Amazon S3, and also writes failed documents with AmazonOpenSearchService-failed/ appended to the prefix.
- vpc_configuration DeliveryStreamVpcConfiguration
- indexName String - The Serverless offering for Amazon OpenSearch Service index name.
- roleArn String - The Amazon Resource Name (ARN) of the IAM role to be assumed by Firehose for calling the Serverless offering for Amazon OpenSearch Service Configuration API and for indexing documents.
- s3Configuration Property Map
- bufferingHints Property Map - The buffering options. If no value is specified, the default values for AmazonopensearchserviceBufferingHints are used.
- cloudWatchLoggingOptions Property Map
- collectionEndpoint String - The endpoint to use when communicating with the collection in the Serverless offering for Amazon OpenSearch Service.
- processingConfiguration Property Map
- retryOptions Property Map - The retry behavior in case Firehose is unable to deliver documents to the Serverless offering for Amazon OpenSearch Service. The default value is 300 (5 minutes).
- s3BackupMode "FailedDocumentsOnly" | "AllDocuments" - Defines how documents should be delivered to Amazon S3. When it is set to FailedDocumentsOnly, Firehose writes any documents that could not be indexed to the configured Amazon S3 destination, with AmazonOpenSearchService-failed/ appended to the key prefix. When set to AllDocuments, Firehose delivers all incoming records to Amazon S3, and also writes failed documents with AmazonOpenSearchService-failed/ appended to the prefix.
- vpcConfiguration Property Map
DeliveryStreamAmazonOpenSearchServerlessDestinationConfigurationS3BackupMode, DeliveryStreamAmazonOpenSearchServerlessDestinationConfigurationS3BackupModeArgs
- FailedDocumentsOnly - FailedDocumentsOnly
- AllDocuments - AllDocuments
- DeliveryStreamAmazonOpenSearchServerlessDestinationConfigurationS3BackupModeFailedDocumentsOnly - FailedDocumentsOnly
- DeliveryStreamAmazonOpenSearchServerlessDestinationConfigurationS3BackupModeAllDocuments - AllDocuments
- FailedDocumentsOnly - FailedDocumentsOnly
- AllDocuments - AllDocuments
- FailedDocumentsOnly - FailedDocumentsOnly
- AllDocuments - AllDocuments
- FAILED_DOCUMENTS_ONLY
- FailedDocumentsOnly
- ALL_DOCUMENTS
- AllDocuments
- "Failed
Documents Only" - FailedDocumentsOnly
- "All
Documents" - AllDocuments
DeliveryStreamAmazonOpenSearchServerlessRetryOptions, DeliveryStreamAmazonOpenSearchServerlessRetryOptionsArgs
- DurationInSeconds int - After an initial failure to deliver to the Serverless offering for Amazon OpenSearch Service, the total amount of time during which Firehose retries delivery (including the first attempt). After this time has elapsed, the failed documents are written to Amazon S3. Default value is 300 seconds (5 minutes). A value of 0 (zero) results in no retries.
- DurationInSeconds int - After an initial failure to deliver to the Serverless offering for Amazon OpenSearch Service, the total amount of time during which Firehose retries delivery (including the first attempt). After this time has elapsed, the failed documents are written to Amazon S3. Default value is 300 seconds (5 minutes). A value of 0 (zero) results in no retries.
- durationInSeconds Integer - After an initial failure to deliver to the Serverless offering for Amazon OpenSearch Service, the total amount of time during which Firehose retries delivery (including the first attempt). After this time has elapsed, the failed documents are written to Amazon S3. Default value is 300 seconds (5 minutes). A value of 0 (zero) results in no retries.
- durationInSeconds number - After an initial failure to deliver to the Serverless offering for Amazon OpenSearch Service, the total amount of time during which Firehose retries delivery (including the first attempt). After this time has elapsed, the failed documents are written to Amazon S3. Default value is 300 seconds (5 minutes). A value of 0 (zero) results in no retries.
- duration_in_seconds int - After an initial failure to deliver to the Serverless offering for Amazon OpenSearch Service, the total amount of time during which Firehose retries delivery (including the first attempt). After this time has elapsed, the failed documents are written to Amazon S3. Default value is 300 seconds (5 minutes). A value of 0 (zero) results in no retries.
- durationInSeconds Number - After an initial failure to deliver to the Serverless offering for Amazon OpenSearch Service, the total amount of time during which Firehose retries delivery (including the first attempt). After this time has elapsed, the failed documents are written to Amazon S3. Default value is 300 seconds (5 minutes). A value of 0 (zero) results in no retries.
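The retry-window semantics can be illustrated with a tiny helper (hypothetical, not an API call): delivery attempts continue until the total elapsed time, including the first attempt, reaches DurationInSeconds, and a value of 0 disables retries entirely.

```python
def should_retry(elapsed_seconds: float, duration_in_seconds: int = 300) -> bool:
    """Illustrative only: True while Firehose would keep retrying delivery.
    Once the window elapses, failed documents go to Amazon S3 instead."""
    return elapsed_seconds < duration_in_seconds

print(should_retry(120))   # True  (still inside the default 300 s window)
print(should_retry(0, 0))  # False (DurationInSeconds = 0 means no retries)
```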
DeliveryStreamAmazonopensearchserviceBufferingHints, DeliveryStreamAmazonopensearchserviceBufferingHintsArgs
- IntervalInSeconds int - Buffer incoming data for the specified period of time, in seconds, before delivering it to the destination. The default value is 300 (5 minutes).
- SizeInMbs int - Buffer incoming data to the specified size, in MBs, before delivering it to the destination. The default value is 5. We recommend setting this parameter to a value greater than the amount of data you typically ingest into the delivery stream in 10 seconds. For example, if you typically ingest data at 1 MB/sec, the value should be 10 MB or higher.
- IntervalInSeconds int - Buffer incoming data for the specified period of time, in seconds, before delivering it to the destination. The default value is 300 (5 minutes).
- SizeInMbs int - Buffer incoming data to the specified size, in MBs, before delivering it to the destination. The default value is 5. We recommend setting this parameter to a value greater than the amount of data you typically ingest into the delivery stream in 10 seconds. For example, if you typically ingest data at 1 MB/sec, the value should be 10 MB or higher.
- intervalInSeconds Integer - Buffer incoming data for the specified period of time, in seconds, before delivering it to the destination. The default value is 300 (5 minutes).
- sizeInMbs Integer - Buffer incoming data to the specified size, in MBs, before delivering it to the destination. The default value is 5. We recommend setting this parameter to a value greater than the amount of data you typically ingest into the delivery stream in 10 seconds. For example, if you typically ingest data at 1 MB/sec, the value should be 10 MB or higher.
- intervalInSeconds number - Buffer incoming data for the specified period of time, in seconds, before delivering it to the destination. The default value is 300 (5 minutes).
- sizeInMbs number - Buffer incoming data to the specified size, in MBs, before delivering it to the destination. The default value is 5. We recommend setting this parameter to a value greater than the amount of data you typically ingest into the delivery stream in 10 seconds. For example, if you typically ingest data at 1 MB/sec, the value should be 10 MB or higher.
- interval_in_seconds int - Buffer incoming data for the specified period of time, in seconds, before delivering it to the destination. The default value is 300 (5 minutes).
- size_in_mbs int - Buffer incoming data to the specified size, in MBs, before delivering it to the destination. The default value is 5. We recommend setting this parameter to a value greater than the amount of data you typically ingest into the delivery stream in 10 seconds. For example, if you typically ingest data at 1 MB/sec, the value should be 10 MB or higher.
- intervalInSeconds Number - Buffer incoming data for the specified period of time, in seconds, before delivering it to the destination. The default value is 300 (5 minutes).
- sizeInMbs Number - Buffer incoming data to the specified size, in MBs, before delivering it to the destination. The default value is 5. We recommend setting this parameter to a value greater than the amount of data you typically ingest into the delivery stream in 10 seconds. For example, if you typically ingest data at 1 MB/sec, the value should be 10 MB or higher.
DeliveryStreamAmazonopensearchserviceDestinationConfiguration, DeliveryStreamAmazonopensearchserviceDestinationConfigurationArgs
- IndexName string - The Amazon OpenSearch Service index name.
- RoleArn string - The Amazon Resource Name (ARN) of the IAM role to be assumed by Kinesis Data Firehose for calling the Amazon OpenSearch Service Configuration API and for indexing documents.
- S3Configuration Pulumi.AwsNative.KinesisFirehose.Inputs.DeliveryStreamS3DestinationConfiguration - Describes the configuration of a destination in Amazon S3.
- BufferingHints Pulumi.AwsNative.KinesisFirehose.Inputs.DeliveryStreamAmazonopensearchserviceBufferingHints - The buffering options. If no value is specified, the default values for AmazonopensearchserviceBufferingHints are used.
- CloudWatchLoggingOptions Pulumi.AwsNative.KinesisFirehose.Inputs.DeliveryStreamCloudWatchLoggingOptions - Describes the Amazon CloudWatch logging options for your delivery stream.
- ClusterEndpoint string - The endpoint to use when communicating with the cluster. Specify either this ClusterEndpoint or the DomainARN field.
- DocumentIdOptions Pulumi.AwsNative.KinesisFirehose.Inputs.DeliveryStreamDocumentIdOptions - Indicates the method for setting up document ID. The supported methods are Firehose generated document ID and OpenSearch Service generated document ID.
- DomainArn string - The ARN of the Amazon OpenSearch Service domain.
- IndexRotationPeriod Pulumi.AwsNative.KinesisFirehose.DeliveryStreamAmazonopensearchserviceDestinationConfigurationIndexRotationPeriod - The Amazon OpenSearch Service index rotation period. Index rotation appends a timestamp to the IndexName to facilitate the expiration of old data.
- ProcessingConfiguration Pulumi.AwsNative.KinesisFirehose.Inputs.DeliveryStreamProcessingConfiguration - Describes a data processing configuration.
- RetryOptions Pulumi.AwsNative.KinesisFirehose.Inputs.DeliveryStreamAmazonopensearchserviceRetryOptions - The retry behavior in case Kinesis Data Firehose is unable to deliver documents to Amazon OpenSearch Service. The default value is 300 (5 minutes).
- S3BackupMode Pulumi.AwsNative.KinesisFirehose.DeliveryStreamAmazonopensearchserviceDestinationConfigurationS3BackupMode - Defines how documents should be delivered to Amazon S3.
- TypeName string - The Amazon OpenSearch Service type name.
- VpcConfiguration Pulumi.AwsNative.KinesisFirehose.Inputs.DeliveryStreamVpcConfiguration - The details of the VPC of the Amazon OpenSearch Service destination.
- IndexName string - The Amazon OpenSearch Service index name.
- RoleArn string - The Amazon Resource Name (ARN) of the IAM role to be assumed by Kinesis Data Firehose for calling the Amazon OpenSearch Service Configuration API and for indexing documents.
- S3Configuration DeliveryStreamS3DestinationConfiguration - Describes the configuration of a destination in Amazon S3.
- BufferingHints DeliveryStreamAmazonopensearchserviceBufferingHints - The buffering options. If no value is specified, the default values for AmazonopensearchserviceBufferingHints are used.
- CloudWatchLoggingOptions DeliveryStreamCloudWatchLoggingOptions - Describes the Amazon CloudWatch logging options for your delivery stream.
- ClusterEndpoint string - The endpoint to use when communicating with the cluster. Specify either this ClusterEndpoint or the DomainARN field.
- DocumentIdOptions DeliveryStreamDocumentIdOptions - Indicates the method for setting up document ID. The supported methods are Firehose generated document ID and OpenSearch Service generated document ID.
- DomainArn string - The ARN of the Amazon OpenSearch Service domain.
- IndexRotationPeriod DeliveryStreamAmazonopensearchserviceDestinationConfigurationIndexRotationPeriod - The Amazon OpenSearch Service index rotation period. Index rotation appends a timestamp to the IndexName to facilitate the expiration of old data.
- ProcessingConfiguration DeliveryStreamProcessingConfiguration - Describes a data processing configuration.
- RetryOptions DeliveryStreamAmazonopensearchserviceRetryOptions - The retry behavior in case Kinesis Data Firehose is unable to deliver documents to Amazon OpenSearch Service. The default value is 300 (5 minutes).
- S3BackupMode DeliveryStreamAmazonopensearchserviceDestinationConfigurationS3BackupMode - Defines how documents should be delivered to Amazon S3.
- TypeName string - The Amazon OpenSearch Service type name.
- VpcConfiguration DeliveryStreamVpcConfiguration - The details of the VPC of the Amazon OpenSearch Service destination.
- indexName String - The Amazon OpenSearch Service index name.
- roleArn String - The Amazon Resource Name (ARN) of the IAM role to be assumed by Kinesis Data Firehose for calling the Amazon OpenSearch Service Configuration API and for indexing documents.
- s3Configuration DeliveryStreamS3DestinationConfiguration - Describes the configuration of a destination in Amazon S3.
- bufferingHints DeliveryStreamAmazonopensearchserviceBufferingHints - The buffering options. If no value is specified, the default values for AmazonopensearchserviceBufferingHints are used.
- cloudWatchLoggingOptions DeliveryStreamCloudWatchLoggingOptions - Describes the Amazon CloudWatch logging options for your delivery stream.
- clusterEndpoint String - The endpoint to use when communicating with the cluster. Specify either this ClusterEndpoint or the DomainARN field.
- documentIdOptions DeliveryStreamDocumentIdOptions - Indicates the method for setting up document ID. The supported methods are Firehose generated document ID and OpenSearch Service generated document ID.
- domainArn String - The ARN of the Amazon OpenSearch Service domain.
- indexRotationPeriod DeliveryStreamAmazonopensearchserviceDestinationConfigurationIndexRotationPeriod - The Amazon OpenSearch Service index rotation period. Index rotation appends a timestamp to the IndexName to facilitate the expiration of old data.
- processingConfiguration DeliveryStreamProcessingConfiguration - Describes a data processing configuration.
- retryOptions DeliveryStreamAmazonopensearchserviceRetryOptions - The retry behavior in case Kinesis Data Firehose is unable to deliver documents to Amazon OpenSearch Service. The default value is 300 (5 minutes).
- s3BackupMode DeliveryStreamAmazonopensearchserviceDestinationConfigurationS3BackupMode - Defines how documents should be delivered to Amazon S3.
- typeName String - The Amazon OpenSearch Service type name.
- vpcConfiguration DeliveryStreamVpcConfiguration - The details of the VPC of the Amazon OpenSearch Service destination.
- indexName string - The Amazon OpenSearch Service index name.
- roleArn string - The Amazon Resource Name (ARN) of the IAM role to be assumed by Kinesis Data Firehose for calling the Amazon OpenSearch Service Configuration API and for indexing documents.
- s3Configuration DeliveryStreamS3DestinationConfiguration - Describes the configuration of a destination in Amazon S3.
- bufferingHints DeliveryStreamAmazonopensearchserviceBufferingHints - The buffering options. If no value is specified, the default values for AmazonopensearchserviceBufferingHints are used.
- cloudWatchLoggingOptions DeliveryStreamCloudWatchLoggingOptions - Describes the Amazon CloudWatch logging options for your delivery stream.
- clusterEndpoint string - The endpoint to use when communicating with the cluster. Specify either this ClusterEndpoint or the DomainARN field.
- documentIdOptions DeliveryStreamDocumentIdOptions - Indicates the method for setting up document ID. The supported methods are Firehose generated document ID and OpenSearch Service generated document ID.
- domainArn string - The ARN of the Amazon OpenSearch Service domain.
- indexRotationPeriod DeliveryStreamAmazonopensearchserviceDestinationConfigurationIndexRotationPeriod - The Amazon OpenSearch Service index rotation period. Index rotation appends a timestamp to the IndexName to facilitate the expiration of old data.
- processingConfiguration DeliveryStreamProcessingConfiguration - Describes a data processing configuration.
- retryOptions DeliveryStreamAmazonopensearchserviceRetryOptions - The retry behavior in case Kinesis Data Firehose is unable to deliver documents to Amazon OpenSearch Service. The default value is 300 (5 minutes).
- s3BackupMode DeliveryStreamAmazonopensearchserviceDestinationConfigurationS3BackupMode - Defines how documents should be delivered to Amazon S3.
- typeName string - The Amazon OpenSearch Service type name.
- vpcConfiguration DeliveryStreamVpcConfiguration - The details of the VPC of the Amazon OpenSearch Service destination.
- index_name str - The Amazon OpenSearch Service index name.
- role_arn str - The Amazon Resource Name (ARN) of the IAM role to be assumed by Kinesis Data Firehose for calling the Amazon OpenSearch Service Configuration API and for indexing documents.
- s3_configuration DeliveryStreamS3DestinationConfiguration - Describes the configuration of a destination in Amazon S3.
- buffering_hints DeliveryStreamAmazonopensearchserviceBufferingHints - The buffering options. If no value is specified, the default values for AmazonopensearchserviceBufferingHints are used.
- cloud_watch_logging_options DeliveryStreamCloudWatchLoggingOptions - Describes the Amazon CloudWatch logging options for your delivery stream.
- cluster_endpoint str - The endpoint to use when communicating with the cluster. Specify either this ClusterEndpoint or the DomainARN field.
- document_id_options DeliveryStreamDocumentIdOptions - Indicates the method for setting up document ID. The supported methods are Firehose generated document ID and OpenSearch Service generated document ID.
- domain_arn str - The ARN of the Amazon OpenSearch Service domain.
- index_rotation_period DeliveryStreamAmazonopensearchserviceDestinationConfigurationIndexRotationPeriod - The Amazon OpenSearch Service index rotation period. Index rotation appends a timestamp to the IndexName to facilitate the expiration of old data.
- processing_configuration DeliveryStreamProcessingConfiguration - Describes a data processing configuration.
- retry_options DeliveryStreamAmazonopensearchserviceRetryOptions - The retry behavior in case Kinesis Data Firehose is unable to deliver documents to Amazon OpenSearch Service. The default value is 300 (5 minutes).
- s3_backup_mode DeliveryStreamAmazonopensearchserviceDestinationConfigurationS3BackupMode - Defines how documents should be delivered to Amazon S3.
- type_name str - The Amazon OpenSearch Service type name.
- vpc_configuration DeliveryStreamVpcConfiguration - The details of the VPC of the Amazon OpenSearch Service destination.
- indexName String - The Amazon OpenSearch Service index name.
- roleArn String - The Amazon Resource Name (ARN) of the IAM role to be assumed by Kinesis Data Firehose for calling the Amazon OpenSearch Service Configuration API and for indexing documents.
- s3Configuration Property Map - Describes the configuration of a destination in Amazon S3.
- bufferingHints Property Map - The buffering options. If no value is specified, the default values for AmazonopensearchserviceBufferingHints are used.
- cloudWatchLoggingOptions Property Map - Describes the Amazon CloudWatch logging options for your delivery stream.
- clusterEndpoint String - The endpoint to use when communicating with the cluster. Specify either this ClusterEndpoint or the DomainARN field.
- documentIdOptions Property Map - Indicates the method for setting up document ID. The supported methods are Firehose generated document ID and OpenSearch Service generated document ID.
- domainArn String - The ARN of the Amazon OpenSearch Service domain.
- indexRotationPeriod "NoRotation" | "OneHour" | "OneDay" | "OneWeek" | "OneMonth" - The Amazon OpenSearch Service index rotation period. Index rotation appends a timestamp to the IndexName to facilitate the expiration of old data.
- processingConfiguration Property Map - Describes a data processing configuration.
- retryOptions Property Map - The retry behavior in case Kinesis Data Firehose is unable to deliver documents to Amazon OpenSearch Service. The default value is 300 (5 minutes).
- s3BackupMode "FailedDocumentsOnly" | "AllDocuments" - Defines how documents should be delivered to Amazon S3.
- typeName String - The Amazon OpenSearch Service type name.
- vpcConfiguration Property Map - The details of the VPC of the Amazon OpenSearch Service destination.
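Taken together, an Amazon OpenSearch Service destination needs at least an index name, a delivery role ARN, and an S3 backup configuration; the remaining properties tune rotation, retries, and logging. A minimal sketch of that argument shape, using plain Python dicts to mirror the typed properties above (the role and bucket ARNs are placeholder values, not real resources):

```python
# Sketch of the amazonopensearchservice_destination_configuration shape,
# with plain dicts standing in for the typed Args classes documented above.
def opensearch_destination(index_name, role_arn, s3_bucket_arn):
    return {
        "index_name": index_name,
        "role_arn": role_arn,
        # Specify either domain_arn or cluster_endpoint, not both.
        "domain_arn": None,
        "s3_backup_mode": "FailedDocumentsOnly",  # or "AllDocuments"
        "retry_options": {"duration_in_seconds": 300},  # documented default
        "s3_configuration": {
            "bucket_arn": s3_bucket_arn,
            "role_arn": role_arn,
        },
    }

cfg = opensearch_destination(
    "logs",
    "arn:aws:iam::111122223333:role/firehose-role",   # placeholder
    "arn:aws:s3:::my-backup-bucket")                   # placeholder
```

The same keys appear in camelCase (`indexName`, `s3BackupMode`, …) in the TypeScript and YAML variants.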
DeliveryStreamAmazonopensearchserviceDestinationConfigurationIndexRotationPeriod, DeliveryStreamAmazonopensearchserviceDestinationConfigurationIndexRotationPeriodArgs
- NoRotation - NoRotation
- OneHour - OneHour
- OneDay - OneDay
- OneWeek - OneWeek
- OneMonth - OneMonth
- DeliveryStreamAmazonopensearchserviceDestinationConfigurationIndexRotationPeriodNoRotation - NoRotation
- DeliveryStreamAmazonopensearchserviceDestinationConfigurationIndexRotationPeriodOneHour - OneHour
- DeliveryStreamAmazonopensearchserviceDestinationConfigurationIndexRotationPeriodOneDay - OneDay
- DeliveryStreamAmazonopensearchserviceDestinationConfigurationIndexRotationPeriodOneWeek - OneWeek
- DeliveryStreamAmazonopensearchserviceDestinationConfigurationIndexRotationPeriodOneMonth - OneMonth
- NoRotation - NoRotation
- OneHour - OneHour
- OneDay - OneDay
- OneWeek - OneWeek
- OneMonth - OneMonth
- NoRotation - NoRotation
- OneHour - OneHour
- OneDay - OneDay
- OneWeek - OneWeek
- OneMonth - OneMonth
- NO_ROTATION - NoRotation
- ONE_HOUR - OneHour
- ONE_DAY - OneDay
- ONE_WEEK - OneWeek
- ONE_MONTH - OneMonth
- "NoRotation" - NoRotation
- "OneHour" - OneHour
- "OneDay" - OneDay
- "OneWeek" - OneWeek
- "OneMonth" - OneMonth
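As noted above, index rotation appends a timestamp to the IndexName so that old data can expire with its index. A sketch of how a rotation period turns a base index name into a dated one (the exact suffix formats are an assumption for illustration, not the service's verbatim behavior):

```python
from datetime import datetime, timezone

# Illustrative mapping from rotation period to a dated index name.
# Suffix formats here are assumptions; Firehose defines the real ones.
def rotated_index(index_name, period, now):
    suffix = {
        "NoRotation": "",
        "OneHour": now.strftime("-%Y-%m-%d-%H"),
        "OneDay": now.strftime("-%Y-%m-%d"),
        "OneWeek": "-{}-w{:02d}".format(now.year, now.isocalendar()[1]),
        "OneMonth": now.strftime("-%Y-%m"),
    }[period]
    return index_name + suffix

ts = datetime(2024, 3, 5, 14, 0, tzinfo=timezone.utc)
```

With `OneDay`, records arriving on different days land in different indices, so dropping a day of data is a cheap index delete rather than a document-by-document purge.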
DeliveryStreamAmazonopensearchserviceDestinationConfigurationS3BackupMode, DeliveryStreamAmazonopensearchserviceDestinationConfigurationS3BackupModeArgs
- FailedDocumentsOnly - FailedDocumentsOnly
- AllDocuments - AllDocuments
- DeliveryStreamAmazonopensearchserviceDestinationConfigurationS3BackupModeFailedDocumentsOnly - FailedDocumentsOnly
- DeliveryStreamAmazonopensearchserviceDestinationConfigurationS3BackupModeAllDocuments - AllDocuments
- FailedDocumentsOnly - FailedDocumentsOnly
- AllDocuments - AllDocuments
- FailedDocumentsOnly - FailedDocumentsOnly
- AllDocuments - AllDocuments
- FAILED_DOCUMENTS_ONLY - FailedDocumentsOnly
- ALL_DOCUMENTS - AllDocuments
- "FailedDocumentsOnly" - FailedDocumentsOnly
- "AllDocuments" - AllDocuments
DeliveryStreamAmazonopensearchserviceRetryOptions, DeliveryStreamAmazonopensearchserviceRetryOptionsArgs
- DurationInSeconds int - After an initial failure to deliver to Amazon OpenSearch Service, the total amount of time during which Kinesis Data Firehose retries delivery (including the first attempt). After this time has elapsed, the failed documents are written to Amazon S3. Default value is 300 seconds (5 minutes). A value of 0 (zero) results in no retries.
- DurationInSeconds int - After an initial failure to deliver to Amazon OpenSearch Service, the total amount of time during which Kinesis Data Firehose retries delivery (including the first attempt). After this time has elapsed, the failed documents are written to Amazon S3. Default value is 300 seconds (5 minutes). A value of 0 (zero) results in no retries.
- durationInSeconds Integer - After an initial failure to deliver to Amazon OpenSearch Service, the total amount of time during which Kinesis Data Firehose retries delivery (including the first attempt). After this time has elapsed, the failed documents are written to Amazon S3. Default value is 300 seconds (5 minutes). A value of 0 (zero) results in no retries.
- durationInSeconds number - After an initial failure to deliver to Amazon OpenSearch Service, the total amount of time during which Kinesis Data Firehose retries delivery (including the first attempt). After this time has elapsed, the failed documents are written to Amazon S3. Default value is 300 seconds (5 minutes). A value of 0 (zero) results in no retries.
- duration_in_seconds int - After an initial failure to deliver to Amazon OpenSearch Service, the total amount of time during which Kinesis Data Firehose retries delivery (including the first attempt). After this time has elapsed, the failed documents are written to Amazon S3. Default value is 300 seconds (5 minutes). A value of 0 (zero) results in no retries.
- durationInSeconds Number - After an initial failure to deliver to Amazon OpenSearch Service, the total amount of time during which Kinesis Data Firehose retries delivery (including the first attempt). After this time has elapsed, the failed documents are written to Amazon S3. Default value is 300 seconds (5 minutes). A value of 0 (zero) results in no retries.
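The retry window described above can be sketched as a loop: retry until `DurationInSeconds` has elapsed since the first attempt (so 0 means exactly one attempt), then hand the failed documents off to the S3 backup. A minimal sketch, assuming a caller-supplied `attempt` callable (hypothetical, standing in for an actual delivery call):

```python
import time

# Sketch of the documented retry semantics: keep retrying delivery until
# duration_in_seconds has elapsed (counting from the first attempt), then
# fall back to writing the failed documents to S3. 0 => no retries.
def deliver_with_retries(attempt, duration_in_seconds=300, backoff=5,
                         clock=time.monotonic, sleep=lambda s: None):
    deadline = clock() + duration_in_seconds
    while True:
        if attempt():
            return "delivered"
        if clock() >= deadline:
            return "written_to_s3"  # failed documents go to the backup bucket
        sleep(backoff)
```

The `backoff`, `clock`, and `sleep` parameters are illustrative hooks, not part of the Firehose API; the service manages its own retry schedule internally.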
DeliveryStreamAuthenticationConfiguration, DeliveryStreamAuthenticationConfigurationArgs
- Connectivity Pulumi.AwsNative.KinesisFirehose.DeliveryStreamAuthenticationConfigurationConnectivity - The type of connectivity used to access the Amazon MSK cluster.
- RoleArn string - The ARN of the role used to access the Amazon MSK cluster.
- Connectivity DeliveryStreamAuthenticationConfigurationConnectivity - The type of connectivity used to access the Amazon MSK cluster.
- RoleArn string - The ARN of the role used to access the Amazon MSK cluster.
- connectivity DeliveryStreamAuthenticationConfigurationConnectivity - The type of connectivity used to access the Amazon MSK cluster.
- roleArn String - The ARN of the role used to access the Amazon MSK cluster.
- connectivity DeliveryStreamAuthenticationConfigurationConnectivity - The type of connectivity used to access the Amazon MSK cluster.
- roleArn string - The ARN of the role used to access the Amazon MSK cluster.
- connectivity DeliveryStreamAuthenticationConfigurationConnectivity - The type of connectivity used to access the Amazon MSK cluster.
- role_arn str - The ARN of the role used to access the Amazon MSK cluster.
- connectivity "PUBLIC" | "PRIVATE" - The type of connectivity used to access the Amazon MSK cluster.
- roleArn String - The ARN of the role used to access the Amazon MSK cluster.
DeliveryStreamAuthenticationConfigurationConnectivity, DeliveryStreamAuthenticationConfigurationConnectivityArgs
- Public - PUBLIC
- Private - PRIVATE
- DeliveryStreamAuthenticationConfigurationConnectivityPublic - PUBLIC
- DeliveryStreamAuthenticationConfigurationConnectivityPrivate - PRIVATE
- Public - PUBLIC
- Private - PRIVATE
- Public - PUBLIC
- Private - PRIVATE
- PUBLIC - PUBLIC
- PRIVATE - PRIVATE
- "PUBLIC" - PUBLIC
- "PRIVATE" - PRIVATE
DeliveryStreamBufferingHints, DeliveryStreamBufferingHintsArgs
- IntervalInSeconds int - The length of time, in seconds, that Kinesis Data Firehose buffers incoming data before delivering it to the destination. For valid values, see the IntervalInSeconds content for the BufferingHints data type in the Amazon Kinesis Data Firehose API Reference.
- SizeInMbs int - The size of the buffer, in MBs, that Kinesis Data Firehose uses for incoming data before delivering it to the destination. For valid values, see the SizeInMBs content for the BufferingHints data type in the Amazon Kinesis Data Firehose API Reference.
- IntervalInSeconds int - The length of time, in seconds, that Kinesis Data Firehose buffers incoming data before delivering it to the destination. For valid values, see the IntervalInSeconds content for the BufferingHints data type in the Amazon Kinesis Data Firehose API Reference.
- SizeInMbs int - The size of the buffer, in MBs, that Kinesis Data Firehose uses for incoming data before delivering it to the destination. For valid values, see the SizeInMBs content for the BufferingHints data type in the Amazon Kinesis Data Firehose API Reference.
- intervalInSeconds Integer - The length of time, in seconds, that Kinesis Data Firehose buffers incoming data before delivering it to the destination. For valid values, see the IntervalInSeconds content for the BufferingHints data type in the Amazon Kinesis Data Firehose API Reference.
- sizeInMbs Integer - The size of the buffer, in MBs, that Kinesis Data Firehose uses for incoming data before delivering it to the destination. For valid values, see the SizeInMBs content for the BufferingHints data type in the Amazon Kinesis Data Firehose API Reference.
- intervalInSeconds number - The length of time, in seconds, that Kinesis Data Firehose buffers incoming data before delivering it to the destination. For valid values, see the IntervalInSeconds content for the BufferingHints data type in the Amazon Kinesis Data Firehose API Reference.
- sizeInMbs number - The size of the buffer, in MBs, that Kinesis Data Firehose uses for incoming data before delivering it to the destination. For valid values, see the SizeInMBs content for the BufferingHints data type in the Amazon Kinesis Data Firehose API Reference.
- interval_in_seconds int - The length of time, in seconds, that Kinesis Data Firehose buffers incoming data before delivering it to the destination. For valid values, see the IntervalInSeconds content for the BufferingHints data type in the Amazon Kinesis Data Firehose API Reference.
- size_in_mbs int - The size of the buffer, in MBs, that Kinesis Data Firehose uses for incoming data before delivering it to the destination. For valid values, see the SizeInMBs content for the BufferingHints data type in the Amazon Kinesis Data Firehose API Reference.
- intervalInSeconds Number - The length of time, in seconds, that Kinesis Data Firehose buffers incoming data before delivering it to the destination. For valid values, see the IntervalInSeconds content for the BufferingHints data type in the Amazon Kinesis Data Firehose API Reference.
- sizeInMbs Number - The size of the buffer, in MBs, that Kinesis Data Firehose uses for incoming data before delivering it to the destination. For valid values, see the SizeInMBs content for the BufferingHints data type in the Amazon Kinesis Data Firehose API Reference.
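Buffering hints interact as an either/or condition: Firehose delivers a buffer when it reaches the size threshold or when the interval elapses, whichever happens first. A minimal sketch of that condition (the 5 MB / 300 s defaults here are illustrative assumptions; check the API Reference for the valid ranges):

```python
# Sketch of buffering-hints semantics: flush when EITHER the size hint
# or the interval hint is reached, whichever comes first.
# Default values below are illustrative, not authoritative.
def should_flush(buffered_bytes, seconds_since_last_flush,
                 size_in_mbs=5, interval_in_seconds=300):
    return (buffered_bytes >= size_in_mbs * 1024 * 1024
            or seconds_since_last_flush >= interval_in_seconds)
```

A practical consequence: a low-volume stream is paced by the interval hint (small, frequent objects), while a high-volume stream is paced by the size hint.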
DeliveryStreamCatalogConfiguration, DeliveryStreamCatalogConfigurationArgs
- CatalogArn string - Specifies the Glue catalog ARN identifier of the destination Apache Iceberg Tables. You must specify the ARN in the format arn:aws:glue:region:account-id:catalog. Amazon Data Firehose is in preview release and is subject to change.
- CatalogArn string - Specifies the Glue catalog ARN identifier of the destination Apache Iceberg Tables. You must specify the ARN in the format arn:aws:glue:region:account-id:catalog. Amazon Data Firehose is in preview release and is subject to change.
- catalogArn String - Specifies the Glue catalog ARN identifier of the destination Apache Iceberg Tables. You must specify the ARN in the format arn:aws:glue:region:account-id:catalog. Amazon Data Firehose is in preview release and is subject to change.
- catalogArn string - Specifies the Glue catalog ARN identifier of the destination Apache Iceberg Tables. You must specify the ARN in the format arn:aws:glue:region:account-id:catalog. Amazon Data Firehose is in preview release and is subject to change.
- catalog_arn str - Specifies the Glue catalog ARN identifier of the destination Apache Iceberg Tables. You must specify the ARN in the format arn:aws:glue:region:account-id:catalog. Amazon Data Firehose is in preview release and is subject to change.
- catalogArn String - Specifies the Glue catalog ARN identifier of the destination Apache Iceberg Tables. You must specify the ARN in the format arn:aws:glue:region:account-id:catalog. Amazon Data Firehose is in preview release and is subject to change.
DeliveryStreamCloudWatchLoggingOptions, DeliveryStreamCloudWatchLoggingOptionsArgs
- Enabled bool - Indicates whether CloudWatch Logs logging is enabled.
- LogGroupName string - The name of the CloudWatch Logs log group that contains the log stream that Kinesis Data Firehose will use. Conditional. If you enable logging, you must specify this property.
- LogStreamName string - The name of the CloudWatch Logs log stream that Kinesis Data Firehose uses to send logs about data delivery. Conditional. If you enable logging, you must specify this property.
- Enabled bool - Indicates whether CloudWatch Logs logging is enabled.
- LogGroupName string - The name of the CloudWatch Logs log group that contains the log stream that Kinesis Data Firehose will use. Conditional. If you enable logging, you must specify this property.
- LogStreamName string - The name of the CloudWatch Logs log stream that Kinesis Data Firehose uses to send logs about data delivery. Conditional. If you enable logging, you must specify this property.
- enabled Boolean - Indicates whether CloudWatch Logs logging is enabled.
- logGroupName String - The name of the CloudWatch Logs log group that contains the log stream that Kinesis Data Firehose will use. Conditional. If you enable logging, you must specify this property.
- logStreamName String - The name of the CloudWatch Logs log stream that Kinesis Data Firehose uses to send logs about data delivery. Conditional. If you enable logging, you must specify this property.
- enabled boolean - Indicates whether CloudWatch Logs logging is enabled.
- logGroupName string - The name of the CloudWatch Logs log group that contains the log stream that Kinesis Data Firehose will use. Conditional. If you enable logging, you must specify this property.
- logStreamName string - The name of the CloudWatch Logs log stream that Kinesis Data Firehose uses to send logs about data delivery. Conditional. If you enable logging, you must specify this property.
- enabled bool - Indicates whether CloudWatch Logs logging is enabled.
- log_group_name str - The name of the CloudWatch Logs log group that contains the log stream that Kinesis Data Firehose will use. Conditional. If you enable logging, you must specify this property.
- log_stream_name str - The name of the CloudWatch Logs log stream that Kinesis Data Firehose uses to send logs about data delivery. Conditional. If you enable logging, you must specify this property.
- enabled Boolean - Indicates whether CloudWatch Logs logging is enabled.
- logGroupName String - The name of the CloudWatch Logs log group that contains the log stream that Kinesis Data Firehose will use. Conditional. If you enable logging, you must specify this property.
- logStreamName String - The name of the CloudWatch Logs log stream that Kinesis Data Firehose uses to send logs about data delivery. Conditional. If you enable logging, you must specify this property.
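The "Conditional" notes above boil down to one rule: if logging is enabled, both the log group name and the log stream name must be provided. A sketch of that check as pre-deployment validation (the function name is hypothetical):

```python
# Sketch of the conditional rule above: enabling logging requires both
# the log group name and the log stream name.
def validate_cloudwatch_logging_options(enabled, log_group_name=None,
                                        log_stream_name=None):
    if enabled and not (log_group_name and log_stream_name):
        raise ValueError(
            "log_group_name and log_stream_name are required when "
            "logging is enabled")
    return {"enabled": enabled,
            "log_group_name": log_group_name,
            "log_stream_name": log_stream_name}
```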
DeliveryStreamCopyCommand, DeliveryStreamCopyCommandArgs
- DataTableName string - The name of the target table. The table must already exist in the database.
- CopyOptions string - Parameters to use with the Amazon Redshift COPY command. For examples, see the CopyOptions content for the CopyCommand data type in the Amazon Kinesis Data Firehose API Reference.
- DataTableColumns string - A comma-separated list of column names.
- DataTableName string - The name of the target table. The table must already exist in the database.
- CopyOptions string - Parameters to use with the Amazon Redshift COPY command. For examples, see the CopyOptions content for the CopyCommand data type in the Amazon Kinesis Data Firehose API Reference.
- DataTableColumns string - A comma-separated list of column names.
- dataTableName String - The name of the target table. The table must already exist in the database.
- copyOptions String - Parameters to use with the Amazon Redshift COPY command. For examples, see the CopyOptions content for the CopyCommand data type in the Amazon Kinesis Data Firehose API Reference.
- dataTableColumns String - A comma-separated list of column names.
- dataTableName string - The name of the target table. The table must already exist in the database.
- copyOptions string - Parameters to use with the Amazon Redshift COPY command. For examples, see the CopyOptions content for the CopyCommand data type in the Amazon Kinesis Data Firehose API Reference.
- dataTableColumns string - A comma-separated list of column names.
- data_table_name str - The name of the target table. The table must already exist in the database.
- copy_options str - Parameters to use with the Amazon Redshift COPY command. For examples, see the CopyOptions content for the CopyCommand data type in the Amazon Kinesis Data Firehose API Reference.
- data_table_columns str - A comma-separated list of column names.
- dataTableName String - The name of the target table. The table must already exist in the database.
- copyOptions String - Parameters to use with the Amazon Redshift COPY command. For examples, see the CopyOptions content for the CopyCommand data type in the Amazon Kinesis Data Firehose API Reference.
- dataTableColumns String - A comma-separated list of column names.
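These three fields map onto the pieces of a Redshift COPY statement: the table name, an optional column list, and the trailing options string. A sketch of that mapping, for intuition only; Firehose assembles the real command internally, and the statement shape below (including the `CREDENTIALS` clause) is an assumption:

```python
# Illustrative assembly of a Redshift COPY statement from CopyCommand
# fields. Firehose builds the actual command itself; this only shows how
# data_table_name, data_table_columns, and copy_options fit together.
def build_copy_statement(data_table_name, s3_uri, iam_role_arn,
                         data_table_columns=None, copy_options=None):
    columns = f" ({data_table_columns})" if data_table_columns else ""
    options = f" {copy_options}" if copy_options else ""
    return (f"COPY {data_table_name}{columns} FROM '{s3_uri}' "
            f"CREDENTIALS 'aws_iam_role={iam_role_arn}'{options}")

stmt = build_copy_statement(
    "events", "s3://my-bucket/firehose/",              # placeholders
    "arn:aws:iam::111122223333:role/redshift-copy",
    data_table_columns="id,ts,payload",
    copy_options="JSON 'auto'")
```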
DeliveryStreamDataFormatConversionConfiguration, DeliveryStreamDataFormatConversionConfigurationArgs
- Enabled bool - Defaults to true. Set it to false if you want to disable format conversion while preserving the configuration details.
- InputFormatConfiguration Pulumi.AwsNative.KinesisFirehose.Inputs.DeliveryStreamInputFormatConfiguration - Specifies the deserializer that you want Firehose to use to convert the format of your data from JSON. This parameter is required if Enabled is set to true.
- OutputFormatConfiguration Pulumi.AwsNative.KinesisFirehose.Inputs.DeliveryStreamOutputFormatConfiguration - Specifies the serializer that you want Firehose to use to convert the format of your data to the Parquet or ORC format. This parameter is required if Enabled is set to true.
- SchemaConfiguration Pulumi.AwsNative.KinesisFirehose.Inputs.DeliveryStreamSchemaConfiguration - Specifies the AWS Glue Data Catalog table that contains the column information. This parameter is required if Enabled is set to true.
- Enabled bool - Defaults to true. Set it to false if you want to disable format conversion while preserving the configuration details.
- InputFormatConfiguration DeliveryStreamInputFormatConfiguration - Specifies the deserializer that you want Firehose to use to convert the format of your data from JSON. This parameter is required if Enabled is set to true.
- OutputFormatConfiguration DeliveryStreamOutputFormatConfiguration - Specifies the serializer that you want Firehose to use to convert the format of your data to the Parquet or ORC format. This parameter is required if Enabled is set to true.
- SchemaConfiguration DeliveryStreamSchemaConfiguration - Specifies the AWS Glue Data Catalog table that contains the column information. This parameter is required if Enabled is set to true.
- enabled Boolean - Defaults to true. Set it to false if you want to disable format conversion while preserving the configuration details.
- inputFormatConfiguration DeliveryStreamInputFormatConfiguration - Specifies the deserializer that you want Firehose to use to convert the format of your data from JSON. This parameter is required if Enabled is set to true.
- outputFormatConfiguration DeliveryStreamOutputFormatConfiguration - Specifies the serializer that you want Firehose to use to convert the format of your data to the Parquet or ORC format. This parameter is required if Enabled is set to true.
- schemaConfiguration DeliveryStreamSchemaConfiguration - Specifies the AWS Glue Data Catalog table that contains the column information. This parameter is required if Enabled is set to true.
- enabled boolean - Defaults to true. Set it to false if you want to disable format conversion while preserving the configuration details.
- inputFormatConfiguration DeliveryStreamInputFormatConfiguration - Specifies the deserializer that you want Firehose to use to convert the format of your data from JSON. This parameter is required if Enabled is set to true.
- outputFormatConfiguration DeliveryStreamOutputFormatConfiguration - Specifies the serializer that you want Firehose to use to convert the format of your data to the Parquet or ORC format. This parameter is required if Enabled is set to true.
- schemaConfiguration DeliveryStreamSchemaConfiguration - Specifies the AWS Glue Data Catalog table that contains the column information. This parameter is required if Enabled is set to true.
- enabled bool - Defaults to true. Set it to false if you want to disable format conversion while preserving the configuration details.
- input_format_configuration DeliveryStreamInputFormatConfiguration - Specifies the deserializer that you want Firehose to use to convert the format of your data from JSON. This parameter is required if Enabled is set to true.
- output_format_configuration DeliveryStreamOutputFormatConfiguration - Specifies the serializer that you want Firehose to use to convert the format of your data to the Parquet or ORC format. This parameter is required if Enabled is set to true.
- schema_configuration DeliveryStreamSchemaConfiguration - Specifies the AWS Glue Data Catalog table that contains the column information. This parameter is required if Enabled is set to true.
- enabled Boolean - Defaults to true. Set it to false if you want to disable format conversion while preserving the configuration details.
- inputFormatConfiguration Property Map - Specifies the deserializer that you want Firehose to use to convert the format of your data from JSON. This parameter is required if Enabled is set to true.
- outputFormatConfiguration Property Map - Specifies the serializer that you want Firehose to use to convert the format of your data to the Parquet or ORC format. This parameter is required if Enabled is set to true.
- schemaConfiguration Property Map - Specifies the AWS Glue Data Catalog table that contains the column information. This parameter is required if Enabled is set to true.
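Three of these properties are required only when `Enabled` is true, which is easy to get wrong when toggling conversion off and on. A sketch of that dependency as a validation step (function name hypothetical):

```python
# Sketch of the conditional requirements above: with format conversion
# enabled, the input format, output format, and schema configuration
# must all be supplied.
def validate_data_format_conversion(enabled=True, input_format=None,
                                    output_format=None, schema=None):
    if enabled:
        missing = [name for name, value in [
            ("input_format_configuration", input_format),
            ("output_format_configuration", output_format),
            ("schema_configuration", schema)] if value is None]
        if missing:
            raise ValueError("required when enabled: " + ", ".join(missing))
    return True
```

Note the documented escape hatch: setting `enabled` to false disables conversion while preserving the rest of the configuration, so the sub-configurations may remain in place unused.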
DeliveryStreamDeserializer, DeliveryStreamDeserializerArgs
- HiveJsonSerDe Pulumi.AwsNative.KinesisFirehose.Inputs.DeliveryStreamHiveJsonSerDe - The native Hive / HCatalog JsonSerDe. Used by Firehose for deserializing data, which means converting it from the JSON format in preparation for serializing it to the Parquet or ORC format. This is one of two deserializers you can choose, depending on which one offers the functionality you need. The other option is the OpenX SerDe.
- OpenXJsonSerDe Pulumi.AwsNative.KinesisFirehose.Inputs.DeliveryStreamOpenXJsonSerDe - The OpenX SerDe. Used by Firehose for deserializing data, which means converting it from the JSON format in preparation for serializing it to the Parquet or ORC format. This is one of two deserializers you can choose, depending on which one offers the functionality you need. The other option is the native Hive / HCatalog JsonSerDe.
- HiveJsonSerDe DeliveryStreamHiveJsonSerDe - The native Hive / HCatalog JsonSerDe. Used by Firehose for deserializing data, which means converting it from the JSON format in preparation for serializing it to the Parquet or ORC format. This is one of two deserializers you can choose, depending on which one offers the functionality you need. The other option is the OpenX SerDe.
- OpenXJsonSerDe DeliveryStreamOpenXJsonSerDe - The OpenX SerDe. Used by Firehose for deserializing data, which means converting it from the JSON format in preparation for serializing it to the Parquet or ORC format. This is one of two deserializers you can choose, depending on which one offers the functionality you need. The other option is the native Hive / HCatalog JsonSerDe.
- hiveJsonSerDe DeliveryStreamHiveJsonSerDe - The native Hive / HCatalog JsonSerDe. Used by Firehose for deserializing data, which means converting it from the JSON format in preparation for serializing it to the Parquet or ORC format. This is one of two deserializers you can choose, depending on which one offers the functionality you need. The other option is the OpenX SerDe.
- openXJsonSerDe DeliveryStreamOpenXJsonSerDe - The OpenX SerDe. Used by Firehose for deserializing data, which means converting it from the JSON format in preparation for serializing it to the Parquet or ORC format. This is one of two deserializers you can choose, depending on which one offers the functionality you need. The other option is the native Hive / HCatalog JsonSerDe.
- hiveJsonSerDe DeliveryStreamHiveJsonSerDe - The native Hive / HCatalog JsonSerDe. Used by Firehose for deserializing data, which means converting it from the JSON format in preparation for serializing it to the Parquet or ORC format. This is one of two deserializers you can choose, depending on which one offers the functionality you need. The other option is the OpenX SerDe.
- openXJsonSerDe DeliveryStreamOpenXJsonSerDe - The OpenX SerDe. Used by Firehose for deserializing data, which means converting it from the JSON format in preparation for serializing it to the Parquet or ORC format. This is one of two deserializers you can choose, depending on which one offers the functionality you need. The other option is the native Hive / HCatalog JsonSerDe.
- hive_json_ser_de DeliveryStreamHiveJsonSerDe - The native Hive / HCatalog JsonSerDe. Used by Firehose for deserializing data, which means converting it from the JSON format in preparation for serializing it to the Parquet or ORC format. This is one of two deserializers you can choose, depending on which one offers the functionality you need. The other option is the OpenX SerDe.
- open_x_json_ser_de DeliveryStreamOpenXJsonSerDe - The OpenX SerDe. Used by Firehose for deserializing data, which means converting it from the JSON format in preparation for serializing it to the Parquet or ORC format. This is one of two deserializers you can choose, depending on which one offers the functionality you need. The other option is the native Hive / HCatalog JsonSerDe.
- hiveJsonSerDe Property Map - The native Hive / HCatalog JsonSerDe. Used by Firehose for deserializing data, which means converting it from the JSON format in preparation for serializing it to the Parquet or ORC format. This is one of two deserializers you can choose, depending on which one offers the functionality you need. The other option is the OpenX SerDe.
- openXJsonSerDe Property Map - The OpenX SerDe. Used by Firehose for deserializing data, which means converting it from the JSON format in preparation for serializing it to the Parquet or ORC format. This is one of two deserializers you can choose, depending on which one offers the functionality you need. The other option is the native Hive / HCatalog JsonSerDe.
DeliveryStreamDestinationTableConfiguration, DeliveryStreamDestinationTableConfigurationArgs
- DestinationDatabaseName string
- DestinationTableName string
- S3ErrorOutputPrefix string
- UniqueKeys List<string>
- DestinationDatabaseName string
- DestinationTableName string
- S3ErrorOutputPrefix string
- UniqueKeys []string
- destinationDatabaseName String
- destinationTableName String
- s3ErrorOutputPrefix String
- uniqueKeys List<String>
- destinationDatabaseName string
- destinationTableName string
- s3ErrorOutputPrefix string
- uniqueKeys string[]
- destination_database_name str
- destination_table_name str
- s3_error_output_prefix str
- unique_keys Sequence[str]
- destinationDatabaseName String
- destinationTableName String
- s3ErrorOutputPrefix String
- uniqueKeys List<String>
DeliveryStreamDocumentIdOptions, DeliveryStreamDocumentIdOptionsArgs
- Default
Document Pulumi.Id Format Aws Native. Kinesis Firehose. Delivery Stream Document Id Options Default Document Id Format When the
FIREHOSE_DEFAULT
option is chosen, Firehose generates a unique document ID for each record based on a unique internal identifier. The generated document ID is stable across multiple delivery attempts, which helps prevent the same record from being indexed multiple times with different document IDs.When the
NO_DOCUMENT_ID
option is chosen, Firehose does not include any document IDs in the requests it sends to the Amazon OpenSearch Service. This causes the Amazon OpenSearch Service domain to generate document IDs. In case of multiple delivery attempts, this may cause the same record to be indexed more than once with different document IDs. This option enables write-heavy operations, such as the ingestion of logs and observability data, to consume less resources in the Amazon OpenSearch Service domain, resulting in improved performance.
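The practical difference between the two modes shows up on redelivery. The following is a small simulation in plain Python (not the Pulumi SDK); the dictionary "domain" and the seq-based internal identifier are simplified stand-ins for an OpenSearch Service domain and Firehose's internal record identifier:

```python
# Simplified model of how the document ID mode affects retried deliveries.
import hashlib

def index_batch(store, records, mode):
    """Index (seq, record) pairs into store (doc_id -> record)."""
    for seq, record in records:
        if mode == "FIREHOSE_DEFAULT":
            # Stable ID derived from an internal identifier (here: seq),
            # so a redelivered record overwrites instead of duplicating.
            doc_id = hashlib.sha256(str(seq).encode()).hexdigest()
        else:  # NO_DOCUMENT_ID: the domain assigns a fresh ID per request
            doc_id = f"auto-{len(store)}"
        store[doc_id] = record

batch = [(0, "log line A"), (1, "log line B")]

stable = {}
index_batch(stable, batch, "FIREHOSE_DEFAULT")
index_batch(stable, batch, "FIREHOSE_DEFAULT")  # retry of the same batch
# stable still holds 2 documents

auto = {}
index_batch(auto, batch, "NO_DOCUMENT_ID")
index_batch(auto, batch, "NO_DOCUMENT_ID")  # retry duplicates every record
# auto now holds 4 documents
```

This is why NO_DOCUMENT_ID is cheaper for the domain but can duplicate records across delivery attempts.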
DeliveryStreamDocumentIdOptionsDefaultDocumentIdFormat, DeliveryStreamDocumentIdOptionsDefaultDocumentIdFormatArgs
- FirehoseDefault - FIREHOSE_DEFAULT
- NoDocumentId - NO_DOCUMENT_ID
DeliveryStreamDynamicPartitioningConfiguration, DeliveryStreamDynamicPartitioningConfigurationArgs
- Enabled bool - Specifies whether dynamic partitioning is enabled for this Kinesis Data Firehose delivery stream.
- RetryOptions DeliveryStreamRetryOptions - Specifies the retry behavior in case Kinesis Data Firehose is unable to deliver data to an Amazon S3 prefix.
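The idea behind dynamic partitioning is that records are grouped by a partition key and each group is delivered under its own S3 prefix. The real feature extracts keys with JQ expressions or a Lambda function; the sketch below uses a plain field lookup and an illustrative prefix template, so treat the names as assumptions:

```python
# Toy model of dynamic partitioning: group records by a key and map
# each group to an S3 prefix built from that key.
from collections import defaultdict

def partition(records, key_field, prefix_template):
    groups = defaultdict(list)
    for rec in records:
        groups[rec[key_field]].append(rec)
    # One S3 prefix per distinct key value.
    return {prefix_template.format(key=k): v for k, v in groups.items()}

records = [
    {"customer_id": "a1", "event": "click"},
    {"customer_id": "b2", "event": "view"},
    {"customer_id": "a1", "event": "view"},
]
out = partition(records, "customer_id", "data/customer_id={key}/")
# two prefixes: data/customer_id=a1/ (2 records), data/customer_id=b2/ (1 record)
```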
DeliveryStreamElasticsearchBufferingHints, DeliveryStreamElasticsearchBufferingHintsArgs
- IntervalInSeconds int - The length of time, in seconds, that Kinesis Data Firehose buffers incoming data before delivering it to the destination. For valid values, see the IntervalInSeconds content for the BufferingHints data type in the Amazon Kinesis Data Firehose API Reference.
- SizeInMBs int - The size of the buffer, in MBs, that Kinesis Data Firehose uses for incoming data before delivering it to the destination. For valid values, see the SizeInMBs content for the BufferingHints data type in the Amazon Kinesis Data Firehose API Reference.
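The two hints work together: a buffer is flushed when either threshold is reached first, so the size hint bounds batch size and the interval hint bounds latency. A minimal sketch of that OR condition (plain Python; the threshold values shown are illustrative, not guaranteed defaults):

```python
def should_flush(buffered_bytes, seconds_since_first_record,
                 size_in_mbs=5, interval_in_seconds=300):
    """Flush when EITHER the size hint or the interval hint is hit."""
    return (buffered_bytes >= size_in_mbs * 1024 * 1024
            or seconds_since_first_record >= interval_in_seconds)

# Small buffer, but the interval elapsed: flush anyway.
assert should_flush(1024, 300)
# Buffer filled before the interval: flush early.
assert should_flush(5 * 1024 * 1024, 10)
# Neither hint reached: keep buffering.
assert not should_flush(1024, 10)
```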
DeliveryStreamElasticsearchDestinationConfiguration, DeliveryStreamElasticsearchDestinationConfigurationArgs
- IndexName string - The name of the Elasticsearch index to which Kinesis Data Firehose adds data for indexing.
- RoleArn string - The Amazon Resource Name (ARN) of the IAM role to be assumed by Kinesis Data Firehose for calling the Amazon ES Configuration API and for indexing documents. For more information, see Controlling Access with Amazon Kinesis Data Firehose.
- S3Configuration DeliveryStreamS3DestinationConfiguration - The S3 bucket where Kinesis Data Firehose backs up incoming data.
- BufferingHints DeliveryStreamElasticsearchBufferingHints - Configures how Kinesis Data Firehose buffers incoming data while delivering it to the Amazon ES domain.
- CloudWatchLoggingOptions DeliveryStreamCloudWatchLoggingOptions - The Amazon CloudWatch Logs logging options for the delivery stream.
- ClusterEndpoint string - The endpoint to use when communicating with the cluster. Specify either this ClusterEndpoint or the DomainARN field.
- DocumentIdOptions DeliveryStreamDocumentIdOptions - Indicates the method for setting up document ID. The supported methods are Firehose generated document ID and OpenSearch Service generated document ID.
- DomainArn string - The ARN of the Amazon ES domain. The IAM role must have permissions for DescribeElasticsearchDomain, DescribeElasticsearchDomains, and DescribeElasticsearchDomainConfig after assuming the role specified in RoleARN. Specify either ClusterEndpoint or DomainARN.
- IndexRotationPeriod DeliveryStreamElasticsearchDestinationConfigurationIndexRotationPeriod - The frequency of Elasticsearch index rotation. If you enable index rotation, Kinesis Data Firehose appends a portion of the UTC arrival timestamp to the specified index name, and rotates the appended timestamp accordingly. For more information, see Index Rotation for the Amazon ES Destination in the Amazon Kinesis Data Firehose Developer Guide.
- ProcessingConfiguration DeliveryStreamProcessingConfiguration - The data processing configuration for the Kinesis Data Firehose delivery stream.
- RetryOptions DeliveryStreamElasticsearchRetryOptions - The retry behavior when Kinesis Data Firehose is unable to deliver data to Amazon ES.
- S3BackupMode DeliveryStreamElasticsearchDestinationConfigurationS3BackupMode - The condition under which Kinesis Data Firehose delivers data to Amazon Simple Storage Service (Amazon S3). You can send Amazon S3 all documents (all data) or only the documents that Kinesis Data Firehose could not deliver to the Amazon ES destination. For more information and valid values, see the S3BackupMode content for the ElasticsearchDestinationConfiguration data type in the Amazon Kinesis Data Firehose API Reference.
- TypeName string - The Elasticsearch type name that Amazon ES adds to documents when indexing data.
- VpcConfiguration DeliveryStreamVpcConfiguration - The details of the VPC of the Amazon ES destination.
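Note the ClusterEndpoint/DomainARN wording: the description says to specify either one or the other. A small validation helper capturing that rule, written under the assumption that the two fields are mutually exclusive (a sketch, not part of the Pulumi SDK):

```python
def validate_es_destination(cluster_endpoint=None, domain_arn=None):
    """Enforce the documented rule: specify either ClusterEndpoint
    or DomainARN (assumed here: exactly one, not both, not neither)."""
    if (cluster_endpoint is None) == (domain_arn is None):
        raise ValueError(
            "Specify exactly one of cluster_endpoint or domain_arn")
    return cluster_endpoint or domain_arn

# Hypothetical ARN for illustration only.
arn = "arn:aws:es:us-east-1:123456789012:domain/logs"
assert validate_es_destination(domain_arn=arn) == arn
```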
DeliveryStreamElasticsearchDestinationConfigurationIndexRotationPeriod, DeliveryStreamElasticsearchDestinationConfigurationIndexRotationPeriodArgs
- NoRotation - NoRotation
- OneHour - OneHour
- OneDay - OneDay
- OneWeek - OneWeek
- OneMonth - OneMonth
DeliveryStreamElasticsearchDestinationConfigurationS3BackupMode, DeliveryStreamElasticsearchDestinationConfigurationS3BackupModeArgs
- FailedDocumentsOnly - FailedDocumentsOnly
- AllDocuments - AllDocuments
DeliveryStreamElasticsearchRetryOptions, DeliveryStreamElasticsearchRetryOptionsArgs
- DurationInSeconds int - After an initial failure to deliver to Amazon ES, the total amount of time during which Kinesis Data Firehose re-attempts delivery (including the first attempt). If Kinesis Data Firehose can't deliver the data within the specified time, it writes the data to the backup S3 bucket. For valid values, see the DurationInSeconds content for the ElasticsearchRetryOptions data type in the Amazon Kinesis Data Firehose API Reference.
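In other words, DurationInSeconds is a total window measured from the first attempt, not a count of retries. A small sketch of that window (plain Python; the attempt schedule is illustrative, since the actual backoff is internal to Firehose):

```python
def attempts_within_window(attempt_times, duration_in_seconds):
    """attempt_times: seconds since the FIRST attempt at which each
    delivery attempt would occur. Only attempts inside the window run;
    after it elapses, the data goes to the backup S3 bucket instead."""
    return [t for t in attempt_times if t <= duration_in_seconds]

# With a 300-second window, the attempt at t=450 never happens.
assert attempts_within_window([0, 60, 180, 450], 300) == [0, 60, 180]
```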
DeliveryStreamEncryptionConfiguration, DeliveryStreamEncryptionConfigurationArgs
- Kms
Encryption Pulumi.Config Aws Native. Kinesis Firehose. Inputs. Delivery Stream Kms Encryption Config - The AWS Key Management Service ( AWS KMS) encryption key that Amazon S3 uses to encrypt your data.
- No
Encryption Pulumi.Config Aws Native. Kinesis Firehose. Delivery Stream Encryption Configuration No Encryption Config - Disables encryption. For valid values, see the
NoEncryptionConfig
content for the EncryptionConfiguration data type in the Amazon Kinesis Data Firehose API Reference .
- KmsEncryptionConfig DeliveryStreamKmsEncryptionConfig - The AWS Key Management Service (AWS KMS) encryption key that Amazon S3 uses to encrypt your data.
- NoEncryptionConfig DeliveryStreamEncryptionConfigurationNoEncryptionConfig - Disables encryption. For valid values, see the NoEncryptionConfig content for the EncryptionConfiguration data type in the Amazon Kinesis Data Firehose API Reference.
- kmsEncryptionConfig DeliveryStreamKmsEncryptionConfig - The AWS Key Management Service (AWS KMS) encryption key that Amazon S3 uses to encrypt your data.
- noEncryptionConfig DeliveryStreamEncryptionConfigurationNoEncryptionConfig - Disables encryption. For valid values, see the NoEncryptionConfig content for the EncryptionConfiguration data type in the Amazon Kinesis Data Firehose API Reference.
- kmsEncryptionConfig DeliveryStreamKmsEncryptionConfig - The AWS Key Management Service (AWS KMS) encryption key that Amazon S3 uses to encrypt your data.
- noEncryptionConfig DeliveryStreamEncryptionConfigurationNoEncryptionConfig - Disables encryption. For valid values, see the NoEncryptionConfig content for the EncryptionConfiguration data type in the Amazon Kinesis Data Firehose API Reference.
- kms_encryption_config DeliveryStreamKmsEncryptionConfig - The AWS Key Management Service (AWS KMS) encryption key that Amazon S3 uses to encrypt your data.
- no_encryption_config DeliveryStreamEncryptionConfigurationNoEncryptionConfig - Disables encryption. For valid values, see the NoEncryptionConfig content for the EncryptionConfiguration data type in the Amazon Kinesis Data Firehose API Reference.
- kmsEncryptionConfig Property Map - The AWS Key Management Service (AWS KMS) encryption key that Amazon S3 uses to encrypt your data.
- noEncryptionConfig "NoEncryption" - Disables encryption. For valid values, see the NoEncryptionConfig content for the EncryptionConfiguration data type in the Amazon Kinesis Data Firehose API Reference.
DeliveryStreamEncryptionConfigurationInput, DeliveryStreamEncryptionConfigurationInputArgs
- KeyType Pulumi.AwsNative.KinesisFirehose.DeliveryStreamEncryptionConfigurationInputKeyType - Indicates the type of customer master key (CMK) to use for encryption. The default setting is AWS_OWNED_CMK. For more information about CMKs, see Customer Master Keys (CMKs). You can use a CMK of type CUSTOMER_MANAGED_CMK to encrypt up to 500 delivery streams. To encrypt your delivery stream, use symmetric CMKs. Kinesis Data Firehose doesn't support asymmetric CMKs. For information about symmetric and asymmetric CMKs, see About Symmetric and Asymmetric CMKs in the AWS Key Management Service developer guide.
- KeyArn string - If you set KeyType to CUSTOMER_MANAGED_CMK, you must specify the Amazon Resource Name (ARN) of the CMK. If you set KeyType to AWS_OWNED_CMK, Firehose uses a service-account CMK.
- KeyType DeliveryStreamEncryptionConfigurationInputKeyType - Indicates the type of customer master key (CMK) to use for encryption. The default setting is AWS_OWNED_CMK. For more information about CMKs, see Customer Master Keys (CMKs). You can use a CMK of type CUSTOMER_MANAGED_CMK to encrypt up to 500 delivery streams. To encrypt your delivery stream, use symmetric CMKs. Kinesis Data Firehose doesn't support asymmetric CMKs. For information about symmetric and asymmetric CMKs, see About Symmetric and Asymmetric CMKs in the AWS Key Management Service developer guide.
- KeyArn string - If you set KeyType to CUSTOMER_MANAGED_CMK, you must specify the Amazon Resource Name (ARN) of the CMK. If you set KeyType to AWS_OWNED_CMK, Firehose uses a service-account CMK.
- keyType DeliveryStreamEncryptionConfigurationInputKeyType - Indicates the type of customer master key (CMK) to use for encryption. The default setting is AWS_OWNED_CMK. For more information about CMKs, see Customer Master Keys (CMKs). You can use a CMK of type CUSTOMER_MANAGED_CMK to encrypt up to 500 delivery streams. To encrypt your delivery stream, use symmetric CMKs. Kinesis Data Firehose doesn't support asymmetric CMKs. For information about symmetric and asymmetric CMKs, see About Symmetric and Asymmetric CMKs in the AWS Key Management Service developer guide.
- keyArn String - If you set KeyType to CUSTOMER_MANAGED_CMK, you must specify the Amazon Resource Name (ARN) of the CMK. If you set KeyType to AWS_OWNED_CMK, Firehose uses a service-account CMK.
- keyType DeliveryStreamEncryptionConfigurationInputKeyType - Indicates the type of customer master key (CMK) to use for encryption. The default setting is AWS_OWNED_CMK. For more information about CMKs, see Customer Master Keys (CMKs). You can use a CMK of type CUSTOMER_MANAGED_CMK to encrypt up to 500 delivery streams. To encrypt your delivery stream, use symmetric CMKs. Kinesis Data Firehose doesn't support asymmetric CMKs. For information about symmetric and asymmetric CMKs, see About Symmetric and Asymmetric CMKs in the AWS Key Management Service developer guide.
- keyArn string - If you set KeyType to CUSTOMER_MANAGED_CMK, you must specify the Amazon Resource Name (ARN) of the CMK. If you set KeyType to AWS_OWNED_CMK, Firehose uses a service-account CMK.
- key_type DeliveryStreamEncryptionConfigurationInputKeyType - Indicates the type of customer master key (CMK) to use for encryption. The default setting is AWS_OWNED_CMK. For more information about CMKs, see Customer Master Keys (CMKs). You can use a CMK of type CUSTOMER_MANAGED_CMK to encrypt up to 500 delivery streams. To encrypt your delivery stream, use symmetric CMKs. Kinesis Data Firehose doesn't support asymmetric CMKs. For information about symmetric and asymmetric CMKs, see About Symmetric and Asymmetric CMKs in the AWS Key Management Service developer guide.
- key_arn str - If you set KeyType to CUSTOMER_MANAGED_CMK, you must specify the Amazon Resource Name (ARN) of the CMK. If you set KeyType to AWS_OWNED_CMK, Firehose uses a service-account CMK.
- keyType "AWS_OWNED_CMK" | "CUSTOMER_MANAGED_CMK" - Indicates the type of customer master key (CMK) to use for encryption. The default setting is AWS_OWNED_CMK. For more information about CMKs, see Customer Master Keys (CMKs). You can use a CMK of type CUSTOMER_MANAGED_CMK to encrypt up to 500 delivery streams. To encrypt your delivery stream, use symmetric CMKs. Kinesis Data Firehose doesn't support asymmetric CMKs. For information about symmetric and asymmetric CMKs, see About Symmetric and Asymmetric CMKs in the AWS Key Management Service developer guide.
- keyArn String - If you set KeyType to CUSTOMER_MANAGED_CMK, you must specify the Amazon Resource Name (ARN) of the CMK. If you set KeyType to AWS_OWNED_CMK, Firehose uses a service-account CMK.
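The KeyType/KeyArn contract above can be expressed as a small pre-flight check: a CUSTOMER_MANAGED_CMK requires an ARN, while AWS_OWNED_CMK falls back to a service-account CMK. The validation helper below is hypothetical, not part of the provider:

```python
from typing import Optional

VALID_KEY_TYPES = {"AWS_OWNED_CMK", "CUSTOMER_MANAGED_CMK"}

def validate_encryption_input(key_type: str, key_arn: Optional[str] = None) -> Optional[str]:
    """Hypothetical pre-flight check mirroring the documented rules.
    Returns an error message, or None if the input is valid."""
    if key_type not in VALID_KEY_TYPES:
        return f"unknown KeyType: {key_type}"
    if key_type == "CUSTOMER_MANAGED_CMK" and not key_arn:
        return "KeyArn is required when KeyType is CUSTOMER_MANAGED_CMK"
    return None

# AWS_OWNED_CMK needs no ARN; Firehose supplies a service-account CMK.
assert validate_encryption_input("AWS_OWNED_CMK") is None
# CUSTOMER_MANAGED_CMK without an ARN is rejected.
assert validate_encryption_input("CUSTOMER_MANAGED_CMK") is not None
assert validate_encryption_input(
    "CUSTOMER_MANAGED_CMK", "arn:aws:kms:us-east-1:111122223333:key/example"
) is None
```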
DeliveryStreamEncryptionConfigurationInputKeyType, DeliveryStreamEncryptionConfigurationInputKeyTypeArgs
- AwsOwnedCmk - AWS_OWNED_CMK
- CustomerManagedCmk - CUSTOMER_MANAGED_CMK
- DeliveryStreamEncryptionConfigurationInputKeyTypeAwsOwnedCmk - AWS_OWNED_CMK
- DeliveryStreamEncryptionConfigurationInputKeyTypeCustomerManagedCmk - CUSTOMER_MANAGED_CMK
- AwsOwnedCmk - AWS_OWNED_CMK
- CustomerManagedCmk - CUSTOMER_MANAGED_CMK
- AwsOwnedCmk - AWS_OWNED_CMK
- CustomerManagedCmk - CUSTOMER_MANAGED_CMK
- AWS_OWNED_CMK
- AWS_OWNED_CMK
- CUSTOMER_MANAGED_CMK
- CUSTOMER_MANAGED_CMK
- "AWS_OWNED_CMK"
- AWS_OWNED_CMK
- "CUSTOMER_MANAGED_CMK"
- CUSTOMER_MANAGED_CMK
DeliveryStreamEncryptionConfigurationNoEncryptionConfig, DeliveryStreamEncryptionConfigurationNoEncryptionConfigArgs
- NoEncryption - NoEncryption
- DeliveryStreamEncryptionConfigurationNoEncryptionConfigNoEncryption - NoEncryption
- NoEncryption - NoEncryption
- NoEncryption - NoEncryption
- NO_ENCRYPTION
- NoEncryption
- "NoEncryption" - NoEncryption
DeliveryStreamExtendedS3DestinationConfiguration, DeliveryStreamExtendedS3DestinationConfigurationArgs
- BucketArn string - The Amazon Resource Name (ARN) of the Amazon S3 bucket. For constraints, see ExtendedS3DestinationConfiguration in the Amazon Kinesis Data Firehose API Reference.
- RoleArn string - The Amazon Resource Name (ARN) of the AWS credentials. For constraints, see ExtendedS3DestinationConfiguration in the Amazon Kinesis Data Firehose API Reference.
- BufferingHints Pulumi.AwsNative.KinesisFirehose.Inputs.DeliveryStreamBufferingHints - The buffering option.
- CloudWatchLoggingOptions Pulumi.AwsNative.KinesisFirehose.Inputs.DeliveryStreamCloudWatchLoggingOptions - The Amazon CloudWatch logging options for your delivery stream.
- CompressionFormat Pulumi.AwsNative.KinesisFirehose.DeliveryStreamExtendedS3DestinationConfigurationCompressionFormat - The compression format. If no value is specified, the default is UNCOMPRESSED.
- CustomTimeZone string - The time zone you prefer. UTC is the default.
- DataFormatConversionConfiguration Pulumi.AwsNative.KinesisFirehose.Inputs.DeliveryStreamDataFormatConversionConfiguration - The serializer, deserializer, and schema for converting data from the JSON format to the Parquet or ORC format before writing it to Amazon S3.
- DynamicPartitioningConfiguration Pulumi.AwsNative.KinesisFirehose.Inputs.DeliveryStreamDynamicPartitioningConfiguration - The configuration of the dynamic partitioning mechanism that creates targeted data sets from the streaming data by partitioning it based on partition keys.
- EncryptionConfiguration Pulumi.AwsNative.KinesisFirehose.Inputs.DeliveryStreamEncryptionConfiguration - The encryption configuration for the Kinesis Data Firehose delivery stream. The default value is NoEncryption.
- ErrorOutputPrefix string - A prefix that Kinesis Data Firehose evaluates and adds to failed records before writing them to S3. This prefix appears immediately following the bucket name. For information about how to specify this prefix, see Custom Prefixes for Amazon S3 Objects.
- FileExtension string - Specify a file extension. It will override the default file extension.
- Prefix string - The YYYY/MM/DD/HH time format prefix is automatically used for delivered Amazon S3 files. For more information, see ExtendedS3DestinationConfiguration in the Amazon Kinesis Data Firehose API Reference.
- ProcessingConfiguration Pulumi.AwsNative.KinesisFirehose.Inputs.DeliveryStreamProcessingConfiguration - The data processing configuration for the Kinesis Data Firehose delivery stream.
- S3BackupConfiguration Pulumi.AwsNative.KinesisFirehose.Inputs.DeliveryStreamS3DestinationConfiguration - The configuration for backup in Amazon S3.
- S3BackupMode Pulumi.AwsNative.KinesisFirehose.DeliveryStreamExtendedS3DestinationConfigurationS3BackupMode - The Amazon S3 backup mode. After you create a delivery stream, you can update it to enable Amazon S3 backup if it is disabled. If backup is enabled, you can't update the delivery stream to disable it.
- BucketArn string - The Amazon Resource Name (ARN) of the Amazon S3 bucket. For constraints, see ExtendedS3DestinationConfiguration in the Amazon Kinesis Data Firehose API Reference.
- RoleArn string - The Amazon Resource Name (ARN) of the AWS credentials. For constraints, see ExtendedS3DestinationConfiguration in the Amazon Kinesis Data Firehose API Reference.
- BufferingHints DeliveryStreamBufferingHints - The buffering option.
- CloudWatchLoggingOptions DeliveryStreamCloudWatchLoggingOptions - The Amazon CloudWatch logging options for your delivery stream.
- CompressionFormat DeliveryStreamExtendedS3DestinationConfigurationCompressionFormat - The compression format. If no value is specified, the default is UNCOMPRESSED.
- CustomTimeZone string - The time zone you prefer. UTC is the default.
- DataFormatConversionConfiguration DeliveryStreamDataFormatConversionConfiguration - The serializer, deserializer, and schema for converting data from the JSON format to the Parquet or ORC format before writing it to Amazon S3.
- DynamicPartitioningConfiguration DeliveryStreamDynamicPartitioningConfiguration - The configuration of the dynamic partitioning mechanism that creates targeted data sets from the streaming data by partitioning it based on partition keys.
- EncryptionConfiguration DeliveryStreamEncryptionConfiguration - The encryption configuration for the Kinesis Data Firehose delivery stream. The default value is NoEncryption.
- ErrorOutputPrefix string - A prefix that Kinesis Data Firehose evaluates and adds to failed records before writing them to S3. This prefix appears immediately following the bucket name. For information about how to specify this prefix, see Custom Prefixes for Amazon S3 Objects.
- FileExtension string - Specify a file extension. It will override the default file extension.
- Prefix string - The YYYY/MM/DD/HH time format prefix is automatically used for delivered Amazon S3 files. For more information, see ExtendedS3DestinationConfiguration in the Amazon Kinesis Data Firehose API Reference.
- ProcessingConfiguration DeliveryStreamProcessingConfiguration - The data processing configuration for the Kinesis Data Firehose delivery stream.
- S3BackupConfiguration DeliveryStreamS3DestinationConfiguration - The configuration for backup in Amazon S3.
- S3BackupMode DeliveryStreamExtendedS3DestinationConfigurationS3BackupMode - The Amazon S3 backup mode. After you create a delivery stream, you can update it to enable Amazon S3 backup if it is disabled. If backup is enabled, you can't update the delivery stream to disable it.
- bucketArn String - The Amazon Resource Name (ARN) of the Amazon S3 bucket. For constraints, see ExtendedS3DestinationConfiguration in the Amazon Kinesis Data Firehose API Reference.
- roleArn String - The Amazon Resource Name (ARN) of the AWS credentials. For constraints, see ExtendedS3DestinationConfiguration in the Amazon Kinesis Data Firehose API Reference.
- bufferingHints DeliveryStreamBufferingHints - The buffering option.
- cloudWatchLoggingOptions DeliveryStreamCloudWatchLoggingOptions - The Amazon CloudWatch logging options for your delivery stream.
- compressionFormat DeliveryStreamExtendedS3DestinationConfigurationCompressionFormat - The compression format. If no value is specified, the default is UNCOMPRESSED.
- customTimeZone String - The time zone you prefer. UTC is the default.
- dataFormatConversionConfiguration DeliveryStreamDataFormatConversionConfiguration - The serializer, deserializer, and schema for converting data from the JSON format to the Parquet or ORC format before writing it to Amazon S3.
- dynamicPartitioningConfiguration DeliveryStreamDynamicPartitioningConfiguration - The configuration of the dynamic partitioning mechanism that creates targeted data sets from the streaming data by partitioning it based on partition keys.
- encryptionConfiguration DeliveryStreamEncryptionConfiguration - The encryption configuration for the Kinesis Data Firehose delivery stream. The default value is NoEncryption.
- errorOutputPrefix String - A prefix that Kinesis Data Firehose evaluates and adds to failed records before writing them to S3. This prefix appears immediately following the bucket name. For information about how to specify this prefix, see Custom Prefixes for Amazon S3 Objects.
- fileExtension String - Specify a file extension. It will override the default file extension.
- prefix String - The YYYY/MM/DD/HH time format prefix is automatically used for delivered Amazon S3 files. For more information, see ExtendedS3DestinationConfiguration in the Amazon Kinesis Data Firehose API Reference.
- processingConfiguration DeliveryStreamProcessingConfiguration - The data processing configuration for the Kinesis Data Firehose delivery stream.
- s3BackupConfiguration DeliveryStreamS3DestinationConfiguration - The configuration for backup in Amazon S3.
- s3BackupMode DeliveryStreamExtendedS3DestinationConfigurationS3BackupMode - The Amazon S3 backup mode. After you create a delivery stream, you can update it to enable Amazon S3 backup if it is disabled. If backup is enabled, you can't update the delivery stream to disable it.
- bucketArn string - The Amazon Resource Name (ARN) of the Amazon S3 bucket. For constraints, see ExtendedS3DestinationConfiguration in the Amazon Kinesis Data Firehose API Reference.
- roleArn string - The Amazon Resource Name (ARN) of the AWS credentials. For constraints, see ExtendedS3DestinationConfiguration in the Amazon Kinesis Data Firehose API Reference.
- bufferingHints DeliveryStreamBufferingHints - The buffering option.
- cloudWatchLoggingOptions DeliveryStreamCloudWatchLoggingOptions - The Amazon CloudWatch logging options for your delivery stream.
- compressionFormat DeliveryStreamExtendedS3DestinationConfigurationCompressionFormat - The compression format. If no value is specified, the default is UNCOMPRESSED.
- customTimeZone string - The time zone you prefer. UTC is the default.
- dataFormatConversionConfiguration DeliveryStreamDataFormatConversionConfiguration - The serializer, deserializer, and schema for converting data from the JSON format to the Parquet or ORC format before writing it to Amazon S3.
- dynamicPartitioningConfiguration DeliveryStreamDynamicPartitioningConfiguration - The configuration of the dynamic partitioning mechanism that creates targeted data sets from the streaming data by partitioning it based on partition keys.
- encryptionConfiguration DeliveryStreamEncryptionConfiguration - The encryption configuration for the Kinesis Data Firehose delivery stream. The default value is NoEncryption.
- errorOutputPrefix string - A prefix that Kinesis Data Firehose evaluates and adds to failed records before writing them to S3. This prefix appears immediately following the bucket name. For information about how to specify this prefix, see Custom Prefixes for Amazon S3 Objects.
- fileExtension string - Specify a file extension. It will override the default file extension.
- prefix string - The YYYY/MM/DD/HH time format prefix is automatically used for delivered Amazon S3 files. For more information, see ExtendedS3DestinationConfiguration in the Amazon Kinesis Data Firehose API Reference.
- processingConfiguration DeliveryStreamProcessingConfiguration - The data processing configuration for the Kinesis Data Firehose delivery stream.
- s3BackupConfiguration DeliveryStreamS3DestinationConfiguration - The configuration for backup in Amazon S3.
- s3BackupMode DeliveryStreamExtendedS3DestinationConfigurationS3BackupMode - The Amazon S3 backup mode. After you create a delivery stream, you can update it to enable Amazon S3 backup if it is disabled. If backup is enabled, you can't update the delivery stream to disable it.
- bucket_arn str - The Amazon Resource Name (ARN) of the Amazon S3 bucket. For constraints, see ExtendedS3DestinationConfiguration in the Amazon Kinesis Data Firehose API Reference.
- role_arn str - The Amazon Resource Name (ARN) of the AWS credentials. For constraints, see ExtendedS3DestinationConfiguration in the Amazon Kinesis Data Firehose API Reference.
- buffering_hints DeliveryStreamBufferingHints - The buffering option.
- cloud_watch_logging_options DeliveryStreamCloudWatchLoggingOptions - The Amazon CloudWatch logging options for your delivery stream.
- compression_format DeliveryStreamExtendedS3DestinationConfigurationCompressionFormat - The compression format. If no value is specified, the default is UNCOMPRESSED.
- custom_time_zone str - The time zone you prefer. UTC is the default.
- data_format_conversion_configuration DeliveryStreamDataFormatConversionConfiguration - The serializer, deserializer, and schema for converting data from the JSON format to the Parquet or ORC format before writing it to Amazon S3.
- dynamic_partitioning_configuration DeliveryStreamDynamicPartitioningConfiguration - The configuration of the dynamic partitioning mechanism that creates targeted data sets from the streaming data by partitioning it based on partition keys.
- encryption_configuration DeliveryStreamEncryptionConfiguration - The encryption configuration for the Kinesis Data Firehose delivery stream. The default value is NoEncryption.
- error_output_prefix str - A prefix that Kinesis Data Firehose evaluates and adds to failed records before writing them to S3. This prefix appears immediately following the bucket name. For information about how to specify this prefix, see Custom Prefixes for Amazon S3 Objects.
- file_extension str - Specify a file extension. It will override the default file extension.
- prefix str - The YYYY/MM/DD/HH time format prefix is automatically used for delivered Amazon S3 files. For more information, see ExtendedS3DestinationConfiguration in the Amazon Kinesis Data Firehose API Reference.
- processing_configuration DeliveryStreamProcessingConfiguration - The data processing configuration for the Kinesis Data Firehose delivery stream.
- s3_backup_configuration DeliveryStreamS3DestinationConfiguration - The configuration for backup in Amazon S3.
- s3_backup_mode DeliveryStreamExtendedS3DestinationConfigurationS3BackupMode - The Amazon S3 backup mode. After you create a delivery stream, you can update it to enable Amazon S3 backup if it is disabled. If backup is enabled, you can't update the delivery stream to disable it.
- bucketArn String - The Amazon Resource Name (ARN) of the Amazon S3 bucket. For constraints, see ExtendedS3DestinationConfiguration in the Amazon Kinesis Data Firehose API Reference.
- roleArn String - The Amazon Resource Name (ARN) of the AWS credentials. For constraints, see ExtendedS3DestinationConfiguration in the Amazon Kinesis Data Firehose API Reference.
- bufferingHints Property Map - The buffering option.
- cloudWatchLoggingOptions Property Map - The Amazon CloudWatch logging options for your delivery stream.
- compressionFormat "UNCOMPRESSED" | "GZIP" | "ZIP" | "Snappy" | "HADOOP_SNAPPY" - The compression format. If no value is specified, the default is UNCOMPRESSED.
- customTimeZone String - The time zone you prefer. UTC is the default.
- dataFormatConversionConfiguration Property Map - The serializer, deserializer, and schema for converting data from the JSON format to the Parquet or ORC format before writing it to Amazon S3.
- dynamicPartitioningConfiguration Property Map - The configuration of the dynamic partitioning mechanism that creates targeted data sets from the streaming data by partitioning it based on partition keys.
- encryptionConfiguration Property Map - The encryption configuration for the Kinesis Data Firehose delivery stream. The default value is NoEncryption.
- errorOutputPrefix String - A prefix that Kinesis Data Firehose evaluates and adds to failed records before writing them to S3. This prefix appears immediately following the bucket name. For information about how to specify this prefix, see Custom Prefixes for Amazon S3 Objects.
- fileExtension String - Specify a file extension. It will override the default file extension.
- prefix String - The YYYY/MM/DD/HH time format prefix is automatically used for delivered Amazon S3 files. For more information, see ExtendedS3DestinationConfiguration in the Amazon Kinesis Data Firehose API Reference.
- processingConfiguration Property Map - The data processing configuration for the Kinesis Data Firehose delivery stream.
- s3BackupConfiguration Property Map - The configuration for backup in Amazon S3.
- s3BackupMode "Disabled" | "Enabled" - The Amazon S3 backup mode. After you create a delivery stream, you can update it to enable Amazon S3 backup if it is disabled. If backup is enabled, you can't update the delivery stream to disable it.
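Putting the required and common optional arguments together, an extended S3 destination block might be assembled like the plain-dict sketch below (snake_case names as in the Python listing above; the ARNs are placeholders, and in a real program this dict's contents would be passed to the `extended_s3_destination_configuration` argument of the resource):

```python
# Plain-dict sketch of extended_s3_destination_configuration.
# bucket_arn and role_arn are the only required fields; everything else
# shown is optional. ARN values are placeholders, not real resources.
extended_s3 = {
    "bucket_arn": "arn:aws:s3:::example-firehose-bucket",  # required
    "role_arn": "arn:aws:iam::111122223333:role/example",  # required
    "compression_format": "GZIP",      # default is UNCOMPRESSED
    "prefix": "data/",                 # YYYY/MM/DD/HH is appended by default
    "error_output_prefix": "errors/",  # where failed records land
    "s3_backup_mode": "Disabled",      # can later be Enabled, but not reversed
}

required = {"bucket_arn", "role_arn"}
assert required <= extended_s3.keys()
assert extended_s3["compression_format"] in {
    "UNCOMPRESSED", "GZIP", "ZIP", "Snappy", "HADOOP_SNAPPY"
}
```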
DeliveryStreamExtendedS3DestinationConfigurationCompressionFormat, DeliveryStreamExtendedS3DestinationConfigurationCompressionFormatArgs
- Uncompressed
- UNCOMPRESSED
- Gzip
- GZIP
- Zip
- ZIP
- Snappy
- Snappy
- HadoopSnappy - HADOOP_SNAPPY
- DeliveryStreamExtendedS3DestinationConfigurationCompressionFormatUncompressed - UNCOMPRESSED
- DeliveryStreamExtendedS3DestinationConfigurationCompressionFormatGzip - GZIP
- DeliveryStreamExtendedS3DestinationConfigurationCompressionFormatZip - ZIP
- DeliveryStreamExtendedS3DestinationConfigurationCompressionFormatSnappy - Snappy
- DeliveryStreamExtendedS3DestinationConfigurationCompressionFormatHadoopSnappy - HADOOP_SNAPPY
- Uncompressed
- UNCOMPRESSED
- Gzip
- GZIP
- Zip
- ZIP
- Snappy
- Snappy
- HadoopSnappy - HADOOP_SNAPPY
- Uncompressed
- UNCOMPRESSED
- Gzip
- GZIP
- Zip
- ZIP
- Snappy
- Snappy
- HadoopSnappy - HADOOP_SNAPPY
- UNCOMPRESSED
- UNCOMPRESSED
- GZIP
- GZIP
- ZIP
- ZIP
- SNAPPY
- Snappy
- HADOOP_SNAPPY
- HADOOP_SNAPPY
- "UNCOMPRESSED"
- UNCOMPRESSED
- "GZIP"
- GZIP
- "ZIP"
- ZIP
- "Snappy"
- Snappy
- "HADOOP_SNAPPY"
- HADOOP_SNAPPY
DeliveryStreamExtendedS3DestinationConfigurationS3BackupMode, DeliveryStreamExtendedS3DestinationConfigurationS3BackupModeArgs
- Disabled
- Disabled
- Enabled
- Enabled
- DeliveryStreamExtendedS3DestinationConfigurationS3BackupModeDisabled - Disabled
- DeliveryStreamExtendedS3DestinationConfigurationS3BackupModeEnabled - Enabled
- Disabled
- Disabled
- Enabled
- Enabled
- Disabled
- Disabled
- Enabled
- Enabled
- DISABLED
- Disabled
- ENABLED
- Enabled
- "Disabled"
- Disabled
- "Enabled"
- Enabled
DeliveryStreamHiveJsonSerDe, DeliveryStreamHiveJsonSerDeArgs
- TimestampFormats List&lt;string&gt; - Indicates how you want Firehose to parse the date and timestamps that may be present in your input data JSON. To specify these format strings, follow the pattern syntax of JodaTime's DateTimeFormat format strings. For more information, see Class DateTimeFormat. You can also use the special value millis to parse timestamps in epoch milliseconds. If you don't specify a format, Firehose uses java.sql.Timestamp::valueOf by default.
- TimestampFormats []string - Indicates how you want Firehose to parse the date and timestamps that may be present in your input data JSON. To specify these format strings, follow the pattern syntax of JodaTime's DateTimeFormat format strings. For more information, see Class DateTimeFormat. You can also use the special value millis to parse timestamps in epoch milliseconds. If you don't specify a format, Firehose uses java.sql.Timestamp::valueOf by default.
- timestampFormats List&lt;String&gt; - Indicates how you want Firehose to parse the date and timestamps that may be present in your input data JSON. To specify these format strings, follow the pattern syntax of JodaTime's DateTimeFormat format strings. For more information, see Class DateTimeFormat. You can also use the special value millis to parse timestamps in epoch milliseconds. If you don't specify a format, Firehose uses java.sql.Timestamp::valueOf by default.
- timestampFormats string[] - Indicates how you want Firehose to parse the date and timestamps that may be present in your input data JSON. To specify these format strings, follow the pattern syntax of JodaTime's DateTimeFormat format strings. For more information, see Class DateTimeFormat. You can also use the special value millis to parse timestamps in epoch milliseconds. If you don't specify a format, Firehose uses java.sql.Timestamp::valueOf by default.
- timestamp_formats Sequence[str] - Indicates how you want Firehose to parse the date and timestamps that may be present in your input data JSON. To specify these format strings, follow the pattern syntax of JodaTime's DateTimeFormat format strings. For more information, see Class DateTimeFormat. You can also use the special value millis to parse timestamps in epoch milliseconds. If you don't specify a format, Firehose uses java.sql.Timestamp::valueOf by default.
- timestampFormats List&lt;String&gt; - Indicates how you want Firehose to parse the date and timestamps that may be present in your input data JSON. To specify these format strings, follow the pattern syntax of JodaTime's DateTimeFormat format strings. For more information, see Class DateTimeFormat. You can also use the special value millis to parse timestamps in epoch milliseconds. If you don't specify a format, Firehose uses java.sql.Timestamp::valueOf by default.
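The two parsing modes above, the special millis value versus an explicit format string, can be illustrated with a small sketch. Python's strptime codes stand in for Joda's DateTimeFormat patterns here, so the format token is an illustrative substitute, not what Firehose itself accepts:

```python
from datetime import datetime, timezone

def parse_timestamp(value: str, fmt: str) -> datetime:
    """Sketch of the two modes: 'millis' treats the value as epoch
    milliseconds; anything else is used as a format string
    (strptime stands in for Joda's DateTimeFormat)."""
    if fmt == "millis":
        return datetime.fromtimestamp(int(value) / 1000, tz=timezone.utc)
    return datetime.strptime(value, fmt).replace(tzinfo=timezone.utc)

# Epoch-milliseconds mode via the special 'millis' value.
assert parse_timestamp("0", "millis") == datetime(1970, 1, 1, tzinfo=timezone.utc)
# Explicit format-string mode.
assert parse_timestamp("2024-01-02 03:04:05", "%Y-%m-%d %H:%M:%S").hour == 3
```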
DeliveryStreamHttpEndpointCommonAttribute, DeliveryStreamHttpEndpointCommonAttributeArgs
- AttributeName string - The name of the HTTP endpoint common attribute.
- AttributeValue string - The value of the HTTP endpoint common attribute.
- AttributeName string - The name of the HTTP endpoint common attribute.
- AttributeValue string - The value of the HTTP endpoint common attribute.
- attributeName String - The name of the HTTP endpoint common attribute.
- attributeValue String - The value of the HTTP endpoint common attribute.
- attributeName string - The name of the HTTP endpoint common attribute.
- attributeValue string - The value of the HTTP endpoint common attribute.
- attribute_name str - The name of the HTTP endpoint common attribute.
- attribute_value str - The value of the HTTP endpoint common attribute.
- attributeName String - The name of the HTTP endpoint common attribute.
- attributeValue String - The value of the HTTP endpoint common attribute.
DeliveryStreamHttpEndpointConfiguration, DeliveryStreamHttpEndpointConfigurationArgs
- url str
- The URL of the HTTP endpoint selected as the destination.
- access_key str - The access key required for Kinesis Firehose to authenticate with the HTTP endpoint selected as the destination.
- name str
- The name of the HTTP endpoint selected as the destination.
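The endpoint configuration above composes with the common attributes from the previous section: the endpoint says where and how to authenticate, while common attributes are name/value pairs attached to each request. A plain-dict sketch (URL and access key are placeholders):

```python
# Plain-dict sketch of an HTTP endpoint configuration plus the
# request-configuration common attributes sent with each delivery.
# The URL and access key are placeholders, not working credentials.
endpoint_configuration = {
    "url": "https://example.com/firehose-ingest",  # required
    "name": "example-endpoint",                    # optional
    "access_key": "EXAMPLE_ACCESS_KEY",            # optional
}
common_attributes = [
    {"attribute_name": "environment", "attribute_value": "staging"},
    {"attribute_name": "team", "attribute_value": "data-platform"},
]

# 'url' is the only required field of the endpoint configuration.
assert "url" in endpoint_configuration
# Each common attribute carries exactly a name and a value.
assert all(
    set(attr) == {"attribute_name", "attribute_value"} for attr in common_attributes
)
```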
DeliveryStreamHttpEndpointDestinationConfiguration, DeliveryStreamHttpEndpointDestinationConfigurationArgs
- EndpointConfiguration Pulumi.AwsNative.KinesisFirehose.Inputs.DeliveryStreamHttpEndpointConfiguration - The configuration of the HTTP endpoint selected as the destination.
- S3Configuration Pulumi.AwsNative.KinesisFirehose.Inputs.DeliveryStreamS3DestinationConfiguration - Describes the configuration of a destination in Amazon S3.
- BufferingHints Pulumi.AwsNative.KinesisFirehose.Inputs.DeliveryStreamBufferingHints - The buffering options that can be used before data is delivered to the specified destination. Kinesis Data Firehose treats these options as hints, and it might choose to use more optimal values. The SizeInMBs and IntervalInSeconds parameters are optional. However, if you specify a value for one of them, you must also provide a value for the other.
- CloudWatchLoggingOptions Pulumi.AwsNative.KinesisFirehose.Inputs.DeliveryStreamCloudWatchLoggingOptions - Describes the Amazon CloudWatch logging options for your delivery stream.
- ProcessingConfiguration Pulumi.AwsNative.KinesisFirehose.Inputs.DeliveryStreamProcessingConfiguration - Describes the data processing configuration.
- RequestConfiguration Pulumi.AwsNative.KinesisFirehose.Inputs.DeliveryStreamHttpEndpointRequestConfiguration - The configuration of the request sent to the HTTP endpoint specified as the destination.
- RetryOptions Pulumi.AwsNative.KinesisFirehose.Inputs.DeliveryStreamRetryOptions - Describes the retry behavior in case Kinesis Data Firehose is unable to deliver data to the specified HTTP endpoint destination, or if it doesn't receive a valid acknowledgment of receipt from the specified HTTP endpoint destination.
- RoleArn string - Kinesis Data Firehose uses this IAM role for all the permissions that the delivery stream needs.
- S3BackupMode string - Describes the S3 bucket backup options for the data that Kinesis Data Firehose delivers to the HTTP endpoint destination. You can back up all documents (AllData) or only the documents that Kinesis Data Firehose could not deliver to the specified HTTP endpoint destination (FailedDataOnly).
- SecretsManagerConfiguration Pulumi.AwsNative.KinesisFirehose.Inputs.DeliveryStreamSecretsManagerConfiguration - The configuration that defines how you access secrets for HTTP Endpoint destination.
- Endpoint
Configuration DeliveryStream Http Endpoint Configuration - The configuration of the HTTP endpoint selected as the destination.
- S3Configuration
Delivery
Stream S3Destination Configuration - Describes the configuration of a destination in Amazon S3.
- Buffering
Hints DeliveryStream Buffering Hints - The buffering options that can be used before data is delivered to the specified destination. Kinesis Data Firehose treats these options as hints, and it might choose to use more optimal values. The SizeInMBs and IntervalInSeconds parameters are optional. However, if you specify a value for one of them, you must also provide a value for the other.
- Cloud
Watch DeliveryLogging Options Stream Cloud Watch Logging Options - Describes the Amazon CloudWatch logging options for your delivery stream.
- Processing
Configuration DeliveryStream Processing Configuration - Describes the data processing configuration.
- Request
Configuration DeliveryStream Http Endpoint Request Configuration - The configuration of the request sent to the HTTP endpoint specified as the destination.
- Retry
Options DeliveryStream Retry Options - Describes the retry behavior in case Kinesis Data Firehose is unable to deliver data to the specified HTTP endpoint destination, or if it doesn't receive a valid acknowledgment of receipt from the specified HTTP endpoint destination.
- Role
Arn string - Kinesis Data Firehose uses this IAM role for all the permissions that the delivery stream needs.
- S3Backup
Mode string - Describes the S3 bucket backup options for the data that Kinesis Data Firehose delivers to the HTTP endpoint destination. You can back up all documents (AllData) or only the documents that Kinesis Data Firehose could not deliver to the specified HTTP endpoint destination (FailedDataOnly).
- Secrets
Manager DeliveryConfiguration Stream Secrets Manager Configuration - The configuration that defines how you access secrets for HTTP Endpoint destination.
- endpoint
Configuration DeliveryStream Http Endpoint Configuration - The configuration of the HTTP endpoint selected as the destination.
- s3Configuration
Delivery
Stream S3Destination Configuration - Describes the configuration of a destination in Amazon S3.
- buffering
Hints DeliveryStream Buffering Hints - The buffering options that can be used before data is delivered to the specified destination. Kinesis Data Firehose treats these options as hints, and it might choose to use more optimal values. The SizeInMBs and IntervalInSeconds parameters are optional. However, if you specify a value for one of them, you must also provide a value for the other.
- cloud
Watch DeliveryLogging Options Stream Cloud Watch Logging Options - Describes the Amazon CloudWatch logging options for your delivery stream.
- processing
Configuration DeliveryStream Processing Configuration - Describes the data processing configuration.
- request
Configuration DeliveryStream Http Endpoint Request Configuration - The configuration of the request sent to the HTTP endpoint specified as the destination.
- retry
Options DeliveryStream Retry Options - Describes the retry behavior in case Kinesis Data Firehose is unable to deliver data to the specified HTTP endpoint destination, or if it doesn't receive a valid acknowledgment of receipt from the specified HTTP endpoint destination.
- role
Arn String - Kinesis Data Firehose uses this IAM role for all the permissions that the delivery stream needs.
- s3Backup
Mode String - Describes the S3 bucket backup options for the data that Kinesis Data Firehose delivers to the HTTP endpoint destination. You can back up all documents (AllData) or only the documents that Kinesis Data Firehose could not deliver to the specified HTTP endpoint destination (FailedDataOnly).
- secrets
Manager DeliveryConfiguration Stream Secrets Manager Configuration - The configuration that defines how you access secrets for HTTP Endpoint destination.
- endpoint
Configuration DeliveryStream Http Endpoint Configuration - The configuration of the HTTP endpoint selected as the destination.
- s3Configuration
Delivery
Stream S3Destination Configuration - Describes the configuration of a destination in Amazon S3.
- buffering
Hints DeliveryStream Buffering Hints - The buffering options that can be used before data is delivered to the specified destination. Kinesis Data Firehose treats these options as hints, and it might choose to use more optimal values. The SizeInMBs and IntervalInSeconds parameters are optional. However, if you specify a value for one of them, you must also provide a value for the other.
- cloud
Watch DeliveryLogging Options Stream Cloud Watch Logging Options - Describes the Amazon CloudWatch logging options for your delivery stream.
- processing
Configuration DeliveryStream Processing Configuration - Describes the data processing configuration.
- request
Configuration DeliveryStream Http Endpoint Request Configuration - The configuration of the request sent to the HTTP endpoint specified as the destination.
- retry
Options DeliveryStream Retry Options - Describes the retry behavior in case Kinesis Data Firehose is unable to deliver data to the specified HTTP endpoint destination, or if it doesn't receive a valid acknowledgment of receipt from the specified HTTP endpoint destination.
- role
Arn string - Kinesis Data Firehose uses this IAM role for all the permissions that the delivery stream needs.
- s3Backup
Mode string - Describes the S3 bucket backup options for the data that Kinesis Data Firehose delivers to the HTTP endpoint destination. You can back up all documents (AllData) or only the documents that Kinesis Data Firehose could not deliver to the specified HTTP endpoint destination (FailedDataOnly).
- secrets
Manager DeliveryConfiguration Stream Secrets Manager Configuration - The configuration that defines how you access secrets for HTTP Endpoint destination.
- endpoint_
configuration DeliveryStream Http Endpoint Configuration - The configuration of the HTTP endpoint selected as the destination.
- s3_
configuration DeliveryStream S3Destination Configuration - Describes the configuration of a destination in Amazon S3.
- buffering_
hints DeliveryStream Buffering Hints - The buffering options that can be used before data is delivered to the specified destination. Kinesis Data Firehose treats these options as hints, and it might choose to use more optimal values. The SizeInMBs and IntervalInSeconds parameters are optional. However, if you specify a value for one of them, you must also provide a value for the other.
- cloud_
watch_ Deliverylogging_ options Stream Cloud Watch Logging Options - Describes the Amazon CloudWatch logging options for your delivery stream.
- processing_
configuration DeliveryStream Processing Configuration - Describes the data processing configuration.
- request_
configuration DeliveryStream Http Endpoint Request Configuration - The configuration of the request sent to the HTTP endpoint specified as the destination.
- retry_
options DeliveryStream Retry Options - Describes the retry behavior in case Kinesis Data Firehose is unable to deliver data to the specified HTTP endpoint destination, or if it doesn't receive a valid acknowledgment of receipt from the specified HTTP endpoint destination.
- role_
arn str - Kinesis Data Firehose uses this IAM role for all the permissions that the delivery stream needs.
- s3_
backup_ strmode - Describes the S3 bucket backup options for the data that Kinesis Data Firehose delivers to the HTTP endpoint destination. You can back up all documents (AllData) or only the documents that Kinesis Data Firehose could not deliver to the specified HTTP endpoint destination (FailedDataOnly).
- secrets_
manager_ Deliveryconfiguration Stream Secrets Manager Configuration - The configuration that defines how you access secrets for HTTP Endpoint destination.
- endpoint
Configuration Property Map - The configuration of the HTTP endpoint selected as the destination.
- s3Configuration Property Map
- Describes the configuration of a destination in Amazon S3.
- buffering
Hints Property Map - The buffering options that can be used before data is delivered to the specified destination. Kinesis Data Firehose treats these options as hints, and it might choose to use more optimal values. The SizeInMBs and IntervalInSeconds parameters are optional. However, if you specify a value for one of them, you must also provide a value for the other.
- cloud
Watch Property MapLogging Options - Describes the Amazon CloudWatch logging options for your delivery stream.
- processing
Configuration Property Map - Describes the data processing configuration.
- request
Configuration Property Map - The configuration of the request sent to the HTTP endpoint specified as the destination.
- retry
Options Property Map - Describes the retry behavior in case Kinesis Data Firehose is unable to deliver data to the specified HTTP endpoint destination, or if it doesn't receive a valid acknowledgment of receipt from the specified HTTP endpoint destination.
- role
Arn String - Kinesis Data Firehose uses this IAM role for all the permissions that the delivery stream needs.
- s3Backup
Mode String - Describes the S3 bucket backup options for the data that Kinesis Data Firehose delivers to the HTTP endpoint destination. You can back up all documents (AllData) or only the documents that Kinesis Data Firehose could not deliver to the specified HTTP endpoint destination (FailedDataOnly).
- secrets
Manager Property MapConfiguration - The configuration that defines how you access secrets for HTTP Endpoint destination.
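The BufferingHints constraint above (SizeInMBs and IntervalInSeconds are optional, but setting one requires the other) can be checked before a deployment ever runs. The helper below is an illustrative sketch, not part of any Pulumi SDK:

```python
def buffering_hints(size_in_mbs=None, interval_in_seconds=None):
    """Hypothetical pre-flight check mirroring the documented rule:
    both fields are optional, but specifying one requires the other."""
    if (size_in_mbs is None) != (interval_in_seconds is None):
        raise ValueError("SizeInMBs and IntervalInSeconds must be set together")
    return {"sizeInMBs": size_in_mbs, "intervalInSeconds": interval_in_seconds}

# Both set: fine. Neither set: also fine (Firehose picks defaults).
hints = buffering_hints(size_in_mbs=5, interval_in_seconds=300)
```

Failing fast locally saves a round trip, since the service would otherwise reject the half-specified hints at deploy time.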
DeliveryStreamHttpEndpointRequestConfiguration, DeliveryStreamHttpEndpointRequestConfigurationArgs
C#
- CommonAttributes List<Pulumi.AwsNative.KinesisFirehose.Inputs.DeliveryStreamHttpEndpointCommonAttribute> - Describes the metadata sent to the HTTP endpoint destination.
- ContentEncoding Pulumi.AwsNative.KinesisFirehose.DeliveryStreamHttpEndpointRequestConfigurationContentEncoding - Kinesis Data Firehose uses the content encoding to compress the body of a request before sending the request to the destination. For more information, see Content-Encoding in MDN Web Docs, the official Mozilla documentation.

Go
- CommonAttributes []DeliveryStreamHttpEndpointCommonAttribute - Describes the metadata sent to the HTTP endpoint destination.
- ContentEncoding DeliveryStreamHttpEndpointRequestConfigurationContentEncoding - Kinesis Data Firehose uses the content encoding to compress the body of a request before sending the request to the destination. For more information, see Content-Encoding in MDN Web Docs, the official Mozilla documentation.

Java
- commonAttributes List<DeliveryStreamHttpEndpointCommonAttribute> - Describes the metadata sent to the HTTP endpoint destination.
- contentEncoding DeliveryStreamHttpEndpointRequestConfigurationContentEncoding - Kinesis Data Firehose uses the content encoding to compress the body of a request before sending the request to the destination. For more information, see Content-Encoding in MDN Web Docs, the official Mozilla documentation.

TypeScript
- commonAttributes DeliveryStreamHttpEndpointCommonAttribute[] - Describes the metadata sent to the HTTP endpoint destination.
- contentEncoding DeliveryStreamHttpEndpointRequestConfigurationContentEncoding - Kinesis Data Firehose uses the content encoding to compress the body of a request before sending the request to the destination. For more information, see Content-Encoding in MDN Web Docs, the official Mozilla documentation.

Python
- common_attributes Sequence[DeliveryStreamHttpEndpointCommonAttribute] - Describes the metadata sent to the HTTP endpoint destination.
- content_encoding DeliveryStreamHttpEndpointRequestConfigurationContentEncoding - Kinesis Data Firehose uses the content encoding to compress the body of a request before sending the request to the destination. For more information, see Content-Encoding in MDN Web Docs, the official Mozilla documentation.

YAML
- commonAttributes List<Property Map> - Describes the metadata sent to the HTTP endpoint destination.
- contentEncoding "NONE" | "GZIP" - Kinesis Data Firehose uses the content encoding to compress the body of a request before sending the request to the destination. For more information, see Content-Encoding in MDN Web Docs, the official Mozilla documentation.
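As a rough sketch of what a ContentEncoding of GZIP means on the wire, the snippet below gzip-compresses a JSON request body the way Firehose would before delivery; the record payload is invented for illustration:

```python
import gzip
import json

# Hypothetical record payload. With ContentEncoding set to GZIP, Firehose
# compresses the HTTP request body like this before sending it.
record = {"ticker": "AMZN", "price": 181.5}
body = json.dumps(record).encode("utf-8")
compressed = gzip.compress(body)

# The receiving endpoint reverses the encoding to recover the original body.
assert gzip.decompress(compressed) == body
```

With NONE, the body is sent uncompressed; GZIP trades a little CPU at both ends for smaller requests.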
DeliveryStreamHttpEndpointRequestConfigurationContentEncoding, DeliveryStreamHttpEndpointRequestConfigurationContentEncodingArgs
C#
- None - NONE
- Gzip - GZIP

Go
- DeliveryStreamHttpEndpointRequestConfigurationContentEncodingNone - NONE
- DeliveryStreamHttpEndpointRequestConfigurationContentEncodingGzip - GZIP

Java
- None - NONE
- Gzip - GZIP

TypeScript
- None - NONE
- Gzip - GZIP

Python
- NONE - NONE
- GZIP - GZIP

YAML
- "NONE" - NONE
- "GZIP" - GZIP
DeliveryStreamIcebergDestinationConfiguration, DeliveryStreamIcebergDestinationConfigurationArgs
C#
- CatalogConfiguration Pulumi.AwsNative.KinesisFirehose.Inputs.DeliveryStreamCatalogConfiguration - Configuration describing where the destination Apache Iceberg Tables are persisted. Amazon Data Firehose is in preview release and is subject to change.
- RoleArn string - The Amazon Resource Name (ARN) of the IAM role to be assumed by Firehose for calling Apache Iceberg Tables. Amazon Data Firehose is in preview release and is subject to change.
- S3Configuration Pulumi.AwsNative.KinesisFirehose.Inputs.DeliveryStreamS3DestinationConfiguration
- BufferingHints Pulumi.AwsNative.KinesisFirehose.Inputs.DeliveryStreamBufferingHints
- CloudWatchLoggingOptions Pulumi.AwsNative.KinesisFirehose.Inputs.DeliveryStreamCloudWatchLoggingOptions
- DestinationTableConfigurationList List<Pulumi.AwsNative.KinesisFirehose.Inputs.DeliveryStreamDestinationTableConfiguration> - Provides a list of DestinationTableConfigurations which Firehose uses to deliver data to Apache Iceberg Tables. Firehose will write data with insert if a table-specific configuration is not provided here. Amazon Data Firehose is in preview release and is subject to change.
- ProcessingConfiguration Pulumi.AwsNative.KinesisFirehose.Inputs.DeliveryStreamProcessingConfiguration
- RetryOptions Pulumi.AwsNative.KinesisFirehose.Inputs.DeliveryStreamRetryOptions
- S3BackupMode Pulumi.AwsNative.KinesisFirehose.DeliveryStreamIcebergDestinationConfigurations3BackupMode - Describes how Firehose will back up records. Currently, S3 backup only supports FailedDataOnly for preview. Amazon Data Firehose is in preview release and is subject to change.

Go
- CatalogConfiguration DeliveryStreamCatalogConfiguration - Configuration describing where the destination Apache Iceberg Tables are persisted. Amazon Data Firehose is in preview release and is subject to change.
- RoleArn string - The Amazon Resource Name (ARN) of the IAM role to be assumed by Firehose for calling Apache Iceberg Tables. Amazon Data Firehose is in preview release and is subject to change.
- S3Configuration DeliveryStreamS3DestinationConfiguration
- BufferingHints DeliveryStreamBufferingHints
- CloudWatchLoggingOptions DeliveryStreamCloudWatchLoggingOptions
- DestinationTableConfigurationList []DeliveryStreamDestinationTableConfiguration - Provides a list of DestinationTableConfigurations which Firehose uses to deliver data to Apache Iceberg Tables. Firehose will write data with insert if a table-specific configuration is not provided here. Amazon Data Firehose is in preview release and is subject to change.
- ProcessingConfiguration DeliveryStreamProcessingConfiguration
- RetryOptions DeliveryStreamRetryOptions
- S3BackupMode DeliveryStreamIcebergDestinationConfigurations3BackupMode - Describes how Firehose will back up records. Currently, S3 backup only supports FailedDataOnly for preview. Amazon Data Firehose is in preview release and is subject to change.

Java
- catalogConfiguration DeliveryStreamCatalogConfiguration - Configuration describing where the destination Apache Iceberg Tables are persisted. Amazon Data Firehose is in preview release and is subject to change.
- roleArn String - The Amazon Resource Name (ARN) of the IAM role to be assumed by Firehose for calling Apache Iceberg Tables. Amazon Data Firehose is in preview release and is subject to change.
- s3Configuration DeliveryStreamS3DestinationConfiguration
- bufferingHints DeliveryStreamBufferingHints
- cloudWatchLoggingOptions DeliveryStreamCloudWatchLoggingOptions
- destinationTableConfigurationList List<DeliveryStreamDestinationTableConfiguration> - Provides a list of DestinationTableConfigurations which Firehose uses to deliver data to Apache Iceberg Tables. Firehose will write data with insert if a table-specific configuration is not provided here. Amazon Data Firehose is in preview release and is subject to change.
- processingConfiguration DeliveryStreamProcessingConfiguration
- retryOptions DeliveryStreamRetryOptions
- s3BackupMode DeliveryStreamIcebergDestinationConfigurations3BackupMode - Describes how Firehose will back up records. Currently, S3 backup only supports FailedDataOnly for preview. Amazon Data Firehose is in preview release and is subject to change.

TypeScript
- catalogConfiguration DeliveryStreamCatalogConfiguration - Configuration describing where the destination Apache Iceberg Tables are persisted. Amazon Data Firehose is in preview release and is subject to change.
- roleArn string - The Amazon Resource Name (ARN) of the IAM role to be assumed by Firehose for calling Apache Iceberg Tables. Amazon Data Firehose is in preview release and is subject to change.
- s3Configuration DeliveryStreamS3DestinationConfiguration
- bufferingHints DeliveryStreamBufferingHints
- cloudWatchLoggingOptions DeliveryStreamCloudWatchLoggingOptions
- destinationTableConfigurationList DeliveryStreamDestinationTableConfiguration[] - Provides a list of DestinationTableConfigurations which Firehose uses to deliver data to Apache Iceberg Tables. Firehose will write data with insert if a table-specific configuration is not provided here. Amazon Data Firehose is in preview release and is subject to change.
- processingConfiguration DeliveryStreamProcessingConfiguration
- retryOptions DeliveryStreamRetryOptions
- s3BackupMode DeliveryStreamIcebergDestinationConfigurations3BackupMode - Describes how Firehose will back up records. Currently, S3 backup only supports FailedDataOnly for preview. Amazon Data Firehose is in preview release and is subject to change.

Python
- catalog_configuration DeliveryStreamCatalogConfiguration - Configuration describing where the destination Apache Iceberg Tables are persisted. Amazon Data Firehose is in preview release and is subject to change.
- role_arn str - The Amazon Resource Name (ARN) of the IAM role to be assumed by Firehose for calling Apache Iceberg Tables. Amazon Data Firehose is in preview release and is subject to change.
- s3_configuration DeliveryStreamS3DestinationConfiguration
- buffering_hints DeliveryStreamBufferingHints
- cloud_watch_logging_options DeliveryStreamCloudWatchLoggingOptions
- destination_table_configuration_list Sequence[DeliveryStreamDestinationTableConfiguration] - Provides a list of DestinationTableConfigurations which Firehose uses to deliver data to Apache Iceberg Tables. Firehose will write data with insert if a table-specific configuration is not provided here. Amazon Data Firehose is in preview release and is subject to change.
- processing_configuration DeliveryStreamProcessingConfiguration
- retry_options DeliveryStreamRetryOptions
- s3_backup_mode DeliveryStreamIcebergDestinationConfigurations3BackupMode - Describes how Firehose will back up records. Currently, S3 backup only supports FailedDataOnly for preview. Amazon Data Firehose is in preview release and is subject to change.

YAML
- catalogConfiguration Property Map - Configuration describing where the destination Apache Iceberg Tables are persisted. Amazon Data Firehose is in preview release and is subject to change.
- roleArn String - The Amazon Resource Name (ARN) of the IAM role to be assumed by Firehose for calling Apache Iceberg Tables. Amazon Data Firehose is in preview release and is subject to change.
- s3Configuration Property Map
- bufferingHints Property Map
- cloudWatchLoggingOptions Property Map
- destinationTableConfigurationList List<Property Map> - Provides a list of DestinationTableConfigurations which Firehose uses to deliver data to Apache Iceberg Tables. Firehose will write data with insert if a table-specific configuration is not provided here. Amazon Data Firehose is in preview release and is subject to change.
- processingConfiguration Property Map
- retryOptions Property Map
- s3BackupMode "AllData" | "FailedDataOnly" - Describes how Firehose will back up records. Currently, S3 backup only supports FailedDataOnly for preview. Amazon Data Firehose is in preview release and is subject to change.
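Pulling the Iceberg destination fields together, here is a hedged sketch of the argument shape as a plain Python mapping. All ARNs, bucket names, and role names are invented, and the nested keys (such as catalog_arn and bucket_arn) are abbreviated assumptions rather than the full input types:

```python
# Illustrative only: every ARN and nested key below is hypothetical.
iceberg_destination_configuration = {
    "catalog_configuration": {
        # assumed to point at the catalog holding the destination Iceberg tables
        "catalog_arn": "arn:aws:glue:us-east-1:111122223333:catalog",
    },
    "role_arn": "arn:aws:iam::111122223333:role/firehose-iceberg-role",
    "s3_configuration": {
        "bucket_arn": "arn:aws:s3:::my-firehose-backup-bucket",
        "role_arn": "arn:aws:iam::111122223333:role/firehose-iceberg-role",
    },
    # Per the preview note above, only FailedDataOnly is supported for S3 backup.
    "s3_backup_mode": "FailedDataOnly",
}
```

Since no destination_table_configuration_list is supplied here, Firehose would fall back to writing data with insert, as the description above states.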
DeliveryStreamIcebergDestinationConfigurations3BackupMode, DeliveryStreamIcebergDestinationConfigurations3BackupModeArgs
C#
- AllData - AllData
- FailedDataOnly - FailedDataOnly

Go
- DeliveryStreamIcebergDestinationConfigurations3BackupModeAllData - AllData
- DeliveryStreamIcebergDestinationConfigurations3BackupModeFailedDataOnly - FailedDataOnly

Java
- AllData - AllData
- FailedDataOnly - FailedDataOnly

TypeScript
- AllData - AllData
- FailedDataOnly - FailedDataOnly

Python
- ALL_DATA - AllData
- FAILED_DATA_ONLY - FailedDataOnly

YAML
- "AllData" - AllData
- "FailedDataOnly" - FailedDataOnly
DeliveryStreamInputFormatConfiguration, DeliveryStreamInputFormatConfigurationArgs
C#
- Deserializer Pulumi.AwsNative.KinesisFirehose.Inputs.DeliveryStreamDeserializer - Specifies which deserializer to use. You can choose either the Apache Hive JSON SerDe or the OpenX JSON SerDe. If both are non-null, the server rejects the request.

Go
- Deserializer DeliveryStreamDeserializer - Specifies which deserializer to use. You can choose either the Apache Hive JSON SerDe or the OpenX JSON SerDe. If both are non-null, the server rejects the request.

Java
- deserializer DeliveryStreamDeserializer - Specifies which deserializer to use. You can choose either the Apache Hive JSON SerDe or the OpenX JSON SerDe. If both are non-null, the server rejects the request.

TypeScript
- deserializer DeliveryStreamDeserializer - Specifies which deserializer to use. You can choose either the Apache Hive JSON SerDe or the OpenX JSON SerDe. If both are non-null, the server rejects the request.

Python
- deserializer DeliveryStreamDeserializer - Specifies which deserializer to use. You can choose either the Apache Hive JSON SerDe or the OpenX JSON SerDe. If both are non-null, the server rejects the request.

YAML
- deserializer Property Map - Specifies which deserializer to use. You can choose either the Apache Hive JSON SerDe or the OpenX JSON SerDe. If both are non-null, the server rejects the request.
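The either-or rule for the deserializer (the server rejects a request where both SerDes are non-null) can be enforced locally before submitting. This guard is an illustrative sketch, not an SDK function:

```python
def input_format(hive_json_ser_de=None, open_x_json_ser_de=None):
    """Hypothetical helper: per the docs, supplying both deserializers gets
    the request rejected, so fail fast locally with the same rule."""
    if hive_json_ser_de is not None and open_x_json_ser_de is not None:
        raise ValueError("choose either the Hive JSON SerDe or the OpenX JSON SerDe, not both")
    chosen = hive_json_ser_de if hive_json_ser_de is not None else open_x_json_ser_de
    return {"deserializer": chosen}
```

Note that supplying neither is left to the service to validate; the check above only mirrors the documented mutual-exclusion rule.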
DeliveryStreamKinesisStreamSourceConfiguration, DeliveryStreamKinesisStreamSourceConfigurationArgs
C#
- KinesisStreamArn string - The ARN of the source Kinesis data stream.
- RoleArn string - The ARN of the role that provides access to the source Kinesis data stream.

Go
- KinesisStreamArn string - The ARN of the source Kinesis data stream.
- RoleArn string - The ARN of the role that provides access to the source Kinesis data stream.

Java
- kinesisStreamArn String - The ARN of the source Kinesis data stream.
- roleArn String - The ARN of the role that provides access to the source Kinesis data stream.

TypeScript
- kinesisStreamArn string - The ARN of the source Kinesis data stream.
- roleArn string - The ARN of the role that provides access to the source Kinesis data stream.

Python
- kinesis_stream_arn str - The ARN of the source Kinesis data stream.
- role_arn str - The ARN of the role that provides access to the source Kinesis data stream.

YAML
- kinesisStreamArn String - The ARN of the source Kinesis data stream.
- roleArn String - The ARN of the role that provides access to the source Kinesis data stream.
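Both fields of the Kinesis stream source are ARNs. As a quick illustrative check (not part of any SDK), the parser below assumes the standard stream-ARN shape, arn:aws:kinesis:&lt;region&gt;:&lt;account-id&gt;:stream/&lt;stream-name&gt;, with a made-up example stream:

```python
def parse_kinesis_stream_arn(arn: str) -> dict:
    # Assumed format: arn:aws:kinesis:<region>:<account-id>:stream/<stream-name>
    parts = arn.split(":", 5)
    if len(parts) != 6 or parts[2] != "kinesis" or not parts[5].startswith("stream/"):
        raise ValueError(f"not a Kinesis stream ARN: {arn}")
    return {
        "region": parts[3],
        "account": parts[4],
        "stream_name": parts[5].split("/", 1)[1],
    }

# Hypothetical source stream ARN.
info = parse_kinesis_stream_arn("arn:aws:kinesis:us-east-1:123456789012:stream/source-stream")
```

A check like this catches a pasted role ARN in the stream-ARN slot before the deployment fails remotely.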
DeliveryStreamKmsEncryptionConfig, DeliveryStreamKmsEncryptionConfigArgs
C#
- AwskmsKeyArn string - The Amazon Resource Name (ARN) of the AWS KMS encryption key that Amazon S3 uses to encrypt data delivered by the Kinesis Data Firehose stream. The key must belong to the same region as the destination S3 bucket.

Go
- AwskmsKeyArn string - The Amazon Resource Name (ARN) of the AWS KMS encryption key that Amazon S3 uses to encrypt data delivered by the Kinesis Data Firehose stream. The key must belong to the same region as the destination S3 bucket.

Java
- awskmsKeyArn String - The Amazon Resource Name (ARN) of the AWS KMS encryption key that Amazon S3 uses to encrypt data delivered by the Kinesis Data Firehose stream. The key must belong to the same region as the destination S3 bucket.

TypeScript
- awskmsKeyArn string - The Amazon Resource Name (ARN) of the AWS KMS encryption key that Amazon S3 uses to encrypt data delivered by the Kinesis Data Firehose stream. The key must belong to the same region as the destination S3 bucket.

Python
- awskms_key_arn str - The Amazon Resource Name (ARN) of the AWS KMS encryption key that Amazon S3 uses to encrypt data delivered by the Kinesis Data Firehose stream. The key must belong to the same region as the destination S3 bucket.

YAML
- awskmsKeyArn String - The Amazon Resource Name (ARN) of the AWS KMS encryption key that Amazon S3 uses to encrypt data delivered by the Kinesis Data Firehose stream. The key must belong to the same region as the destination S3 bucket.
DeliveryStreamMskSourceConfiguration, DeliveryStreamMskSourceConfigurationArgs
C#
- AuthenticationConfiguration Pulumi.AwsNative.KinesisFirehose.Inputs.DeliveryStreamAuthenticationConfiguration - The authentication configuration of the Amazon MSK cluster.
- MskClusterArn string - The ARN of the Amazon MSK cluster.
- TopicName string - The topic name within the Amazon MSK cluster.
- ReadFromTimestamp string - The start date and time in UTC for the offset position within your MSK topic from where Firehose begins to read. By default, this is set to the timestamp at which Firehose becomes Active. To create a Firehose stream with the Earliest start position from the SDK or CLI, set the ReadFromTimestamp parameter to Epoch (1970-01-01T00:00:00Z).

Go
- AuthenticationConfiguration DeliveryStreamAuthenticationConfiguration - The authentication configuration of the Amazon MSK cluster.
- MskClusterArn string - The ARN of the Amazon MSK cluster.
- TopicName string - The topic name within the Amazon MSK cluster.
- ReadFromTimestamp string - The start date and time in UTC for the offset position within your MSK topic from where Firehose begins to read. By default, this is set to the timestamp at which Firehose becomes Active. To create a Firehose stream with the Earliest start position from the SDK or CLI, set the ReadFromTimestamp parameter to Epoch (1970-01-01T00:00:00Z).

Java
- authenticationConfiguration DeliveryStreamAuthenticationConfiguration - The authentication configuration of the Amazon MSK cluster.
- mskClusterArn String - The ARN of the Amazon MSK cluster.
- topicName String - The topic name within the Amazon MSK cluster.
- readFromTimestamp String - The start date and time in UTC for the offset position within your MSK topic from where Firehose begins to read. By default, this is set to the timestamp at which Firehose becomes Active. To create a Firehose stream with the Earliest start position from the SDK or CLI, set the ReadFromTimestamp parameter to Epoch (1970-01-01T00:00:00Z).

TypeScript
- authenticationConfiguration DeliveryStreamAuthenticationConfiguration - The authentication configuration of the Amazon MSK cluster.
- mskClusterArn string - The ARN of the Amazon MSK cluster.
- topicName string - The topic name within the Amazon MSK cluster.
- readFromTimestamp string - The start date and time in UTC for the offset position within your MSK topic from where Firehose begins to read. By default, this is set to the timestamp at which Firehose becomes Active. To create a Firehose stream with the Earliest start position from the SDK or CLI, set the ReadFromTimestamp parameter to Epoch (1970-01-01T00:00:00Z).

Python
- authentication_configuration DeliveryStreamAuthenticationConfiguration - The authentication configuration of the Amazon MSK cluster.
- msk_cluster_arn str - The ARN of the Amazon MSK cluster.
- topic_name str - The topic name within the Amazon MSK cluster.
- read_from_timestamp str - The start date and time in UTC for the offset position within your MSK topic from where Firehose begins to read. By default, this is set to the timestamp at which Firehose becomes Active. To create a Firehose stream with the Earliest start position from the SDK or CLI, set the ReadFromTimestamp parameter to Epoch (1970-01-01T00:00:00Z).

YAML
- authenticationConfiguration Property Map - The authentication configuration of the Amazon MSK cluster.
- mskClusterArn String - The ARN of the Amazon MSK cluster.
- topicName String - The topic name within the Amazon MSK cluster.
- readFromTimestamp String - The start date and time in UTC for the offset position within your MSK topic from where Firehose begins to read. By default, this is set to the timestamp at which Firehose becomes Active. To create a Firehose stream with the Earliest start position from the SDK or CLI, set the ReadFromTimestamp parameter to Epoch (1970-01-01T00:00:00Z).
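The ReadFromTimestamp note above can be made concrete by formatting a UTC datetime in the ISO-8601 style the doc's Epoch example uses; the helper name and the exact format string are assumptions based on that example:

```python
from datetime import datetime, timezone

def read_from_timestamp(dt: datetime) -> str:
    # Format a UTC datetime like the doc's Epoch example, 1970-01-01T00:00:00Z.
    return dt.astimezone(timezone.utc).strftime("%Y-%m-%dT%H:%M:%SZ")

# Passing the Unix epoch selects the Earliest start position in the MSK topic.
earliest = read_from_timestamp(datetime(1970, 1, 1, tzinfo=timezone.utc))
# earliest == "1970-01-01T00:00:00Z"
```

Any later timestamp starts reading from that offset position instead of the topic's earliest record.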
DeliveryStreamOpenXJsonSerDe, DeliveryStreamOpenXJsonSerDeArgs
- Case
Insensitive bool - When set to
true
, which is the default, Firehose converts JSON keys to lowercase before deserializing them. - Column
To Dictionary<string, string>Json Key Mappings - Maps column names to JSON keys that aren't identical to the column names. This is useful when the JSON contains keys that are Hive keywords. For example,
timestamp
is a Hive keyword. If you have a JSON key namedtimestamp
, set this parameter to{"ts": "timestamp"}
to map this key to a column namedts
. - Convert
Dots boolIn Json Keys To Underscores When set to
true
, specifies that the names of the keys include dots and that you want Firehose to replace them with underscores. This is useful because Apache Hive does not allow dots in column names. For example, if the JSON contains a key whose name is "a.b", you can define the column name to be "a_b" when using this option.The default is
false
.
- Case
Insensitive bool - When set to
true
, which is the default, Firehose converts JSON keys to lowercase before deserializing them. - Column
To map[string]stringJson Key Mappings - Maps column names to JSON keys that aren't identical to the column names. This is useful when the JSON contains keys that are Hive keywords. For example,
timestamp
is a Hive keyword. If you have a JSON key namedtimestamp
, set this parameter to{"ts": "timestamp"}
to map this key to a column namedts
. - Convert
Dots boolIn Json Keys To Underscores When set to
true
, specifies that the names of the keys include dots and that you want Firehose to replace them with underscores. This is useful because Apache Hive does not allow dots in column names. For example, if the JSON contains a key whose name is "a.b", you can define the column name to be "a_b" when using this option.The default is
false
.
- caseInsensitive Boolean - When set to true, which is the default, Firehose converts JSON keys to lowercase before deserializing them.
- columnToJsonKeyMappings Map<String,String> - Maps column names to JSON keys that aren't identical to the column names. This is useful when the JSON contains keys that are Hive keywords. For example, timestamp is a Hive keyword. If you have a JSON key named timestamp, set this parameter to {"ts": "timestamp"} to map this key to a column named ts.
- convertDotsInJsonKeysToUnderscores Boolean - When set to true, specifies that the names of the keys include dots and that you want Firehose to replace them with underscores. This is useful because Apache Hive does not allow dots in column names. For example, if the JSON contains a key whose name is "a.b", you can define the column name to be "a_b" when using this option. The default is false.
- caseInsensitive boolean - When set to true, which is the default, Firehose converts JSON keys to lowercase before deserializing them.
- columnToJsonKeyMappings {[key: string]: string} - Maps column names to JSON keys that aren't identical to the column names. This is useful when the JSON contains keys that are Hive keywords. For example, timestamp is a Hive keyword. If you have a JSON key named timestamp, set this parameter to {"ts": "timestamp"} to map this key to a column named ts.
- convertDotsInJsonKeysToUnderscores boolean - When set to true, specifies that the names of the keys include dots and that you want Firehose to replace them with underscores. This is useful because Apache Hive does not allow dots in column names. For example, if the JSON contains a key whose name is "a.b", you can define the column name to be "a_b" when using this option. The default is false.
- case_insensitive bool - When set to true, which is the default, Firehose converts JSON keys to lowercase before deserializing them.
- column_to_json_key_mappings Mapping[str, str] - Maps column names to JSON keys that aren't identical to the column names. This is useful when the JSON contains keys that are Hive keywords. For example, timestamp is a Hive keyword. If you have a JSON key named timestamp, set this parameter to {"ts": "timestamp"} to map this key to a column named ts.
- convert_dots_in_json_keys_to_underscores bool - When set to true, specifies that the names of the keys include dots and that you want Firehose to replace them with underscores. This is useful because Apache Hive does not allow dots in column names. For example, if the JSON contains a key whose name is "a.b", you can define the column name to be "a_b" when using this option. The default is false.
- caseInsensitive Boolean - When set to true, which is the default, Firehose converts JSON keys to lowercase before deserializing them.
- columnToJsonKeyMappings Map<String> - Maps column names to JSON keys that aren't identical to the column names. This is useful when the JSON contains keys that are Hive keywords. For example, timestamp is a Hive keyword. If you have a JSON key named timestamp, set this parameter to {"ts": "timestamp"} to map this key to a column named ts.
- convertDotsInJsonKeysToUnderscores Boolean - When set to true, specifies that the names of the keys include dots and that you want Firehose to replace them with underscores. This is useful because Apache Hive does not allow dots in column names. For example, if the JSON contains a key whose name is "a.b", you can define the column name to be "a_b" when using this option. The default is false.
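Taken together, the three OpenX JSON SerDe options control how incoming JSON keys map to Hive column names. A minimal sketch of the TypeScript SDK shape (a plain object literal mirroring DeliveryStreamOpenXJsonSerDeArgs, not a full Pulumi program):

```typescript
// Sketch of the OpenX JSON SerDe options described above.
const openXJsonSerDe = {
  // Default behavior: lowercase JSON keys before deserializing.
  caseInsensitive: true,
  // "timestamp" is a Hive keyword, so read the JSON key "timestamp"
  // into a column named "ts" instead.
  columnToJsonKeyMappings: { ts: "timestamp" },
  // Rewrite dotted keys such as "a.b" to "a_b", because Hive
  // does not allow dots in column names.
  convertDotsInJsonKeysToUnderscores: true,
};
```

This object would be supplied as the openXJsonSerDe field of the input format's deserializer.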
DeliveryStreamOrcSerDe, DeliveryStreamOrcSerDeArgs
- BlockSizeBytes int - The Hadoop Distributed File System (HDFS) block size. This is useful if you intend to copy the data from Amazon S3 to HDFS before querying. The default is 256 MiB and the minimum is 64 MiB. Firehose uses this value for padding calculations.
- BloomFilterColumns List<string> - The column names for which you want Firehose to create bloom filters. The default is null.
- BloomFilterFalsePositiveProbability double - The Bloom filter false positive probability (FPP). The lower the FPP, the bigger the Bloom filter. The default value is 0.05, the minimum is 0, and the maximum is 1.
- Compression string - The compression code to use over data blocks. The default is SNAPPY.
- DictionaryKeyThreshold double - Represents the fraction of the total number of non-null rows. To turn off dictionary encoding, set this fraction to a number that is less than the number of distinct keys in a dictionary. To always use dictionary encoding, set this threshold to 1.
- EnablePadding bool - Set this to true to indicate that you want stripes to be padded to the HDFS block boundaries. This is useful if you intend to copy the data from Amazon S3 to HDFS before querying. The default is false.
- FormatVersion string - The version of the file to write. The possible values are V0_11 and V0_12. The default is V0_12.
- PaddingTolerance double - A number between 0 and 1 that defines the tolerance for block padding as a decimal fraction of stripe size. The default value is 0.05, which means 5 percent of stripe size. For the default values of 64 MiB ORC stripes and 256 MiB HDFS blocks, the default block padding tolerance of 5 percent reserves a maximum of 3.2 MiB for padding within the 256 MiB block. In such a case, if the available size within the block is more than 3.2 MiB, a new, smaller stripe is inserted to fit within that space. This ensures that no stripe crosses block boundaries and causes remote reads within a node-local task. Kinesis Data Firehose ignores this parameter when EnablePadding is false.
- RowIndexStride int - The number of rows between index entries. The default is 10,000 and the minimum is 1,000.
- StripeSizeBytes int - The number of bytes in each stripe. The default is 64 MiB and the minimum is 8 MiB.
- BlockSizeBytes int - The Hadoop Distributed File System (HDFS) block size. This is useful if you intend to copy the data from Amazon S3 to HDFS before querying. The default is 256 MiB and the minimum is 64 MiB. Firehose uses this value for padding calculations.
- BloomFilterColumns []string - The column names for which you want Firehose to create bloom filters. The default is null.
- BloomFilterFalsePositiveProbability float64 - The Bloom filter false positive probability (FPP). The lower the FPP, the bigger the Bloom filter. The default value is 0.05, the minimum is 0, and the maximum is 1.
- Compression string - The compression code to use over data blocks. The default is SNAPPY.
- DictionaryKeyThreshold float64 - Represents the fraction of the total number of non-null rows. To turn off dictionary encoding, set this fraction to a number that is less than the number of distinct keys in a dictionary. To always use dictionary encoding, set this threshold to 1.
- EnablePadding bool - Set this to true to indicate that you want stripes to be padded to the HDFS block boundaries. This is useful if you intend to copy the data from Amazon S3 to HDFS before querying. The default is false.
- FormatVersion string - The version of the file to write. The possible values are V0_11 and V0_12. The default is V0_12.
- PaddingTolerance float64 - A number between 0 and 1 that defines the tolerance for block padding as a decimal fraction of stripe size. The default value is 0.05, which means 5 percent of stripe size. For the default values of 64 MiB ORC stripes and 256 MiB HDFS blocks, the default block padding tolerance of 5 percent reserves a maximum of 3.2 MiB for padding within the 256 MiB block. In such a case, if the available size within the block is more than 3.2 MiB, a new, smaller stripe is inserted to fit within that space. This ensures that no stripe crosses block boundaries and causes remote reads within a node-local task. Kinesis Data Firehose ignores this parameter when EnablePadding is false.
- RowIndexStride int - The number of rows between index entries. The default is 10,000 and the minimum is 1,000.
- StripeSizeBytes int - The number of bytes in each stripe. The default is 64 MiB and the minimum is 8 MiB.
- blockSizeBytes Integer - The Hadoop Distributed File System (HDFS) block size. This is useful if you intend to copy the data from Amazon S3 to HDFS before querying. The default is 256 MiB and the minimum is 64 MiB. Firehose uses this value for padding calculations.
- bloomFilterColumns List<String> - The column names for which you want Firehose to create bloom filters. The default is null.
- bloomFilterFalsePositiveProbability Double - The Bloom filter false positive probability (FPP). The lower the FPP, the bigger the Bloom filter. The default value is 0.05, the minimum is 0, and the maximum is 1.
- compression String - The compression code to use over data blocks. The default is SNAPPY.
- dictionaryKeyThreshold Double - Represents the fraction of the total number of non-null rows. To turn off dictionary encoding, set this fraction to a number that is less than the number of distinct keys in a dictionary. To always use dictionary encoding, set this threshold to 1.
- enablePadding Boolean - Set this to true to indicate that you want stripes to be padded to the HDFS block boundaries. This is useful if you intend to copy the data from Amazon S3 to HDFS before querying. The default is false.
- formatVersion String - The version of the file to write. The possible values are V0_11 and V0_12. The default is V0_12.
- paddingTolerance Double - A number between 0 and 1 that defines the tolerance for block padding as a decimal fraction of stripe size. The default value is 0.05, which means 5 percent of stripe size. For the default values of 64 MiB ORC stripes and 256 MiB HDFS blocks, the default block padding tolerance of 5 percent reserves a maximum of 3.2 MiB for padding within the 256 MiB block. In such a case, if the available size within the block is more than 3.2 MiB, a new, smaller stripe is inserted to fit within that space. This ensures that no stripe crosses block boundaries and causes remote reads within a node-local task. Kinesis Data Firehose ignores this parameter when EnablePadding is false.
- rowIndexStride Integer - The number of rows between index entries. The default is 10,000 and the minimum is 1,000.
- stripeSizeBytes Integer - The number of bytes in each stripe. The default is 64 MiB and the minimum is 8 MiB.
- blockSizeBytes number - The Hadoop Distributed File System (HDFS) block size. This is useful if you intend to copy the data from Amazon S3 to HDFS before querying. The default is 256 MiB and the minimum is 64 MiB. Firehose uses this value for padding calculations.
- bloomFilterColumns string[] - The column names for which you want Firehose to create bloom filters. The default is null.
- bloomFilterFalsePositiveProbability number - The Bloom filter false positive probability (FPP). The lower the FPP, the bigger the Bloom filter. The default value is 0.05, the minimum is 0, and the maximum is 1.
- compression string - The compression code to use over data blocks. The default is SNAPPY.
- dictionaryKeyThreshold number - Represents the fraction of the total number of non-null rows. To turn off dictionary encoding, set this fraction to a number that is less than the number of distinct keys in a dictionary. To always use dictionary encoding, set this threshold to 1.
- enablePadding boolean - Set this to true to indicate that you want stripes to be padded to the HDFS block boundaries. This is useful if you intend to copy the data from Amazon S3 to HDFS before querying. The default is false.
- formatVersion string - The version of the file to write. The possible values are V0_11 and V0_12. The default is V0_12.
- paddingTolerance number - A number between 0 and 1 that defines the tolerance for block padding as a decimal fraction of stripe size. The default value is 0.05, which means 5 percent of stripe size. For the default values of 64 MiB ORC stripes and 256 MiB HDFS blocks, the default block padding tolerance of 5 percent reserves a maximum of 3.2 MiB for padding within the 256 MiB block. In such a case, if the available size within the block is more than 3.2 MiB, a new, smaller stripe is inserted to fit within that space. This ensures that no stripe crosses block boundaries and causes remote reads within a node-local task. Kinesis Data Firehose ignores this parameter when EnablePadding is false.
- rowIndexStride number - The number of rows between index entries. The default is 10,000 and the minimum is 1,000.
- stripeSizeBytes number - The number of bytes in each stripe. The default is 64 MiB and the minimum is 8 MiB.
- block_size_bytes int - The Hadoop Distributed File System (HDFS) block size. This is useful if you intend to copy the data from Amazon S3 to HDFS before querying. The default is 256 MiB and the minimum is 64 MiB. Firehose uses this value for padding calculations.
- bloom_filter_columns Sequence[str] - The column names for which you want Firehose to create bloom filters. The default is null.
- bloom_filter_false_positive_probability float - The Bloom filter false positive probability (FPP). The lower the FPP, the bigger the Bloom filter. The default value is 0.05, the minimum is 0, and the maximum is 1.
- compression str - The compression code to use over data blocks. The default is SNAPPY.
- dictionary_key_threshold float - Represents the fraction of the total number of non-null rows. To turn off dictionary encoding, set this fraction to a number that is less than the number of distinct keys in a dictionary. To always use dictionary encoding, set this threshold to 1.
- enable_padding bool - Set this to true to indicate that you want stripes to be padded to the HDFS block boundaries. This is useful if you intend to copy the data from Amazon S3 to HDFS before querying. The default is false.
- format_version str - The version of the file to write. The possible values are V0_11 and V0_12. The default is V0_12.
- padding_tolerance float - A number between 0 and 1 that defines the tolerance for block padding as a decimal fraction of stripe size. The default value is 0.05, which means 5 percent of stripe size. For the default values of 64 MiB ORC stripes and 256 MiB HDFS blocks, the default block padding tolerance of 5 percent reserves a maximum of 3.2 MiB for padding within the 256 MiB block. In such a case, if the available size within the block is more than 3.2 MiB, a new, smaller stripe is inserted to fit within that space. This ensures that no stripe crosses block boundaries and causes remote reads within a node-local task. Kinesis Data Firehose ignores this parameter when EnablePadding is false.
- row_index_stride int - The number of rows between index entries. The default is 10,000 and the minimum is 1,000.
- stripe_size_bytes int - The number of bytes in each stripe. The default is 64 MiB and the minimum is 8 MiB.
- blockSizeBytes Number - The Hadoop Distributed File System (HDFS) block size. This is useful if you intend to copy the data from Amazon S3 to HDFS before querying. The default is 256 MiB and the minimum is 64 MiB. Firehose uses this value for padding calculations.
- bloomFilterColumns List<String> - The column names for which you want Firehose to create bloom filters. The default is null.
- bloomFilterFalsePositiveProbability Number - The Bloom filter false positive probability (FPP). The lower the FPP, the bigger the Bloom filter. The default value is 0.05, the minimum is 0, and the maximum is 1.
- compression String - The compression code to use over data blocks. The default is SNAPPY.
- dictionaryKeyThreshold Number - Represents the fraction of the total number of non-null rows. To turn off dictionary encoding, set this fraction to a number that is less than the number of distinct keys in a dictionary. To always use dictionary encoding, set this threshold to 1.
- enablePadding Boolean - Set this to true to indicate that you want stripes to be padded to the HDFS block boundaries. This is useful if you intend to copy the data from Amazon S3 to HDFS before querying. The default is false.
- formatVersion String - The version of the file to write. The possible values are V0_11 and V0_12. The default is V0_12.
- paddingTolerance Number - A number between 0 and 1 that defines the tolerance for block padding as a decimal fraction of stripe size. The default value is 0.05, which means 5 percent of stripe size. For the default values of 64 MiB ORC stripes and 256 MiB HDFS blocks, the default block padding tolerance of 5 percent reserves a maximum of 3.2 MiB for padding within the 256 MiB block. In such a case, if the available size within the block is more than 3.2 MiB, a new, smaller stripe is inserted to fit within that space. This ensures that no stripe crosses block boundaries and causes remote reads within a node-local task. Kinesis Data Firehose ignores this parameter when EnablePadding is false.
- rowIndexStride Number - The number of rows between index entries. The default is 10,000 and the minimum is 1,000.
- stripeSizeBytes Number - The number of bytes in each stripe. The default is 64 MiB and the minimum is 8 MiB.
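As a worked example, the ORC SerDe defaults above can be written out explicitly. This is a plain-object sketch of DeliveryStreamOrcSerDeArgs in the TypeScript SDK shape; the bloom filter column name is hypothetical, and every value shown is optional in practice:

```typescript
// Sketch of DeliveryStreamOrcSerDeArgs with the documented defaults
// spelled out (values are illustrative, not required).
const orcSerDe = {
  blockSizeBytes: 256 * 1024 * 1024,   // 256 MiB HDFS block size (the default)
  bloomFilterColumns: ["customer_id"], // hypothetical column name
  bloomFilterFalsePositiveProbability: 0.05, // the default FPP
  compression: "SNAPPY",               // the default
  enablePadding: true,                 // pad stripes to HDFS block boundaries
  paddingTolerance: 0.05,              // 5% of stripe size; ignored when enablePadding is false
  formatVersion: "V0_12",              // the default
  rowIndexStride: 10000,               // rows between index entries (the default)
  stripeSizeBytes: 64 * 1024 * 1024,   // 64 MiB stripes (the default)
};
```

With these defaults, the 5 percent padding tolerance reserves at most 0.05 × 64 MiB = 3.2 MiB of padding inside each 256 MiB block, as described above.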
DeliveryStreamOutputFormatConfiguration, DeliveryStreamOutputFormatConfigurationArgs
- Serializer Pulumi.AwsNative.KinesisFirehose.Inputs.DeliveryStreamSerializer - Specifies which serializer to use. You can choose either the ORC SerDe or the Parquet SerDe. If both are non-null, the server rejects the request.
- Serializer DeliveryStreamSerializer - Specifies which serializer to use. You can choose either the ORC SerDe or the Parquet SerDe. If both are non-null, the server rejects the request.
- serializer DeliveryStreamSerializer - Specifies which serializer to use. You can choose either the ORC SerDe or the Parquet SerDe. If both are non-null, the server rejects the request.
- serializer DeliveryStreamSerializer - Specifies which serializer to use. You can choose either the ORC SerDe or the Parquet SerDe. If both are non-null, the server rejects the request.
- serializer DeliveryStreamSerializer - Specifies which serializer to use. You can choose either the ORC SerDe or the Parquet SerDe. If both are non-null, the server rejects the request.
- serializer Property Map - Specifies which serializer to use. You can choose either the ORC SerDe or the Parquet SerDe. If both are non-null, the server rejects the request.
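Because the two SerDes are mutually exclusive, the output format configuration sets exactly one of them. A minimal sketch of the TypeScript SDK shape:

```typescript
// Sketch of DeliveryStreamOutputFormatConfigurationArgs.
// Exactly one of orcSerDe / parquetSerDe may be set; the service
// rejects a request in which both are non-null.
const outputFormatConfiguration = {
  serializer: {
    parquetSerDe: { compression: "SNAPPY" },
    // orcSerDe: { ... }  // setting this too would be rejected
  },
};
```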
DeliveryStreamParquetSerDe, DeliveryStreamParquetSerDeArgs
- BlockSizeBytes int - The Hadoop Distributed File System (HDFS) block size. This is useful if you intend to copy the data from Amazon S3 to HDFS before querying. The default is 256 MiB and the minimum is 64 MiB. Firehose uses this value for padding calculations.
- Compression string - The compression code to use over data blocks. The possible values are UNCOMPRESSED, SNAPPY, and GZIP, with the default being SNAPPY. Use SNAPPY for higher decompression speed. Use GZIP if the compression ratio is more important than speed.
- EnableDictionaryCompression bool - Indicates whether to enable dictionary compression.
- MaxPaddingBytes int - The maximum amount of padding to apply. This is useful if you intend to copy the data from Amazon S3 to HDFS before querying. The default is 0.
- PageSizeBytes int - The Parquet page size. Column chunks are divided into pages. A page is conceptually an indivisible unit (in terms of compression and encoding). The minimum value is 64 KiB and the default is 1 MiB.
- WriterVersion string - Indicates the version of row format to output. The possible values are V1 and V2. The default is V1.
- BlockSizeBytes int - The Hadoop Distributed File System (HDFS) block size. This is useful if you intend to copy the data from Amazon S3 to HDFS before querying. The default is 256 MiB and the minimum is 64 MiB. Firehose uses this value for padding calculations.
- Compression string - The compression code to use over data blocks. The possible values are UNCOMPRESSED, SNAPPY, and GZIP, with the default being SNAPPY. Use SNAPPY for higher decompression speed. Use GZIP if the compression ratio is more important than speed.
- EnableDictionaryCompression bool - Indicates whether to enable dictionary compression.
- MaxPaddingBytes int - The maximum amount of padding to apply. This is useful if you intend to copy the data from Amazon S3 to HDFS before querying. The default is 0.
- PageSizeBytes int - The Parquet page size. Column chunks are divided into pages. A page is conceptually an indivisible unit (in terms of compression and encoding). The minimum value is 64 KiB and the default is 1 MiB.
- WriterVersion string - Indicates the version of row format to output. The possible values are V1 and V2. The default is V1.
- blockSizeBytes Integer - The Hadoop Distributed File System (HDFS) block size. This is useful if you intend to copy the data from Amazon S3 to HDFS before querying. The default is 256 MiB and the minimum is 64 MiB. Firehose uses this value for padding calculations.
- compression String - The compression code to use over data blocks. The possible values are UNCOMPRESSED, SNAPPY, and GZIP, with the default being SNAPPY. Use SNAPPY for higher decompression speed. Use GZIP if the compression ratio is more important than speed.
- enableDictionaryCompression Boolean - Indicates whether to enable dictionary compression.
- maxPaddingBytes Integer - The maximum amount of padding to apply. This is useful if you intend to copy the data from Amazon S3 to HDFS before querying. The default is 0.
- pageSizeBytes Integer - The Parquet page size. Column chunks are divided into pages. A page is conceptually an indivisible unit (in terms of compression and encoding). The minimum value is 64 KiB and the default is 1 MiB.
- writerVersion String - Indicates the version of row format to output. The possible values are V1 and V2. The default is V1.
- blockSizeBytes number - The Hadoop Distributed File System (HDFS) block size. This is useful if you intend to copy the data from Amazon S3 to HDFS before querying. The default is 256 MiB and the minimum is 64 MiB. Firehose uses this value for padding calculations.
- compression string - The compression code to use over data blocks. The possible values are UNCOMPRESSED, SNAPPY, and GZIP, with the default being SNAPPY. Use SNAPPY for higher decompression speed. Use GZIP if the compression ratio is more important than speed.
- enableDictionaryCompression boolean - Indicates whether to enable dictionary compression.
- maxPaddingBytes number - The maximum amount of padding to apply. This is useful if you intend to copy the data from Amazon S3 to HDFS before querying. The default is 0.
- pageSizeBytes number - The Parquet page size. Column chunks are divided into pages. A page is conceptually an indivisible unit (in terms of compression and encoding). The minimum value is 64 KiB and the default is 1 MiB.
- writerVersion string - Indicates the version of row format to output. The possible values are V1 and V2. The default is V1.
- block_size_bytes int - The Hadoop Distributed File System (HDFS) block size. This is useful if you intend to copy the data from Amazon S3 to HDFS before querying. The default is 256 MiB and the minimum is 64 MiB. Firehose uses this value for padding calculations.
- compression str - The compression code to use over data blocks. The possible values are UNCOMPRESSED, SNAPPY, and GZIP, with the default being SNAPPY. Use SNAPPY for higher decompression speed. Use GZIP if the compression ratio is more important than speed.
- enable_dictionary_compression bool - Indicates whether to enable dictionary compression.
- max_padding_bytes int - The maximum amount of padding to apply. This is useful if you intend to copy the data from Amazon S3 to HDFS before querying. The default is 0.
- page_size_bytes int - The Parquet page size. Column chunks are divided into pages. A page is conceptually an indivisible unit (in terms of compression and encoding). The minimum value is 64 KiB and the default is 1 MiB.
- writer_version str - Indicates the version of row format to output. The possible values are V1 and V2. The default is V1.
- blockSizeBytes Number - The Hadoop Distributed File System (HDFS) block size. This is useful if you intend to copy the data from Amazon S3 to HDFS before querying. The default is 256 MiB and the minimum is 64 MiB. Firehose uses this value for padding calculations.
- compression String - The compression code to use over data blocks. The possible values are UNCOMPRESSED, SNAPPY, and GZIP, with the default being SNAPPY. Use SNAPPY for higher decompression speed. Use GZIP if the compression ratio is more important than speed.
- enableDictionaryCompression Boolean - Indicates whether to enable dictionary compression.
- maxPaddingBytes Number - The maximum amount of padding to apply. This is useful if you intend to copy the data from Amazon S3 to HDFS before querying. The default is 0.
- pageSizeBytes Number - The Parquet page size. Column chunks are divided into pages. A page is conceptually an indivisible unit (in terms of compression and encoding). The minimum value is 64 KiB and the default is 1 MiB.
- writerVersion String - Indicates the version of row format to output. The possible values are V1 and V2. The default is V1.
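For comparison with the ORC sketch, here is a plain-object sketch of DeliveryStreamParquetSerDeArgs tuned for compression ratio over speed (GZIP instead of the default SNAPPY); all values are illustrative:

```typescript
// Sketch of DeliveryStreamParquetSerDeArgs. GZIP trades decompression
// speed for a better compression ratio; SNAPPY is the default.
const parquetSerDe = {
  blockSizeBytes: 256 * 1024 * 1024, // 256 MiB (the default)
  compression: "GZIP",
  enableDictionaryCompression: true,
  maxPaddingBytes: 0,                // the default
  pageSizeBytes: 1024 * 1024,        // 1 MiB page (the default; minimum is 64 KiB)
  writerVersion: "V1",               // the default
};
```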
DeliveryStreamProcessingConfiguration, DeliveryStreamProcessingConfigurationArgs
- Enabled bool - Indicates whether data processing is enabled (true) or disabled (false).
- Processors List<Pulumi.AwsNative.KinesisFirehose.Inputs.DeliveryStreamProcessor> - The data processors.
- Enabled bool - Indicates whether data processing is enabled (true) or disabled (false).
- Processors []DeliveryStreamProcessor - The data processors.
- enabled Boolean - Indicates whether data processing is enabled (true) or disabled (false).
- processors List<DeliveryStreamProcessor> - The data processors.
- enabled boolean - Indicates whether data processing is enabled (true) or disabled (false).
- processors DeliveryStreamProcessor[] - The data processors.
- enabled bool - Indicates whether data processing is enabled (true) or disabled (false).
- processors Sequence[DeliveryStreamProcessor] - The data processors.
- enabled Boolean - Indicates whether data processing is enabled (true) or disabled (false).
- processors List<Property Map> - The data processors.
DeliveryStreamProcessor, DeliveryStreamProcessorArgs
- Type Pulumi.AwsNative.KinesisFirehose.DeliveryStreamProcessorType - The type of processor. Valid values: Lambda.
- Parameters List<Pulumi.AwsNative.KinesisFirehose.Inputs.DeliveryStreamProcessorParameter> - The processor parameters.
- Type DeliveryStreamProcessorType - The type of processor. Valid values: Lambda.
- Parameters []DeliveryStreamProcessorParameter - The processor parameters.
- type DeliveryStreamProcessorType - The type of processor. Valid values: Lambda.
- parameters List<DeliveryStreamProcessorParameter> - The processor parameters.
- type DeliveryStreamProcessorType - The type of processor. Valid values: Lambda.
- parameters DeliveryStreamProcessorParameter[] - The processor parameters.
- type DeliveryStreamProcessorType - The type of processor. Valid values: Lambda.
- parameters Sequence[DeliveryStreamProcessorParameter] - The processor parameters.
- type "RecordDeAggregation" | "Decompression" | "CloudWatchLogProcessing" | "Lambda" | "MetadataExtraction" | "AppendDelimiterToRecord" - The type of processor. Valid values: Lambda.
- parameters List<Property Map> - The processor parameters.
DeliveryStreamProcessorParameter, DeliveryStreamProcessorParameterArgs
- ParameterName string - The name of the parameter. Currently the following default values are supported: 3 for NumberOfRetries and 60 for BufferIntervalInSeconds. The BufferSizeInMBs ranges between 0.2 MB and 3 MB. The default buffering hint is 1 MB for all destinations except Splunk; for Splunk, the default buffering hint is 256 KB.
- ParameterValue string - The parameter value.
- ParameterName string - The name of the parameter. Currently the following default values are supported: 3 for NumberOfRetries and 60 for BufferIntervalInSeconds. The BufferSizeInMBs ranges between 0.2 MB and 3 MB. The default buffering hint is 1 MB for all destinations except Splunk; for Splunk, the default buffering hint is 256 KB.
- ParameterValue string - The parameter value.
- parameterName String - The name of the parameter. Currently the following default values are supported: 3 for NumberOfRetries and 60 for BufferIntervalInSeconds. The BufferSizeInMBs ranges between 0.2 MB and 3 MB. The default buffering hint is 1 MB for all destinations except Splunk; for Splunk, the default buffering hint is 256 KB.
- parameterValue String - The parameter value.
- parameterName string - The name of the parameter. Currently the following default values are supported: 3 for NumberOfRetries and 60 for BufferIntervalInSeconds. The BufferSizeInMBs ranges between 0.2 MB and 3 MB. The default buffering hint is 1 MB for all destinations except Splunk; for Splunk, the default buffering hint is 256 KB.
- parameterValue string - The parameter value.
- parameter_name str - The name of the parameter. Currently the following default values are supported: 3 for NumberOfRetries and 60 for BufferIntervalInSeconds. The BufferSizeInMBs ranges between 0.2 MB and 3 MB. The default buffering hint is 1 MB for all destinations except Splunk; for Splunk, the default buffering hint is 256 KB.
- parameter_value str - The parameter value.
- parameterName String - The name of the parameter. Currently the following default values are supported: 3 for NumberOfRetries and 60 for BufferIntervalInSeconds. The BufferSizeInMBs ranges between 0.2 MB and 3 MB. The default buffering hint is 1 MB for all destinations except Splunk; for Splunk, the default buffering hint is 256 KB.
- parameterValue String - The parameter value.
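A processing configuration ties the surrounding types together: an enabled flag, a list of processors, and name/value parameters per processor. The sketch below wires a hypothetical Lambda transform in the TypeScript SDK shape; the function ARN is a placeholder, and the parameter names shown (LambdaArn and the buffering/retry hints documented above) are assumptions about a typical Lambda processor:

```typescript
// Sketch of DeliveryStreamProcessingConfigurationArgs with one
// Lambda processor. The ARN below is a placeholder, not a real function.
const processingConfiguration = {
  enabled: true,
  processors: [
    {
      type: "Lambda",
      parameters: [
        { parameterName: "LambdaArn",
          parameterValue: "arn:aws:lambda:us-east-1:111111111111:function:transform" },
        { parameterName: "NumberOfRetries", parameterValue: "3" },          // documented default
        { parameterName: "BufferIntervalInSeconds", parameterValue: "60" }, // documented default
        { parameterName: "BufferSizeInMBs", parameterValue: "1" },          // default for non-Splunk destinations
      ],
    },
  ],
};
```

Note that parameter values are strings even when they represent numbers.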
DeliveryStreamProcessorType, DeliveryStreamProcessorTypeArgs
- RecordDeAggregation - RecordDeAggregation
- Decompression - Decompression
- CloudWatchLogProcessing - CloudWatchLogProcessing
- Lambda - Lambda
- MetadataExtraction - MetadataExtraction
- AppendDelimiterToRecord - AppendDelimiterToRecord
- DeliveryStreamProcessorTypeRecordDeAggregation - RecordDeAggregation
- DeliveryStreamProcessorTypeDecompression - Decompression
- DeliveryStreamProcessorTypeCloudWatchLogProcessing - CloudWatchLogProcessing
- DeliveryStreamProcessorTypeLambda - Lambda
- DeliveryStreamProcessorTypeMetadataExtraction - MetadataExtraction
- DeliveryStreamProcessorTypeAppendDelimiterToRecord - AppendDelimiterToRecord
- RecordDeAggregation - RecordDeAggregation
- Decompression - Decompression
- CloudWatchLogProcessing - CloudWatchLogProcessing
- Lambda - Lambda
- MetadataExtraction - MetadataExtraction
- AppendDelimiterToRecord - AppendDelimiterToRecord
- RecordDeAggregation - RecordDeAggregation
- Decompression - Decompression
- CloudWatchLogProcessing - CloudWatchLogProcessing
- Lambda - Lambda
- MetadataExtraction - MetadataExtraction
- AppendDelimiterToRecord - AppendDelimiterToRecord
- RECORD_DE_AGGREGATION - RecordDeAggregation
- DECOMPRESSION - Decompression
- CLOUD_WATCH_LOG_PROCESSING - CloudWatchLogProcessing
- LAMBDA_ - Lambda
- METADATA_EXTRACTION - MetadataExtraction
- APPEND_DELIMITER_TO_RECORD - AppendDelimiterToRecord
- "RecordDeAggregation" - RecordDeAggregation
- "Decompression" - Decompression
- "CloudWatchLogProcessing" - CloudWatchLogProcessing
- "Lambda" - Lambda
- "MetadataExtraction" - MetadataExtraction
- "AppendDelimiterToRecord" - AppendDelimiterToRecord
DeliveryStreamRedshiftDestinationConfiguration, DeliveryStreamRedshiftDestinationConfigurationArgs
- ClusterJdbcurl string - The connection string that Kinesis Data Firehose uses to connect to the Amazon Redshift cluster.
- CopyCommand Pulumi.AwsNative.KinesisFirehose.Inputs.DeliveryStreamCopyCommand - Configures the Amazon Redshift COPY command that Kinesis Data Firehose uses to load data into the cluster from the Amazon S3 bucket.
- RoleArn string - The ARN of the AWS Identity and Access Management (IAM) role that grants Kinesis Data Firehose access to your Amazon S3 bucket and AWS KMS (if you enable data encryption). For more information, see Grant Kinesis Data Firehose Access to an Amazon Redshift Destination in the Amazon Kinesis Data Firehose Developer Guide.
- S3Configuration Pulumi.AwsNative.KinesisFirehose.Inputs.DeliveryStreamS3DestinationConfiguration - The S3 bucket where Kinesis Data Firehose first delivers data. After the data is in the bucket, Kinesis Data Firehose uses the COPY command to load the data into the Amazon Redshift cluster. For the Amazon S3 bucket's compression format, don't specify SNAPPY or ZIP because the Amazon Redshift COPY command doesn't support them.
- CloudWatchLoggingOptions Pulumi.AwsNative.KinesisFirehose.Inputs.DeliveryStreamCloudWatchLoggingOptions - The CloudWatch logging options for your delivery stream.
- Password string - The password for the Amazon Redshift user that you specified in the Username property.
- ProcessingConfiguration Pulumi.AwsNative.KinesisFirehose.Inputs.DeliveryStreamProcessingConfiguration - The data processing configuration for the Kinesis Data Firehose delivery stream.
- RetryOptions Pulumi.AwsNative.KinesisFirehose.Inputs.DeliveryStreamRedshiftRetryOptions - The retry behavior in case Firehose is unable to deliver documents to Amazon Redshift. Default value is 3600 (60 minutes).
- S3BackupConfiguration Pulumi.AwsNative.KinesisFirehose.Inputs.DeliveryStreamS3DestinationConfiguration - The configuration for backup in Amazon S3.
- S3BackupMode Pulumi.AwsNative.KinesisFirehose.DeliveryStreamRedshiftDestinationConfigurationS3BackupMode - The Amazon S3 backup mode. After you create a delivery stream, you can update it to enable Amazon S3 backup if it is disabled. If backup is enabled, you can't update the delivery stream to disable it.
- SecretsManagerConfiguration Pulumi.AwsNative.KinesisFirehose.Inputs.DeliveryStreamSecretsManagerConfiguration - The configuration that defines how you access secrets for Amazon Redshift.
- Username string - The Amazon Redshift user that has permission to access the Amazon Redshift cluster. This user must have INSERT privileges for copying data from the Amazon S3 bucket to the cluster.
- ClusterJdbcurl string - The connection string that Kinesis Data Firehose uses to connect to the Amazon Redshift cluster.
- CopyCommand DeliveryStreamCopyCommand - Configures the Amazon Redshift COPY command that Kinesis Data Firehose uses to load data into the cluster from the Amazon S3 bucket.
- RoleArn string - The ARN of the AWS Identity and Access Management (IAM) role that grants Kinesis Data Firehose access to your Amazon S3 bucket and AWS KMS (if you enable data encryption). For more information, see Grant Kinesis Data Firehose Access to an Amazon Redshift Destination in the Amazon Kinesis Data Firehose Developer Guide.
- S3Configuration DeliveryStreamS3DestinationConfiguration - The S3 bucket where Kinesis Data Firehose first delivers data. After the data is in the bucket, Kinesis Data Firehose uses the COPY command to load the data into the Amazon Redshift cluster. For the Amazon S3 bucket's compression format, don't specify SNAPPY or ZIP because the Amazon Redshift COPY command doesn't support them.
- CloudWatchLoggingOptions DeliveryStreamCloudWatchLoggingOptions - The CloudWatch logging options for your delivery stream.
- Password string - The password for the Amazon Redshift user that you specified in the Username property.
- ProcessingConfiguration DeliveryStreamProcessingConfiguration - The data processing configuration for the Kinesis Data Firehose delivery stream.
- RetryOptions DeliveryStreamRedshiftRetryOptions - The retry behavior in case Firehose is unable to deliver documents to Amazon Redshift. Default value is 3600 (60 minutes).
- S3BackupConfiguration DeliveryStreamS3DestinationConfiguration - The configuration for backup in Amazon S3.
- S3BackupMode DeliveryStreamRedshiftDestinationConfigurationS3BackupMode - The Amazon S3 backup mode. After you create a delivery stream, you can update it to enable Amazon S3 backup if it is disabled. If backup is enabled, you can't update the delivery stream to disable it.
- SecretsManagerConfiguration DeliveryStreamSecretsManagerConfiguration - The configuration that defines how you access secrets for Amazon Redshift.
- Username string - The Amazon Redshift user that has permission to access the Amazon Redshift cluster. This user must have INSERT privileges for copying data from the Amazon S3 bucket to the cluster.
- clusterJdbcurl String - The connection string that Kinesis Data Firehose uses to connect to the Amazon Redshift cluster.
- copyCommand DeliveryStreamCopyCommand - Configures the Amazon Redshift COPY command that Kinesis Data Firehose uses to load data into the cluster from the Amazon S3 bucket.
- roleArn String - The ARN of the AWS Identity and Access Management (IAM) role that grants Kinesis Data Firehose access to your Amazon S3 bucket and AWS KMS (if you enable data encryption). For more information, see Grant Kinesis Data Firehose Access to an Amazon Redshift Destination in the Amazon Kinesis Data Firehose Developer Guide.
- s3Configuration DeliveryStreamS3DestinationConfiguration - The S3 bucket where Kinesis Data Firehose first delivers data. After the data is in the bucket, Kinesis Data Firehose uses the COPY command to load the data into the Amazon Redshift cluster. For the Amazon S3 bucket's compression format, don't specify SNAPPY or ZIP because the Amazon Redshift COPY command doesn't support them.
- cloudWatchLoggingOptions DeliveryStreamCloudWatchLoggingOptions - The CloudWatch logging options for your delivery stream.
- password String - The password for the Amazon Redshift user that you specified in the Username property.
- processingConfiguration DeliveryStreamProcessingConfiguration - The data processing configuration for the Kinesis Data Firehose delivery stream.
- retryOptions DeliveryStreamRedshiftRetryOptions - The retry behavior in case Firehose is unable to deliver documents to Amazon Redshift. Default value is 3600 (60 minutes).
- s3BackupConfiguration DeliveryStreamS3DestinationConfiguration - The configuration for backup in Amazon S3.
- s3BackupMode DeliveryStreamRedshiftDestinationConfigurationS3BackupMode - The Amazon S3 backup mode. After you create a delivery stream, you can update it to enable Amazon S3 backup if it is disabled. If backup is enabled, you can't update the delivery stream to disable it.
- secretsManagerConfiguration DeliveryStreamSecretsManagerConfiguration - The configuration that defines how you access secrets for Amazon Redshift.
- username String - The Amazon Redshift user that has permission to access the Amazon Redshift cluster. This user must have INSERT privileges for copying data from the Amazon S3 bucket to the cluster.
- clusterJdbcurl string - The connection string that Kinesis Data Firehose uses to connect to the Amazon Redshift cluster.
- copyCommand DeliveryStreamCopyCommand - Configures the Amazon Redshift COPY command that Kinesis Data Firehose uses to load data into the cluster from the Amazon S3 bucket.
- roleArn string - The ARN of the AWS Identity and Access Management (IAM) role that grants Kinesis Data Firehose access to your Amazon S3 bucket and AWS KMS (if you enable data encryption). For more information, see Grant Kinesis Data Firehose Access to an Amazon Redshift Destination in the Amazon Kinesis Data Firehose Developer Guide.
- s3Configuration DeliveryStreamS3DestinationConfiguration - The S3 bucket where Kinesis Data Firehose first delivers data. After the data is in the bucket, Kinesis Data Firehose uses the COPY command to load the data into the Amazon Redshift cluster. For the Amazon S3 bucket's compression format, don't specify SNAPPY or ZIP because the Amazon Redshift COPY command doesn't support them.
- cloudWatchLoggingOptions DeliveryStreamCloudWatchLoggingOptions - The CloudWatch logging options for your delivery stream.
- password string - The password for the Amazon Redshift user that you specified in the Username property.
- processingConfiguration DeliveryStreamProcessingConfiguration - The data processing configuration for the Kinesis Data Firehose delivery stream.
- retryOptions DeliveryStreamRedshiftRetryOptions - The retry behavior in case Firehose is unable to deliver documents to Amazon Redshift. Default value is 3600 (60 minutes).
- s3BackupConfiguration DeliveryStreamS3DestinationConfiguration - The configuration for backup in Amazon S3.
- s3BackupMode DeliveryStreamRedshiftDestinationConfigurationS3BackupMode - The Amazon S3 backup mode. After you create a delivery stream, you can update it to enable Amazon S3 backup if it is disabled. If backup is enabled, you can't update the delivery stream to disable it.
- secretsManagerConfiguration DeliveryStreamSecretsManagerConfiguration - The configuration that defines how you access secrets for Amazon Redshift.
- username string - The Amazon Redshift user that has permission to access the Amazon Redshift cluster. This user must have INSERT privileges for copying data from the Amazon S3 bucket to the cluster.
- cluster_jdbcurl str - The connection string that Kinesis Data Firehose uses to connect to the Amazon Redshift cluster.
- copy_command DeliveryStreamCopyCommand - Configures the Amazon Redshift COPY command that Kinesis Data Firehose uses to load data into the cluster from the Amazon S3 bucket.
- role_arn str - The ARN of the AWS Identity and Access Management (IAM) role that grants Kinesis Data Firehose access to your Amazon S3 bucket and AWS KMS (if you enable data encryption). For more information, see Grant Kinesis Data Firehose Access to an Amazon Redshift Destination in the Amazon Kinesis Data Firehose Developer Guide.
- s3_configuration DeliveryStreamS3DestinationConfiguration - The S3 bucket where Kinesis Data Firehose first delivers data. After the data is in the bucket, Kinesis Data Firehose uses the COPY command to load the data into the Amazon Redshift cluster. For the Amazon S3 bucket's compression format, don't specify SNAPPY or ZIP because the Amazon Redshift COPY command doesn't support them.
- cloud_watch_logging_options DeliveryStreamCloudWatchLoggingOptions - The CloudWatch logging options for your delivery stream.
- password str - The password for the Amazon Redshift user that you specified in the Username property.
- processing_configuration DeliveryStreamProcessingConfiguration - The data processing configuration for the Kinesis Data Firehose delivery stream.
- retry_options DeliveryStreamRedshiftRetryOptions - The retry behavior in case Firehose is unable to deliver documents to Amazon Redshift. Default value is 3600 (60 minutes).
- s3_backup_configuration DeliveryStreamS3DestinationConfiguration - The configuration for backup in Amazon S3.
- s3_backup_mode DeliveryStreamRedshiftDestinationConfigurationS3BackupMode - The Amazon S3 backup mode. After you create a delivery stream, you can update it to enable Amazon S3 backup if it is disabled. If backup is enabled, you can't update the delivery stream to disable it.
- secrets_manager_configuration DeliveryStreamSecretsManagerConfiguration - The configuration that defines how you access secrets for Amazon Redshift.
- username str - The Amazon Redshift user that has permission to access the Amazon Redshift cluster. This user must have INSERT privileges for copying data from the Amazon S3 bucket to the cluster.
- clusterJdbcurl String - The connection string that Kinesis Data Firehose uses to connect to the Amazon Redshift cluster.
- copyCommand Property Map - Configures the Amazon Redshift COPY command that Kinesis Data Firehose uses to load data into the cluster from the Amazon S3 bucket.
- roleArn String - The ARN of the AWS Identity and Access Management (IAM) role that grants Kinesis Data Firehose access to your Amazon S3 bucket and AWS KMS (if you enable data encryption). For more information, see Grant Kinesis Data Firehose Access to an Amazon Redshift Destination in the Amazon Kinesis Data Firehose Developer Guide.
- s3Configuration Property Map - The S3 bucket where Kinesis Data Firehose first delivers data. After the data is in the bucket, Kinesis Data Firehose uses the COPY command to load the data into the Amazon Redshift cluster. For the Amazon S3 bucket's compression format, don't specify SNAPPY or ZIP because the Amazon Redshift COPY command doesn't support them.
- cloudWatchLoggingOptions Property Map - The CloudWatch logging options for your delivery stream.
- password String - The password for the Amazon Redshift user that you specified in the Username property.
- processingConfiguration Property Map - The data processing configuration for the Kinesis Data Firehose delivery stream.
- retryOptions Property Map - The retry behavior in case Firehose is unable to deliver documents to Amazon Redshift. Default value is 3600 (60 minutes).
- s3BackupConfiguration Property Map - The configuration for backup in Amazon S3.
- s3BackupMode "Disabled" | "Enabled" - The Amazon S3 backup mode. After you create a delivery stream, you can update it to enable Amazon S3 backup if it is disabled. If backup is enabled, you can't update the delivery stream to disable it.
- secretsManagerConfiguration Property Map - The configuration that defines how you access secrets for Amazon Redshift.
- username String - The Amazon Redshift user that has permission to access the Amazon Redshift cluster. This user must have INSERT privileges for copying data from the Amazon S3 bucket to the cluster.
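Tying the Redshift destination properties above together, here is a minimal Pulumi Python sketch. Every identifier in it (ARNs, JDBC URL, table name, user, password) is a hypothetical placeholder; in real code the password would come from a Pulumi secret or the Secrets Manager configuration.

```python
import pulumi_aws_native as aws_native

# Minimal sketch: Firehose stages data in S3, then issues a Redshift COPY.
# All ARNs, the JDBC URL, the table, and the credentials are placeholders.
stream = aws_native.kinesisfirehose.DeliveryStream(
    "redshift-stream",
    redshift_destination_configuration=aws_native.kinesisfirehose.DeliveryStreamRedshiftDestinationConfigurationArgs(
        cluster_jdbcurl="jdbc:redshift://example-cluster.abc123.us-east-1.redshift.amazonaws.com:5439/dev",
        copy_command=aws_native.kinesisfirehose.DeliveryStreamCopyCommandArgs(
            data_table_name="firehose_events",  # hypothetical target table
        ),
        role_arn="arn:aws:iam::123456789012:role/firehose-redshift-role",
        s3_configuration=aws_native.kinesisfirehose.DeliveryStreamS3DestinationConfigurationArgs(
            bucket_arn="arn:aws:s3:::example-staging-bucket",
            role_arn="arn:aws:iam::123456789012:role/firehose-redshift-role",
            # Redshift COPY can't read SNAPPY or ZIP, so stage as GZIP.
            compression_format="GZIP",
        ),
        username="firehose_user",
        password="example-password",  # use a Pulumi secret in real code
    ),
)
```

Because S3BackupMode can only move from disabled to enabled (not back), it is usually safest to decide on backup before the first deployment.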
DeliveryStreamRedshiftDestinationConfigurationS3BackupMode, DeliveryStreamRedshiftDestinationConfigurationS3BackupModeArgs
- Disabled - Disabled
- Enabled - Enabled
- DeliveryStreamRedshiftDestinationConfigurationS3BackupModeDisabled - Disabled
- DeliveryStreamRedshiftDestinationConfigurationS3BackupModeEnabled - Enabled
- Disabled - Disabled
- Enabled - Enabled
- Disabled - Disabled
- Enabled - Enabled
- DISABLED - Disabled
- ENABLED - Enabled
- "Disabled" - Disabled
- "Enabled" - Enabled
DeliveryStreamRedshiftRetryOptions, DeliveryStreamRedshiftRetryOptionsArgs
- DurationInSeconds int - The length of time during which Firehose retries delivery after a failure, starting from the initial request and including the first attempt. The default value is 3600 seconds (60 minutes). Firehose does not retry if the value of DurationInSeconds is 0 (zero) or if the first delivery attempt takes longer than the current value.
- DurationInSeconds int - The length of time during which Firehose retries delivery after a failure, starting from the initial request and including the first attempt. The default value is 3600 seconds (60 minutes). Firehose does not retry if the value of DurationInSeconds is 0 (zero) or if the first delivery attempt takes longer than the current value.
- durationInSeconds Integer - The length of time during which Firehose retries delivery after a failure, starting from the initial request and including the first attempt. The default value is 3600 seconds (60 minutes). Firehose does not retry if the value of DurationInSeconds is 0 (zero) or if the first delivery attempt takes longer than the current value.
- durationInSeconds number - The length of time during which Firehose retries delivery after a failure, starting from the initial request and including the first attempt. The default value is 3600 seconds (60 minutes). Firehose does not retry if the value of DurationInSeconds is 0 (zero) or if the first delivery attempt takes longer than the current value.
- duration_in_seconds int - The length of time during which Firehose retries delivery after a failure, starting from the initial request and including the first attempt. The default value is 3600 seconds (60 minutes). Firehose does not retry if the value of DurationInSeconds is 0 (zero) or if the first delivery attempt takes longer than the current value.
- durationInSeconds Number - The length of time during which Firehose retries delivery after a failure, starting from the initial request and including the first attempt. The default value is 3600 seconds (60 minutes). Firehose does not retry if the value of DurationInSeconds is 0 (zero) or if the first delivery attempt takes longer than the current value.
DeliveryStreamRetryOptions, DeliveryStreamRetryOptionsArgs
- Duration
In intSeconds - The total amount of time that Kinesis Data Firehose spends on retries. This duration starts after the initial attempt to send data to the custom destination via HTTPS endpoint fails. It doesn't include the periods during which Kinesis Data Firehose waits for acknowledgment from the specified destination after each attempt.
- Duration
In intSeconds - The total amount of time that Kinesis Data Firehose spends on retries. This duration starts after the initial attempt to send data to the custom destination via HTTPS endpoint fails. It doesn't include the periods during which Kinesis Data Firehose waits for acknowledgment from the specified destination after each attempt.
- duration
In IntegerSeconds - The total amount of time that Kinesis Data Firehose spends on retries. This duration starts after the initial attempt to send data to the custom destination via HTTPS endpoint fails. It doesn't include the periods during which Kinesis Data Firehose waits for acknowledgment from the specified destination after each attempt.
- duration
In numberSeconds - The total amount of time that Kinesis Data Firehose spends on retries. This duration starts after the initial attempt to send data to the custom destination via HTTPS endpoint fails. It doesn't include the periods during which Kinesis Data Firehose waits for acknowledgment from the specified destination after each attempt.
- duration_
in_ intseconds - The total amount of time that Kinesis Data Firehose spends on retries. This duration starts after the initial attempt to send data to the custom destination via HTTPS endpoint fails. It doesn't include the periods during which Kinesis Data Firehose waits for acknowledgment from the specified destination after each attempt.
- duration
In NumberSeconds - The total amount of time that Kinesis Data Firehose spends on retries. This duration starts after the initial attempt to send data to the custom destination via HTTPS endpoint fails. It doesn't include the periods during which Kinesis Data Firehose waits for acknowledgment from the specified destination after each attempt.
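The DurationInSeconds semantics described above (default 3600 seconds, 0 disables retries) can be modeled with a small helper. This is an illustrative sketch of the documented behavior, not Firehose's actual implementation, and the function name is hypothetical:

```python
# Illustrative model of the documented DurationInSeconds semantics:
# Firehose keeps retrying only while the elapsed time since the first
# attempt is still inside the configured retry window, and a window of
# 0 disables retries entirely.
def should_retry(duration_in_seconds: int, elapsed_seconds: float) -> bool:
    if duration_in_seconds == 0:
        return False  # 0 (zero) means: do not retry at all
    return elapsed_seconds < duration_in_seconds

# Default window: 3600 seconds (60 minutes)
print(should_retry(3600, 120.0))   # still inside the window
print(should_retry(3600, 4000.0))  # window exhausted
print(should_retry(0, 0.0))        # retries disabled
```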
DeliveryStreamS3DestinationConfiguration, DeliveryStreamS3DestinationConfigurationArgs
- BucketArn string - The Amazon Resource Name (ARN) of the Amazon S3 bucket to send data to.
- RoleArn string - The ARN of an AWS Identity and Access Management (IAM) role that grants Kinesis Data Firehose access to your Amazon S3 bucket and AWS KMS (if you enable data encryption). For more information, see Grant Kinesis Data Firehose Access to an Amazon S3 Destination in the Amazon Kinesis Data Firehose Developer Guide.
- BufferingHints Pulumi.AwsNative.KinesisFirehose.Inputs.DeliveryStreamBufferingHints - Configures how Kinesis Data Firehose buffers incoming data while delivering it to the Amazon S3 bucket.
- CloudWatchLoggingOptions Pulumi.AwsNative.KinesisFirehose.Inputs.DeliveryStreamCloudWatchLoggingOptions - The CloudWatch logging options for your delivery stream.
- CompressionFormat Pulumi.AwsNative.KinesisFirehose.DeliveryStreamS3DestinationConfigurationCompressionFormat - The type of compression that Kinesis Data Firehose uses to compress the data that it delivers to the Amazon S3 bucket. For valid values, see the CompressionFormat content for the S3DestinationConfiguration data type in the Amazon Kinesis Data Firehose API Reference.
- EncryptionConfiguration Pulumi.AwsNative.KinesisFirehose.Inputs.DeliveryStreamEncryptionConfiguration - Configures Amazon Simple Storage Service (Amazon S3) server-side encryption. Kinesis Data Firehose uses AWS Key Management Service (AWS KMS) to encrypt the data that it delivers to your Amazon S3 bucket.
- ErrorOutputPrefix string - A prefix that Kinesis Data Firehose evaluates and adds to failed records before writing them to S3. This prefix appears immediately following the bucket name. For information about how to specify this prefix, see Custom Prefixes for Amazon S3 Objects.
- Prefix string - A prefix that Kinesis Data Firehose adds to the files that it delivers to the Amazon S3 bucket. The prefix helps you identify the files that Kinesis Data Firehose delivered.
- BucketArn string - The Amazon Resource Name (ARN) of the Amazon S3 bucket to send data to.
- RoleArn string - The ARN of an AWS Identity and Access Management (IAM) role that grants Kinesis Data Firehose access to your Amazon S3 bucket and AWS KMS (if you enable data encryption). For more information, see Grant Kinesis Data Firehose Access to an Amazon S3 Destination in the Amazon Kinesis Data Firehose Developer Guide.
- BufferingHints DeliveryStreamBufferingHints - Configures how Kinesis Data Firehose buffers incoming data while delivering it to the Amazon S3 bucket.
- CloudWatchLoggingOptions DeliveryStreamCloudWatchLoggingOptions - The CloudWatch logging options for your delivery stream.
- CompressionFormat DeliveryStreamS3DestinationConfigurationCompressionFormat - The type of compression that Kinesis Data Firehose uses to compress the data that it delivers to the Amazon S3 bucket. For valid values, see the CompressionFormat content for the S3DestinationConfiguration data type in the Amazon Kinesis Data Firehose API Reference.
- EncryptionConfiguration DeliveryStreamEncryptionConfiguration - Configures Amazon Simple Storage Service (Amazon S3) server-side encryption. Kinesis Data Firehose uses AWS Key Management Service (AWS KMS) to encrypt the data that it delivers to your Amazon S3 bucket.
- ErrorOutputPrefix string - A prefix that Kinesis Data Firehose evaluates and adds to failed records before writing them to S3. This prefix appears immediately following the bucket name. For information about how to specify this prefix, see Custom Prefixes for Amazon S3 Objects.
- Prefix string - A prefix that Kinesis Data Firehose adds to the files that it delivers to the Amazon S3 bucket. The prefix helps you identify the files that Kinesis Data Firehose delivered.
- bucketArn String - The Amazon Resource Name (ARN) of the Amazon S3 bucket to send data to.
- roleArn String - The ARN of an AWS Identity and Access Management (IAM) role that grants Kinesis Data Firehose access to your Amazon S3 bucket and AWS KMS (if you enable data encryption). For more information, see Grant Kinesis Data Firehose Access to an Amazon S3 Destination in the Amazon Kinesis Data Firehose Developer Guide.
- bufferingHints DeliveryStreamBufferingHints - Configures how Kinesis Data Firehose buffers incoming data while delivering it to the Amazon S3 bucket.
- cloudWatchLoggingOptions DeliveryStreamCloudWatchLoggingOptions - The CloudWatch logging options for your delivery stream.
- compressionFormat DeliveryStreamS3DestinationConfigurationCompressionFormat - The type of compression that Kinesis Data Firehose uses to compress the data that it delivers to the Amazon S3 bucket. For valid values, see the CompressionFormat content for the S3DestinationConfiguration data type in the Amazon Kinesis Data Firehose API Reference.
- encryptionConfiguration DeliveryStreamEncryptionConfiguration - Configures Amazon Simple Storage Service (Amazon S3) server-side encryption. Kinesis Data Firehose uses AWS Key Management Service (AWS KMS) to encrypt the data that it delivers to your Amazon S3 bucket.
- errorOutputPrefix String - A prefix that Kinesis Data Firehose evaluates and adds to failed records before writing them to S3. This prefix appears immediately following the bucket name. For information about how to specify this prefix, see Custom Prefixes for Amazon S3 Objects.
- prefix String - A prefix that Kinesis Data Firehose adds to the files that it delivers to the Amazon S3 bucket. The prefix helps you identify the files that Kinesis Data Firehose delivered.
- bucketArn string - The Amazon Resource Name (ARN) of the Amazon S3 bucket to send data to.
- roleArn string - The ARN of an AWS Identity and Access Management (IAM) role that grants Kinesis Data Firehose access to your Amazon S3 bucket and AWS KMS (if you enable data encryption). For more information, see Grant Kinesis Data Firehose Access to an Amazon S3 Destination in the Amazon Kinesis Data Firehose Developer Guide.
- bufferingHints DeliveryStreamBufferingHints - Configures how Kinesis Data Firehose buffers incoming data while delivering it to the Amazon S3 bucket.
- cloudWatchLoggingOptions DeliveryStreamCloudWatchLoggingOptions - The CloudWatch logging options for your delivery stream.
- compressionFormat DeliveryStreamS3DestinationConfigurationCompressionFormat - The type of compression that Kinesis Data Firehose uses to compress the data that it delivers to the Amazon S3 bucket. For valid values, see the CompressionFormat content for the S3DestinationConfiguration data type in the Amazon Kinesis Data Firehose API Reference.
- encryptionConfiguration DeliveryStreamEncryptionConfiguration - Configures Amazon Simple Storage Service (Amazon S3) server-side encryption. Kinesis Data Firehose uses AWS Key Management Service (AWS KMS) to encrypt the data that it delivers to your Amazon S3 bucket.
- errorOutputPrefix string - A prefix that Kinesis Data Firehose evaluates and adds to failed records before writing them to S3. This prefix appears immediately following the bucket name. For information about how to specify this prefix, see Custom Prefixes for Amazon S3 Objects.
- prefix string - A prefix that Kinesis Data Firehose adds to the files that it delivers to the Amazon S3 bucket. The prefix helps you identify the files that Kinesis Data Firehose delivered.
- bucket_arn str - The Amazon Resource Name (ARN) of the Amazon S3 bucket to send data to.
- role_arn str - The ARN of an AWS Identity and Access Management (IAM) role that grants Kinesis Data Firehose access to your Amazon S3 bucket and AWS KMS (if you enable data encryption). For more information, see Grant Kinesis Data Firehose Access to an Amazon S3 Destination in the Amazon Kinesis Data Firehose Developer Guide.
- buffering_hints DeliveryStreamBufferingHints - Configures how Kinesis Data Firehose buffers incoming data while delivering it to the Amazon S3 bucket.
- cloud_watch_logging_options DeliveryStreamCloudWatchLoggingOptions - The CloudWatch logging options for your delivery stream.
- compression_format DeliveryStreamS3DestinationConfigurationCompressionFormat - The type of compression that Kinesis Data Firehose uses to compress the data that it delivers to the Amazon S3 bucket. For valid values, see the CompressionFormat content for the S3DestinationConfiguration data type in the Amazon Kinesis Data Firehose API Reference.
- encryption_configuration DeliveryStreamEncryptionConfiguration - Configures Amazon Simple Storage Service (Amazon S3) server-side encryption. Kinesis Data Firehose uses AWS Key Management Service (AWS KMS) to encrypt the data that it delivers to your Amazon S3 bucket.
- error_output_prefix str - A prefix that Kinesis Data Firehose evaluates and adds to failed records before writing them to S3. This prefix appears immediately following the bucket name. For information about how to specify this prefix, see Custom Prefixes for Amazon S3 Objects.
- prefix str - A prefix that Kinesis Data Firehose adds to the files that it delivers to the Amazon S3 bucket. The prefix helps you identify the files that Kinesis Data Firehose delivered.
- bucketArn String - The Amazon Resource Name (ARN) of the Amazon S3 bucket to send data to.
- roleArn String - The ARN of an AWS Identity and Access Management (IAM) role that grants Kinesis Data Firehose access to your Amazon S3 bucket and AWS KMS (if you enable data encryption). For more information, see Grant Kinesis Data Firehose Access to an Amazon S3 Destination in the Amazon Kinesis Data Firehose Developer Guide.
- bufferingHints Property Map - Configures how Kinesis Data Firehose buffers incoming data while delivering it to the Amazon S3 bucket.
- cloudWatchLoggingOptions Property Map - The CloudWatch logging options for your delivery stream.
- compressionFormat "UNCOMPRESSED" | "GZIP" | "ZIP" | "Snappy" | "HADOOP_SNAPPY" - The type of compression that Kinesis Data Firehose uses to compress the data that it delivers to the Amazon S3 bucket. For valid values, see the CompressionFormat content for the S3DestinationConfiguration data type in the Amazon Kinesis Data Firehose API Reference.
- encryptionConfiguration Property Map - Configures Amazon Simple Storage Service (Amazon S3) server-side encryption. Kinesis Data Firehose uses AWS Key Management Service (AWS KMS) to encrypt the data that it delivers to your Amazon S3 bucket.
- errorOutputPrefix String - A prefix that Kinesis Data Firehose evaluates and adds to failed records before writing them to S3. This prefix appears immediately following the bucket name. For information about how to specify this prefix, see Custom Prefixes for Amazon S3 Objects.
- prefix String - A prefix that Kinesis Data Firehose adds to the files that it delivers to the Amazon S3 bucket. The prefix helps you identify the files that Kinesis Data Firehose delivered.
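As a minimal Pulumi Python sketch of the S3 destination described above: the bucket and role ARNs are hypothetical, and the prefixes correspond to the Prefix and ErrorOutputPrefix properties.

```python
import pulumi_aws_native as aws_native

# Minimal sketch of a plain S3 destination. Both ARNs below are
# hypothetical placeholders for a pre-existing bucket and IAM role.
stream = aws_native.kinesisfirehose.DeliveryStream(
    "s3-stream",
    s3_destination_configuration=aws_native.kinesisfirehose.DeliveryStreamS3DestinationConfigurationArgs(
        bucket_arn="arn:aws:s3:::example-destination-bucket",
        role_arn="arn:aws:iam::123456789012:role/firehose-s3-role",
        prefix="events/",               # where delivered objects land
        error_output_prefix="errors/",  # where failed records land
        compression_format="GZIP",
    ),
)
```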
DeliveryStreamS3DestinationConfigurationCompressionFormat, DeliveryStreamS3DestinationConfigurationCompressionFormatArgs
- Uncompressed - UNCOMPRESSED
- Gzip - GZIP
- Zip - ZIP
- Snappy - Snappy
- HadoopSnappy - HADOOP_SNAPPY
- DeliveryStreamS3DestinationConfigurationCompressionFormatUncompressed - UNCOMPRESSED
- DeliveryStreamS3DestinationConfigurationCompressionFormatGzip - GZIP
- DeliveryStreamS3DestinationConfigurationCompressionFormatZip - ZIP
- DeliveryStreamS3DestinationConfigurationCompressionFormatSnappy - Snappy
- DeliveryStreamS3DestinationConfigurationCompressionFormatHadoopSnappy - HADOOP_SNAPPY
- Uncompressed - UNCOMPRESSED
- Gzip - GZIP
- Zip - ZIP
- Snappy - Snappy
- HadoopSnappy - HADOOP_SNAPPY
- Uncompressed - UNCOMPRESSED
- Gzip - GZIP
- Zip - ZIP
- Snappy - Snappy
- HadoopSnappy - HADOOP_SNAPPY
- UNCOMPRESSED - UNCOMPRESSED
- GZIP - GZIP
- ZIP - ZIP
- SNAPPY - Snappy
- HADOOP_SNAPPY - HADOOP_SNAPPY
- "UNCOMPRESSED" - UNCOMPRESSED
- "GZIP" - GZIP
- "ZIP" - ZIP
- "Snappy" - Snappy
- "HADOOP_SNAPPY" - HADOOP_SNAPPY
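The Redshift destination section above notes that the staging bucket must not use SNAPPY or ZIP, because the Amazon Redshift COPY command doesn't support them. A pre-flight check for that constraint might look like the following; the function name and error messages are hypothetical, while the enum values and the SNAPPY/ZIP restriction come from the docs above:

```python
# CloudFormation values for CompressionFormat, per the enum above.
S3_COMPRESSION_FORMATS = {"UNCOMPRESSED", "GZIP", "ZIP", "Snappy", "HADOOP_SNAPPY"}

def check_redshift_staging_compression(fmt: str) -> str:
    """Reject staging-bucket formats the Amazon Redshift COPY command can't read."""
    if fmt not in S3_COMPRESSION_FORMATS:
        raise ValueError(f"unknown CompressionFormat: {fmt!r}")
    if fmt in {"Snappy", "ZIP"}:
        raise ValueError(f"{fmt} is not supported by the Amazon Redshift COPY command")
    return fmt

print(check_redshift_staging_compression("GZIP"))  # a supported staging format
```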
DeliveryStreamSchemaConfiguration, DeliveryStreamSchemaConfigurationArgs
- CatalogId string - The ID of the AWS Glue Data Catalog. If you don't supply this, the AWS account ID is used by default.
- DatabaseName string - Specifies the name of the AWS Glue database that contains the schema for the output data. If the SchemaConfiguration request parameter is used as part of invoking the CreateDeliveryStream API, then the DatabaseName property is required and its value must be specified.
- Region string - If you don't specify an AWS Region, the default is the current Region.
- RoleArn string - The role that Firehose can use to access AWS Glue. This role must be in the same account you use for Firehose. Cross-account roles aren't allowed. If the SchemaConfiguration request parameter is used as part of invoking the CreateDeliveryStream API, then the RoleARN property is required and its value must be specified.
- TableName string - Specifies the AWS Glue table that contains the column information that constitutes your data schema. If the SchemaConfiguration request parameter is used as part of invoking the CreateDeliveryStream API, then the TableName property is required and its value must be specified.
- VersionId string - Specifies the table version for the output data schema. If you don't specify this version ID, or if you set it to LATEST, Firehose uses the most recent version. This means that any updates to the table are automatically picked up.
- CatalogId string - The ID of the AWS Glue Data Catalog. If you don't supply this, the AWS account ID is used by default.
- DatabaseName string - Specifies the name of the AWS Glue database that contains the schema for the output data. If the SchemaConfiguration request parameter is used as part of invoking the CreateDeliveryStream API, then the DatabaseName property is required and its value must be specified.
- Region string - If you don't specify an AWS Region, the default is the current Region.
- RoleArn string - The role that Firehose can use to access AWS Glue. This role must be in the same account you use for Firehose. Cross-account roles aren't allowed. If the SchemaConfiguration request parameter is used as part of invoking the CreateDeliveryStream API, then the RoleARN property is required and its value must be specified.
- TableName string - Specifies the AWS Glue table that contains the column information that constitutes your data schema. If the SchemaConfiguration request parameter is used as part of invoking the CreateDeliveryStream API, then the TableName property is required and its value must be specified.
- VersionId string - Specifies the table version for the output data schema. If you don't specify this version ID, or if you set it to LATEST, Firehose uses the most recent version. This means that any updates to the table are automatically picked up.
- catalogId String - The ID of the AWS Glue Data Catalog. If you don't supply this, the AWS account ID is used by default.
- databaseName String - Specifies the name of the AWS Glue database that contains the schema for the output data. If the SchemaConfiguration request parameter is used as part of invoking the CreateDeliveryStream API, then the DatabaseName property is required and its value must be specified.
- region String - If you don't specify an AWS Region, the default is the current Region.
- roleArn String - The role that Firehose can use to access AWS Glue. This role must be in the same account you use for Firehose. Cross-account roles aren't allowed. If the SchemaConfiguration request parameter is used as part of invoking the CreateDeliveryStream API, then the RoleARN property is required and its value must be specified.
- tableName String - Specifies the AWS Glue table that contains the column information that constitutes your data schema. If the SchemaConfiguration request parameter is used as part of invoking the CreateDeliveryStream API, then the TableName property is required and its value must be specified.
- versionId String - Specifies the table version for the output data schema. If you don't specify this version ID, or if you set it to LATEST, Firehose uses the most recent version. This means that any updates to the table are automatically picked up.
- catalogId string - The ID of the AWS Glue Data Catalog. If you don't supply this, the AWS account ID is used by default.
- databaseName string - Specifies the name of the AWS Glue database that contains the schema for the output data. If the SchemaConfiguration request parameter is used as part of invoking the CreateDeliveryStream API, then the DatabaseName property is required and its value must be specified.
- region string - If you don't specify an AWS Region, the default is the current Region.
- roleArn string - The role that Firehose can use to access AWS Glue. This role must be in the same account you use for Firehose. Cross-account roles aren't allowed. If the SchemaConfiguration request parameter is used as part of invoking the CreateDeliveryStream API, then the RoleARN property is required and its value must be specified.
- tableName string - Specifies the AWS Glue table that contains the column information that constitutes your data schema. If the SchemaConfiguration request parameter is used as part of invoking the CreateDeliveryStream API, then the TableName property is required and its value must be specified.
- versionId string - Specifies the table version for the output data schema. If you don't specify this version ID, or if you set it to LATEST, Firehose uses the most recent version. This means that any updates to the table are automatically picked up.
- catalog_id str - The ID of the AWS Glue Data Catalog. If you don't supply this, the AWS account ID is used by default.
- database_name str - Specifies the name of the AWS Glue database that contains the schema for the output data. If the SchemaConfiguration request parameter is used as part of invoking the CreateDeliveryStream API, then the DatabaseName property is required and its value must be specified.
- region str - If you don't specify an AWS Region, the default is the current Region.
- role_arn str - The role that Firehose can use to access AWS Glue. This role must be in the same account you use for Firehose. Cross-account roles aren't allowed. If the SchemaConfiguration request parameter is used as part of invoking the CreateDeliveryStream API, then the RoleARN property is required and its value must be specified.
- table_name str - Specifies the AWS Glue table that contains the column information that constitutes your data schema. If the SchemaConfiguration request parameter is used as part of invoking the CreateDeliveryStream API, then the TableName property is required and its value must be specified.
- version_id str - Specifies the table version for the output data schema. If you don't specify this version ID, or if you set it to LATEST, Firehose uses the most recent version. This means that any updates to the table are automatically picked up.
- catalogId String - The ID of the AWS Glue Data Catalog. If you don't supply this, the AWS account ID is used by default.
- databaseName String - Specifies the name of the AWS Glue database that contains the schema for the output data. If the SchemaConfiguration request parameter is used as part of invoking the CreateDeliveryStream API, then the DatabaseName property is required and its value must be specified.
- region String - If you don't specify an AWS Region, the default is the current Region.
- roleArn String - The role that Firehose can use to access AWS Glue. This role must be in the same account you use for Firehose. Cross-account roles aren't allowed. If the SchemaConfiguration request parameter is used as part of invoking the CreateDeliveryStream API, then the RoleARN property is required and its value must be specified.
- tableName String - Specifies the AWS Glue table that contains the column information that constitutes your data schema. If the SchemaConfiguration request parameter is used as part of invoking the CreateDeliveryStream API, then the TableName property is required and its value must be specified.
- versionId String - Specifies the table version for the output data schema. If you don't specify this version ID, or if you set it to LATEST, Firehose uses the most recent version. This means that any updates to the table are automatically picked up.
DeliveryStreamSecretsManagerConfiguration, DeliveryStreamSecretsManagerConfigurationArgs
- Enabled bool - Specifies whether you want to use the secrets manager feature. When set to True, the secrets manager configuration overwrites the existing secrets in the destination configuration. When it's set to False, Firehose falls back to the credentials in the destination configuration.
- RoleArn string - Specifies the role that Firehose assumes when calling the Secrets Manager API operation. When you provide the role, it overrides any destination-specific role defined in the destination configuration. If you do not provide the role, then we use the destination-specific role. This parameter is required for Splunk.
- SecretArn string - The ARN of the secret that stores your credentials. It must be in the same region as the Firehose stream and the role. The secret ARN can reside in a different account than the delivery stream and role, as Firehose supports cross-account secret access. This parameter is required when Enabled is set to True.
- Enabled bool - Specifies whether you want to use the secrets manager feature. When set to True, the secrets manager configuration overwrites the existing secrets in the destination configuration. When it's set to False, Firehose falls back to the credentials in the destination configuration.
- RoleArn string - Specifies the role that Firehose assumes when calling the Secrets Manager API operation. When you provide the role, it overrides any destination-specific role defined in the destination configuration. If you do not provide the role, then we use the destination-specific role. This parameter is required for Splunk.
- SecretArn string - The ARN of the secret that stores your credentials. It must be in the same region as the Firehose stream and the role. The secret ARN can reside in a different account than the delivery stream and role, as Firehose supports cross-account secret access. This parameter is required when Enabled is set to True.
- enabled Boolean - Specifies whether you want to use the secrets manager feature. When set to True, the secrets manager configuration overwrites the existing secrets in the destination configuration. When it's set to False, Firehose falls back to the credentials in the destination configuration.
- roleArn String - Specifies the role that Firehose assumes when calling the Secrets Manager API operation. When you provide the role, it overrides any destination-specific role defined in the destination configuration. If you do not provide the role, then we use the destination-specific role. This parameter is required for Splunk.
- secretArn String - The ARN of the secret that stores your credentials. It must be in the same region as the Firehose stream and the role. The secret ARN can reside in a different account than the delivery stream and role, as Firehose supports cross-account secret access. This parameter is required when Enabled is set to True.
- enabled boolean - Specifies whether you want to use the secrets manager feature. When set to True, the secrets manager configuration overwrites the existing secrets in the destination configuration. When it's set to False, Firehose falls back to the credentials in the destination configuration.
- roleArn string - Specifies the role that Firehose assumes when calling the Secrets Manager API operation. When you provide the role, it overrides any destination-specific role defined in the destination configuration. If you do not provide the role, then we use the destination-specific role. This parameter is required for Splunk.
- secretArn string - The ARN of the secret that stores your credentials. It must be in the same region as the Firehose stream and the role. The secret ARN can reside in a different account than the delivery stream and role, as Firehose supports cross-account secret access. This parameter is required when Enabled is set to True.
- enabled bool - Specifies whether you want to use the secrets manager feature. When set to True, the secrets manager configuration overwrites the existing secrets in the destination configuration. When it's set to False, Firehose falls back to the credentials in the destination configuration.
- role_arn str - Specifies the role that Firehose assumes when calling the Secrets Manager API operation. When you provide the role, it overrides any destination-specific role defined in the destination configuration. If you do not provide the role, then we use the destination-specific role. This parameter is required for Splunk.
- secret_arn str - The ARN of the secret that stores your credentials. It must be in the same region as the Firehose stream and the role. The secret ARN can reside in a different account than the delivery stream and role, as Firehose supports cross-account secret access. This parameter is required when Enabled is set to True.
- enabled Boolean - Specifies whether you want to use the secrets manager feature. When set to True, the secrets manager configuration overwrites the existing secrets in the destination configuration. When it's set to False, Firehose falls back to the credentials in the destination configuration.
- roleArn String - Specifies the role that Firehose assumes when calling the Secrets Manager API operation. When you provide the role, it overrides any destination-specific role defined in the destination configuration. If you do not provide the role, then we use the destination-specific role. This parameter is required for Splunk.
- secretArn String - The ARN of the secret that stores your credentials. It must be in the same region as the Firehose stream and the role. The secret ARN can reside in a different account than the delivery stream and role, as Firehose supports cross-account secret access. This parameter is required when Enabled is set to True.
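The Enabled flag's overwrite-or-fall-back behavior can be sketched as a small resolution function (plain Python, illustrative only; the key names mirror the Python properties, and the ARN in the usage example is hypothetical):

```python
def resolve_credentials(secrets_manager, destination):
    """When enabled is True, the Secrets Manager secret overrides the
    credentials embedded in the destination configuration; when it is
    False (or no secrets manager configuration is given), Firehose
    falls back to the destination configuration's own credentials."""
    if secrets_manager and secrets_manager.get("enabled"):
        return {"source": "secrets-manager",
                "secret_arn": secrets_manager["secret_arn"]}
    return {"source": "destination-configuration", **destination}

creds = resolve_credentials(
    {"enabled": True,
     "secret_arn": "arn:aws:secretsmanager:us-east-1:123456789012:secret:fh"},  # hypothetical
    {"user": "pipeline_user"},
)
print(creds["source"])  # -> secrets-manager
```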
DeliveryStreamSerializer, DeliveryStreamSerializerArgs
- OrcSerDe Pulumi.AwsNative.KinesisFirehose.Inputs.DeliveryStreamOrcSerDe - A serializer to use for converting data to the ORC format before storing it in Amazon S3. For more information, see Apache ORC.
- ParquetSerDe Pulumi.AwsNative.KinesisFirehose.Inputs.DeliveryStreamParquetSerDe - A serializer to use for converting data to the Parquet format before storing it in Amazon S3. For more information, see Apache Parquet.
- OrcSerDe DeliveryStreamOrcSerDe - A serializer to use for converting data to the ORC format before storing it in Amazon S3. For more information, see Apache ORC.
- ParquetSerDe DeliveryStreamParquetSerDe - A serializer to use for converting data to the Parquet format before storing it in Amazon S3. For more information, see Apache Parquet.
- orcSerDe DeliveryStreamOrcSerDe - A serializer to use for converting data to the ORC format before storing it in Amazon S3. For more information, see Apache ORC.
- parquetSerDe DeliveryStreamParquetSerDe - A serializer to use for converting data to the Parquet format before storing it in Amazon S3. For more information, see Apache Parquet.
- orcSerDe DeliveryStreamOrcSerDe - A serializer to use for converting data to the ORC format before storing it in Amazon S3. For more information, see Apache ORC.
- parquetSerDe DeliveryStreamParquetSerDe - A serializer to use for converting data to the Parquet format before storing it in Amazon S3. For more information, see Apache Parquet.
- orc_ser_de DeliveryStreamOrcSerDe - A serializer to use for converting data to the ORC format before storing it in Amazon S3. For more information, see Apache ORC.
- parquet_ser_de DeliveryStreamParquetSerDe - A serializer to use for converting data to the Parquet format before storing it in Amazon S3. For more information, see Apache Parquet.
- orcSerDe Property Map - A serializer to use for converting data to the ORC format before storing it in Amazon S3. For more information, see Apache ORC.
- parquetSerDe Property Map - A serializer to use for converting data to the Parquet format before storing it in Amazon S3. For more information, see Apache Parquet.
DeliveryStreamSnowflakeBufferingHints, DeliveryStreamSnowflakeBufferingHintsArgs
- IntervalInSeconds int - Buffer incoming data for the specified period of time, in seconds, before delivering it to the destination. The default value is 0.
- SizeInMbs int - Buffer incoming data to the specified size, in MBs, before delivering it to the destination. The default value is 128.
- IntervalInSeconds int - Buffer incoming data for the specified period of time, in seconds, before delivering it to the destination. The default value is 0.
- SizeInMbs int - Buffer incoming data to the specified size, in MBs, before delivering it to the destination. The default value is 128.
- intervalInSeconds Integer - Buffer incoming data for the specified period of time, in seconds, before delivering it to the destination. The default value is 0.
- sizeInMbs Integer - Buffer incoming data to the specified size, in MBs, before delivering it to the destination. The default value is 128.
- intervalInSeconds number - Buffer incoming data for the specified period of time, in seconds, before delivering it to the destination. The default value is 0.
- sizeInMbs number - Buffer incoming data to the specified size, in MBs, before delivering it to the destination. The default value is 128.
- interval_in_seconds int - Buffer incoming data for the specified period of time, in seconds, before delivering it to the destination. The default value is 0.
- size_in_mbs int - Buffer incoming data to the specified size, in MBs, before delivering it to the destination. The default value is 128.
- intervalInSeconds Number - Buffer incoming data for the specified period of time, in seconds, before delivering it to the destination. The default value is 0.
- sizeInMbs Number - Buffer incoming data to the specified size, in MBs, before delivering it to the destination. The default value is 128.
DeliveryStreamSnowflakeDestinationConfiguration, DeliveryStreamSnowflakeDestinationConfigurationArgs
- AccountUrl string - URL for accessing your Snowflake account. This URL must include your account identifier. Note that the protocol (https://) and port number are optional.
- Database string - All data in Snowflake is maintained in databases.
- RoleArn string - The Amazon Resource Name (ARN) of the Snowflake role.
- S3Configuration Pulumi.AwsNative.KinesisFirehose.Inputs.DeliveryStreamS3DestinationConfiguration
- Schema string - Each database consists of one or more schemas, which are logical groupings of database objects, such as tables and views.
- Table string - All data in Snowflake is stored in database tables, logically structured as collections of columns and rows.
- BufferingHints Pulumi.AwsNative.KinesisFirehose.Inputs.DeliveryStreamSnowflakeBufferingHints - Describes the buffering to perform before delivering data to the Snowflake destination. If you do not specify any value, Firehose uses the default values.
- CloudWatchLoggingOptions Pulumi.AwsNative.KinesisFirehose.Inputs.DeliveryStreamCloudWatchLoggingOptions
- ContentColumnName string - The name of the record content column.
- DataLoadingOption Pulumi.AwsNative.KinesisFirehose.DeliveryStreamSnowflakeDestinationConfigurationDataLoadingOption - Choose to load JSON keys mapped to table column names or choose to split the JSON payload where content is mapped to a record content column and source metadata is mapped to a record metadata column.
- KeyPassphrase string - Passphrase to decrypt the private key when the key is encrypted. For information, see Using Key Pair Authentication & Key Rotation.
- MetaDataColumnName string - The name of the record metadata column.
- PrivateKey string - The private key used to encrypt your Snowflake client. For information, see Using Key Pair Authentication & Key Rotation.
- ProcessingConfiguration Pulumi.AwsNative.KinesisFirehose.Inputs.DeliveryStreamProcessingConfiguration - Specifies configuration for Snowflake.
- RetryOptions Pulumi.AwsNative.KinesisFirehose.Inputs.DeliveryStreamSnowflakeRetryOptions - The time period where Firehose will retry sending data to the chosen HTTP endpoint.
- S3BackupMode Pulumi.AwsNative.KinesisFirehose.DeliveryStreamSnowflakeDestinationConfigurationS3BackupMode - Choose an S3 backup mode.
- SecretsManagerConfiguration Pulumi.AwsNative.KinesisFirehose.Inputs.DeliveryStreamSecretsManagerConfiguration - The configuration that defines how you access secrets for Snowflake.
- SnowflakeRoleConfiguration Pulumi.AwsNative.KinesisFirehose.Inputs.DeliveryStreamSnowflakeRoleConfiguration - Optionally configure a Snowflake role. Otherwise the default user role will be used.
- SnowflakeVpcConfiguration Pulumi.AwsNative.KinesisFirehose.Inputs.DeliveryStreamSnowflakeVpcConfiguration - The VPCE ID for Firehose to privately connect with Snowflake. The ID format is com.amazonaws.vpce.[region].vpce-svc-<[id]>. For more information, see Amazon PrivateLink & Snowflake.
- User string - User login name for the Snowflake account.
- AccountUrl string - URL for accessing your Snowflake account. This URL must include your account identifier. Note that the protocol (https://) and port number are optional.
- Database string - All data in Snowflake is maintained in databases.
- RoleArn string - The Amazon Resource Name (ARN) of the Snowflake role.
- S3Configuration DeliveryStreamS3DestinationConfiguration
- Schema string - Each database consists of one or more schemas, which are logical groupings of database objects, such as tables and views.
- Table string - All data in Snowflake is stored in database tables, logically structured as collections of columns and rows.
- BufferingHints DeliveryStreamSnowflakeBufferingHints - Describes the buffering to perform before delivering data to the Snowflake destination. If you do not specify any value, Firehose uses the default values.
- CloudWatchLoggingOptions DeliveryStreamCloudWatchLoggingOptions
- ContentColumnName string - The name of the record content column.
- DataLoadingOption DeliveryStreamSnowflakeDestinationConfigurationDataLoadingOption - Choose to load JSON keys mapped to table column names or choose to split the JSON payload where content is mapped to a record content column and source metadata is mapped to a record metadata column.
- KeyPassphrase string - Passphrase to decrypt the private key when the key is encrypted. For information, see Using Key Pair Authentication & Key Rotation.
- MetaDataColumnName string - The name of the record metadata column.
- PrivateKey string - The private key used to encrypt your Snowflake client. For information, see Using Key Pair Authentication & Key Rotation.
- ProcessingConfiguration DeliveryStreamProcessingConfiguration - Specifies configuration for Snowflake.
- RetryOptions DeliveryStreamSnowflakeRetryOptions - The time period where Firehose will retry sending data to the chosen HTTP endpoint.
- S3BackupMode DeliveryStreamSnowflakeDestinationConfigurationS3BackupMode - Choose an S3 backup mode.
- SecretsManagerConfiguration DeliveryStreamSecretsManagerConfiguration - The configuration that defines how you access secrets for Snowflake.
- SnowflakeRoleConfiguration DeliveryStreamSnowflakeRoleConfiguration - Optionally configure a Snowflake role. Otherwise the default user role will be used.
- SnowflakeVpcConfiguration DeliveryStreamSnowflakeVpcConfiguration - The VPCE ID for Firehose to privately connect with Snowflake. The ID format is com.amazonaws.vpce.[region].vpce-svc-<[id]>. For more information, see Amazon PrivateLink & Snowflake.
- User string - User login name for the Snowflake account.
- accountUrl String - URL for accessing your Snowflake account. This URL must include your account identifier. Note that the protocol (https://) and port number are optional.
- database String - All data in Snowflake is maintained in databases.
- roleArn String - The Amazon Resource Name (ARN) of the Snowflake role.
- s3Configuration DeliveryStreamS3DestinationConfiguration
- schema String - Each database consists of one or more schemas, which are logical groupings of database objects, such as tables and views.
- table String - All data in Snowflake is stored in database tables, logically structured as collections of columns and rows.
- bufferingHints DeliveryStreamSnowflakeBufferingHints - Describes the buffering to perform before delivering data to the Snowflake destination. If you do not specify any value, Firehose uses the default values.
- cloudWatchLoggingOptions DeliveryStreamCloudWatchLoggingOptions
- contentColumnName String - The name of the record content column.
- dataLoadingOption DeliveryStreamSnowflakeDestinationConfigurationDataLoadingOption - Choose to load JSON keys mapped to table column names or choose to split the JSON payload where content is mapped to a record content column and source metadata is mapped to a record metadata column.
- keyPassphrase String - Passphrase to decrypt the private key when the key is encrypted. For information, see Using Key Pair Authentication & Key Rotation.
- metaDataColumnName String - The name of the record metadata column.
- privateKey String - The private key used to encrypt your Snowflake client. For information, see Using Key Pair Authentication & Key Rotation.
- processingConfiguration DeliveryStreamProcessingConfiguration - Specifies configuration for Snowflake.
- retryOptions DeliveryStreamSnowflakeRetryOptions - The time period where Firehose will retry sending data to the chosen HTTP endpoint.
- s3BackupMode DeliveryStreamSnowflakeDestinationConfigurationS3BackupMode - Choose an S3 backup mode.
- secretsManagerConfiguration DeliveryStreamSecretsManagerConfiguration - The configuration that defines how you access secrets for Snowflake.
- snowflakeRoleConfiguration DeliveryStreamSnowflakeRoleConfiguration - Optionally configure a Snowflake role. Otherwise the default user role will be used.
- snowflakeVpcConfiguration DeliveryStreamSnowflakeVpcConfiguration - The VPCE ID for Firehose to privately connect with Snowflake. The ID format is com.amazonaws.vpce.[region].vpce-svc-<[id]>. For more information, see Amazon PrivateLink & Snowflake.
- user String - User login name for the Snowflake account.
- accountUrl string - URL for accessing your Snowflake account. This URL must include your account identifier. Note that the protocol (https://) and port number are optional.
- database string - All data in Snowflake is maintained in databases.
- roleArn string - The Amazon Resource Name (ARN) of the Snowflake role.
- s3Configuration DeliveryStreamS3DestinationConfiguration
- schema string - Each database consists of one or more schemas, which are logical groupings of database objects, such as tables and views.
- table string - All data in Snowflake is stored in database tables, logically structured as collections of columns and rows.
- bufferingHints DeliveryStreamSnowflakeBufferingHints - Describes the buffering to perform before delivering data to the Snowflake destination. If you do not specify any value, Firehose uses the default values.
- cloudWatchLoggingOptions DeliveryStreamCloudWatchLoggingOptions
- contentColumnName string - The name of the record content column.
- dataLoadingOption DeliveryStreamSnowflakeDestinationConfigurationDataLoadingOption - Choose to load JSON keys mapped to table column names or choose to split the JSON payload where content is mapped to a record content column and source metadata is mapped to a record metadata column.
- keyPassphrase string - Passphrase to decrypt the private key when the key is encrypted. For information, see Using Key Pair Authentication & Key Rotation.
- metaDataColumnName string - The name of the record metadata column.
- privateKey string - The private key used to encrypt your Snowflake client. For information, see Using Key Pair Authentication & Key Rotation.
- processingConfiguration DeliveryStreamProcessingConfiguration - Specifies configuration for Snowflake.
- retryOptions DeliveryStreamSnowflakeRetryOptions - The time period where Firehose will retry sending data to the chosen HTTP endpoint.
- s3BackupMode DeliveryStreamSnowflakeDestinationConfigurationS3BackupMode - Choose an S3 backup mode.
- secretsManagerConfiguration DeliveryStreamSecretsManagerConfiguration - The configuration that defines how you access secrets for Snowflake.
- snowflakeRoleConfiguration DeliveryStreamSnowflakeRoleConfiguration - Optionally configure a Snowflake role. Otherwise the default user role will be used.
- snowflakeVpcConfiguration DeliveryStreamSnowflakeVpcConfiguration - The VPCE ID for Firehose to privately connect with Snowflake. The ID format is com.amazonaws.vpce.[region].vpce-svc-<[id]>. For more information, see Amazon PrivateLink & Snowflake.
- user string - User login name for the Snowflake account.
- account_url str - URL for accessing your Snowflake account. This URL must include your account identifier. Note that the protocol (https://) and port number are optional.
- database str - All data in Snowflake is maintained in databases.
- role_arn str - The Amazon Resource Name (ARN) of the Snowflake role.
- s3_configuration DeliveryStreamS3DestinationConfiguration
- schema str - Each database consists of one or more schemas, which are logical groupings of database objects, such as tables and views.
- table str - All data in Snowflake is stored in database tables, logically structured as collections of columns and rows.
- buffering_hints DeliveryStreamSnowflakeBufferingHints - Describes the buffering to perform before delivering data to the Snowflake destination. If you do not specify any value, Firehose uses the default values.
- cloud_watch_logging_options DeliveryStreamCloudWatchLoggingOptions
- content_column_name str - The name of the record content column.
- data_loading_option DeliveryStreamSnowflakeDestinationConfigurationDataLoadingOption - Choose to load JSON keys mapped to table column names or choose to split the JSON payload where content is mapped to a record content column and source metadata is mapped to a record metadata column.
- key_passphrase str - Passphrase to decrypt the private key when the key is encrypted. For information, see Using Key Pair Authentication & Key Rotation.
- meta_data_column_name str - The name of the record metadata column.
- private_key str - The private key used to encrypt your Snowflake client. For information, see Using Key Pair Authentication & Key Rotation.
- processing_configuration DeliveryStreamProcessingConfiguration - Specifies configuration for Snowflake.
- retry_options DeliveryStreamSnowflakeRetryOptions - The time period where Firehose will retry sending data to the chosen HTTP endpoint.
- s3_backup_mode DeliveryStreamSnowflakeDestinationConfigurationS3BackupMode - Choose an S3 backup mode.
- secrets_manager_configuration DeliveryStreamSecretsManagerConfiguration - The configuration that defines how you access secrets for Snowflake.
- snowflake_role_configuration DeliveryStreamSnowflakeRoleConfiguration - Optionally configure a Snowflake role. Otherwise the default user role will be used.
- snowflake_vpc_configuration DeliveryStreamSnowflakeVpcConfiguration - The VPCE ID for Firehose to privately connect with Snowflake. The ID format is com.amazonaws.vpce.[region].vpce-svc-<[id]>. For more information, see Amazon PrivateLink & Snowflake.
- user str - User login name for the Snowflake account.
- account
Url String - URL for accessing your Snowflake account. This URL must include your account identifier . Note that the protocol (https://) and port number are optional.
- database String
- All data in Snowflake is maintained in databases.
- role
Arn String - The Amazon Resource Name (ARN) of the Snowflake role
- s3Configuration Property Map
- schema String
- Each database consists of one or more schemas, which are logical groupings of database objects, such as tables and views
- table String
- All data in Snowflake is stored in database tables, logically structured as collections of columns and rows.
- buffering
Hints Property Map - Describes the buffering to perform before delivering data to the Snowflake destination. If you do not specify any value, Firehose uses the default values.
- cloud
Watch Property MapLogging Options - content
Column StringName - The name of the record content column
- data
Loading "JSON_MAPPING" | "VARIANT_CONTENT_MAPPING" | "VARIANT_CONTENT_AND_METADATA_MAPPING"Option - Choose to load JSON keys mapped to table column names or choose to split the JSON payload where content is mapped to a record content column and source metadata is mapped to a record metadata column.
- key
Passphrase String - Passphrase to decrypt the private key when the key is encrypted. For information, see Using Key Pair Authentication & Key Rotation .
- meta
Data StringColumn Name - The name of the record metadata column
- private
Key String - The private key used to encrypt your Snowflake client. For information, see Using Key Pair Authentication & Key Rotation .
- processing
Configuration Property Map - Specifies configuration for Snowflake.
- retry
Options Property Map - The time period where Firehose will retry sending data to the chosen HTTP endpoint.
- s3Backup
Mode "FailedData Only" | "All Data" - Choose an S3 backup mode
- secrets
Manager Property MapConfiguration - The configuration that defines how you access secrets for Snowflake.
- snowflake
Role Property MapConfiguration - Optionally configure a Snowflake role. Otherwise the default user role will be used.
- snowflake
Vpc Property MapConfiguration - The VPCE ID for Firehose to privately connect with Snowflake. The ID format is com.amazonaws.vpce.[region].vpce-svc-<[id]>. For more information, see Amazon PrivateLink & Snowflake
- user String
- User login name for the Snowflake account.
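Putting the fields above together, a minimal Snowflake destination configuration can be sketched as a plain mapping; the Pulumi SDKs accept the same keys as typed args. All identifiers below (account URL, ARNs, names) are hypothetical placeholders, not real resources.

```python
# Hypothetical sketch of a Snowflake destination configuration for a
# Firehose delivery stream; every value below is a placeholder.
snowflake_destination = {
    "account_url": "https://example-account.snowflakecomputing.com",  # must include the account identifier
    "database": "FIREHOSE_DB",
    "schema": "PUBLIC",
    "table": "EVENTS",
    "user": "FIREHOSE_USER",
    # Split the JSON payload into a content column plus a metadata column.
    "data_loading_option": "VARIANT_CONTENT_AND_METADATA_MAPPING",
    "content_column_name": "RECORD_CONTENT",
    "meta_data_column_name": "RECORD_METADATA",
    "s3_backup_mode": "FailedDataOnly",  # or "AllData"
    "s3_configuration": {
        "bucket_arn": "arn:aws:s3:::example-backup-bucket",
        "role_arn": "arn:aws:iam::111122223333:role/firehose-s3-role",
    },
}

# The content/metadata column names only apply when the payload is split.
if snowflake_destination["data_loading_option"] != "JSON_MAPPING":
    assert "content_column_name" in snowflake_destination
```

This is a sketch of the argument shape only; in a real program you would pass it as `snowflake_destination_configuration` when constructing the `DeliveryStream` resource.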
DeliveryStreamSnowflakeDestinationConfigurationDataLoadingOption, DeliveryStreamSnowflakeDestinationConfigurationDataLoadingOptionArgs
- JsonMapping - JSON_MAPPING
- VariantContentMapping - VARIANT_CONTENT_MAPPING
- VariantContentAndMetadataMapping - VARIANT_CONTENT_AND_METADATA_MAPPING
- DeliveryStreamSnowflakeDestinationConfigurationDataLoadingOptionJsonMapping - JSON_MAPPING
- DeliveryStreamSnowflakeDestinationConfigurationDataLoadingOptionVariantContentMapping - VARIANT_CONTENT_MAPPING
- DeliveryStreamSnowflakeDestinationConfigurationDataLoadingOptionVariantContentAndMetadataMapping - VARIANT_CONTENT_AND_METADATA_MAPPING
- JsonMapping - JSON_MAPPING
- VariantContentMapping - VARIANT_CONTENT_MAPPING
- VariantContentAndMetadataMapping - VARIANT_CONTENT_AND_METADATA_MAPPING
- JsonMapping - JSON_MAPPING
- VariantContentMapping - VARIANT_CONTENT_MAPPING
- VariantContentAndMetadataMapping - VARIANT_CONTENT_AND_METADATA_MAPPING
- JSON_MAPPING - JSON_MAPPING
- VARIANT_CONTENT_MAPPING - VARIANT_CONTENT_MAPPING
- VARIANT_CONTENT_AND_METADATA_MAPPING - VARIANT_CONTENT_AND_METADATA_MAPPING
- "JSON_MAPPING" - JSON_MAPPING
- "VARIANT_CONTENT_MAPPING" - VARIANT_CONTENT_MAPPING
- "VARIANT_CONTENT_AND_METADATA_MAPPING" - VARIANT_CONTENT_AND_METADATA_MAPPING
DeliveryStreamSnowflakeDestinationConfigurationS3BackupMode, DeliveryStreamSnowflakeDestinationConfigurationS3BackupModeArgs
- FailedDataOnly - FailedDataOnly
- AllData - AllData
- DeliveryStreamSnowflakeDestinationConfigurationS3BackupModeFailedDataOnly - FailedDataOnly
- DeliveryStreamSnowflakeDestinationConfigurationS3BackupModeAllData - AllData
- FailedDataOnly - FailedDataOnly
- AllData - AllData
- FailedDataOnly - FailedDataOnly
- AllData - AllData
- FAILED_DATA_ONLY - FailedDataOnly
- ALL_DATA - AllData
- "FailedDataOnly" - FailedDataOnly
- "AllData" - AllData
DeliveryStreamSnowflakeRetryOptions, DeliveryStreamSnowflakeRetryOptionsArgs
- DurationInSeconds int - The time period where Firehose will retry sending data to the chosen HTTP endpoint.
- DurationInSeconds int - The time period where Firehose will retry sending data to the chosen HTTP endpoint.
- durationInSeconds Integer - The time period where Firehose will retry sending data to the chosen HTTP endpoint.
- durationInSeconds number - The time period where Firehose will retry sending data to the chosen HTTP endpoint.
- duration_in_seconds int - The time period where Firehose will retry sending data to the chosen HTTP endpoint.
- durationInSeconds Number - The time period where Firehose will retry sending data to the chosen HTTP endpoint.
DeliveryStreamSnowflakeRoleConfiguration, DeliveryStreamSnowflakeRoleConfigurationArgs
- Enabled bool - Enable Snowflake role
- SnowflakeRole string - The Snowflake role you wish to configure
- Enabled bool - Enable Snowflake role
- SnowflakeRole string - The Snowflake role you wish to configure
- enabled Boolean - Enable Snowflake role
- snowflakeRole String - The Snowflake role you wish to configure
- enabled boolean - Enable Snowflake role
- snowflakeRole string - The Snowflake role you wish to configure
- enabled bool - Enable Snowflake role
- snowflake_role str - The Snowflake role you wish to configure
- enabled Boolean - Enable Snowflake role
- snowflakeRole String - The Snowflake role you wish to configure
DeliveryStreamSnowflakeVpcConfiguration, DeliveryStreamSnowflakeVpcConfigurationArgs
- PrivateLinkVpceId string - The VPCE ID for Firehose to privately connect with Snowflake. The ID format is com.amazonaws.vpce.[region].vpce-svc-<[id]>. For more information, see Amazon PrivateLink & Snowflake
- PrivateLinkVpceId string - The VPCE ID for Firehose to privately connect with Snowflake. The ID format is com.amazonaws.vpce.[region].vpce-svc-<[id]>. For more information, see Amazon PrivateLink & Snowflake
- privateLinkVpceId String - The VPCE ID for Firehose to privately connect with Snowflake. The ID format is com.amazonaws.vpce.[region].vpce-svc-<[id]>. For more information, see Amazon PrivateLink & Snowflake
- privateLinkVpceId string - The VPCE ID for Firehose to privately connect with Snowflake. The ID format is com.amazonaws.vpce.[region].vpce-svc-<[id]>. For more information, see Amazon PrivateLink & Snowflake
- private_link_vpce_id str - The VPCE ID for Firehose to privately connect with Snowflake. The ID format is com.amazonaws.vpce.[region].vpce-svc-<[id]>. For more information, see Amazon PrivateLink & Snowflake
- privateLinkVpceId String - The VPCE ID for Firehose to privately connect with Snowflake. The ID format is com.amazonaws.vpce.[region].vpce-svc-<[id]>. For more information, see Amazon PrivateLink & Snowflake
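The VPCE ID format described above (com.amazonaws.vpce.[region].vpce-svc-<[id]>) can be sanity-checked before deployment. The regex below is a rough sketch; the exact character sets for the region and service ID are assumptions, not part of the documented contract.

```python
import re

# Rough pattern for com.amazonaws.vpce.[region].vpce-svc-<id>.
# The region and service-id alphabets here are assumptions.
VPCE_ID_PATTERN = re.compile(
    r"^com\.amazonaws\.vpce\.[a-z0-9-]+\.vpce-svc-[0-9a-f]+$"
)

def looks_like_vpce_id(value: str) -> bool:
    """Return True if value matches the documented VPCE ID shape."""
    return VPCE_ID_PATTERN.match(value) is not None
```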
DeliveryStreamSplunkBufferingHints, DeliveryStreamSplunkBufferingHintsArgs
- IntervalInSeconds int - Buffer incoming data for the specified period of time, in seconds, before delivering it to the destination. The default value is 60 (1 minute).
- SizeInMbs int - Buffer incoming data to the specified size, in MBs, before delivering it to the destination. The default value is 5.
- IntervalInSeconds int - Buffer incoming data for the specified period of time, in seconds, before delivering it to the destination. The default value is 60 (1 minute).
- SizeInMbs int - Buffer incoming data to the specified size, in MBs, before delivering it to the destination. The default value is 5.
- intervalInSeconds Integer - Buffer incoming data for the specified period of time, in seconds, before delivering it to the destination. The default value is 60 (1 minute).
- sizeInMbs Integer - Buffer incoming data to the specified size, in MBs, before delivering it to the destination. The default value is 5.
- intervalInSeconds number - Buffer incoming data for the specified period of time, in seconds, before delivering it to the destination. The default value is 60 (1 minute).
- sizeInMbs number - Buffer incoming data to the specified size, in MBs, before delivering it to the destination. The default value is 5.
- interval_in_seconds int - Buffer incoming data for the specified period of time, in seconds, before delivering it to the destination. The default value is 60 (1 minute).
- size_in_mbs int - Buffer incoming data to the specified size, in MBs, before delivering it to the destination. The default value is 5.
- intervalInSeconds Number - Buffer incoming data for the specified period of time, in seconds, before delivering it to the destination. The default value is 60 (1 minute).
- sizeInMbs Number - Buffer incoming data to the specified size, in MBs, before delivering it to the destination. The default value is 5.
DeliveryStreamSplunkDestinationConfiguration, DeliveryStreamSplunkDestinationConfigurationArgs
- HecEndpoint string - The HTTP Event Collector (HEC) endpoint to which Firehose sends your data.
- HecEndpointType Pulumi.AwsNative.KinesisFirehose.DeliveryStreamSplunkDestinationConfigurationHecEndpointType - This type can be either Raw or Event.
- S3Configuration Pulumi.AwsNative.KinesisFirehose.Inputs.DeliveryStreamS3DestinationConfiguration - The configuration for the backup Amazon S3 location.
- BufferingHints Pulumi.AwsNative.KinesisFirehose.Inputs.DeliveryStreamSplunkBufferingHints - The buffering options. If no value is specified, the default values for Splunk are used.
- CloudWatchLoggingOptions Pulumi.AwsNative.KinesisFirehose.Inputs.DeliveryStreamCloudWatchLoggingOptions - The Amazon CloudWatch logging options for your delivery stream.
- HecAcknowledgmentTimeoutInSeconds int - The amount of time that Firehose waits to receive an acknowledgment from Splunk after it sends it data. At the end of the timeout period, Firehose either tries to send the data again or considers it an error, based on your retry settings.
- HecToken string - This is a GUID that you obtain from your Splunk cluster when you create a new HEC endpoint.
- ProcessingConfiguration Pulumi.AwsNative.KinesisFirehose.Inputs.DeliveryStreamProcessingConfiguration - The data processing configuration.
- RetryOptions Pulumi.AwsNative.KinesisFirehose.Inputs.DeliveryStreamSplunkRetryOptions - The retry behavior in case Firehose is unable to deliver data to Splunk, or if it doesn't receive an acknowledgment of receipt from Splunk.
- S3BackupMode string - Defines how documents should be delivered to Amazon S3. When set to FailedEventsOnly, Firehose writes any data that could not be indexed to the configured Amazon S3 destination. When set to AllEvents, Firehose delivers all incoming records to Amazon S3, and also writes failed documents to Amazon S3. The default value is FailedEventsOnly. You can update this backup mode from FailedEventsOnly to AllEvents. You can't update it from AllEvents to FailedEventsOnly.
- SecretsManagerConfiguration Pulumi.AwsNative.KinesisFirehose.Inputs.DeliveryStreamSecretsManagerConfiguration - The configuration that defines how you access secrets for Splunk.
- HecEndpoint string - The HTTP Event Collector (HEC) endpoint to which Firehose sends your data.
- HecEndpointType DeliveryStreamSplunkDestinationConfigurationHecEndpointType - This type can be either Raw or Event.
- S3Configuration DeliveryStreamS3DestinationConfiguration - The configuration for the backup Amazon S3 location.
- BufferingHints DeliveryStreamSplunkBufferingHints - The buffering options. If no value is specified, the default values for Splunk are used.
- CloudWatchLoggingOptions DeliveryStreamCloudWatchLoggingOptions - The Amazon CloudWatch logging options for your delivery stream.
- HecAcknowledgmentTimeoutInSeconds int - The amount of time that Firehose waits to receive an acknowledgment from Splunk after it sends it data. At the end of the timeout period, Firehose either tries to send the data again or considers it an error, based on your retry settings.
- HecToken string - This is a GUID that you obtain from your Splunk cluster when you create a new HEC endpoint.
- ProcessingConfiguration DeliveryStreamProcessingConfiguration - The data processing configuration.
- RetryOptions DeliveryStreamSplunkRetryOptions - The retry behavior in case Firehose is unable to deliver data to Splunk, or if it doesn't receive an acknowledgment of receipt from Splunk.
- S3BackupMode string - Defines how documents should be delivered to Amazon S3. When set to FailedEventsOnly, Firehose writes any data that could not be indexed to the configured Amazon S3 destination. When set to AllEvents, Firehose delivers all incoming records to Amazon S3, and also writes failed documents to Amazon S3. The default value is FailedEventsOnly. You can update this backup mode from FailedEventsOnly to AllEvents. You can't update it from AllEvents to FailedEventsOnly.
- SecretsManagerConfiguration DeliveryStreamSecretsManagerConfiguration - The configuration that defines how you access secrets for Splunk.
- hecEndpoint String - The HTTP Event Collector (HEC) endpoint to which Firehose sends your data.
- hecEndpointType DeliveryStreamSplunkDestinationConfigurationHecEndpointType - This type can be either Raw or Event.
- s3Configuration DeliveryStreamS3DestinationConfiguration - The configuration for the backup Amazon S3 location.
- bufferingHints DeliveryStreamSplunkBufferingHints - The buffering options. If no value is specified, the default values for Splunk are used.
- cloudWatchLoggingOptions DeliveryStreamCloudWatchLoggingOptions - The Amazon CloudWatch logging options for your delivery stream.
- hecAcknowledgmentTimeoutInSeconds Integer - The amount of time that Firehose waits to receive an acknowledgment from Splunk after it sends it data. At the end of the timeout period, Firehose either tries to send the data again or considers it an error, based on your retry settings.
- hecToken String - This is a GUID that you obtain from your Splunk cluster when you create a new HEC endpoint.
- processingConfiguration DeliveryStreamProcessingConfiguration - The data processing configuration.
- retryOptions DeliveryStreamSplunkRetryOptions - The retry behavior in case Firehose is unable to deliver data to Splunk, or if it doesn't receive an acknowledgment of receipt from Splunk.
- s3BackupMode String - Defines how documents should be delivered to Amazon S3. When set to FailedEventsOnly, Firehose writes any data that could not be indexed to the configured Amazon S3 destination. When set to AllEvents, Firehose delivers all incoming records to Amazon S3, and also writes failed documents to Amazon S3. The default value is FailedEventsOnly. You can update this backup mode from FailedEventsOnly to AllEvents. You can't update it from AllEvents to FailedEventsOnly.
- secretsManagerConfiguration DeliveryStreamSecretsManagerConfiguration - The configuration that defines how you access secrets for Splunk.
- hecEndpoint string - The HTTP Event Collector (HEC) endpoint to which Firehose sends your data.
- hecEndpointType DeliveryStreamSplunkDestinationConfigurationHecEndpointType - This type can be either Raw or Event.
- s3Configuration DeliveryStreamS3DestinationConfiguration - The configuration for the backup Amazon S3 location.
- bufferingHints DeliveryStreamSplunkBufferingHints - The buffering options. If no value is specified, the default values for Splunk are used.
- cloudWatchLoggingOptions DeliveryStreamCloudWatchLoggingOptions - The Amazon CloudWatch logging options for your delivery stream.
- hecAcknowledgmentTimeoutInSeconds number - The amount of time that Firehose waits to receive an acknowledgment from Splunk after it sends it data. At the end of the timeout period, Firehose either tries to send the data again or considers it an error, based on your retry settings.
- hecToken string - This is a GUID that you obtain from your Splunk cluster when you create a new HEC endpoint.
- processingConfiguration DeliveryStreamProcessingConfiguration - The data processing configuration.
- retryOptions DeliveryStreamSplunkRetryOptions - The retry behavior in case Firehose is unable to deliver data to Splunk, or if it doesn't receive an acknowledgment of receipt from Splunk.
- s3BackupMode string - Defines how documents should be delivered to Amazon S3. When set to FailedEventsOnly, Firehose writes any data that could not be indexed to the configured Amazon S3 destination. When set to AllEvents, Firehose delivers all incoming records to Amazon S3, and also writes failed documents to Amazon S3. The default value is FailedEventsOnly. You can update this backup mode from FailedEventsOnly to AllEvents. You can't update it from AllEvents to FailedEventsOnly.
- secretsManagerConfiguration DeliveryStreamSecretsManagerConfiguration - The configuration that defines how you access secrets for Splunk.
- hec_endpoint str - The HTTP Event Collector (HEC) endpoint to which Firehose sends your data.
- hec_endpoint_type DeliveryStreamSplunkDestinationConfigurationHecEndpointType - This type can be either Raw or Event.
- s3_configuration DeliveryStreamS3DestinationConfiguration - The configuration for the backup Amazon S3 location.
- buffering_hints DeliveryStreamSplunkBufferingHints - The buffering options. If no value is specified, the default values for Splunk are used.
- cloud_watch_logging_options DeliveryStreamCloudWatchLoggingOptions - The Amazon CloudWatch logging options for your delivery stream.
- hec_acknowledgment_timeout_in_seconds int - The amount of time that Firehose waits to receive an acknowledgment from Splunk after it sends it data. At the end of the timeout period, Firehose either tries to send the data again or considers it an error, based on your retry settings.
- hec_token str - This is a GUID that you obtain from your Splunk cluster when you create a new HEC endpoint.
- processing_configuration DeliveryStreamProcessingConfiguration - The data processing configuration.
- retry_options DeliveryStreamSplunkRetryOptions - The retry behavior in case Firehose is unable to deliver data to Splunk, or if it doesn't receive an acknowledgment of receipt from Splunk.
- s3_backup_mode str - Defines how documents should be delivered to Amazon S3. When set to FailedEventsOnly, Firehose writes any data that could not be indexed to the configured Amazon S3 destination. When set to AllEvents, Firehose delivers all incoming records to Amazon S3, and also writes failed documents to Amazon S3. The default value is FailedEventsOnly. You can update this backup mode from FailedEventsOnly to AllEvents. You can't update it from AllEvents to FailedEventsOnly.
- secrets_manager_configuration DeliveryStreamSecretsManagerConfiguration - The configuration that defines how you access secrets for Splunk.
- hecEndpoint String - The HTTP Event Collector (HEC) endpoint to which Firehose sends your data.
- hecEndpointType "Raw" | "Event" - This type can be either Raw or Event.
- s3Configuration Property Map - The configuration for the backup Amazon S3 location.
- bufferingHints Property Map - The buffering options. If no value is specified, the default values for Splunk are used.
- cloudWatchLoggingOptions Property Map - The Amazon CloudWatch logging options for your delivery stream.
- hecAcknowledgmentTimeoutInSeconds Number - The amount of time that Firehose waits to receive an acknowledgment from Splunk after it sends it data. At the end of the timeout period, Firehose either tries to send the data again or considers it an error, based on your retry settings.
- hecToken String - This is a GUID that you obtain from your Splunk cluster when you create a new HEC endpoint.
- processingConfiguration Property Map - The data processing configuration.
- retryOptions Property Map - The retry behavior in case Firehose is unable to deliver data to Splunk, or if it doesn't receive an acknowledgment of receipt from Splunk.
- s3BackupMode String - Defines how documents should be delivered to Amazon S3. When set to FailedEventsOnly, Firehose writes any data that could not be indexed to the configured Amazon S3 destination. When set to AllEvents, Firehose delivers all incoming records to Amazon S3, and also writes failed documents to Amazon S3. The default value is FailedEventsOnly. You can update this backup mode from FailedEventsOnly to AllEvents. You can't update it from AllEvents to FailedEventsOnly.
- secretsManagerConfiguration Property Map - The configuration that defines how you access secrets for Splunk.
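The update constraint on s3BackupMode described above is one-way: FailedEventsOnly may become AllEvents, but not the reverse. That rule can be expressed as a simple pre-deployment check; the helper below is illustrative only, not part of any SDK.

```python
def splunk_backup_mode_update_allowed(current: str, desired: str) -> bool:
    """Documented rule for the Splunk destination's s3BackupMode:
    FailedEventsOnly -> AllEvents is allowed, AllEvents ->
    FailedEventsOnly is not. Leaving the value unchanged is fine."""
    if current == desired:
        return True
    return current == "FailedEventsOnly" and desired == "AllEvents"
```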
DeliveryStreamSplunkDestinationConfigurationHecEndpointType, DeliveryStreamSplunkDestinationConfigurationHecEndpointTypeArgs
- Raw - Raw
- Event - Event
- DeliveryStreamSplunkDestinationConfigurationHecEndpointTypeRaw - Raw
- DeliveryStreamSplunkDestinationConfigurationHecEndpointTypeEvent - Event
- Raw - Raw
- Event - Event
- Raw - Raw
- Event - Event
- RAW - Raw
- EVENT - Event
- "Raw" - Raw
- "Event" - Event
DeliveryStreamSplunkRetryOptions, DeliveryStreamSplunkRetryOptionsArgs
- DurationInSeconds int - The total amount of time that Firehose spends on retries. This duration starts after the initial attempt to send data to Splunk fails. It doesn't include the periods during which Firehose waits for acknowledgment from Splunk after each attempt.
- DurationInSeconds int - The total amount of time that Firehose spends on retries. This duration starts after the initial attempt to send data to Splunk fails. It doesn't include the periods during which Firehose waits for acknowledgment from Splunk after each attempt.
- durationInSeconds Integer - The total amount of time that Firehose spends on retries. This duration starts after the initial attempt to send data to Splunk fails. It doesn't include the periods during which Firehose waits for acknowledgment from Splunk after each attempt.
- durationInSeconds number - The total amount of time that Firehose spends on retries. This duration starts after the initial attempt to send data to Splunk fails. It doesn't include the periods during which Firehose waits for acknowledgment from Splunk after each attempt.
- duration_in_seconds int - The total amount of time that Firehose spends on retries. This duration starts after the initial attempt to send data to Splunk fails. It doesn't include the periods during which Firehose waits for acknowledgment from Splunk after each attempt.
- durationInSeconds Number - The total amount of time that Firehose spends on retries. This duration starts after the initial attempt to send data to Splunk fails. It doesn't include the periods during which Firehose waits for acknowledgment from Splunk after each attempt.
DeliveryStreamType, DeliveryStreamTypeArgs
- DirectPut - DirectPut
- KinesisStreamAsSource - KinesisStreamAsSource
- MskasSource - MSKAsSource
- DeliveryStreamTypeDirectPut - DirectPut
- DeliveryStreamTypeKinesisStreamAsSource - KinesisStreamAsSource
- DeliveryStreamTypeMskasSource - MSKAsSource
- DirectPut - DirectPut
- KinesisStreamAsSource - KinesisStreamAsSource
- MskasSource - MSKAsSource
- DirectPut - DirectPut
- KinesisStreamAsSource - KinesisStreamAsSource
- MskasSource - MSKAsSource
- DIRECT_PUT - DirectPut
- KINESIS_STREAM_AS_SOURCE - KinesisStreamAsSource
- MSKAS_SOURCE - MSKAsSource
- "DirectPut" - DirectPut
- "KinesisStreamAsSource" - KinesisStreamAsSource
- "MSKAsSource" - MSKAsSource
DeliveryStreamVpcConfiguration, DeliveryStreamVpcConfigurationArgs
- RoleArn string - The ARN of the IAM role that you want the delivery stream to use to create endpoints in the destination VPC. You can use your existing Kinesis Data Firehose delivery role or you can specify a new role. In either case, make sure that the role trusts the Kinesis Data Firehose service principal and that it grants the following permissions:
ec2:DescribeVpcs
ec2:DescribeVpcAttribute
ec2:DescribeSubnets
ec2:DescribeSecurityGroups
ec2:DescribeNetworkInterfaces
ec2:CreateNetworkInterface
ec2:CreateNetworkInterfacePermission
ec2:DeleteNetworkInterface
If you revoke these permissions after you create the delivery stream, Kinesis Data Firehose can't scale out by creating more ENIs when necessary. You might therefore see a degradation in performance.
- SecurityGroupIds List&lt;string&gt; - The IDs of the security groups that you want Kinesis Data Firehose to use when it creates ENIs in the VPC of the Amazon ES destination. You can use the same security group that the Amazon ES domain uses or different ones. If you specify different security groups here, ensure that they allow outbound HTTPS traffic to the Amazon ES domain's security group. Also ensure that the Amazon ES domain's security group allows HTTPS traffic from the security groups specified here. If you use the same security group for both your delivery stream and the Amazon ES domain, make sure the security group inbound rule allows HTTPS traffic.
- SubnetIds List&lt;string&gt; - The IDs of the subnets that Kinesis Data Firehose uses to create ENIs in the VPC of the Amazon ES destination. Make sure that the routing tables and inbound and outbound rules allow traffic to flow from the subnets whose IDs are specified here to the subnets that have the destination Amazon ES endpoints. Kinesis Data Firehose creates at least one ENI in each of the subnets that are specified here. Do not delete or modify these ENIs.
The number of ENIs that Kinesis Data Firehose creates in the subnets specified here scales up and down automatically based on throughput. To enable Kinesis Data Firehose to scale up the number of ENIs to match throughput, ensure that you have sufficient quota. To help you calculate the quota you need, assume that Kinesis Data Firehose can create up to three ENIs for this delivery stream for each of the subnets specified here.
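A role satisfying the EC2 permissions listed above might carry a policy document like the following sketch. The broad `"Resource": "*"` is a simplification for illustration; in practice you would scope resources more tightly.

```python
import json

# Illustrative IAM policy covering the EC2 permissions Firehose needs
# to create and manage ENIs in the destination VPC. Placeholder policy,
# not a hardened production document.
vpc_delivery_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "ec2:DescribeVpcs",
                "ec2:DescribeVpcAttribute",
                "ec2:DescribeSubnets",
                "ec2:DescribeSecurityGroups",
                "ec2:DescribeNetworkInterfaces",
                "ec2:CreateNetworkInterface",
                "ec2:CreateNetworkInterfacePermission",
                "ec2:DeleteNetworkInterface",
            ],
            "Resource": "*",  # simplification; scope down for real use
        }
    ],
}

policy_json = json.dumps(vpc_delivery_policy, indent=2)
```

Remember that the role must also trust the Kinesis Data Firehose service principal, which is configured in the role's trust policy rather than in this permissions policy.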
- RoleArn string - The ARN of the IAM role that you want the delivery stream to use to create endpoints in the destination VPC. You can use your existing Kinesis Data Firehose delivery role or you can specify a new role. In either case, make sure that the role trusts the Kinesis Data Firehose service principal and that it grants the following permissions:
ec2:DescribeVpcs
ec2:DescribeVpcAttribute
ec2:DescribeSubnets
ec2:DescribeSecurityGroups
ec2:DescribeNetworkInterfaces
ec2:CreateNetworkInterface
ec2:CreateNetworkInterfacePermission
ec2:DeleteNetworkInterface
If you revoke these permissions after you create the delivery stream, Kinesis Data Firehose can't scale out by creating more ENIs when necessary. You might therefore see a degradation in performance.
- SecurityGroupIds []string - The IDs of the security groups that you want Kinesis Data Firehose to use when it creates ENIs in the VPC of the Amazon ES destination. You can use the same security group that the Amazon ES domain uses or different ones. If you specify different security groups here, ensure that they allow outbound HTTPS traffic to the Amazon ES domain's security group. Also ensure that the Amazon ES domain's security group allows HTTPS traffic from the security groups specified here. If you use the same security group for both your delivery stream and the Amazon ES domain, make sure the security group inbound rule allows HTTPS traffic.
- SubnetIds []string - The IDs of the subnets that Kinesis Data Firehose uses to create ENIs in the VPC of the Amazon ES destination. Make sure that the routing tables and inbound and outbound rules allow traffic to flow from the subnets whose IDs are specified here to the subnets that have the destination Amazon ES endpoints. Kinesis Data Firehose creates at least one ENI in each of the subnets that are specified here. Do not delete or modify these ENIs.
The number of ENIs that Kinesis Data Firehose creates in the subnets specified here scales up and down automatically based on throughput. To enable Kinesis Data Firehose to scale up the number of ENIs to match throughput, ensure that you have sufficient quota. To help you calculate the quota you need, assume that Kinesis Data Firehose can create up to three ENIs for this delivery stream for each of the subnets specified here.
- roleArn String - The ARN of the IAM role that you want the delivery stream to use to create endpoints in the destination VPC. You can use your existing Kinesis Data Firehose delivery role or you can specify a new role. In either case, make sure that the role trusts the Kinesis Data Firehose service principal and that it grants the following permissions:
ec2:DescribeVpcs
ec2:DescribeVpcAttribute
ec2:DescribeSubnets
ec2:DescribeSecurityGroups
ec2:DescribeNetworkInterfaces
ec2:CreateNetworkInterface
ec2:CreateNetworkInterfacePermission
ec2:DeleteNetworkInterface
If you revoke these permissions after you create the delivery stream, Kinesis Data Firehose can't scale out by creating more ENIs when necessary. You might therefore see a degradation in performance.
- securityGroupIds List&lt;String&gt; - The IDs of the security groups that you want Kinesis Data Firehose to use when it creates ENIs in the VPC of the Amazon ES destination. You can use the same security group that the Amazon ES domain uses or different ones. If you specify different security groups here, ensure that they allow outbound HTTPS traffic to the Amazon ES domain's security group. Also ensure that the Amazon ES domain's security group allows HTTPS traffic from the security groups specified here. If you use the same security group for both your delivery stream and the Amazon ES domain, make sure the security group inbound rule allows HTTPS traffic.
- subnetIds List&lt;String&gt; - The IDs of the subnets that Kinesis Data Firehose uses to create ENIs in the VPC of the Amazon ES destination. Make sure that the routing tables and inbound and outbound rules allow traffic to flow from the subnets whose IDs are specified here to the subnets that have the destination Amazon ES endpoints. Kinesis Data Firehose creates at least one ENI in each of the subnets that are specified here. Do not delete or modify these ENIs.
The number of ENIs that Kinesis Data Firehose creates in the subnets specified here scales up and down automatically based on throughput. To enable Kinesis Data Firehose to scale up the number of ENIs to match throughput, ensure that you have sufficient quota. To help you calculate the quota you need, assume that Kinesis Data Firehose can create up to three ENIs for this delivery stream for each of the subnets specified here.
- roleArn string - The ARN of the IAM role that you want the delivery stream to use to create endpoints in the destination VPC. You can use your existing Kinesis Data Firehose delivery role or you can specify a new role. In either case, make sure that the role trusts the Kinesis Data Firehose service principal and that it grants the following permissions:
ec2:DescribeVpcs
ec2:DescribeVpcAttribute
ec2:DescribeSubnets
ec2:DescribeSecurityGroups
ec2:DescribeNetworkInterfaces
ec2:CreateNetworkInterface
ec2:CreateNetworkInterfacePermission
ec2:DeleteNetworkInterface
If you revoke these permissions after you create the delivery stream, Kinesis Data Firehose can't scale out by creating more ENIs when necessary. You might therefore see a degradation in performance.
- securityGroupIds string[] - The IDs of the security groups that you want Kinesis Data Firehose to use when it creates ENIs in the VPC of the Amazon ES destination. You can use the same security group that the Amazon ES domain uses or different ones. If you specify different security groups here, ensure that they allow outbound HTTPS traffic to the Amazon ES domain's security group. Also ensure that the Amazon ES domain's security group allows HTTPS traffic from the security groups specified here. If you use the same security group for both your delivery stream and the Amazon ES domain, make sure the security group inbound rule allows HTTPS traffic.
- subnetIds string[] - The IDs of the subnets that Kinesis Data Firehose uses to create ENIs in the VPC of the Amazon ES destination. Make sure that the routing tables and inbound and outbound rules allow traffic to flow from the subnets whose IDs are specified here to the subnets that have the destination Amazon ES endpoints. Kinesis Data Firehose creates at least one ENI in each of the subnets that are specified here. Do not delete or modify these ENIs.
The number of ENIs that Kinesis Data Firehose creates in the subnets specified here scales up and down automatically based on throughput. To enable Kinesis Data Firehose to scale up the number of ENIs to match throughput, ensure that you have sufficient quota. To help you calculate the quota you need, assume that Kinesis Data Firehose can create up to three ENIs for this delivery stream for each of the subnets specified here.
- role_arn str - The ARN of the IAM role that you want the delivery stream to use to create endpoints in the destination VPC. You can use your existing Kinesis Data Firehose delivery role or you can specify a new role. In either case, make sure that the role trusts the Kinesis Data Firehose service principal and that it grants the following permissions:
ec2:DescribeVpcs
ec2:DescribeVpcAttribute
ec2:DescribeSubnets
ec2:DescribeSecurityGroups
ec2:DescribeNetworkInterfaces
ec2:CreateNetworkInterface
ec2:CreateNetworkInterfacePermission
ec2:DeleteNetworkInterface
If you revoke these permissions after you create the delivery stream, Kinesis Data Firehose can't scale out by creating more ENIs when necessary. You might therefore see a degradation in performance.
- security_group_ids Sequence[str] - The IDs of the security groups that you want Kinesis Data Firehose to use when it creates ENIs in the VPC of the Amazon ES destination. You can use the same security group that the Amazon ES domain uses or different ones. If you specify different security groups here, ensure that they allow outbound HTTPS traffic to the Amazon ES domain's security group. Also ensure that the Amazon ES domain's security group allows HTTPS traffic from the security groups specified here. If you use the same security group for both your delivery stream and the Amazon ES domain, make sure the security group inbound rule allows HTTPS traffic.
- subnet_ids Sequence[str] - The IDs of the subnets that Kinesis Data Firehose uses to create ENIs in the VPC of the Amazon ES destination. Make sure that the routing tables and inbound and outbound rules allow traffic to flow from the subnets whose IDs are specified here to the subnets that have the destination Amazon ES endpoints. Kinesis Data Firehose creates at least one ENI in each of the subnets that are specified here. Do not delete or modify these ENIs.
The number of ENIs that Kinesis Data Firehose creates in the subnets specified here scales up and down automatically based on throughput. To enable Kinesis Data Firehose to scale up the number of ENIs to match throughput, ensure that you have sufficient quota. To help you calculate the quota you need, assume that Kinesis Data Firehose can create up to three ENIs for this delivery stream for each of the subnets specified here.
- roleArn String - The ARN of the IAM role that you want the delivery stream to use to create endpoints in the destination VPC. You can use your existing Kinesis Data Firehose delivery role or you can specify a new role. In either case, make sure that the role trusts the Kinesis Data Firehose service principal and that it grants the following permissions:
ec2:DescribeVpcs
ec2:DescribeVpcAttribute
ec2:DescribeSubnets
ec2:DescribeSecurityGroups
ec2:DescribeNetworkInterfaces
ec2:CreateNetworkInterface
ec2:CreateNetworkInterfacePermission
ec2:DeleteNetworkInterface
If you revoke these permissions after you create the delivery stream, Kinesis Data Firehose can't scale out by creating more ENIs when necessary. You might therefore see a degradation in performance.
- securityGroupIds List&lt;String&gt; - The IDs of the security groups that you want Kinesis Data Firehose to use when it creates ENIs in the VPC of the Amazon ES destination. You can use the same security group that the Amazon ES domain uses or different ones. If you specify different security groups here, ensure that they allow outbound HTTPS traffic to the Amazon ES domain's security group. Also ensure that the Amazon ES domain's security group allows HTTPS traffic from the security groups specified here. If you use the same security group for both your delivery stream and the Amazon ES domain, make sure the security group inbound rule allows HTTPS traffic.
- subnetIds List&lt;String&gt; - The IDs of the subnets that Kinesis Data Firehose uses to create ENIs in the VPC of the Amazon ES destination. Make sure that the routing tables and inbound and outbound rules allow traffic to flow from the subnets whose IDs are specified here to the subnets that have the destination Amazon ES endpoints. Kinesis Data Firehose creates at least one ENI in each of the subnets that are specified here. Do not delete or modify these ENIs.
The number of ENIs that Kinesis Data Firehose creates in the subnets specified here scales up and down automatically based on throughput. To enable Kinesis Data Firehose to scale up the number of ENIs to match throughput, ensure that you have sufficient quota. To help you calculate the quota you need, assume that Kinesis Data Firehose can create up to three ENIs for this delivery stream for each of the subnets specified here.
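The quota guidance above reduces to simple arithmetic: plan for up to three ENIs per subnet for each delivery stream. A quick sketch, with hypothetical subnet IDs:

```python
# Kinesis Data Firehose can create up to three ENIs per subnet for one
# delivery stream, so the ENI quota headroom to plan for is
# 3 x (number of subnets passed in subnetIds).
ENIS_PER_SUBNET = 3

def eni_quota_needed(subnet_ids):
    """Upper bound on the number of ENIs this delivery stream may create."""
    return ENIS_PER_SUBNET * len(subnet_ids)

# Example: three subnets (hypothetical IDs) -> plan for 9 ENIs of headroom.
subnets = ["subnet-0a1b", "subnet-0c2d", "subnet-0e3f"]
print(eni_quota_needed(subnets))  # → 9
```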
Tag, TagArgs
Package Details
- Repository
- AWS Native pulumi/pulumi-aws-native
- License
- Apache-2.0