aws-native.lambda.EventSourceMapping
We recommend new projects start with resources from the AWS provider.
The AWS::Lambda::EventSourceMapping resource creates a mapping between an event source and an AWS Lambda function. Lambda reads items from the event source and triggers the function.
For details about each event source type, see the following topics. In particular, each of the topics describes the required and optional parameters for the specific event source.
- Configuring a DynamoDB stream as an event source
- Configuring a Kinesis stream as an event source
- Configuring an SQS queue as an event source
- Configuring an MQ broker as an event source
- Configuring MSK as an event source
- Configuring Self-Managed Apache Kafka as an event source
- Configuring Amazon DocumentDB as an event source
Create EventSourceMapping Resource
Resources are created with functions called constructors. To learn more about declaring and configuring resources, see Resources.
Constructor syntax
new EventSourceMapping(name: string, args: EventSourceMappingArgs, opts?: CustomResourceOptions);
@overload
def EventSourceMapping(resource_name: str,
args: EventSourceMappingArgs,
opts: Optional[ResourceOptions] = None)
@overload
def EventSourceMapping(resource_name: str,
opts: Optional[ResourceOptions] = None,
function_name: Optional[str] = None,
maximum_record_age_in_seconds: Optional[int] = None,
tags: Optional[Sequence[_root_inputs.TagArgs]] = None,
destination_config: Optional[_lambda_.EventSourceMappingDestinationConfigArgs] = None,
document_db_event_source_config: Optional[_lambda_.EventSourceMappingDocumentDbEventSourceConfigArgs] = None,
enabled: Optional[bool] = None,
event_source_arn: Optional[str] = None,
filter_criteria: Optional[_lambda_.EventSourceMappingFilterCriteriaArgs] = None,
batch_size: Optional[int] = None,
function_response_types: Optional[Sequence[lambda_.EventSourceMappingFunctionResponseTypesItem]] = None,
kms_key_arn: Optional[str] = None,
tumbling_window_in_seconds: Optional[int] = None,
bisect_batch_on_function_error: Optional[bool] = None,
queues: Optional[Sequence[str]] = None,
parallelization_factor: Optional[int] = None,
maximum_retry_attempts: Optional[int] = None,
scaling_config: Optional[_lambda_.EventSourceMappingScalingConfigArgs] = None,
self_managed_event_source: Optional[_lambda_.EventSourceMappingSelfManagedEventSourceArgs] = None,
self_managed_kafka_event_source_config: Optional[_lambda_.EventSourceMappingSelfManagedKafkaEventSourceConfigArgs] = None,
source_access_configurations: Optional[Sequence[_lambda_.EventSourceMappingSourceAccessConfigurationArgs]] = None,
starting_position: Optional[str] = None,
starting_position_timestamp: Optional[float] = None,
amazon_managed_kafka_event_source_config: Optional[_lambda_.EventSourceMappingAmazonManagedKafkaEventSourceConfigArgs] = None,
topics: Optional[Sequence[str]] = None,
maximum_batching_window_in_seconds: Optional[int] = None)
func NewEventSourceMapping(ctx *Context, name string, args EventSourceMappingArgs, opts ...ResourceOption) (*EventSourceMapping, error)
public EventSourceMapping(string name, EventSourceMappingArgs args, CustomResourceOptions? opts = null)
public EventSourceMapping(String name, EventSourceMappingArgs args)
public EventSourceMapping(String name, EventSourceMappingArgs args, CustomResourceOptions options)
type: aws-native:lambda:EventSourceMapping
properties: # The arguments to resource properties.
options: # Bag of options to control resource's behavior.
Parameters
- name string
- The unique name of the resource.
- args EventSourceMappingArgs
- The arguments to resource properties.
- opts CustomResourceOptions
- Bag of options to control resource's behavior.
- resource_name str
- The unique name of the resource.
- args EventSourceMappingArgs
- The arguments to resource properties.
- opts ResourceOptions
- Bag of options to control resource's behavior.
- ctx Context
- Context object for the current deployment.
- name string
- The unique name of the resource.
- args EventSourceMappingArgs
- The arguments to resource properties.
- opts ResourceOption
- Bag of options to control resource's behavior.
- name string
- The unique name of the resource.
- args EventSourceMappingArgs
- The arguments to resource properties.
- opts CustomResourceOptions
- Bag of options to control resource's behavior.
- name String
- The unique name of the resource.
- args EventSourceMappingArgs
- The arguments to resource properties.
- options CustomResourceOptions
- Bag of options to control resource's behavior.
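For example, a minimal mapping that polls an existing SQS queue and invokes an existing Lambda function might look like the following TypeScript sketch; the ARNs are placeholders for illustration and would normally come from other resources in your program.

import * as aws_native from "@pulumi/aws-native";

// Minimal event source mapping: poll an SQS queue and invoke a Lambda function.
// Both ARNs below are placeholder values.
const queueMapping = new aws_native.lambda.EventSourceMapping("queue-mapping", {
    functionName: "arn:aws:lambda:us-west-2:123456789012:function:MyFunction",
    eventSourceArn: "arn:aws:sqs:us-west-2:123456789012:my-queue",
    batchSize: 10, // SQS default; standard queues allow up to 10,000
    enabled: true,
});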
EventSourceMapping Resource Properties
To learn more about resource properties and how to use them, see Inputs and Outputs in the Architecture and Concepts docs.
Inputs
The EventSourceMapping resource accepts the following input properties:
- FunctionName string - The name or ARN of the Lambda function. Name formats:
- Function name – MyFunction.
- Function ARN – arn:aws:lambda:us-west-2:123456789012:function:MyFunction.
- Version or Alias ARN – arn:aws:lambda:us-west-2:123456789012:function:MyFunction:PROD.
- Partial ARN – 123456789012:function:MyFunction.
The length constraint applies only to the full ARN. If you specify only the function name, it's limited to 64 characters in length.
- AmazonManagedKafkaEventSourceConfig Pulumi.AwsNative.Lambda.Inputs.EventSourceMappingAmazonManagedKafkaEventSourceConfig - Specific configuration settings for an Amazon Managed Streaming for Apache Kafka (Amazon MSK) event source.
- BatchSize int - The maximum number of records in each batch that Lambda pulls from your stream or queue and sends to your function. Lambda passes all of the records in the batch to the function in a single call, up to the payload limit for synchronous invocation (6 MB).
- Amazon Kinesis – Default 100. Max 10,000.
- Amazon DynamoDB Streams – Default 100. Max 10,000.
- Amazon Simple Queue Service – Default 10. For standard queues the max is 10,000. For FIFO queues the max is 10.
- Amazon Managed Streaming for Apache Kafka – Default 100. Max 10,000.
- Self-managed Apache Kafka – Default 100. Max 10,000.
- Amazon MQ (ActiveMQ and RabbitMQ) – Default 100. Max 10,000.
- DocumentDB – Default 100. Max 10,000.
- BisectBatchOnFunctionError bool - (Kinesis and DynamoDB Streams only) If the function returns an error, split the batch in two and retry. The default value is false.
- DestinationConfig Pulumi.AwsNative.Lambda.Inputs.EventSourceMappingDestinationConfig - (Kinesis, DynamoDB Streams, Amazon MSK, and self-managed Apache Kafka event sources only) A configuration object that specifies the destination of an event after Lambda processes it.
- DocumentDbEventSourceConfig Pulumi.AwsNative.Lambda.Inputs.EventSourceMappingDocumentDbEventSourceConfig - Specific configuration settings for a DocumentDB event source.
- Enabled bool - When true, the event source mapping is active. When false, Lambda pauses polling and invocation. Default: True.
- EventSourceArn string - The Amazon Resource Name (ARN) of the event source.
- Amazon Kinesis – The ARN of the data stream or a stream consumer.
- Amazon DynamoDB Streams – The ARN of the stream.
- Amazon Simple Queue Service – The ARN of the queue.
- Amazon Managed Streaming for Apache Kafka – The ARN of the cluster or the ARN of the VPC connection (for cross-account event source mappings).
- Amazon MQ – The ARN of the broker.
- Amazon DocumentDB – The ARN of the DocumentDB change stream.
- FilterCriteria Pulumi.AwsNative.Lambda.Inputs.EventSourceMappingFilterCriteria - An object that defines the filter criteria that determine whether Lambda should process an event. For more information, see Lambda event filtering.
- FunctionResponseTypes List<Pulumi.AwsNative.Lambda.EventSourceMappingFunctionResponseTypesItem> - (Kinesis, DynamoDB Streams, and SQS) A list of current response type enums applied to the event source mapping. Valid Values: ReportBatchItemFailures.
- KmsKeyArn string - The ARN of the AWS Key Management Service (AWS KMS) customer managed key that Lambda uses to encrypt your function's filter criteria.
- MaximumBatchingWindowInSeconds int - The maximum amount of time, in seconds, that Lambda spends gathering records before invoking the function. Default (Kinesis, DynamoDB, Amazon SQS event sources): 0. Default (Amazon MSK, Kafka, Amazon MQ, Amazon DocumentDB event sources): 500 ms. Related setting: For SQS event sources, when you set BatchSize to a value greater than 10, you must set MaximumBatchingWindowInSeconds to at least 1.
- MaximumRecordAgeInSeconds int - (Kinesis and DynamoDB Streams only) Discard records older than the specified age. The default value is -1, which sets the maximum age to infinite. When the value is set to infinite, Lambda never discards old records. The minimum valid value for maximum record age is 60s. Although values less than 60 and greater than -1 fall within the parameter's absolute range, they are not allowed.
- MaximumRetryAttempts int - (Kinesis and DynamoDB Streams only) Discard records after the specified number of retries. The default value is -1, which sets the maximum number of retries to infinite. When MaximumRetryAttempts is infinite, Lambda retries failed records until the record expires in the event source.
- ParallelizationFactor int - (Kinesis and DynamoDB Streams only) The number of batches to process concurrently from each shard. The default value is 1.
- Queues List<string>
- (Amazon MQ) The name of the Amazon MQ broker destination queue to consume.
- ScalingConfig Pulumi.AwsNative.Lambda.Inputs.EventSourceMappingScalingConfig - (Amazon SQS only) The scaling configuration for the event source. For more information, see Configuring maximum concurrency for Amazon SQS event sources.
- SelfManagedEventSource Pulumi.AwsNative.Lambda.Inputs.EventSourceMappingSelfManagedEventSource - The self-managed Apache Kafka cluster for your event source.
- SelfManagedKafkaEventSourceConfig Pulumi.AwsNative.Lambda.Inputs.EventSourceMappingSelfManagedKafkaEventSourceConfig - Specific configuration settings for a self-managed Apache Kafka event source.
- SourceAccessConfigurations List<Pulumi.AwsNative.Lambda.Inputs.EventSourceMappingSourceAccessConfiguration> - An array of the authentication protocol, VPC components, or virtual host to secure and define your event source.
- StartingPosition string - The position in a stream from which to start reading. Required for Amazon Kinesis and Amazon DynamoDB.
- LATEST - Read only new records.
- TRIM_HORIZON - Process all available records.
- AT_TIMESTAMP - Specify a time from which to start reading records.
- StartingPositionTimestamp double - With StartingPosition set to AT_TIMESTAMP, the time from which to start reading, in Unix time seconds. StartingPositionTimestamp cannot be in the future.
- Tags List<Pulumi.AwsNative.Inputs.Tag> - A list of tags to add to the event source mapping. You must have the lambda:TagResource, lambda:UntagResource, and lambda:ListTags permissions for your IAM principal to manage the AWS CloudFormation stack. If you don't have these permissions, there might be unexpected behavior with stack-level tags propagating to the resource during resource creation and update.
- Topics List<string> - The name of the Kafka topic.
- TumblingWindowInSeconds int - (Kinesis and DynamoDB Streams only) The duration in seconds of a processing window for DynamoDB and Kinesis Streams event sources. A value of 0 seconds indicates no tumbling window.
- FunctionName string - The name or ARN of the Lambda function. Name formats:
- Function name – MyFunction.
- Function ARN – arn:aws:lambda:us-west-2:123456789012:function:MyFunction.
- Version or Alias ARN – arn:aws:lambda:us-west-2:123456789012:function:MyFunction:PROD.
- Partial ARN – 123456789012:function:MyFunction.
The length constraint applies only to the full ARN. If you specify only the function name, it's limited to 64 characters in length.
- AmazonManagedKafkaEventSourceConfig EventSourceMappingAmazonManagedKafkaEventSourceConfigArgs - Specific configuration settings for an Amazon Managed Streaming for Apache Kafka (Amazon MSK) event source.
- BatchSize int - The maximum number of records in each batch that Lambda pulls from your stream or queue and sends to your function. Lambda passes all of the records in the batch to the function in a single call, up to the payload limit for synchronous invocation (6 MB).
- Amazon Kinesis – Default 100. Max 10,000.
- Amazon DynamoDB Streams – Default 100. Max 10,000.
- Amazon Simple Queue Service – Default 10. For standard queues the max is 10,000. For FIFO queues the max is 10.
- Amazon Managed Streaming for Apache Kafka – Default 100. Max 10,000.
- Self-managed Apache Kafka – Default 100. Max 10,000.
- Amazon MQ (ActiveMQ and RabbitMQ) – Default 100. Max 10,000.
- DocumentDB – Default 100. Max 10,000.
- BisectBatchOnFunctionError bool - (Kinesis and DynamoDB Streams only) If the function returns an error, split the batch in two and retry. The default value is false.
- DestinationConfig EventSourceMappingDestinationConfigArgs - (Kinesis, DynamoDB Streams, Amazon MSK, and self-managed Apache Kafka event sources only) A configuration object that specifies the destination of an event after Lambda processes it.
- DocumentDbEventSourceConfig EventSourceMappingDocumentDbEventSourceConfigArgs - Specific configuration settings for a DocumentDB event source.
- Enabled bool - When true, the event source mapping is active. When false, Lambda pauses polling and invocation. Default: True.
- EventSourceArn string - The Amazon Resource Name (ARN) of the event source.
- Amazon Kinesis – The ARN of the data stream or a stream consumer.
- Amazon DynamoDB Streams – The ARN of the stream.
- Amazon Simple Queue Service – The ARN of the queue.
- Amazon Managed Streaming for Apache Kafka – The ARN of the cluster or the ARN of the VPC connection (for cross-account event source mappings).
- Amazon MQ – The ARN of the broker.
- Amazon DocumentDB – The ARN of the DocumentDB change stream.
- FilterCriteria EventSourceMappingFilterCriteriaArgs - An object that defines the filter criteria that determine whether Lambda should process an event. For more information, see Lambda event filtering.
- FunctionResponseTypes []EventSourceMappingFunctionResponseTypesItem - (Kinesis, DynamoDB Streams, and SQS) A list of current response type enums applied to the event source mapping. Valid Values: ReportBatchItemFailures.
- KmsKeyArn string - The ARN of the AWS Key Management Service (AWS KMS) customer managed key that Lambda uses to encrypt your function's filter criteria.
- MaximumBatchingWindowInSeconds int - The maximum amount of time, in seconds, that Lambda spends gathering records before invoking the function. Default (Kinesis, DynamoDB, Amazon SQS event sources): 0. Default (Amazon MSK, Kafka, Amazon MQ, Amazon DocumentDB event sources): 500 ms. Related setting: For SQS event sources, when you set BatchSize to a value greater than 10, you must set MaximumBatchingWindowInSeconds to at least 1.
- MaximumRecordAgeInSeconds int - (Kinesis and DynamoDB Streams only) Discard records older than the specified age. The default value is -1, which sets the maximum age to infinite. When the value is set to infinite, Lambda never discards old records. The minimum valid value for maximum record age is 60s. Although values less than 60 and greater than -1 fall within the parameter's absolute range, they are not allowed.
- MaximumRetryAttempts int - (Kinesis and DynamoDB Streams only) Discard records after the specified number of retries. The default value is -1, which sets the maximum number of retries to infinite. When MaximumRetryAttempts is infinite, Lambda retries failed records until the record expires in the event source.
- ParallelizationFactor int - (Kinesis and DynamoDB Streams only) The number of batches to process concurrently from each shard. The default value is 1.
- Queues []string
- (Amazon MQ) The name of the Amazon MQ broker destination queue to consume.
- ScalingConfig EventSourceMappingScalingConfigArgs - (Amazon SQS only) The scaling configuration for the event source. For more information, see Configuring maximum concurrency for Amazon SQS event sources.
- SelfManagedEventSource EventSourceMappingSelfManagedEventSourceArgs - The self-managed Apache Kafka cluster for your event source.
- SelfManagedKafkaEventSourceConfig EventSourceMappingSelfManagedKafkaEventSourceConfigArgs - Specific configuration settings for a self-managed Apache Kafka event source.
- SourceAccessConfigurations []EventSourceMappingSourceAccessConfigurationArgs - An array of the authentication protocol, VPC components, or virtual host to secure and define your event source.
- StartingPosition string - The position in a stream from which to start reading. Required for Amazon Kinesis and Amazon DynamoDB.
- LATEST - Read only new records.
- TRIM_HORIZON - Process all available records.
- AT_TIMESTAMP - Specify a time from which to start reading records.
- StartingPositionTimestamp float64 - With StartingPosition set to AT_TIMESTAMP, the time from which to start reading, in Unix time seconds. StartingPositionTimestamp cannot be in the future.
- Tags []TagArgs - A list of tags to add to the event source mapping. You must have the lambda:TagResource, lambda:UntagResource, and lambda:ListTags permissions for your IAM principal to manage the AWS CloudFormation stack. If you don't have these permissions, there might be unexpected behavior with stack-level tags propagating to the resource during resource creation and update.
- Topics []string - The name of the Kafka topic.
- TumblingWindowInSeconds int - (Kinesis and DynamoDB Streams only) The duration in seconds of a processing window for DynamoDB and Kinesis Streams event sources. A value of 0 seconds indicates no tumbling window.
- functionName String - The name or ARN of the Lambda function. Name formats:
- Function name – MyFunction.
- Function ARN – arn:aws:lambda:us-west-2:123456789012:function:MyFunction.
- Version or Alias ARN – arn:aws:lambda:us-west-2:123456789012:function:MyFunction:PROD.
- Partial ARN – 123456789012:function:MyFunction.
The length constraint applies only to the full ARN. If you specify only the function name, it's limited to 64 characters in length.
- amazonManagedKafkaEventSourceConfig EventSourceMappingAmazonManagedKafkaEventSourceConfig - Specific configuration settings for an Amazon Managed Streaming for Apache Kafka (Amazon MSK) event source.
- batchSize Integer - The maximum number of records in each batch that Lambda pulls from your stream or queue and sends to your function. Lambda passes all of the records in the batch to the function in a single call, up to the payload limit for synchronous invocation (6 MB).
- Amazon Kinesis – Default 100. Max 10,000.
- Amazon DynamoDB Streams – Default 100. Max 10,000.
- Amazon Simple Queue Service – Default 10. For standard queues the max is 10,000. For FIFO queues the max is 10.
- Amazon Managed Streaming for Apache Kafka – Default 100. Max 10,000.
- Self-managed Apache Kafka – Default 100. Max 10,000.
- Amazon MQ (ActiveMQ and RabbitMQ) – Default 100. Max 10,000.
- DocumentDB – Default 100. Max 10,000.
- bisectBatchOnFunctionError Boolean - (Kinesis and DynamoDB Streams only) If the function returns an error, split the batch in two and retry. The default value is false.
- destinationConfig EventSourceMappingDestinationConfig - (Kinesis, DynamoDB Streams, Amazon MSK, and self-managed Apache Kafka event sources only) A configuration object that specifies the destination of an event after Lambda processes it.
- documentDbEventSourceConfig EventSourceMappingDocumentDbEventSourceConfig - Specific configuration settings for a DocumentDB event source.
- enabled Boolean - When true, the event source mapping is active. When false, Lambda pauses polling and invocation. Default: True.
- eventSourceArn String - The Amazon Resource Name (ARN) of the event source.
- Amazon Kinesis – The ARN of the data stream or a stream consumer.
- Amazon DynamoDB Streams – The ARN of the stream.
- Amazon Simple Queue Service – The ARN of the queue.
- Amazon Managed Streaming for Apache Kafka – The ARN of the cluster or the ARN of the VPC connection (for cross-account event source mappings).
- Amazon MQ – The ARN of the broker.
- Amazon DocumentDB – The ARN of the DocumentDB change stream.
- filterCriteria EventSourceMappingFilterCriteria - An object that defines the filter criteria that determine whether Lambda should process an event. For more information, see Lambda event filtering.
- functionResponseTypes List<EventSourceMappingFunctionResponseTypesItem> - (Kinesis, DynamoDB Streams, and SQS) A list of current response type enums applied to the event source mapping. Valid Values: ReportBatchItemFailures.
- kmsKeyArn String - The ARN of the AWS Key Management Service (AWS KMS) customer managed key that Lambda uses to encrypt your function's filter criteria.
- maximumBatchingWindowInSeconds Integer - The maximum amount of time, in seconds, that Lambda spends gathering records before invoking the function. Default (Kinesis, DynamoDB, Amazon SQS event sources): 0. Default (Amazon MSK, Kafka, Amazon MQ, Amazon DocumentDB event sources): 500 ms. Related setting: For SQS event sources, when you set BatchSize to a value greater than 10, you must set MaximumBatchingWindowInSeconds to at least 1.
- maximumRecordAgeInSeconds Integer - (Kinesis and DynamoDB Streams only) Discard records older than the specified age. The default value is -1, which sets the maximum age to infinite. When the value is set to infinite, Lambda never discards old records. The minimum valid value for maximum record age is 60s. Although values less than 60 and greater than -1 fall within the parameter's absolute range, they are not allowed.
- maximumRetryAttempts Integer - (Kinesis and DynamoDB Streams only) Discard records after the specified number of retries. The default value is -1, which sets the maximum number of retries to infinite. When MaximumRetryAttempts is infinite, Lambda retries failed records until the record expires in the event source.
- parallelizationFactor Integer - (Kinesis and DynamoDB Streams only) The number of batches to process concurrently from each shard. The default value is 1.
- queues List<String>
- (Amazon MQ) The name of the Amazon MQ broker destination queue to consume.
- scalingConfig EventSourceMappingScalingConfig - (Amazon SQS only) The scaling configuration for the event source. For more information, see Configuring maximum concurrency for Amazon SQS event sources.
- selfManagedEventSource EventSourceMappingSelfManagedEventSource - The self-managed Apache Kafka cluster for your event source.
- selfManagedKafkaEventSourceConfig EventSourceMappingSelfManagedKafkaEventSourceConfig - Specific configuration settings for a self-managed Apache Kafka event source.
- sourceAccessConfigurations List<EventSourceMappingSourceAccessConfiguration> - An array of the authentication protocol, VPC components, or virtual host to secure and define your event source.
- startingPosition String - The position in a stream from which to start reading. Required for Amazon Kinesis and Amazon DynamoDB.
- LATEST - Read only new records.
- TRIM_HORIZON - Process all available records.
- AT_TIMESTAMP - Specify a time from which to start reading records.
- startingPositionTimestamp Double - With StartingPosition set to AT_TIMESTAMP, the time from which to start reading, in Unix time seconds. StartingPositionTimestamp cannot be in the future.
- tags List<Tag> - A list of tags to add to the event source mapping. You must have the lambda:TagResource, lambda:UntagResource, and lambda:ListTags permissions for your IAM principal to manage the AWS CloudFormation stack. If you don't have these permissions, there might be unexpected behavior with stack-level tags propagating to the resource during resource creation and update.
- topics List<String> - The name of the Kafka topic.
- tumblingWindowInSeconds Integer - (Kinesis and DynamoDB Streams only) The duration in seconds of a processing window for DynamoDB and Kinesis Streams event sources. A value of 0 seconds indicates no tumbling window.
- functionName string - The name or ARN of the Lambda function. Name formats:
- Function name – MyFunction.
- Function ARN – arn:aws:lambda:us-west-2:123456789012:function:MyFunction.
- Version or Alias ARN – arn:aws:lambda:us-west-2:123456789012:function:MyFunction:PROD.
- Partial ARN – 123456789012:function:MyFunction.
The length constraint applies only to the full ARN. If you specify only the function name, it's limited to 64 characters in length.
- amazonManagedKafkaEventSourceConfig EventSourceMappingAmazonManagedKafkaEventSourceConfig - Specific configuration settings for an Amazon Managed Streaming for Apache Kafka (Amazon MSK) event source.
- batchSize number - The maximum number of records in each batch that Lambda pulls from your stream or queue and sends to your function. Lambda passes all of the records in the batch to the function in a single call, up to the payload limit for synchronous invocation (6 MB).
- Amazon Kinesis – Default 100. Max 10,000.
- Amazon DynamoDB Streams – Default 100. Max 10,000.
- Amazon Simple Queue Service – Default 10. For standard queues the max is 10,000. For FIFO queues the max is 10.
- Amazon Managed Streaming for Apache Kafka – Default 100. Max 10,000.
- Self-managed Apache Kafka – Default 100. Max 10,000.
- Amazon MQ (ActiveMQ and RabbitMQ) – Default 100. Max 10,000.
- DocumentDB – Default 100. Max 10,000.
- bisectBatchOnFunctionError boolean - (Kinesis and DynamoDB Streams only) If the function returns an error, split the batch in two and retry. The default value is false.
- destinationConfig EventSourceMappingDestinationConfig - (Kinesis, DynamoDB Streams, Amazon MSK, and self-managed Apache Kafka event sources only) A configuration object that specifies the destination of an event after Lambda processes it.
- documentDbEventSourceConfig EventSourceMappingDocumentDbEventSourceConfig - Specific configuration settings for a DocumentDB event source.
- enabled boolean - When true, the event source mapping is active. When false, Lambda pauses polling and invocation. Default: True.
- eventSourceArn string - The Amazon Resource Name (ARN) of the event source.
- Amazon Kinesis – The ARN of the data stream or a stream consumer.
- Amazon DynamoDB Streams – The ARN of the stream.
- Amazon Simple Queue Service – The ARN of the queue.
- Amazon Managed Streaming for Apache Kafka – The ARN of the cluster or the ARN of the VPC connection (for cross-account event source mappings).
- Amazon MQ – The ARN of the broker.
- Amazon DocumentDB – The ARN of the DocumentDB change stream.
- filterCriteria EventSourceMappingFilterCriteria - An object that defines the filter criteria that determine whether Lambda should process an event. For more information, see Lambda event filtering.
- functionResponseTypes EventSourceMappingFunctionResponseTypesItem[] - (Kinesis, DynamoDB Streams, and SQS) A list of current response type enums applied to the event source mapping. Valid Values: ReportBatchItemFailures.
- kmsKeyArn string - The ARN of the AWS Key Management Service (AWS KMS) customer managed key that Lambda uses to encrypt your function's filter criteria.
- maximumBatchingWindowInSeconds number - The maximum amount of time, in seconds, that Lambda spends gathering records before invoking the function. Default (Kinesis, DynamoDB, Amazon SQS event sources): 0. Default (Amazon MSK, Kafka, Amazon MQ, Amazon DocumentDB event sources): 500 ms. Related setting: For SQS event sources, when you set BatchSize to a value greater than 10, you must set MaximumBatchingWindowInSeconds to at least 1.
- maximumRecordAgeInSeconds number - (Kinesis and DynamoDB Streams only) Discard records older than the specified age. The default value is -1, which sets the maximum age to infinite. When the value is set to infinite, Lambda never discards old records. The minimum valid value for maximum record age is 60s. Although values less than 60 and greater than -1 fall within the parameter's absolute range, they are not allowed.
- maximumRetryAttempts number - (Kinesis and DynamoDB Streams only) Discard records after the specified number of retries. The default value is -1, which sets the maximum number of retries to infinite. When MaximumRetryAttempts is infinite, Lambda retries failed records until the record expires in the event source.
- parallelizationFactor number - (Kinesis and DynamoDB Streams only) The number of batches to process concurrently from each shard. The default value is 1.
- queues string[]
- (Amazon MQ) The name of the Amazon MQ broker destination queue to consume.
- scalingConfig EventSourceMappingScalingConfig - (Amazon SQS only) The scaling configuration for the event source. For more information, see Configuring maximum concurrency for Amazon SQS event sources.
- selfManagedEventSource EventSourceMappingSelfManagedEventSource - The self-managed Apache Kafka cluster for your event source.
- selfManagedKafkaEventSourceConfig EventSourceMappingSelfManagedKafkaEventSourceConfig - Specific configuration settings for a self-managed Apache Kafka event source.
- sourceAccessConfigurations EventSourceMappingSourceAccessConfiguration[] - An array of the authentication protocol, VPC components, or virtual host to secure and define your event source.
- startingPosition string - The position in a stream from which to start reading. Required for Amazon Kinesis and Amazon DynamoDB.
- LATEST - Read only new records.
- TRIM_HORIZON - Process all available records.
- AT_TIMESTAMP - Specify a time from which to start reading records.
- startingPositionTimestamp number - With StartingPosition set to AT_TIMESTAMP, the time from which to start reading, in Unix time seconds. StartingPositionTimestamp cannot be in the future.
- tags Tag[] - A list of tags to add to the event source mapping. You must have the lambda:TagResource, lambda:UntagResource, and lambda:ListTags permissions for your IAM principal to manage the AWS CloudFormation stack. If you don't have these permissions, there might be unexpected behavior with stack-level tags propagating to the resource during resource creation and update.
- topics string[] - The name of the Kafka topic.
- tumblingWindowInSeconds number - (Kinesis and DynamoDB Streams only) The duration in seconds of a processing window for DynamoDB and Kinesis Streams event sources. A value of 0 seconds indicates no tumbling window.
- function_name str - The name or ARN of the Lambda function. Name formats:
- Function name – MyFunction.
- Function ARN – arn:aws:lambda:us-west-2:123456789012:function:MyFunction.
- Version or Alias ARN – arn:aws:lambda:us-west-2:123456789012:function:MyFunction:PROD.
- Partial ARN – 123456789012:function:MyFunction.
The length constraint applies only to the full ARN. If you specify only the function name, it's limited to 64 characters in length.
- amazon_managed_kafka_event_source_config lambda_.EventSourceMappingAmazonManagedKafkaEventSourceConfigArgs - Specific configuration settings for an Amazon Managed Streaming for Apache Kafka (Amazon MSK) event source.
- batch_size int - The maximum number of records in each batch that Lambda pulls from your stream or queue and sends to your function. Lambda passes all of the records in the batch to the function in a single call, up to the payload limit for synchronous invocation (6 MB).
- Amazon Kinesis – Default 100. Max 10,000.
- Amazon DynamoDB Streams – Default 100. Max 10,000.
- Amazon Simple Queue Service – Default 10. For standard queues the max is 10,000. For FIFO queues the max is 10.
- Amazon Managed Streaming for Apache Kafka – Default 100. Max 10,000.
- Self-managed Apache Kafka – Default 100. Max 10,000.
- Amazon MQ (ActiveMQ and RabbitMQ) – Default 100. Max 10,000.
- DocumentDB – Default 100. Max 10,000.
- bisect_batch_on_function_error bool - (Kinesis and DynamoDB Streams only) If the function returns an error, split the batch in two and retry. The default value is false.
- destination_config lambda_.EventSourceMappingDestinationConfigArgs - (Kinesis, DynamoDB Streams, Amazon MSK, and self-managed Apache Kafka event sources only) A configuration object that specifies the destination of an event after Lambda processes it.
- document_db_event_source_config lambda_.EventSourceMappingDocumentDbEventSourceConfigArgs - Specific configuration settings for a DocumentDB event source.
- enabled bool - When true, the event source mapping is active. When false, Lambda pauses polling and invocation. Default: True.
- event_source_arn str - The Amazon Resource Name (ARN) of the event source.
- Amazon Kinesis – The ARN of the data stream or a stream consumer.
- Amazon DynamoDB Streams – The ARN of the stream.
- Amazon Simple Queue Service – The ARN of the queue.
- Amazon Managed Streaming for Apache Kafka – The ARN of the cluster or the ARN of the VPC connection (for cross-account event source mappings).
- Amazon MQ – The ARN of the broker.
- Amazon DocumentDB – The ARN of the DocumentDB change stream.
- filter_criteria lambda_.EventSourceMappingFilterCriteriaArgs - An object that defines the filter criteria that determine whether Lambda should process an event. For more information, see Lambda event filtering.
- function_response_types Sequence[lambda_.EventSourceMappingFunctionResponseTypesItem] - (Kinesis, DynamoDB Streams, and SQS) A list of current response type enums applied to the event source mapping. Valid Values: ReportBatchItemFailures.
- kms_key_arn str - The ARN of the AWS Key Management Service (AWS KMS) customer managed key that Lambda uses to encrypt your function's filter criteria.
- maximum_batching_window_in_seconds int - The maximum amount of time, in seconds, that Lambda spends gathering records before invoking the function. Default (Kinesis, DynamoDB, Amazon SQS event sources): 0. Default (Amazon MSK, Kafka, Amazon MQ, Amazon DocumentDB event sources): 500 ms. Related setting: For SQS event sources, when you set BatchSize to a value greater than 10, you must set MaximumBatchingWindowInSeconds to at least 1.
- maximum_record_age_in_seconds int - (Kinesis and DynamoDB Streams only) Discard records older than the specified age. The default value is -1, which sets the maximum age to infinite. When the value is set to infinite, Lambda never discards old records. The minimum valid value for maximum record age is 60s. Although values less than 60 and greater than -1 fall within the parameter's absolute range, they are not allowed.
- maximum_retry_attempts int - (Kinesis and DynamoDB Streams only) Discard records after the specified number of retries. The default value is -1, which sets the maximum number of retries to infinite. When MaximumRetryAttempts is infinite, Lambda retries failed records until the record expires in the event source.
- parallelization_factor int - (Kinesis and DynamoDB Streams only) The number of batches to process concurrently from each shard. The default value is 1.
- queues Sequence[str]
- (Amazon MQ) The name of the Amazon MQ broker destination queue to consume.
- scaling_config lambda_.EventSourceMappingScalingConfigArgs - (Amazon SQS only) The scaling configuration for the event source. For more information, see Configuring maximum concurrency for Amazon SQS event sources.
- self_managed_event_source lambda_.EventSourceMappingSelfManagedEventSourceArgs - The self-managed Apache Kafka cluster for your event source.
- self_managed_kafka_event_source_config lambda_.EventSourceMappingSelfManagedKafkaEventSourceConfigArgs - Specific configuration settings for a self-managed Apache Kafka event source.
- source_access_configurations Sequence[lambda_.EventSourceMappingSourceAccessConfigurationArgs] - An array of the authentication protocol, VPC components, or virtual host to secure and define your event source.
- starting_position str - The position in a stream from which to start reading. Required for Amazon Kinesis and Amazon DynamoDB.
- LATEST - Read only new records.
- TRIM_HORIZON - Process all available records.
- AT_TIMESTAMP - Specify a time from which to start reading records.
- starting_position_timestamp float - With StartingPosition set to AT_TIMESTAMP, the time from which to start reading, in Unix time seconds. StartingPositionTimestamp cannot be in the future.
- tags Sequence[TagArgs] - A list of tags to add to the event source mapping. You must have the lambda:TagResource, lambda:UntagResource, and lambda:ListTags permissions for your IAM principal to manage the AWS CloudFormation stack. If you don't have these permissions, there might be unexpected behavior with stack-level tags propagating to the resource during resource creation and update.
- topics Sequence[str] - The name of the Kafka topic.
- tumbling_window_in_seconds int - (Kinesis and DynamoDB Streams only) The duration in seconds of a processing window for DynamoDB and Kinesis Streams event sources. A value of 0 seconds indicates no tumbling window.
- functionName String - The name or ARN of the Lambda function. Name formats:
- Function name – MyFunction.
- Function ARN – arn:aws:lambda:us-west-2:123456789012:function:MyFunction.
- Version or Alias ARN – arn:aws:lambda:us-west-2:123456789012:function:MyFunction:PROD.
- Partial ARN – 123456789012:function:MyFunction.
The length constraint applies only to the full ARN. If you specify only the function name, it's limited to 64 characters in length.
- amazonManagedKafkaEventSourceConfig Property Map - Specific configuration settings for an Amazon Managed Streaming for Apache Kafka (Amazon MSK) event source.
- batchSize Number - The maximum number of records in each batch that Lambda pulls from your stream or queue and sends to your function. Lambda passes all of the records in the batch to the function in a single call, up to the payload limit for synchronous invocation (6 MB).
- Amazon Kinesis – Default 100. Max 10,000.
- Amazon DynamoDB Streams – Default 100. Max 10,000.
- Amazon Simple Queue Service – Default 10. For standard queues the max is 10,000. For FIFO queues the max is 10.
- Amazon Managed Streaming for Apache Kafka – Default 100. Max 10,000.
- Self-managed Apache Kafka – Default 100. Max 10,000.
- Amazon MQ (ActiveMQ and RabbitMQ) – Default 100. Max 10,000.
- DocumentDB – Default 100. Max 10,000.
- bisectBatchOnFunctionError Boolean - (Kinesis and DynamoDB Streams only) If the function returns an error, split the batch in two and retry. The default value is false.
- destinationConfig Property Map - (Kinesis, DynamoDB Streams, Amazon MSK, and self-managed Apache Kafka event sources only) A configuration object that specifies the destination of an event after Lambda processes it.
- documentDbEventSourceConfig Property Map - Specific configuration settings for a DocumentDB event source.
- enabled Boolean - When true, the event source mapping is active. When false, Lambda pauses polling and invocation. Default: True.
- eventSourceArn String - The Amazon Resource Name (ARN) of the event source.
- Amazon Kinesis – The ARN of the data stream or a stream consumer.
- Amazon DynamoDB Streams – The ARN of the stream.
- Amazon Simple Queue Service – The ARN of the queue.
- Amazon Managed Streaming for Apache Kafka – The ARN of the cluster or the ARN of the VPC connection (for cross-account event source mappings).
- Amazon MQ – The ARN of the broker.
- Amazon DocumentDB – The ARN of the DocumentDB change stream.
- filterCriteria Property Map - An object that defines the filter criteria that determine whether Lambda should process an event. For more information, see Lambda event filtering.
- functionResponseTypes List<"ReportBatchItemFailures"> - (Kinesis, DynamoDB Streams, and SQS) A list of current response type enums applied to the event source mapping. Valid Values: ReportBatchItemFailures.
- kmsKeyArn String - The ARN of the AWS Key Management Service (AWS KMS) customer managed key that Lambda uses to encrypt your function's filter criteria.
- maximumBatchingWindowInSeconds Number - The maximum amount of time, in seconds, that Lambda spends gathering records before invoking the function. Default (Kinesis, DynamoDB, Amazon SQS event sources): 0. Default (Amazon MSK, Kafka, Amazon MQ, Amazon DocumentDB event sources): 500 ms. Related setting: For SQS event sources, when you set BatchSize to a value greater than 10, you must set MaximumBatchingWindowInSeconds to at least 1.
- maximumRecordAgeInSeconds Number - (Kinesis and DynamoDB Streams only) Discard records older than the specified age. The default value is -1, which sets the maximum age to infinite. When the value is set to infinite, Lambda never discards old records. The minimum valid value for maximum record age is 60s. Although values less than 60 and greater than -1 fall within the parameter's absolute range, they are not allowed.
- maximumRetryAttempts Number - (Kinesis and DynamoDB Streams only) Discard records after the specified number of retries. The default value is -1, which sets the maximum number of retries to infinite. When MaximumRetryAttempts is infinite, Lambda retries failed records until the record expires in the event source.
- parallelizationFactor Number - (Kinesis and DynamoDB Streams only) The number of batches to process concurrently from each shard. The default value is 1.
- queues List<String>
- (Amazon MQ) The name of the Amazon MQ broker destination queue to consume.
- scalingConfig Property Map - (Amazon SQS only) The scaling configuration for the event source. For more information, see Configuring maximum concurrency for Amazon SQS event sources.
- selfManagedEventSource Property Map - The self-managed Apache Kafka cluster for your event source.
- selfManagedKafkaEventSourceConfig Property Map - Specific configuration settings for a self-managed Apache Kafka event source.
- sourceAccessConfigurations List<Property Map> - An array of the authentication protocol, VPC components, or virtual host to secure and define your event source.
- startingPosition String - The position in a stream from which to start reading. Required for Amazon Kinesis and Amazon DynamoDB.
- LATEST - Read only new records.
- TRIM_HORIZON - Process all available records.
- AT_TIMESTAMP - Specify a time from which to start reading records.
- startingPositionTimestamp Number - With StartingPosition set to AT_TIMESTAMP, the time from which to start reading, in Unix time seconds. StartingPositionTimestamp cannot be in the future.
- tags List<Property Map> - A list of tags to add to the event source mapping. You must have the lambda:TagResource, lambda:UntagResource, and lambda:ListTags permissions for your IAM principal to manage the AWS CloudFormation stack. If you don't have these permissions, there might be unexpected behavior with stack-level tags propagating to the resource during resource creation and update.
- topics List<String> - The name of the Kafka topic.
- tumblingWindowInSeconds Number - (Kinesis and DynamoDB Streams only) The duration in seconds of a processing window for DynamoDB and Kinesis Streams event sources. A value of 0 seconds indicates no tumbling window.
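To show how several of these inputs combine, the following TypeScript sketch wires a Kinesis stream to a function with explicit batching, retry, and failure-destination settings. It is illustrative only: the stream, function, and queue ARNs are placeholders, and the values simply demonstrate the stream-related options described above.

import * as aws_native from "@pulumi/aws-native";

// Illustrative Kinesis mapping exercising several stream-only options.
const streamMapping = new aws_native.lambda.EventSourceMapping("stream-mapping", {
    functionName: "arn:aws:lambda:us-west-2:123456789012:function:MyFunction",  // placeholder
    eventSourceArn: "arn:aws:kinesis:us-west-2:123456789012:stream/my-stream",  // placeholder
    startingPosition: "LATEST",
    batchSize: 400,
    maximumBatchingWindowInSeconds: 5,
    maximumRetryAttempts: 2,
    maximumRecordAgeInSeconds: 3600,
    bisectBatchOnFunctionError: true,
    parallelizationFactor: 2,
    destinationConfig: {
        onFailure: {
            destination: "arn:aws:sqs:us-west-2:123456789012:my-dlq", // placeholder failure destination
        },
    },
});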
Outputs
All input properties are implicitly available as output properties. Additionally, the EventSourceMapping resource produces the following output properties:
- AwsId string - The event source mapping's ID.
- EventSourceMappingArn string - The Amazon Resource Name (ARN) of the event source mapping.
- Id string
- The provider-assigned unique ID for this managed resource.
- AwsId string - The event source mapping's ID.
- EventSourceMappingArn string - The Amazon Resource Name (ARN) of the event source mapping.
- Id string
- The provider-assigned unique ID for this managed resource.
- awsId String - The event source mapping's ID.
- eventSourceMappingArn String - The Amazon Resource Name (ARN) of the event source mapping.
- id String
- The provider-assigned unique ID for this managed resource.
- awsId string - The event source mapping's ID.
- eventSourceMappingArn string - The Amazon Resource Name (ARN) of the event source mapping.
- id string
- The provider-assigned unique ID for this managed resource.
- aws_id str - The event source mapping's ID.
- event_source_mapping_arn str - The Amazon Resource Name (ARN) of the event source mapping.
- id str
- The provider-assigned unique ID for this managed resource.
- awsId String - The event source mapping's ID.
- eventSourceMappingArn String - The Amazon Resource Name (ARN) of the event source mapping.
- id String
- The provider-assigned unique ID for this managed resource.
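Assuming a mapping like the sketches above, these outputs can be exported from a TypeScript program as stack outputs:

// Export the mapping's ID and ARN.
export const mappingId = streamMapping.awsId;
export const mappingArn = streamMapping.eventSourceMappingArn;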
Supporting Types
EventSourceMappingAmazonManagedKafkaEventSourceConfig, EventSourceMappingAmazonManagedKafkaEventSourceConfigArgs
- ConsumerGroupId string - The identifier for the Kafka consumer group to join. The consumer group ID must be unique among all your Kafka event sources. After creating a Kafka event source mapping with the consumer group ID specified, you cannot update this value. For more information, see Customizable consumer group ID.
- ConsumerGroupId string - The identifier for the Kafka consumer group to join. The consumer group ID must be unique among all your Kafka event sources. After creating a Kafka event source mapping with the consumer group ID specified, you cannot update this value. For more information, see Customizable consumer group ID.
- consumerGroupId String - The identifier for the Kafka consumer group to join. The consumer group ID must be unique among all your Kafka event sources. After creating a Kafka event source mapping with the consumer group ID specified, you cannot update this value. For more information, see Customizable consumer group ID.
- consumerGroupId string - The identifier for the Kafka consumer group to join. The consumer group ID must be unique among all your Kafka event sources. After creating a Kafka event source mapping with the consumer group ID specified, you cannot update this value. For more information, see Customizable consumer group ID.
- consumer_group_id str - The identifier for the Kafka consumer group to join. The consumer group ID must be unique among all your Kafka event sources. After creating a Kafka event source mapping with the consumer group ID specified, you cannot update this value. For more information, see Customizable consumer group ID.
- consumerGroupId String - The identifier for the Kafka consumer group to join. The consumer group ID must be unique among all your Kafka event sources. After creating a Kafka event source mapping with the consumer group ID specified, you cannot update this value. For more information, see Customizable consumer group ID.
EventSourceMappingDestinationConfig, EventSourceMappingDestinationConfigArgs
- OnFailure Pulumi.AwsNative.Lambda.Inputs.EventSourceMappingOnFailure - The destination configuration for failed invocations.
- OnFailure EventSourceMappingOnFailure - The destination configuration for failed invocations.
- onFailure EventSourceMappingOnFailure - The destination configuration for failed invocations.
- onFailure EventSourceMappingOnFailure - The destination configuration for failed invocations.
- on_failure lambda_.EventSourceMappingOnFailure - The destination configuration for failed invocations.
- onFailure Property Map - The destination configuration for failed invocations.
EventSourceMappingDocumentDbEventSourceConfig, EventSourceMappingDocumentDbEventSourceConfigArgs
- CollectionName string - The name of the collection to consume within the database. If you do not specify a collection, Lambda consumes all collections.
- DatabaseName string - The name of the database to consume within the DocumentDB cluster.
- FullDocument Pulumi.AwsNative.Lambda.EventSourceMappingDocumentDbEventSourceConfigFullDocument - Determines what DocumentDB sends to your event stream during document update operations. If set to UpdateLookup, DocumentDB sends a delta describing the changes, along with a copy of the entire document. Otherwise, DocumentDB sends only a partial document that contains the changes.
- CollectionName string - The name of the collection to consume within the database. If you do not specify a collection, Lambda consumes all collections.
- DatabaseName string - The name of the database to consume within the DocumentDB cluster.
- FullDocument EventSourceMappingDocumentDbEventSourceConfigFullDocument - Determines what DocumentDB sends to your event stream during document update operations. If set to UpdateLookup, DocumentDB sends a delta describing the changes, along with a copy of the entire document. Otherwise, DocumentDB sends only a partial document that contains the changes.
- collectionName String - The name of the collection to consume within the database. If you do not specify a collection, Lambda consumes all collections.
- databaseName String - The name of the database to consume within the DocumentDB cluster.
- fullDocument EventSourceMappingDocumentDbEventSourceConfigFullDocument - Determines what DocumentDB sends to your event stream during document update operations. If set to UpdateLookup, DocumentDB sends a delta describing the changes, along with a copy of the entire document. Otherwise, DocumentDB sends only a partial document that contains the changes.
- collectionName string - The name of the collection to consume within the database. If you do not specify a collection, Lambda consumes all collections.
- databaseName string - The name of the database to consume within the DocumentDB cluster.
- fullDocument EventSourceMappingDocumentDbEventSourceConfigFullDocument - Determines what DocumentDB sends to your event stream during document update operations. If set to UpdateLookup, DocumentDB sends a delta describing the changes, along with a copy of the entire document. Otherwise, DocumentDB sends only a partial document that contains the changes.
- collection_name str - The name of the collection to consume within the database. If you do not specify a collection, Lambda consumes all collections.
- database_name str - The name of the database to consume within the DocumentDB cluster.
- full_document lambda_.EventSourceMappingDocumentDbEventSourceConfigFullDocument - Determines what DocumentDB sends to your event stream during document update operations. If set to UpdateLookup, DocumentDB sends a delta describing the changes, along with a copy of the entire document. Otherwise, DocumentDB sends only a partial document that contains the changes.
- collectionName String - The name of the collection to consume within the database. If you do not specify a collection, Lambda consumes all collections.
- databaseName String - The name of the database to consume within the DocumentDB cluster.
- fullDocument "UpdateLookup" | "Default" - Determines what DocumentDB sends to your event stream during document update operations. If set to UpdateLookup, DocumentDB sends a delta describing the changes, along with a copy of the entire document. Otherwise, DocumentDB sends only a partial document that contains the changes.
EventSourceMappingDocumentDbEventSourceConfigFullDocument, EventSourceMappingDocumentDbEventSourceConfigFullDocumentArgs
- UpdateLookup - UpdateLookup
- Default - Default
- EventSourceMappingDocumentDbEventSourceConfigFullDocumentUpdateLookup - UpdateLookup
- EventSourceMappingDocumentDbEventSourceConfigFullDocumentDefault - Default
- UpdateLookup - UpdateLookup
- Default - Default
- UpdateLookup - UpdateLookup
- Default - Default
- UPDATE_LOOKUP - UpdateLookup
- DEFAULT - Default
- "UpdateLookup" - UpdateLookup
- "Default" - Default
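As a rough sketch of how these DocumentDB settings fit into a mapping, the example below reads a DocumentDB change stream and asks for the full document on updates. The cluster ARN and names are placeholders, and a real DocumentDB event source also needs appropriate source access configuration and networking, which are omitted here.

import * as aws_native from "@pulumi/aws-native";

// Illustrative DocumentDB change-stream mapping (ARN and names are placeholders).
const docdbMapping = new aws_native.lambda.EventSourceMapping("docdb-mapping", {
    functionName: "arn:aws:lambda:us-west-2:123456789012:function:MyFunction",
    eventSourceArn: "arn:aws:rds:us-west-2:123456789012:cluster:my-docdb-cluster",
    startingPosition: "LATEST",
    documentDbEventSourceConfig: {
        databaseName: "mydb",
        collectionName: "orders",     // omit to consume all collections
        fullDocument: "UpdateLookup", // send the full document along with the change delta
    },
});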
EventSourceMappingEndpoints, EventSourceMappingEndpointsArgs
- KafkaBootstrapServers List<string> - The list of bootstrap servers for your Kafka brokers in the following format: "KafkaBootstrapServers": ["abc.xyz.com:xxxx","abc2.xyz.com:xxxx"].
- KafkaBootstrapServers []string - The list of bootstrap servers for your Kafka brokers in the following format: "KafkaBootstrapServers": ["abc.xyz.com:xxxx","abc2.xyz.com:xxxx"].
- kafkaBootstrapServers List<String> - The list of bootstrap servers for your Kafka brokers in the following format: "KafkaBootstrapServers": ["abc.xyz.com:xxxx","abc2.xyz.com:xxxx"].
- kafkaBootstrapServers string[] - The list of bootstrap servers for your Kafka brokers in the following format: "KafkaBootstrapServers": ["abc.xyz.com:xxxx","abc2.xyz.com:xxxx"].
- kafka_bootstrap_servers Sequence[str] - The list of bootstrap servers for your Kafka brokers in the following format: "KafkaBootstrapServers": ["abc.xyz.com:xxxx","abc2.xyz.com:xxxx"].
- kafkaBootstrapServers List<String> - The list of bootstrap servers for your Kafka brokers in the following format: "KafkaBootstrapServers": ["abc.xyz.com:xxxx","abc2.xyz.com:xxxx"].
EventSourceMappingFilter, EventSourceMappingFilterArgs
- Pattern string
- A filter pattern. For more information on the syntax of a filter pattern, see Filter rule syntax.
- Pattern string
- A filter pattern. For more information on the syntax of a filter pattern, see Filter rule syntax.
- pattern String
- A filter pattern. For more information on the syntax of a filter pattern, see Filter rule syntax.
- pattern string
- A filter pattern. For more information on the syntax of a filter pattern, see Filter rule syntax.
- pattern str
- A filter pattern. For more information on the syntax of a filter pattern, see Filter rule syntax.
- pattern String
- A filter pattern. For more information on the syntax of a filter pattern, see Filter rule syntax.
EventSourceMappingFilterCriteria, EventSourceMappingFilterCriteriaArgs
- Filters List<Pulumi.AwsNative.Lambda.Inputs.EventSourceMappingFilter> - A list of filters.
- Filters []EventSourceMappingFilter - A list of filters.
- filters List<EventSourceMappingFilter> - A list of filters.
- filters EventSourceMappingFilter[] - A list of filters.
- filters Sequence[lambda_.EventSourceMappingFilter] - A list of filters.
- filters List<Property Map> - A list of filters.
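For example, the following sketch (placeholder ARNs) attaches a single filter so the function is only invoked for SQS messages whose JSON body has a matching field; the pattern string uses Lambda's event filtering rule syntax.

import * as aws_native from "@pulumi/aws-native";

// Only deliver SQS messages whose JSON body contains {"status": "ready"}.
const filteredMapping = new aws_native.lambda.EventSourceMapping("filtered-mapping", {
    functionName: "arn:aws:lambda:us-west-2:123456789012:function:MyFunction", // placeholder
    eventSourceArn: "arn:aws:sqs:us-west-2:123456789012:my-queue",             // placeholder
    filterCriteria: {
        filters: [
            { pattern: JSON.stringify({ body: { status: ["ready"] } }) },
        ],
    },
});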
EventSourceMappingFunctionResponseTypesItem, EventSourceMappingFunctionResponseTypesItemArgs
- ReportBatchItemFailures - ReportBatchItemFailures
- EventSourceMappingFunctionResponseTypesItemReportBatchItemFailures - ReportBatchItemFailures
- ReportBatchItemFailures - ReportBatchItemFailures
- ReportBatchItemFailures - ReportBatchItemFailures
- REPORT_BATCH_ITEM_FAILURES - ReportBatchItemFailures
- "ReportBatchItemFailures" - ReportBatchItemFailures
EventSourceMappingOnFailure, EventSourceMappingOnFailureArgs
- Destination string
- The Amazon Resource Name (ARN) of the destination resource. To retain records of asynchronous invocations, you can configure an Amazon SNS topic, Amazon SQS queue, Lambda function, or Amazon EventBridge event bus as the destination. To retain records of failed invocations from Kinesis and DynamoDB event sources, you can configure an Amazon SNS topic or Amazon SQS queue as the destination. To retain records of failed invocations from self-managed Kafka or Amazon MSK, you can configure an Amazon SNS topic, Amazon SQS queue, or Amazon S3 bucket as the destination.
- Destination string
- The Amazon Resource Name (ARN) of the destination resource. To retain records of asynchronous invocations, you can configure an Amazon SNS topic, Amazon SQS queue, Lambda function, or Amazon EventBridge event bus as the destination. To retain records of failed invocations from Kinesis and DynamoDB event sources, you can configure an Amazon SNS topic or Amazon SQS queue as the destination. To retain records of failed invocations from self-managed Kafka or Amazon MSK, you can configure an Amazon SNS topic, Amazon SQS queue, or Amazon S3 bucket as the destination.
- destination String
- The Amazon Resource Name (ARN) of the destination resource. To retain records of asynchronous invocations, you can configure an Amazon SNS topic, Amazon SQS queue, Lambda function, or Amazon EventBridge event bus as the destination. To retain records of failed invocations from Kinesis and DynamoDB event sources, you can configure an Amazon SNS topic or Amazon SQS queue as the destination. To retain records of failed invocations from self-managed Kafka or Amazon MSK, you can configure an Amazon SNS topic, Amazon SQS queue, or Amazon S3 bucket as the destination.
- destination string
- The Amazon Resource Name (ARN) of the destination resource. To retain records of asynchronous invocations, you can configure an Amazon SNS topic, Amazon SQS queue, Lambda function, or Amazon EventBridge event bus as the destination. To retain records of failed invocations from Kinesis and DynamoDB event sources, you can configure an Amazon SNS topic or Amazon SQS queue as the destination. To retain records of failed invocations from self-managed Kafka or Amazon MSK, you can configure an Amazon SNS topic, Amazon SQS queue, or Amazon S3 bucket as the destination.
- destination str
- The Amazon Resource Name (ARN) of the destination resource. To retain records of asynchronous invocations, you can configure an Amazon SNS topic, Amazon SQS queue, Lambda function, or Amazon EventBridge event bus as the destination. To retain records of failed invocations from Kinesis and DynamoDB event sources, you can configure an Amazon SNS topic or Amazon SQS queue as the destination. To retain records of failed invocations from self-managed Kafka or Amazon MSK, you can configure an Amazon SNS topic, Amazon SQS queue, or Amazon S3 bucket as the destination.
- destination String
- The Amazon Resource Name (ARN) of the destination resource. To retain records of asynchronous invocations, you can configure an Amazon SNS topic, Amazon SQS queue, Lambda function, or Amazon EventBridge event bus as the destination. To retain records of failed invocations from Kinesis and DynamoDB event sources, you can configure an Amazon SNS topic or Amazon SQS queue as the destination. To retain records of failed invocations from self-managed Kafka or Amazon MSK, you can configure an Amazon SNS topic, Amazon SQS queue, or Amazon S3 bucket as the destination.
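The sketch below shows one plausible way to route failed Kinesis records to an SQS queue through the destination described above. It assumes the destination config wrapper (EventSourceMappingDestinationConfigArgs) exposes the on-failure destination as on_failure; ARNs and names are placeholders.
import pulumi_aws_native as aws_native

# Failed Kinesis batches are retried twice, then their records are sent to an SQS queue.
kinesis_mapping = aws_native.lambda_.EventSourceMapping(
    "kinesis-mapping",
    function_name="my-function",
    event_source_arn="arn:aws:kinesis:us-east-1:111122223333:stream/my-stream",
    starting_position="TRIM_HORIZON",
    maximum_retry_attempts=2,
    destination_config=aws_native.lambda_.EventSourceMappingDestinationConfigArgs(
        # The on_failure field name is an assumption; it carries the OnFailure destination ARN.
        on_failure=aws_native.lambda_.EventSourceMappingOnFailureArgs(
            destination="arn:aws:sqs:us-east-1:111122223333:my-dlq",
        ),
    ),
)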
EventSourceMappingScalingConfig, EventSourceMappingScalingConfigArgs
- MaximumConcurrency int
- Limits the number of concurrent instances that the SQS event source can invoke.
- MaximumConcurrency int
- Limits the number of concurrent instances that the SQS event source can invoke.
- maximumConcurrency Integer
- Limits the number of concurrent instances that the SQS event source can invoke.
- maximumConcurrency number
- Limits the number of concurrent instances that the SQS event source can invoke.
- maximum_concurrency int
- Limits the number of concurrent instances that the SQS event source can invoke.
- maximumConcurrency Number
- Limits the number of concurrent instances that the SQS event source can invoke.
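A minimal Python sketch of this setting, with placeholder names: the mapping below caps the SQS event source at five concurrent function instances.
import pulumi_aws_native as aws_native

# maximum_concurrency limits how many function instances the SQS event source may invoke at once.
throttled_mapping = aws_native.lambda_.EventSourceMapping(
    "throttled-sqs-mapping",
    function_name="my-function",
    event_source_arn="arn:aws:sqs:us-east-1:111122223333:my-queue",
    scaling_config=aws_native.lambda_.EventSourceMappingScalingConfigArgs(
        maximum_concurrency=5,
    ),
)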
EventSourceMappingSelfManagedEventSource, EventSourceMappingSelfManagedEventSourceArgs
- Endpoints Pulumi.AwsNative.Lambda.Inputs.EventSourceMappingEndpoints
- The list of bootstrap servers for your Kafka brokers in the following format: "KafkaBootstrapServers": ["abc.xyz.com:xxxx","abc2.xyz.com:xxxx"].
- Endpoints EventSourceMappingEndpoints
- The list of bootstrap servers for your Kafka brokers in the following format: "KafkaBootstrapServers": ["abc.xyz.com:xxxx","abc2.xyz.com:xxxx"].
- endpoints EventSourceMappingEndpoints
- The list of bootstrap servers for your Kafka brokers in the following format: "KafkaBootstrapServers": ["abc.xyz.com:xxxx","abc2.xyz.com:xxxx"].
- endpoints EventSourceMappingEndpoints
- The list of bootstrap servers for your Kafka brokers in the following format: "KafkaBootstrapServers": ["abc.xyz.com:xxxx","abc2.xyz.com:xxxx"].
- endpoints lambda_.EventSourceMappingEndpoints
- The list of bootstrap servers for your Kafka brokers in the following format: "KafkaBootstrapServers": ["abc.xyz.com:xxxx","abc2.xyz.com:xxxx"].
- endpoints Property Map
- The list of bootstrap servers for your Kafka brokers in the following format: "KafkaBootstrapServers": ["abc.xyz.com:xxxx","abc2.xyz.com:xxxx"].
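In Python, the endpoints object above might look roughly like the sketch below. The EventSourceMappingEndpointsArgs class and its kafka_bootstrap_servers field are assumptions that mirror the KafkaBootstrapServers key shown in the description; the broker addresses are placeholders. This args object is passed to the mapping through the self_managed_event_source argument, as in the sketch after the next section.
import pulumi_aws_native as aws_native

# Bootstrap servers for a self-managed Kafka cluster (host:port pairs are placeholders).
self_managed_source = aws_native.lambda_.EventSourceMappingSelfManagedEventSourceArgs(
    endpoints=aws_native.lambda_.EventSourceMappingEndpointsArgs(  # assumed Args class
        kafka_bootstrap_servers=["abc.xyz.com:9092", "abc2.xyz.com:9092"],
    ),
)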
EventSourceMappingSelfManagedKafkaEventSourceConfig, EventSourceMappingSelfManagedKafkaEventSourceConfigArgs
- ConsumerGroupId string
- The identifier for the Kafka consumer group to join. The consumer group ID must be unique among all your Kafka event sources. After creating a Kafka event source mapping with the consumer group ID specified, you cannot update this value. For more information, see Customizable consumer group ID.
- ConsumerGroupId string
- The identifier for the Kafka consumer group to join. The consumer group ID must be unique among all your Kafka event sources. After creating a Kafka event source mapping with the consumer group ID specified, you cannot update this value. For more information, see Customizable consumer group ID.
- consumerGroupId String
- The identifier for the Kafka consumer group to join. The consumer group ID must be unique among all your Kafka event sources. After creating a Kafka event source mapping with the consumer group ID specified, you cannot update this value. For more information, see Customizable consumer group ID.
- consumerGroupId string
- The identifier for the Kafka consumer group to join. The consumer group ID must be unique among all your Kafka event sources. After creating a Kafka event source mapping with the consumer group ID specified, you cannot update this value. For more information, see Customizable consumer group ID.
- consumer_group_id str
- The identifier for the Kafka consumer group to join. The consumer group ID must be unique among all your Kafka event sources. After creating a Kafka event source mapping with the consumer group ID specified, you cannot update this value. For more information, see Customizable consumer group ID.
- consumerGroupId String
- The identifier for the Kafka consumer group to join. The consumer group ID must be unique among all your Kafka event sources. After creating a Kafka event source mapping with the consumer group ID specified, you cannot update this value. For more information, see Customizable consumer group ID.
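Putting the last two sections together, a self-managed Kafka mapping with a pinned consumer group ID might look like the hedged sketch below. Broker addresses, topic, and group ID are placeholders, and a real cluster would usually also need source_access_configurations (see the sketch after the SourceAccessConfiguration properties below) or VPC settings.
import pulumi_aws_native as aws_native

# Self-managed Kafka mapping; consumer_group_id cannot be changed after creation.
kafka_mapping = aws_native.lambda_.EventSourceMapping(
    "kafka-mapping",
    function_name="my-function",
    topics=["orders"],
    starting_position="LATEST",
    self_managed_event_source=aws_native.lambda_.EventSourceMappingSelfManagedEventSourceArgs(
        endpoints=aws_native.lambda_.EventSourceMappingEndpointsArgs(  # assumed Args class
            kafka_bootstrap_servers=["abc.xyz.com:9092", "abc2.xyz.com:9092"],
        ),
    ),
    self_managed_kafka_event_source_config=aws_native.lambda_.EventSourceMappingSelfManagedKafkaEventSourceConfigArgs(
        consumer_group_id="my-consumer-group",
    ),
)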
EventSourceMappingSourceAccessConfiguration, EventSourceMappingSourceAccessConfigurationArgs
- Type Pulumi.AwsNative.Lambda.EventSourceMappingSourceAccessConfigurationType
- The type of authentication protocol, VPC components, or virtual host for your event source. For example: "Type":"SASL_SCRAM_512_AUTH".
  - BASIC_AUTH – (Amazon MQ) The AWS Secrets Manager secret that stores your broker credentials.
  - BASIC_AUTH – (Self-managed Apache Kafka) The Secrets Manager ARN of your secret key used for SASL/PLAIN authentication of your Apache Kafka brokers.
  - VPC_SUBNET – (Self-managed Apache Kafka) The subnets associated with your VPC. Lambda connects to these subnets to fetch data from your self-managed Apache Kafka cluster.
  - VPC_SECURITY_GROUP – (Self-managed Apache Kafka) The VPC security group used to manage access to your self-managed Apache Kafka brokers.
  - SASL_SCRAM_256_AUTH – (Self-managed Apache Kafka) The Secrets Manager ARN of your secret key used for SASL SCRAM-256 authentication of your self-managed Apache Kafka brokers.
  - SASL_SCRAM_512_AUTH – (Amazon MSK, Self-managed Apache Kafka) The Secrets Manager ARN of your secret key used for SASL SCRAM-512 authentication of your self-managed Apache Kafka brokers.
  - VIRTUAL_HOST – (RabbitMQ) The name of the virtual host in your RabbitMQ broker. Lambda uses this RabbitMQ host as the event source. This property cannot be specified in an UpdateEventSourceMapping API call.
  - CLIENT_CERTIFICATE_TLS_AUTH – (Amazon MSK, self-managed Apache Kafka) The Secrets Manager ARN of your secret key containing the certificate chain (X.509 PEM), private key (PKCS#8 PEM), and private key password (optional) used for mutual TLS authentication of your MSK/Apache Kafka brokers.
  - SERVER_ROOT_CA_CERTIFICATE – (Self-managed Apache Kafka) The Secrets Manager ARN of your secret key containing the root CA certificate (X.509 PEM) used for TLS encryption of your Apache Kafka brokers.
- Uri string
- The value for your chosen configuration in Type. For example: "URI": "arn:aws:secretsmanager:us-east-1:01234567890:secret:MyBrokerSecretName".
- Type EventSourceMappingSourceAccessConfigurationType
- The type of authentication protocol, VPC components, or virtual host for your event source. For example: "Type":"SASL_SCRAM_512_AUTH".
  - BASIC_AUTH – (Amazon MQ) The AWS Secrets Manager secret that stores your broker credentials.
  - BASIC_AUTH – (Self-managed Apache Kafka) The Secrets Manager ARN of your secret key used for SASL/PLAIN authentication of your Apache Kafka brokers.
  - VPC_SUBNET – (Self-managed Apache Kafka) The subnets associated with your VPC. Lambda connects to these subnets to fetch data from your self-managed Apache Kafka cluster.
  - VPC_SECURITY_GROUP – (Self-managed Apache Kafka) The VPC security group used to manage access to your self-managed Apache Kafka brokers.
  - SASL_SCRAM_256_AUTH – (Self-managed Apache Kafka) The Secrets Manager ARN of your secret key used for SASL SCRAM-256 authentication of your self-managed Apache Kafka brokers.
  - SASL_SCRAM_512_AUTH – (Amazon MSK, Self-managed Apache Kafka) The Secrets Manager ARN of your secret key used for SASL SCRAM-512 authentication of your self-managed Apache Kafka brokers.
  - VIRTUAL_HOST – (RabbitMQ) The name of the virtual host in your RabbitMQ broker. Lambda uses this RabbitMQ host as the event source. This property cannot be specified in an UpdateEventSourceMapping API call.
  - CLIENT_CERTIFICATE_TLS_AUTH – (Amazon MSK, self-managed Apache Kafka) The Secrets Manager ARN of your secret key containing the certificate chain (X.509 PEM), private key (PKCS#8 PEM), and private key password (optional) used for mutual TLS authentication of your MSK/Apache Kafka brokers.
  - SERVER_ROOT_CA_CERTIFICATE – (Self-managed Apache Kafka) The Secrets Manager ARN of your secret key containing the root CA certificate (X.509 PEM) used for TLS encryption of your Apache Kafka brokers.
- Uri string
- The value for your chosen configuration in Type. For example: "URI": "arn:aws:secretsmanager:us-east-1:01234567890:secret:MyBrokerSecretName".
- type EventSourceMappingSourceAccessConfigurationType
- The type of authentication protocol, VPC components, or virtual host for your event source. For example: "Type":"SASL_SCRAM_512_AUTH".
  - BASIC_AUTH – (Amazon MQ) The AWS Secrets Manager secret that stores your broker credentials.
  - BASIC_AUTH – (Self-managed Apache Kafka) The Secrets Manager ARN of your secret key used for SASL/PLAIN authentication of your Apache Kafka brokers.
  - VPC_SUBNET – (Self-managed Apache Kafka) The subnets associated with your VPC. Lambda connects to these subnets to fetch data from your self-managed Apache Kafka cluster.
  - VPC_SECURITY_GROUP – (Self-managed Apache Kafka) The VPC security group used to manage access to your self-managed Apache Kafka brokers.
  - SASL_SCRAM_256_AUTH – (Self-managed Apache Kafka) The Secrets Manager ARN of your secret key used for SASL SCRAM-256 authentication of your self-managed Apache Kafka brokers.
  - SASL_SCRAM_512_AUTH – (Amazon MSK, Self-managed Apache Kafka) The Secrets Manager ARN of your secret key used for SASL SCRAM-512 authentication of your self-managed Apache Kafka brokers.
  - VIRTUAL_HOST – (RabbitMQ) The name of the virtual host in your RabbitMQ broker. Lambda uses this RabbitMQ host as the event source. This property cannot be specified in an UpdateEventSourceMapping API call.
  - CLIENT_CERTIFICATE_TLS_AUTH – (Amazon MSK, self-managed Apache Kafka) The Secrets Manager ARN of your secret key containing the certificate chain (X.509 PEM), private key (PKCS#8 PEM), and private key password (optional) used for mutual TLS authentication of your MSK/Apache Kafka brokers.
  - SERVER_ROOT_CA_CERTIFICATE – (Self-managed Apache Kafka) The Secrets Manager ARN of your secret key containing the root CA certificate (X.509 PEM) used for TLS encryption of your Apache Kafka brokers.
- uri String
- The value for your chosen configuration in Type. For example: "URI": "arn:aws:secretsmanager:us-east-1:01234567890:secret:MyBrokerSecretName".
- type EventSourceMappingSourceAccessConfigurationType
- The type of authentication protocol, VPC components, or virtual host for your event source. For example: "Type":"SASL_SCRAM_512_AUTH".
  - BASIC_AUTH – (Amazon MQ) The AWS Secrets Manager secret that stores your broker credentials.
  - BASIC_AUTH – (Self-managed Apache Kafka) The Secrets Manager ARN of your secret key used for SASL/PLAIN authentication of your Apache Kafka brokers.
  - VPC_SUBNET – (Self-managed Apache Kafka) The subnets associated with your VPC. Lambda connects to these subnets to fetch data from your self-managed Apache Kafka cluster.
  - VPC_SECURITY_GROUP – (Self-managed Apache Kafka) The VPC security group used to manage access to your self-managed Apache Kafka brokers.
  - SASL_SCRAM_256_AUTH – (Self-managed Apache Kafka) The Secrets Manager ARN of your secret key used for SASL SCRAM-256 authentication of your self-managed Apache Kafka brokers.
  - SASL_SCRAM_512_AUTH – (Amazon MSK, Self-managed Apache Kafka) The Secrets Manager ARN of your secret key used for SASL SCRAM-512 authentication of your self-managed Apache Kafka brokers.
  - VIRTUAL_HOST – (RabbitMQ) The name of the virtual host in your RabbitMQ broker. Lambda uses this RabbitMQ host as the event source. This property cannot be specified in an UpdateEventSourceMapping API call.
  - CLIENT_CERTIFICATE_TLS_AUTH – (Amazon MSK, self-managed Apache Kafka) The Secrets Manager ARN of your secret key containing the certificate chain (X.509 PEM), private key (PKCS#8 PEM), and private key password (optional) used for mutual TLS authentication of your MSK/Apache Kafka brokers.
  - SERVER_ROOT_CA_CERTIFICATE – (Self-managed Apache Kafka) The Secrets Manager ARN of your secret key containing the root CA certificate (X.509 PEM) used for TLS encryption of your Apache Kafka brokers.
- uri string
- The value for your chosen configuration in Type. For example: "URI": "arn:aws:secretsmanager:us-east-1:01234567890:secret:MyBrokerSecretName".
- type lambda_.EventSourceMappingSourceAccessConfigurationType
- The type of authentication protocol, VPC components, or virtual host for your event source. For example: "Type":"SASL_SCRAM_512_AUTH".
  - BASIC_AUTH – (Amazon MQ) The AWS Secrets Manager secret that stores your broker credentials.
  - BASIC_AUTH – (Self-managed Apache Kafka) The Secrets Manager ARN of your secret key used for SASL/PLAIN authentication of your Apache Kafka brokers.
  - VPC_SUBNET – (Self-managed Apache Kafka) The subnets associated with your VPC. Lambda connects to these subnets to fetch data from your self-managed Apache Kafka cluster.
  - VPC_SECURITY_GROUP – (Self-managed Apache Kafka) The VPC security group used to manage access to your self-managed Apache Kafka brokers.
  - SASL_SCRAM_256_AUTH – (Self-managed Apache Kafka) The Secrets Manager ARN of your secret key used for SASL SCRAM-256 authentication of your self-managed Apache Kafka brokers.
  - SASL_SCRAM_512_AUTH – (Amazon MSK, Self-managed Apache Kafka) The Secrets Manager ARN of your secret key used for SASL SCRAM-512 authentication of your self-managed Apache Kafka brokers.
  - VIRTUAL_HOST – (RabbitMQ) The name of the virtual host in your RabbitMQ broker. Lambda uses this RabbitMQ host as the event source. This property cannot be specified in an UpdateEventSourceMapping API call.
  - CLIENT_CERTIFICATE_TLS_AUTH – (Amazon MSK, self-managed Apache Kafka) The Secrets Manager ARN of your secret key containing the certificate chain (X.509 PEM), private key (PKCS#8 PEM), and private key password (optional) used for mutual TLS authentication of your MSK/Apache Kafka brokers.
  - SERVER_ROOT_CA_CERTIFICATE – (Self-managed Apache Kafka) The Secrets Manager ARN of your secret key containing the root CA certificate (X.509 PEM) used for TLS encryption of your Apache Kafka brokers.
- uri str
- The value for your chosen configuration in Type. For example: "URI": "arn:aws:secretsmanager:us-east-1:01234567890:secret:MyBrokerSecretName".
- type "BASIC_AUTH" | "VPC_SUBNET" | "VPC_SECURITY_GROUP" | "SASL_SCRAM_512_AUTH" | "SASL_SCRAM_256_AUTH" | "VIRTUAL_HOST" | "CLIENT_CERTIFICATE_TLS_AUTH" | "SERVER_ROOT_CA_CERTIFICATE"
- The type of authentication protocol, VPC components, or virtual host for your event source. For example: "Type":"SASL_SCRAM_512_AUTH".
  - BASIC_AUTH – (Amazon MQ) The AWS Secrets Manager secret that stores your broker credentials.
  - BASIC_AUTH – (Self-managed Apache Kafka) The Secrets Manager ARN of your secret key used for SASL/PLAIN authentication of your Apache Kafka brokers.
  - VPC_SUBNET – (Self-managed Apache Kafka) The subnets associated with your VPC. Lambda connects to these subnets to fetch data from your self-managed Apache Kafka cluster.
  - VPC_SECURITY_GROUP – (Self-managed Apache Kafka) The VPC security group used to manage access to your self-managed Apache Kafka brokers.
  - SASL_SCRAM_256_AUTH – (Self-managed Apache Kafka) The Secrets Manager ARN of your secret key used for SASL SCRAM-256 authentication of your self-managed Apache Kafka brokers.
  - SASL_SCRAM_512_AUTH – (Amazon MSK, Self-managed Apache Kafka) The Secrets Manager ARN of your secret key used for SASL SCRAM-512 authentication of your self-managed Apache Kafka brokers.
  - VIRTUAL_HOST – (RabbitMQ) The name of the virtual host in your RabbitMQ broker. Lambda uses this RabbitMQ host as the event source. This property cannot be specified in an UpdateEventSourceMapping API call.
  - CLIENT_CERTIFICATE_TLS_AUTH – (Amazon MSK, self-managed Apache Kafka) The Secrets Manager ARN of your secret key containing the certificate chain (X.509 PEM), private key (PKCS#8 PEM), and private key password (optional) used for mutual TLS authentication of your MSK/Apache Kafka brokers.
  - SERVER_ROOT_CA_CERTIFICATE – (Self-managed Apache Kafka) The Secrets Manager ARN of your secret key containing the root CA certificate (X.509 PEM) used for TLS encryption of your Apache Kafka brokers.
- uri String
- The value for your chosen configuration in Type. For example: "URI": "arn:aws:secretsmanager:us-east-1:01234567890:secret:MyBrokerSecretName".
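As a hedged illustration of the Type/Uri pairing, the Python sketch below points a Kafka mapping at SASL/SCRAM-512 credentials stored in Secrets Manager; the secret ARN is a placeholder, and the enum member name follows the Python listing in the next section.
import pulumi_aws_native as aws_native

# SASL/SCRAM-512 credentials read from a Secrets Manager secret (ARN is a placeholder).
source_access = [
    aws_native.lambda_.EventSourceMappingSourceAccessConfigurationArgs(
        type=aws_native.lambda_.EventSourceMappingSourceAccessConfigurationType.SASL_SCRAM512_AUTH,
        uri="arn:aws:secretsmanager:us-east-1:111122223333:secret:MyBrokerSecretName",
    ),
]
# Passed to the mapping as source_access_configurations=source_access.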
EventSourceMappingSourceAccessConfigurationType, EventSourceMappingSourceAccessConfigurationTypeArgs
- BasicAuth
- BASIC_AUTH
- VpcSubnet
- VPC_SUBNET
- VpcSecurityGroup
- VPC_SECURITY_GROUP
- SaslScram512Auth
- SASL_SCRAM_512_AUTH
- SaslScram256Auth
- SASL_SCRAM_256_AUTH
- VirtualHost
- VIRTUAL_HOST
- ClientCertificateTlsAuth
- CLIENT_CERTIFICATE_TLS_AUTH
- ServerRootCaCertificate
- SERVER_ROOT_CA_CERTIFICATE
- EventSourceMappingSourceAccessConfigurationTypeBasicAuth
- BASIC_AUTH
- EventSourceMappingSourceAccessConfigurationTypeVpcSubnet
- VPC_SUBNET
- EventSourceMappingSourceAccessConfigurationTypeVpcSecurityGroup
- VPC_SECURITY_GROUP
- EventSourceMappingSourceAccessConfigurationTypeSaslScram512Auth
- SASL_SCRAM_512_AUTH
- EventSourceMappingSourceAccessConfigurationTypeSaslScram256Auth
- SASL_SCRAM_256_AUTH
- EventSourceMappingSourceAccessConfigurationTypeVirtualHost
- VIRTUAL_HOST
- EventSourceMappingSourceAccessConfigurationTypeClientCertificateTlsAuth
- CLIENT_CERTIFICATE_TLS_AUTH
- EventSourceMappingSourceAccessConfigurationTypeServerRootCaCertificate
- SERVER_ROOT_CA_CERTIFICATE
- BasicAuth
- BASIC_AUTH
- VpcSubnet
- VPC_SUBNET
- VpcSecurityGroup
- VPC_SECURITY_GROUP
- SaslScram512Auth
- SASL_SCRAM_512_AUTH
- SaslScram256Auth
- SASL_SCRAM_256_AUTH
- VirtualHost
- VIRTUAL_HOST
- ClientCertificateTlsAuth
- CLIENT_CERTIFICATE_TLS_AUTH
- ServerRootCaCertificate
- SERVER_ROOT_CA_CERTIFICATE
- BasicAuth
- BASIC_AUTH
- VpcSubnet
- VPC_SUBNET
- VpcSecurityGroup
- VPC_SECURITY_GROUP
- SaslScram512Auth
- SASL_SCRAM_512_AUTH
- SaslScram256Auth
- SASL_SCRAM_256_AUTH
- VirtualHost
- VIRTUAL_HOST
- ClientCertificateTlsAuth
- CLIENT_CERTIFICATE_TLS_AUTH
- ServerRootCaCertificate
- SERVER_ROOT_CA_CERTIFICATE
- BASIC_AUTH
- BASIC_AUTH
- VPC_SUBNET
- VPC_SUBNET
- VPC_SECURITY_GROUP
- VPC_SECURITY_GROUP
- SASL_SCRAM512_AUTH
- SASL_SCRAM_512_AUTH
- SASL_SCRAM256_AUTH
- SASL_SCRAM_256_AUTH
- VIRTUAL_HOST
- VIRTUAL_HOST
- CLIENT_CERTIFICATE_TLS_AUTH
- CLIENT_CERTIFICATE_TLS_AUTH
- SERVER_ROOT_CA_CERTIFICATE
- SERVER_ROOT_CA_CERTIFICATE
- "BASIC_AUTH"
- BASIC_AUTH
- "VPC_SUBNET"
- VPC_SUBNET
- "VPC_SECURITY_GROUP"
- VPC_SECURITY_GROUP
- "SASL_SCRAM_512_AUTH"
- SASL_SCRAM_512_AUTH
- "SASL_SCRAM_256_AUTH"
- SASL_SCRAM_256_AUTH
- "VIRTUAL_HOST"
- VIRTUAL_HOST
- "CLIENT_CERTIFICATE_TLS_AUTH"
- CLIENT_CERTIFICATE_TLS_AUTH
- "SERVER_ROOT_CA_CERTIFICATE"
- SERVER_ROOT_CA_CERTIFICATE
Tag, TagArgs
Package Details
- Repository
- AWS Native pulumi/pulumi-aws-native
- License
- Apache-2.0