aws-native.databrew.Dataset
We recommend new projects start with resources from the AWS provider.
Resource schema for AWS::DataBrew::Dataset.
Example Usage
Example
using System.Collections.Generic;
using System.Linq;
using Pulumi;
using AwsNative = Pulumi.AwsNative;
return await Deployment.RunAsync(() =>
{
var testDataBrewDataset = new AwsNative.DataBrew.Dataset("testDataBrewDataset", new()
{
Name = "cf-test-dataset1",
Input = new AwsNative.DataBrew.Inputs.DatasetInputArgs
{
S3InputDefinition = new AwsNative.DataBrew.Inputs.DatasetS3LocationArgs
{
Bucket = "test-location",
Key = "test.xlsx",
},
},
FormatOptions = new AwsNative.DataBrew.Inputs.DatasetFormatOptionsArgs
{
Excel = new AwsNative.DataBrew.Inputs.DatasetExcelOptionsArgs
{
SheetNames = new[]
{
"test",
},
},
},
Tags = new[]
{
new AwsNative.Inputs.CreateOnlyTagArgs
{
Key = "key00AtCreate",
Value = "value001AtCreate",
},
},
});
});
package main
import (
awsnative "github.com/pulumi/pulumi-aws-native/sdk/go/aws"
"github.com/pulumi/pulumi-aws-native/sdk/go/aws/databrew"
"github.com/pulumi/pulumi/sdk/v3/go/pulumi"
)
func main() {
pulumi.Run(func(ctx *pulumi.Context) error {
_, err := databrew.NewDataset(ctx, "testDataBrewDataset", &databrew.DatasetArgs{
Name: pulumi.String("cf-test-dataset1"),
Input: &databrew.DatasetInputTypeArgs{
S3InputDefinition: &databrew.DatasetS3LocationArgs{
Bucket: pulumi.String("test-location"),
Key: pulumi.String("test.xlsx"),
},
},
FormatOptions: &databrew.DatasetFormatOptionsArgs{
Excel: &databrew.DatasetExcelOptionsArgs{
SheetNames: pulumi.StringArray{
pulumi.String("test"),
},
},
},
Tags: awsnative.CreateOnlyTagArray{
&awsnative.CreateOnlyTagArgs{
Key: pulumi.String("key00AtCreate"),
Value: pulumi.String("value001AtCreate"),
},
},
})
if err != nil {
return err
}
return nil
})
}
Coming soon!
import pulumi
import pulumi_aws_native as aws_native
test_data_brew_dataset = aws_native.databrew.Dataset("testDataBrewDataset",
name="cf-test-dataset1",
input={
"s3_input_definition": {
"bucket": "test-location",
"key": "test.xlsx",
},
},
format_options={
"excel": {
"sheet_names": ["test"],
},
},
tags=[{
"key": "key00AtCreate",
"value": "value001AtCreate",
}])
import * as pulumi from "@pulumi/pulumi";
import * as aws_native from "@pulumi/aws-native";
const testDataBrewDataset = new aws_native.databrew.Dataset("testDataBrewDataset", {
name: "cf-test-dataset1",
input: {
s3InputDefinition: {
bucket: "test-location",
key: "test.xlsx",
},
},
formatOptions: {
excel: {
sheetNames: ["test"],
},
},
tags: [{
key: "key00AtCreate",
value: "value001AtCreate",
}],
});
Coming soon!
Create Dataset Resource
Resources are created with functions called constructors. To learn more about declaring and configuring resources, see Resources.
Constructor syntax
new Dataset(name: string, args: DatasetArgs, opts?: CustomResourceOptions);
@overload
def Dataset(resource_name: str,
args: DatasetArgs,
opts: Optional[ResourceOptions] = None)
@overload
def Dataset(resource_name: str,
opts: Optional[ResourceOptions] = None,
input: Optional[DatasetInputArgs] = None,
format: Optional[DatasetFormat] = None,
format_options: Optional[DatasetFormatOptionsArgs] = None,
name: Optional[str] = None,
path_options: Optional[DatasetPathOptionsArgs] = None,
tags: Optional[Sequence[_root_inputs.CreateOnlyTagArgs]] = None)
func NewDataset(ctx *Context, name string, args DatasetArgs, opts ...ResourceOption) (*Dataset, error)
public Dataset(string name, DatasetArgs args, CustomResourceOptions? opts = null)
public Dataset(String name, DatasetArgs args)
public Dataset(String name, DatasetArgs args, CustomResourceOptions options)
type: aws-native:databrew:Dataset
properties: # The arguments to resource properties.
options: # Bag of options to control resource's behavior.
Parameters
- name string
- The unique name of the resource.
- args DatasetArgs
- The arguments to resource properties.
- opts CustomResourceOptions
- Bag of options to control resource's behavior.
- resource_name str
- The unique name of the resource.
- args DatasetArgs
- The arguments to resource properties.
- opts ResourceOptions
- Bag of options to control resource's behavior.
- ctx Context
- Context object for the current deployment.
- name string
- The unique name of the resource.
- args DatasetArgs
- The arguments to resource properties.
- opts ResourceOption
- Bag of options to control resource's behavior.
- name string
- The unique name of the resource.
- args DatasetArgs
- The arguments to resource properties.
- opts CustomResourceOptions
- Bag of options to control resource's behavior.
- name String
- The unique name of the resource.
- args DatasetArgs
- The arguments to resource properties.
- options CustomResourceOptions
- Bag of options to control resource's behavior.
Dataset Resource Properties
To learn more about resource properties and how to use them, see Inputs and Outputs in the Architecture and Concepts docs.
Inputs
The Dataset resource accepts the following input properties:
- Input DatasetInput
- Input
- Format DatasetFormat
- Dataset format. Allowed values: "CSV" | "JSON" | "PARQUET" | "EXCEL" | "ORC"
- FormatOptions DatasetFormatOptions
- Format options for dataset
- Name string
- Dataset name
- PathOptions DatasetPathOptions
- PathOptions
- Tags List<CreateOnlyTag>
- Metadata tags that have been applied to the dataset.
Outputs
All input properties are implicitly available as output properties. Additionally, the Dataset resource produces the following output properties:
- Id string
- The provider-assigned unique ID for this managed resource.
Supporting Types
CreateOnlyTag, CreateOnlyTagArgs
DatasetCsvOptions, DatasetCsvOptionsArgs
- Delimiter string
- A single character that specifies the delimiter being used in the CSV file.
- HeaderRow bool
- A variable that specifies whether the first row in the file is parsed as the header. If this value is false, column names are auto-generated.
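For instance, a CSV-backed dataset pairs an S3 input with these CSV options. The Python sketch below mirrors the example at the top of this page; the bucket name, object key, and delimiter are assumptions, not real resources.
import pulumi
import pulumi_aws_native as aws_native

# Sketch only: bucket name, object key, and delimiter are assumptions.
csv_dataset = aws_native.databrew.Dataset("csvDataset",
    name="cf-test-csv-dataset",
    format=aws_native.databrew.DatasetFormat.CSV,
    input={
        "s3_input_definition": {
            "bucket": "my-databrew-input-bucket",  # assumed bucket
            "key": "orders/orders.csv",            # assumed object key
        },
    },
    format_options={
        "csv": {
            "delimiter": ";",    # single character used as the field separator
            "header_row": True,  # first row holds column names
        },
    })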
DatasetDataCatalogInputDefinition, DatasetDataCatalogInputDefinitionArgs
- CatalogId string
- Catalog id
- DatabaseName string
- Database name
- TableName string
- Table name
- TempDirectory DatasetS3Location
- An Amazon location that AWS Glue Data Catalog can use as a temporary directory.
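A dataset can also point at an existing AWS Glue Data Catalog table instead of a raw S3 object. In this minimal Python sketch the catalog id, database, table, and temp bucket names are assumptions; substitute your own.
import pulumi
import pulumi_aws_native as aws_native

# Sketch only: catalog id, database, table, and temp bucket are assumptions.
catalog_dataset = aws_native.databrew.Dataset("catalogDataset",
    name="cf-test-catalog-dataset",
    input={
        "data_catalog_input_definition": {
            "catalog_id": "111122223333",   # assumed AWS account id
            "database_name": "sales_db",    # assumed Glue database
            "table_name": "orders",         # assumed Glue table
            "temp_directory": {
                "bucket": "my-databrew-temp-bucket",
                "key": "temp/",
            },
        },
    })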
DatasetDatabaseInputDefinition, DatasetDatabaseInputDefinitionArgs
- GlueConnectionName string
- Glue connection name
- DatabaseTableName string
- Database table name
- QueryString string
- Custom SQL to run against the provided AWS Glue connection. This SQL will be used as the input for DataBrew projects and jobs.
- TempDirectory DatasetS3Location
- An Amazon location that AWS Glue Data Catalog can use as a temporary directory.
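Similarly, a dataset can read from a JDBC source through an AWS Glue connection, using a custom SQL query as its input. In this hedged Python sketch the connection name, query, and temp bucket are assumptions.
import pulumi
import pulumi_aws_native as aws_native

# Sketch only: the Glue connection, SQL, and temp bucket are assumptions.
database_dataset = aws_native.databrew.Dataset("databaseDataset",
    name="cf-test-database-dataset",
    input={
        "database_input_definition": {
            "glue_connection_name": "my-glue-connection",
            "query_string": "SELECT id, amount FROM public.orders",
            "temp_directory": {
                "bucket": "my-databrew-temp-bucket",
                "key": "temp/",
            },
        },
    })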
DatasetDatetimeOptions, DatasetDatetimeOptionsArgs
- Format string
- Date/time format of a date parameter
- LocaleCode string
- Locale code for a date parameter
- TimezoneOffset string
- Timezone offset
DatasetExcelOptions, DatasetExcelOptionsArgs
- HeaderRow bool
- A variable that specifies whether the first row in the file is parsed as the header. If this value is false, column names are auto-generated.
- SheetIndexes List<int>
- One or more sheet numbers in the Excel file that will be included in the dataset.
- SheetNames List<string>
- One or more named sheets in the Excel file that will be included in the dataset.
DatasetFilesLimit, DatasetFilesLimitArgs
- MaxFiles int
- Maximum number of files
- Order DatasetFilesLimitOrder
- Order. Allowed values: "ASCENDING" | "DESCENDING"
- OrderedBy DatasetFilesLimitOrderedBy
- Ordered by. Allowed value: "LAST_MODIFIED_DATE"
DatasetFilesLimitOrder, DatasetFilesLimitOrderArgs
- Ascending
- ASCENDING
- Descending
- DESCENDING
DatasetFilesLimitOrderedBy, DatasetFilesLimitOrderedByArgs
- LastModifiedDate
- LAST_MODIFIED_DATE
DatasetFilterExpression, DatasetFilterExpressionArgs
- Expression string
- Filtering expression for a parameter
- ValuesMap List<DatasetFilterValue>
- The map of substitution variable names to their values used in this filter expression.
DatasetFilterValue, DatasetFilterValueArgs
- Value string
- The value to be associated with the substitution variable.
- ValueReference string
- Variable name
DatasetFormat, DatasetFormatArgs
- Csv
- CSV
- Json
- JSON
- Parquet
- PARQUET
- Excel
- EXCEL
- Orc
- ORC
DatasetFormatOptions, DatasetFormatOptionsArgs
- Csv DatasetCsvOptions
- Options that define how CSV input is to be interpreted by DataBrew.
- Excel DatasetExcelOptions
- Options that define how Excel input is to be interpreted by DataBrew.
- Json DatasetJsonOptions
- Options that define how JSON input is to be interpreted by DataBrew.
DatasetInput, DatasetInputArgs
- DataCatalogInputDefinition DatasetDataCatalogInputDefinition
- The AWS Glue Data Catalog parameters for the data.
- DatabaseInputDefinition DatasetDatabaseInputDefinition
- Connection information for dataset input files stored in a database.
- Metadata DatasetMetadata
- Contains additional resource information needed for specific datasets.
- S3InputDefinition DatasetS3Location
- The Amazon S3 location where the data is stored.
DatasetJsonOptions, DatasetJsonOptionsArgs
- MultiLine bool
- A value that specifies whether JSON input contains embedded new line characters.
DatasetMetadata, DatasetMetadataArgs
- SourceArn string
- ARN of the source of the dataset, for example an AppFlow flow ARN.
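As a rough illustration, metadata is typically supplied alongside the S3 location that the source (for example, an AppFlow flow) writes to. In the Python sketch below the flow ARN, bucket, and key prefix are placeholders, not real resources.
import pulumi
import pulumi_aws_native as aws_native

# Sketch only: the flow ARN, bucket, and key prefix are placeholders.
appflow_dataset = aws_native.databrew.Dataset("appflowDataset",
    name="cf-test-appflow-dataset",
    input={
        "metadata": {
            "source_arn": "arn:aws:appflow:us-east-1:111122223333:flow/my-flow",
        },
        "s3_input_definition": {
            "bucket": "my-appflow-output-bucket",
            "key": "my-flow/",
        },
    })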
DatasetParameter, DatasetParameterArgs
- Name string
- The name of the parameter that is used in the dataset's Amazon S3 path.
- Type DatasetParameterType
- Parameter type. Allowed values: "String" | "Number" | "Datetime"
- CreateColumn bool
- Add the value of this parameter as a column in a dataset.
- DatetimeOptions DatasetDatetimeOptions
- Additional parameter options such as a format and a timezone. Required for datetime parameters.
- Filter DatasetFilterExpression
- The optional filter expression structure to apply additional matching criteria to the parameter.
DatasetParameterType, DatasetParameterTypeArgs
- String
- String
- Number
- Number
- Datetime
- Datetime
DatasetPathOptions, DatasetPathOptionsArgs
- FilesLimit DatasetFilesLimit
- If provided, this structure imposes a limit on a number of files that should be selected.
- LastModifiedDateCondition DatasetFilterExpression
- If provided, this structure defines a date range for matching Amazon S3 objects based on their LastModifiedDate attribute in Amazon S3.
- Parameters List<DatasetPathParameter>
- A structure that maps names of parameters used in the Amazon S3 path of a dataset to their definitions.
DatasetPathParameter, DatasetPathParameterArgs
- DatasetParameter DatasetParameter
- The path parameter definition.
- PathParameterName string
- The name of the path parameter.
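Path options, path parameters, filter expressions, and file limits usually appear together when defining a parameterized ("dynamic") dataset whose S3 key contains placeholders. The Python sketch below is illustrative only: the bucket, key layout, and date format are assumptions.
import pulumi
import pulumi_aws_native as aws_native

# Sketch only: bucket, key layout, and date format are assumptions.
dynamic_dataset = aws_native.databrew.Dataset("dynamicDataset",
    name="cf-test-dynamic-dataset",
    input={
        "s3_input_definition": {
            "bucket": "my-databrew-input-bucket",
            # {date} is substituted by the path parameter defined below
            "key": "exports/{date}/data.csv",
        },
    },
    path_options={
        "parameters": [{
            "path_parameter_name": "date",
            "dataset_parameter": {
                "name": "date",
                "type": aws_native.databrew.DatasetParameterType.DATETIME,
                "datetime_options": {
                    "format": "yyyy-MM-dd",
                },
                "create_column": True,  # surface the matched date as a column
            },
        }],
        "files_limit": {
            "max_files": 1,  # keep only the most recently modified match
            "ordered_by": aws_native.databrew.DatasetFilesLimitOrderedBy.LAST_MODIFIED_DATE,
            "order": aws_native.databrew.DatasetFilesLimitOrder.DESCENDING,
        },
    })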
DatasetS3Location, DatasetS3LocationArgs
Package Details
- Repository
- AWS Native pulumi/pulumi-aws-native
- License
- Apache-2.0