Google Cloud Native is in preview. Google Cloud Classic is fully supported.
google-native.bigquery/v2.Routine
Creates a new routine in the dataset. Auto-naming is currently not supported for this resource.
Create Routine Resource
Resources are created with functions called constructors. To learn more about declaring and configuring resources, see Resources.
Constructor syntax
new Routine(name: string, args: RoutineArgs, opts?: CustomResourceOptions);
@overload
def Routine(resource_name: str,
            args: RoutineArgs,
            opts: Optional[ResourceOptions] = None)
@overload
def Routine(resource_name: str,
            opts: Optional[ResourceOptions] = None,
            routine_reference: Optional[RoutineReferenceArgs] = None,
            routine_type: Optional[RoutineRoutineType] = None,
            dataset_id: Optional[str] = None,
            definition_body: Optional[str] = None,
            description: Optional[str] = None,
            determinism_level: Optional[RoutineDeterminismLevel] = None,
            imported_libraries: Optional[Sequence[str]] = None,
            language: Optional[RoutineLanguage] = None,
            project: Optional[str] = None,
            remote_function_options: Optional[RemoteFunctionOptionsArgs] = None,
            return_table_type: Optional[StandardSqlTableTypeArgs] = None,
            return_type: Optional[StandardSqlDataTypeArgs] = None,
            arguments: Optional[Sequence[ArgumentArgs]] = None,
            data_governance_type: Optional[RoutineDataGovernanceType] = None,
            security_mode: Optional[RoutineSecurityMode] = None,
            spark_options: Optional[SparkOptionsArgs] = None,
            strict_mode: Optional[bool] = None)
func NewRoutine(ctx *Context, name string, args RoutineArgs, opts ...ResourceOption) (*Routine, error)
public Routine(string name, RoutineArgs args, CustomResourceOptions? opts = null)
public Routine(String name, RoutineArgs args)
public Routine(String name, RoutineArgs args, CustomResourceOptions options)
type: google-native:bigquery/v2:Routine
properties: # The arguments to resource properties.
options: # Bag of options to control resource's behavior.
Parameters
- name string
- The unique name of the resource.
- args RoutineArgs
- The arguments to resource properties.
- opts CustomResourceOptions
- Bag of options to control resource's behavior.
- resource_name str
- The unique name of the resource.
- args RoutineArgs
- The arguments to resource properties.
- opts ResourceOptions
- Bag of options to control resource's behavior.
- ctx Context
- Context object for the current deployment.
- name string
- The unique name of the resource.
- args RoutineArgs
- The arguments to resource properties.
- opts ResourceOption
- Bag of options to control resource's behavior.
- name string
- The unique name of the resource.
- args RoutineArgs
- The arguments to resource properties.
- opts CustomResourceOptions
- Bag of options to control resource's behavior.
- name String
- The unique name of the resource.
- args RoutineArgs
- The arguments to resource properties.
- options CustomResourceOptions
- Bag of options to control resource's behavior.
Constructor example
The following reference example uses placeholder values for all input properties.
var routineResource = new GoogleNative.BigQuery.V2.Routine("routineResource", new()
{
RoutineReference = new GoogleNative.BigQuery.V2.Inputs.RoutineReferenceArgs
{
DatasetId = "string",
Project = "string",
RoutineId = "string",
},
RoutineType = GoogleNative.BigQuery.V2.RoutineRoutineType.RoutineTypeUnspecified,
DatasetId = "string",
DefinitionBody = "string",
Description = "string",
DeterminismLevel = GoogleNative.BigQuery.V2.RoutineDeterminismLevel.DeterminismLevelUnspecified,
ImportedLibraries = new[]
{
"string",
},
Language = GoogleNative.BigQuery.V2.RoutineLanguage.LanguageUnspecified,
Project = "string",
RemoteFunctionOptions = new GoogleNative.BigQuery.V2.Inputs.RemoteFunctionOptionsArgs
{
Connection = "string",
Endpoint = "string",
MaxBatchingRows = "string",
UserDefinedContext =
{
{ "string", "string" },
},
},
ReturnTableType = new GoogleNative.BigQuery.V2.Inputs.StandardSqlTableTypeArgs
{
Columns = new[]
{
new GoogleNative.BigQuery.V2.Inputs.StandardSqlFieldArgs
{
Name = "string",
Type = new GoogleNative.BigQuery.V2.Inputs.StandardSqlDataTypeArgs
{
TypeKind = GoogleNative.BigQuery.V2.StandardSqlDataTypeTypeKind.TypeKindUnspecified,
ArrayElementType = standardSqlDataType,
RangeElementType = standardSqlDataType,
StructType = new GoogleNative.BigQuery.V2.Inputs.StandardSqlStructTypeArgs
{
Fields = new[]
{
standardSqlField,
},
},
},
},
},
},
ReturnType = standardSqlDataType,
Arguments = new[]
{
new GoogleNative.BigQuery.V2.Inputs.ArgumentArgs
{
ArgumentKind = GoogleNative.BigQuery.V2.ArgumentArgumentKind.ArgumentKindUnspecified,
DataType = standardSqlDataType,
IsAggregate = false,
Mode = GoogleNative.BigQuery.V2.ArgumentMode.ModeUnspecified,
Name = "string",
},
},
DataGovernanceType = GoogleNative.BigQuery.V2.RoutineDataGovernanceType.DataGovernanceTypeUnspecified,
SecurityMode = GoogleNative.BigQuery.V2.RoutineSecurityMode.SecurityModeUnspecified,
SparkOptions = new GoogleNative.BigQuery.V2.Inputs.SparkOptionsArgs
{
ArchiveUris = new[]
{
"string",
},
Connection = "string",
ContainerImage = "string",
FileUris = new[]
{
"string",
},
JarUris = new[]
{
"string",
},
MainClass = "string",
MainFileUri = "string",
Properties =
{
{ "string", "string" },
},
PyFileUris = new[]
{
"string",
},
RuntimeVersion = "string",
},
StrictMode = false,
});
example, err := bigquery.NewRoutine(ctx, "routineResource", &bigquery.RoutineArgs{
RoutineReference: &bigquery.RoutineReferenceArgs{
DatasetId: pulumi.String("string"),
Project: pulumi.String("string"),
RoutineId: pulumi.String("string"),
},
RoutineType: bigquery.RoutineRoutineTypeRoutineTypeUnspecified,
DatasetId: pulumi.String("string"),
DefinitionBody: pulumi.String("string"),
Description: pulumi.String("string"),
DeterminismLevel: bigquery.RoutineDeterminismLevelDeterminismLevelUnspecified,
ImportedLibraries: pulumi.StringArray{
pulumi.String("string"),
},
Language: bigquery.RoutineLanguageLanguageUnspecified,
Project: pulumi.String("string"),
RemoteFunctionOptions: &bigquery.RemoteFunctionOptionsArgs{
Connection: pulumi.String("string"),
Endpoint: pulumi.String("string"),
MaxBatchingRows: pulumi.String("string"),
UserDefinedContext: pulumi.StringMap{
"string": pulumi.String("string"),
},
},
ReturnTableType: &bigquery.StandardSqlTableTypeArgs{
Columns: bigquery.StandardSqlFieldArray{
&bigquery.StandardSqlFieldArgs{
Name: pulumi.String("string"),
Type: &bigquery.StandardSqlDataTypeArgs{
TypeKind: bigquery.StandardSqlDataTypeTypeKindTypeKindUnspecified,
ArrayElementType: pulumi.Any(standardSqlDataType),
RangeElementType: pulumi.Any(standardSqlDataType),
StructType: &bigquery.StandardSqlStructTypeArgs{
Fields: bigquery.StandardSqlFieldArray{
standardSqlField,
},
},
},
},
},
},
ReturnType: pulumi.Any(standardSqlDataType),
Arguments: bigquery.ArgumentArray{
&bigquery.ArgumentArgs{
ArgumentKind: bigquery.ArgumentArgumentKindArgumentKindUnspecified,
DataType: pulumi.Any(standardSqlDataType),
IsAggregate: pulumi.Bool(false),
Mode: bigquery.ArgumentModeModeUnspecified,
Name: pulumi.String("string"),
},
},
DataGovernanceType: bigquery.RoutineDataGovernanceTypeDataGovernanceTypeUnspecified,
SecurityMode: bigquery.RoutineSecurityModeSecurityModeUnspecified,
SparkOptions: &bigquery.SparkOptionsArgs{
ArchiveUris: pulumi.StringArray{
pulumi.String("string"),
},
Connection: pulumi.String("string"),
ContainerImage: pulumi.String("string"),
FileUris: pulumi.StringArray{
pulumi.String("string"),
},
JarUris: pulumi.StringArray{
pulumi.String("string"),
},
MainClass: pulumi.String("string"),
MainFileUri: pulumi.String("string"),
Properties: pulumi.StringMap{
"string": pulumi.String("string"),
},
PyFileUris: pulumi.StringArray{
pulumi.String("string"),
},
RuntimeVersion: pulumi.String("string"),
},
StrictMode: pulumi.Bool(false),
})
var routineResource = new Routine("routineResource", RoutineArgs.builder()
.routineReference(RoutineReferenceArgs.builder()
.datasetId("string")
.project("string")
.routineId("string")
.build())
.routineType("ROUTINE_TYPE_UNSPECIFIED")
.datasetId("string")
.definitionBody("string")
.description("string")
.determinismLevel("DETERMINISM_LEVEL_UNSPECIFIED")
.importedLibraries("string")
.language("LANGUAGE_UNSPECIFIED")
.project("string")
.remoteFunctionOptions(RemoteFunctionOptionsArgs.builder()
.connection("string")
.endpoint("string")
.maxBatchingRows("string")
.userDefinedContext(Map.of("string", "string"))
.build())
.returnTableType(StandardSqlTableTypeArgs.builder()
.columns(StandardSqlFieldArgs.builder()
.name("string")
.type(StandardSqlDataTypeArgs.builder()
.typeKind("TYPE_KIND_UNSPECIFIED")
.arrayElementType(standardSqlDataType)
.rangeElementType(standardSqlDataType)
.structType(StandardSqlStructTypeArgs.builder()
.fields(standardSqlField)
.build())
.build())
.build())
.build())
.returnType(standardSqlDataType)
.arguments(ArgumentArgs.builder()
.argumentKind("ARGUMENT_KIND_UNSPECIFIED")
.dataType(standardSqlDataType)
.isAggregate(false)
.mode("MODE_UNSPECIFIED")
.name("string")
.build())
.dataGovernanceType("DATA_GOVERNANCE_TYPE_UNSPECIFIED")
.securityMode("SECURITY_MODE_UNSPECIFIED")
.sparkOptions(SparkOptionsArgs.builder()
.archiveUris("string")
.connection("string")
.containerImage("string")
.fileUris("string")
.jarUris("string")
.mainClass("string")
.mainFileUri("string")
.properties(Map.of("string", "string"))
.pyFileUris("string")
.runtimeVersion("string")
.build())
.strictMode(false)
.build());
routine_resource = google_native.bigquery.v2.Routine("routineResource",
routine_reference=google_native.bigquery.v2.RoutineReferenceArgs(
dataset_id="string",
project="string",
routine_id="string",
),
routine_type=google_native.bigquery.v2.RoutineRoutineType.ROUTINE_TYPE_UNSPECIFIED,
dataset_id="string",
definition_body="string",
description="string",
determinism_level=google_native.bigquery.v2.RoutineDeterminismLevel.DETERMINISM_LEVEL_UNSPECIFIED,
imported_libraries=["string"],
language=google_native.bigquery.v2.RoutineLanguage.LANGUAGE_UNSPECIFIED,
project="string",
remote_function_options=google_native.bigquery.v2.RemoteFunctionOptionsArgs(
connection="string",
endpoint="string",
max_batching_rows="string",
user_defined_context={
"string": "string",
},
),
return_table_type=google_native.bigquery.v2.StandardSqlTableTypeArgs(
columns=[google_native.bigquery.v2.StandardSqlFieldArgs(
name="string",
type=google_native.bigquery.v2.StandardSqlDataTypeArgs(
type_kind=google_native.bigquery.v2.StandardSqlDataTypeTypeKind.TYPE_KIND_UNSPECIFIED,
array_element_type=standard_sql_data_type,
range_element_type=standard_sql_data_type,
struct_type=google_native.bigquery.v2.StandardSqlStructTypeArgs(
fields=[standard_sql_field],
),
),
)],
),
return_type=standard_sql_data_type,
arguments=[google_native.bigquery.v2.ArgumentArgs(
argument_kind=google_native.bigquery.v2.ArgumentArgumentKind.ARGUMENT_KIND_UNSPECIFIED,
data_type=standard_sql_data_type,
is_aggregate=False,
mode=google_native.bigquery.v2.ArgumentMode.MODE_UNSPECIFIED,
name="string",
)],
data_governance_type=google_native.bigquery.v2.RoutineDataGovernanceType.DATA_GOVERNANCE_TYPE_UNSPECIFIED,
security_mode=google_native.bigquery.v2.RoutineSecurityMode.SECURITY_MODE_UNSPECIFIED,
spark_options=google_native.bigquery.v2.SparkOptionsArgs(
archive_uris=["string"],
connection="string",
container_image="string",
file_uris=["string"],
jar_uris=["string"],
main_class="string",
main_file_uri="string",
properties={
"string": "string",
},
py_file_uris=["string"],
runtime_version="string",
),
strict_mode=False)
const routineResource = new google_native.bigquery.v2.Routine("routineResource", {
routineReference: {
datasetId: "string",
project: "string",
routineId: "string",
},
routineType: google_native.bigquery.v2.RoutineRoutineType.RoutineTypeUnspecified,
datasetId: "string",
definitionBody: "string",
description: "string",
determinismLevel: google_native.bigquery.v2.RoutineDeterminismLevel.DeterminismLevelUnspecified,
importedLibraries: ["string"],
language: google_native.bigquery.v2.RoutineLanguage.LanguageUnspecified,
project: "string",
remoteFunctionOptions: {
connection: "string",
endpoint: "string",
maxBatchingRows: "string",
userDefinedContext: {
string: "string",
},
},
returnTableType: {
columns: [{
name: "string",
type: {
typeKind: google_native.bigquery.v2.StandardSqlDataTypeTypeKind.TypeKindUnspecified,
arrayElementType: standardSqlDataType,
rangeElementType: standardSqlDataType,
structType: {
fields: [standardSqlField],
},
},
}],
},
returnType: standardSqlDataType,
arguments: [{
argumentKind: google_native.bigquery.v2.ArgumentArgumentKind.ArgumentKindUnspecified,
dataType: standardSqlDataType,
isAggregate: false,
mode: google_native.bigquery.v2.ArgumentMode.ModeUnspecified,
name: "string",
}],
dataGovernanceType: google_native.bigquery.v2.RoutineDataGovernanceType.DataGovernanceTypeUnspecified,
securityMode: google_native.bigquery.v2.RoutineSecurityMode.SecurityModeUnspecified,
sparkOptions: {
archiveUris: ["string"],
connection: "string",
containerImage: "string",
fileUris: ["string"],
jarUris: ["string"],
mainClass: "string",
mainFileUri: "string",
properties: {
string: "string",
},
pyFileUris: ["string"],
runtimeVersion: "string",
},
strictMode: false,
});
type: google-native:bigquery/v2:Routine
properties:
arguments:
- argumentKind: ARGUMENT_KIND_UNSPECIFIED
dataType: ${standardSqlDataType}
isAggregate: false
mode: MODE_UNSPECIFIED
name: string
dataGovernanceType: DATA_GOVERNANCE_TYPE_UNSPECIFIED
datasetId: string
definitionBody: string
description: string
determinismLevel: DETERMINISM_LEVEL_UNSPECIFIED
importedLibraries:
- string
language: LANGUAGE_UNSPECIFIED
project: string
remoteFunctionOptions:
connection: string
endpoint: string
maxBatchingRows: string
userDefinedContext:
string: string
returnTableType:
columns:
- name: string
type:
arrayElementType: ${standardSqlDataType}
rangeElementType: ${standardSqlDataType}
structType:
fields:
- ${standardSqlField}
typeKind: TYPE_KIND_UNSPECIFIED
returnType: ${standardSqlDataType}
routineReference:
datasetId: string
project: string
routineId: string
routineType: ROUTINE_TYPE_UNSPECIFIED
securityMode: SECURITY_MODE_UNSPECIFIED
sparkOptions:
archiveUris:
- string
connection: string
containerImage: string
fileUris:
- string
jarUris:
- string
mainClass: string
mainFileUri: string
properties:
string: string
pyFileUris:
- string
runtimeVersion: string
strictMode: false
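The reference example above fills every input with a placeholder; in practice only a handful of properties are needed. The following Pulumi YAML sketch creates a minimal SQL scalar function. The project ID, dataset ID, and routine ID are hypothetical, and the dataset is assumed to already exist:

```yaml
resources:
  multiplyRoutine:
    type: google-native:bigquery/v2:Routine
    properties:
      project: my-project        # hypothetical project ID
      datasetId: my_dataset      # existing dataset that will hold the routine
      routineReference:
        project: my-project
        datasetId: my_dataset
        routineId: multiply      # hypothetical routine name
      routineType: SCALAR_FUNCTION
      language: SQL
      definitionBody: x * y      # the expression in the AS clause only, no CREATE FUNCTION wrapper
      arguments:
        - name: x
          dataType:
            typeKind: INT64
        - name: y
          dataType:
            typeKind: INT64
      returnType:
        typeKind: INT64
```

Since language defaults to "SQL" when remote_function_options is absent, the language property could also be omitted here.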
Routine Resource Properties
To learn more about resource properties and how to use them, see Inputs and Outputs in the Architecture and Concepts docs.
Inputs
The Routine resource accepts the following input properties:
- DatasetId string
- DefinitionBody string - The body of the routine. For functions, this is the expression in the AS clause. If language=SQL, it is the substring inside (but excluding) the parentheses. For example, for the function created with the following statement:
  CREATE FUNCTION JoinLines(x string, y string) as (concat(x, "\n", y))
  the definition_body is concat(x, "\n", y) (\n is not replaced with linebreak). If language=JAVASCRIPT, it is the evaluated string in the AS clause. For example, for the function created with the following statement:
  CREATE FUNCTION f() RETURNS STRING LANGUAGE js AS 'return "\n";\n'
  the definition_body is return "\n";\n Note that both \n are replaced with linebreaks.
- RoutineReference Pulumi.GoogleNative.BigQuery.V2.Inputs.RoutineReference - Reference describing the ID of this routine.
- RoutineType Pulumi.GoogleNative.BigQuery.V2.RoutineRoutineType - The type of routine.
- Arguments List<Pulumi.GoogleNative.BigQuery.V2.Inputs.Argument> - Optional.
- DataGovernanceType Pulumi.GoogleNative.BigQuery.V2.RoutineDataGovernanceType - Optional. If set to DATA_MASKING, the function is validated and made available as a masking function. For more information, see Create custom masking routines.
- Description string - Optional. The description of the routine, if defined.
- DeterminismLevel Pulumi.GoogleNative.BigQuery.V2.RoutineDeterminismLevel - Optional. The determinism level of the JavaScript UDF, if defined.
- ImportedLibraries List<string> - Optional. If language = "JAVASCRIPT", this field stores the path of the imported JAVASCRIPT libraries.
- Language Pulumi.GoogleNative.BigQuery.V2.RoutineLanguage - Optional. Defaults to "SQL" if the remote_function_options field is absent; not set otherwise.
- Project string
- RemoteFunctionOptions Pulumi.GoogleNative.BigQuery.V2.Inputs.RemoteFunctionOptions - Optional. Remote function specific options.
- ReturnTableType Pulumi.GoogleNative.BigQuery.V2.Inputs.StandardSqlTableType - Optional. Can be set only if routine_type = "TABLE_VALUED_FUNCTION". If absent, the return table type is inferred from definition_body at query time in each query that references this routine. If present, then the columns in the evaluated table result will be cast to match the column types specified in the return table type, at query time.
- ReturnType Pulumi.GoogleNative.BigQuery.V2.Inputs.StandardSqlDataType - Optional if language = "SQL"; required otherwise. Cannot be set if routine_type = "TABLE_VALUED_FUNCTION". If absent, the return type is inferred from definition_body at query time in each query that references this routine. If present, then the evaluated result will be cast to the specified return type at query time. For example, for the functions created with the following statements:
  CREATE FUNCTION Add(x FLOAT64, y FLOAT64) RETURNS FLOAT64 AS (x + y);
  CREATE FUNCTION Increment(x FLOAT64) AS (Add(x, 1));
  CREATE FUNCTION Decrement(x FLOAT64) RETURNS FLOAT64 AS (Add(x, -1));
  the return_type is {type_kind: "FLOAT64"} for Add and Decrement, and is absent for Increment (inferred as FLOAT64 at query time). Suppose the function Add is replaced by CREATE OR REPLACE FUNCTION Add(x INT64, y INT64) AS (x + y); then the inferred return type of Increment is automatically changed to INT64 at query time, while the return type of Decrement remains FLOAT64.
- SecurityMode Pulumi.GoogleNative.BigQuery.V2.RoutineSecurityMode - Optional. The security mode of the routine, if defined. If not defined, the security mode is automatically determined from the routine's configuration.
- SparkOptions Pulumi.GoogleNative.BigQuery.V2.Inputs.SparkOptions - Optional. Spark specific options.
- StrictMode bool - Optional. Can be set for procedures only. If true (default), the definition body will be validated in the creation and the updates of the procedure. For procedures with an argument of ANY TYPE, definition body validation is not supported at creation/update time, and thus this field must be set to false explicitly.
- DatasetId string
- DefinitionBody string - The body of the routine. For functions, this is the expression in the AS clause. If language=SQL, it is the substring inside (but excluding) the parentheses. For example, for the function created with the following statement:
  CREATE FUNCTION JoinLines(x string, y string) as (concat(x, "\n", y))
  the definition_body is concat(x, "\n", y) (\n is not replaced with linebreak). If language=JAVASCRIPT, it is the evaluated string in the AS clause. For example, for the function created with the following statement:
  CREATE FUNCTION f() RETURNS STRING LANGUAGE js AS 'return "\n";\n'
  the definition_body is return "\n";\n Note that both \n are replaced with linebreaks.
- RoutineReference RoutineReferenceArgs - Reference describing the ID of this routine.
- RoutineType RoutineRoutineType - The type of routine.
- Arguments []ArgumentArgs - Optional.
- DataGovernanceType RoutineDataGovernanceType - Optional. If set to DATA_MASKING, the function is validated and made available as a masking function. For more information, see Create custom masking routines.
- Description string - Optional. The description of the routine, if defined.
- DeterminismLevel RoutineDeterminismLevel - Optional. The determinism level of the JavaScript UDF, if defined.
- ImportedLibraries []string - Optional. If language = "JAVASCRIPT", this field stores the path of the imported JAVASCRIPT libraries.
- Language RoutineLanguage - Optional. Defaults to "SQL" if the remote_function_options field is absent; not set otherwise.
- Project string
- RemoteFunctionOptions RemoteFunctionOptionsArgs - Optional. Remote function specific options.
- ReturnTableType StandardSqlTableTypeArgs - Optional. Can be set only if routine_type = "TABLE_VALUED_FUNCTION". If absent, the return table type is inferred from definition_body at query time in each query that references this routine. If present, then the columns in the evaluated table result will be cast to match the column types specified in the return table type, at query time.
- ReturnType StandardSqlDataTypeArgs - Optional if language = "SQL"; required otherwise. Cannot be set if routine_type = "TABLE_VALUED_FUNCTION". If absent, the return type is inferred from definition_body at query time in each query that references this routine. If present, then the evaluated result will be cast to the specified return type at query time. For example, for the functions created with the following statements:
  CREATE FUNCTION Add(x FLOAT64, y FLOAT64) RETURNS FLOAT64 AS (x + y);
  CREATE FUNCTION Increment(x FLOAT64) AS (Add(x, 1));
  CREATE FUNCTION Decrement(x FLOAT64) RETURNS FLOAT64 AS (Add(x, -1));
  the return_type is {type_kind: "FLOAT64"} for Add and Decrement, and is absent for Increment (inferred as FLOAT64 at query time). Suppose the function Add is replaced by CREATE OR REPLACE FUNCTION Add(x INT64, y INT64) AS (x + y); then the inferred return type of Increment is automatically changed to INT64 at query time, while the return type of Decrement remains FLOAT64.
- SecurityMode RoutineSecurityMode - Optional. The security mode of the routine, if defined. If not defined, the security mode is automatically determined from the routine's configuration.
- SparkOptions SparkOptionsArgs - Optional. Spark specific options.
- StrictMode bool - Optional. Can be set for procedures only. If true (default), the definition body will be validated in the creation and the updates of the procedure. For procedures with an argument of ANY TYPE, definition body validation is not supported at creation/update time, and thus this field must be set to false explicitly.
- datasetId String
- definitionBody String - The body of the routine. For functions, this is the expression in the AS clause. If language=SQL, it is the substring inside (but excluding) the parentheses. For example, for the function created with the following statement:
  CREATE FUNCTION JoinLines(x string, y string) as (concat(x, "\n", y))
  the definition_body is concat(x, "\n", y) (\n is not replaced with linebreak). If language=JAVASCRIPT, it is the evaluated string in the AS clause. For example, for the function created with the following statement:
  CREATE FUNCTION f() RETURNS STRING LANGUAGE js AS 'return "\n";\n'
  the definition_body is return "\n";\n Note that both \n are replaced with linebreaks.
- routineReference RoutineReference - Reference describing the ID of this routine.
- routineType RoutineRoutineType - The type of routine.
- arguments List<Argument> - Optional.
- dataGovernanceType RoutineDataGovernanceType - Optional. If set to DATA_MASKING, the function is validated and made available as a masking function. For more information, see Create custom masking routines.
- description String - Optional. The description of the routine, if defined.
- determinismLevel RoutineDeterminismLevel - Optional. The determinism level of the JavaScript UDF, if defined.
- importedLibraries List<String> - Optional. If language = "JAVASCRIPT", this field stores the path of the imported JAVASCRIPT libraries.
- language RoutineLanguage - Optional. Defaults to "SQL" if the remote_function_options field is absent; not set otherwise.
- project String
- remoteFunctionOptions RemoteFunctionOptions - Optional. Remote function specific options.
- returnTableType StandardSqlTableType - Optional. Can be set only if routine_type = "TABLE_VALUED_FUNCTION". If absent, the return table type is inferred from definition_body at query time in each query that references this routine. If present, then the columns in the evaluated table result will be cast to match the column types specified in the return table type, at query time.
- returnType StandardSqlDataType - Optional if language = "SQL"; required otherwise. Cannot be set if routine_type = "TABLE_VALUED_FUNCTION". If absent, the return type is inferred from definition_body at query time in each query that references this routine. If present, then the evaluated result will be cast to the specified return type at query time. For example, for the functions created with the following statements:
  CREATE FUNCTION Add(x FLOAT64, y FLOAT64) RETURNS FLOAT64 AS (x + y);
  CREATE FUNCTION Increment(x FLOAT64) AS (Add(x, 1));
  CREATE FUNCTION Decrement(x FLOAT64) RETURNS FLOAT64 AS (Add(x, -1));
  the return_type is {type_kind: "FLOAT64"} for Add and Decrement, and is absent for Increment (inferred as FLOAT64 at query time). Suppose the function Add is replaced by CREATE OR REPLACE FUNCTION Add(x INT64, y INT64) AS (x + y); then the inferred return type of Increment is automatically changed to INT64 at query time, while the return type of Decrement remains FLOAT64.
- securityMode RoutineSecurityMode - Optional. The security mode of the routine, if defined. If not defined, the security mode is automatically determined from the routine's configuration.
- sparkOptions SparkOptions - Optional. Spark specific options.
- strictMode Boolean - Optional. Can be set for procedures only. If true (default), the definition body will be validated in the creation and the updates of the procedure. For procedures with an argument of ANY TYPE, definition body validation is not supported at creation/update time, and thus this field must be set to false explicitly.
- datasetId string
- definitionBody string - The body of the routine. For functions, this is the expression in the AS clause. If language=SQL, it is the substring inside (but excluding) the parentheses. For example, for the function created with the following statement:
  CREATE FUNCTION JoinLines(x string, y string) as (concat(x, "\n", y))
  the definition_body is concat(x, "\n", y) (\n is not replaced with linebreak). If language=JAVASCRIPT, it is the evaluated string in the AS clause. For example, for the function created with the following statement:
  CREATE FUNCTION f() RETURNS STRING LANGUAGE js AS 'return "\n";\n'
  the definition_body is return "\n";\n Note that both \n are replaced with linebreaks.
- routineReference RoutineReference - Reference describing the ID of this routine.
- routineType RoutineRoutineType - The type of routine.
- arguments Argument[] - Optional.
- dataGovernanceType RoutineDataGovernanceType - Optional. If set to DATA_MASKING, the function is validated and made available as a masking function. For more information, see Create custom masking routines.
- description string - Optional. The description of the routine, if defined.
- determinismLevel RoutineDeterminismLevel - Optional. The determinism level of the JavaScript UDF, if defined.
- importedLibraries string[] - Optional. If language = "JAVASCRIPT", this field stores the path of the imported JAVASCRIPT libraries.
- language RoutineLanguage - Optional. Defaults to "SQL" if the remote_function_options field is absent; not set otherwise.
- project string
- remoteFunctionOptions RemoteFunctionOptions - Optional. Remote function specific options.
- returnTableType StandardSqlTableType - Optional. Can be set only if routine_type = "TABLE_VALUED_FUNCTION". If absent, the return table type is inferred from definition_body at query time in each query that references this routine. If present, then the columns in the evaluated table result will be cast to match the column types specified in the return table type, at query time.
- returnType StandardSqlDataType - Optional if language = "SQL"; required otherwise. Cannot be set if routine_type = "TABLE_VALUED_FUNCTION". If absent, the return type is inferred from definition_body at query time in each query that references this routine. If present, then the evaluated result will be cast to the specified return type at query time. For example, for the functions created with the following statements:
  CREATE FUNCTION Add(x FLOAT64, y FLOAT64) RETURNS FLOAT64 AS (x + y);
  CREATE FUNCTION Increment(x FLOAT64) AS (Add(x, 1));
  CREATE FUNCTION Decrement(x FLOAT64) RETURNS FLOAT64 AS (Add(x, -1));
  the return_type is {type_kind: "FLOAT64"} for Add and Decrement, and is absent for Increment (inferred as FLOAT64 at query time). Suppose the function Add is replaced by CREATE OR REPLACE FUNCTION Add(x INT64, y INT64) AS (x + y); then the inferred return type of Increment is automatically changed to INT64 at query time, while the return type of Decrement remains FLOAT64.
- securityMode RoutineSecurityMode - Optional. The security mode of the routine, if defined. If not defined, the security mode is automatically determined from the routine's configuration.
- sparkOptions SparkOptions - Optional. Spark specific options.
- strictMode boolean - Optional. Can be set for procedures only. If true (default), the definition body will be validated in the creation and the updates of the procedure. For procedures with an argument of ANY TYPE, definition body validation is not supported at creation/update time, and thus this field must be set to false explicitly.
- dataset_id str
- definition_body str - The body of the routine. For functions, this is the expression in the AS clause. If language=SQL, it is the substring inside (but excluding) the parentheses. For example, for the function created with the following statement:
  CREATE FUNCTION JoinLines(x string, y string) as (concat(x, "\n", y))
  the definition_body is concat(x, "\n", y) (\n is not replaced with linebreak). If language=JAVASCRIPT, it is the evaluated string in the AS clause. For example, for the function created with the following statement:
  CREATE FUNCTION f() RETURNS STRING LANGUAGE js AS 'return "\n";\n'
  the definition_body is return "\n";\n Note that both \n are replaced with linebreaks.
- routine_reference RoutineReferenceArgs - Reference describing the ID of this routine.
- routine_type RoutineRoutineType - The type of routine.
- arguments Sequence[ArgumentArgs] - Optional.
- data_governance_type RoutineDataGovernanceType - Optional. If set to DATA_MASKING, the function is validated and made available as a masking function. For more information, see Create custom masking routines.
- description str - Optional. The description of the routine, if defined.
- determinism_level RoutineDeterminismLevel - Optional. The determinism level of the JavaScript UDF, if defined.
- imported_libraries Sequence[str] - Optional. If language = "JAVASCRIPT", this field stores the path of the imported JAVASCRIPT libraries.
- language RoutineLanguage - Optional. Defaults to "SQL" if the remote_function_options field is absent; not set otherwise.
- project str
- remote_function_options RemoteFunctionOptionsArgs - Optional. Remote function specific options.
- return_table_type StandardSqlTableTypeArgs - Optional. Can be set only if routine_type = "TABLE_VALUED_FUNCTION". If absent, the return table type is inferred from definition_body at query time in each query that references this routine. If present, then the columns in the evaluated table result will be cast to match the column types specified in the return table type, at query time.
- return_type StandardSqlDataTypeArgs - Optional if language = "SQL"; required otherwise. Cannot be set if routine_type = "TABLE_VALUED_FUNCTION". If absent, the return type is inferred from definition_body at query time in each query that references this routine. If present, then the evaluated result will be cast to the specified return type at query time. For example, for the functions created with the following statements:
  CREATE FUNCTION Add(x FLOAT64, y FLOAT64) RETURNS FLOAT64 AS (x + y);
  CREATE FUNCTION Increment(x FLOAT64) AS (Add(x, 1));
  CREATE FUNCTION Decrement(x FLOAT64) RETURNS FLOAT64 AS (Add(x, -1));
  the return_type is {type_kind: "FLOAT64"} for Add and Decrement, and is absent for Increment (inferred as FLOAT64 at query time). Suppose the function Add is replaced by CREATE OR REPLACE FUNCTION Add(x INT64, y INT64) AS (x + y); then the inferred return type of Increment is automatically changed to INT64 at query time, while the return type of Decrement remains FLOAT64.
- security_mode RoutineSecurityMode - Optional. The security mode of the routine, if defined. If not defined, the security mode is automatically determined from the routine's configuration.
- spark_options SparkOptionsArgs - Optional. Spark specific options.
- strict_mode bool - Optional. Can be set for procedures only. If true (default), the definition body will be validated in the creation and the updates of the procedure. For procedures with an argument of ANY TYPE, definition body validation is not supported at creation/update time, and thus this field must be set to false explicitly.
- datasetId String
- definitionBody String - The body of the routine. For functions, this is the expression in the AS clause. If language=SQL, it is the substring inside (but excluding) the parentheses. For example, for the function created with the following statement:
  CREATE FUNCTION JoinLines(x string, y string) as (concat(x, "\n", y))
  The definition_body is concat(x, "\n", y)
  (\n is not replaced with a linebreak). If language=JAVASCRIPT, it is the evaluated string in the AS clause. For example, for the function created with the following statement: CREATE FUNCTION f() RETURNS STRING LANGUAGE js AS 'return "\n";\n'
  The definition_body is return "\n";\n
  Note that both \n are replaced with linebreaks.
- routineReference Property Map - Reference describing the ID of this routine.
- routineType "ROUTINE_TYPE_UNSPECIFIED" | "SCALAR_FUNCTION" | "PROCEDURE" | "TABLE_VALUED_FUNCTION" | "AGGREGATE_FUNCTION" - The type of routine.
- arguments List<Property Map> - Optional.
- dataGovernanceType "DATA_GOVERNANCE_TYPE_UNSPECIFIED" | "DATA_MASKING" - Optional. If set to DATA_MASKING, the function is validated and made available as a masking function. For more information, see Create custom masking routines.
- description String - Optional. The description of the routine, if defined.
- determinismLevel "DETERMINISM_LEVEL_UNSPECIFIED" | "DETERMINISTIC" | "NOT_DETERMINISTIC" - Optional. The determinism level of the JavaScript UDF, if defined.
- importedLibraries List<String> - Optional. If language = "JAVASCRIPT", this field stores the path of the imported JavaScript libraries.
- language "LANGUAGE_UNSPECIFIED" | "SQL" | "JAVASCRIPT" | "PYTHON" | "JAVA" | "SCALA" - Optional. Defaults to "SQL" if the remote_function_options field is absent; not set otherwise.
- project String
- remoteFunctionOptions Property Map - Optional. Remote function specific options.
- returnTableType Property Map - Optional. Can be set only if routine_type = "TABLE_VALUED_FUNCTION". If absent, the return table type is inferred from definition_body at query time in each query that references this routine. If present, the columns in the evaluated table result will be cast to match the column types specified in the return table type, at query time.
- returnType Property Map - Optional if language = "SQL"; required otherwise. Cannot be set if routine_type = "TABLE_VALUED_FUNCTION". If absent, the return type is inferred from definition_body at query time in each query that references this routine. If present, the evaluated result will be cast to the specified return type at query time. For example, for the functions created with the following statements:
  - CREATE FUNCTION Add(x FLOAT64, y FLOAT64) RETURNS FLOAT64 AS (x + y);
  - CREATE FUNCTION Increment(x FLOAT64) AS (Add(x, 1));
  - CREATE FUNCTION Decrement(x FLOAT64) RETURNS FLOAT64 AS (Add(x, -1));
  The return_type is {type_kind: "FLOAT64"} for Add and Decrement, and is absent for Increment (inferred as FLOAT64 at query time). Suppose the function Add is replaced by CREATE OR REPLACE FUNCTION Add(x INT64, y INT64) AS (x + y); the inferred return type of Increment is then automatically changed to INT64 at query time, while the return type of Decrement remains FLOAT64.
- securityMode "SECURITY_MODE_UNSPECIFIED" | "DEFINER" | "INVOKER" - Optional. The security mode of the routine, if defined. If not defined, the security mode is automatically determined from the routine's configuration.
- sparkOptions Property Map - Optional. Spark specific options.
- strictMode Boolean - Optional. Can be set for procedures only. If true (the default), the definition body will be validated during creation and updates of the procedure. For procedures with an argument of ANY TYPE, definition body validation is not supported at creation/update time, and thus this field must be set to false explicitly.
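These resource inputs correspond to the BigQuery v2 `routines` REST representation that the provider sends under the hood. As a rough sketch only (the project, dataset, and routine IDs below are hypothetical placeholders, and `routine_payload` is an illustrative helper, not part of any SDK), a minimal request body for a SQL scalar function might be assembled like this:

```python
# Illustrative sketch of the BigQuery v2 routines REST payload that the
# inputs above map onto. All IDs are hypothetical placeholders.
def routine_payload(project_id: str, dataset_id: str, routine_id: str) -> dict:
    """Build a minimal request body for a SQL scalar function routine."""
    return {
        "routineReference": {
            "projectId": project_id,
            "datasetId": dataset_id,
            "routineId": routine_id,
        },
        "routineType": "SCALAR_FUNCTION",
        "language": "SQL",
        # For SQL functions, definitionBody is the expression inside (but
        # excluding) the parentheses of the AS clause: for
        # CREATE FUNCTION Add(x FLOAT64, y FLOAT64) AS (x + y)
        # the definition body is "x + y".
        "definitionBody": "x + y",
        "arguments": [
            {"name": "x", "dataType": {"typeKind": "FLOAT64"}},
            {"name": "y", "dataType": {"typeKind": "FLOAT64"}},
        ],
        # Optional for language = "SQL": if omitted, the return type is
        # inferred from definitionBody at query time.
        "returnType": {"typeKind": "FLOAT64"},
    }

payload = routine_payload("my-project", "my_dataset", "add_fn")
```

Note that auto-naming is not supported for this resource, so the routine ID in `routineReference` must always be supplied explicitly.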
Outputs
All input properties are implicitly available as output properties. Additionally, the Routine resource produces the following output properties:
- CreationTime string - The time when this routine was created, in milliseconds since the epoch.
- Etag string - A hash of this resource.
- Id string - The provider-assigned unique ID for this managed resource.
- LastModifiedTime string - The time when this routine was last modified, in milliseconds since the epoch.
- CreationTime string - The time when this routine was created, in milliseconds since the epoch.
- Etag string - A hash of this resource.
- Id string - The provider-assigned unique ID for this managed resource.
- LastModifiedTime string - The time when this routine was last modified, in milliseconds since the epoch.
- creationTime String - The time when this routine was created, in milliseconds since the epoch.
- etag String - A hash of this resource.
- id String - The provider-assigned unique ID for this managed resource.
- lastModifiedTime String - The time when this routine was last modified, in milliseconds since the epoch.
- creationTime string - The time when this routine was created, in milliseconds since the epoch.
- etag string - A hash of this resource.
- id string - The provider-assigned unique ID for this managed resource.
- lastModifiedTime string - The time when this routine was last modified, in milliseconds since the epoch.
- creation_time str - The time when this routine was created, in milliseconds since the epoch.
- etag str - A hash of this resource.
- id str - The provider-assigned unique ID for this managed resource.
- last_modified_time str - The time when this routine was last modified, in milliseconds since the epoch.
- creationTime String - The time when this routine was created, in milliseconds since the epoch.
- etag String - A hash of this resource.
- id String - The provider-assigned unique ID for this managed resource.
- lastModifiedTime String - The time when this routine was last modified, in milliseconds since the epoch.
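The creationTime and lastModifiedTime outputs are strings holding milliseconds since the Unix epoch, not seconds. A small sketch of converting one to a timezone-aware datetime (the helper name is illustrative, not part of any SDK):

```python
from datetime import datetime, timezone

# creationTime / lastModifiedTime are strings of milliseconds since the
# Unix epoch; divide by 1000 before handing them to datetime.
def routine_timestamp(millis: str) -> datetime:
    return datetime.fromtimestamp(int(millis) / 1000, tz=timezone.utc)
```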
Supporting Types
Argument, ArgumentArgs
- ArgumentKind Pulumi.GoogleNative.BigQuery.V2.ArgumentArgumentKind - Optional. Defaults to FIXED_TYPE.
- DataType Pulumi.GoogleNative.BigQuery.V2.Inputs.StandardSqlDataType - Required unless argument_kind = ANY_TYPE.
- IsAggregate bool - Optional. Whether the argument is an aggregate function parameter. Must be unset for routine types other than AGGREGATE_FUNCTION. For AGGREGATE_FUNCTION, setting it to false is equivalent to adding the "NOT AGGREGATE" clause in DDL; otherwise, it is equivalent to omitting the "NOT AGGREGATE" clause in DDL.
- Mode Pulumi.GoogleNative.BigQuery.V2.ArgumentMode - Optional. Specifies whether the argument is input or output. Can be set for procedures only.
- Name string - Optional. The name of this argument. Can be absent for the function return argument.
- ArgumentKind ArgumentArgumentKind - Optional. Defaults to FIXED_TYPE.
- DataType StandardSqlDataType - Required unless argument_kind = ANY_TYPE.
- IsAggregate bool - Optional. Whether the argument is an aggregate function parameter. Must be unset for routine types other than AGGREGATE_FUNCTION. For AGGREGATE_FUNCTION, setting it to false is equivalent to adding the "NOT AGGREGATE" clause in DDL; otherwise, it is equivalent to omitting the "NOT AGGREGATE" clause in DDL.
- Mode ArgumentMode - Optional. Specifies whether the argument is input or output. Can be set for procedures only.
- Name string - Optional. The name of this argument. Can be absent for the function return argument.
- argumentKind ArgumentArgumentKind - Optional. Defaults to FIXED_TYPE.
- dataType StandardSqlDataType - Required unless argument_kind = ANY_TYPE.
- isAggregate Boolean - Optional. Whether the argument is an aggregate function parameter. Must be unset for routine types other than AGGREGATE_FUNCTION. For AGGREGATE_FUNCTION, setting it to false is equivalent to adding the "NOT AGGREGATE" clause in DDL; otherwise, it is equivalent to omitting the "NOT AGGREGATE" clause in DDL.
- mode ArgumentMode - Optional. Specifies whether the argument is input or output. Can be set for procedures only.
- name String - Optional. The name of this argument. Can be absent for the function return argument.
- argumentKind ArgumentArgumentKind - Optional. Defaults to FIXED_TYPE.
- dataType StandardSqlDataType - Required unless argument_kind = ANY_TYPE.
- isAggregate boolean - Optional. Whether the argument is an aggregate function parameter. Must be unset for routine types other than AGGREGATE_FUNCTION. For AGGREGATE_FUNCTION, setting it to false is equivalent to adding the "NOT AGGREGATE" clause in DDL; otherwise, it is equivalent to omitting the "NOT AGGREGATE" clause in DDL.
- mode ArgumentMode - Optional. Specifies whether the argument is input or output. Can be set for procedures only.
- name string - Optional. The name of this argument. Can be absent for the function return argument.
- argument_kind ArgumentArgumentKind - Optional. Defaults to FIXED_TYPE.
- data_type StandardSqlDataType - Required unless argument_kind = ANY_TYPE.
- is_aggregate bool - Optional. Whether the argument is an aggregate function parameter. Must be unset for routine types other than AGGREGATE_FUNCTION. For AGGREGATE_FUNCTION, setting it to false is equivalent to adding the "NOT AGGREGATE" clause in DDL; otherwise, it is equivalent to omitting the "NOT AGGREGATE" clause in DDL.
- mode ArgumentMode - Optional. Specifies whether the argument is input or output. Can be set for procedures only.
- name str - Optional. The name of this argument. Can be absent for the function return argument.
- argumentKind "ARGUMENT_KIND_UNSPECIFIED" | "FIXED_TYPE" | "ANY_TYPE" - Optional. Defaults to FIXED_TYPE.
- dataType Property Map - Required unless argument_kind = ANY_TYPE.
- isAggregate Boolean - Optional. Whether the argument is an aggregate function parameter. Must be unset for routine types other than AGGREGATE_FUNCTION. For AGGREGATE_FUNCTION, setting it to false is equivalent to adding the "NOT AGGREGATE" clause in DDL; otherwise, it is equivalent to omitting the "NOT AGGREGATE" clause in DDL.
- mode "MODE_UNSPECIFIED" | "IN" | "OUT" | "INOUT" - Optional. Specifies whether the argument is input or output. Can be set for procedures only.
- name String - Optional. The name of this argument. Can be absent for the function return argument.
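The rule worth internalizing here is that data_type is required unless argument_kind = ANY_TYPE, and mode applies to procedures only. A hedged sketch that encodes this constraint (make_argument is a hypothetical helper for illustration, not a provider API):

```python
# Hypothetical helper showing how the Argument fields above combine,
# enforcing: data_type is required unless argument_kind = ANY_TYPE.
def make_argument(name, argument_kind="FIXED_TYPE", data_type=None, mode=None):
    if argument_kind != "ANY_TYPE" and data_type is None:
        raise ValueError("data_type is required unless argument_kind = ANY_TYPE")
    arg = {"name": name, "argumentKind": argument_kind}
    if data_type is not None:
        arg["dataType"] = data_type
    if mode is not None:  # procedures only: "IN", "OUT", or "INOUT"
        arg["mode"] = mode
    return arg

fixed = make_argument("x", data_type={"typeKind": "INT64"})  # typed argument
any_t = make_argument("y", argument_kind="ANY_TYPE")         # templated argument
```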
ArgumentArgumentKind, ArgumentArgumentKindArgs
- ArgumentKindUnspecified - ARGUMENT_KIND_UNSPECIFIED. Default value.
- FixedType - FIXED_TYPE. The argument is a variable with fully specified type, which can be a struct or an array, but not a table.
- AnyType - ANY_TYPE. The argument is any type, including struct or array, but not a table. To be added: FIXED_TABLE, ANY_TABLE
- ArgumentArgumentKindArgumentKindUnspecified - ARGUMENT_KIND_UNSPECIFIED. Default value.
- ArgumentArgumentKindFixedType - FIXED_TYPE. The argument is a variable with fully specified type, which can be a struct or an array, but not a table.
- ArgumentArgumentKindAnyType - ANY_TYPE. The argument is any type, including struct or array, but not a table. To be added: FIXED_TABLE, ANY_TABLE
- ArgumentKindUnspecified - ARGUMENT_KIND_UNSPECIFIED. Default value.
- FixedType - FIXED_TYPE. The argument is a variable with fully specified type, which can be a struct or an array, but not a table.
- AnyType - ANY_TYPE. The argument is any type, including struct or array, but not a table. To be added: FIXED_TABLE, ANY_TABLE
- ArgumentKindUnspecified - ARGUMENT_KIND_UNSPECIFIED. Default value.
- FixedType - FIXED_TYPE. The argument is a variable with fully specified type, which can be a struct or an array, but not a table.
- AnyType - ANY_TYPE. The argument is any type, including struct or array, but not a table. To be added: FIXED_TABLE, ANY_TABLE
- ARGUMENT_KIND_UNSPECIFIED - Default value.
- FIXED_TYPE - The argument is a variable with fully specified type, which can be a struct or an array, but not a table.
- ANY_TYPE - The argument is any type, including struct or array, but not a table. To be added: FIXED_TABLE, ANY_TABLE
- "ARGUMENT_KIND_UNSPECIFIED" - Default value.
- "FIXED_TYPE" - The argument is a variable with fully specified type, which can be a struct or an array, but not a table.
- "ANY_TYPE" - The argument is any type, including struct or array, but not a table. To be added: FIXED_TABLE, ANY_TABLE
ArgumentMode, ArgumentModeArgs
- ModeUnspecified - MODE_UNSPECIFIED. Default value.
- In - IN. The argument is input-only.
- Out - OUT. The argument is output-only.
- Inout - INOUT. The argument is both an input and an output.
- ArgumentModeModeUnspecified - MODE_UNSPECIFIED. Default value.
- ArgumentModeIn - IN. The argument is input-only.
- ArgumentModeOut - OUT. The argument is output-only.
- ArgumentModeInout - INOUT. The argument is both an input and an output.
- ModeUnspecified - MODE_UNSPECIFIED. Default value.
- In - IN. The argument is input-only.
- Out - OUT. The argument is output-only.
- Inout - INOUT. The argument is both an input and an output.
- ModeUnspecified - MODE_UNSPECIFIED. Default value.
- In - IN. The argument is input-only.
- Out - OUT. The argument is output-only.
- Inout - INOUT. The argument is both an input and an output.
- MODE_UNSPECIFIED - Default value.
- IN_ - IN. The argument is input-only.
- OUT - OUT. The argument is output-only.
- INOUT - INOUT. The argument is both an input and an output.
- "MODE_UNSPECIFIED" - Default value.
- "IN" - IN. The argument is input-only.
- "OUT" - OUT. The argument is output-only.
- "INOUT" - INOUT. The argument is both an input and an output.
ArgumentResponse, ArgumentResponseArgs
- ArgumentKind string - Optional. Defaults to FIXED_TYPE.
- DataType Pulumi.GoogleNative.BigQuery.V2.Inputs.StandardSqlDataTypeResponse - Required unless argument_kind = ANY_TYPE.
- IsAggregate bool - Optional. Whether the argument is an aggregate function parameter. Must be unset for routine types other than AGGREGATE_FUNCTION. For AGGREGATE_FUNCTION, setting it to false is equivalent to adding the "NOT AGGREGATE" clause in DDL; otherwise, it is equivalent to omitting the "NOT AGGREGATE" clause in DDL.
- Mode string - Optional. Specifies whether the argument is input or output. Can be set for procedures only.
- Name string - Optional. The name of this argument. Can be absent for the function return argument.
- ArgumentKind string - Optional. Defaults to FIXED_TYPE.
- DataType StandardSqlDataTypeResponse - Required unless argument_kind = ANY_TYPE.
- IsAggregate bool - Optional. Whether the argument is an aggregate function parameter. Must be unset for routine types other than AGGREGATE_FUNCTION. For AGGREGATE_FUNCTION, setting it to false is equivalent to adding the "NOT AGGREGATE" clause in DDL; otherwise, it is equivalent to omitting the "NOT AGGREGATE" clause in DDL.
- Mode string - Optional. Specifies whether the argument is input or output. Can be set for procedures only.
- Name string - Optional. The name of this argument. Can be absent for the function return argument.
- argumentKind String - Optional. Defaults to FIXED_TYPE.
- dataType StandardSqlDataTypeResponse - Required unless argument_kind = ANY_TYPE.
- isAggregate Boolean - Optional. Whether the argument is an aggregate function parameter. Must be unset for routine types other than AGGREGATE_FUNCTION. For AGGREGATE_FUNCTION, setting it to false is equivalent to adding the "NOT AGGREGATE" clause in DDL; otherwise, it is equivalent to omitting the "NOT AGGREGATE" clause in DDL.
- mode String - Optional. Specifies whether the argument is input or output. Can be set for procedures only.
- name String - Optional. The name of this argument. Can be absent for the function return argument.
- argumentKind string - Optional. Defaults to FIXED_TYPE.
- dataType StandardSqlDataTypeResponse - Required unless argument_kind = ANY_TYPE.
- isAggregate boolean - Optional. Whether the argument is an aggregate function parameter. Must be unset for routine types other than AGGREGATE_FUNCTION. For AGGREGATE_FUNCTION, setting it to false is equivalent to adding the "NOT AGGREGATE" clause in DDL; otherwise, it is equivalent to omitting the "NOT AGGREGATE" clause in DDL.
- mode string - Optional. Specifies whether the argument is input or output. Can be set for procedures only.
- name string - Optional. The name of this argument. Can be absent for the function return argument.
- argument_kind str - Optional. Defaults to FIXED_TYPE.
- data_type StandardSqlDataTypeResponse - Required unless argument_kind = ANY_TYPE.
- is_aggregate bool - Optional. Whether the argument is an aggregate function parameter. Must be unset for routine types other than AGGREGATE_FUNCTION. For AGGREGATE_FUNCTION, setting it to false is equivalent to adding the "NOT AGGREGATE" clause in DDL; otherwise, it is equivalent to omitting the "NOT AGGREGATE" clause in DDL.
- mode str - Optional. Specifies whether the argument is input or output. Can be set for procedures only.
- name str - Optional. The name of this argument. Can be absent for the function return argument.
- argumentKind String - Optional. Defaults to FIXED_TYPE.
- dataType Property Map - Required unless argument_kind = ANY_TYPE.
- isAggregate Boolean - Optional. Whether the argument is an aggregate function parameter. Must be unset for routine types other than AGGREGATE_FUNCTION. For AGGREGATE_FUNCTION, setting it to false is equivalent to adding the "NOT AGGREGATE" clause in DDL; otherwise, it is equivalent to omitting the "NOT AGGREGATE" clause in DDL.
- mode String - Optional. Specifies whether the argument is input or output. Can be set for procedures only.
- name String - Optional. The name of this argument. Can be absent for the function return argument.
RemoteFunctionOptions, RemoteFunctionOptionsArgs
- Connection string - Fully qualified name of the user-provided connection object which holds the authentication information to send requests to the remote service. Format: "projects/{projectId}/locations/{locationId}/connections/{connectionId}"
- Endpoint string - Endpoint of the user-provided remote service, e.g. https://us-east1-my_gcf_project.cloudfunctions.net/remote_add
- MaxBatchingRows string - Max number of rows in each batch sent to the remote service. If absent or if 0, BigQuery dynamically decides the number of rows in a batch.
- UserDefinedContext Dictionary<string, string> - User-defined context as a set of key/value pairs, which will be sent as function invocation context together with batched arguments in the requests to the remote service. The total number of bytes of keys and values must be less than 8KB.
- Connection string - Fully qualified name of the user-provided connection object which holds the authentication information to send requests to the remote service. Format: "projects/{projectId}/locations/{locationId}/connections/{connectionId}"
- Endpoint string - Endpoint of the user-provided remote service, e.g. https://us-east1-my_gcf_project.cloudfunctions.net/remote_add
- MaxBatchingRows string - Max number of rows in each batch sent to the remote service. If absent or if 0, BigQuery dynamically decides the number of rows in a batch.
- UserDefinedContext map[string]string - User-defined context as a set of key/value pairs, which will be sent as function invocation context together with batched arguments in the requests to the remote service. The total number of bytes of keys and values must be less than 8KB.
- connection String - Fully qualified name of the user-provided connection object which holds the authentication information to send requests to the remote service. Format: "projects/{projectId}/locations/{locationId}/connections/{connectionId}"
- endpoint String - Endpoint of the user-provided remote service, e.g. https://us-east1-my_gcf_project.cloudfunctions.net/remote_add
- maxBatchingRows String - Max number of rows in each batch sent to the remote service. If absent or if 0, BigQuery dynamically decides the number of rows in a batch.
- userDefinedContext Map<String,String> - User-defined context as a set of key/value pairs, which will be sent as function invocation context together with batched arguments in the requests to the remote service. The total number of bytes of keys and values must be less than 8KB.
- connection string - Fully qualified name of the user-provided connection object which holds the authentication information to send requests to the remote service. Format: "projects/{projectId}/locations/{locationId}/connections/{connectionId}"
- endpoint string - Endpoint of the user-provided remote service, e.g. https://us-east1-my_gcf_project.cloudfunctions.net/remote_add
- maxBatchingRows string - Max number of rows in each batch sent to the remote service. If absent or if 0, BigQuery dynamically decides the number of rows in a batch.
- userDefinedContext {[key: string]: string} - User-defined context as a set of key/value pairs, which will be sent as function invocation context together with batched arguments in the requests to the remote service. The total number of bytes of keys and values must be less than 8KB.
- connection str - Fully qualified name of the user-provided connection object which holds the authentication information to send requests to the remote service. Format: "projects/{projectId}/locations/{locationId}/connections/{connectionId}"
- endpoint str - Endpoint of the user-provided remote service, e.g. https://us-east1-my_gcf_project.cloudfunctions.net/remote_add
- max_batching_rows str - Max number of rows in each batch sent to the remote service. If absent or if 0, BigQuery dynamically decides the number of rows in a batch.
- user_defined_context Mapping[str, str] - User-defined context as a set of key/value pairs, which will be sent as function invocation context together with batched arguments in the requests to the remote service. The total number of bytes of keys and values must be less than 8KB.
- connection String - Fully qualified name of the user-provided connection object which holds the authentication information to send requests to the remote service. Format: "projects/{projectId}/locations/{locationId}/connections/{connectionId}"
- endpoint String - Endpoint of the user-provided remote service, e.g. https://us-east1-my_gcf_project.cloudfunctions.net/remote_add
- maxBatchingRows String - Max number of rows in each batch sent to the remote service. If absent or if 0, BigQuery dynamically decides the number of rows in a batch.
- userDefinedContext Map<String> - User-defined context as a set of key/value pairs, which will be sent as function invocation context together with batched arguments in the requests to the remote service. The total number of bytes of keys and values must be less than 8KB.
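Putting the documented connection format together with the other fields, a hedged sketch of a remote function options value (the project, location, connection, and endpoint values are hypothetical placeholders, and `connection_name` is an illustrative helper):

```python
# Illustrative helper for the documented connection name format:
# "projects/{projectId}/locations/{locationId}/connections/{connectionId}"
def connection_name(project_id: str, location_id: str, connection_id: str) -> str:
    return f"projects/{project_id}/locations/{location_id}/connections/{connection_id}"

remote_function_options = {
    "connection": connection_name("my-project", "us-east1", "my-connection"),
    # Endpoint of the user-provided remote service (placeholder URL).
    "endpoint": "https://us-east1-my_gcf_project.cloudfunctions.net/remote_add",
    # "0" lets BigQuery dynamically decide the number of rows per batch.
    "maxBatchingRows": "0",
    # Sent with every batched invocation; keys plus values must stay under 8KB.
    "userDefinedContext": {"mode": "add"},
}
```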
RemoteFunctionOptionsResponse, RemoteFunctionOptionsResponseArgs
- Connection string - Fully qualified name of the user-provided connection object which holds the authentication information to send requests to the remote service. Format: "projects/{projectId}/locations/{locationId}/connections/{connectionId}"
- Endpoint string - Endpoint of the user-provided remote service, e.g. https://us-east1-my_gcf_project.cloudfunctions.net/remote_add
- MaxBatchingRows string - Max number of rows in each batch sent to the remote service. If absent or if 0, BigQuery dynamically decides the number of rows in a batch.
- UserDefinedContext Dictionary<string, string> - User-defined context as a set of key/value pairs, which will be sent as function invocation context together with batched arguments in the requests to the remote service. The total number of bytes of keys and values must be less than 8KB.
- Connection string - Fully qualified name of the user-provided connection object which holds the authentication information to send requests to the remote service. Format: "projects/{projectId}/locations/{locationId}/connections/{connectionId}"
- Endpoint string - Endpoint of the user-provided remote service, e.g. https://us-east1-my_gcf_project.cloudfunctions.net/remote_add
- MaxBatchingRows string - Max number of rows in each batch sent to the remote service. If absent or if 0, BigQuery dynamically decides the number of rows in a batch.
- UserDefinedContext map[string]string - User-defined context as a set of key/value pairs, which will be sent as function invocation context together with batched arguments in the requests to the remote service. The total number of bytes of keys and values must be less than 8KB.
- connection String - Fully qualified name of the user-provided connection object which holds the authentication information to send requests to the remote service. Format: "projects/{projectId}/locations/{locationId}/connections/{connectionId}"
- endpoint String - Endpoint of the user-provided remote service, e.g. https://us-east1-my_gcf_project.cloudfunctions.net/remote_add
- maxBatchingRows String - Max number of rows in each batch sent to the remote service. If absent or if 0, BigQuery dynamically decides the number of rows in a batch.
- userDefinedContext Map<String,String> - User-defined context as a set of key/value pairs, which will be sent as function invocation context together with batched arguments in the requests to the remote service. The total number of bytes of keys and values must be less than 8KB.
- connection string - Fully qualified name of the user-provided connection object which holds the authentication information to send requests to the remote service. Format: "projects/{projectId}/locations/{locationId}/connections/{connectionId}"
- endpoint string - Endpoint of the user-provided remote service, e.g. https://us-east1-my_gcf_project.cloudfunctions.net/remote_add
- maxBatchingRows string - Max number of rows in each batch sent to the remote service. If absent or if 0, BigQuery dynamically decides the number of rows in a batch.
- userDefinedContext {[key: string]: string} - User-defined context as a set of key/value pairs, which will be sent as function invocation context together with batched arguments in the requests to the remote service. The total number of bytes of keys and values must be less than 8KB.
- connection str - Fully qualified name of the user-provided connection object which holds the authentication information to send requests to the remote service. Format: "projects/{projectId}/locations/{locationId}/connections/{connectionId}"
- endpoint str - Endpoint of the user-provided remote service, e.g. https://us-east1-my_gcf_project.cloudfunctions.net/remote_add
- max_batching_rows str - Max number of rows in each batch sent to the remote service. If absent or if 0, BigQuery dynamically decides the number of rows in a batch.
- user_defined_context Mapping[str, str] - User-defined context as a set of key/value pairs, which will be sent as function invocation context together with batched arguments in the requests to the remote service. The total number of bytes of keys and values must be less than 8KB.
- connection String - Fully qualified name of the user-provided connection object which holds the authentication information to send requests to the remote service. Format: "projects/{projectId}/locations/{locationId}/connections/{connectionId}"
- endpoint String - Endpoint of the user-provided remote service, e.g. https://us-east1-my_gcf_project.cloudfunctions.net/remote_add
- maxBatchingRows String - Max number of rows in each batch sent to the remote service. If absent or if 0, BigQuery dynamically decides the number of rows in a batch.
- userDefinedContext Map<String> - User-defined context as a set of key/value pairs, which will be sent as function invocation context together with batched arguments in the requests to the remote service. The total number of bytes of keys and values must be less than 8KB.
RoutineDataGovernanceType, RoutineDataGovernanceTypeArgs
- DataGovernanceTypeUnspecified - DATA_GOVERNANCE_TYPE_UNSPECIFIED. The data governance type is unspecified.
- DataMasking - DATA_MASKING. The data governance type is data masking.
- RoutineDataGovernanceTypeDataGovernanceTypeUnspecified - DATA_GOVERNANCE_TYPE_UNSPECIFIED. The data governance type is unspecified.
- RoutineDataGovernanceTypeDataMasking - DATA_MASKING. The data governance type is data masking.
- DataGovernanceTypeUnspecified - DATA_GOVERNANCE_TYPE_UNSPECIFIED. The data governance type is unspecified.
- DataMasking - DATA_MASKING. The data governance type is data masking.
- DataGovernanceTypeUnspecified - DATA_GOVERNANCE_TYPE_UNSPECIFIED. The data governance type is unspecified.
- DataMasking - DATA_MASKING. The data governance type is data masking.
- DATA_GOVERNANCE_TYPE_UNSPECIFIED - The data governance type is unspecified.
- DATA_MASKING - The data governance type is data masking.
- "DATA_GOVERNANCE_TYPE_UNSPECIFIED" - The data governance type is unspecified.
- "DATA_MASKING" - The data governance type is data masking.
RoutineDeterminismLevel, RoutineDeterminismLevelArgs
- DeterminismLevelUnspecified - DETERMINISM_LEVEL_UNSPECIFIED - The determinism of the UDF is unspecified.
- Deterministic - DETERMINISTIC - The UDF is deterministic: two function calls with the same inputs always produce the same result, even across two query runs.
- NotDeterministic - NOT_DETERMINISTIC - The UDF is not deterministic.
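BigQuery consults the determinism level when deciding whether query results that invoke the UDF may be cached. As a minimal sketch of where the enum value lands, here is a hypothetical routine payload (the project, dataset, and routine IDs are placeholders, and field names follow the REST API's camelCase convention):

```python
# Hypothetical routine payload; "example-project" / "example_dataset"
# are placeholder identifiers, not real resources.
routine = {
    "routineReference": {
        "projectId": "example-project",
        "datasetId": "example_dataset",
        "routineId": "multiply",
    },
    "routineType": "SCALAR_FUNCTION",
    "language": "SQL",
    "definitionBody": "x * y",
    # DETERMINISTIC: the same inputs always produce the same result,
    # even across two query runs.
    "determinismLevel": "DETERMINISTIC",
}

VALID_LEVELS = {
    "DETERMINISM_LEVEL_UNSPECIFIED",
    "DETERMINISTIC",
    "NOT_DETERMINISTIC",
}
assert routine["determinismLevel"] in VALID_LEVELS
```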
RoutineLanguage, RoutineLanguageArgs
- LanguageUnspecified - LANGUAGE_UNSPECIFIED - Default value.
- Sql - SQL - SQL language.
- Javascript - JAVASCRIPT - JavaScript language.
- Python - PYTHON - Python language.
- Java - JAVA - Java language.
- Scala - SCALA - Scala language.
RoutineReference, RoutineReferenceArgs
- dataset_id str - The ID of the dataset containing this routine.
- project str - The ID of the project containing this routine.
- routine_id str - The ID of the routine. The ID must contain only letters (a-z, A-Z), numbers (0-9), or underscores (_). The maximum length is 256 characters.
RoutineReferenceResponse, RoutineReferenceResponseArgs
- dataset_id str - The ID of the dataset containing this routine.
- project str - The ID of the project containing this routine.
- routine_id str - The ID of the routine. The ID must contain only letters (a-z, A-Z), numbers (0-9), or underscores (_). The maximum length is 256 characters.
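The routine_id constraints above (only letters, numbers, or underscores; at most 256 characters) can be validated client-side before attempting a create. The helper below is an illustrative sketch, not part of any SDK:

```python
import re

# Documented constraints: letters (a-z, A-Z), numbers (0-9),
# or underscores (_); maximum length 256 characters.
_ROUTINE_ID_RE = re.compile(r"[A-Za-z0-9_]{1,256}")

def is_valid_routine_id(routine_id: str) -> bool:
    # fullmatch anchors the pattern to the entire string.
    return _ROUTINE_ID_RE.fullmatch(routine_id) is not None
```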
RoutineRoutineType, RoutineRoutineTypeArgs
- RoutineTypeUnspecified - ROUTINE_TYPE_UNSPECIFIED - Default value.
- ScalarFunction - SCALAR_FUNCTION - Non-built-in persistent scalar function.
- Procedure - PROCEDURE - Stored procedure.
- TableValuedFunction - TABLE_VALUED_FUNCTION - Non-built-in persistent table-valued function (TVF).
- AggregateFunction - AGGREGATE_FUNCTION - Non-built-in persistent aggregate function.
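The routine type also determines which return field on the resource is relevant: scalar and aggregate functions use return_type, table-valued functions use return_table_type, and procedures declare no return value. A hypothetical lookup sketch (this mapping is illustrative, not an SDK API):

```python
# Hypothetical mapping: which return field pairs with each routine type.
RETURN_FIELD_BY_ROUTINE_TYPE = {
    "SCALAR_FUNCTION": "return_type",
    "AGGREGATE_FUNCTION": "return_type",
    "TABLE_VALUED_FUNCTION": "return_table_type",
    "PROCEDURE": None,  # procedures do not declare a return value
}

def return_field_for(routine_type: str):
    try:
        return RETURN_FIELD_BY_ROUTINE_TYPE[routine_type]
    except KeyError:
        raise ValueError(f"unknown routine type: {routine_type}")
```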
RoutineSecurityMode, RoutineSecurityModeArgs
- SecurityModeUnspecified - SECURITY_MODE_UNSPECIFIED - The security mode of the routine is unspecified.
- Definer - DEFINER - The routine executes with the privileges of the user who defined it.
- Invoker - INVOKER - The routine executes with the privileges of the user who invokes it.
SparkOptions, SparkOptionsArgs
- ArchiveUris List<string> - Archive files to be extracted into the working directory of each executor.
- Connection string - Fully qualified name of the user-provided Spark connection object. Format: "projects/{project_id}/locations/{location_id}/connections/{connection_id}"
- ContainerImage string - Custom container image for the runtime environment.
- FileUris List<string> - Files to be placed in the working directory of each executor.
- JarUris List<string> - JARs to include on the driver and executor CLASSPATH.
- MainClass string - The fully qualified name of a class in jar_uris, for example, com.example.wordcount. Exactly one of main_class and main_jar_uri should be set for the Java/Scala language types.
- MainFileUri string - The main file/jar URI of the Spark application. For Python, exactly one of definition_body and main_file_uri must be set. For Java/Scala, exactly one of main_class and main_file_uri should be set.
- Properties Dictionary<string, string> - Configuration properties as key/value pairs, passed on to the Spark application. For more information, see Apache Spark and the procedure option list.
- PyFileUris List<string> - Python files to be placed on the PYTHONPATH for the PySpark application. Supported file types: .py, .egg, and .zip.
- RuntimeVersion string - Runtime version. If not specified, the default runtime version is used.
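The mutual-exclusion rules in the field descriptions above (Python: exactly one of definition_body and main_file_uri; Java/Scala: exactly one of main_class and main_file_uri) can be checked before submitting the resource. This validator is an illustrative sketch, with spark_options modeled as a plain dict rather than the SDK's SparkOptions type:

```python
def check_spark_entrypoint(language, spark_options, definition_body=None):
    """Validate the documented 'exactly one of' rules for Spark routines.

    spark_options is a plain dict with optional keys "main_class" and
    "main_file_uri" (a sketch, not the SDK type).
    """
    main_file_uri = spark_options.get("main_file_uri")
    main_class = spark_options.get("main_class")
    if language == "PYTHON":
        # bool(x) == bool(y) means both set or both unset: either way invalid.
        if bool(definition_body) == bool(main_file_uri):
            raise ValueError(
                "PYTHON: set exactly one of definition_body and main_file_uri")
    elif language in ("JAVA", "SCALA"):
        if bool(main_class) == bool(main_file_uri):
            raise ValueError(
                f"{language}: set exactly one of main_class and main_file_uri")
```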
SparkOptionsResponse, SparkOptionsResponseArgs
- ArchiveUris List<string> - Archive files to be extracted into the working directory of each executor.
- Connection string - Fully qualified name of the user-provided Spark connection object. Format: "projects/{project_id}/locations/{location_id}/connections/{connection_id}"
- ContainerImage string - Custom container image for the runtime environment.
- FileUris List<string> - Files to be placed in the working directory of each executor.
- JarUris List<string> - JARs to include on the driver and executor CLASSPATH.
- MainClass string - The fully qualified name of a class in jar_uris, for example, com.example.wordcount. Exactly one of main_class and main_jar_uri should be set for the Java/Scala language types.
- MainFileUri string - The main file/jar URI of the Spark application. For Python, exactly one of definition_body and main_file_uri must be set. For Java/Scala, exactly one of main_class and main_file_uri should be set.
- Properties Dictionary<string, string> - Configuration properties as key/value pairs, passed on to the Spark application. For more information, see Apache Spark and the procedure option list.
- PyFileUris List<string> - Python files to be placed on the PYTHONPATH for the PySpark application. Supported file types: .py, .egg, and .zip.
- RuntimeVersion string - Runtime version. If not specified, the default runtime version is used.
StandardSqlDataType, StandardSqlDataTypeArgs
- TypeKind StandardSqlDataTypeTypeKind - The top-level type of this field. Can be any GoogleSQL data type (for example, "INT64", "DATE", "ARRAY").
- ArrayElementType StandardSqlDataType - The type of the array's elements, if type_kind = "ARRAY".
- RangeElementType StandardSqlDataType - The type of the range's elements, if type_kind = "RANGE".
- StructType StandardSqlStructType - The fields of this struct, in order, if type_kind = "STRUCT".
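Because StandardSqlDataType nests through array_element_type, range_element_type, and struct_type, a composite type is expressed as a recursive structure. As a hypothetical sketch, ARRAY<STRUCT<name STRING, score INT64>> would look like this in the API payload (field names follow the REST API's camelCase convention; the struct's fields list follows the StandardSqlStructType shape):

```python
# Sketch of a nested type: ARRAY<STRUCT<name STRING, score INT64>>.
array_of_structs = {
    "typeKind": "ARRAY",
    "arrayElementType": {
        "typeKind": "STRUCT",
        "structType": {
            "fields": [
                {"name": "name", "type": {"typeKind": "STRING"}},
                {"name": "score", "type": {"typeKind": "INT64"}},
            ]
        },
    },
}
# The element type is itself a full StandardSqlDataType.
assert array_of_structs["arrayElementType"]["typeKind"] == "STRUCT"
```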
StandardSqlDataTypeResponse, StandardSqlDataTypeResponseArgs
- Struct
Type Pulumi.Google Native. Big Query. V2. Inputs. Standard Sql Struct Type Response - The fields of this struct, in order, if type_kind = "STRUCT".
- Type
Kind string - The top level type of this field. Can be any GoogleSQL data type (e.g., "INT64", "DATE", "ARRAY").
- ArrayElementType Pulumi.GoogleNative.BigQuery.V2.Inputs.StandardSqlDataTypeResponse - The type of the array's elements, if type_kind = "ARRAY".
- RangeElementType Pulumi.GoogleNative.BigQuery.V2.Inputs.StandardSqlDataTypeResponse - The type of the range's elements, if type_kind = "RANGE".
- StructType Pulumi.GoogleNative.BigQuery.V2.Inputs.StandardSqlStructTypeResponse - The fields of this struct, in order, if type_kind = "STRUCT".
- TypeKind string - The top level type of this field. Can be any GoogleSQL data type (e.g., "INT64", "DATE", "ARRAY").
- ArrayElementType StandardSqlDataTypeResponse - The type of the array's elements, if type_kind = "ARRAY".
- RangeElementType StandardSqlDataTypeResponse - The type of the range's elements, if type_kind = "RANGE".
- StructType StandardSqlStructTypeResponse - The fields of this struct, in order, if type_kind = "STRUCT".
- typeKind String - The top level type of this field. Can be any GoogleSQL data type (e.g., "INT64", "DATE", "ARRAY").
- arrayElementType StandardSqlDataTypeResponse - The type of the array's elements, if type_kind = "ARRAY".
- rangeElementType StandardSqlDataTypeResponse - The type of the range's elements, if type_kind = "RANGE".
- structType StandardSqlStructTypeResponse - The fields of this struct, in order, if type_kind = "STRUCT".
- typeKind string - The top level type of this field. Can be any GoogleSQL data type (e.g., "INT64", "DATE", "ARRAY").
- arrayElementType StandardSqlDataTypeResponse - The type of the array's elements, if type_kind = "ARRAY".
- rangeElementType StandardSqlDataTypeResponse - The type of the range's elements, if type_kind = "RANGE".
- structType StandardSqlStructTypeResponse - The fields of this struct, in order, if type_kind = "STRUCT".
- type_kind str - The top level type of this field. Can be any GoogleSQL data type (e.g., "INT64", "DATE", "ARRAY").
- array_element_type StandardSqlDataTypeResponse - The type of the array's elements, if type_kind = "ARRAY".
- range_element_type StandardSqlDataTypeResponse - The type of the range's elements, if type_kind = "RANGE".
- struct_type StandardSqlStructTypeResponse - The fields of this struct, in order, if type_kind = "STRUCT".
- typeKind String - The top level type of this field. Can be any GoogleSQL data type (e.g., "INT64", "DATE", "ARRAY").
- arrayElementType Property Map - The type of the array's elements, if type_kind = "ARRAY".
- rangeElementType Property Map - The type of the range's elements, if type_kind = "RANGE".
- structType Property Map - The fields of this struct, in order, if type_kind = "STRUCT".
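Because StandardSqlDataType is recursive (ARRAY and RANGE refer back to an element type, and STRUCT refers to a list of fields), a nested SQL type is expressed by nesting these objects. As a rough sketch, the property-map (YAML/JSON) shape of ARRAY<STRUCT<name STRING, score INT64>> can be written with plain Python dicts standing in for the typed Args classes; this is an illustration of the schema shape, not an SDK call:

```python
# Sketch of the recursive StandardSqlDataType property-map shape.
# Plain dicts are used for illustration; key names follow the
# camelCase "Property Map" form listed above.

# ARRAY<STRUCT<name STRING, score INT64>>
array_of_structs = {
    "typeKind": "ARRAY",
    "arrayElementType": {            # present because typeKind == "ARRAY"
        "typeKind": "STRUCT",
        "structType": {              # present because typeKind == "STRUCT"
            "fields": [
                {"name": "name", "type": {"typeKind": "STRING"}},
                {"name": "score", "type": {"typeKind": "INT64"}},
            ]
        },
    },
}

# The element type of the array is itself a complete StandardSqlDataType.
element = array_of_structs["arrayElementType"]
```

The same nesting applies one level deeper for, say, an array of arrays: the inner `arrayElementType` would again carry its own `typeKind`.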
StandardSqlDataTypeTypeKind, StandardSqlDataTypeTypeKindArgs
- TypeKindUnspecified - TYPE_KIND_UNSPECIFIED: Invalid type.
- Int64 - INT64: Encoded as a string in decimal format.
- Bool - BOOL: Encoded as a boolean "false" or "true".
- Float64 - FLOAT64: Encoded as a number, or string "NaN", "Infinity" or "-Infinity".
- String - STRING: Encoded as a string value.
- Bytes - BYTES: Encoded as a base64 string per RFC 4648, section 4.
- Timestamp - TIMESTAMP: Encoded as an RFC 3339 timestamp with mandatory "Z" time zone string: 1985-04-12T23:20:50.52Z
- Date - DATE: Encoded as an RFC 3339 full-date format string: 1985-04-12
- Time - TIME: Encoded as an RFC 3339 partial-time format string: 23:20:50.52
- Datetime - DATETIME: Encoded as RFC 3339 full-date "T" partial-time: 1985-04-12T23:20:50.52
- Interval - INTERVAL: Encoded as a fully qualified 3-part interval: 0-5 15 2:30:45.6
- Geography - GEOGRAPHY: Encoded as WKT.
- Numeric - NUMERIC: Encoded as a decimal string.
- Bignumeric - BIGNUMERIC: Encoded as a decimal string.
- Json - JSON: Encoded as a string.
- Array - ARRAY: Encoded as a list with types matching Type.array_type.
- Struct - STRUCT: Encoded as a list with fields of type Type.struct_type[i]. A list is used because a JSON object cannot have duplicate field names.
- Range - RANGE: Encoded as a pair with types matching range_element_type. Pairs must begin with "[", end with ")", and be separated by ", ".
- StandardSqlDataTypeTypeKindTypeKindUnspecified - TYPE_KIND_UNSPECIFIED: Invalid type.
- StandardSqlDataTypeTypeKindInt64 - INT64: Encoded as a string in decimal format.
- StandardSqlDataTypeTypeKindBool - BOOL: Encoded as a boolean "false" or "true".
- StandardSqlDataTypeTypeKindFloat64 - FLOAT64: Encoded as a number, or string "NaN", "Infinity" or "-Infinity".
- StandardSqlDataTypeTypeKindString - STRING: Encoded as a string value.
- StandardSqlDataTypeTypeKindBytes - BYTES: Encoded as a base64 string per RFC 4648, section 4.
- StandardSqlDataTypeTypeKindTimestamp - TIMESTAMP: Encoded as an RFC 3339 timestamp with mandatory "Z" time zone string: 1985-04-12T23:20:50.52Z
- StandardSqlDataTypeTypeKindDate - DATE: Encoded as an RFC 3339 full-date format string: 1985-04-12
- StandardSqlDataTypeTypeKindTime - TIME: Encoded as an RFC 3339 partial-time format string: 23:20:50.52
- StandardSqlDataTypeTypeKindDatetime - DATETIME: Encoded as RFC 3339 full-date "T" partial-time: 1985-04-12T23:20:50.52
- StandardSqlDataTypeTypeKindInterval - INTERVAL: Encoded as a fully qualified 3-part interval: 0-5 15 2:30:45.6
- StandardSqlDataTypeTypeKindGeography - GEOGRAPHY: Encoded as WKT.
- StandardSqlDataTypeTypeKindNumeric - NUMERIC: Encoded as a decimal string.
- StandardSqlDataTypeTypeKindBignumeric - BIGNUMERIC: Encoded as a decimal string.
- StandardSqlDataTypeTypeKindJson - JSON: Encoded as a string.
- StandardSqlDataTypeTypeKindArray - ARRAY: Encoded as a list with types matching Type.array_type.
- StandardSqlDataTypeTypeKindStruct - STRUCT: Encoded as a list with fields of type Type.struct_type[i]. A list is used because a JSON object cannot have duplicate field names.
- StandardSqlDataTypeTypeKindRange - RANGE: Encoded as a pair with types matching range_element_type. Pairs must begin with "[", end with ")", and be separated by ", ".
- TypeKindUnspecified - TYPE_KIND_UNSPECIFIED: Invalid type.
- Int64 - INT64: Encoded as a string in decimal format.
- Bool - BOOL: Encoded as a boolean "false" or "true".
- Float64 - FLOAT64: Encoded as a number, or string "NaN", "Infinity" or "-Infinity".
- String - STRING: Encoded as a string value.
- Bytes - BYTES: Encoded as a base64 string per RFC 4648, section 4.
- Timestamp - TIMESTAMP: Encoded as an RFC 3339 timestamp with mandatory "Z" time zone string: 1985-04-12T23:20:50.52Z
- Date - DATE: Encoded as an RFC 3339 full-date format string: 1985-04-12
- Time - TIME: Encoded as an RFC 3339 partial-time format string: 23:20:50.52
- Datetime - DATETIME: Encoded as RFC 3339 full-date "T" partial-time: 1985-04-12T23:20:50.52
- Interval - INTERVAL: Encoded as a fully qualified 3-part interval: 0-5 15 2:30:45.6
- Geography - GEOGRAPHY: Encoded as WKT.
- Numeric - NUMERIC: Encoded as a decimal string.
- Bignumeric - BIGNUMERIC: Encoded as a decimal string.
- Json - JSON: Encoded as a string.
- Array - ARRAY: Encoded as a list with types matching Type.array_type.
- Struct - STRUCT: Encoded as a list with fields of type Type.struct_type[i]. A list is used because a JSON object cannot have duplicate field names.
- Range - RANGE: Encoded as a pair with types matching range_element_type. Pairs must begin with "[", end with ")", and be separated by ", ".
- TypeKindUnspecified - TYPE_KIND_UNSPECIFIED: Invalid type.
- Int64 - INT64: Encoded as a string in decimal format.
- Bool - BOOL: Encoded as a boolean "false" or "true".
- Float64 - FLOAT64: Encoded as a number, or string "NaN", "Infinity" or "-Infinity".
- String - STRING: Encoded as a string value.
- Bytes - BYTES: Encoded as a base64 string per RFC 4648, section 4.
- Timestamp - TIMESTAMP: Encoded as an RFC 3339 timestamp with mandatory "Z" time zone string: 1985-04-12T23:20:50.52Z
- Date - DATE: Encoded as an RFC 3339 full-date format string: 1985-04-12
- Time - TIME: Encoded as an RFC 3339 partial-time format string: 23:20:50.52
- Datetime - DATETIME: Encoded as RFC 3339 full-date "T" partial-time: 1985-04-12T23:20:50.52
- Interval - INTERVAL: Encoded as a fully qualified 3-part interval: 0-5 15 2:30:45.6
- Geography - GEOGRAPHY: Encoded as WKT.
- Numeric - NUMERIC: Encoded as a decimal string.
- Bignumeric - BIGNUMERIC: Encoded as a decimal string.
- Json - JSON: Encoded as a string.
- Array - ARRAY: Encoded as a list with types matching Type.array_type.
- Struct - STRUCT: Encoded as a list with fields of type Type.struct_type[i]. A list is used because a JSON object cannot have duplicate field names.
- Range - RANGE: Encoded as a pair with types matching range_element_type. Pairs must begin with "[", end with ")", and be separated by ", ".
- TYPE_KIND_UNSPECIFIED - Invalid type.
- INT64 - Encoded as a string in decimal format.
- BOOL - Encoded as a boolean "false" or "true".
- FLOAT64 - Encoded as a number, or string "NaN", "Infinity" or "-Infinity".
- STRING - Encoded as a string value.
- BYTES - Encoded as a base64 string per RFC 4648, section 4.
- TIMESTAMP - Encoded as an RFC 3339 timestamp with mandatory "Z" time zone string: 1985-04-12T23:20:50.52Z
- DATE - Encoded as an RFC 3339 full-date format string: 1985-04-12
- TIME - Encoded as an RFC 3339 partial-time format string: 23:20:50.52
- DATETIME - Encoded as RFC 3339 full-date "T" partial-time: 1985-04-12T23:20:50.52
- INTERVAL - Encoded as a fully qualified 3-part interval: 0-5 15 2:30:45.6
- GEOGRAPHY - Encoded as WKT.
- NUMERIC - Encoded as a decimal string.
- BIGNUMERIC - Encoded as a decimal string.
- JSON - Encoded as a string.
- ARRAY - Encoded as a list with types matching Type.array_type.
- STRUCT - Encoded as a list with fields of type Type.struct_type[i]. A list is used because a JSON object cannot have duplicate field names.
- RANGE - Encoded as a pair with types matching range_element_type. Pairs must begin with "[", end with ")", and be separated by ", ".
- "TYPE_KIND_UNSPECIFIED" - Invalid type.
- "INT64" - Encoded as a string in decimal format.
- "BOOL" - Encoded as a boolean "false" or "true".
- "FLOAT64" - Encoded as a number, or string "NaN", "Infinity" or "-Infinity".
- "STRING" - Encoded as a string value.
- "BYTES" - Encoded as a base64 string per RFC 4648, section 4.
- "TIMESTAMP" - Encoded as an RFC 3339 timestamp with mandatory "Z" time zone string: 1985-04-12T23:20:50.52Z
- "DATE" - Encoded as an RFC 3339 full-date format string: 1985-04-12
- "TIME" - Encoded as an RFC 3339 partial-time format string: 23:20:50.52
- "DATETIME" - Encoded as RFC 3339 full-date "T" partial-time: 1985-04-12T23:20:50.52
- "INTERVAL" - Encoded as a fully qualified 3-part interval: 0-5 15 2:30:45.6
- "GEOGRAPHY" - Encoded as WKT.
- "NUMERIC" - Encoded as a decimal string.
- "BIGNUMERIC" - Encoded as a decimal string.
- "JSON" - Encoded as a string.
- "ARRAY" - Encoded as a list with types matching Type.array_type.
- "STRUCT" - Encoded as a list with fields of type Type.struct_type[i]. A list is used because a JSON object cannot have duplicate field names.
- "RANGE" - Encoded as a pair with types matching range_element_type. Pairs must begin with "[", end with ")", and be separated by ", ".
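The JSON encodings listed above can be illustrated with a few literal values. The dict below simply restates the documented rules (decimal strings for INT64 and NUMERIC, RFC 3339 strings for the date/time kinds) as plain Python data; the values are illustrative examples, not output from any BigQuery client:

```python
# Example JSON-encoded values for several type kinds, restating the
# encoding rules documented above (illustrative values only).
encoded_examples = {
    "INT64": "42",                           # decimal string, not a JSON number
    "FLOAT64": 3.5,                          # JSON number; "NaN"/"Infinity" become strings
    "BOOL": True,                            # JSON boolean
    "TIMESTAMP": "1985-04-12T23:20:50.52Z",  # RFC 3339, mandatory "Z" time zone
    "DATE": "1985-04-12",                    # RFC 3339 full-date
    "TIME": "23:20:50.52",                   # RFC 3339 partial-time
    "DATETIME": "1985-04-12T23:20:50.52",    # full-date "T" partial-time
    "NUMERIC": "123.456789",                 # decimal string
    "INTERVAL": "0-5 15 2:30:45.6",          # fully qualified 3-part interval
}
```

Note in particular that INT64 and NUMERIC values arrive as strings, so a consumer must parse them rather than treat them as JSON numbers.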
StandardSqlField, StandardSqlFieldArgs
- Name string - Optional. The name of this field. Can be absent for struct fields.
- Type Pulumi.GoogleNative.BigQuery.V2.Inputs.StandardSqlDataType - Optional. The type of this parameter. Absent if not explicitly specified (e.g., a CREATE FUNCTION statement can omit the return type; in this case the output parameter does not have this "type" field).
- Name string - Optional. The name of this field. Can be absent for struct fields.
- Type StandardSqlDataType - Optional. The type of this parameter. Absent if not explicitly specified (e.g., a CREATE FUNCTION statement can omit the return type; in this case the output parameter does not have this "type" field).
- name String - Optional. The name of this field. Can be absent for struct fields.
- type StandardSqlDataType - Optional. The type of this parameter. Absent if not explicitly specified (e.g., a CREATE FUNCTION statement can omit the return type; in this case the output parameter does not have this "type" field).
- name string - Optional. The name of this field. Can be absent for struct fields.
- type StandardSqlDataType - Optional. The type of this parameter. Absent if not explicitly specified (e.g., a CREATE FUNCTION statement can omit the return type; in this case the output parameter does not have this "type" field).
- name str - Optional. The name of this field. Can be absent for struct fields.
- type StandardSqlDataType - Optional. The type of this parameter. Absent if not explicitly specified (e.g., a CREATE FUNCTION statement can omit the return type; in this case the output parameter does not have this "type" field).
- name String - Optional. The name of this field. Can be absent for struct fields.
- type Property Map - Optional. The type of this parameter. Absent if not explicitly specified (e.g., a CREATE FUNCTION statement can omit the return type; in this case the output parameter does not have this "type" field).
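As the listing above notes, both name and type of a StandardSqlField are optional: struct fields may be unnamed, and the type is absent when it was never explicitly specified. A small sketch with plain dicts (illustrative property-map shapes, not SDK objects; the helper function and field names are made up for this example):

```python
# A fully specified field versus one whose type was omitted
# (e.g., a CREATE FUNCTION statement that left the return type implicit).
named_field = {"name": "user_id", "type": {"typeKind": "INT64"}}
untyped_field = {"name": "result"}  # no "type" key at all

def field_type_kind(field):
    """Return the declared type kind, or None when the type is absent."""
    declared = field.get("type")
    return declared["typeKind"] if declared else None
```

Consumers reading routine metadata should therefore guard against a missing `type` rather than assume every field carries one.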
StandardSqlFieldResponse, StandardSqlFieldResponseArgs
- Name string - Optional. The name of this field. Can be absent for struct fields.
- Type Pulumi.GoogleNative.BigQuery.V2.Inputs.StandardSqlDataTypeResponse - Optional. The type of this parameter. Absent if not explicitly specified (e.g., a CREATE FUNCTION statement can omit the return type; in this case the output parameter does not have this "type" field).
- Name string - Optional. The name of this field. Can be absent for struct fields.
- Type StandardSqlDataTypeResponse - Optional. The type of this parameter. Absent if not explicitly specified (e.g., a CREATE FUNCTION statement can omit the return type; in this case the output parameter does not have this "type" field).
- name String - Optional. The name of this field. Can be absent for struct fields.
- type StandardSqlDataTypeResponse - Optional. The type of this parameter. Absent if not explicitly specified (e.g., a CREATE FUNCTION statement can omit the return type; in this case the output parameter does not have this "type" field).
- name string - Optional. The name of this field. Can be absent for struct fields.
- type StandardSqlDataTypeResponse - Optional. The type of this parameter. Absent if not explicitly specified (e.g., a CREATE FUNCTION statement can omit the return type; in this case the output parameter does not have this "type" field).
- name str - Optional. The name of this field. Can be absent for struct fields.
- type StandardSqlDataTypeResponse - Optional. The type of this parameter. Absent if not explicitly specified (e.g., a CREATE FUNCTION statement can omit the return type; in this case the output parameter does not have this "type" field).
- name String - Optional. The name of this field. Can be absent for struct fields.
- type Property Map - Optional. The type of this parameter. Absent if not explicitly specified (e.g., a CREATE FUNCTION statement can omit the return type; in this case the output parameter does not have this "type" field).
StandardSqlStructType, StandardSqlStructTypeArgs
- Fields List<Pulumi.GoogleNative.BigQuery.V2.Inputs.StandardSqlField> - Fields within the struct.
- Fields []StandardSqlField - Fields within the struct.
- fields List<StandardSqlField> - Fields within the struct.
- fields StandardSqlField[] - Fields within the struct.
- fields Sequence[StandardSqlField] - Fields within the struct.
- fields List<Property Map> - Fields within the struct.
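A StandardSqlStructType is simply an ordered list of StandardSqlField entries; order is significant, which is also why the JSON encoding of a STRUCT value is a list rather than an object. A hedged sketch using plain dicts (the field names are invented for illustration):

```python
# STRUCT<lat FLOAT64, lng FLOAT64> as a StandardSqlStructType property map.
point_struct = {
    "fields": [
        {"name": "lat", "type": {"typeKind": "FLOAT64"}},
        {"name": "lng", "type": {"typeKind": "FLOAT64"}},
    ]
}

# Field order is preserved: fields[0] is "lat", fields[1] is "lng".
field_names = [f["name"] for f in point_struct["fields"]]
```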
StandardSqlStructTypeResponse, StandardSqlStructTypeResponseArgs
- Fields List<Pulumi.GoogleNative.BigQuery.V2.Inputs.StandardSqlFieldResponse> - Fields within the struct.
- Fields []StandardSqlFieldResponse - Fields within the struct.
- fields List<StandardSqlFieldResponse> - Fields within the struct.
- fields StandardSqlFieldResponse[] - Fields within the struct.
- fields Sequence[StandardSqlFieldResponse] - Fields within the struct.
- fields List<Property Map> - Fields within the struct.
StandardSqlTableType, StandardSqlTableTypeArgs
- Columns List<Pulumi.GoogleNative.BigQuery.V2.Inputs.StandardSqlField> - The columns in this table type.
- Columns []StandardSqlField - The columns in this table type.
- columns List<StandardSqlField> - The columns in this table type.
- columns StandardSqlField[] - The columns in this table type.
- columns Sequence[StandardSqlField] - The columns in this table type.
- columns List<Property Map> - The columns in this table type.
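A StandardSqlTableType describes the schema returned by a table-valued routine (the return_table_type argument in the constructor above), and its columns reuse StandardSqlField. A minimal dict sketch; the table function and its column names are hypothetical:

```python
# Table type for a hypothetical table function returning (id INT64, payload JSON),
# in the property-map shape listed above.
table_type = {
    "columns": [
        {"name": "id", "type": {"typeKind": "INT64"}},
        {"name": "payload", "type": {"typeKind": "JSON"}},
    ]
}

# Map each column name to its declared type kind.
column_kinds = {c["name"]: c["type"]["typeKind"] for c in table_type["columns"]}
```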
StandardSqlTableTypeResponse, StandardSqlTableTypeResponseArgs
- Columns List<Pulumi.GoogleNative.BigQuery.V2.Inputs.StandardSqlFieldResponse> - The columns in this table type.
- Columns []StandardSqlFieldResponse - The columns in this table type.
- columns List<StandardSqlFieldResponse> - The columns in this table type.
- columns StandardSqlFieldResponse[] - The columns in this table type.
- columns Sequence[StandardSqlFieldResponse] - The columns in this table type.
- columns List<Property Map> - The columns in this table type.
Package Details
- Repository
- Google Cloud Native pulumi/pulumi-google-native
- License
- Apache-2.0