Google Cloud Native is in preview. Google Cloud Classic is fully supported.
google-native.dataproc/v1beta2.WorkflowTemplate
Creates a new workflow template. Auto-naming is currently not supported for this resource.
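For orientation before the full reference below, here is a minimal sketch of a typical template: an ephemeral managed cluster that runs a single PySpark job. It is written in TypeScript; the project ID, region, machine types, and the gs:// script path are illustrative placeholders, not values taken from this reference.

import * as google_native from "@pulumi/google-native";

// Minimal illustrative template: one managed cluster, one PySpark step.
// All identifiers below (project, bucket, zone) are placeholders.
const nightlyEtl = new google_native.dataproc.v1beta2.WorkflowTemplate("nightlyEtl", {
    id: "nightly-etl",                    // 3-50 chars; letters, digits, underscores, hyphens
    project: "my-project",                // placeholder project ID
    location: "us-central1",              // Dataproc region
    dagTimeout: "1800s",                  // must be between "600s" and "86400s"
    placement: {
        managedCluster: {
            clusterName: "nightly-etl-cluster",
            config: {
                gceClusterConfig: { zoneUri: "us-central1-a" },
                masterConfig: { numInstances: 1, machineTypeUri: "n1-standard-4" },
                workerConfig: { numInstances: 2, machineTypeUri: "n1-standard-4" },
            },
        },
    },
    jobs: [{
        stepId: "run-etl",
        pysparkJob: { mainPythonFileUri: "gs://my-bucket/jobs/etl.py" }, // placeholder URI
    }],
});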
Create WorkflowTemplate Resource
Resources are created with functions called constructors. To learn more about declaring and configuring resources, see Resources.
Constructor syntax
new WorkflowTemplate(name: string, args: WorkflowTemplateArgs, opts?: CustomResourceOptions);
@overload
def WorkflowTemplate(resource_name: str,
args: WorkflowTemplateArgs,
opts: Optional[ResourceOptions] = None)
@overload
def WorkflowTemplate(resource_name: str,
opts: Optional[ResourceOptions] = None,
id: Optional[str] = None,
jobs: Optional[Sequence[OrderedJobArgs]] = None,
placement: Optional[WorkflowTemplatePlacementArgs] = None,
dag_timeout: Optional[str] = None,
labels: Optional[Mapping[str, str]] = None,
location: Optional[str] = None,
parameters: Optional[Sequence[TemplateParameterArgs]] = None,
project: Optional[str] = None,
version: Optional[int] = None)
func NewWorkflowTemplate(ctx *Context, name string, args WorkflowTemplateArgs, opts ...ResourceOption) (*WorkflowTemplate, error)
public WorkflowTemplate(string name, WorkflowTemplateArgs args, CustomResourceOptions? opts = null)
public WorkflowTemplate(String name, WorkflowTemplateArgs args)
public WorkflowTemplate(String name, WorkflowTemplateArgs args, CustomResourceOptions options)
type: google-native:dataproc/v1beta2:WorkflowTemplate
properties: # The arguments to resource properties.
options: # Bag of options to control resource's behavior.
Parameters
- name string
- The unique name of the resource.
- args WorkflowTemplateArgs
- The arguments to resource properties.
- opts CustomResourceOptions
- Bag of options to control resource's behavior.
- resource_name str
- The unique name of the resource.
- args WorkflowTemplateArgs
- The arguments to resource properties.
- opts ResourceOptions
- Bag of options to control resource's behavior.
- ctx Context
- Context object for the current deployment.
- name string
- The unique name of the resource.
- args WorkflowTemplateArgs
- The arguments to resource properties.
- opts ResourceOption
- Bag of options to control resource's behavior.
- name string
- The unique name of the resource.
- args WorkflowTemplateArgs
- The arguments to resource properties.
- opts CustomResourceOptions
- Bag of options to control resource's behavior.
- name String
- The unique name of the resource.
- args WorkflowTemplateArgs
- The arguments to resource properties.
- options CustomResourceOptions
- Bag of options to control resource's behavior.
Constructor example
The following reference example uses placeholder values for all input properties.
var google_nativeWorkflowTemplateResource = new GoogleNative.Dataproc.V1Beta2.WorkflowTemplate("google-nativeWorkflowTemplateResource", new()
{
Id = "string",
Jobs = new[]
{
new GoogleNative.Dataproc.V1Beta2.Inputs.OrderedJobArgs
{
StepId = "string",
HadoopJob = new GoogleNative.Dataproc.V1Beta2.Inputs.HadoopJobArgs
{
ArchiveUris = new[]
{
"string",
},
Args = new[]
{
"string",
},
FileUris = new[]
{
"string",
},
JarFileUris = new[]
{
"string",
},
LoggingConfig = new GoogleNative.Dataproc.V1Beta2.Inputs.LoggingConfigArgs
{
DriverLogLevels =
{
{ "string", "string" },
},
},
MainClass = "string",
MainJarFileUri = "string",
Properties =
{
{ "string", "string" },
},
},
HiveJob = new GoogleNative.Dataproc.V1Beta2.Inputs.HiveJobArgs
{
ContinueOnFailure = false,
JarFileUris = new[]
{
"string",
},
Properties =
{
{ "string", "string" },
},
QueryFileUri = "string",
QueryList = new GoogleNative.Dataproc.V1Beta2.Inputs.QueryListArgs
{
Queries = new[]
{
"string",
},
},
ScriptVariables =
{
{ "string", "string" },
},
},
Labels =
{
{ "string", "string" },
},
PigJob = new GoogleNative.Dataproc.V1Beta2.Inputs.PigJobArgs
{
ContinueOnFailure = false,
JarFileUris = new[]
{
"string",
},
LoggingConfig = new GoogleNative.Dataproc.V1Beta2.Inputs.LoggingConfigArgs
{
DriverLogLevels =
{
{ "string", "string" },
},
},
Properties =
{
{ "string", "string" },
},
QueryFileUri = "string",
QueryList = new GoogleNative.Dataproc.V1Beta2.Inputs.QueryListArgs
{
Queries = new[]
{
"string",
},
},
ScriptVariables =
{
{ "string", "string" },
},
},
PrerequisiteStepIds = new[]
{
"string",
},
PrestoJob = new GoogleNative.Dataproc.V1Beta2.Inputs.PrestoJobArgs
{
ClientTags = new[]
{
"string",
},
ContinueOnFailure = false,
LoggingConfig = new GoogleNative.Dataproc.V1Beta2.Inputs.LoggingConfigArgs
{
DriverLogLevels =
{
{ "string", "string" },
},
},
OutputFormat = "string",
Properties =
{
{ "string", "string" },
},
QueryFileUri = "string",
QueryList = new GoogleNative.Dataproc.V1Beta2.Inputs.QueryListArgs
{
Queries = new[]
{
"string",
},
},
},
PysparkJob = new GoogleNative.Dataproc.V1Beta2.Inputs.PySparkJobArgs
{
MainPythonFileUri = "string",
ArchiveUris = new[]
{
"string",
},
Args = new[]
{
"string",
},
FileUris = new[]
{
"string",
},
JarFileUris = new[]
{
"string",
},
LoggingConfig = new GoogleNative.Dataproc.V1Beta2.Inputs.LoggingConfigArgs
{
DriverLogLevels =
{
{ "string", "string" },
},
},
Properties =
{
{ "string", "string" },
},
PythonFileUris = new[]
{
"string",
},
},
Scheduling = new GoogleNative.Dataproc.V1Beta2.Inputs.JobSchedulingArgs
{
MaxFailuresPerHour = 0,
MaxFailuresTotal = 0,
},
SparkJob = new GoogleNative.Dataproc.V1Beta2.Inputs.SparkJobArgs
{
ArchiveUris = new[]
{
"string",
},
Args = new[]
{
"string",
},
FileUris = new[]
{
"string",
},
JarFileUris = new[]
{
"string",
},
LoggingConfig = new GoogleNative.Dataproc.V1Beta2.Inputs.LoggingConfigArgs
{
DriverLogLevels =
{
{ "string", "string" },
},
},
MainClass = "string",
MainJarFileUri = "string",
Properties =
{
{ "string", "string" },
},
},
SparkRJob = new GoogleNative.Dataproc.V1Beta2.Inputs.SparkRJobArgs
{
MainRFileUri = "string",
ArchiveUris = new[]
{
"string",
},
Args = new[]
{
"string",
},
FileUris = new[]
{
"string",
},
LoggingConfig = new GoogleNative.Dataproc.V1Beta2.Inputs.LoggingConfigArgs
{
DriverLogLevels =
{
{ "string", "string" },
},
},
Properties =
{
{ "string", "string" },
},
},
SparkSqlJob = new GoogleNative.Dataproc.V1Beta2.Inputs.SparkSqlJobArgs
{
JarFileUris = new[]
{
"string",
},
LoggingConfig = new GoogleNative.Dataproc.V1Beta2.Inputs.LoggingConfigArgs
{
DriverLogLevels =
{
{ "string", "string" },
},
},
Properties =
{
{ "string", "string" },
},
QueryFileUri = "string",
QueryList = new GoogleNative.Dataproc.V1Beta2.Inputs.QueryListArgs
{
Queries = new[]
{
"string",
},
},
ScriptVariables =
{
{ "string", "string" },
},
},
},
},
Placement = new GoogleNative.Dataproc.V1Beta2.Inputs.WorkflowTemplatePlacementArgs
{
ClusterSelector = new GoogleNative.Dataproc.V1Beta2.Inputs.ClusterSelectorArgs
{
ClusterLabels =
{
{ "string", "string" },
},
Zone = "string",
},
ManagedCluster = new GoogleNative.Dataproc.V1Beta2.Inputs.ManagedClusterArgs
{
ClusterName = "string",
Config = new GoogleNative.Dataproc.V1Beta2.Inputs.ClusterConfigArgs
{
AutoscalingConfig = new GoogleNative.Dataproc.V1Beta2.Inputs.AutoscalingConfigArgs
{
PolicyUri = "string",
},
ConfigBucket = "string",
EncryptionConfig = new GoogleNative.Dataproc.V1Beta2.Inputs.EncryptionConfigArgs
{
GcePdKmsKeyName = "string",
},
EndpointConfig = new GoogleNative.Dataproc.V1Beta2.Inputs.EndpointConfigArgs
{
EnableHttpPortAccess = false,
},
GceClusterConfig = new GoogleNative.Dataproc.V1Beta2.Inputs.GceClusterConfigArgs
{
InternalIpOnly = false,
Metadata =
{
{ "string", "string" },
},
NetworkUri = "string",
NodeGroupAffinity = new GoogleNative.Dataproc.V1Beta2.Inputs.NodeGroupAffinityArgs
{
NodeGroupUri = "string",
},
PrivateIpv6GoogleAccess = GoogleNative.Dataproc.V1Beta2.GceClusterConfigPrivateIpv6GoogleAccess.PrivateIpv6GoogleAccessUnspecified,
ReservationAffinity = new GoogleNative.Dataproc.V1Beta2.Inputs.ReservationAffinityArgs
{
ConsumeReservationType = GoogleNative.Dataproc.V1Beta2.ReservationAffinityConsumeReservationType.TypeUnspecified,
Key = "string",
Values = new[]
{
"string",
},
},
ServiceAccount = "string",
ServiceAccountScopes = new[]
{
"string",
},
ShieldedInstanceConfig = new GoogleNative.Dataproc.V1Beta2.Inputs.ShieldedInstanceConfigArgs
{
EnableIntegrityMonitoring = false,
EnableSecureBoot = false,
EnableVtpm = false,
},
SubnetworkUri = "string",
Tags = new[]
{
"string",
},
ZoneUri = "string",
},
GkeClusterConfig = new GoogleNative.Dataproc.V1Beta2.Inputs.GkeClusterConfigArgs
{
NamespacedGkeDeploymentTarget = new GoogleNative.Dataproc.V1Beta2.Inputs.NamespacedGkeDeploymentTargetArgs
{
ClusterNamespace = "string",
TargetGkeCluster = "string",
},
},
InitializationActions = new[]
{
new GoogleNative.Dataproc.V1Beta2.Inputs.NodeInitializationActionArgs
{
ExecutableFile = "string",
ExecutionTimeout = "string",
},
},
LifecycleConfig = new GoogleNative.Dataproc.V1Beta2.Inputs.LifecycleConfigArgs
{
AutoDeleteTime = "string",
AutoDeleteTtl = "string",
IdleDeleteTtl = "string",
},
MasterConfig = new GoogleNative.Dataproc.V1Beta2.Inputs.InstanceGroupConfigArgs
{
Accelerators = new[]
{
new GoogleNative.Dataproc.V1Beta2.Inputs.AcceleratorConfigArgs
{
AcceleratorCount = 0,
AcceleratorTypeUri = "string",
},
},
DiskConfig = new GoogleNative.Dataproc.V1Beta2.Inputs.DiskConfigArgs
{
BootDiskSizeGb = 0,
BootDiskType = "string",
NumLocalSsds = 0,
},
ImageUri = "string",
MachineTypeUri = "string",
MinCpuPlatform = "string",
NumInstances = 0,
Preemptibility = GoogleNative.Dataproc.V1Beta2.InstanceGroupConfigPreemptibility.PreemptibilityUnspecified,
},
MetastoreConfig = new GoogleNative.Dataproc.V1Beta2.Inputs.MetastoreConfigArgs
{
DataprocMetastoreService = "string",
},
SecondaryWorkerConfig = new GoogleNative.Dataproc.V1Beta2.Inputs.InstanceGroupConfigArgs
{
Accelerators = new[]
{
new GoogleNative.Dataproc.V1Beta2.Inputs.AcceleratorConfigArgs
{
AcceleratorCount = 0,
AcceleratorTypeUri = "string",
},
},
DiskConfig = new GoogleNative.Dataproc.V1Beta2.Inputs.DiskConfigArgs
{
BootDiskSizeGb = 0,
BootDiskType = "string",
NumLocalSsds = 0,
},
ImageUri = "string",
MachineTypeUri = "string",
MinCpuPlatform = "string",
NumInstances = 0,
Preemptibility = GoogleNative.Dataproc.V1Beta2.InstanceGroupConfigPreemptibility.PreemptibilityUnspecified,
},
SecurityConfig = new GoogleNative.Dataproc.V1Beta2.Inputs.SecurityConfigArgs
{
KerberosConfig = new GoogleNative.Dataproc.V1Beta2.Inputs.KerberosConfigArgs
{
CrossRealmTrustAdminServer = "string",
CrossRealmTrustKdc = "string",
CrossRealmTrustRealm = "string",
CrossRealmTrustSharedPasswordUri = "string",
EnableKerberos = false,
KdcDbKeyUri = "string",
KeyPasswordUri = "string",
KeystorePasswordUri = "string",
KeystoreUri = "string",
KmsKeyUri = "string",
Realm = "string",
RootPrincipalPasswordUri = "string",
TgtLifetimeHours = 0,
TruststorePasswordUri = "string",
TruststoreUri = "string",
},
},
SoftwareConfig = new GoogleNative.Dataproc.V1Beta2.Inputs.SoftwareConfigArgs
{
ImageVersion = "string",
OptionalComponents = new[]
{
GoogleNative.Dataproc.V1Beta2.SoftwareConfigOptionalComponentsItem.ComponentUnspecified,
},
Properties =
{
{ "string", "string" },
},
},
TempBucket = "string",
WorkerConfig = new GoogleNative.Dataproc.V1Beta2.Inputs.InstanceGroupConfigArgs
{
Accelerators = new[]
{
new GoogleNative.Dataproc.V1Beta2.Inputs.AcceleratorConfigArgs
{
AcceleratorCount = 0,
AcceleratorTypeUri = "string",
},
},
DiskConfig = new GoogleNative.Dataproc.V1Beta2.Inputs.DiskConfigArgs
{
BootDiskSizeGb = 0,
BootDiskType = "string",
NumLocalSsds = 0,
},
ImageUri = "string",
MachineTypeUri = "string",
MinCpuPlatform = "string",
NumInstances = 0,
Preemptibility = GoogleNative.Dataproc.V1Beta2.InstanceGroupConfigPreemptibility.PreemptibilityUnspecified,
},
},
Labels =
{
{ "string", "string" },
},
},
},
DagTimeout = "string",
Labels =
{
{ "string", "string" },
},
Location = "string",
Parameters = new[]
{
new GoogleNative.Dataproc.V1Beta2.Inputs.TemplateParameterArgs
{
Fields = new[]
{
"string",
},
Name = "string",
Description = "string",
Validation = new GoogleNative.Dataproc.V1Beta2.Inputs.ParameterValidationArgs
{
Regex = new GoogleNative.Dataproc.V1Beta2.Inputs.RegexValidationArgs
{
Regexes = new[]
{
"string",
},
},
Values = new GoogleNative.Dataproc.V1Beta2.Inputs.ValueValidationArgs
{
Values = new[]
{
"string",
},
},
},
},
},
Project = "string",
Version = 0,
});
example, err := dataprocv1beta2.NewWorkflowTemplate(ctx, "google-nativeWorkflowTemplateResource", &dataprocv1beta2.WorkflowTemplateArgs{
Id: pulumi.String("string"),
Jobs: dataproc.OrderedJobArray{
&dataproc.OrderedJobArgs{
StepId: pulumi.String("string"),
HadoopJob: &dataproc.HadoopJobArgs{
ArchiveUris: pulumi.StringArray{
pulumi.String("string"),
},
Args: pulumi.StringArray{
pulumi.String("string"),
},
FileUris: pulumi.StringArray{
pulumi.String("string"),
},
JarFileUris: pulumi.StringArray{
pulumi.String("string"),
},
LoggingConfig: &dataproc.LoggingConfigArgs{
DriverLogLevels: pulumi.StringMap{
"string": pulumi.String("string"),
},
},
MainClass: pulumi.String("string"),
MainJarFileUri: pulumi.String("string"),
Properties: pulumi.StringMap{
"string": pulumi.String("string"),
},
},
HiveJob: &dataproc.HiveJobArgs{
ContinueOnFailure: pulumi.Bool(false),
JarFileUris: pulumi.StringArray{
pulumi.String("string"),
},
Properties: pulumi.StringMap{
"string": pulumi.String("string"),
},
QueryFileUri: pulumi.String("string"),
QueryList: &dataproc.QueryListArgs{
Queries: pulumi.StringArray{
pulumi.String("string"),
},
},
ScriptVariables: pulumi.StringMap{
"string": pulumi.String("string"),
},
},
Labels: pulumi.StringMap{
"string": pulumi.String("string"),
},
PigJob: &dataproc.PigJobArgs{
ContinueOnFailure: pulumi.Bool(false),
JarFileUris: pulumi.StringArray{
pulumi.String("string"),
},
LoggingConfig: &dataproc.LoggingConfigArgs{
DriverLogLevels: pulumi.StringMap{
"string": pulumi.String("string"),
},
},
Properties: pulumi.StringMap{
"string": pulumi.String("string"),
},
QueryFileUri: pulumi.String("string"),
QueryList: &dataproc.QueryListArgs{
Queries: pulumi.StringArray{
pulumi.String("string"),
},
},
ScriptVariables: pulumi.StringMap{
"string": pulumi.String("string"),
},
},
PrerequisiteStepIds: pulumi.StringArray{
pulumi.String("string"),
},
PrestoJob: &dataproc.PrestoJobArgs{
ClientTags: pulumi.StringArray{
pulumi.String("string"),
},
ContinueOnFailure: pulumi.Bool(false),
LoggingConfig: &dataproc.LoggingConfigArgs{
DriverLogLevels: pulumi.StringMap{
"string": pulumi.String("string"),
},
},
OutputFormat: pulumi.String("string"),
Properties: pulumi.StringMap{
"string": pulumi.String("string"),
},
QueryFileUri: pulumi.String("string"),
QueryList: &dataproc.QueryListArgs{
Queries: pulumi.StringArray{
pulumi.String("string"),
},
},
},
PysparkJob: &dataproc.PySparkJobArgs{
MainPythonFileUri: pulumi.String("string"),
ArchiveUris: pulumi.StringArray{
pulumi.String("string"),
},
Args: pulumi.StringArray{
pulumi.String("string"),
},
FileUris: pulumi.StringArray{
pulumi.String("string"),
},
JarFileUris: pulumi.StringArray{
pulumi.String("string"),
},
LoggingConfig: &dataproc.LoggingConfigArgs{
DriverLogLevels: pulumi.StringMap{
"string": pulumi.String("string"),
},
},
Properties: pulumi.StringMap{
"string": pulumi.String("string"),
},
PythonFileUris: pulumi.StringArray{
pulumi.String("string"),
},
},
Scheduling: &dataproc.JobSchedulingArgs{
MaxFailuresPerHour: pulumi.Int(0),
MaxFailuresTotal: pulumi.Int(0),
},
SparkJob: &dataproc.SparkJobArgs{
ArchiveUris: pulumi.StringArray{
pulumi.String("string"),
},
Args: pulumi.StringArray{
pulumi.String("string"),
},
FileUris: pulumi.StringArray{
pulumi.String("string"),
},
JarFileUris: pulumi.StringArray{
pulumi.String("string"),
},
LoggingConfig: &dataproc.LoggingConfigArgs{
DriverLogLevels: pulumi.StringMap{
"string": pulumi.String("string"),
},
},
MainClass: pulumi.String("string"),
MainJarFileUri: pulumi.String("string"),
Properties: pulumi.StringMap{
"string": pulumi.String("string"),
},
},
SparkRJob: &dataproc.SparkRJobArgs{
MainRFileUri: pulumi.String("string"),
ArchiveUris: pulumi.StringArray{
pulumi.String("string"),
},
Args: pulumi.StringArray{
pulumi.String("string"),
},
FileUris: pulumi.StringArray{
pulumi.String("string"),
},
LoggingConfig: &dataproc.LoggingConfigArgs{
DriverLogLevels: pulumi.StringMap{
"string": pulumi.String("string"),
},
},
Properties: pulumi.StringMap{
"string": pulumi.String("string"),
},
},
SparkSqlJob: &dataproc.SparkSqlJobArgs{
JarFileUris: pulumi.StringArray{
pulumi.String("string"),
},
LoggingConfig: &dataproc.LoggingConfigArgs{
DriverLogLevels: pulumi.StringMap{
"string": pulumi.String("string"),
},
},
Properties: pulumi.StringMap{
"string": pulumi.String("string"),
},
QueryFileUri: pulumi.String("string"),
QueryList: &dataproc.QueryListArgs{
Queries: pulumi.StringArray{
pulumi.String("string"),
},
},
ScriptVariables: pulumi.StringMap{
"string": pulumi.String("string"),
},
},
},
},
Placement: &dataproc.WorkflowTemplatePlacementArgs{
ClusterSelector: &dataproc.ClusterSelectorArgs{
ClusterLabels: pulumi.StringMap{
"string": pulumi.String("string"),
},
Zone: pulumi.String("string"),
},
ManagedCluster: &dataproc.ManagedClusterArgs{
ClusterName: pulumi.String("string"),
Config: &dataproc.ClusterConfigArgs{
AutoscalingConfig: &dataproc.AutoscalingConfigArgs{
PolicyUri: pulumi.String("string"),
},
ConfigBucket: pulumi.String("string"),
EncryptionConfig: &dataproc.EncryptionConfigArgs{
GcePdKmsKeyName: pulumi.String("string"),
},
EndpointConfig: &dataproc.EndpointConfigArgs{
EnableHttpPortAccess: pulumi.Bool(false),
},
GceClusterConfig: &dataproc.GceClusterConfigArgs{
InternalIpOnly: pulumi.Bool(false),
Metadata: pulumi.StringMap{
"string": pulumi.String("string"),
},
NetworkUri: pulumi.String("string"),
NodeGroupAffinity: &dataproc.NodeGroupAffinityArgs{
NodeGroupUri: pulumi.String("string"),
},
PrivateIpv6GoogleAccess: dataprocv1beta2.GceClusterConfigPrivateIpv6GoogleAccessPrivateIpv6GoogleAccessUnspecified,
ReservationAffinity: &dataproc.ReservationAffinityArgs{
ConsumeReservationType: dataprocv1beta2.ReservationAffinityConsumeReservationTypeTypeUnspecified,
Key: pulumi.String("string"),
Values: pulumi.StringArray{
pulumi.String("string"),
},
},
ServiceAccount: pulumi.String("string"),
ServiceAccountScopes: pulumi.StringArray{
pulumi.String("string"),
},
ShieldedInstanceConfig: &dataproc.ShieldedInstanceConfigArgs{
EnableIntegrityMonitoring: pulumi.Bool(false),
EnableSecureBoot: pulumi.Bool(false),
EnableVtpm: pulumi.Bool(false),
},
SubnetworkUri: pulumi.String("string"),
Tags: pulumi.StringArray{
pulumi.String("string"),
},
ZoneUri: pulumi.String("string"),
},
GkeClusterConfig: &dataproc.GkeClusterConfigArgs{
NamespacedGkeDeploymentTarget: &dataproc.NamespacedGkeDeploymentTargetArgs{
ClusterNamespace: pulumi.String("string"),
TargetGkeCluster: pulumi.String("string"),
},
},
InitializationActions: dataproc.NodeInitializationActionArray{
&dataproc.NodeInitializationActionArgs{
ExecutableFile: pulumi.String("string"),
ExecutionTimeout: pulumi.String("string"),
},
},
LifecycleConfig: &dataproc.LifecycleConfigArgs{
AutoDeleteTime: pulumi.String("string"),
AutoDeleteTtl: pulumi.String("string"),
IdleDeleteTtl: pulumi.String("string"),
},
MasterConfig: &dataproc.InstanceGroupConfigArgs{
Accelerators: dataproc.AcceleratorConfigArray{
&dataproc.AcceleratorConfigArgs{
AcceleratorCount: pulumi.Int(0),
AcceleratorTypeUri: pulumi.String("string"),
},
},
DiskConfig: &dataproc.DiskConfigArgs{
BootDiskSizeGb: pulumi.Int(0),
BootDiskType: pulumi.String("string"),
NumLocalSsds: pulumi.Int(0),
},
ImageUri: pulumi.String("string"),
MachineTypeUri: pulumi.String("string"),
MinCpuPlatform: pulumi.String("string"),
NumInstances: pulumi.Int(0),
Preemptibility: dataprocv1beta2.InstanceGroupConfigPreemptibilityPreemptibilityUnspecified,
},
MetastoreConfig: &dataproc.MetastoreConfigArgs{
DataprocMetastoreService: pulumi.String("string"),
},
SecondaryWorkerConfig: &dataproc.InstanceGroupConfigArgs{
Accelerators: dataproc.AcceleratorConfigArray{
&dataproc.AcceleratorConfigArgs{
AcceleratorCount: pulumi.Int(0),
AcceleratorTypeUri: pulumi.String("string"),
},
},
DiskConfig: &dataproc.DiskConfigArgs{
BootDiskSizeGb: pulumi.Int(0),
BootDiskType: pulumi.String("string"),
NumLocalSsds: pulumi.Int(0),
},
ImageUri: pulumi.String("string"),
MachineTypeUri: pulumi.String("string"),
MinCpuPlatform: pulumi.String("string"),
NumInstances: pulumi.Int(0),
Preemptibility: dataprocv1beta2.InstanceGroupConfigPreemptibilityPreemptibilityUnspecified,
},
SecurityConfig: &dataproc.SecurityConfigArgs{
KerberosConfig: &dataproc.KerberosConfigArgs{
CrossRealmTrustAdminServer: pulumi.String("string"),
CrossRealmTrustKdc: pulumi.String("string"),
CrossRealmTrustRealm: pulumi.String("string"),
CrossRealmTrustSharedPasswordUri: pulumi.String("string"),
EnableKerberos: pulumi.Bool(false),
KdcDbKeyUri: pulumi.String("string"),
KeyPasswordUri: pulumi.String("string"),
KeystorePasswordUri: pulumi.String("string"),
KeystoreUri: pulumi.String("string"),
KmsKeyUri: pulumi.String("string"),
Realm: pulumi.String("string"),
RootPrincipalPasswordUri: pulumi.String("string"),
TgtLifetimeHours: pulumi.Int(0),
TruststorePasswordUri: pulumi.String("string"),
TruststoreUri: pulumi.String("string"),
},
},
SoftwareConfig: &dataproc.SoftwareConfigArgs{
ImageVersion: pulumi.String("string"),
OptionalComponents: dataproc.SoftwareConfigOptionalComponentsItemArray{
dataprocv1beta2.SoftwareConfigOptionalComponentsItemComponentUnspecified,
},
Properties: pulumi.StringMap{
"string": pulumi.String("string"),
},
},
TempBucket: pulumi.String("string"),
WorkerConfig: &dataproc.InstanceGroupConfigArgs{
Accelerators: dataproc.AcceleratorConfigArray{
&dataproc.AcceleratorConfigArgs{
AcceleratorCount: pulumi.Int(0),
AcceleratorTypeUri: pulumi.String("string"),
},
},
DiskConfig: &dataproc.DiskConfigArgs{
BootDiskSizeGb: pulumi.Int(0),
BootDiskType: pulumi.String("string"),
NumLocalSsds: pulumi.Int(0),
},
ImageUri: pulumi.String("string"),
MachineTypeUri: pulumi.String("string"),
MinCpuPlatform: pulumi.String("string"),
NumInstances: pulumi.Int(0),
Preemptibility: dataprocv1beta2.InstanceGroupConfigPreemptibilityPreemptibilityUnspecified,
},
},
Labels: pulumi.StringMap{
"string": pulumi.String("string"),
},
},
},
DagTimeout: pulumi.String("string"),
Labels: pulumi.StringMap{
"string": pulumi.String("string"),
},
Location: pulumi.String("string"),
Parameters: dataproc.TemplateParameterArray{
&dataproc.TemplateParameterArgs{
Fields: pulumi.StringArray{
pulumi.String("string"),
},
Name: pulumi.String("string"),
Description: pulumi.String("string"),
Validation: &dataproc.ParameterValidationArgs{
Regex: &dataproc.RegexValidationArgs{
Regexes: pulumi.StringArray{
pulumi.String("string"),
},
},
Values: &dataproc.ValueValidationArgs{
Values: pulumi.StringArray{
pulumi.String("string"),
},
},
},
},
},
Project: pulumi.String("string"),
Version: pulumi.Int(0),
})
var google_nativeWorkflowTemplateResource = new WorkflowTemplate("google-nativeWorkflowTemplateResource", WorkflowTemplateArgs.builder()
.id("string")
.jobs(OrderedJobArgs.builder()
.stepId("string")
.hadoopJob(HadoopJobArgs.builder()
.archiveUris("string")
.args("string")
.fileUris("string")
.jarFileUris("string")
.loggingConfig(LoggingConfigArgs.builder()
.driverLogLevels(Map.of("string", "string"))
.build())
.mainClass("string")
.mainJarFileUri("string")
.properties(Map.of("string", "string"))
.build())
.hiveJob(HiveJobArgs.builder()
.continueOnFailure(false)
.jarFileUris("string")
.properties(Map.of("string", "string"))
.queryFileUri("string")
.queryList(QueryListArgs.builder()
.queries("string")
.build())
.scriptVariables(Map.of("string", "string"))
.build())
.labels(Map.of("string", "string"))
.pigJob(PigJobArgs.builder()
.continueOnFailure(false)
.jarFileUris("string")
.loggingConfig(LoggingConfigArgs.builder()
.driverLogLevels(Map.of("string", "string"))
.build())
.properties(Map.of("string", "string"))
.queryFileUri("string")
.queryList(QueryListArgs.builder()
.queries("string")
.build())
.scriptVariables(Map.of("string", "string"))
.build())
.prerequisiteStepIds("string")
.prestoJob(PrestoJobArgs.builder()
.clientTags("string")
.continueOnFailure(false)
.loggingConfig(LoggingConfigArgs.builder()
.driverLogLevels(Map.of("string", "string"))
.build())
.outputFormat("string")
.properties(Map.of("string", "string"))
.queryFileUri("string")
.queryList(QueryListArgs.builder()
.queries("string")
.build())
.build())
.pysparkJob(PySparkJobArgs.builder()
.mainPythonFileUri("string")
.archiveUris("string")
.args("string")
.fileUris("string")
.jarFileUris("string")
.loggingConfig(LoggingConfigArgs.builder()
.driverLogLevels(Map.of("string", "string"))
.build())
.properties(Map.of("string", "string"))
.pythonFileUris("string")
.build())
.scheduling(JobSchedulingArgs.builder()
.maxFailuresPerHour(0)
.maxFailuresTotal(0)
.build())
.sparkJob(SparkJobArgs.builder()
.archiveUris("string")
.args("string")
.fileUris("string")
.jarFileUris("string")
.loggingConfig(LoggingConfigArgs.builder()
.driverLogLevels(Map.of("string", "string"))
.build())
.mainClass("string")
.mainJarFileUri("string")
.properties(Map.of("string", "string"))
.build())
.sparkRJob(SparkRJobArgs.builder()
.mainRFileUri("string")
.archiveUris("string")
.args("string")
.fileUris("string")
.loggingConfig(LoggingConfigArgs.builder()
.driverLogLevels(Map.of("string", "string"))
.build())
.properties(Map.of("string", "string"))
.build())
.sparkSqlJob(SparkSqlJobArgs.builder()
.jarFileUris("string")
.loggingConfig(LoggingConfigArgs.builder()
.driverLogLevels(Map.of("string", "string"))
.build())
.properties(Map.of("string", "string"))
.queryFileUri("string")
.queryList(QueryListArgs.builder()
.queries("string")
.build())
.scriptVariables(Map.of("string", "string"))
.build())
.build())
.placement(WorkflowTemplatePlacementArgs.builder()
.clusterSelector(ClusterSelectorArgs.builder()
.clusterLabels(Map.of("string", "string"))
.zone("string")
.build())
.managedCluster(ManagedClusterArgs.builder()
.clusterName("string")
.config(ClusterConfigArgs.builder()
.autoscalingConfig(AutoscalingConfigArgs.builder()
.policyUri("string")
.build())
.configBucket("string")
.encryptionConfig(EncryptionConfigArgs.builder()
.gcePdKmsKeyName("string")
.build())
.endpointConfig(EndpointConfigArgs.builder()
.enableHttpPortAccess(false)
.build())
.gceClusterConfig(GceClusterConfigArgs.builder()
.internalIpOnly(false)
.metadata(Map.of("string", "string"))
.networkUri("string")
.nodeGroupAffinity(NodeGroupAffinityArgs.builder()
.nodeGroupUri("string")
.build())
.privateIpv6GoogleAccess("PRIVATE_IPV6_GOOGLE_ACCESS_UNSPECIFIED")
.reservationAffinity(ReservationAffinityArgs.builder()
.consumeReservationType("TYPE_UNSPECIFIED")
.key("string")
.values("string")
.build())
.serviceAccount("string")
.serviceAccountScopes("string")
.shieldedInstanceConfig(ShieldedInstanceConfigArgs.builder()
.enableIntegrityMonitoring(false)
.enableSecureBoot(false)
.enableVtpm(false)
.build())
.subnetworkUri("string")
.tags("string")
.zoneUri("string")
.build())
.gkeClusterConfig(GkeClusterConfigArgs.builder()
.namespacedGkeDeploymentTarget(NamespacedGkeDeploymentTargetArgs.builder()
.clusterNamespace("string")
.targetGkeCluster("string")
.build())
.build())
.initializationActions(NodeInitializationActionArgs.builder()
.executableFile("string")
.executionTimeout("string")
.build())
.lifecycleConfig(LifecycleConfigArgs.builder()
.autoDeleteTime("string")
.autoDeleteTtl("string")
.idleDeleteTtl("string")
.build())
.masterConfig(InstanceGroupConfigArgs.builder()
.accelerators(AcceleratorConfigArgs.builder()
.acceleratorCount(0)
.acceleratorTypeUri("string")
.build())
.diskConfig(DiskConfigArgs.builder()
.bootDiskSizeGb(0)
.bootDiskType("string")
.numLocalSsds(0)
.build())
.imageUri("string")
.machineTypeUri("string")
.minCpuPlatform("string")
.numInstances(0)
.preemptibility("PREEMPTIBILITY_UNSPECIFIED")
.build())
.metastoreConfig(MetastoreConfigArgs.builder()
.dataprocMetastoreService("string")
.build())
.secondaryWorkerConfig(InstanceGroupConfigArgs.builder()
.accelerators(AcceleratorConfigArgs.builder()
.acceleratorCount(0)
.acceleratorTypeUri("string")
.build())
.diskConfig(DiskConfigArgs.builder()
.bootDiskSizeGb(0)
.bootDiskType("string")
.numLocalSsds(0)
.build())
.imageUri("string")
.machineTypeUri("string")
.minCpuPlatform("string")
.numInstances(0)
.preemptibility("PREEMPTIBILITY_UNSPECIFIED")
.build())
.securityConfig(SecurityConfigArgs.builder()
.kerberosConfig(KerberosConfigArgs.builder()
.crossRealmTrustAdminServer("string")
.crossRealmTrustKdc("string")
.crossRealmTrustRealm("string")
.crossRealmTrustSharedPasswordUri("string")
.enableKerberos(false)
.kdcDbKeyUri("string")
.keyPasswordUri("string")
.keystorePasswordUri("string")
.keystoreUri("string")
.kmsKeyUri("string")
.realm("string")
.rootPrincipalPasswordUri("string")
.tgtLifetimeHours(0)
.truststorePasswordUri("string")
.truststoreUri("string")
.build())
.build())
.softwareConfig(SoftwareConfigArgs.builder()
.imageVersion("string")
.optionalComponents("COMPONENT_UNSPECIFIED")
.properties(Map.of("string", "string"))
.build())
.tempBucket("string")
.workerConfig(InstanceGroupConfigArgs.builder()
.accelerators(AcceleratorConfigArgs.builder()
.acceleratorCount(0)
.acceleratorTypeUri("string")
.build())
.diskConfig(DiskConfigArgs.builder()
.bootDiskSizeGb(0)
.bootDiskType("string")
.numLocalSsds(0)
.build())
.imageUri("string")
.machineTypeUri("string")
.minCpuPlatform("string")
.numInstances(0)
.preemptibility("PREEMPTIBILITY_UNSPECIFIED")
.build())
.build())
.labels(Map.of("string", "string"))
.build())
.build())
.dagTimeout("string")
.labels(Map.of("string", "string"))
.location("string")
.parameters(TemplateParameterArgs.builder()
.fields("string")
.name("string")
.description("string")
.validation(ParameterValidationArgs.builder()
.regex(RegexValidationArgs.builder()
.regexes("string")
.build())
.values(ValueValidationArgs.builder()
.values("string")
.build())
.build())
.build())
.project("string")
.version(0)
.build());
google_native_workflow_template_resource = google_native.dataproc.v1beta2.WorkflowTemplate("google-nativeWorkflowTemplateResource",
id="string",
jobs=[google_native.dataproc.v1beta2.OrderedJobArgs(
step_id="string",
hadoop_job=google_native.dataproc.v1beta2.HadoopJobArgs(
archive_uris=["string"],
args=["string"],
file_uris=["string"],
jar_file_uris=["string"],
logging_config=google_native.dataproc.v1beta2.LoggingConfigArgs(
driver_log_levels={
"string": "string",
},
),
main_class="string",
main_jar_file_uri="string",
properties={
"string": "string",
},
),
hive_job=google_native.dataproc.v1beta2.HiveJobArgs(
continue_on_failure=False,
jar_file_uris=["string"],
properties={
"string": "string",
},
query_file_uri="string",
query_list=google_native.dataproc.v1beta2.QueryListArgs(
queries=["string"],
),
script_variables={
"string": "string",
},
),
labels={
"string": "string",
},
pig_job=google_native.dataproc.v1beta2.PigJobArgs(
continue_on_failure=False,
jar_file_uris=["string"],
logging_config=google_native.dataproc.v1beta2.LoggingConfigArgs(
driver_log_levels={
"string": "string",
},
),
properties={
"string": "string",
},
query_file_uri="string",
query_list=google_native.dataproc.v1beta2.QueryListArgs(
queries=["string"],
),
script_variables={
"string": "string",
},
),
prerequisite_step_ids=["string"],
presto_job=google_native.dataproc.v1beta2.PrestoJobArgs(
client_tags=["string"],
continue_on_failure=False,
logging_config=google_native.dataproc.v1beta2.LoggingConfigArgs(
driver_log_levels={
"string": "string",
},
),
output_format="string",
properties={
"string": "string",
},
query_file_uri="string",
query_list=google_native.dataproc.v1beta2.QueryListArgs(
queries=["string"],
),
),
pyspark_job=google_native.dataproc.v1beta2.PySparkJobArgs(
main_python_file_uri="string",
archive_uris=["string"],
args=["string"],
file_uris=["string"],
jar_file_uris=["string"],
logging_config=google_native.dataproc.v1beta2.LoggingConfigArgs(
driver_log_levels={
"string": "string",
},
),
properties={
"string": "string",
},
python_file_uris=["string"],
),
scheduling=google_native.dataproc.v1beta2.JobSchedulingArgs(
max_failures_per_hour=0,
max_failures_total=0,
),
spark_job=google_native.dataproc.v1beta2.SparkJobArgs(
archive_uris=["string"],
args=["string"],
file_uris=["string"],
jar_file_uris=["string"],
logging_config=google_native.dataproc.v1beta2.LoggingConfigArgs(
driver_log_levels={
"string": "string",
},
),
main_class="string",
main_jar_file_uri="string",
properties={
"string": "string",
},
),
spark_r_job=google_native.dataproc.v1beta2.SparkRJobArgs(
main_r_file_uri="string",
archive_uris=["string"],
args=["string"],
file_uris=["string"],
logging_config=google_native.dataproc.v1beta2.LoggingConfigArgs(
driver_log_levels={
"string": "string",
},
),
properties={
"string": "string",
},
),
spark_sql_job=google_native.dataproc.v1beta2.SparkSqlJobArgs(
jar_file_uris=["string"],
logging_config=google_native.dataproc.v1beta2.LoggingConfigArgs(
driver_log_levels={
"string": "string",
},
),
properties={
"string": "string",
},
query_file_uri="string",
query_list=google_native.dataproc.v1beta2.QueryListArgs(
queries=["string"],
),
script_variables={
"string": "string",
},
),
)],
placement=google_native.dataproc.v1beta2.WorkflowTemplatePlacementArgs(
cluster_selector=google_native.dataproc.v1beta2.ClusterSelectorArgs(
cluster_labels={
"string": "string",
},
zone="string",
),
managed_cluster=google_native.dataproc.v1beta2.ManagedClusterArgs(
cluster_name="string",
config=google_native.dataproc.v1beta2.ClusterConfigArgs(
autoscaling_config=google_native.dataproc.v1beta2.AutoscalingConfigArgs(
policy_uri="string",
),
config_bucket="string",
encryption_config=google_native.dataproc.v1beta2.EncryptionConfigArgs(
gce_pd_kms_key_name="string",
),
endpoint_config=google_native.dataproc.v1beta2.EndpointConfigArgs(
enable_http_port_access=False,
),
gce_cluster_config=google_native.dataproc.v1beta2.GceClusterConfigArgs(
internal_ip_only=False,
metadata={
"string": "string",
},
network_uri="string",
node_group_affinity=google_native.dataproc.v1beta2.NodeGroupAffinityArgs(
node_group_uri="string",
),
private_ipv6_google_access=google_native.dataproc.v1beta2.GceClusterConfigPrivateIpv6GoogleAccess.PRIVATE_IPV6_GOOGLE_ACCESS_UNSPECIFIED,
reservation_affinity=google_native.dataproc.v1beta2.ReservationAffinityArgs(
consume_reservation_type=google_native.dataproc.v1beta2.ReservationAffinityConsumeReservationType.TYPE_UNSPECIFIED,
key="string",
values=["string"],
),
service_account="string",
service_account_scopes=["string"],
shielded_instance_config=google_native.dataproc.v1beta2.ShieldedInstanceConfigArgs(
enable_integrity_monitoring=False,
enable_secure_boot=False,
enable_vtpm=False,
),
subnetwork_uri="string",
tags=["string"],
zone_uri="string",
),
gke_cluster_config=google_native.dataproc.v1beta2.GkeClusterConfigArgs(
namespaced_gke_deployment_target=google_native.dataproc.v1beta2.NamespacedGkeDeploymentTargetArgs(
cluster_namespace="string",
target_gke_cluster="string",
),
),
initialization_actions=[google_native.dataproc.v1beta2.NodeInitializationActionArgs(
executable_file="string",
execution_timeout="string",
)],
lifecycle_config=google_native.dataproc.v1beta2.LifecycleConfigArgs(
auto_delete_time="string",
auto_delete_ttl="string",
idle_delete_ttl="string",
),
master_config=google_native.dataproc.v1beta2.InstanceGroupConfigArgs(
accelerators=[google_native.dataproc.v1beta2.AcceleratorConfigArgs(
accelerator_count=0,
accelerator_type_uri="string",
)],
disk_config=google_native.dataproc.v1beta2.DiskConfigArgs(
boot_disk_size_gb=0,
boot_disk_type="string",
num_local_ssds=0,
),
image_uri="string",
machine_type_uri="string",
min_cpu_platform="string",
num_instances=0,
preemptibility=google_native.dataproc.v1beta2.InstanceGroupConfigPreemptibility.PREEMPTIBILITY_UNSPECIFIED,
),
metastore_config=google_native.dataproc.v1beta2.MetastoreConfigArgs(
dataproc_metastore_service="string",
),
secondary_worker_config=google_native.dataproc.v1beta2.InstanceGroupConfigArgs(
accelerators=[google_native.dataproc.v1beta2.AcceleratorConfigArgs(
accelerator_count=0,
accelerator_type_uri="string",
)],
disk_config=google_native.dataproc.v1beta2.DiskConfigArgs(
boot_disk_size_gb=0,
boot_disk_type="string",
num_local_ssds=0,
),
image_uri="string",
machine_type_uri="string",
min_cpu_platform="string",
num_instances=0,
preemptibility=google_native.dataproc.v1beta2.InstanceGroupConfigPreemptibility.PREEMPTIBILITY_UNSPECIFIED,
),
security_config=google_native.dataproc.v1beta2.SecurityConfigArgs(
kerberos_config=google_native.dataproc.v1beta2.KerberosConfigArgs(
cross_realm_trust_admin_server="string",
cross_realm_trust_kdc="string",
cross_realm_trust_realm="string",
cross_realm_trust_shared_password_uri="string",
enable_kerberos=False,
kdc_db_key_uri="string",
key_password_uri="string",
keystore_password_uri="string",
keystore_uri="string",
kms_key_uri="string",
realm="string",
root_principal_password_uri="string",
tgt_lifetime_hours=0,
truststore_password_uri="string",
truststore_uri="string",
),
),
software_config=google_native.dataproc.v1beta2.SoftwareConfigArgs(
image_version="string",
optional_components=[google_native.dataproc.v1beta2.SoftwareConfigOptionalComponentsItem.COMPONENT_UNSPECIFIED],
properties={
"string": "string",
},
),
temp_bucket="string",
worker_config=google_native.dataproc.v1beta2.InstanceGroupConfigArgs(
accelerators=[google_native.dataproc.v1beta2.AcceleratorConfigArgs(
accelerator_count=0,
accelerator_type_uri="string",
)],
disk_config=google_native.dataproc.v1beta2.DiskConfigArgs(
boot_disk_size_gb=0,
boot_disk_type="string",
num_local_ssds=0,
),
image_uri="string",
machine_type_uri="string",
min_cpu_platform="string",
num_instances=0,
preemptibility=google_native.dataproc.v1beta2.InstanceGroupConfigPreemptibility.PREEMPTIBILITY_UNSPECIFIED,
),
),
labels={
"string": "string",
},
),
),
dag_timeout="string",
labels={
"string": "string",
},
location="string",
parameters=[google_native.dataproc.v1beta2.TemplateParameterArgs(
fields=["string"],
name="string",
description="string",
validation=google_native.dataproc.v1beta2.ParameterValidationArgs(
regex=google_native.dataproc.v1beta2.RegexValidationArgs(
regexes=["string"],
),
values=google_native.dataproc.v1beta2.ValueValidationArgs(
values=["string"],
),
),
)],
project="string",
version=0)
const google_nativeWorkflowTemplateResource = new google_native.dataproc.v1beta2.WorkflowTemplate("google-nativeWorkflowTemplateResource", {
id: "string",
jobs: [{
stepId: "string",
hadoopJob: {
archiveUris: ["string"],
args: ["string"],
fileUris: ["string"],
jarFileUris: ["string"],
loggingConfig: {
driverLogLevels: {
string: "string",
},
},
mainClass: "string",
mainJarFileUri: "string",
properties: {
string: "string",
},
},
hiveJob: {
continueOnFailure: false,
jarFileUris: ["string"],
properties: {
string: "string",
},
queryFileUri: "string",
queryList: {
queries: ["string"],
},
scriptVariables: {
string: "string",
},
},
labels: {
string: "string",
},
pigJob: {
continueOnFailure: false,
jarFileUris: ["string"],
loggingConfig: {
driverLogLevels: {
string: "string",
},
},
properties: {
string: "string",
},
queryFileUri: "string",
queryList: {
queries: ["string"],
},
scriptVariables: {
string: "string",
},
},
prerequisiteStepIds: ["string"],
prestoJob: {
clientTags: ["string"],
continueOnFailure: false,
loggingConfig: {
driverLogLevels: {
string: "string",
},
},
outputFormat: "string",
properties: {
string: "string",
},
queryFileUri: "string",
queryList: {
queries: ["string"],
},
},
pysparkJob: {
mainPythonFileUri: "string",
archiveUris: ["string"],
args: ["string"],
fileUris: ["string"],
jarFileUris: ["string"],
loggingConfig: {
driverLogLevels: {
string: "string",
},
},
properties: {
string: "string",
},
pythonFileUris: ["string"],
},
scheduling: {
maxFailuresPerHour: 0,
maxFailuresTotal: 0,
},
sparkJob: {
archiveUris: ["string"],
args: ["string"],
fileUris: ["string"],
jarFileUris: ["string"],
loggingConfig: {
driverLogLevels: {
string: "string",
},
},
mainClass: "string",
mainJarFileUri: "string",
properties: {
string: "string",
},
},
sparkRJob: {
mainRFileUri: "string",
archiveUris: ["string"],
args: ["string"],
fileUris: ["string"],
loggingConfig: {
driverLogLevels: {
string: "string",
},
},
properties: {
string: "string",
},
},
sparkSqlJob: {
jarFileUris: ["string"],
loggingConfig: {
driverLogLevels: {
string: "string",
},
},
properties: {
string: "string",
},
queryFileUri: "string",
queryList: {
queries: ["string"],
},
scriptVariables: {
string: "string",
},
},
}],
placement: {
clusterSelector: {
clusterLabels: {
string: "string",
},
zone: "string",
},
managedCluster: {
clusterName: "string",
config: {
autoscalingConfig: {
policyUri: "string",
},
configBucket: "string",
encryptionConfig: {
gcePdKmsKeyName: "string",
},
endpointConfig: {
enableHttpPortAccess: false,
},
gceClusterConfig: {
internalIpOnly: false,
metadata: {
string: "string",
},
networkUri: "string",
nodeGroupAffinity: {
nodeGroupUri: "string",
},
privateIpv6GoogleAccess: google_native.dataproc.v1beta2.GceClusterConfigPrivateIpv6GoogleAccess.PrivateIpv6GoogleAccessUnspecified,
reservationAffinity: {
consumeReservationType: google_native.dataproc.v1beta2.ReservationAffinityConsumeReservationType.TypeUnspecified,
key: "string",
values: ["string"],
},
serviceAccount: "string",
serviceAccountScopes: ["string"],
shieldedInstanceConfig: {
enableIntegrityMonitoring: false,
enableSecureBoot: false,
enableVtpm: false,
},
subnetworkUri: "string",
tags: ["string"],
zoneUri: "string",
},
gkeClusterConfig: {
namespacedGkeDeploymentTarget: {
clusterNamespace: "string",
targetGkeCluster: "string",
},
},
initializationActions: [{
executableFile: "string",
executionTimeout: "string",
}],
lifecycleConfig: {
autoDeleteTime: "string",
autoDeleteTtl: "string",
idleDeleteTtl: "string",
},
masterConfig: {
accelerators: [{
acceleratorCount: 0,
acceleratorTypeUri: "string",
}],
diskConfig: {
bootDiskSizeGb: 0,
bootDiskType: "string",
numLocalSsds: 0,
},
imageUri: "string",
machineTypeUri: "string",
minCpuPlatform: "string",
numInstances: 0,
preemptibility: google_native.dataproc.v1beta2.InstanceGroupConfigPreemptibility.PreemptibilityUnspecified,
},
metastoreConfig: {
dataprocMetastoreService: "string",
},
secondaryWorkerConfig: {
accelerators: [{
acceleratorCount: 0,
acceleratorTypeUri: "string",
}],
diskConfig: {
bootDiskSizeGb: 0,
bootDiskType: "string",
numLocalSsds: 0,
},
imageUri: "string",
machineTypeUri: "string",
minCpuPlatform: "string",
numInstances: 0,
preemptibility: google_native.dataproc.v1beta2.InstanceGroupConfigPreemptibility.PreemptibilityUnspecified,
},
securityConfig: {
kerberosConfig: {
crossRealmTrustAdminServer: "string",
crossRealmTrustKdc: "string",
crossRealmTrustRealm: "string",
crossRealmTrustSharedPasswordUri: "string",
enableKerberos: false,
kdcDbKeyUri: "string",
keyPasswordUri: "string",
keystorePasswordUri: "string",
keystoreUri: "string",
kmsKeyUri: "string",
realm: "string",
rootPrincipalPasswordUri: "string",
tgtLifetimeHours: 0,
truststorePasswordUri: "string",
truststoreUri: "string",
},
},
softwareConfig: {
imageVersion: "string",
optionalComponents: [google_native.dataproc.v1beta2.SoftwareConfigOptionalComponentsItem.ComponentUnspecified],
properties: {
string: "string",
},
},
tempBucket: "string",
workerConfig: {
accelerators: [{
acceleratorCount: 0,
acceleratorTypeUri: "string",
}],
diskConfig: {
bootDiskSizeGb: 0,
bootDiskType: "string",
numLocalSsds: 0,
},
imageUri: "string",
machineTypeUri: "string",
minCpuPlatform: "string",
numInstances: 0,
preemptibility: google_native.dataproc.v1beta2.InstanceGroupConfigPreemptibility.PreemptibilityUnspecified,
},
},
labels: {
string: "string",
},
},
},
dagTimeout: "string",
labels: {
string: "string",
},
location: "string",
parameters: [{
fields: ["string"],
name: "string",
description: "string",
validation: {
regex: {
regexes: ["string"],
},
values: {
values: ["string"],
},
},
}],
project: "string",
version: 0,
});
type: google-native:dataproc/v1beta2:WorkflowTemplate
properties:
dagTimeout: string
id: string
jobs:
- hadoopJob:
archiveUris:
- string
args:
- string
fileUris:
- string
jarFileUris:
- string
loggingConfig:
driverLogLevels:
string: string
mainClass: string
mainJarFileUri: string
properties:
string: string
hiveJob:
continueOnFailure: false
jarFileUris:
- string
properties:
string: string
queryFileUri: string
queryList:
queries:
- string
scriptVariables:
string: string
labels:
string: string
pigJob:
continueOnFailure: false
jarFileUris:
- string
loggingConfig:
driverLogLevels:
string: string
properties:
string: string
queryFileUri: string
queryList:
queries:
- string
scriptVariables:
string: string
prerequisiteStepIds:
- string
prestoJob:
clientTags:
- string
continueOnFailure: false
loggingConfig:
driverLogLevels:
string: string
outputFormat: string
properties:
string: string
queryFileUri: string
queryList:
queries:
- string
pysparkJob:
archiveUris:
- string
args:
- string
fileUris:
- string
jarFileUris:
- string
loggingConfig:
driverLogLevels:
string: string
mainPythonFileUri: string
properties:
string: string
pythonFileUris:
- string
scheduling:
maxFailuresPerHour: 0
maxFailuresTotal: 0
sparkJob:
archiveUris:
- string
args:
- string
fileUris:
- string
jarFileUris:
- string
loggingConfig:
driverLogLevels:
string: string
mainClass: string
mainJarFileUri: string
properties:
string: string
sparkRJob:
archiveUris:
- string
args:
- string
fileUris:
- string
loggingConfig:
driverLogLevels:
string: string
mainRFileUri: string
properties:
string: string
sparkSqlJob:
jarFileUris:
- string
loggingConfig:
driverLogLevels:
string: string
properties:
string: string
queryFileUri: string
queryList:
queries:
- string
scriptVariables:
string: string
stepId: string
labels:
string: string
location: string
parameters:
- description: string
fields:
- string
name: string
validation:
regex:
regexes:
- string
values:
values:
- string
placement:
clusterSelector:
clusterLabels:
string: string
zone: string
managedCluster:
clusterName: string
config:
autoscalingConfig:
policyUri: string
configBucket: string
encryptionConfig:
gcePdKmsKeyName: string
endpointConfig:
enableHttpPortAccess: false
gceClusterConfig:
internalIpOnly: false
metadata:
string: string
networkUri: string
nodeGroupAffinity:
nodeGroupUri: string
privateIpv6GoogleAccess: PRIVATE_IPV6_GOOGLE_ACCESS_UNSPECIFIED
reservationAffinity:
consumeReservationType: TYPE_UNSPECIFIED
key: string
values:
- string
serviceAccount: string
serviceAccountScopes:
- string
shieldedInstanceConfig:
enableIntegrityMonitoring: false
enableSecureBoot: false
enableVtpm: false
subnetworkUri: string
tags:
- string
zoneUri: string
gkeClusterConfig:
namespacedGkeDeploymentTarget:
clusterNamespace: string
targetGkeCluster: string
initializationActions:
- executableFile: string
executionTimeout: string
lifecycleConfig:
autoDeleteTime: string
autoDeleteTtl: string
idleDeleteTtl: string
masterConfig:
accelerators:
- acceleratorCount: 0
acceleratorTypeUri: string
diskConfig:
bootDiskSizeGb: 0
bootDiskType: string
numLocalSsds: 0
imageUri: string
machineTypeUri: string
minCpuPlatform: string
numInstances: 0
preemptibility: PREEMPTIBILITY_UNSPECIFIED
metastoreConfig:
dataprocMetastoreService: string
secondaryWorkerConfig:
accelerators:
- acceleratorCount: 0
acceleratorTypeUri: string
diskConfig:
bootDiskSizeGb: 0
bootDiskType: string
numLocalSsds: 0
imageUri: string
machineTypeUri: string
minCpuPlatform: string
numInstances: 0
preemptibility: PREEMPTIBILITY_UNSPECIFIED
securityConfig:
kerberosConfig:
crossRealmTrustAdminServer: string
crossRealmTrustKdc: string
crossRealmTrustRealm: string
crossRealmTrustSharedPasswordUri: string
enableKerberos: false
kdcDbKeyUri: string
keyPasswordUri: string
keystorePasswordUri: string
keystoreUri: string
kmsKeyUri: string
realm: string
rootPrincipalPasswordUri: string
tgtLifetimeHours: 0
truststorePasswordUri: string
truststoreUri: string
softwareConfig:
imageVersion: string
optionalComponents:
- COMPONENT_UNSPECIFIED
properties:
string: string
tempBucket: string
workerConfig:
accelerators:
- acceleratorCount: 0
acceleratorTypeUri: string
diskConfig:
bootDiskSizeGb: 0
bootDiskType: string
numLocalSsds: 0
imageUri: string
machineTypeUri: string
minCpuPlatform: string
numInstances: 0
preemptibility: PREEMPTIBILITY_UNSPECIFIED
labels:
string: string
project: string
version: 0
WorkflowTemplate Resource Properties
To learn more about resource properties and how to use them, see Inputs and Outputs in the Architecture and Concepts docs.
Inputs
The WorkflowTemplate resource accepts the following input properties:
- Id string
- The template id. The id must contain only letters (a-z, A-Z), numbers (0-9), underscores (_), and hyphens (-). Cannot begin or end with underscore or hyphen. Must consist of between 3 and 50 characters.
- Jobs List<Pulumi.GoogleNative.Dataproc.V1Beta2.Inputs.OrderedJob>
- The Directed Acyclic Graph of Jobs to submit.
- Placement Pulumi.GoogleNative.Dataproc.V1Beta2.Inputs.WorkflowTemplatePlacement
- WorkflowTemplate scheduling information.
- DagTimeout string
- Optional. Timeout duration for the DAG of jobs, expressed in seconds (see JSON representation of duration (https://developers.google.com/protocol-buffers/docs/proto3#json)). The timeout duration must be from 10 minutes ("600s") to 24 hours ("86400s"). The timer begins when the first job is submitted. If the workflow is running at the end of the timeout period, any remaining jobs are cancelled, the workflow is ended, and if the workflow was running on a managed cluster, the cluster is deleted.
- Labels Dictionary<string, string>
- Optional. The labels to associate with this template. These labels will be propagated to all jobs and clusters created by the workflow instance. Label keys must contain 1 to 63 characters, and must conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). Label values may be empty, but, if present, must contain 1 to 63 characters, and must conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). No more than 32 labels can be associated with a template.
- Location string
- Parameters List<Pulumi.GoogleNative.Dataproc.V1Beta2.Inputs.TemplateParameter>
- Optional. Template parameters whose values are substituted into the template. Values for parameters must be provided when the template is instantiated (a short usage sketch follows this list).
- Project string
- Version int
- Optional. Used to perform a consistent read-modify-write. This field should be left blank for a CreateWorkflowTemplate request. It is required for an UpdateWorkflowTemplate request, and must match the current server version. A typical update template flow would fetch the current template with a GetWorkflowTemplate request, which will return the current template with the version field filled in with the current server version. The user updates other fields in the template, then returns it as part of the UpdateWorkflowTemplate request.
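As a sketch of how the parameters input ties into the rest of the template (continuing in TypeScript; the field path, regex, labels, and URIs are illustrative assumptions, not values from this reference), a parameter can expose a template field, here the cluster selector's zone, for substitution when the template is instantiated:

import * as google_native from "@pulumi/google-native";

// Illustrative only: parameterizes the zone used by the cluster selector.
const parameterized = new google_native.dataproc.v1beta2.WorkflowTemplate("parameterized", {
    id: "parameterized-report",
    project: "my-project",                          // placeholder
    location: "us-central1",
    placement: {
        clusterSelector: {
            clusterLabels: { env: "staging" },      // selects an existing cluster by label
            zone: "us-central1-a",                  // default; replaced at instantiation via ZONE
        },
    },
    jobs: [{
        stepId: "report",
        hiveJob: { queryFileUri: "gs://my-bucket/queries/report.hql" }, // placeholder
    }],
    parameters: [{
        name: "ZONE",
        description: "Zone for the selected cluster",
        fields: ["placement.clusterSelector.zone"], // dotted path to the templated field
        validation: {
            regex: { regexes: ["us-central1-[a-f]"] },
        },
    }],
});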
- Id string
- The template id. The id must contain only letters (a-z, A-Z), numbers (0-9), underscores (_), and hyphens (-). Cannot begin or end with underscore or hyphen. Must consist of between 3 and 50 characters.
- Jobs []OrderedJobArgs
- The Directed Acyclic Graph of Jobs to submit.
- Placement WorkflowTemplatePlacementArgs
- WorkflowTemplate scheduling information.
- DagTimeout string
- Optional. Timeout duration for the DAG of jobs, expressed in seconds (see JSON representation of duration (https://developers.google.com/protocol-buffers/docs/proto3#json)). The timeout duration must be from 10 minutes ("600s") to 24 hours ("86400s"). The timer begins when the first job is submitted. If the workflow is running at the end of the timeout period, any remaining jobs are cancelled, the workflow is ended, and if the workflow was running on a managed cluster, the cluster is deleted.
- Labels map[string]string
- Optional. The labels to associate with this template. These labels will be propagated to all jobs and clusters created by the workflow instance. Label keys must contain 1 to 63 characters, and must conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). Label values may be empty, but, if present, must contain 1 to 63 characters, and must conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). No more than 32 labels can be associated with a template.
- Location string
- Parameters []TemplateParameterArgs
- Optional. Template parameters whose values are substituted into the template. Values for parameters must be provided when the template is instantiated.
- Project string
- Version int
- Optional. Used to perform a consistent read-modify-write. This field should be left blank for a CreateWorkflowTemplate request. It is required for an UpdateWorkflowTemplate request, and must match the current server version. A typical update template flow would fetch the current template with a GetWorkflowTemplate request, which will return the current template with the version field filled in with the current server version. The user updates other fields in the template, then returns it as part of the UpdateWorkflowTemplate request.
- id String
- The template id. The id must contain only letters (a-z, A-Z), numbers (0-9), underscores (_), and hyphens (-). Cannot begin or end with underscore or hyphen. Must consist of between 3 and 50 characters.
- jobs List<OrderedJob>
- The Directed Acyclic Graph of Jobs to submit.
- placement WorkflowTemplatePlacement
- WorkflowTemplate scheduling information.
- dagTimeout String
- Optional. Timeout duration for the DAG of jobs, expressed in seconds (see JSON representation of duration (https://developers.google.com/protocol-buffers/docs/proto3#json)). The timeout duration must be from 10 minutes ("600s") to 24 hours ("86400s"). The timer begins when the first job is submitted. If the workflow is running at the end of the timeout period, any remaining jobs are cancelled, the workflow is ended, and if the workflow was running on a managed cluster, the cluster is deleted.
- labels Map<String,String>
- Optional. The labels to associate with this template. These labels will be propagated to all jobs and clusters created by the workflow instance. Label keys must contain 1 to 63 characters, and must conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). Label values may be empty, but, if present, must contain 1 to 63 characters, and must conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). No more than 32 labels can be associated with a template.
- location String
- parameters List<TemplateParameter>
- Optional. Template parameters whose values are substituted into the template. Values for parameters must be provided when the template is instantiated.
- project String
- version Integer
- Optional. Used to perform a consistent read-modify-write. This field should be left blank for a CreateWorkflowTemplate request. It is required for an UpdateWorkflowTemplate request, and must match the current server version. A typical update template flow would fetch the current template with a GetWorkflowTemplate request, which will return the current template with the version field filled in with the current server version. The user updates other fields in the template, then returns it as part of the UpdateWorkflowTemplate request.
- id string
- The template id. The id must contain only letters (a-z, A-Z), numbers (0-9), underscores (_), and hyphens (-). Cannot begin or end with underscore or hyphen. Must consist of between 3 and 50 characters.
- jobs OrderedJob[]
- The Directed Acyclic Graph of Jobs to submit.
- placement WorkflowTemplatePlacement
- WorkflowTemplate scheduling information.
- dagTimeout string
- Optional. Timeout duration for the DAG of jobs, expressed in seconds (see JSON representation of duration (https://developers.google.com/protocol-buffers/docs/proto3#json)). The timeout duration must be from 10 minutes ("600s") to 24 hours ("86400s"). The timer begins when the first job is submitted. If the workflow is running at the end of the timeout period, any remaining jobs are cancelled, the workflow is ended, and if the workflow was running on a managed cluster, the cluster is deleted.
- labels {[key: string]: string}
- Optional. The labels to associate with this template. These labels will be propagated to all jobs and clusters created by the workflow instance. Label keys must contain 1 to 63 characters, and must conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). Label values may be empty, but, if present, must contain 1 to 63 characters, and must conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). No more than 32 labels can be associated with a template.
- location string
- parameters TemplateParameter[]
- Optional. Template parameters whose values are substituted into the template. Values for parameters must be provided when the template is instantiated.
- project string
- version number
- Optional. Used to perform a consistent read-modify-write. This field should be left blank for a CreateWorkflowTemplate request. It is required for an UpdateWorkflowTemplate request, and must match the current server version. A typical update template flow would fetch the current template with a GetWorkflowTemplate request, which will return the current template with the version field filled in with the current server version. The user updates other fields in the template, then returns it as part of the UpdateWorkflowTemplate request.
- id str
- The template id. The id must contain only letters (a-z, A-Z), numbers (0-9), underscores (_), and hyphens (-). Cannot begin or end with underscore or hyphen. Must consist of between 3 and 50 characters.
- jobs Sequence[OrderedJobArgs]
- The Directed Acyclic Graph of Jobs to submit.
- placement WorkflowTemplatePlacementArgs
- WorkflowTemplate scheduling information.
- dag_timeout str
- Optional. Timeout duration for the DAG of jobs, expressed in seconds (see JSON representation of duration (https://developers.google.com/protocol-buffers/docs/proto3#json)). The timeout duration must be from 10 minutes ("600s") to 24 hours ("86400s"). The timer begins when the first job is submitted. If the workflow is running at the end of the timeout period, any remaining jobs are cancelled, the workflow is ended, and if the workflow was running on a managed cluster, the cluster is deleted.
- labels Mapping[str, str]
- Optional. The labels to associate with this template. These labels will be propagated to all jobs and clusters created by the workflow instance.Label keys must contain 1 to 63 characters, and must conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt).Label values may be empty, but, if present, must contain 1 to 63 characters, and must conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt).No more than 32 labels can be associated with a template.
- location str
- parameters Sequence[TemplateParameterArgs]
- Optional. Template parameters whose values are substituted into the template. Values for parameters must be provided when the template is instantiated.
- project str
- version int
- Optional. Used to perform a consistent read-modify-write.This field should be left blank for a CreateWorkflowTemplate request. It is required for an UpdateWorkflowTemplate request, and must match the current server version. A typical update template flow would fetch the current template with a GetWorkflowTemplate request, which will return the current template with the version field filled in with the current server version. The user updates other fields in the template, then returns it as part of the UpdateWorkflowTemplate request.
- id String
- The template id. The id must contain only letters (a-z, A-Z), numbers (0-9), underscores (_), and hyphens (-). Cannot begin or end with underscore or hyphen. Must consist of between 3 and 50 characters.
- jobs List<Property Map>
- The Directed Acyclic Graph of Jobs to submit.
- placement Property Map
- WorkflowTemplate scheduling information.
- dagTimeout String
- Optional. Timeout duration for the DAG of jobs, expressed in seconds (see JSON representation of duration (https://developers.google.com/protocol-buffers/docs/proto3#json)). The timeout duration must be from 10 minutes ("600s") to 24 hours ("86400s"). The timer begins when the first job is submitted. If the workflow is running at the end of the timeout period, any remaining jobs are cancelled, the workflow is ended, and if the workflow was running on a managed cluster, the cluster is deleted.
- labels Map<String>
- Optional. The labels to associate with this template. These labels will be propagated to all jobs and clusters created by the workflow instance.Label keys must contain 1 to 63 characters, and must conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt).Label values may be empty, but, if present, must contain 1 to 63 characters, and must conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt).No more than 32 labels can be associated with a template.
- location String
- parameters List<Property Map>
- Optional. Template parameters whose values are substituted into the template. Values for parameters must be provided when the template is instantiated.
- project String
- version Number
- Optional. Used to perform a consistent read-modify-write.This field should be left blank for a CreateWorkflowTemplate request. It is required for an UpdateWorkflowTemplate request, and must match the current server version. A typical update template flow would fetch the current template with a GetWorkflowTemplate request, which will return the current template with the version field filled in with the current server version. The user updates other fields in the template, then returns it as part of the UpdateWorkflowTemplate request.
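For orientation, here is a minimal Python sketch of these inputs. It assumes the pulumi_google_native Python SDK exposes the argument classes named in the Python column above (OrderedJobArgs, HadoopJobArgs, TemplateParameterArgs, WorkflowTemplatePlacementArgs, ClusterSelectorArgs); every id, label, URI, and parameter field below is a placeholder, not a value taken from this reference.
import pulumi_google_native as google_native

# Hypothetical template: one Hadoop step, a 30-minute DAG timeout, one
# template parameter, and placement on an existing cluster selected by label.
template = google_native.dataproc.v1beta2.WorkflowTemplate(
    "example-template",
    id="example-template",   # 3-50 characters; letters, digits, _ and - only
    location="us-central1",
    dag_timeout="1800s",     # must be between "600s" and "86400s"
    labels={"env": "dev"},   # at most 32 labels per template
    jobs=[
        google_native.dataproc.v1beta2.OrderedJobArgs(
            step_id="teragen",
            hadoop_job=google_native.dataproc.v1beta2.HadoopJobArgs(
                main_jar_file_uri="file:///usr/lib/hadoop-mapreduce/hadoop-mapreduce-examples.jar",
                args=["teragen", "1000", "hdfs:///gen/"],
            ),
        ),
    ],
    parameters=[
        google_native.dataproc.v1beta2.TemplateParameterArgs(
            name="ROW_COUNT",
            fields=["jobs['teragen'].hadoopJob.args[1]"],
        ),
    ],
    placement=google_native.dataproc.v1beta2.WorkflowTemplatePlacementArgs(
        cluster_selector=google_native.dataproc.v1beta2.ClusterSelectorArgs(
            cluster_labels={"env": "dev"},
        ),
    ),
)
The ROW_COUNT parameter is supplied at instantiation time and substituted into the referenced job argument, which is why parameter values are required when the template is instantiated.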
Outputs
All input properties are implicitly available as output properties. Additionally, the WorkflowTemplate resource produces the following output properties:
- CreateTime string
- The time template was created.
- Id string
- The provider-assigned unique ID for this managed resource.
- Name string
- The resource name of the workflow template, as described in https://cloud.google.com/apis/design/resource_names. For projects.regions.workflowTemplates, the resource name of the template has the following format: projects/{project_id}/regions/{region}/workflowTemplates/{template_id} For projects.locations.workflowTemplates, the resource name of the template has the following format: projects/{project_id}/locations/{location}/workflowTemplates/{template_id}
- UpdateTime string
- The time template was last updated.
- CreateTime string
- The time template was created.
- Id string
- The provider-assigned unique ID for this managed resource.
- Name string
- The resource name of the workflow template, as described in https://cloud.google.com/apis/design/resource_names. For projects.regions.workflowTemplates, the resource name of the template has the following format: projects/{project_id}/regions/{region}/workflowTemplates/{template_id} For projects.locations.workflowTemplates, the resource name of the template has the following format: projects/{project_id}/locations/{location}/workflowTemplates/{template_id}
- UpdateTime string
- The time template was last updated.
- createTime String
- The time template was created.
- id String
- The provider-assigned unique ID for this managed resource.
- name String
- The resource name of the workflow template, as described in https://cloud.google.com/apis/design/resource_names. For projects.regions.workflowTemplates, the resource name of the template has the following format: projects/{project_id}/regions/{region}/workflowTemplates/{template_id} For projects.locations.workflowTemplates, the resource name of the template has the following format: projects/{project_id}/locations/{location}/workflowTemplates/{template_id}
- updateTime String
- The time template was last updated.
- createTime string
- The time template was created.
- id string
- The provider-assigned unique ID for this managed resource.
- name string
- The resource name of the workflow template, as described in https://cloud.google.com/apis/design/resource_names. For projects.regions.workflowTemplates, the resource name of the template has the following format: projects/{project_id}/regions/{region}/workflowTemplates/{template_id} For projects.locations.workflowTemplates, the resource name of the template has the following format: projects/{project_id}/locations/{location}/workflowTemplates/{template_id}
- updateTime string
- The time template was last updated.
- create_time str
- The time template was created.
- id str
- The provider-assigned unique ID for this managed resource.
- name str
- The resource name of the workflow template, as described in https://cloud.google.com/apis/design/resource_names. For projects.regions.workflowTemplates, the resource name of the template has the following format: projects/{project_id}/regions/{region}/workflowTemplates/{template_id} For projects.locations.workflowTemplates, the resource name of the template has the following format: projects/{project_id}/locations/{location}/workflowTemplates/{template_id}
- update_time str
- The time template was last updated.
- createTime String
- The time template was created.
- id String
- The provider-assigned unique ID for this managed resource.
- name String
- The resource name of the workflow template, as described in https://cloud.google.com/apis/design/resource_names. For projects.regions.workflowTemplates, the resource name of the template has the following format: projects/{project_id}/regions/{region}/workflowTemplates/{template_id} For projects.locations.workflowTemplates, the resource name of the template has the following format: projects/{project_id}/locations/{location}/workflowTemplates/{template_id}
- updateTime String
- The time template was last updated.
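These output properties can be read from the resource like any other Pulumi outputs; a short sketch continuing the hypothetical template above:
import pulumi

# Export the server-assigned resource name and timestamps of the template defined above.
pulumi.export("template_name", template.name)        # projects/.../regions/.../workflowTemplates/...
pulumi.export("template_created", template.create_time)
pulumi.export("template_updated", template.update_time)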
Supporting Types
AcceleratorConfig, AcceleratorConfigArgs
- AcceleratorCount int
- The number of the accelerator cards of this type exposed to this instance.
- AcceleratorTypeUri string
- Full URL, partial URI, or short name of the accelerator type resource to expose to this instance. See Compute Engine AcceleratorTypes (https://cloud.google.com/compute/docs/reference/beta/acceleratorTypes). Examples: * https://www.googleapis.com/compute/beta/projects/[project_id]/zones/us-east1-a/acceleratorTypes/nvidia-tesla-k80 * projects/[project_id]/zones/us-east1-a/acceleratorTypes/nvidia-tesla-k80 * nvidia-tesla-k80. Auto Zone Exception: If you are using the Dataproc Auto Zone Placement (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/auto-zone#using_auto_zone_placement) feature, you must use the short name of the accelerator type resource, for example, nvidia-tesla-k80.
- AcceleratorCount int
- The number of the accelerator cards of this type exposed to this instance.
- AcceleratorTypeUri string
- Full URL, partial URI, or short name of the accelerator type resource to expose to this instance. See Compute Engine AcceleratorTypes (https://cloud.google.com/compute/docs/reference/beta/acceleratorTypes). Examples: * https://www.googleapis.com/compute/beta/projects/[project_id]/zones/us-east1-a/acceleratorTypes/nvidia-tesla-k80 * projects/[project_id]/zones/us-east1-a/acceleratorTypes/nvidia-tesla-k80 * nvidia-tesla-k80. Auto Zone Exception: If you are using the Dataproc Auto Zone Placement (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/auto-zone#using_auto_zone_placement) feature, you must use the short name of the accelerator type resource, for example, nvidia-tesla-k80.
- acceleratorCount Integer
- The number of the accelerator cards of this type exposed to this instance.
- acceleratorTypeUri String
- Full URL, partial URI, or short name of the accelerator type resource to expose to this instance. See Compute Engine AcceleratorTypes (https://cloud.google.com/compute/docs/reference/beta/acceleratorTypes). Examples: * https://www.googleapis.com/compute/beta/projects/[project_id]/zones/us-east1-a/acceleratorTypes/nvidia-tesla-k80 * projects/[project_id]/zones/us-east1-a/acceleratorTypes/nvidia-tesla-k80 * nvidia-tesla-k80. Auto Zone Exception: If you are using the Dataproc Auto Zone Placement (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/auto-zone#using_auto_zone_placement) feature, you must use the short name of the accelerator type resource, for example, nvidia-tesla-k80.
- acceleratorCount number
- The number of the accelerator cards of this type exposed to this instance.
- acceleratorTypeUri string
- Full URL, partial URI, or short name of the accelerator type resource to expose to this instance. See Compute Engine AcceleratorTypes (https://cloud.google.com/compute/docs/reference/beta/acceleratorTypes). Examples: * https://www.googleapis.com/compute/beta/projects/[project_id]/zones/us-east1-a/acceleratorTypes/nvidia-tesla-k80 * projects/[project_id]/zones/us-east1-a/acceleratorTypes/nvidia-tesla-k80 * nvidia-tesla-k80. Auto Zone Exception: If you are using the Dataproc Auto Zone Placement (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/auto-zone#using_auto_zone_placement) feature, you must use the short name of the accelerator type resource, for example, nvidia-tesla-k80.
- accelerator_count int
- The number of the accelerator cards of this type exposed to this instance.
- accelerator_type_uri str
- Full URL, partial URI, or short name of the accelerator type resource to expose to this instance. See Compute Engine AcceleratorTypes (https://cloud.google.com/compute/docs/reference/beta/acceleratorTypes). Examples: * https://www.googleapis.com/compute/beta/projects/[project_id]/zones/us-east1-a/acceleratorTypes/nvidia-tesla-k80 * projects/[project_id]/zones/us-east1-a/acceleratorTypes/nvidia-tesla-k80 * nvidia-tesla-k80. Auto Zone Exception: If you are using the Dataproc Auto Zone Placement (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/auto-zone#using_auto_zone_placement) feature, you must use the short name of the accelerator type resource, for example, nvidia-tesla-k80.
- acceleratorCount Number
- The number of the accelerator cards of this type exposed to this instance.
- acceleratorTypeUri String
- Full URL, partial URI, or short name of the accelerator type resource to expose to this instance. See Compute Engine AcceleratorTypes (https://cloud.google.com/compute/docs/reference/beta/acceleratorTypes). Examples: * https://www.googleapis.com/compute/beta/projects/[project_id]/zones/us-east1-a/acceleratorTypes/nvidia-tesla-k80 * projects/[project_id]/zones/us-east1-a/acceleratorTypes/nvidia-tesla-k80 * nvidia-tesla-k80. Auto Zone Exception: If you are using the Dataproc Auto Zone Placement (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/auto-zone#using_auto_zone_placement) feature, you must use the short name of the accelerator type resource, for example, nvidia-tesla-k80.
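A hedged Python sketch of an AcceleratorConfig attached to a worker group, assuming InstanceGroupConfigArgs (referenced by ClusterConfig below) accepts an accelerators list as in the underlying Dataproc API; the machine type and counts are placeholders. The short type name is used, as required when relying on Auto Zone Placement:
import pulumi_google_native as google_native

# Hypothetical two-node worker group with one NVIDIA K80 per node.
worker_config = google_native.dataproc.v1beta2.InstanceGroupConfigArgs(
    num_instances=2,
    machine_type_uri="n1-standard-4",
    accelerators=[
        google_native.dataproc.v1beta2.AcceleratorConfigArgs(
            accelerator_count=1,
            accelerator_type_uri="nvidia-tesla-k80",  # short name, not a full URL
        ),
    ],
)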
AcceleratorConfigResponse, AcceleratorConfigResponseArgs
- AcceleratorCount int
- The number of the accelerator cards of this type exposed to this instance.
- AcceleratorTypeUri string
- Full URL, partial URI, or short name of the accelerator type resource to expose to this instance. See Compute Engine AcceleratorTypes (https://cloud.google.com/compute/docs/reference/beta/acceleratorTypes). Examples: * https://www.googleapis.com/compute/beta/projects/[project_id]/zones/us-east1-a/acceleratorTypes/nvidia-tesla-k80 * projects/[project_id]/zones/us-east1-a/acceleratorTypes/nvidia-tesla-k80 * nvidia-tesla-k80. Auto Zone Exception: If you are using the Dataproc Auto Zone Placement (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/auto-zone#using_auto_zone_placement) feature, you must use the short name of the accelerator type resource, for example, nvidia-tesla-k80.
- AcceleratorCount int
- The number of the accelerator cards of this type exposed to this instance.
- AcceleratorTypeUri string
- Full URL, partial URI, or short name of the accelerator type resource to expose to this instance. See Compute Engine AcceleratorTypes (https://cloud.google.com/compute/docs/reference/beta/acceleratorTypes). Examples: * https://www.googleapis.com/compute/beta/projects/[project_id]/zones/us-east1-a/acceleratorTypes/nvidia-tesla-k80 * projects/[project_id]/zones/us-east1-a/acceleratorTypes/nvidia-tesla-k80 * nvidia-tesla-k80. Auto Zone Exception: If you are using the Dataproc Auto Zone Placement (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/auto-zone#using_auto_zone_placement) feature, you must use the short name of the accelerator type resource, for example, nvidia-tesla-k80.
- acceleratorCount Integer
- The number of the accelerator cards of this type exposed to this instance.
- acceleratorTypeUri String
- Full URL, partial URI, or short name of the accelerator type resource to expose to this instance. See Compute Engine AcceleratorTypes (https://cloud.google.com/compute/docs/reference/beta/acceleratorTypes). Examples: * https://www.googleapis.com/compute/beta/projects/[project_id]/zones/us-east1-a/acceleratorTypes/nvidia-tesla-k80 * projects/[project_id]/zones/us-east1-a/acceleratorTypes/nvidia-tesla-k80 * nvidia-tesla-k80. Auto Zone Exception: If you are using the Dataproc Auto Zone Placement (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/auto-zone#using_auto_zone_placement) feature, you must use the short name of the accelerator type resource, for example, nvidia-tesla-k80.
- acceleratorCount number
- The number of the accelerator cards of this type exposed to this instance.
- acceleratorTypeUri string
- Full URL, partial URI, or short name of the accelerator type resource to expose to this instance. See Compute Engine AcceleratorTypes (https://cloud.google.com/compute/docs/reference/beta/acceleratorTypes). Examples: * https://www.googleapis.com/compute/beta/projects/[project_id]/zones/us-east1-a/acceleratorTypes/nvidia-tesla-k80 * projects/[project_id]/zones/us-east1-a/acceleratorTypes/nvidia-tesla-k80 * nvidia-tesla-k80. Auto Zone Exception: If you are using the Dataproc Auto Zone Placement (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/auto-zone#using_auto_zone_placement) feature, you must use the short name of the accelerator type resource, for example, nvidia-tesla-k80.
- accelerator_count int
- The number of the accelerator cards of this type exposed to this instance.
- accelerator_type_uri str
- Full URL, partial URI, or short name of the accelerator type resource to expose to this instance. See Compute Engine AcceleratorTypes (https://cloud.google.com/compute/docs/reference/beta/acceleratorTypes). Examples: * https://www.googleapis.com/compute/beta/projects/[project_id]/zones/us-east1-a/acceleratorTypes/nvidia-tesla-k80 * projects/[project_id]/zones/us-east1-a/acceleratorTypes/nvidia-tesla-k80 * nvidia-tesla-k80. Auto Zone Exception: If you are using the Dataproc Auto Zone Placement (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/auto-zone#using_auto_zone_placement) feature, you must use the short name of the accelerator type resource, for example, nvidia-tesla-k80.
- acceleratorCount Number
- The number of the accelerator cards of this type exposed to this instance.
- acceleratorTypeUri String
- Full URL, partial URI, or short name of the accelerator type resource to expose to this instance. See Compute Engine AcceleratorTypes (https://cloud.google.com/compute/docs/reference/beta/acceleratorTypes). Examples: * https://www.googleapis.com/compute/beta/projects/[project_id]/zones/us-east1-a/acceleratorTypes/nvidia-tesla-k80 * projects/[project_id]/zones/us-east1-a/acceleratorTypes/nvidia-tesla-k80 * nvidia-tesla-k80. Auto Zone Exception: If you are using the Dataproc Auto Zone Placement (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/auto-zone#using_auto_zone_placement) feature, you must use the short name of the accelerator type resource, for example, nvidia-tesla-k80.
AutoscalingConfig, AutoscalingConfigArgs
- PolicyUri string
- Optional. The autoscaling policy used by the cluster. Only resource names including projectid and location (region) are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/locations/[dataproc_region]/autoscalingPolicies/[policy_id] projects/[project_id]/locations/[dataproc_region]/autoscalingPolicies/[policy_id]. Note that the policy must be in the same project and Dataproc region.
- PolicyUri string
- Optional. The autoscaling policy used by the cluster. Only resource names including projectid and location (region) are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/locations/[dataproc_region]/autoscalingPolicies/[policy_id] projects/[project_id]/locations/[dataproc_region]/autoscalingPolicies/[policy_id]. Note that the policy must be in the same project and Dataproc region.
- policyUri String
- Optional. The autoscaling policy used by the cluster. Only resource names including projectid and location (region) are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/locations/[dataproc_region]/autoscalingPolicies/[policy_id] projects/[project_id]/locations/[dataproc_region]/autoscalingPolicies/[policy_id]. Note that the policy must be in the same project and Dataproc region.
- policyUri string
- Optional. The autoscaling policy used by the cluster. Only resource names including projectid and location (region) are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/locations/[dataproc_region]/autoscalingPolicies/[policy_id] projects/[project_id]/locations/[dataproc_region]/autoscalingPolicies/[policy_id]. Note that the policy must be in the same project and Dataproc region.
- policy_uri str
- Optional. The autoscaling policy used by the cluster. Only resource names including projectid and location (region) are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/locations/[dataproc_region]/autoscalingPolicies/[policy_id] projects/[project_id]/locations/[dataproc_region]/autoscalingPolicies/[policy_id]. Note that the policy must be in the same project and Dataproc region.
- policyUri String
- Optional. The autoscaling policy used by the cluster. Only resource names including projectid and location (region) are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/locations/[dataproc_region]/autoscalingPolicies/[policy_id] projects/[project_id]/locations/[dataproc_region]/autoscalingPolicies/[policy_id]. Note that the policy must be in the same project and Dataproc region.
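A minimal Python sketch of an AutoscalingConfig, assuming the AutoscalingConfigArgs input class mirrors the schema above; the project, region, and policy id are placeholders, and the policy is assumed to already exist in the same project and Dataproc region as the cluster that references it:
import pulumi_google_native as google_native

# Hypothetical reference to an existing autoscaling policy by partial resource name.
autoscaling = google_native.dataproc.v1beta2.AutoscalingConfigArgs(
    policy_uri="projects/my-project/locations/us-central1/autoscalingPolicies/my-policy",
)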
AutoscalingConfigResponse, AutoscalingConfigResponseArgs
- PolicyUri string
- Optional. The autoscaling policy used by the cluster. Only resource names including projectid and location (region) are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/locations/[dataproc_region]/autoscalingPolicies/[policy_id] projects/[project_id]/locations/[dataproc_region]/autoscalingPolicies/[policy_id]. Note that the policy must be in the same project and Dataproc region.
- PolicyUri string
- Optional. The autoscaling policy used by the cluster. Only resource names including projectid and location (region) are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/locations/[dataproc_region]/autoscalingPolicies/[policy_id] projects/[project_id]/locations/[dataproc_region]/autoscalingPolicies/[policy_id]. Note that the policy must be in the same project and Dataproc region.
- policyUri String
- Optional. The autoscaling policy used by the cluster. Only resource names including projectid and location (region) are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/locations/[dataproc_region]/autoscalingPolicies/[policy_id] projects/[project_id]/locations/[dataproc_region]/autoscalingPolicies/[policy_id]. Note that the policy must be in the same project and Dataproc region.
- policyUri string
- Optional. The autoscaling policy used by the cluster. Only resource names including projectid and location (region) are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/locations/[dataproc_region]/autoscalingPolicies/[policy_id] projects/[project_id]/locations/[dataproc_region]/autoscalingPolicies/[policy_id]. Note that the policy must be in the same project and Dataproc region.
- policy_uri str
- Optional. The autoscaling policy used by the cluster. Only resource names including projectid and location (region) are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/locations/[dataproc_region]/autoscalingPolicies/[policy_id] projects/[project_id]/locations/[dataproc_region]/autoscalingPolicies/[policy_id]. Note that the policy must be in the same project and Dataproc region.
- policyUri String
- Optional. The autoscaling policy used by the cluster. Only resource names including projectid and location (region) are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/locations/[dataproc_region]/autoscalingPolicies/[policy_id] projects/[project_id]/locations/[dataproc_region]/autoscalingPolicies/[policy_id]. Note that the policy must be in the same project and Dataproc region.
ClusterConfig, ClusterConfigArgs
- AutoscalingConfig Pulumi.GoogleNative.Dataproc.V1Beta2.Inputs.AutoscalingConfig
- Optional. Autoscaling config for the policy associated with the cluster. Cluster does not autoscale if this field is unset.
- ConfigBucket string
- Optional. A Cloud Storage bucket used to stage job dependencies, config files, and job driver console output. If you do not specify a staging bucket, Cloud Dataproc will determine a Cloud Storage location (US, ASIA, or EU) for your cluster's staging bucket according to the Compute Engine zone where your cluster is deployed, and then create and manage this project-level, per-location bucket (see Dataproc staging bucket (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/staging-bucket)). This field requires a Cloud Storage bucket name, not a URI to a Cloud Storage bucket.
- EncryptionConfig Pulumi.GoogleNative.Dataproc.V1Beta2.Inputs.EncryptionConfig
- Optional. Encryption settings for the cluster.
- EndpointConfig Pulumi.GoogleNative.Dataproc.V1Beta2.Inputs.EndpointConfig
- Optional. Port/endpoint configuration for this cluster
- GceClusterConfig Pulumi.GoogleNative.Dataproc.V1Beta2.Inputs.GceClusterConfig
- Optional. The shared Compute Engine config settings for all instances in a cluster.
- GkeClusterConfig Pulumi.GoogleNative.Dataproc.V1Beta2.Inputs.GkeClusterConfig
- Optional. The Kubernetes Engine config for Dataproc clusters deployed to Kubernetes. Setting this is considered mutually exclusive with Compute Engine-based options such as gce_cluster_config, master_config, worker_config, secondary_worker_config, and autoscaling_config.
- InitializationActions List<Pulumi.GoogleNative.Dataproc.V1Beta2.Inputs.NodeInitializationAction>
- Optional. Commands to execute on each node after config is completed. By default, executables are run on master and all worker nodes. You can test a node's role metadata to run an executable on a master or worker node, as shown below using curl (you can also use wget): ROLE=$(curl -H Metadata-Flavor:Google http://metadata/computeMetadata/v1beta2/instance/attributes/dataproc-role) if [[ "${ROLE}" == 'Master' ]]; then ... master specific actions ... else ... worker specific actions ... fi
- LifecycleConfig Pulumi.GoogleNative.Dataproc.V1Beta2.Inputs.LifecycleConfig
- Optional. The config setting for auto delete cluster schedule.
- MasterConfig Pulumi.GoogleNative.Dataproc.V1Beta2.Inputs.InstanceGroupConfig
- Optional. The Compute Engine config settings for the master instance in a cluster.
- MetastoreConfig Pulumi.GoogleNative.Dataproc.V1Beta2.Inputs.MetastoreConfig
- Optional. Metastore configuration.
- SecondaryWorkerConfig Pulumi.GoogleNative.Dataproc.V1Beta2.Inputs.InstanceGroupConfig
- Optional. The Compute Engine config settings for additional worker instances in a cluster.
- SecurityConfig Pulumi.GoogleNative.Dataproc.V1Beta2.Inputs.SecurityConfig
- Optional. Security related configuration.
- SoftwareConfig Pulumi.GoogleNative.Dataproc.V1Beta2.Inputs.SoftwareConfig
- Optional. The config settings for software inside the cluster.
- TempBucket string
- Optional. A Cloud Storage bucket used to store ephemeral cluster and jobs data, such as Spark and MapReduce history files. If you do not specify a temp bucket, Dataproc will determine a Cloud Storage location (US, ASIA, or EU) for your cluster's temp bucket according to the Compute Engine zone where your cluster is deployed, and then create and manage this project-level, per-location bucket. The default bucket has a TTL of 90 days, but you can use any TTL (or none) if you specify a bucket. This field requires a Cloud Storage bucket name, not a URI to a Cloud Storage bucket.
- WorkerConfig Pulumi.GoogleNative.Dataproc.V1Beta2.Inputs.InstanceGroupConfig
- Optional. The Compute Engine config settings for worker instances in a cluster.
- AutoscalingConfig AutoscalingConfig
- Optional. Autoscaling config for the policy associated with the cluster. Cluster does not autoscale if this field is unset.
- ConfigBucket string
- Optional. A Cloud Storage bucket used to stage job dependencies, config files, and job driver console output. If you do not specify a staging bucket, Cloud Dataproc will determine a Cloud Storage location (US, ASIA, or EU) for your cluster's staging bucket according to the Compute Engine zone where your cluster is deployed, and then create and manage this project-level, per-location bucket (see Dataproc staging bucket (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/staging-bucket)). This field requires a Cloud Storage bucket name, not a URI to a Cloud Storage bucket.
- EncryptionConfig EncryptionConfig
- Optional. Encryption settings for the cluster.
- EndpointConfig EndpointConfig
- Optional. Port/endpoint configuration for this cluster
- GceClusterConfig GceClusterConfig
- Optional. The shared Compute Engine config settings for all instances in a cluster.
- GkeClusterConfig GkeClusterConfig
- Optional. The Kubernetes Engine config for Dataproc clusters deployed to Kubernetes. Setting this is considered mutually exclusive with Compute Engine-based options such as gce_cluster_config, master_config, worker_config, secondary_worker_config, and autoscaling_config.
- InitializationActions []NodeInitializationAction
- Optional. Commands to execute on each node after config is completed. By default, executables are run on master and all worker nodes. You can test a node's role metadata to run an executable on a master or worker node, as shown below using curl (you can also use wget): ROLE=$(curl -H Metadata-Flavor:Google http://metadata/computeMetadata/v1beta2/instance/attributes/dataproc-role) if [[ "${ROLE}" == 'Master' ]]; then ... master specific actions ... else ... worker specific actions ... fi
- LifecycleConfig LifecycleConfig
- Optional. The config setting for auto delete cluster schedule.
- MasterConfig InstanceGroupConfig
- Optional. The Compute Engine config settings for the master instance in a cluster.
- MetastoreConfig MetastoreConfig
- Optional. Metastore configuration.
- SecondaryWorkerConfig InstanceGroupConfig
- Optional. The Compute Engine config settings for additional worker instances in a cluster.
- SecurityConfig SecurityConfig
- Optional. Security related configuration.
- SoftwareConfig SoftwareConfig
- Optional. The config settings for software inside the cluster.
- TempBucket string
- Optional. A Cloud Storage bucket used to store ephemeral cluster and jobs data, such as Spark and MapReduce history files. If you do not specify a temp bucket, Dataproc will determine a Cloud Storage location (US, ASIA, or EU) for your cluster's temp bucket according to the Compute Engine zone where your cluster is deployed, and then create and manage this project-level, per-location bucket. The default bucket has a TTL of 90 days, but you can use any TTL (or none) if you specify a bucket. This field requires a Cloud Storage bucket name, not a URI to a Cloud Storage bucket.
- WorkerConfig InstanceGroupConfig
- Optional. The Compute Engine config settings for worker instances in a cluster.
- autoscalingConfig AutoscalingConfig
- Optional. Autoscaling config for the policy associated with the cluster. Cluster does not autoscale if this field is unset.
- configBucket String
- Optional. A Cloud Storage bucket used to stage job dependencies, config files, and job driver console output. If you do not specify a staging bucket, Cloud Dataproc will determine a Cloud Storage location (US, ASIA, or EU) for your cluster's staging bucket according to the Compute Engine zone where your cluster is deployed, and then create and manage this project-level, per-location bucket (see Dataproc staging bucket (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/staging-bucket)). This field requires a Cloud Storage bucket name, not a URI to a Cloud Storage bucket.
- encryptionConfig EncryptionConfig
- Optional. Encryption settings for the cluster.
- endpointConfig EndpointConfig
- Optional. Port/endpoint configuration for this cluster
- gceClusterConfig GceClusterConfig
- Optional. The shared Compute Engine config settings for all instances in a cluster.
- gkeClusterConfig GkeClusterConfig
- Optional. The Kubernetes Engine config for Dataproc clusters deployed to Kubernetes. Setting this is considered mutually exclusive with Compute Engine-based options such as gce_cluster_config, master_config, worker_config, secondary_worker_config, and autoscaling_config.
- initializationActions List<NodeInitializationAction>
- Optional. Commands to execute on each node after config is completed. By default, executables are run on master and all worker nodes. You can test a node's role metadata to run an executable on a master or worker node, as shown below using curl (you can also use wget): ROLE=$(curl -H Metadata-Flavor:Google http://metadata/computeMetadata/v1beta2/instance/attributes/dataproc-role) if [[ "${ROLE}" == 'Master' ]]; then ... master specific actions ... else ... worker specific actions ... fi
- lifecycleConfig LifecycleConfig
- Optional. The config setting for auto delete cluster schedule.
- masterConfig InstanceGroupConfig
- Optional. The Compute Engine config settings for the master instance in a cluster.
- metastoreConfig MetastoreConfig
- Optional. Metastore configuration.
- secondaryWorkerConfig InstanceGroupConfig
- Optional. The Compute Engine config settings for additional worker instances in a cluster.
- securityConfig SecurityConfig
- Optional. Security related configuration.
- softwareConfig SoftwareConfig
- Optional. The config settings for software inside the cluster.
- tempBucket String
- Optional. A Cloud Storage bucket used to store ephemeral cluster and jobs data, such as Spark and MapReduce history files. If you do not specify a temp bucket, Dataproc will determine a Cloud Storage location (US, ASIA, or EU) for your cluster's temp bucket according to the Compute Engine zone where your cluster is deployed, and then create and manage this project-level, per-location bucket. The default bucket has a TTL of 90 days, but you can use any TTL (or none) if you specify a bucket. This field requires a Cloud Storage bucket name, not a URI to a Cloud Storage bucket.
- workerConfig InstanceGroupConfig
- Optional. The Compute Engine config settings for worker instances in a cluster.
- autoscalingConfig AutoscalingConfig
- Optional. Autoscaling config for the policy associated with the cluster. Cluster does not autoscale if this field is unset.
- configBucket string
- Optional. A Cloud Storage bucket used to stage job dependencies, config files, and job driver console output. If you do not specify a staging bucket, Cloud Dataproc will determine a Cloud Storage location (US, ASIA, or EU) for your cluster's staging bucket according to the Compute Engine zone where your cluster is deployed, and then create and manage this project-level, per-location bucket (see Dataproc staging bucket (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/staging-bucket)). This field requires a Cloud Storage bucket name, not a URI to a Cloud Storage bucket.
- encryptionConfig EncryptionConfig
- Optional. Encryption settings for the cluster.
- endpointConfig EndpointConfig
- Optional. Port/endpoint configuration for this cluster
- gceClusterConfig GceClusterConfig
- Optional. The shared Compute Engine config settings for all instances in a cluster.
- gkeClusterConfig GkeClusterConfig
- Optional. The Kubernetes Engine config for Dataproc clusters deployed to Kubernetes. Setting this is considered mutually exclusive with Compute Engine-based options such as gce_cluster_config, master_config, worker_config, secondary_worker_config, and autoscaling_config.
- initializationActions NodeInitializationAction[]
- Optional. Commands to execute on each node after config is completed. By default, executables are run on master and all worker nodes. You can test a node's role metadata to run an executable on a master or worker node, as shown below using curl (you can also use wget): ROLE=$(curl -H Metadata-Flavor:Google http://metadata/computeMetadata/v1beta2/instance/attributes/dataproc-role) if [[ "${ROLE}" == 'Master' ]]; then ... master specific actions ... else ... worker specific actions ... fi
- lifecycleConfig LifecycleConfig
- Optional. The config setting for auto delete cluster schedule.
- masterConfig InstanceGroupConfig
- Optional. The Compute Engine config settings for the master instance in a cluster.
- metastoreConfig MetastoreConfig
- Optional. Metastore configuration.
- secondaryWorkerConfig InstanceGroupConfig
- Optional. The Compute Engine config settings for additional worker instances in a cluster.
- securityConfig SecurityConfig
- Optional. Security related configuration.
- softwareConfig SoftwareConfig
- Optional. The config settings for software inside the cluster.
- tempBucket string
- Optional. A Cloud Storage bucket used to store ephemeral cluster and jobs data, such as Spark and MapReduce history files. If you do not specify a temp bucket, Dataproc will determine a Cloud Storage location (US, ASIA, or EU) for your cluster's temp bucket according to the Compute Engine zone where your cluster is deployed, and then create and manage this project-level, per-location bucket. The default bucket has a TTL of 90 days, but you can use any TTL (or none) if you specify a bucket. This field requires a Cloud Storage bucket name, not a URI to a Cloud Storage bucket.
- workerConfig InstanceGroupConfig
- Optional. The Compute Engine config settings for worker instances in a cluster.
- autoscaling_config AutoscalingConfig
- Optional. Autoscaling config for the policy associated with the cluster. Cluster does not autoscale if this field is unset.
- config_bucket str
- Optional. A Cloud Storage bucket used to stage job dependencies, config files, and job driver console output. If you do not specify a staging bucket, Cloud Dataproc will determine a Cloud Storage location (US, ASIA, or EU) for your cluster's staging bucket according to the Compute Engine zone where your cluster is deployed, and then create and manage this project-level, per-location bucket (see Dataproc staging bucket (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/staging-bucket)). This field requires a Cloud Storage bucket name, not a URI to a Cloud Storage bucket.
- encryption_config EncryptionConfig
- Optional. Encryption settings for the cluster.
- endpoint_config EndpointConfig
- Optional. Port/endpoint configuration for this cluster
- gce_cluster_config GceClusterConfig
- Optional. The shared Compute Engine config settings for all instances in a cluster.
- gke_cluster_config GkeClusterConfig
- Optional. The Kubernetes Engine config for Dataproc clusters deployed to Kubernetes. Setting this is considered mutually exclusive with Compute Engine-based options such as gce_cluster_config, master_config, worker_config, secondary_worker_config, and autoscaling_config.
- initialization_actions Sequence[NodeInitializationAction]
- Optional. Commands to execute on each node after config is completed. By default, executables are run on master and all worker nodes. You can test a node's role metadata to run an executable on a master or worker node, as shown below using curl (you can also use wget): ROLE=$(curl -H Metadata-Flavor:Google http://metadata/computeMetadata/v1beta2/instance/attributes/dataproc-role) if [[ "${ROLE}" == 'Master' ]]; then ... master specific actions ... else ... worker specific actions ... fi
- lifecycle_config LifecycleConfig
- Optional. The config setting for auto delete cluster schedule.
- master_config InstanceGroupConfig
- Optional. The Compute Engine config settings for the master instance in a cluster.
- metastore_config MetastoreConfig
- Optional. Metastore configuration.
- secondary_worker_config InstanceGroupConfig
- Optional. The Compute Engine config settings for additional worker instances in a cluster.
- security_config SecurityConfig
- Optional. Security related configuration.
- software_config SoftwareConfig
- Optional. The config settings for software inside the cluster.
- temp_bucket str
- Optional. A Cloud Storage bucket used to store ephemeral cluster and jobs data, such as Spark and MapReduce history files. If you do not specify a temp bucket, Dataproc will determine a Cloud Storage location (US, ASIA, or EU) for your cluster's temp bucket according to the Compute Engine zone where your cluster is deployed, and then create and manage this project-level, per-location bucket. The default bucket has a TTL of 90 days, but you can use any TTL (or none) if you specify a bucket. This field requires a Cloud Storage bucket name, not a URI to a Cloud Storage bucket.
- worker_config InstanceGroupConfig
- Optional. The Compute Engine config settings for worker instances in a cluster.
- autoscalingConfig Property Map
- Optional. Autoscaling config for the policy associated with the cluster. Cluster does not autoscale if this field is unset.
- configBucket String
- Optional. A Cloud Storage bucket used to stage job dependencies, config files, and job driver console output. If you do not specify a staging bucket, Cloud Dataproc will determine a Cloud Storage location (US, ASIA, or EU) for your cluster's staging bucket according to the Compute Engine zone where your cluster is deployed, and then create and manage this project-level, per-location bucket (see Dataproc staging bucket (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/staging-bucket)). This field requires a Cloud Storage bucket name, not a URI to a Cloud Storage bucket.
- encryptionConfig Property Map
- Optional. Encryption settings for the cluster.
- endpointConfig Property Map
- Optional. Port/endpoint configuration for this cluster
- gceClusterConfig Property Map
- Optional. The shared Compute Engine config settings for all instances in a cluster.
- gkeClusterConfig Property Map
- Optional. The Kubernetes Engine config for Dataproc clusters deployed to Kubernetes. Setting this is considered mutually exclusive with Compute Engine-based options such as gce_cluster_config, master_config, worker_config, secondary_worker_config, and autoscaling_config.
- initializationActions List<Property Map>
- Optional. Commands to execute on each node after config is completed. By default, executables are run on master and all worker nodes. You can test a node's role metadata to run an executable on a master or worker node, as shown below using curl (you can also use wget): ROLE=$(curl -H Metadata-Flavor:Google http://metadata/computeMetadata/v1beta2/instance/attributes/dataproc-role) if [[ "${ROLE}" == 'Master' ]]; then ... master specific actions ... else ... worker specific actions ... fi
- lifecycleConfig Property Map
- Optional. The config setting for auto delete cluster schedule.
- masterConfig Property Map
- Optional. The Compute Engine config settings for the master instance in a cluster.
- metastoreConfig Property Map
- Optional. Metastore configuration.
- secondaryWorkerConfig Property Map
- Optional. The Compute Engine config settings for additional worker instances in a cluster.
- securityConfig Property Map
- Optional. Security related configuration.
- softwareConfig Property Map
- Optional. The config settings for software inside the cluster.
- tempBucket String
- Optional. A Cloud Storage bucket used to store ephemeral cluster and jobs data, such as Spark and MapReduce history files. If you do not specify a temp bucket, Dataproc will determine a Cloud Storage location (US, ASIA, or EU) for your cluster's temp bucket according to the Compute Engine zone where your cluster is deployed, and then create and manage this project-level, per-location bucket. The default bucket has a TTL of 90 days, but you can use any TTL (or none) if you specify a bucket. This field requires a Cloud Storage bucket name, not a URI to a Cloud Storage bucket.
- workerConfig Property Map
- Optional. The Compute Engine config settings for worker instances in a cluster.
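A hedged Python sketch of a ClusterConfig used through a managed-cluster placement, assuming ManagedClusterArgs, ClusterConfigArgs, GceClusterConfigArgs, InstanceGroupConfigArgs, and NodeInitializationActionArgs in the pulumi_google_native SDK follow the schema above; the zone, machine types, and the gs:// script path are placeholders:
import pulumi_google_native as google_native

# Hypothetical managed-cluster placement: the workflow creates a cluster from this
# config, runs the DAG, and deletes the cluster when the workflow ends.
managed_placement = google_native.dataproc.v1beta2.WorkflowTemplatePlacementArgs(
    managed_cluster=google_native.dataproc.v1beta2.ManagedClusterArgs(
        cluster_name="ephemeral-cluster",
        config=google_native.dataproc.v1beta2.ClusterConfigArgs(
            gce_cluster_config=google_native.dataproc.v1beta2.GceClusterConfigArgs(
                zone_uri="us-central1-a",
            ),
            master_config=google_native.dataproc.v1beta2.InstanceGroupConfigArgs(
                num_instances=1,
                machine_type_uri="n1-standard-4",
            ),
            worker_config=google_native.dataproc.v1beta2.InstanceGroupConfigArgs(
                num_instances=2,
                machine_type_uri="n1-standard-4",
            ),
            initialization_actions=[
                google_native.dataproc.v1beta2.NodeInitializationActionArgs(
                    executable_file="gs://my-bucket/scripts/bootstrap.sh",
                    execution_timeout="300s",
                ),
            ],
        ),
    ),
)
The bootstrap script itself can branch on the dataproc-role metadata key described above to run master- or worker-specific steps.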
ClusterConfigResponse, ClusterConfigResponseArgs
- AutoscalingConfig Pulumi.GoogleNative.Dataproc.V1Beta2.Inputs.AutoscalingConfigResponse
- Optional. Autoscaling config for the policy associated with the cluster. Cluster does not autoscale if this field is unset.
- ConfigBucket string
- Optional. A Cloud Storage bucket used to stage job dependencies, config files, and job driver console output. If you do not specify a staging bucket, Cloud Dataproc will determine a Cloud Storage location (US, ASIA, or EU) for your cluster's staging bucket according to the Compute Engine zone where your cluster is deployed, and then create and manage this project-level, per-location bucket (see Dataproc staging bucket (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/staging-bucket)). This field requires a Cloud Storage bucket name, not a URI to a Cloud Storage bucket.
- EncryptionConfig Pulumi.GoogleNative.Dataproc.V1Beta2.Inputs.EncryptionConfigResponse
- Optional. Encryption settings for the cluster.
- EndpointConfig Pulumi.GoogleNative.Dataproc.V1Beta2.Inputs.EndpointConfigResponse
- Optional. Port/endpoint configuration for this cluster
- GceClusterConfig Pulumi.GoogleNative.Dataproc.V1Beta2.Inputs.GceClusterConfigResponse
- Optional. The shared Compute Engine config settings for all instances in a cluster.
- GkeClusterConfig Pulumi.GoogleNative.Dataproc.V1Beta2.Inputs.GkeClusterConfigResponse
- Optional. The Kubernetes Engine config for Dataproc clusters deployed to Kubernetes. Setting this is considered mutually exclusive with Compute Engine-based options such as gce_cluster_config, master_config, worker_config, secondary_worker_config, and autoscaling_config.
- InitializationActions List<Pulumi.GoogleNative.Dataproc.V1Beta2.Inputs.NodeInitializationActionResponse>
- Optional. Commands to execute on each node after config is completed. By default, executables are run on master and all worker nodes. You can test a node's role metadata to run an executable on a master or worker node, as shown below using curl (you can also use wget): ROLE=$(curl -H Metadata-Flavor:Google http://metadata/computeMetadata/v1beta2/instance/attributes/dataproc-role) if [[ "${ROLE}" == 'Master' ]]; then ... master specific actions ... else ... worker specific actions ... fi
- LifecycleConfig Pulumi.GoogleNative.Dataproc.V1Beta2.Inputs.LifecycleConfigResponse
- Optional. The config setting for auto delete cluster schedule.
- MasterConfig Pulumi.GoogleNative.Dataproc.V1Beta2.Inputs.InstanceGroupConfigResponse
- Optional. The Compute Engine config settings for the master instance in a cluster.
- MetastoreConfig Pulumi.GoogleNative.Dataproc.V1Beta2.Inputs.MetastoreConfigResponse
- Optional. Metastore configuration.
- SecondaryWorkerConfig Pulumi.GoogleNative.Dataproc.V1Beta2.Inputs.InstanceGroupConfigResponse
- Optional. The Compute Engine config settings for additional worker instances in a cluster.
- SecurityConfig Pulumi.GoogleNative.Dataproc.V1Beta2.Inputs.SecurityConfigResponse
- Optional. Security related configuration.
- SoftwareConfig Pulumi.GoogleNative.Dataproc.V1Beta2.Inputs.SoftwareConfigResponse
- Optional. The config settings for software inside the cluster.
- TempBucket string
- Optional. A Cloud Storage bucket used to store ephemeral cluster and jobs data, such as Spark and MapReduce history files. If you do not specify a temp bucket, Dataproc will determine a Cloud Storage location (US, ASIA, or EU) for your cluster's temp bucket according to the Compute Engine zone where your cluster is deployed, and then create and manage this project-level, per-location bucket. The default bucket has a TTL of 90 days, but you can use any TTL (or none) if you specify a bucket. This field requires a Cloud Storage bucket name, not a URI to a Cloud Storage bucket.
- WorkerConfig Pulumi.GoogleNative.Dataproc.V1Beta2.Inputs.InstanceGroupConfigResponse
- Optional. The Compute Engine config settings for worker instances in a cluster.
- AutoscalingConfig AutoscalingConfigResponse
- Optional. Autoscaling config for the policy associated with the cluster. Cluster does not autoscale if this field is unset.
- ConfigBucket string
- Optional. A Cloud Storage bucket used to stage job dependencies, config files, and job driver console output. If you do not specify a staging bucket, Cloud Dataproc will determine a Cloud Storage location (US, ASIA, or EU) for your cluster's staging bucket according to the Compute Engine zone where your cluster is deployed, and then create and manage this project-level, per-location bucket (see Dataproc staging bucket (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/staging-bucket)). This field requires a Cloud Storage bucket name, not a URI to a Cloud Storage bucket.
- EncryptionConfig EncryptionConfigResponse
- Optional. Encryption settings for the cluster.
- EndpointConfig EndpointConfigResponse
- Optional. Port/endpoint configuration for this cluster
- GceClusterConfig GceClusterConfigResponse
- Optional. The shared Compute Engine config settings for all instances in a cluster.
- GkeClusterConfig GkeClusterConfigResponse
- Optional. The Kubernetes Engine config for Dataproc clusters deployed to Kubernetes. Setting this is considered mutually exclusive with Compute Engine-based options such as gce_cluster_config, master_config, worker_config, secondary_worker_config, and autoscaling_config.
- InitializationActions []NodeInitializationActionResponse
- Optional. Commands to execute on each node after config is completed. By default, executables are run on master and all worker nodes. You can test a node's role metadata to run an executable on a master or worker node, as shown below using curl (you can also use wget): ROLE=$(curl -H Metadata-Flavor:Google http://metadata/computeMetadata/v1beta2/instance/attributes/dataproc-role) if [[ "${ROLE}" == 'Master' ]]; then ... master specific actions ... else ... worker specific actions ... fi
- LifecycleConfig LifecycleConfigResponse
- Optional. The config setting for auto delete cluster schedule.
- MasterConfig InstanceGroupConfigResponse
- Optional. The Compute Engine config settings for the master instance in a cluster.
- MetastoreConfig MetastoreConfigResponse
- Optional. Metastore configuration.
- SecondaryWorkerConfig InstanceGroupConfigResponse
- Optional. The Compute Engine config settings for additional worker instances in a cluster.
- SecurityConfig SecurityConfigResponse
- Optional. Security related configuration.
- SoftwareConfig SoftwareConfigResponse
- Optional. The config settings for software inside the cluster.
- TempBucket string
- Optional. A Cloud Storage bucket used to store ephemeral cluster and jobs data, such as Spark and MapReduce history files. If you do not specify a temp bucket, Dataproc will determine a Cloud Storage location (US, ASIA, or EU) for your cluster's temp bucket according to the Compute Engine zone where your cluster is deployed, and then create and manage this project-level, per-location bucket. The default bucket has a TTL of 90 days, but you can use any TTL (or none) if you specify a bucket. This field requires a Cloud Storage bucket name, not a URI to a Cloud Storage bucket.
- WorkerConfig InstanceGroupConfigResponse
- Optional. The Compute Engine config settings for worker instances in a cluster.
- autoscalingConfig AutoscalingConfigResponse
- Optional. Autoscaling config for the policy associated with the cluster. Cluster does not autoscale if this field is unset.
- configBucket String
- Optional. A Cloud Storage bucket used to stage job dependencies, config files, and job driver console output. If you do not specify a staging bucket, Cloud Dataproc will determine a Cloud Storage location (US, ASIA, or EU) for your cluster's staging bucket according to the Compute Engine zone where your cluster is deployed, and then create and manage this project-level, per-location bucket (see Dataproc staging bucket (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/staging-bucket)). This field requires a Cloud Storage bucket name, not a URI to a Cloud Storage bucket.
- encryptionConfig EncryptionConfigResponse
- Optional. Encryption settings for the cluster.
- endpointConfig EndpointConfigResponse
- Optional. Port/endpoint configuration for this cluster
- gceClusterConfig GceClusterConfigResponse
- Optional. The shared Compute Engine config settings for all instances in a cluster.
- gkeClusterConfig GkeClusterConfigResponse
- Optional. The Kubernetes Engine config for Dataproc clusters deployed to Kubernetes. Setting this is considered mutually exclusive with Compute Engine-based options such as gce_cluster_config, master_config, worker_config, secondary_worker_config, and autoscaling_config.
- initializationActions List<NodeInitializationActionResponse>
- Optional. Commands to execute on each node after config is completed. By default, executables are run on master and all worker nodes. You can test a node's role metadata to run an executable on a master or worker node, as shown below using curl (you can also use wget): ROLE=$(curl -H Metadata-Flavor:Google http://metadata/computeMetadata/v1beta2/instance/attributes/dataproc-role) if [[ "${ROLE}" == 'Master' ]]; then ... master specific actions ... else ... worker specific actions ... fi
- lifecycleConfig LifecycleConfigResponse
- Optional. The config setting for auto delete cluster schedule.
- masterConfig InstanceGroupConfigResponse
- Optional. The Compute Engine config settings for the master instance in a cluster.
- metastoreConfig MetastoreConfigResponse
- Optional. Metastore configuration.
- secondaryWorkerConfig InstanceGroupConfigResponse
- Optional. The Compute Engine config settings for additional worker instances in a cluster.
- securityConfig SecurityConfigResponse
- Optional. Security related configuration.
- softwareConfig SoftwareConfigResponse
- Optional. The config settings for software inside the cluster.
- tempBucket String
- Optional. A Cloud Storage bucket used to store ephemeral cluster and jobs data, such as Spark and MapReduce history files. If you do not specify a temp bucket, Dataproc will determine a Cloud Storage location (US, ASIA, or EU) for your cluster's temp bucket according to the Compute Engine zone where your cluster is deployed, and then create and manage this project-level, per-location bucket. The default bucket has a TTL of 90 days, but you can use any TTL (or none) if you specify a bucket. This field requires a Cloud Storage bucket name, not a URI to a Cloud Storage bucket.
- workerConfig InstanceGroupConfigResponse
- Optional. The Compute Engine config settings for worker instances in a cluster.
- autoscaling
Config AutoscalingConfig Response - Optional. Autoscaling config for the policy associated with the cluster. Cluster does not autoscale if this field is unset.
- config
Bucket string - Optional. A Cloud Storage bucket used to stage job dependencies, config files, and job driver console output. If you do not specify a staging bucket, Cloud Dataproc will determine a Cloud Storage location (US, ASIA, or EU) for your cluster's staging bucket according to the Compute Engine zone where your cluster is deployed, and then create and manage this project-level, per-location bucket (see Dataproc staging bucket (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/staging-bucket)). This field requires a Cloud Storage bucket name, not a URI to a Cloud Storage bucket.
- encryption
Config EncryptionConfig Response - Optional. Encryption settings for the cluster.
- endpoint
Config EndpointConfig Response - Optional. Port/endpoint configuration for this cluster
- gce
Cluster GceConfig Cluster Config Response - Optional. The shared Compute Engine config settings for all instances in a cluster.
- gke
Cluster GkeConfig Cluster Config Response - Optional. The Kubernetes Engine config for Dataproc clusters deployed to Kubernetes. Setting this is considered mutually exclusive with Compute Engine-based options such as gce_cluster_config, master_config, worker_config, secondary_worker_config, and autoscaling_config.
- initialization
Actions NodeInitialization Action Response[] - Optional. Commands to execute on each node after config is completed. By default, executables are run on master and all worker nodes. You can test a node's role metadata to run an executable on a master or worker node, as shown below using curl (you can also use wget): ROLE=$(curl -H Metadata-Flavor:Google http://metadata/computeMetadata/v1beta2/instance/attributes/dataproc-role) if [[ "${ROLE}" == 'Master' ]]; then ... master specific actions ... else ... worker specific actions ... fi
- lifecycle
Config LifecycleConfig Response - Optional. The config setting for auto delete cluster schedule.
- master
Config InstanceGroup Config Response - Optional. The Compute Engine config settings for the master instance in a cluster.
- metastore
Config MetastoreConfig Response - Optional. Metastore configuration.
- secondary Worker Config InstanceGroupConfigResponse - Optional. The Compute Engine config settings for additional worker instances in a cluster.
- security
Config SecurityConfig Response - Optional. Security related configuration.
- software
Config SoftwareConfig Response - Optional. The config settings for software inside the cluster.
- temp
Bucket string - Optional. A Cloud Storage bucket used to store ephemeral cluster and jobs data, such as Spark and MapReduce history files. If you do not specify a temp bucket, Dataproc will determine a Cloud Storage location (US, ASIA, or EU) for your cluster's temp bucket according to the Compute Engine zone where your cluster is deployed, and then create and manage this project-level, per-location bucket. The default bucket has a TTL of 90 days, but you can use any TTL (or none) if you specify a bucket. This field requires a Cloud Storage bucket name, not a URI to a Cloud Storage bucket.
- worker
Config InstanceGroup Config Response - Optional. The Compute Engine config settings for worker instances in a cluster.
- autoscaling_
config AutoscalingConfig Response - Optional. Autoscaling config for the policy associated with the cluster. Cluster does not autoscale if this field is unset.
- config_
bucket str - Optional. A Cloud Storage bucket used to stage job dependencies, config files, and job driver console output. If you do not specify a staging bucket, Cloud Dataproc will determine a Cloud Storage location (US, ASIA, or EU) for your cluster's staging bucket according to the Compute Engine zone where your cluster is deployed, and then create and manage this project-level, per-location bucket (see Dataproc staging bucket (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/staging-bucket)). This field requires a Cloud Storage bucket name, not a URI to a Cloud Storage bucket.
- encryption_
config EncryptionConfig Response - Optional. Encryption settings for the cluster.
- endpoint_
config EndpointConfig Response - Optional. Port/endpoint configuration for this cluster
- gce_cluster_config GceClusterConfigResponse - Optional. The shared Compute Engine config settings for all instances in a cluster.
- gke_cluster_config GkeClusterConfigResponse - Optional. The Kubernetes Engine config for Dataproc clusters deployed to Kubernetes. Setting this is considered mutually exclusive with Compute Engine-based options such as gce_cluster_config, master_config, worker_config, secondary_worker_config, and autoscaling_config.
- initialization_
actions Sequence[NodeInitialization Action Response] - Optional. Commands to execute on each node after config is completed. By default, executables are run on master and all worker nodes. You can test a node's role metadata to run an executable on a master or worker node, as shown below using curl (you can also use wget): ROLE=$(curl -H Metadata-Flavor:Google http://metadata/computeMetadata/v1beta2/instance/attributes/dataproc-role) if [[ "${ROLE}" == 'Master' ]]; then ... master specific actions ... else ... worker specific actions ... fi
- lifecycle_
config LifecycleConfig Response - Optional. The config setting for auto delete cluster schedule.
- master_
config InstanceGroup Config Response - Optional. The Compute Engine config settings for the master instance in a cluster.
- metastore_
config MetastoreConfig Response - Optional. Metastore configuration.
- secondary_worker_config InstanceGroupConfigResponse - Optional. The Compute Engine config settings for additional worker instances in a cluster.
- security_
config SecurityConfig Response - Optional. Security related configuration.
- software_
config SoftwareConfig Response - Optional. The config settings for software inside the cluster.
- temp_
bucket str - Optional. A Cloud Storage bucket used to store ephemeral cluster and jobs data, such as Spark and MapReduce history files. If you do not specify a temp bucket, Dataproc will determine a Cloud Storage location (US, ASIA, or EU) for your cluster's temp bucket according to the Compute Engine zone where your cluster is deployed, and then create and manage this project-level, per-location bucket. The default bucket has a TTL of 90 days, but you can use any TTL (or none) if you specify a bucket. This field requires a Cloud Storage bucket name, not a URI to a Cloud Storage bucket.
- worker_
config InstanceGroup Config Response - Optional. The Compute Engine config settings for worker instances in a cluster.
- autoscaling
Config Property Map - Optional. Autoscaling config for the policy associated with the cluster. Cluster does not autoscale if this field is unset.
- config
Bucket String - Optional. A Cloud Storage bucket used to stage job dependencies, config files, and job driver console output. If you do not specify a staging bucket, Cloud Dataproc will determine a Cloud Storage location (US, ASIA, or EU) for your cluster's staging bucket according to the Compute Engine zone where your cluster is deployed, and then create and manage this project-level, per-location bucket (see Dataproc staging bucket (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/staging-bucket)). This field requires a Cloud Storage bucket name, not a URI to a Cloud Storage bucket.
- encryption
Config Property Map - Optional. Encryption settings for the cluster.
- endpoint
Config Property Map - Optional. Port/endpoint configuration for this cluster
- gce Cluster Config Property Map - Optional. The shared Compute Engine config settings for all instances in a cluster.
- gke Cluster Config Property Map - Optional. The Kubernetes Engine config for Dataproc clusters deployed to Kubernetes. Setting this is considered mutually exclusive with Compute Engine-based options such as gce_cluster_config, master_config, worker_config, secondary_worker_config, and autoscaling_config.
- initialization
Actions List<Property Map> - Optional. Commands to execute on each node after config is completed. By default, executables are run on master and all worker nodes. You can test a node's role metadata to run an executable on a master or worker node, as shown below using curl (you can also use wget): ROLE=$(curl -H Metadata-Flavor:Google http://metadata/computeMetadata/v1beta2/instance/attributes/dataproc-role) if [[ "${ROLE}" == 'Master' ]]; then ... master specific actions ... else ... worker specific actions ... fi
- lifecycle
Config Property Map - Optional. The config setting for auto delete cluster schedule.
- master
Config Property Map - Optional. The Compute Engine config settings for the master instance in a cluster.
- metastore
Config Property Map - Optional. Metastore configuration.
- secondary Worker Config Property Map - Optional. The Compute Engine config settings for additional worker instances in a cluster.
- security
Config Property Map - Optional. Security related configuration.
- software
Config Property Map - Optional. The config settings for software inside the cluster.
- temp
Bucket String - Optional. A Cloud Storage bucket used to store ephemeral cluster and jobs data, such as Spark and MapReduce history files. If you do not specify a temp bucket, Dataproc will determine a Cloud Storage location (US, ASIA, or EU) for your cluster's temp bucket according to the Compute Engine zone where your cluster is deployed, and then create and manage this project-level, per-location bucket. The default bucket has a TTL of 90 days, but you can use any TTL (or none) if you specify a bucket. This field requires a Cloud Storage bucket name, not a URI to a Cloud Storage bucket.
- worker
Config Property Map - Optional. The Compute Engine config settings for worker instances in a cluster.
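The fields above mirror the ClusterConfig input type that is used under a workflow template's placement when Dataproc should create (and later tear down) a managed cluster. A minimal Python sketch follows; ManagedClusterArgs, ClusterConfigArgs, InstanceGroupConfigArgs, and NodeInitializationActionArgs are assumed to exist as the input counterparts of the Response types listed here, and the bucket names, executable URI, and instance counts are placeholders.

import pulumi_google_native.dataproc.v1beta2 as dataproc

# Sketch only: the input types below are assumed counterparts of the *Response types above.
template = dataproc.WorkflowTemplate(
    "etl-template",
    id="etl-template",
    location="us-central1",
    jobs=[],  # a real template needs at least one OrderedJobArgs entry
    placement=dataproc.WorkflowTemplatePlacementArgs(
        managed_cluster=dataproc.ManagedClusterArgs(           # assumed input type
            cluster_name="etl-ephemeral",
            config=dataproc.ClusterConfigArgs(                 # assumed input type
                config_bucket="my-staging-bucket",             # bucket name, not a gs:// URI
                temp_bucket="my-temp-bucket",                  # bucket name, not a gs:// URI
                master_config=dataproc.InstanceGroupConfigArgs(num_instances=1),
                worker_config=dataproc.InstanceGroupConfigArgs(num_instances=2),
                initialization_actions=[
                    dataproc.NodeInitializationActionArgs(
                        executable_file="gs://my-bucket/bootstrap.sh",  # assumed field name
                    ),
                ],
            ),
        ),
    ),
)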
ClusterSelector, ClusterSelectorArgs
- Cluster
Labels Dictionary<string, string> - The cluster labels. Cluster must have all labels to match.
- Zone string
- Optional. The zone where the workflow process executes. This parameter does not affect the selection of the cluster. If unspecified, the zone of the first cluster matching the selector is used.
- Cluster
Labels map[string]string - The cluster labels. Cluster must have all labels to match.
- Zone string
- Optional. The zone where the workflow process executes. This parameter does not affect the selection of the cluster. If unspecified, the zone of the first cluster matching the selector is used.
- cluster
Labels Map<String,String> - The cluster labels. Cluster must have all labels to match.
- zone String
- Optional. The zone where the workflow process executes. This parameter does not affect the selection of the cluster. If unspecified, the zone of the first cluster matching the selector is used.
- cluster
Labels {[key: string]: string} - The cluster labels. Cluster must have all labels to match.
- zone string
- Optional. The zone where the workflow process executes. This parameter does not affect the selection of the cluster. If unspecified, the zone of the first cluster matching the selector is used.
- cluster_
labels Mapping[str, str] - The cluster labels. Cluster must have all labels to match.
- zone str
- Optional. The zone where the workflow process executes. This parameter does not affect the selection of the cluster. If unspecified, the zone of the first cluster matching the selector is used.
- cluster
Labels Map<String> - The cluster labels. Cluster must have all labels to match.
- zone String
- Optional. The zone where the workflow process executes. This parameter does not affect the selection of the cluster. If unspecified, the zone of the first cluster matching the selector is used.
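For reference, a short Python sketch of a selector-based placement built from the two fields above; cluster_selector as a field of WorkflowTemplatePlacementArgs is an assumption not documented in this excerpt, and the label values are placeholders.

import pulumi_google_native.dataproc.v1beta2 as dataproc

# Run the workflow on an existing cluster that carries both labels below.
placement = dataproc.WorkflowTemplatePlacementArgs(
    cluster_selector=dataproc.ClusterSelectorArgs(  # assumed field name on the placement type
        cluster_labels={
            "env": "prod",
            "team": "analytics",
        },
        # zone only controls where the workflow process runs, not which cluster is selected
        zone="us-central1-f",
    ),
)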
ClusterSelectorResponse, ClusterSelectorResponseArgs
- Cluster
Labels Dictionary<string, string> - The cluster labels. Cluster must have all labels to match.
- Zone string
- Optional. The zone where the workflow process executes. This parameter does not affect the selection of the cluster. If unspecified, the zone of the first cluster matching the selector is used.
- Cluster
Labels map[string]string - The cluster labels. Cluster must have all labels to match.
- Zone string
- Optional. The zone where the workflow process executes. This parameter does not affect the selection of the cluster. If unspecified, the zone of the first cluster matching the selector is used.
- cluster
Labels Map<String,String> - The cluster labels. Cluster must have all labels to match.
- zone String
- Optional. The zone where the workflow process executes. This parameter does not affect the selection of the cluster. If unspecified, the zone of the first cluster matching the selector is used.
- cluster
Labels {[key: string]: string} - The cluster labels. Cluster must have all labels to match.
- zone string
- Optional. The zone where the workflow process executes. This parameter does not affect the selection of the cluster. If unspecified, the zone of the first cluster matching the selector is used.
- cluster_
labels Mapping[str, str] - The cluster labels. Cluster must have all labels to match.
- zone str
- Optional. The zone where the workflow process executes. This parameter does not affect the selection of the cluster. If unspecified, the zone of the first cluster matching the selector is used.
- cluster
Labels Map<String> - The cluster labels. Cluster must have all labels to match.
- zone String
- Optional. The zone where the workflow process executes. This parameter does not affect the selection of the cluster. If unspecified, the zone of the first cluster matching the selector is used.
DiskConfig, DiskConfigArgs
- Boot Disk Size Gb int - Optional. Size in GB of the boot disk (default is 500GB).
- Boot Disk Type string - Optional. Type of the boot disk (default is "pd-standard"). Valid values: "pd-balanced" (Persistent Disk Balanced Solid State Drive), "pd-ssd" (Persistent Disk Solid State Drive), or "pd-standard" (Persistent Disk Hard Disk Drive). See Disk types (https://cloud.google.com/compute/docs/disks#disk-types).
- Num Local Ssds int - Number of attached SSDs, from 0 to 4 (default is 0). If SSDs are not attached, the boot disk is used to store runtime logs and HDFS (https://hadoop.apache.org/docs/r1.2.1/hdfs_user_guide.html) data. If one or more SSDs are attached, this runtime bulk data is spread across them, and the boot disk contains only basic config and installed binaries.
- Boot Disk Size Gb int - Optional. Size in GB of the boot disk (default is 500GB).
- Boot Disk Type string - Optional. Type of the boot disk (default is "pd-standard"). Valid values: "pd-balanced" (Persistent Disk Balanced Solid State Drive), "pd-ssd" (Persistent Disk Solid State Drive), or "pd-standard" (Persistent Disk Hard Disk Drive). See Disk types (https://cloud.google.com/compute/docs/disks#disk-types).
- Num Local Ssds int - Number of attached SSDs, from 0 to 4 (default is 0). If SSDs are not attached, the boot disk is used to store runtime logs and HDFS (https://hadoop.apache.org/docs/r1.2.1/hdfs_user_guide.html) data. If one or more SSDs are attached, this runtime bulk data is spread across them, and the boot disk contains only basic config and installed binaries.
- boot Disk Size Gb Integer - Optional. Size in GB of the boot disk (default is 500GB).
- boot Disk Type String - Optional. Type of the boot disk (default is "pd-standard"). Valid values: "pd-balanced" (Persistent Disk Balanced Solid State Drive), "pd-ssd" (Persistent Disk Solid State Drive), or "pd-standard" (Persistent Disk Hard Disk Drive). See Disk types (https://cloud.google.com/compute/docs/disks#disk-types).
- num Local Ssds Integer - Number of attached SSDs, from 0 to 4 (default is 0). If SSDs are not attached, the boot disk is used to store runtime logs and HDFS (https://hadoop.apache.org/docs/r1.2.1/hdfs_user_guide.html) data. If one or more SSDs are attached, this runtime bulk data is spread across them, and the boot disk contains only basic config and installed binaries.
- boot Disk Size Gb number - Optional. Size in GB of the boot disk (default is 500GB).
- boot Disk Type string - Optional. Type of the boot disk (default is "pd-standard"). Valid values: "pd-balanced" (Persistent Disk Balanced Solid State Drive), "pd-ssd" (Persistent Disk Solid State Drive), or "pd-standard" (Persistent Disk Hard Disk Drive). See Disk types (https://cloud.google.com/compute/docs/disks#disk-types).
- num Local Ssds number - Number of attached SSDs, from 0 to 4 (default is 0). If SSDs are not attached, the boot disk is used to store runtime logs and HDFS (https://hadoop.apache.org/docs/r1.2.1/hdfs_user_guide.html) data. If one or more SSDs are attached, this runtime bulk data is spread across them, and the boot disk contains only basic config and installed binaries.
- boot_disk_size_gb int - Optional. Size in GB of the boot disk (default is 500GB).
- boot_disk_type str - Optional. Type of the boot disk (default is "pd-standard"). Valid values: "pd-balanced" (Persistent Disk Balanced Solid State Drive), "pd-ssd" (Persistent Disk Solid State Drive), or "pd-standard" (Persistent Disk Hard Disk Drive). See Disk types (https://cloud.google.com/compute/docs/disks#disk-types).
- num_local_ssds int - Number of attached SSDs, from 0 to 4 (default is 0). If SSDs are not attached, the boot disk is used to store runtime logs and HDFS (https://hadoop.apache.org/docs/r1.2.1/hdfs_user_guide.html) data. If one or more SSDs are attached, this runtime bulk data is spread across them, and the boot disk contains only basic config and installed binaries.
- boot Disk Size Gb Number - Optional. Size in GB of the boot disk (default is 500GB).
- boot Disk Type String - Optional. Type of the boot disk (default is "pd-standard"). Valid values: "pd-balanced" (Persistent Disk Balanced Solid State Drive), "pd-ssd" (Persistent Disk Solid State Drive), or "pd-standard" (Persistent Disk Hard Disk Drive). See Disk types (https://cloud.google.com/compute/docs/disks#disk-types).
- num Local Ssds Number - Number of attached SSDs, from 0 to 4 (default is 0). If SSDs are not attached, the boot disk is used to store runtime logs and HDFS (https://hadoop.apache.org/docs/r1.2.1/hdfs_user_guide.html) data. If one or more SSDs are attached, this runtime bulk data is spread across them, and the boot disk contains only basic config and installed binaries.
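As an illustration, the three DiskConfig fields might be attached to a worker group like this in Python; InstanceGroupConfigArgs and its disk_config field are assumptions (only DiskConfigArgs is documented above), and the sizes are placeholders.

import pulumi_google_native.dataproc.v1beta2 as dataproc

# 1 TB balanced persistent boot disk plus two local SSDs for HDFS and shuffle data.
worker_disks = dataproc.DiskConfigArgs(
    boot_disk_size_gb=1000,
    boot_disk_type="pd-balanced",
    num_local_ssds=2,
)

workers = dataproc.InstanceGroupConfigArgs(  # assumed input type mirroring InstanceGroupConfigResponse
    num_instances=2,
    disk_config=worker_disks,                # assumed field name
)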
DiskConfigResponse, DiskConfigResponseArgs
- Boot Disk Size Gb int - Optional. Size in GB of the boot disk (default is 500GB).
- Boot Disk Type string - Optional. Type of the boot disk (default is "pd-standard"). Valid values: "pd-balanced" (Persistent Disk Balanced Solid State Drive), "pd-ssd" (Persistent Disk Solid State Drive), or "pd-standard" (Persistent Disk Hard Disk Drive). See Disk types (https://cloud.google.com/compute/docs/disks#disk-types).
- Num Local Ssds int - Number of attached SSDs, from 0 to 4 (default is 0). If SSDs are not attached, the boot disk is used to store runtime logs and HDFS (https://hadoop.apache.org/docs/r1.2.1/hdfs_user_guide.html) data. If one or more SSDs are attached, this runtime bulk data is spread across them, and the boot disk contains only basic config and installed binaries.
- Boot Disk Size Gb int - Optional. Size in GB of the boot disk (default is 500GB).
- Boot Disk Type string - Optional. Type of the boot disk (default is "pd-standard"). Valid values: "pd-balanced" (Persistent Disk Balanced Solid State Drive), "pd-ssd" (Persistent Disk Solid State Drive), or "pd-standard" (Persistent Disk Hard Disk Drive). See Disk types (https://cloud.google.com/compute/docs/disks#disk-types).
- Num Local Ssds int - Number of attached SSDs, from 0 to 4 (default is 0). If SSDs are not attached, the boot disk is used to store runtime logs and HDFS (https://hadoop.apache.org/docs/r1.2.1/hdfs_user_guide.html) data. If one or more SSDs are attached, this runtime bulk data is spread across them, and the boot disk contains only basic config and installed binaries.
- boot Disk Size Gb Integer - Optional. Size in GB of the boot disk (default is 500GB).
- boot Disk Type String - Optional. Type of the boot disk (default is "pd-standard"). Valid values: "pd-balanced" (Persistent Disk Balanced Solid State Drive), "pd-ssd" (Persistent Disk Solid State Drive), or "pd-standard" (Persistent Disk Hard Disk Drive). See Disk types (https://cloud.google.com/compute/docs/disks#disk-types).
- num Local Ssds Integer - Number of attached SSDs, from 0 to 4 (default is 0). If SSDs are not attached, the boot disk is used to store runtime logs and HDFS (https://hadoop.apache.org/docs/r1.2.1/hdfs_user_guide.html) data. If one or more SSDs are attached, this runtime bulk data is spread across them, and the boot disk contains only basic config and installed binaries.
- boot Disk Size Gb number - Optional. Size in GB of the boot disk (default is 500GB).
- boot Disk Type string - Optional. Type of the boot disk (default is "pd-standard"). Valid values: "pd-balanced" (Persistent Disk Balanced Solid State Drive), "pd-ssd" (Persistent Disk Solid State Drive), or "pd-standard" (Persistent Disk Hard Disk Drive). See Disk types (https://cloud.google.com/compute/docs/disks#disk-types).
- num Local Ssds number - Number of attached SSDs, from 0 to 4 (default is 0). If SSDs are not attached, the boot disk is used to store runtime logs and HDFS (https://hadoop.apache.org/docs/r1.2.1/hdfs_user_guide.html) data. If one or more SSDs are attached, this runtime bulk data is spread across them, and the boot disk contains only basic config and installed binaries.
- boot_disk_size_gb int - Optional. Size in GB of the boot disk (default is 500GB).
- boot_disk_type str - Optional. Type of the boot disk (default is "pd-standard"). Valid values: "pd-balanced" (Persistent Disk Balanced Solid State Drive), "pd-ssd" (Persistent Disk Solid State Drive), or "pd-standard" (Persistent Disk Hard Disk Drive). See Disk types (https://cloud.google.com/compute/docs/disks#disk-types).
- num_local_ssds int - Number of attached SSDs, from 0 to 4 (default is 0). If SSDs are not attached, the boot disk is used to store runtime logs and HDFS (https://hadoop.apache.org/docs/r1.2.1/hdfs_user_guide.html) data. If one or more SSDs are attached, this runtime bulk data is spread across them, and the boot disk contains only basic config and installed binaries.
- boot Disk Size Gb Number - Optional. Size in GB of the boot disk (default is 500GB).
- boot Disk Type String - Optional. Type of the boot disk (default is "pd-standard"). Valid values: "pd-balanced" (Persistent Disk Balanced Solid State Drive), "pd-ssd" (Persistent Disk Solid State Drive), or "pd-standard" (Persistent Disk Hard Disk Drive). See Disk types (https://cloud.google.com/compute/docs/disks#disk-types).
- num Local Ssds Number - Number of attached SSDs, from 0 to 4 (default is 0). If SSDs are not attached, the boot disk is used to store runtime logs and HDFS (https://hadoop.apache.org/docs/r1.2.1/hdfs_user_guide.html) data. If one or more SSDs are attached, this runtime bulk data is spread across them, and the boot disk contains only basic config and installed binaries.
EncryptionConfig, EncryptionConfigArgs
- Gce Pd Kms Key Name string - Optional. The Cloud KMS key name to use for PD disk encryption for all instances in the cluster.
- Gce Pd Kms Key Name string - Optional. The Cloud KMS key name to use for PD disk encryption for all instances in the cluster.
- gce Pd Kms Key Name String - Optional. The Cloud KMS key name to use for PD disk encryption for all instances in the cluster.
- gce Pd Kms Key Name string - Optional. The Cloud KMS key name to use for PD disk encryption for all instances in the cluster.
- gce_pd_kms_key_name str - Optional. The Cloud KMS key name to use for PD disk encryption for all instances in the cluster.
- gce Pd Kms Key Name String - Optional. The Cloud KMS key name to use for PD disk encryption for all instances in the cluster.
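A one-field Python sketch; the key name uses the standard projects/.../locations/.../keyRings/.../cryptoKeys/... format and all identifiers below are placeholders. Note that the relevant service agents typically need roles/cloudkms.cryptoKeyEncrypterDecrypter on the key for disk encryption to work.

import pulumi_google_native.dataproc.v1beta2 as dataproc

# Customer-managed encryption key (CMEK) for the cluster's persistent disks.
encryption = dataproc.EncryptionConfigArgs(
    gce_pd_kms_key_name=(
        "projects/my-project/locations/us-central1/"
        "keyRings/my-keyring/cryptoKeys/dataproc-pd-key"
    ),
)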
EncryptionConfigResponse, EncryptionConfigResponseArgs
- Gce Pd Kms Key Name string - Optional. The Cloud KMS key name to use for PD disk encryption for all instances in the cluster.
- Gce Pd Kms Key Name string - Optional. The Cloud KMS key name to use for PD disk encryption for all instances in the cluster.
- gce Pd Kms Key Name String - Optional. The Cloud KMS key name to use for PD disk encryption for all instances in the cluster.
- gce Pd Kms Key Name string - Optional. The Cloud KMS key name to use for PD disk encryption for all instances in the cluster.
- gce_pd_kms_key_name str - Optional. The Cloud KMS key name to use for PD disk encryption for all instances in the cluster.
- gce Pd Kms Key Name String - Optional. The Cloud KMS key name to use for PD disk encryption for all instances in the cluster.
EndpointConfig, EndpointConfigArgs
- Enable Http Port Access bool - Optional. If true, enable http access to specific ports on the cluster from external sources. Defaults to false.
- Enable Http Port Access bool - Optional. If true, enable http access to specific ports on the cluster from external sources. Defaults to false.
- enable Http Port Access Boolean - Optional. If true, enable http access to specific ports on the cluster from external sources. Defaults to false.
- enable Http Port Access boolean - Optional. If true, enable http access to specific ports on the cluster from external sources. Defaults to false.
- enable_http_port_access bool - Optional. If true, enable http access to specific ports on the cluster from external sources. Defaults to false.
- enable Http Port Access Boolean - Optional. If true, enable http access to specific ports on the cluster from external sources. Defaults to false.
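In Python this is a single flag; when it is enabled, the resulting web UI URLs appear in the http_ports map of the cluster's EndpointConfig response described below.

import pulumi_google_native.dataproc.v1beta2 as dataproc

# Expose component web UIs (YARN, Spark history server, etc.) through the cluster endpoints.
endpoint = dataproc.EndpointConfigArgs(
    enable_http_port_access=True,
)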
EndpointConfigResponse, EndpointConfigResponseArgs
- Enable Http Port Access bool - Optional. If true, enable http access to specific ports on the cluster from external sources. Defaults to false.
- Http Ports Dictionary<string, string> - The map of port descriptions to URLs. Will only be populated if enable_http_port_access is true.
- Enable Http Port Access bool - Optional. If true, enable http access to specific ports on the cluster from external sources. Defaults to false.
- Http Ports map[string]string - The map of port descriptions to URLs. Will only be populated if enable_http_port_access is true.
- enable Http Port Access Boolean - Optional. If true, enable http access to specific ports on the cluster from external sources. Defaults to false.
- http Ports Map<String,String> - The map of port descriptions to URLs. Will only be populated if enable_http_port_access is true.
- enable Http Port Access boolean - Optional. If true, enable http access to specific ports on the cluster from external sources. Defaults to false.
- http Ports {[key: string]: string} - The map of port descriptions to URLs. Will only be populated if enable_http_port_access is true.
- enable_http_port_access bool - Optional. If true, enable http access to specific ports on the cluster from external sources. Defaults to false.
- http_ports Mapping[str, str] - The map of port descriptions to URLs. Will only be populated if enable_http_port_access is true.
- enable Http Port Access Boolean - Optional. If true, enable http access to specific ports on the cluster from external sources. Defaults to false.
- http Ports Map<String> - The map of port descriptions to URLs. Will only be populated if enable_http_port_access is true.
GceClusterConfig, GceClusterConfigArgs
- Internal
Ip boolOnly - Optional. If true, all instances in the cluster will only have internal IP addresses. By default, clusters are not restricted to internal IP addresses, and will have ephemeral external IP addresses assigned to each instance. This internal_ip_only restriction can only be enabled for subnetwork enabled networks, and all off-cluster dependencies must be configured to be accessible without external IP addresses.
- Metadata Dictionary<string, string>
- The Compute Engine metadata entries to add to all instances (see Project and instance metadata (https://cloud.google.com/compute/docs/storing-retrieving-metadata#project_and_instance_metadata)).
- Network
Uri string - Optional. The Compute Engine network to be used for machine communications. Cannot be specified with subnetwork_uri. If neither network_uri nor subnetwork_uri is specified, the "default" network of the project is used, if it exists. Cannot be a "Custom Subnet Network" (see Using Subnetworks (https://cloud.google.com/compute/docs/subnetworks) for more information).A full URL, partial URI, or short name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/regions/global/default projects/[project_id]/regions/global/default default
- Node
Group Affinity Pulumi.Google Native. Dataproc. V1Beta2. Inputs. Node Group Affinity - Optional. Node Group Affinity for sole-tenant clusters.
- Private
Ipv6Google Access Pulumi.Google Native. Dataproc. V1Beta2. Gce Cluster Config Private Ipv6Google Access - Optional. The type of IPv6 access for a cluster.
- Reservation
Affinity Pulumi.Google Native. Dataproc. V1Beta2. Inputs. Reservation Affinity - Optional. Reservation Affinity for consuming Zonal reservation.
- Service
Account string - Optional. The Dataproc service account (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/service-accounts#service_accounts_in_dataproc) (also see VM Data Plane identity (https://cloud.google.com/dataproc/docs/concepts/iam/dataproc-principals#vm_service_account_data_plane_identity)) used by Dataproc cluster VM instances to access Google Cloud Platform services.If not specified, the Compute Engine default service account (https://cloud.google.com/compute/docs/access/service-accounts#default_service_account) is used.
- Service
Account List<string>Scopes - Optional. The URIs of service account scopes to be included in Compute Engine instances. The following base set of scopes is always included: https://www.googleapis.com/auth/cloud.useraccounts.readonly https://www.googleapis.com/auth/devstorage.read_write https://www.googleapis.com/auth/logging.writeIf no scopes are specified, the following defaults are also provided: https://www.googleapis.com/auth/bigquery https://www.googleapis.com/auth/bigtable.admin.table https://www.googleapis.com/auth/bigtable.data https://www.googleapis.com/auth/devstorage.full_control
- Shielded
Instance Config Pulumi.Google Native. Dataproc. V1Beta2. Inputs. Shielded Instance Config - Optional. Shielded Instance Config for clusters using Compute Engine Shielded VMs (https://cloud.google.com/security/shielded-cloud/shielded-vm).
- Subnetwork
Uri string - Optional. The Compute Engine subnetwork to be used for machine communications. Cannot be specified with network_uri.A full URL, partial URI, or short name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/regions/us-east1/subnetworks/sub0 projects/[project_id]/regions/us-east1/subnetworks/sub0 sub0
- List<string>
- The Compute Engine tags to add to all instances (see Tagging instances (https://cloud.google.com/compute/docs/label-or-tag-resources#tags)).
- Zone
Uri string - Optional. The zone where the Compute Engine cluster will be located. On a create request, it is required in the "global" region. If omitted in a non-global Dataproc region, the service will pick a zone in the corresponding Compute Engine region. On a get request, zone will always be present.A full URL, partial URI, or short name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/[zone] projects/[project_id]/zones/[zone] us-central1-f
- Internal
Ip boolOnly - Optional. If true, all instances in the cluster will only have internal IP addresses. By default, clusters are not restricted to internal IP addresses, and will have ephemeral external IP addresses assigned to each instance. This internal_ip_only restriction can only be enabled for subnetwork enabled networks, and all off-cluster dependencies must be configured to be accessible without external IP addresses.
- Metadata map[string]string
- The Compute Engine metadata entries to add to all instances (see Project and instance metadata (https://cloud.google.com/compute/docs/storing-retrieving-metadata#project_and_instance_metadata)).
- Network
Uri string - Optional. The Compute Engine network to be used for machine communications. Cannot be specified with subnetwork_uri. If neither network_uri nor subnetwork_uri is specified, the "default" network of the project is used, if it exists. Cannot be a "Custom Subnet Network" (see Using Subnetworks (https://cloud.google.com/compute/docs/subnetworks) for more information).A full URL, partial URI, or short name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/regions/global/default projects/[project_id]/regions/global/default default
- Node
Group Affinity Node Group Affinity - Optional. Node Group Affinity for sole-tenant clusters.
- Private
Ipv6Google Access Gce Cluster Config Private Ipv6Google Access - Optional. The type of IPv6 access for a cluster.
- Reservation
Affinity ReservationAffinity - Optional. Reservation Affinity for consuming Zonal reservation.
- Service
Account string - Optional. The Dataproc service account (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/service-accounts#service_accounts_in_dataproc) (also see VM Data Plane identity (https://cloud.google.com/dataproc/docs/concepts/iam/dataproc-principals#vm_service_account_data_plane_identity)) used by Dataproc cluster VM instances to access Google Cloud Platform services.If not specified, the Compute Engine default service account (https://cloud.google.com/compute/docs/access/service-accounts#default_service_account) is used.
- Service
Account []stringScopes - Optional. The URIs of service account scopes to be included in Compute Engine instances. The following base set of scopes is always included: https://www.googleapis.com/auth/cloud.useraccounts.readonly https://www.googleapis.com/auth/devstorage.read_write https://www.googleapis.com/auth/logging.writeIf no scopes are specified, the following defaults are also provided: https://www.googleapis.com/auth/bigquery https://www.googleapis.com/auth/bigtable.admin.table https://www.googleapis.com/auth/bigtable.data https://www.googleapis.com/auth/devstorage.full_control
- Shielded
Instance Config Shielded Instance Config - Optional. Shielded Instance Config for clusters using Compute Engine Shielded VMs (https://cloud.google.com/security/shielded-cloud/shielded-vm).
- Subnetwork
Uri string - Optional. The Compute Engine subnetwork to be used for machine communications. Cannot be specified with network_uri.A full URL, partial URI, or short name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/regions/us-east1/subnetworks/sub0 projects/[project_id]/regions/us-east1/subnetworks/sub0 sub0
- []string
- The Compute Engine tags to add to all instances (see Tagging instances (https://cloud.google.com/compute/docs/label-or-tag-resources#tags)).
- Zone
Uri string - Optional. The zone where the Compute Engine cluster will be located. On a create request, it is required in the "global" region. If omitted in a non-global Dataproc region, the service will pick a zone in the corresponding Compute Engine region. On a get request, zone will always be present.A full URL, partial URI, or short name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/[zone] projects/[project_id]/zones/[zone] us-central1-f
- internal
Ip BooleanOnly - Optional. If true, all instances in the cluster will only have internal IP addresses. By default, clusters are not restricted to internal IP addresses, and will have ephemeral external IP addresses assigned to each instance. This internal_ip_only restriction can only be enabled for subnetwork enabled networks, and all off-cluster dependencies must be configured to be accessible without external IP addresses.
- metadata Map<String,String>
- The Compute Engine metadata entries to add to all instances (see Project and instance metadata (https://cloud.google.com/compute/docs/storing-retrieving-metadata#project_and_instance_metadata)).
- network
Uri String - Optional. The Compute Engine network to be used for machine communications. Cannot be specified with subnetwork_uri. If neither network_uri nor subnetwork_uri is specified, the "default" network of the project is used, if it exists. Cannot be a "Custom Subnet Network" (see Using Subnetworks (https://cloud.google.com/compute/docs/subnetworks) for more information).A full URL, partial URI, or short name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/regions/global/default projects/[project_id]/regions/global/default default
- node
Group Affinity Node Group Affinity - Optional. Node Group Affinity for sole-tenant clusters.
- private
Ipv6Google Access Gce Cluster Config Private Ipv6Google Access - Optional. The type of IPv6 access for a cluster.
- reservation
Affinity ReservationAffinity - Optional. Reservation Affinity for consuming Zonal reservation.
- service
Account String - Optional. The Dataproc service account (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/service-accounts#service_accounts_in_dataproc) (also see VM Data Plane identity (https://cloud.google.com/dataproc/docs/concepts/iam/dataproc-principals#vm_service_account_data_plane_identity)) used by Dataproc cluster VM instances to access Google Cloud Platform services.If not specified, the Compute Engine default service account (https://cloud.google.com/compute/docs/access/service-accounts#default_service_account) is used.
- service
Account List<String>Scopes - Optional. The URIs of service account scopes to be included in Compute Engine instances. The following base set of scopes is always included: https://www.googleapis.com/auth/cloud.useraccounts.readonly https://www.googleapis.com/auth/devstorage.read_write https://www.googleapis.com/auth/logging.writeIf no scopes are specified, the following defaults are also provided: https://www.googleapis.com/auth/bigquery https://www.googleapis.com/auth/bigtable.admin.table https://www.googleapis.com/auth/bigtable.data https://www.googleapis.com/auth/devstorage.full_control
- shielded
Instance Config Shielded Instance Config - Optional. Shielded Instance Config for clusters using Compute Engine Shielded VMs (https://cloud.google.com/security/shielded-cloud/shielded-vm).
- subnetwork
Uri String - Optional. The Compute Engine subnetwork to be used for machine communications. Cannot be specified with network_uri.A full URL, partial URI, or short name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/regions/us-east1/subnetworks/sub0 projects/[project_id]/regions/us-east1/subnetworks/sub0 sub0
- List<String>
- The Compute Engine tags to add to all instances (see Tagging instances (https://cloud.google.com/compute/docs/label-or-tag-resources#tags)).
- zone
Uri String - Optional. The zone where the Compute Engine cluster will be located. On a create request, it is required in the "global" region. If omitted in a non-global Dataproc region, the service will pick a zone in the corresponding Compute Engine region. On a get request, zone will always be present.A full URL, partial URI, or short name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/[zone] projects/[project_id]/zones/[zone] us-central1-f
- internal
Ip booleanOnly - Optional. If true, all instances in the cluster will only have internal IP addresses. By default, clusters are not restricted to internal IP addresses, and will have ephemeral external IP addresses assigned to each instance. This internal_ip_only restriction can only be enabled for subnetwork enabled networks, and all off-cluster dependencies must be configured to be accessible without external IP addresses.
- metadata {[key: string]: string}
- The Compute Engine metadata entries to add to all instances (see Project and instance metadata (https://cloud.google.com/compute/docs/storing-retrieving-metadata#project_and_instance_metadata)).
- network
Uri string - Optional. The Compute Engine network to be used for machine communications. Cannot be specified with subnetwork_uri. If neither network_uri nor subnetwork_uri is specified, the "default" network of the project is used, if it exists. Cannot be a "Custom Subnet Network" (see Using Subnetworks (https://cloud.google.com/compute/docs/subnetworks) for more information).A full URL, partial URI, or short name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/regions/global/default projects/[project_id]/regions/global/default default
- node
Group Affinity Node Group Affinity - Optional. Node Group Affinity for sole-tenant clusters.
- private
Ipv6Google Access Gce Cluster Config Private Ipv6Google Access - Optional. The type of IPv6 access for a cluster.
- reservation
Affinity ReservationAffinity - Optional. Reservation Affinity for consuming Zonal reservation.
- service
Account string - Optional. The Dataproc service account (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/service-accounts#service_accounts_in_dataproc) (also see VM Data Plane identity (https://cloud.google.com/dataproc/docs/concepts/iam/dataproc-principals#vm_service_account_data_plane_identity)) used by Dataproc cluster VM instances to access Google Cloud Platform services.If not specified, the Compute Engine default service account (https://cloud.google.com/compute/docs/access/service-accounts#default_service_account) is used.
- service
Account string[]Scopes - Optional. The URIs of service account scopes to be included in Compute Engine instances. The following base set of scopes is always included: https://www.googleapis.com/auth/cloud.useraccounts.readonly https://www.googleapis.com/auth/devstorage.read_write https://www.googleapis.com/auth/logging.writeIf no scopes are specified, the following defaults are also provided: https://www.googleapis.com/auth/bigquery https://www.googleapis.com/auth/bigtable.admin.table https://www.googleapis.com/auth/bigtable.data https://www.googleapis.com/auth/devstorage.full_control
- shielded
Instance Config Shielded Instance Config - Optional. Shielded Instance Config for clusters using Compute Engine Shielded VMs (https://cloud.google.com/security/shielded-cloud/shielded-vm).
- subnetwork
Uri string - Optional. The Compute Engine subnetwork to be used for machine communications. Cannot be specified with network_uri.A full URL, partial URI, or short name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/regions/us-east1/subnetworks/sub0 projects/[project_id]/regions/us-east1/subnetworks/sub0 sub0
- string[]
- The Compute Engine tags to add to all instances (see Tagging instances (https://cloud.google.com/compute/docs/label-or-tag-resources#tags)).
- zone
Uri string - Optional. The zone where the Compute Engine cluster will be located. On a create request, it is required in the "global" region. If omitted in a non-global Dataproc region, the service will pick a zone in the corresponding Compute Engine region. On a get request, zone will always be present.A full URL, partial URI, or short name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/[zone] projects/[project_id]/zones/[zone] us-central1-f
- internal_
ip_ boolonly - Optional. If true, all instances in the cluster will only have internal IP addresses. By default, clusters are not restricted to internal IP addresses, and will have ephemeral external IP addresses assigned to each instance. This internal_ip_only restriction can only be enabled for subnetwork enabled networks, and all off-cluster dependencies must be configured to be accessible without external IP addresses.
- metadata Mapping[str, str]
- The Compute Engine metadata entries to add to all instances (see Project and instance metadata (https://cloud.google.com/compute/docs/storing-retrieving-metadata#project_and_instance_metadata)).
- network_
uri str - Optional. The Compute Engine network to be used for machine communications. Cannot be specified with subnetwork_uri. If neither network_uri nor subnetwork_uri is specified, the "default" network of the project is used, if it exists. Cannot be a "Custom Subnet Network" (see Using Subnetworks (https://cloud.google.com/compute/docs/subnetworks) for more information).A full URL, partial URI, or short name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/regions/global/default projects/[project_id]/regions/global/default default
- node_
group_affinity Node Group Affinity - Optional. Node Group Affinity for sole-tenant clusters.
- private_
ipv6_google_access Gce Cluster Config Private Ipv6Google Access - Optional. The type of IPv6 access for a cluster.
- reservation_
affinity ReservationAffinity - Optional. Reservation Affinity for consuming Zonal reservation.
- service_
account str - Optional. The Dataproc service account (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/service-accounts#service_accounts_in_dataproc) (also see VM Data Plane identity (https://cloud.google.com/dataproc/docs/concepts/iam/dataproc-principals#vm_service_account_data_plane_identity)) used by Dataproc cluster VM instances to access Google Cloud Platform services.If not specified, the Compute Engine default service account (https://cloud.google.com/compute/docs/access/service-accounts#default_service_account) is used.
- service_
account_ Sequence[str]scopes - Optional. The URIs of service account scopes to be included in Compute Engine instances. The following base set of scopes is always included: https://www.googleapis.com/auth/cloud.useraccounts.readonly https://www.googleapis.com/auth/devstorage.read_write https://www.googleapis.com/auth/logging.writeIf no scopes are specified, the following defaults are also provided: https://www.googleapis.com/auth/bigquery https://www.googleapis.com/auth/bigtable.admin.table https://www.googleapis.com/auth/bigtable.data https://www.googleapis.com/auth/devstorage.full_control
- shielded_
instance_config Shielded Instance Config - Optional. Shielded Instance Config for clusters using Compute Engine Shielded VMs (https://cloud.google.com/security/shielded-cloud/shielded-vm).
- subnetwork_
uri str - Optional. The Compute Engine subnetwork to be used for machine communications. Cannot be specified with network_uri.A full URL, partial URI, or short name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/regions/us-east1/subnetworks/sub0 projects/[project_id]/regions/us-east1/subnetworks/sub0 sub0
- Sequence[str]
- The Compute Engine tags to add to all instances (see Tagging instances (https://cloud.google.com/compute/docs/label-or-tag-resources#tags)).
- zone_
uri str - Optional. The zone where the Compute Engine cluster will be located. On a create request, it is required in the "global" region. If omitted in a non-global Dataproc region, the service will pick a zone in the corresponding Compute Engine region. On a get request, zone will always be present.A full URL, partial URI, or short name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/[zone] projects/[project_id]/zones/[zone] us-central1-f
- internal
Ip BooleanOnly - Optional. If true, all instances in the cluster will only have internal IP addresses. By default, clusters are not restricted to internal IP addresses, and will have ephemeral external IP addresses assigned to each instance. This internal_ip_only restriction can only be enabled for subnetwork enabled networks, and all off-cluster dependencies must be configured to be accessible without external IP addresses.
- metadata Map<String>
- The Compute Engine metadata entries to add to all instances (see Project and instance metadata (https://cloud.google.com/compute/docs/storing-retrieving-metadata#project_and_instance_metadata)).
- network
Uri String - Optional. The Compute Engine network to be used for machine communications. Cannot be specified with subnetwork_uri. If neither network_uri nor subnetwork_uri is specified, the "default" network of the project is used, if it exists. Cannot be a "Custom Subnet Network" (see Using Subnetworks (https://cloud.google.com/compute/docs/subnetworks) for more information).A full URL, partial URI, or short name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/regions/global/default projects/[project_id]/regions/global/default default
- node
Group Affinity Property Map - Optional. Node Group Affinity for sole-tenant clusters.
- private
Ipv6Google "PRIVATE_IPV6_GOOGLE_ACCESS_UNSPECIFIED" | "INHERIT_FROM_SUBNETWORK" | "OUTBOUND" | "BIDIRECTIONAL"Access - Optional. The type of IPv6 access for a cluster.
- reservation
Affinity Property Map - Optional. Reservation Affinity for consuming Zonal reservation.
- service
Account String - Optional. The Dataproc service account (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/service-accounts#service_accounts_in_dataproc) (also see VM Data Plane identity (https://cloud.google.com/dataproc/docs/concepts/iam/dataproc-principals#vm_service_account_data_plane_identity)) used by Dataproc cluster VM instances to access Google Cloud Platform services.If not specified, the Compute Engine default service account (https://cloud.google.com/compute/docs/access/service-accounts#default_service_account) is used.
- service
Account List<String>Scopes - Optional. The URIs of service account scopes to be included in Compute Engine instances. The following base set of scopes is always included: https://www.googleapis.com/auth/cloud.useraccounts.readonly https://www.googleapis.com/auth/devstorage.read_write https://www.googleapis.com/auth/logging.writeIf no scopes are specified, the following defaults are also provided: https://www.googleapis.com/auth/bigquery https://www.googleapis.com/auth/bigtable.admin.table https://www.googleapis.com/auth/bigtable.data https://www.googleapis.com/auth/devstorage.full_control
- shielded
Instance Config Property Map - Optional. Shielded Instance Config for clusters using Compute Engine Shielded VMs (https://cloud.google.com/security/shielded-cloud/shielded-vm).
- subnetwork
Uri String - Optional. The Compute Engine subnetwork to be used for machine communications. Cannot be specified with network_uri.A full URL, partial URI, or short name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/regions/us-east1/subnetworks/sub0 projects/[project_id]/regions/us-east1/subnetworks/sub0 sub0
- List<String>
- The Compute Engine tags to add to all instances (see Tagging instances (https://cloud.google.com/compute/docs/label-or-tag-resources#tags)).
- zone
Uri String - Optional. The zone where the Compute Engine cluster will be located. On a create request, it is required in the "global" region. If omitted in a non-global Dataproc region, the service will pick a zone in the corresponding Compute Engine region. On a get request, zone will always be present.A full URL, partial URI, or short name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/[zone] projects/[project_id]/zones/[zone] us-central1-f
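A Python sketch of a locked-down GceClusterConfig using the fields above; the subnetwork, service account, tag, and metadata values are placeholders.

import pulumi_google_native.dataproc.v1beta2 as dataproc

gce_cluster = dataproc.GceClusterConfigArgs(
    # VM-only networking: with internal_ip_only=True the subnetwork must provide
    # private access to any off-cluster dependencies (e.g. Private Google Access).
    internal_ip_only=True,
    subnetwork_uri="projects/my-project/regions/us-central1/subnetworks/dataproc-subnet",
    zone_uri="us-central1-f",
    service_account="dataproc-vm@my-project.iam.gserviceaccount.com",
    service_account_scopes=["https://www.googleapis.com/auth/cloud-platform"],
    tags=["dataproc", "allow-internal"],
    metadata={"enable-oslogin": "true"},
)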
GceClusterConfigPrivateIpv6GoogleAccess, GceClusterConfigPrivateIpv6GoogleAccessArgs
- Private Ipv6Google Access Unspecified - PRIVATE_IPV6_GOOGLE_ACCESS_UNSPECIFIED: If unspecified, Compute Engine default behavior will apply, which is the same as INHERIT_FROM_SUBNETWORK.
- Inherit From Subnetwork - INHERIT_FROM_SUBNETWORK: Private access to and from Google Services configuration inherited from the subnetwork configuration. This is the default Compute Engine behavior.
- Outbound - OUTBOUND: Enables outbound private IPv6 access to Google Services from the Dataproc cluster.
- Bidirectional - BIDIRECTIONAL: Enables bidirectional private IPv6 access between Google Services and the Dataproc cluster.
- Gce Cluster Config Private Ipv6Google Access Private Ipv6Google Access Unspecified - PRIVATE_IPV6_GOOGLE_ACCESS_UNSPECIFIED: If unspecified, Compute Engine default behavior will apply, which is the same as INHERIT_FROM_SUBNETWORK.
- Gce Cluster Config Private Ipv6Google Access Inherit From Subnetwork - INHERIT_FROM_SUBNETWORK: Private access to and from Google Services configuration inherited from the subnetwork configuration. This is the default Compute Engine behavior.
- Gce Cluster Config Private Ipv6Google Access Outbound - OUTBOUND: Enables outbound private IPv6 access to Google Services from the Dataproc cluster.
- Gce Cluster Config Private Ipv6Google Access Bidirectional - BIDIRECTIONAL: Enables bidirectional private IPv6 access between Google Services and the Dataproc cluster.
- Private Ipv6Google Access Unspecified - PRIVATE_IPV6_GOOGLE_ACCESS_UNSPECIFIED: If unspecified, Compute Engine default behavior will apply, which is the same as INHERIT_FROM_SUBNETWORK.
- Inherit From Subnetwork - INHERIT_FROM_SUBNETWORK: Private access to and from Google Services configuration inherited from the subnetwork configuration. This is the default Compute Engine behavior.
- Outbound - OUTBOUND: Enables outbound private IPv6 access to Google Services from the Dataproc cluster.
- Bidirectional - BIDIRECTIONAL: Enables bidirectional private IPv6 access between Google Services and the Dataproc cluster.
- Private Ipv6Google Access Unspecified - PRIVATE_IPV6_GOOGLE_ACCESS_UNSPECIFIED: If unspecified, Compute Engine default behavior will apply, which is the same as INHERIT_FROM_SUBNETWORK.
- Inherit From Subnetwork - INHERIT_FROM_SUBNETWORK: Private access to and from Google Services configuration inherited from the subnetwork configuration. This is the default Compute Engine behavior.
- Outbound - OUTBOUND: Enables outbound private IPv6 access to Google Services from the Dataproc cluster.
- Bidirectional - BIDIRECTIONAL: Enables bidirectional private IPv6 access between Google Services and the Dataproc cluster.
- PRIVATE_IPV6_GOOGLE_ACCESS_UNSPECIFIED - If unspecified, Compute Engine default behavior will apply, which is the same as INHERIT_FROM_SUBNETWORK.
- INHERIT_FROM_SUBNETWORK - Private access to and from Google Services configuration inherited from the subnetwork configuration. This is the default Compute Engine behavior.
- OUTBOUND - Enables outbound private IPv6 access to Google Services from the Dataproc cluster.
- BIDIRECTIONAL - Enables bidirectional private IPv6 access between Google Services and the Dataproc cluster.
- "PRIVATE_IPV6_GOOGLE_ACCESS_UNSPECIFIED"
- PRIVATE_IPV6_GOOGLE_ACCESS_UNSPECIFIEDIf unspecified, Compute Engine default behavior will apply, which is the same as INHERIT_FROM_SUBNETWORK.
- "INHERIT_FROM_SUBNETWORK"
- INHERIT_FROM_SUBNETWORKPrivate access to and from Google Services configuration inherited from the subnetwork configuration. This is the default Compute Engine behavior.
- "OUTBOUND"
- OUTBOUNDEnables outbound private IPv6 access to Google Services from the Dataproc cluster.
- "BIDIRECTIONAL"
- BIDIRECTIONALEnables bidirectional private IPv6 access between Google Services and the Dataproc cluster.
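In the Python SDK these values are normally exposed through the GceClusterConfigPrivateIpv6GoogleAccess enum; the sketch below assumes the member names match the constants listed above.

import pulumi_google_native.dataproc.v1beta2 as dataproc

gce_cluster = dataproc.GceClusterConfigArgs(
    # Outbound-only private IPv6 connectivity from cluster VMs to Google services.
    private_ipv6_google_access=dataproc.GceClusterConfigPrivateIpv6GoogleAccess.OUTBOUND,
)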
GceClusterConfigResponse, GceClusterConfigResponseArgs
- Internal
Ip boolOnly - Optional. If true, all instances in the cluster will only have internal IP addresses. By default, clusters are not restricted to internal IP addresses, and will have ephemeral external IP addresses assigned to each instance. This internal_ip_only restriction can only be enabled for subnetwork enabled networks, and all off-cluster dependencies must be configured to be accessible without external IP addresses.
- Metadata Dictionary<string, string>
- The Compute Engine metadata entries to add to all instances (see Project and instance metadata (https://cloud.google.com/compute/docs/storing-retrieving-metadata#project_and_instance_metadata)).
- Network
Uri string - Optional. The Compute Engine network to be used for machine communications. Cannot be specified with subnetwork_uri. If neither network_uri nor subnetwork_uri is specified, the "default" network of the project is used, if it exists. Cannot be a "Custom Subnet Network" (see Using Subnetworks (https://cloud.google.com/compute/docs/subnetworks) for more information).A full URL, partial URI, or short name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/regions/global/default projects/[project_id]/regions/global/default default
- Node
Group Affinity Pulumi.Google Native. Dataproc. V1Beta2. Inputs. Node Group Affinity Response - Optional. Node Group Affinity for sole-tenant clusters.
- Private
Ipv6Google Access string - Optional. The type of IPv6 access for a cluster.
- Reservation
Affinity Pulumi.Google Native. Dataproc. V1Beta2. Inputs. Reservation Affinity Response - Optional. Reservation Affinity for consuming Zonal reservation.
- Service
Account string - Optional. The Dataproc service account (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/service-accounts#service_accounts_in_dataproc) (also see VM Data Plane identity (https://cloud.google.com/dataproc/docs/concepts/iam/dataproc-principals#vm_service_account_data_plane_identity)) used by Dataproc cluster VM instances to access Google Cloud Platform services.If not specified, the Compute Engine default service account (https://cloud.google.com/compute/docs/access/service-accounts#default_service_account) is used.
- Service
Account List<string>Scopes - Optional. The URIs of service account scopes to be included in Compute Engine instances. The following base set of scopes is always included: https://www.googleapis.com/auth/cloud.useraccounts.readonly https://www.googleapis.com/auth/devstorage.read_write https://www.googleapis.com/auth/logging.writeIf no scopes are specified, the following defaults are also provided: https://www.googleapis.com/auth/bigquery https://www.googleapis.com/auth/bigtable.admin.table https://www.googleapis.com/auth/bigtable.data https://www.googleapis.com/auth/devstorage.full_control
- Shielded
Instance Pulumi.Config Google Native. Dataproc. V1Beta2. Inputs. Shielded Instance Config Response - Optional. Shielded Instance Config for clusters using Compute Engine Shielded VMs (https://cloud.google.com/security/shielded-cloud/shielded-vm).
- Subnetwork
Uri string - Optional. The Compute Engine subnetwork to be used for machine communications. Cannot be specified with network_uri.A full URL, partial URI, or short name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/regions/us-east1/subnetworks/sub0 projects/[project_id]/regions/us-east1/subnetworks/sub0 sub0
- List<string>
- The Compute Engine tags to add to all instances (see Tagging instances (https://cloud.google.com/compute/docs/label-or-tag-resources#tags)).
- Zone
Uri string - Optional. The zone where the Compute Engine cluster will be located. On a create request, it is required in the "global" region. If omitted in a non-global Dataproc region, the service will pick a zone in the corresponding Compute Engine region. On a get request, zone will always be present.A full URL, partial URI, or short name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/[zone] projects/[project_id]/zones/[zone] us-central1-f
- Internal
Ip boolOnly - Optional. If true, all instances in the cluster will only have internal IP addresses. By default, clusters are not restricted to internal IP addresses, and will have ephemeral external IP addresses assigned to each instance. This internal_ip_only restriction can only be enabled for subnetwork enabled networks, and all off-cluster dependencies must be configured to be accessible without external IP addresses.
- Metadata map[string]string
- The Compute Engine metadata entries to add to all instances (see Project and instance metadata (https://cloud.google.com/compute/docs/storing-retrieving-metadata#project_and_instance_metadata)).
- Network
Uri string - Optional. The Compute Engine network to be used for machine communications. Cannot be specified with subnetwork_uri. If neither network_uri nor subnetwork_uri is specified, the "default" network of the project is used, if it exists. Cannot be a "Custom Subnet Network" (see Using Subnetworks (https://cloud.google.com/compute/docs/subnetworks) for more information).A full URL, partial URI, or short name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/regions/global/default projects/[project_id]/regions/global/default default
- Node
Group NodeAffinity Group Affinity Response - Optional. Node Group Affinity for sole-tenant clusters.
- Private
Ipv6Google stringAccess - Optional. The type of IPv6 access for a cluster.
- Reservation
Affinity ReservationAffinity Response - Optional. Reservation Affinity for consuming Zonal reservation.
- Service
Account string - Optional. The Dataproc service account (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/service-accounts#service_accounts_in_dataproc) (also see VM Data Plane identity (https://cloud.google.com/dataproc/docs/concepts/iam/dataproc-principals#vm_service_account_data_plane_identity)) used by Dataproc cluster VM instances to access Google Cloud Platform services.If not specified, the Compute Engine default service account (https://cloud.google.com/compute/docs/access/service-accounts#default_service_account) is used.
- Service
Account []stringScopes - Optional. The URIs of service account scopes to be included in Compute Engine instances. The following base set of scopes is always included: https://www.googleapis.com/auth/cloud.useraccounts.readonly https://www.googleapis.com/auth/devstorage.read_write https://www.googleapis.com/auth/logging.writeIf no scopes are specified, the following defaults are also provided: https://www.googleapis.com/auth/bigquery https://www.googleapis.com/auth/bigtable.admin.table https://www.googleapis.com/auth/bigtable.data https://www.googleapis.com/auth/devstorage.full_control
- Shielded
Instance ShieldedConfig Instance Config Response - Optional. Shielded Instance Config for clusters using Compute Engine Shielded VMs (https://cloud.google.com/security/shielded-cloud/shielded-vm).
- Subnetwork
Uri string - Optional. The Compute Engine subnetwork to be used for machine communications. Cannot be specified with network_uri.A full URL, partial URI, or short name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/regions/us-east1/subnetworks/sub0 projects/[project_id]/regions/us-east1/subnetworks/sub0 sub0
- []string
- The Compute Engine tags to add to all instances (see Tagging instances (https://cloud.google.com/compute/docs/label-or-tag-resources#tags)).
- Zone
Uri string - Optional. The zone where the Compute Engine cluster will be located. On a create request, it is required in the "global" region. If omitted in a non-global Dataproc region, the service will pick a zone in the corresponding Compute Engine region. On a get request, zone will always be present.A full URL, partial URI, or short name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/[zone] projects/[project_id]/zones/[zone] us-central1-f
- internal
Ip BooleanOnly - Optional. If true, all instances in the cluster will only have internal IP addresses. By default, clusters are not restricted to internal IP addresses, and will have ephemeral external IP addresses assigned to each instance. This internal_ip_only restriction can only be enabled for subnetwork enabled networks, and all off-cluster dependencies must be configured to be accessible without external IP addresses.
- metadata Map<String,String>
- The Compute Engine metadata entries to add to all instances (see Project and instance metadata (https://cloud.google.com/compute/docs/storing-retrieving-metadata#project_and_instance_metadata)).
- network
Uri String - Optional. The Compute Engine network to be used for machine communications. Cannot be specified with subnetwork_uri. If neither network_uri nor subnetwork_uri is specified, the "default" network of the project is used, if it exists. Cannot be a "Custom Subnet Network" (see Using Subnetworks (https://cloud.google.com/compute/docs/subnetworks) for more information).A full URL, partial URI, or short name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/regions/global/default projects/[project_id]/regions/global/default default
- node
Group NodeAffinity Group Affinity Response - Optional. Node Group Affinity for sole-tenant clusters.
- private
Ipv6Google StringAccess - Optional. The type of IPv6 access for a cluster.
- reservation
Affinity ReservationAffinity Response - Optional. Reservation Affinity for consuming Zonal reservation.
- service
Account String - Optional. The Dataproc service account (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/service-accounts#service_accounts_in_dataproc) (also see VM Data Plane identity (https://cloud.google.com/dataproc/docs/concepts/iam/dataproc-principals#vm_service_account_data_plane_identity)) used by Dataproc cluster VM instances to access Google Cloud Platform services.If not specified, the Compute Engine default service account (https://cloud.google.com/compute/docs/access/service-accounts#default_service_account) is used.
- service
Account List<String>Scopes - Optional. The URIs of service account scopes to be included in Compute Engine instances. The following base set of scopes is always included: https://www.googleapis.com/auth/cloud.useraccounts.readonly https://www.googleapis.com/auth/devstorage.read_write https://www.googleapis.com/auth/logging.writeIf no scopes are specified, the following defaults are also provided: https://www.googleapis.com/auth/bigquery https://www.googleapis.com/auth/bigtable.admin.table https://www.googleapis.com/auth/bigtable.data https://www.googleapis.com/auth/devstorage.full_control
- shielded
Instance ShieldedConfig Instance Config Response - Optional. Shielded Instance Config for clusters using Compute Engine Shielded VMs (https://cloud.google.com/security/shielded-cloud/shielded-vm).
- subnetwork
Uri String - Optional. The Compute Engine subnetwork to be used for machine communications. Cannot be specified with network_uri.A full URL, partial URI, or short name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/regions/us-east1/subnetworks/sub0 projects/[project_id]/regions/us-east1/subnetworks/sub0 sub0
- List<String>
- The Compute Engine tags to add to all instances (see Tagging instances (https://cloud.google.com/compute/docs/label-or-tag-resources#tags)).
- zone
Uri String - Optional. The zone where the Compute Engine cluster will be located. On a create request, it is required in the "global" region. If omitted in a non-global Dataproc region, the service will pick a zone in the corresponding Compute Engine region. On a get request, zone will always be present.A full URL, partial URI, or short name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/[zone] projects/[project_id]/zones/[zone] us-central1-f
- internal
Ip booleanOnly - Optional. If true, all instances in the cluster will only have internal IP addresses. By default, clusters are not restricted to internal IP addresses, and will have ephemeral external IP addresses assigned to each instance. This internal_ip_only restriction can only be enabled for subnetwork enabled networks, and all off-cluster dependencies must be configured to be accessible without external IP addresses.
- metadata {[key: string]: string}
- The Compute Engine metadata entries to add to all instances (see Project and instance metadata (https://cloud.google.com/compute/docs/storing-retrieving-metadata#project_and_instance_metadata)).
- network
Uri string - Optional. The Compute Engine network to be used for machine communications. Cannot be specified with subnetwork_uri. If neither network_uri nor subnetwork_uri is specified, the "default" network of the project is used, if it exists. Cannot be a "Custom Subnet Network" (see Using Subnetworks (https://cloud.google.com/compute/docs/subnetworks) for more information).A full URL, partial URI, or short name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/regions/global/default projects/[project_id]/regions/global/default default
- node
Group NodeAffinity Group Affinity Response - Optional. Node Group Affinity for sole-tenant clusters.
- private
Ipv6Google stringAccess - Optional. The type of IPv6 access for a cluster.
- reservation
Affinity ReservationAffinity Response - Optional. Reservation Affinity for consuming Zonal reservation.
- service
Account string - Optional. The Dataproc service account (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/service-accounts#service_accounts_in_dataproc) (also see VM Data Plane identity (https://cloud.google.com/dataproc/docs/concepts/iam/dataproc-principals#vm_service_account_data_plane_identity)) used by Dataproc cluster VM instances to access Google Cloud Platform services.If not specified, the Compute Engine default service account (https://cloud.google.com/compute/docs/access/service-accounts#default_service_account) is used.
- service
Account string[]Scopes - Optional. The URIs of service account scopes to be included in Compute Engine instances. The following base set of scopes is always included: https://www.googleapis.com/auth/cloud.useraccounts.readonly https://www.googleapis.com/auth/devstorage.read_write https://www.googleapis.com/auth/logging.writeIf no scopes are specified, the following defaults are also provided: https://www.googleapis.com/auth/bigquery https://www.googleapis.com/auth/bigtable.admin.table https://www.googleapis.com/auth/bigtable.data https://www.googleapis.com/auth/devstorage.full_control
- shielded
Instance ShieldedConfig Instance Config Response - Optional. Shielded Instance Config for clusters using Compute Engine Shielded VMs (https://cloud.google.com/security/shielded-cloud/shielded-vm).
- subnetwork
Uri string - Optional. The Compute Engine subnetwork to be used for machine communications. Cannot be specified with network_uri.A full URL, partial URI, or short name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/regions/us-east1/subnetworks/sub0 projects/[project_id]/regions/us-east1/subnetworks/sub0 sub0
- string[]
- The Compute Engine tags to add to all instances (see Tagging instances (https://cloud.google.com/compute/docs/label-or-tag-resources#tags)).
- zone
Uri string - Optional. The zone where the Compute Engine cluster will be located. On a create request, it is required in the "global" region. If omitted in a non-global Dataproc region, the service will pick a zone in the corresponding Compute Engine region. On a get request, zone will always be present.A full URL, partial URI, or short name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/[zone] projects/[project_id]/zones/[zone] us-central1-f
- internal_
ip_ boolonly - Optional. If true, all instances in the cluster will only have internal IP addresses. By default, clusters are not restricted to internal IP addresses, and will have ephemeral external IP addresses assigned to each instance. This internal_ip_only restriction can only be enabled for subnetwork enabled networks, and all off-cluster dependencies must be configured to be accessible without external IP addresses.
- metadata Mapping[str, str]
- The Compute Engine metadata entries to add to all instances (see Project and instance metadata (https://cloud.google.com/compute/docs/storing-retrieving-metadata#project_and_instance_metadata)).
- network_
uri str - Optional. The Compute Engine network to be used for machine communications. Cannot be specified with subnetwork_uri. If neither network_uri nor subnetwork_uri is specified, the "default" network of the project is used, if it exists. Cannot be a "Custom Subnet Network" (see Using Subnetworks (https://cloud.google.com/compute/docs/subnetworks) for more information).A full URL, partial URI, or short name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/regions/global/default projects/[project_id]/regions/global/default default
- node_
group_ Nodeaffinity Group Affinity Response - Optional. Node Group Affinity for sole-tenant clusters.
- private_
ipv6_ strgoogle_ access - Optional. The type of IPv6 access for a cluster.
- reservation_
affinity ReservationAffinity Response - Optional. Reservation Affinity for consuming Zonal reservation.
- service_
account str - Optional. The Dataproc service account (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/service-accounts#service_accounts_in_dataproc) (also see VM Data Plane identity (https://cloud.google.com/dataproc/docs/concepts/iam/dataproc-principals#vm_service_account_data_plane_identity)) used by Dataproc cluster VM instances to access Google Cloud Platform services.If not specified, the Compute Engine default service account (https://cloud.google.com/compute/docs/access/service-accounts#default_service_account) is used.
- service_
account_ Sequence[str]scopes - Optional. The URIs of service account scopes to be included in Compute Engine instances. The following base set of scopes is always included: https://www.googleapis.com/auth/cloud.useraccounts.readonly https://www.googleapis.com/auth/devstorage.read_write https://www.googleapis.com/auth/logging.writeIf no scopes are specified, the following defaults are also provided: https://www.googleapis.com/auth/bigquery https://www.googleapis.com/auth/bigtable.admin.table https://www.googleapis.com/auth/bigtable.data https://www.googleapis.com/auth/devstorage.full_control
- shielded_
instance_ Shieldedconfig Instance Config Response - Optional. Shielded Instance Config for clusters using Compute Engine Shielded VMs (https://cloud.google.com/security/shielded-cloud/shielded-vm).
- subnetwork_
uri str - Optional. The Compute Engine subnetwork to be used for machine communications. Cannot be specified with network_uri.A full URL, partial URI, or short name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/regions/us-east1/subnetworks/sub0 projects/[project_id]/regions/us-east1/subnetworks/sub0 sub0
- Sequence[str]
- The Compute Engine tags to add to all instances (see Tagging instances (https://cloud.google.com/compute/docs/label-or-tag-resources#tags)).
- zone_
uri str - Optional. The zone where the Compute Engine cluster will be located. On a create request, it is required in the "global" region. If omitted in a non-global Dataproc region, the service will pick a zone in the corresponding Compute Engine region. On a get request, zone will always be present.A full URL, partial URI, or short name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/[zone] projects/[project_id]/zones/[zone] us-central1-f
- internal
Ip BooleanOnly - Optional. If true, all instances in the cluster will only have internal IP addresses. By default, clusters are not restricted to internal IP addresses, and will have ephemeral external IP addresses assigned to each instance. This internal_ip_only restriction can only be enabled for subnetwork enabled networks, and all off-cluster dependencies must be configured to be accessible without external IP addresses.
- metadata Map<String>
- The Compute Engine metadata entries to add to all instances (see Project and instance metadata (https://cloud.google.com/compute/docs/storing-retrieving-metadata#project_and_instance_metadata)).
- network
Uri String - Optional. The Compute Engine network to be used for machine communications. Cannot be specified with subnetwork_uri. If neither network_uri nor subnetwork_uri is specified, the "default" network of the project is used, if it exists. Cannot be a "Custom Subnet Network" (see Using Subnetworks (https://cloud.google.com/compute/docs/subnetworks) for more information).A full URL, partial URI, or short name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/regions/global/default projects/[project_id]/regions/global/default default
- node
Group Property MapAffinity - Optional. Node Group Affinity for sole-tenant clusters.
- private
Ipv6Google StringAccess - Optional. The type of IPv6 access for a cluster.
- reservation
Affinity Property Map - Optional. Reservation Affinity for consuming Zonal reservation.
- service
Account String - Optional. The Dataproc service account (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/service-accounts#service_accounts_in_dataproc) (also see VM Data Plane identity (https://cloud.google.com/dataproc/docs/concepts/iam/dataproc-principals#vm_service_account_data_plane_identity)) used by Dataproc cluster VM instances to access Google Cloud Platform services.If not specified, the Compute Engine default service account (https://cloud.google.com/compute/docs/access/service-accounts#default_service_account) is used.
- service
Account List<String>Scopes - Optional. The URIs of service account scopes to be included in Compute Engine instances. The following base set of scopes is always included: https://www.googleapis.com/auth/cloud.useraccounts.readonly https://www.googleapis.com/auth/devstorage.read_write https://www.googleapis.com/auth/logging.writeIf no scopes are specified, the following defaults are also provided: https://www.googleapis.com/auth/bigquery https://www.googleapis.com/auth/bigtable.admin.table https://www.googleapis.com/auth/bigtable.data https://www.googleapis.com/auth/devstorage.full_control
- shielded
Instance Property MapConfig - Optional. Shielded Instance Config for clusters using Compute Engine Shielded VMs (https://cloud.google.com/security/shielded-cloud/shielded-vm).
- subnetwork
Uri String - Optional. The Compute Engine subnetwork to be used for machine communications. Cannot be specified with network_uri.A full URL, partial URI, or short name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/regions/us-east1/subnetworks/sub0 projects/[project_id]/regions/us-east1/subnetworks/sub0 sub0
- List<String>
- The Compute Engine tags to add to all instances (see Tagging instances (https://cloud.google.com/compute/docs/label-or-tag-resources#tags)).
- zone
Uri String - Optional. The zone where the Compute Engine cluster will be located. On a create request, it is required in the "global" region. If omitted in a non-global Dataproc region, the service will pick a zone in the corresponding Compute Engine region. On a get request, zone will always be present.A full URL, partial URI, or short name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/[zone] projects/[project_id]/zones/[zone] us-central1-f
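The response type above mirrors the GceClusterConfig input type used when declaring a template. As a rough Python sketch (all values are placeholders, and the snake_case field names follow the Python rows above), a cluster restricted to internal IPs on a specific subnetwork might be configured as:

import pulumi_google_native.dataproc.v1beta2 as dataproc

# Minimal sketch of the corresponding input type, GceClusterConfigArgs.
gce_config = dataproc.GceClusterConfigArgs(
    internal_ip_only=True,                 # instances get internal IP addresses only
    subnetwork_uri="sub0",                 # short name, partial URI, or full URL
    zone_uri="us-central1-f",
    service_account="dataproc-sa@my-project.iam.gserviceaccount.com",  # placeholder
    service_account_scopes=["https://www.googleapis.com/auth/cloud-platform"],
    tags=["dataproc", "workflow"],
    metadata={"enable-oslogin": "true"},   # placeholder metadata entry
)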
GkeClusterConfig, GkeClusterConfigArgs
- NamespacedGkeDeploymentTarget Pulumi.GoogleNative.Dataproc.V1Beta2.Inputs.NamespacedGkeDeploymentTarget - Optional. A target for the deployment.
- NamespacedGkeDeploymentTarget NamespacedGkeDeploymentTarget - Optional. A target for the deployment.
- namespacedGkeDeploymentTarget NamespacedGkeDeploymentTarget - Optional. A target for the deployment.
- namespacedGkeDeploymentTarget NamespacedGkeDeploymentTarget - Optional. A target for the deployment.
- namespaced_gke_deployment_target NamespacedGkeDeploymentTarget - Optional. A target for the deployment.
- namespacedGkeDeploymentTarget Property Map - Optional. A target for the deployment.
GkeClusterConfigResponse, GkeClusterConfigResponseArgs
- NamespacedGkeDeploymentTarget Pulumi.GoogleNative.Dataproc.V1Beta2.Inputs.NamespacedGkeDeploymentTargetResponse - Optional. A target for the deployment.
- NamespacedGkeDeploymentTarget NamespacedGkeDeploymentTargetResponse - Optional. A target for the deployment.
- namespacedGkeDeploymentTarget NamespacedGkeDeploymentTargetResponse - Optional. A target for the deployment.
- namespacedGkeDeploymentTarget NamespacedGkeDeploymentTargetResponse - Optional. A target for the deployment.
- namespaced_gke_deployment_target NamespacedGkeDeploymentTargetResponse - Optional. A target for the deployment.
- namespacedGkeDeploymentTarget Property Map - Optional. A target for the deployment.
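A hedged Python sketch of the input side follows. The GkeClusterConfigArgs and NamespacedGkeDeploymentTargetArgs class names follow this page's naming pattern, but the target_gke_cluster and cluster_namespace field names come from the Dataproc v1beta2 API rather than from this page, so treat them as assumptions; resource paths are placeholders.

import pulumi_google_native.dataproc.v1beta2 as dataproc

# Minimal sketch: point the Dataproc-on-GKE deployment at an existing GKE cluster.
gke_config = dataproc.GkeClusterConfigArgs(
    namespaced_gke_deployment_target=dataproc.NamespacedGkeDeploymentTargetArgs(
        # Field names assumed from the Dataproc v1beta2 API, not from this page.
        target_gke_cluster="projects/my-project/locations/us-central1/clusters/my-gke-cluster",
        cluster_namespace="dataproc-workloads",
    ),
)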
HadoopJob, HadoopJobArgs
- ArchiveUris List<string> - Optional. HCFS URIs of archives to be extracted in the working directory of Hadoop drivers and tasks. Supported file types: .jar, .tar, .tar.gz, .tgz, or .zip.
- Args List<string> - Optional. The arguments to pass to the driver. Do not include arguments, such as -libjars or -Dfoo=bar, that can be set as job properties, since a collision may occur that causes an incorrect job submission.
- FileUris List<string> - Optional. HCFS (Hadoop Compatible Filesystem) URIs of files to be copied to the working directory of Hadoop drivers and distributed tasks. Useful for naively parallel tasks.
- JarFileUris List<string> - Optional. Jar file URIs to add to the CLASSPATHs of the Hadoop driver and tasks.
- LoggingConfig Pulumi.GoogleNative.Dataproc.V1Beta2.Inputs.LoggingConfig - Optional. The runtime log config for job execution.
- MainClass string - The name of the driver's main class. The jar file containing the class must be in the default CLASSPATH or specified in jar_file_uris.
- MainJarFileUri string - The HCFS URI of the jar file containing the main class. Examples: 'gs://foo-bucket/analytics-binaries/extract-useful-metrics-mr.jar' 'hdfs:/tmp/test-samples/custom-wordcount.jar' 'file:///home/usr/lib/hadoop-mapreduce/hadoop-mapreduce-examples.jar'
- Properties Dictionary<string, string> - Optional. A mapping of property names to values, used to configure Hadoop. Properties that conflict with values set by the Dataproc API may be overwritten. Can include properties set in /etc/hadoop/conf/*-site and classes in user code.
- Archive
Uris []string - Optional. HCFS URIs of archives to be extracted in the working directory of Hadoop drivers and tasks. Supported file types: .jar, .tar, .tar.gz, .tgz, or .zip.
- Args []string
- Optional. The arguments to pass to the driver. Do not include arguments, such as -libjars or -Dfoo=bar, that can be set as job properties, since a collision may occur that causes an incorrect job submission.
- File
Uris []string - Optional. HCFS (Hadoop Compatible Filesystem) URIs of files to be copied to the working directory of Hadoop drivers and distributed tasks. Useful for naively parallel tasks.
- Jar
File []stringUris - Optional. Jar file URIs to add to the CLASSPATHs of the Hadoop driver and tasks.
- Logging
Config LoggingConfig - Optional. The runtime log config for job execution.
- Main
Class string - The name of the driver's main class. The jar file containing the class must be in the default CLASSPATH or specified in jar_file_uris.
- Main
Jar stringFile Uri - The HCFS URI of the jar file containing the main class. Examples: 'gs://foo-bucket/analytics-binaries/extract-useful-metrics-mr.jar' 'hdfs:/tmp/test-samples/custom-wordcount.jar' 'file:///home/usr/lib/hadoop-mapreduce/hadoop-mapreduce-examples.jar'
- Properties map[string]string
- Optional. A mapping of property names to values, used to configure Hadoop. Properties that conflict with values set by the Dataproc API may be overwritten. Can include properties set in /etc/hadoop/conf/*-site and classes in user code.
- archive
Uris List<String> - Optional. HCFS URIs of archives to be extracted in the working directory of Hadoop drivers and tasks. Supported file types: .jar, .tar, .tar.gz, .tgz, or .zip.
- args List<String>
- Optional. The arguments to pass to the driver. Do not include arguments, such as -libjars or -Dfoo=bar, that can be set as job properties, since a collision may occur that causes an incorrect job submission.
- file
Uris List<String> - Optional. HCFS (Hadoop Compatible Filesystem) URIs of files to be copied to the working directory of Hadoop drivers and distributed tasks. Useful for naively parallel tasks.
- jar
File List<String>Uris - Optional. Jar file URIs to add to the CLASSPATHs of the Hadoop driver and tasks.
- logging
Config LoggingConfig - Optional. The runtime log config for job execution.
- main
Class String - The name of the driver's main class. The jar file containing the class must be in the default CLASSPATH or specified in jar_file_uris.
- main
Jar StringFile Uri - The HCFS URI of the jar file containing the main class. Examples: 'gs://foo-bucket/analytics-binaries/extract-useful-metrics-mr.jar' 'hdfs:/tmp/test-samples/custom-wordcount.jar' 'file:///home/usr/lib/hadoop-mapreduce/hadoop-mapreduce-examples.jar'
- properties Map<String,String>
- Optional. A mapping of property names to values, used to configure Hadoop. Properties that conflict with values set by the Dataproc API may be overwritten. Can include properties set in /etc/hadoop/conf/*-site and classes in user code.
- archive
Uris string[] - Optional. HCFS URIs of archives to be extracted in the working directory of Hadoop drivers and tasks. Supported file types: .jar, .tar, .tar.gz, .tgz, or .zip.
- args string[]
- Optional. The arguments to pass to the driver. Do not include arguments, such as -libjars or -Dfoo=bar, that can be set as job properties, since a collision may occur that causes an incorrect job submission.
- file
Uris string[] - Optional. HCFS (Hadoop Compatible Filesystem) URIs of files to be copied to the working directory of Hadoop drivers and distributed tasks. Useful for naively parallel tasks.
- jar
File string[]Uris - Optional. Jar file URIs to add to the CLASSPATHs of the Hadoop driver and tasks.
- logging
Config LoggingConfig - Optional. The runtime log config for job execution.
- main
Class string - The name of the driver's main class. The jar file containing the class must be in the default CLASSPATH or specified in jar_file_uris.
- main
Jar stringFile Uri - The HCFS URI of the jar file containing the main class. Examples: 'gs://foo-bucket/analytics-binaries/extract-useful-metrics-mr.jar' 'hdfs:/tmp/test-samples/custom-wordcount.jar' 'file:///home/usr/lib/hadoop-mapreduce/hadoop-mapreduce-examples.jar'
- properties {[key: string]: string}
- Optional. A mapping of property names to values, used to configure Hadoop. Properties that conflict with values set by the Dataproc API may be overwritten. Can include properties set in /etc/hadoop/conf/*-site and classes in user code.
- archive_uris Sequence[str] - Optional. HCFS URIs of archives to be extracted in the working directory of Hadoop drivers and tasks. Supported file types: .jar, .tar, .tar.gz, .tgz, or .zip.
- args Sequence[str] - Optional. The arguments to pass to the driver. Do not include arguments, such as -libjars or -Dfoo=bar, that can be set as job properties, since a collision may occur that causes an incorrect job submission.
- file_uris Sequence[str] - Optional. HCFS (Hadoop Compatible Filesystem) URIs of files to be copied to the working directory of Hadoop drivers and distributed tasks. Useful for naively parallel tasks.
- jar_file_uris Sequence[str] - Optional. Jar file URIs to add to the CLASSPATHs of the Hadoop driver and tasks.
- logging_config LoggingConfig - Optional. The runtime log config for job execution.
- main_class str - The name of the driver's main class. The jar file containing the class must be in the default CLASSPATH or specified in jar_file_uris.
- main_jar_file_uri str - The HCFS URI of the jar file containing the main class. Examples: 'gs://foo-bucket/analytics-binaries/extract-useful-metrics-mr.jar' 'hdfs:/tmp/test-samples/custom-wordcount.jar' 'file:///home/usr/lib/hadoop-mapreduce/hadoop-mapreduce-examples.jar'
- properties Mapping[str, str] - Optional. A mapping of property names to values, used to configure Hadoop. Properties that conflict with values set by the Dataproc API may be overwritten. Can include properties set in /etc/hadoop/conf/*-site and classes in user code.
- archive
Uris List<String> - Optional. HCFS URIs of archives to be extracted in the working directory of Hadoop drivers and tasks. Supported file types: .jar, .tar, .tar.gz, .tgz, or .zip.
- args List<String>
- Optional. The arguments to pass to the driver. Do not include arguments, such as -libjars or -Dfoo=bar, that can be set as job properties, since a collision may occur that causes an incorrect job submission.
- file
Uris List<String> - Optional. HCFS (Hadoop Compatible Filesystem) URIs of files to be copied to the working directory of Hadoop drivers and distributed tasks. Useful for naively parallel tasks.
- jar
File List<String>Uris - Optional. Jar file URIs to add to the CLASSPATHs of the Hadoop driver and tasks.
- logging
Config Property Map - Optional. The runtime log config for job execution.
- main
Class String - The name of the driver's main class. The jar file containing the class must be in the default CLASSPATH or specified in jar_file_uris.
- main
Jar StringFile Uri - The HCFS URI of the jar file containing the main class. Examples: 'gs://foo-bucket/analytics-binaries/extract-useful-metrics-mr.jar' 'hdfs:/tmp/test-samples/custom-wordcount.jar' 'file:///home/usr/lib/hadoop-mapreduce/hadoop-mapreduce-examples.jar'
- properties Map<String>
- Optional. A mapping of property names to values, used to configure Hadoop. Properties that conflict with values set by the Dataproc API may be overwritten. Can include properties set in /etc/hadoop/conf/*-site and classes in user code.
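As a rough Python sketch of the input side, the step below wires a HadoopJob into an OrderedJob for the template's jobs list, using only the fields documented above. The bucket paths, step ID, and Hadoop property are placeholders.

import pulumi_google_native.dataproc.v1beta2 as dataproc

# Minimal sketch: one Hadoop step for the template's jobs list.
hadoop_step = dataproc.OrderedJobArgs(
    step_id="extract-metrics",  # placeholder step ID
    hadoop_job=dataproc.HadoopJobArgs(
        main_jar_file_uri="gs://foo-bucket/analytics-binaries/extract-useful-metrics-mr.jar",
        args=["gs://foo-bucket/input/", "gs://foo-bucket/output/"],
        jar_file_uris=["gs://foo-bucket/libs/custom-serde.jar"],
        properties={"mapreduce.job.reduces": "4"},  # placeholder Hadoop property
        logging_config=dataproc.LoggingConfigArgs(
            driver_log_levels={"root": "INFO"},
        ),
    ),
)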
HadoopJobResponse, HadoopJobResponseArgs
- ArchiveUris List<string> - Optional. HCFS URIs of archives to be extracted in the working directory of Hadoop drivers and tasks. Supported file types: .jar, .tar, .tar.gz, .tgz, or .zip.
- Args List<string> - Optional. The arguments to pass to the driver. Do not include arguments, such as -libjars or -Dfoo=bar, that can be set as job properties, since a collision may occur that causes an incorrect job submission.
- FileUris List<string> - Optional. HCFS (Hadoop Compatible Filesystem) URIs of files to be copied to the working directory of Hadoop drivers and distributed tasks. Useful for naively parallel tasks.
- JarFileUris List<string> - Optional. Jar file URIs to add to the CLASSPATHs of the Hadoop driver and tasks.
- LoggingConfig Pulumi.GoogleNative.Dataproc.V1Beta2.Inputs.LoggingConfigResponse - Optional. The runtime log config for job execution.
- MainClass string - The name of the driver's main class. The jar file containing the class must be in the default CLASSPATH or specified in jar_file_uris.
- MainJarFileUri string - The HCFS URI of the jar file containing the main class. Examples: 'gs://foo-bucket/analytics-binaries/extract-useful-metrics-mr.jar' 'hdfs:/tmp/test-samples/custom-wordcount.jar' 'file:///home/usr/lib/hadoop-mapreduce/hadoop-mapreduce-examples.jar'
- Properties Dictionary<string, string> - Optional. A mapping of property names to values, used to configure Hadoop. Properties that conflict with values set by the Dataproc API may be overwritten. Can include properties set in /etc/hadoop/conf/*-site and classes in user code.
- Archive
Uris []string - Optional. HCFS URIs of archives to be extracted in the working directory of Hadoop drivers and tasks. Supported file types: .jar, .tar, .tar.gz, .tgz, or .zip.
- Args []string
- Optional. The arguments to pass to the driver. Do not include arguments, such as -libjars or -Dfoo=bar, that can be set as job properties, since a collision may occur that causes an incorrect job submission.
- File
Uris []string - Optional. HCFS (Hadoop Compatible Filesystem) URIs of files to be copied to the working directory of Hadoop drivers and distributed tasks. Useful for naively parallel tasks.
- Jar
File []stringUris - Optional. Jar file URIs to add to the CLASSPATHs of the Hadoop driver and tasks.
- Logging
Config LoggingConfig Response - Optional. The runtime log config for job execution.
- Main
Class string - The name of the driver's main class. The jar file containing the class must be in the default CLASSPATH or specified in jar_file_uris.
- Main
Jar stringFile Uri - The HCFS URI of the jar file containing the main class. Examples: 'gs://foo-bucket/analytics-binaries/extract-useful-metrics-mr.jar' 'hdfs:/tmp/test-samples/custom-wordcount.jar' 'file:///home/usr/lib/hadoop-mapreduce/hadoop-mapreduce-examples.jar'
- Properties map[string]string
- Optional. A mapping of property names to values, used to configure Hadoop. Properties that conflict with values set by the Dataproc API may be overwritten. Can include properties set in /etc/hadoop/conf/*-site and classes in user code.
- archive
Uris List<String> - Optional. HCFS URIs of archives to be extracted in the working directory of Hadoop drivers and tasks. Supported file types: .jar, .tar, .tar.gz, .tgz, or .zip.
- args List<String>
- Optional. The arguments to pass to the driver. Do not include arguments, such as -libjars or -Dfoo=bar, that can be set as job properties, since a collision may occur that causes an incorrect job submission.
- file
Uris List<String> - Optional. HCFS (Hadoop Compatible Filesystem) URIs of files to be copied to the working directory of Hadoop drivers and distributed tasks. Useful for naively parallel tasks.
- jar
File List<String>Uris - Optional. Jar file URIs to add to the CLASSPATHs of the Hadoop driver and tasks.
- logging
Config LoggingConfig Response - Optional. The runtime log config for job execution.
- main
Class String - The name of the driver's main class. The jar file containing the class must be in the default CLASSPATH or specified in jar_file_uris.
- main
Jar StringFile Uri - The HCFS URI of the jar file containing the main class. Examples: 'gs://foo-bucket/analytics-binaries/extract-useful-metrics-mr.jar' 'hdfs:/tmp/test-samples/custom-wordcount.jar' 'file:///home/usr/lib/hadoop-mapreduce/hadoop-mapreduce-examples.jar'
- properties Map<String,String>
- Optional. A mapping of property names to values, used to configure Hadoop. Properties that conflict with values set by the Dataproc API may be overwritten. Can include properties set in /etc/hadoop/conf/*-site and classes in user code.
- archive
Uris string[] - Optional. HCFS URIs of archives to be extracted in the working directory of Hadoop drivers and tasks. Supported file types: .jar, .tar, .tar.gz, .tgz, or .zip.
- args string[]
- Optional. The arguments to pass to the driver. Do not include arguments, such as -libjars or -Dfoo=bar, that can be set as job properties, since a collision may occur that causes an incorrect job submission.
- file
Uris string[] - Optional. HCFS (Hadoop Compatible Filesystem) URIs of files to be copied to the working directory of Hadoop drivers and distributed tasks. Useful for naively parallel tasks.
- jar
File string[]Uris - Optional. Jar file URIs to add to the CLASSPATHs of the Hadoop driver and tasks.
- logging
Config LoggingConfig Response - Optional. The runtime log config for job execution.
- main
Class string - The name of the driver's main class. The jar file containing the class must be in the default CLASSPATH or specified in jar_file_uris.
- main
Jar stringFile Uri - The HCFS URI of the jar file containing the main class. Examples: 'gs://foo-bucket/analytics-binaries/extract-useful-metrics-mr.jar' 'hdfs:/tmp/test-samples/custom-wordcount.jar' 'file:///home/usr/lib/hadoop-mapreduce/hadoop-mapreduce-examples.jar'
- properties {[key: string]: string}
- Optional. A mapping of property names to values, used to configure Hadoop. Properties that conflict with values set by the Dataproc API may be overwritten. Can include properties set in /etc/hadoop/conf/*-site and classes in user code.
- archive_
uris Sequence[str] - Optional. HCFS URIs of archives to be extracted in the working directory of Hadoop drivers and tasks. Supported file types: .jar, .tar, .tar.gz, .tgz, or .zip.
- args Sequence[str]
- Optional. The arguments to pass to the driver. Do not include arguments, such as -libjars or -Dfoo=bar, that can be set as job properties, since a collision may occur that causes an incorrect job submission.
- file_
uris Sequence[str] - Optional. HCFS (Hadoop Compatible Filesystem) URIs of files to be copied to the working directory of Hadoop drivers and distributed tasks. Useful for naively parallel tasks.
- jar_
file_ Sequence[str]uris - Optional. Jar file URIs to add to the CLASSPATHs of the Hadoop driver and tasks.
- logging_
config LoggingConfig Response - Optional. The runtime log config for job execution.
- main_
class str - The name of the driver's main class. The jar file containing the class must be in the default CLASSPATH or specified in jar_file_uris.
- main_
jar_ strfile_ uri - The HCFS URI of the jar file containing the main class. Examples: 'gs://foo-bucket/analytics-binaries/extract-useful-metrics-mr.jar' 'hdfs:/tmp/test-samples/custom-wordcount.jar' 'file:///home/usr/lib/hadoop-mapreduce/hadoop-mapreduce-examples.jar'
- properties Mapping[str, str]
- Optional. A mapping of property names to values, used to configure Hadoop. Properties that conflict with values set by the Dataproc API may be overwritten. Can include properties set in /etc/hadoop/conf/*-site and classes in user code.
- archive
Uris List<String> - Optional. HCFS URIs of archives to be extracted in the working directory of Hadoop drivers and tasks. Supported file types: .jar, .tar, .tar.gz, .tgz, or .zip.
- args List<String>
- Optional. The arguments to pass to the driver. Do not include arguments, such as -libjars or -Dfoo=bar, that can be set as job properties, since a collision may occur that causes an incorrect job submission.
- file
Uris List<String> - Optional. HCFS (Hadoop Compatible Filesystem) URIs of files to be copied to the working directory of Hadoop drivers and distributed tasks. Useful for naively parallel tasks.
- jar
File List<String>Uris - Optional. Jar file URIs to add to the CLASSPATHs of the Hadoop driver and tasks.
- logging
Config Property Map - Optional. The runtime log config for job execution.
- main
Class String - The name of the driver's main class. The jar file containing the class must be in the default CLASSPATH or specified in jar_file_uris.
- main
Jar StringFile Uri - The HCFS URI of the jar file containing the main class. Examples: 'gs://foo-bucket/analytics-binaries/extract-useful-metrics-mr.jar' 'hdfs:/tmp/test-samples/custom-wordcount.jar' 'file:///home/usr/lib/hadoop-mapreduce/hadoop-mapreduce-examples.jar'
- properties Map<String>
- Optional. A mapping of property names to values, used to configure Hadoop. Properties that conflict with values set by the Dataproc API may be overwritten. Can include properties set in /etc/hadoop/conf/*-site and classes in user code.
HiveJob, HiveJobArgs
- ContinueOnFailure bool - Optional. Whether to continue executing queries if a query fails. The default value is false. Setting to true can be useful when executing independent parallel queries.
- JarFileUris List<string> - Optional. HCFS URIs of jar files to add to the CLASSPATH of the Hive server and Hadoop MapReduce (MR) tasks. Can contain Hive SerDes and UDFs.
- Properties Dictionary<string, string> - Optional. A mapping of property names and values, used to configure Hive. Properties that conflict with values set by the Dataproc API may be overwritten. Can include properties set in /etc/hadoop/conf/*-site.xml, /etc/hive/conf/hive-site.xml, and classes in user code.
- QueryFileUri string - The HCFS URI of the script that contains Hive queries.
- QueryList Pulumi.GoogleNative.Dataproc.V1Beta2.Inputs.QueryList - A list of queries.
- ScriptVariables Dictionary<string, string> - Optional. Mapping of query variable names to values (equivalent to the Hive command: SET name="value";).
- Continue
On boolFailure - Optional. Whether to continue executing queries if a query fails. The default value is false. Setting to true can be useful when executing independent parallel queries.
- Jar
File []stringUris - Optional. HCFS URIs of jar files to add to the CLASSPATH of the Hive server and Hadoop MapReduce (MR) tasks. Can contain Hive SerDes and UDFs.
- Properties map[string]string
- Optional. A mapping of property names and values, used to configure Hive. Properties that conflict with values set by the Dataproc API may be overwritten. Can include properties set in /etc/hadoop/conf/*-site.xml, /etc/hive/conf/hive-site.xml, and classes in user code.
- Query
File stringUri - The HCFS URI of the script that contains Hive queries.
- Query
List QueryList - A list of queries.
- Script
Variables map[string]string - Optional. Mapping of query variable names to values (equivalent to the Hive command: SET name="value";).
- continue
On BooleanFailure - Optional. Whether to continue executing queries if a query fails. The default value is false. Setting to true can be useful when executing independent parallel queries.
- jar
File List<String>Uris - Optional. HCFS URIs of jar files to add to the CLASSPATH of the Hive server and Hadoop MapReduce (MR) tasks. Can contain Hive SerDes and UDFs.
- properties Map<String,String>
- Optional. A mapping of property names and values, used to configure Hive. Properties that conflict with values set by the Dataproc API may be overwritten. Can include properties set in /etc/hadoop/conf/*-site.xml, /etc/hive/conf/hive-site.xml, and classes in user code.
- query
File StringUri - The HCFS URI of the script that contains Hive queries.
- query
List QueryList - A list of queries.
- script
Variables Map<String,String> - Optional. Mapping of query variable names to values (equivalent to the Hive command: SET name="value";).
- continue
On booleanFailure - Optional. Whether to continue executing queries if a query fails. The default value is false. Setting to true can be useful when executing independent parallel queries.
- jar
File string[]Uris - Optional. HCFS URIs of jar files to add to the CLASSPATH of the Hive server and Hadoop MapReduce (MR) tasks. Can contain Hive SerDes and UDFs.
- properties {[key: string]: string}
- Optional. A mapping of property names and values, used to configure Hive. Properties that conflict with values set by the Dataproc API may be overwritten. Can include properties set in /etc/hadoop/conf/*-site.xml, /etc/hive/conf/hive-site.xml, and classes in user code.
- query
File stringUri - The HCFS URI of the script that contains Hive queries.
- query
List QueryList - A list of queries.
- script
Variables {[key: string]: string} - Optional. Mapping of query variable names to values (equivalent to the Hive command: SET name="value";).
- continue_on_failure bool - Optional. Whether to continue executing queries if a query fails. The default value is false. Setting to true can be useful when executing independent parallel queries.
- jar_file_uris Sequence[str] - Optional. HCFS URIs of jar files to add to the CLASSPATH of the Hive server and Hadoop MapReduce (MR) tasks. Can contain Hive SerDes and UDFs.
- properties Mapping[str, str] - Optional. A mapping of property names and values, used to configure Hive. Properties that conflict with values set by the Dataproc API may be overwritten. Can include properties set in /etc/hadoop/conf/*-site.xml, /etc/hive/conf/hive-site.xml, and classes in user code.
- query_file_uri str - The HCFS URI of the script that contains Hive queries.
- query_list QueryList - A list of queries.
- script_variables Mapping[str, str] - Optional. Mapping of query variable names to values (equivalent to the Hive command: SET name="value";).
- continue
On BooleanFailure - Optional. Whether to continue executing queries if a query fails. The default value is false. Setting to true can be useful when executing independent parallel queries.
- jar
File List<String>Uris - Optional. HCFS URIs of jar files to add to the CLASSPATH of the Hive server and Hadoop MapReduce (MR) tasks. Can contain Hive SerDes and UDFs.
- properties Map<String>
- Optional. A mapping of property names and values, used to configure Hive. Properties that conflict with values set by the Dataproc API may be overwritten. Can include properties set in /etc/hadoop/conf/*-site.xml, /etc/hive/conf/hive-site.xml, and classes in user code.
- query
File StringUri - The HCFS URI of the script that contains Hive queries.
- query
List Property Map - A list of queries.
- script
Variables Map<String> - Optional. Mapping of query variable names to values (equivalent to the Hive command: SET name="value";).
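A comparable Python sketch for a Hive step follows, again using only the fields documented above; the Cloud Storage paths, step ID, and script variables are placeholders.

import pulumi_google_native.dataproc.v1beta2 as dataproc

# Minimal sketch: a Hive step driven by a script stored in Cloud Storage.
hive_step = dataproc.OrderedJobArgs(
    step_id="load-warehouse",  # placeholder step ID
    hive_job=dataproc.HiveJobArgs(
        query_file_uri="gs://foo-bucket/queries/load_warehouse.hql",
        script_variables={"input_table": "raw_events", "run_date": "2021-01-01"},
        continue_on_failure=False,  # stop the step on the first failed query
        jar_file_uris=["gs://foo-bucket/libs/custom-udfs.jar"],
    ),
)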
HiveJobResponse, HiveJobResponseArgs
- ContinueOnFailure bool - Optional. Whether to continue executing queries if a query fails. The default value is false. Setting to true can be useful when executing independent parallel queries.
- JarFileUris List<string> - Optional. HCFS URIs of jar files to add to the CLASSPATH of the Hive server and Hadoop MapReduce (MR) tasks. Can contain Hive SerDes and UDFs.
- Properties Dictionary<string, string> - Optional. A mapping of property names and values, used to configure Hive. Properties that conflict with values set by the Dataproc API may be overwritten. Can include properties set in /etc/hadoop/conf/*-site.xml, /etc/hive/conf/hive-site.xml, and classes in user code.
- QueryFileUri string - The HCFS URI of the script that contains Hive queries.
- QueryList Pulumi.GoogleNative.Dataproc.V1Beta2.Inputs.QueryListResponse - A list of queries.
- ScriptVariables Dictionary<string, string> - Optional. Mapping of query variable names to values (equivalent to the Hive command: SET name="value";).
- Continue
On boolFailure - Optional. Whether to continue executing queries if a query fails. The default value is false. Setting to true can be useful when executing independent parallel queries.
- Jar
File []stringUris - Optional. HCFS URIs of jar files to add to the CLASSPATH of the Hive server and Hadoop MapReduce (MR) tasks. Can contain Hive SerDes and UDFs.
- Properties map[string]string
- Optional. A mapping of property names and values, used to configure Hive. Properties that conflict with values set by the Dataproc API may be overwritten. Can include properties set in /etc/hadoop/conf/*-site.xml, /etc/hive/conf/hive-site.xml, and classes in user code.
- Query
File stringUri - The HCFS URI of the script that contains Hive queries.
- Query
List QueryList Response - A list of queries.
- Script
Variables map[string]string - Optional. Mapping of query variable names to values (equivalent to the Hive command: SET name="value";).
- continue
On BooleanFailure - Optional. Whether to continue executing queries if a query fails. The default value is false. Setting to true can be useful when executing independent parallel queries.
- jar
File List<String>Uris - Optional. HCFS URIs of jar files to add to the CLASSPATH of the Hive server and Hadoop MapReduce (MR) tasks. Can contain Hive SerDes and UDFs.
- properties Map<String,String>
- Optional. A mapping of property names and values, used to configure Hive. Properties that conflict with values set by the Dataproc API may be overwritten. Can include properties set in /etc/hadoop/conf/*-site.xml, /etc/hive/conf/hive-site.xml, and classes in user code.
- query
File StringUri - The HCFS URI of the script that contains Hive queries.
- query
List QueryList Response - A list of queries.
- script
Variables Map<String,String> - Optional. Mapping of query variable names to values (equivalent to the Hive command: SET name="value";).
- continue
On booleanFailure - Optional. Whether to continue executing queries if a query fails. The default value is false. Setting to true can be useful when executing independent parallel queries.
- jar
File string[]Uris - Optional. HCFS URIs of jar files to add to the CLASSPATH of the Hive server and Hadoop MapReduce (MR) tasks. Can contain Hive SerDes and UDFs.
- properties {[key: string]: string}
- Optional. A mapping of property names and values, used to configure Hive. Properties that conflict with values set by the Dataproc API may be overwritten. Can include properties set in /etc/hadoop/conf/*-site.xml, /etc/hive/conf/hive-site.xml, and classes in user code.
- query
File stringUri - The HCFS URI of the script that contains Hive queries.
- query
List QueryList Response - A list of queries.
- script
Variables {[key: string]: string} - Optional. Mapping of query variable names to values (equivalent to the Hive command: SET name="value";).
- continue_
on_ boolfailure - Optional. Whether to continue executing queries if a query fails. The default value is false. Setting to true can be useful when executing independent parallel queries.
- jar_
file_ Sequence[str]uris - Optional. HCFS URIs of jar files to add to the CLASSPATH of the Hive server and Hadoop MapReduce (MR) tasks. Can contain Hive SerDes and UDFs.
- properties Mapping[str, str]
- Optional. A mapping of property names and values, used to configure Hive. Properties that conflict with values set by the Dataproc API may be overwritten. Can include properties set in /etc/hadoop/conf/*-site.xml, /etc/hive/conf/hive-site.xml, and classes in user code.
- query_
file_ struri - The HCFS URI of the script that contains Hive queries.
- query_
list QueryList Response - A list of queries.
- script_
variables Mapping[str, str] - Optional. Mapping of query variable names to values (equivalent to the Hive command: SET name="value";).
- continue
On BooleanFailure - Optional. Whether to continue executing queries if a query fails. The default value is false. Setting to true can be useful when executing independent parallel queries.
- jar
File List<String>Uris - Optional. HCFS URIs of jar files to add to the CLASSPATH of the Hive server and Hadoop MapReduce (MR) tasks. Can contain Hive SerDes and UDFs.
- properties Map<String>
- Optional. A mapping of property names and values, used to configure Hive. Properties that conflict with values set by the Dataproc API may be overwritten. Can include properties set in /etc/hadoop/conf/*-site.xml, /etc/hive/conf/hive-site.xml, and classes in user code.
- query
File StringUri - The HCFS URI of the script that contains Hive queries.
- query
List Property Map - A list of queries.
- script
Variables Map<String> - Optional. Mapping of query variable names to values (equivalent to the Hive command: SET name="value";).
InstanceGroupConfig, InstanceGroupConfigArgs
- Accelerators List<Pulumi.GoogleNative.Dataproc.V1Beta2.Inputs.AcceleratorConfig> - Optional. The Compute Engine accelerator configuration for these instances.
- DiskConfig Pulumi.GoogleNative.Dataproc.V1Beta2.Inputs.DiskConfig - Optional. Disk option config settings.
- ImageUri string - Optional. The Compute Engine image resource used for cluster instances. The URI can represent an image or image family. Image examples: https://www.googleapis.com/compute/beta/projects/[project_id]/global/images/[image-id] projects/[project_id]/global/images/[image-id] image-id Image family examples. Dataproc will use the most recent image from the family: https://www.googleapis.com/compute/beta/projects/[project_id]/global/images/family/[custom-image-family-name] projects/[project_id]/global/images/family/[custom-image-family-name] If the URI is unspecified, it will be inferred from SoftwareConfig.image_version or the system default.
- MachineTypeUri string - Optional. The Compute Engine machine type used for cluster instances. A full URL, partial URI, or short name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/us-east1-a/machineTypes/n1-standard-2 projects/[project_id]/zones/us-east1-a/machineTypes/n1-standard-2 n1-standard-2 Auto Zone Exception: If you are using the Dataproc Auto Zone Placement (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/auto-zone#using_auto_zone_placement) feature, you must use the short name of the machine type resource, for example, n1-standard-2.
- MinCpuPlatform string - Specifies the minimum cpu platform for the Instance Group. See Dataproc -> Minimum CPU Platform (https://cloud.google.com/dataproc/docs/concepts/compute/dataproc-min-cpu).
- NumInstances int - Optional. The number of VM instances in the instance group. For HA cluster master_config groups, must be set to 3. For standard cluster master_config groups, must be set to 1.
- Preemptibility Pulumi.GoogleNative.Dataproc.V1Beta2.InstanceGroupConfigPreemptibility - Optional. Specifies the preemptibility of the instance group. The default value for master and worker groups is NON_PREEMPTIBLE. This default cannot be changed. The default value for secondary instances is PREEMPTIBLE.
- Accelerators
[]Accelerator
Config - Optional. The Compute Engine accelerator configuration for these instances.
- Disk
Config DiskConfig - Optional. Disk option config settings.
- Image
Uri string - Optional. The Compute Engine image resource used for cluster instances.The URI can represent an image or image family.Image examples: https://www.googleapis.com/compute/beta/projects/[project_id]/global/images/[image-id] projects/[project_id]/global/images/[image-id] image-idImage family examples. Dataproc will use the most recent image from the family: https://www.googleapis.com/compute/beta/projects/[project_id]/global/images/family/[custom-image-family-name] projects/[project_id]/global/images/family/[custom-image-family-name]If the URI is unspecified, it will be inferred from SoftwareConfig.image_version or the system default.
- Machine
Type stringUri - Optional. The Compute Engine machine type used for cluster instances.A full URL, partial URI, or short name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/us-east1-a/machineTypes/n1-standard-2 projects/[project_id]/zones/us-east1-a/machineTypes/n1-standard-2 n1-standard-2Auto Zone Exception: If you are using the Dataproc Auto Zone Placement (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/auto-zone#using_auto_zone_placement) feature, you must use the short name of the machine type resource, for example, n1-standard-2.
- Min
Cpu stringPlatform - Specifies the minimum cpu platform for the Instance Group. See Dataproc -> Minimum CPU Platform (https://cloud.google.com/dataproc/docs/concepts/compute/dataproc-min-cpu).
- Num
Instances int - Optional. The number of VM instances in the instance group. For HA cluster master_config groups, must be set to 3. For standard cluster master_config groups, must be set to 1.
- Preemptibility
Instance
Group Config Preemptibility - Optional. Specifies the preemptibility of the instance group.The default value for master and worker groups is NON_PREEMPTIBLE. This default cannot be changed.The default value for secondary instances is PREEMPTIBLE.
- accelerators
List<Accelerator
Config> - Optional. The Compute Engine accelerator configuration for these instances.
- disk
Config DiskConfig - Optional. Disk option config settings.
- image
Uri String - Optional. The Compute Engine image resource used for cluster instances.The URI can represent an image or image family.Image examples: https://www.googleapis.com/compute/beta/projects/[project_id]/global/images/[image-id] projects/[project_id]/global/images/[image-id] image-idImage family examples. Dataproc will use the most recent image from the family: https://www.googleapis.com/compute/beta/projects/[project_id]/global/images/family/[custom-image-family-name] projects/[project_id]/global/images/family/[custom-image-family-name]If the URI is unspecified, it will be inferred from SoftwareConfig.image_version or the system default.
- machine
Type StringUri - Optional. The Compute Engine machine type used for cluster instances.A full URL, partial URI, or short name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/us-east1-a/machineTypes/n1-standard-2 projects/[project_id]/zones/us-east1-a/machineTypes/n1-standard-2 n1-standard-2Auto Zone Exception: If you are using the Dataproc Auto Zone Placement (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/auto-zone#using_auto_zone_placement) feature, you must use the short name of the machine type resource, for example, n1-standard-2.
- min
Cpu StringPlatform - Specifies the minimum cpu platform for the Instance Group. See Dataproc -> Minimum CPU Platform (https://cloud.google.com/dataproc/docs/concepts/compute/dataproc-min-cpu).
- num
Instances Integer - Optional. The number of VM instances in the instance group. For HA cluster master_config groups, must be set to 3. For standard cluster master_config groups, must be set to 1.
- preemptibility
Instance
Group Config Preemptibility - Optional. Specifies the preemptibility of the instance group.The default value for master and worker groups is NON_PREEMPTIBLE. This default cannot be changed.The default value for secondary instances is PREEMPTIBLE.
- accelerators
Accelerator
Config[] - Optional. The Compute Engine accelerator configuration for these instances.
- disk
Config DiskConfig - Optional. Disk option config settings.
- image
Uri string - Optional. The Compute Engine image resource used for cluster instances.The URI can represent an image or image family.Image examples: https://www.googleapis.com/compute/beta/projects/[project_id]/global/images/[image-id] projects/[project_id]/global/images/[image-id] image-idImage family examples. Dataproc will use the most recent image from the family: https://www.googleapis.com/compute/beta/projects/[project_id]/global/images/family/[custom-image-family-name] projects/[project_id]/global/images/family/[custom-image-family-name]If the URI is unspecified, it will be inferred from SoftwareConfig.image_version or the system default.
- machine
Type stringUri - Optional. The Compute Engine machine type used for cluster instances.A full URL, partial URI, or short name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/us-east1-a/machineTypes/n1-standard-2 projects/[project_id]/zones/us-east1-a/machineTypes/n1-standard-2 n1-standard-2Auto Zone Exception: If you are using the Dataproc Auto Zone Placement (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/auto-zone#using_auto_zone_placement) feature, you must use the short name of the machine type resource, for example, n1-standard-2.
- min
Cpu stringPlatform - Specifies the minimum cpu platform for the Instance Group. See Dataproc -> Minimum CPU Platform (https://cloud.google.com/dataproc/docs/concepts/compute/dataproc-min-cpu).
- num
Instances number - Optional. The number of VM instances in the instance group. For HA cluster master_config groups, must be set to 3. For standard cluster master_config groups, must be set to 1.
- preemptibility
Instance
Group Config Preemptibility - Optional. Specifies the preemptibility of the instance group.The default value for master and worker groups is NON_PREEMPTIBLE. This default cannot be changed.The default value for secondary instances is PREEMPTIBLE.
- accelerators
Sequence[Accelerator
Config] - Optional. The Compute Engine accelerator configuration for these instances.
- disk_
config DiskConfig - Optional. Disk option config settings.
- image_
uri str - Optional. The Compute Engine image resource used for cluster instances.The URI can represent an image or image family.Image examples: https://www.googleapis.com/compute/beta/projects/[project_id]/global/images/[image-id] projects/[project_id]/global/images/[image-id] image-idImage family examples. Dataproc will use the most recent image from the family: https://www.googleapis.com/compute/beta/projects/[project_id]/global/images/family/[custom-image-family-name] projects/[project_id]/global/images/family/[custom-image-family-name]If the URI is unspecified, it will be inferred from SoftwareConfig.image_version or the system default.
- machine_
type_ struri - Optional. The Compute Engine machine type used for cluster instances.A full URL, partial URI, or short name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/us-east1-a/machineTypes/n1-standard-2 projects/[project_id]/zones/us-east1-a/machineTypes/n1-standard-2 n1-standard-2Auto Zone Exception: If you are using the Dataproc Auto Zone Placement (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/auto-zone#using_auto_zone_placement) feature, you must use the short name of the machine type resource, for example, n1-standard-2.
- min_
cpu_ strplatform - Specifies the minimum cpu platform for the Instance Group. See Dataproc -> Minimum CPU Platform (https://cloud.google.com/dataproc/docs/concepts/compute/dataproc-min-cpu).
- num_
instances int - Optional. The number of VM instances in the instance group. For HA cluster master_config groups, must be set to 3. For standard cluster master_config groups, must be set to 1.
- preemptibility
Instance
Group Config Preemptibility - Optional. Specifies the preemptibility of the instance group.The default value for master and worker groups is NON_PREEMPTIBLE. This default cannot be changed.The default value for secondary instances is PREEMPTIBLE.
- accelerators List<Property Map>
- Optional. The Compute Engine accelerator configuration for these instances.
- disk
Config Property Map - Optional. Disk option config settings.
- image
Uri String - Optional. The Compute Engine image resource used for cluster instances.The URI can represent an image or image family.Image examples: https://www.googleapis.com/compute/beta/projects/[project_id]/global/images/[image-id] projects/[project_id]/global/images/[image-id] image-idImage family examples. Dataproc will use the most recent image from the family: https://www.googleapis.com/compute/beta/projects/[project_id]/global/images/family/[custom-image-family-name] projects/[project_id]/global/images/family/[custom-image-family-name]If the URI is unspecified, it will be inferred from SoftwareConfig.image_version or the system default.
- machine
Type StringUri - Optional. The Compute Engine machine type used for cluster instances.A full URL, partial URI, or short name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/us-east1-a/machineTypes/n1-standard-2 projects/[project_id]/zones/us-east1-a/machineTypes/n1-standard-2 n1-standard-2Auto Zone Exception: If you are using the Dataproc Auto Zone Placement (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/auto-zone#using_auto_zone_placement) feature, you must use the short name of the machine type resource, for example, n1-standard-2.
- min
Cpu StringPlatform - Specifies the minimum cpu platform for the Instance Group. See Dataproc -> Minimum CPU Platform (https://cloud.google.com/dataproc/docs/concepts/compute/dataproc-min-cpu).
- num
Instances Number - Optional. The number of VM instances in the instance group. For HA cluster master_config groups, must be set to 3. For standard cluster master_config groups, must be set to 1.
- preemptibility "PREEMPTIBILITY_UNSPECIFIED" | "NON_PREEMPTIBLE" | "PREEMPTIBLE"
- Optional. Specifies the preemptibility of the instance group.The default value for master and worker groups is NON_PREEMPTIBLE. This default cannot be changed.The default value for secondary instances is PREEMPTIBLE.
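These instance group fields are not set directly on the WorkflowTemplate; they are nested under placement.managedCluster.config (masterConfig, workerConfig, secondaryWorkerConfig). The following TypeScript sketch shows where they fit. It is illustrative only: the template id, region, cluster name, machine types, and sizes are placeholder values.

import * as google_native from "@pulumi/google-native";

// Illustrative only: a workflow template whose managed cluster declares its
// master and worker instance groups. All names and sizes are placeholders.
const template = new google_native.dataproc.v1beta2.WorkflowTemplate("example-template", {
    id: "example-template",
    location: "us-central1",
    jobs: [{
        stepId: "show-databases",
        sparkSqlJob: { queryList: { queries: ["SHOW DATABASES"] } },
    }],
    placement: {
        managedCluster: {
            clusterName: "ephemeral-cluster",
            config: {
                masterConfig: {
                    numInstances: 1,                 // 1 for a standard (non-HA) master group
                    machineTypeUri: "n1-standard-2", // short name, as required with auto zone placement
                    diskConfig: { bootDiskSizeGb: 100 },
                },
                workerConfig: {
                    numInstances: 2,
                    machineTypeUri: "n1-standard-4",
                },
            },
        },
    },
});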
InstanceGroupConfigPreemptibility, InstanceGroupConfigPreemptibilityArgs
- PreemptibilityUnspecified
- PREEMPTIBILITY_UNSPECIFIED - Preemptibility is unspecified, the system will choose the appropriate setting for each instance group.
- NonPreemptible
- NON_PREEMPTIBLE - Instances are non-preemptible.This option is allowed for all instance groups and is the only valid value for Master and Worker instance groups.
- Preemptible
- PREEMPTIBLE - Instances are preemptible.This option is allowed only for secondary worker groups.
- InstanceGroupConfigPreemptibilityPreemptibilityUnspecified
- PREEMPTIBILITY_UNSPECIFIED - Preemptibility is unspecified, the system will choose the appropriate setting for each instance group.
- InstanceGroupConfigPreemptibilityNonPreemptible
- NON_PREEMPTIBLE - Instances are non-preemptible.This option is allowed for all instance groups and is the only valid value for Master and Worker instance groups.
- InstanceGroupConfigPreemptibilityPreemptible
- PREEMPTIBLE - Instances are preemptible.This option is allowed only for secondary worker groups.
- PreemptibilityUnspecified
- PREEMPTIBILITY_UNSPECIFIED - Preemptibility is unspecified, the system will choose the appropriate setting for each instance group.
- NonPreemptible
- NON_PREEMPTIBLE - Instances are non-preemptible.This option is allowed for all instance groups and is the only valid value for Master and Worker instance groups.
- Preemptible
- PREEMPTIBLE - Instances are preemptible.This option is allowed only for secondary worker groups.
- PreemptibilityUnspecified
- PREEMPTIBILITY_UNSPECIFIED - Preemptibility is unspecified, the system will choose the appropriate setting for each instance group.
- NonPreemptible
- NON_PREEMPTIBLE - Instances are non-preemptible.This option is allowed for all instance groups and is the only valid value for Master and Worker instance groups.
- Preemptible
- PREEMPTIBLE - Instances are preemptible.This option is allowed only for secondary worker groups.
- PREEMPTIBILITY_UNSPECIFIED
- PREEMPTIBILITY_UNSPECIFIED - Preemptibility is unspecified, the system will choose the appropriate setting for each instance group.
- NON_PREEMPTIBLE
- NON_PREEMPTIBLE - Instances are non-preemptible.This option is allowed for all instance groups and is the only valid value for Master and Worker instance groups.
- PREEMPTIBLE
- PREEMPTIBLE - Instances are preemptible.This option is allowed only for secondary worker groups.
- "PREEMPTIBILITY_UNSPECIFIED"
- PREEMPTIBILITY_UNSPECIFIED - Preemptibility is unspecified, the system will choose the appropriate setting for each instance group.
- "NON_PREEMPTIBLE"
- NON_PREEMPTIBLE - Instances are non-preemptible.This option is allowed for all instance groups and is the only valid value for Master and Worker instance groups.
- "PREEMPTIBLE"
- PREEMPTIBLE - Instances are preemptible.This option is allowed only for secondary worker groups.
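Only a secondary worker group may use the PREEMPTIBLE value; master and primary worker groups keep the NON_PREEMPTIBLE default. A small illustrative TypeScript fragment that would slot into the placement.managedCluster.config block shown earlier (group sizes are placeholders):

// Illustrative fragment for placement.managedCluster.config:
// only the secondary worker group is marked preemptible.
const instanceGroups = {
    workerConfig: {
        numInstances: 2,                 // primary workers: NON_PREEMPTIBLE by default
    },
    secondaryWorkerConfig: {
        numInstances: 4,
        preemptibility: "PREEMPTIBLE",   // allowed only for secondary worker groups
    },
};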
InstanceGroupConfigResponse, InstanceGroupConfigResponseArgs
- Accelerators
List<Pulumi.
Google Native. Dataproc. V1Beta2. Inputs. Accelerator Config Response> - Optional. The Compute Engine accelerator configuration for these instances.
- Disk
Config Pulumi.Google Native. Dataproc. V1Beta2. Inputs. Disk Config Response - Optional. Disk option config settings.
- Image
Uri string - Optional. The Compute Engine image resource used for cluster instances.The URI can represent an image or image family.Image examples: https://www.googleapis.com/compute/beta/projects/[project_id]/global/images/[image-id] projects/[project_id]/global/images/[image-id] image-idImage family examples. Dataproc will use the most recent image from the family: https://www.googleapis.com/compute/beta/projects/[project_id]/global/images/family/[custom-image-family-name] projects/[project_id]/global/images/family/[custom-image-family-name]If the URI is unspecified, it will be inferred from SoftwareConfig.image_version or the system default.
- Instance
Names List<string> - The list of instance names. Dataproc derives the names from cluster_name, num_instances, and the instance group.
- Instance
References List<Pulumi.Google Native. Dataproc. V1Beta2. Inputs. Instance Reference Response> - List of references to Compute Engine instances.
- Is
Preemptible bool - Specifies that this instance group contains preemptible instances.
- Machine
Type stringUri - Optional. The Compute Engine machine type used for cluster instances.A full URL, partial URI, or short name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/us-east1-a/machineTypes/n1-standard-2 projects/[project_id]/zones/us-east1-a/machineTypes/n1-standard-2 n1-standard-2Auto Zone Exception: If you are using the Dataproc Auto Zone Placement (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/auto-zone#using_auto_zone_placement) feature, you must use the short name of the machine type resource, for example, n1-standard-2.
- Managed
Group Pulumi.Config Google Native. Dataproc. V1Beta2. Inputs. Managed Group Config Response - The config for Compute Engine Instance Group Manager that manages this group. This is only used for preemptible instance groups.
- Min
Cpu stringPlatform - Specifies the minimum cpu platform for the Instance Group. See Dataproc -> Minimum CPU Platform (https://cloud.google.com/dataproc/docs/concepts/compute/dataproc-min-cpu).
- Num
Instances int - Optional. The number of VM instances in the instance group. For HA cluster master_config groups, must be set to 3. For standard cluster master_config groups, must be set to 1.
- Preemptibility string
- Optional. Specifies the preemptibility of the instance group.The default value for master and worker groups is NON_PREEMPTIBLE. This default cannot be changed.The default value for secondary instances is PREEMPTIBLE.
- Accelerators
[]Accelerator
Config Response - Optional. The Compute Engine accelerator configuration for these instances.
- Disk
Config DiskConfig Response - Optional. Disk option config settings.
- Image
Uri string - Optional. The Compute Engine image resource used for cluster instances.The URI can represent an image or image family.Image examples: https://www.googleapis.com/compute/beta/projects/[project_id]/global/images/[image-id] projects/[project_id]/global/images/[image-id] image-idImage family examples. Dataproc will use the most recent image from the family: https://www.googleapis.com/compute/beta/projects/[project_id]/global/images/family/[custom-image-family-name] projects/[project_id]/global/images/family/[custom-image-family-name]If the URI is unspecified, it will be inferred from SoftwareConfig.image_version or the system default.
- Instance
Names []string - The list of instance names. Dataproc derives the names from cluster_name, num_instances, and the instance group.
- Instance
References []InstanceReference Response - List of references to Compute Engine instances.
- Is
Preemptible bool - Specifies that this instance group contains preemptible instances.
- Machine
Type stringUri - Optional. The Compute Engine machine type used for cluster instances.A full URL, partial URI, or short name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/us-east1-a/machineTypes/n1-standard-2 projects/[project_id]/zones/us-east1-a/machineTypes/n1-standard-2 n1-standard-2Auto Zone Exception: If you are using the Dataproc Auto Zone Placement (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/auto-zone#using_auto_zone_placement) feature, you must use the short name of the machine type resource, for example, n1-standard-2.
- Managed
Group ManagedConfig Group Config Response - The config for Compute Engine Instance Group Manager that manages this group. This is only used for preemptible instance groups.
- Min
Cpu stringPlatform - Specifies the minimum cpu platform for the Instance Group. See Dataproc -> Minimum CPU Platform (https://cloud.google.com/dataproc/docs/concepts/compute/dataproc-min-cpu).
- Num
Instances int - Optional. The number of VM instances in the instance group. For HA cluster master_config groups, must be set to 3. For standard cluster master_config groups, must be set to 1.
- Preemptibility string
- Optional. Specifies the preemptibility of the instance group.The default value for master and worker groups is NON_PREEMPTIBLE. This default cannot be changed.The default value for secondary instances is PREEMPTIBLE.
- accelerators
List<Accelerator
Config Response> - Optional. The Compute Engine accelerator configuration for these instances.
- disk
Config DiskConfig Response - Optional. Disk option config settings.
- image
Uri String - Optional. The Compute Engine image resource used for cluster instances.The URI can represent an image or image family.Image examples: https://www.googleapis.com/compute/beta/projects/[project_id]/global/images/[image-id] projects/[project_id]/global/images/[image-id] image-idImage family examples. Dataproc will use the most recent image from the family: https://www.googleapis.com/compute/beta/projects/[project_id]/global/images/family/[custom-image-family-name] projects/[project_id]/global/images/family/[custom-image-family-name]If the URI is unspecified, it will be inferred from SoftwareConfig.image_version or the system default.
- instance
Names List<String> - The list of instance names. Dataproc derives the names from cluster_name, num_instances, and the instance group.
- instance
References List<InstanceReference Response> - List of references to Compute Engine instances.
- is
Preemptible Boolean - Specifies that this instance group contains preemptible instances.
- machine
Type StringUri - Optional. The Compute Engine machine type used for cluster instances.A full URL, partial URI, or short name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/us-east1-a/machineTypes/n1-standard-2 projects/[project_id]/zones/us-east1-a/machineTypes/n1-standard-2 n1-standard-2Auto Zone Exception: If you are using the Dataproc Auto Zone Placement (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/auto-zone#using_auto_zone_placement) feature, you must use the short name of the machine type resource, for example, n1-standard-2.
- managed
Group ManagedConfig Group Config Response - The config for Compute Engine Instance Group Manager that manages this group. This is only used for preemptible instance groups.
- min
Cpu StringPlatform - Specifies the minimum cpu platform for the Instance Group. See Dataproc -> Minimum CPU Platform (https://cloud.google.com/dataproc/docs/concepts/compute/dataproc-min-cpu).
- num
Instances Integer - Optional. The number of VM instances in the instance group. For HA cluster master_config groups, must be set to 3. For standard cluster master_config groups, must be set to 1.
- preemptibility String
- Optional. Specifies the preemptibility of the instance group.The default value for master and worker groups is NON_PREEMPTIBLE. This default cannot be changed.The default value for secondary instances is PREEMPTIBLE.
- accelerators
Accelerator
Config Response[] - Optional. The Compute Engine accelerator configuration for these instances.
- disk
Config DiskConfig Response - Optional. Disk option config settings.
- image
Uri string - Optional. The Compute Engine image resource used for cluster instances.The URI can represent an image or image family.Image examples: https://www.googleapis.com/compute/beta/projects/[project_id]/global/images/[image-id] projects/[project_id]/global/images/[image-id] image-idImage family examples. Dataproc will use the most recent image from the family: https://www.googleapis.com/compute/beta/projects/[project_id]/global/images/family/[custom-image-family-name] projects/[project_id]/global/images/family/[custom-image-family-name]If the URI is unspecified, it will be inferred from SoftwareConfig.image_version or the system default.
- instance
Names string[] - The list of instance names. Dataproc derives the names from cluster_name, num_instances, and the instance group.
- instance
References InstanceReference Response[] - List of references to Compute Engine instances.
- is
Preemptible boolean - Specifies that this instance group contains preemptible instances.
- machine
Type stringUri - Optional. The Compute Engine machine type used for cluster instances.A full URL, partial URI, or short name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/us-east1-a/machineTypes/n1-standard-2 projects/[project_id]/zones/us-east1-a/machineTypes/n1-standard-2 n1-standard-2Auto Zone Exception: If you are using the Dataproc Auto Zone Placement (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/auto-zone#using_auto_zone_placement) feature, you must use the short name of the machine type resource, for example, n1-standard-2.
- managed
Group ManagedConfig Group Config Response - The config for Compute Engine Instance Group Manager that manages this group. This is only used for preemptible instance groups.
- min
Cpu stringPlatform - Specifies the minimum cpu platform for the Instance Group. See Dataproc -> Minimum CPU Platform (https://cloud.google.com/dataproc/docs/concepts/compute/dataproc-min-cpu).
- num
Instances number - Optional. The number of VM instances in the instance group. For HA cluster master_config groups, must be set to 3. For standard cluster master_config groups, must be set to 1.
- preemptibility string
- Optional. Specifies the preemptibility of the instance group.The default value for master and worker groups is NON_PREEMPTIBLE. This default cannot be changed.The default value for secondary instances is PREEMPTIBLE.
- accelerators
Sequence[Accelerator
Config Response] - Optional. The Compute Engine accelerator configuration for these instances.
- disk_
config DiskConfig Response - Optional. Disk option config settings.
- image_
uri str - Optional. The Compute Engine image resource used for cluster instances.The URI can represent an image or image family.Image examples: https://www.googleapis.com/compute/beta/projects/[project_id]/global/images/[image-id] projects/[project_id]/global/images/[image-id] image-idImage family examples. Dataproc will use the most recent image from the family: https://www.googleapis.com/compute/beta/projects/[project_id]/global/images/family/[custom-image-family-name] projects/[project_id]/global/images/family/[custom-image-family-name]If the URI is unspecified, it will be inferred from SoftwareConfig.image_version or the system default.
- instance_
names Sequence[str] - The list of instance names. Dataproc derives the names from cluster_name, num_instances, and the instance group.
- instance_
references Sequence[InstanceReference Response] - List of references to Compute Engine instances.
- is_
preemptible bool - Specifies that this instance group contains preemptible instances.
- machine_
type_ struri - Optional. The Compute Engine machine type used for cluster instances.A full URL, partial URI, or short name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/us-east1-a/machineTypes/n1-standard-2 projects/[project_id]/zones/us-east1-a/machineTypes/n1-standard-2 n1-standard-2Auto Zone Exception: If you are using the Dataproc Auto Zone Placement (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/auto-zone#using_auto_zone_placement) feature, you must use the short name of the machine type resource, for example, n1-standard-2.
- managed_
group_ Managedconfig Group Config Response - The config for Compute Engine Instance Group Manager that manages this group. This is only used for preemptible instance groups.
- min_
cpu_ strplatform - Specifies the minimum cpu platform for the Instance Group. See Dataproc -> Minimum CPU Platform (https://cloud.google.com/dataproc/docs/concepts/compute/dataproc-min-cpu).
- num_
instances int - Optional. The number of VM instances in the instance group. For HA cluster master_config groups, must be set to 3. For standard cluster master_config groups, must be set to 1.
- preemptibility str
- Optional. Specifies the preemptibility of the instance group.The default value for master and worker groups is NON_PREEMPTIBLE. This default cannot be changed.The default value for secondary instances is PREEMPTIBLE.
- accelerators List<Property Map>
- Optional. The Compute Engine accelerator configuration for these instances.
- disk
Config Property Map - Optional. Disk option config settings.
- image
Uri String - Optional. The Compute Engine image resource used for cluster instances.The URI can represent an image or image family.Image examples: https://www.googleapis.com/compute/beta/projects/[project_id]/global/images/[image-id] projects/[project_id]/global/images/[image-id] image-idImage family examples. Dataproc will use the most recent image from the family: https://www.googleapis.com/compute/beta/projects/[project_id]/global/images/family/[custom-image-family-name] projects/[project_id]/global/images/family/[custom-image-family-name]If the URI is unspecified, it will be inferred from SoftwareConfig.image_version or the system default.
- instance
Names List<String> - The list of instance names. Dataproc derives the names from cluster_name, num_instances, and the instance group.
- instance
References List<Property Map> - List of references to Compute Engine instances.
- is
Preemptible Boolean - Specifies that this instance group contains preemptible instances.
- machine
Type StringUri - Optional. The Compute Engine machine type used for cluster instances.A full URL, partial URI, or short name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/us-east1-a/machineTypes/n1-standard-2 projects/[project_id]/zones/us-east1-a/machineTypes/n1-standard-2 n1-standard-2Auto Zone Exception: If you are using the Dataproc Auto Zone Placement (https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/auto-zone#using_auto_zone_placement) feature, you must use the short name of the machine type resource, for example, n1-standard-2.
- managed
Group Property MapConfig - The config for Compute Engine Instance Group Manager that manages this group. This is only used for preemptible instance groups.
- min
Cpu StringPlatform - Specifies the minimum cpu platform for the Instance Group. See Dataproc -> Minimum CPU Platform (https://cloud.google.com/dataproc/docs/concepts/compute/dataproc-min-cpu).
- num
Instances Number - Optional. The number of VM instances in the instance group. For HA cluster master_config groups, must be set to 3. For standard cluster master_config groups, must be set to 1.
- preemptibility String
- Optional. Specifies the preemptibility of the instance group.The default value for master and worker groups is NON_PREEMPTIBLE. This default cannot be changed.The default value for secondary instances is PREEMPTIBLE.
InstanceReferenceResponse, InstanceReferenceResponseArgs
- InstanceId string
- The unique identifier of the Compute Engine instance.
- InstanceName string
- The user-friendly name of the Compute Engine instance.
- PublicKey string
- The public key used for sharing data with this instance.
- InstanceId string
- The unique identifier of the Compute Engine instance.
- InstanceName string
- The user-friendly name of the Compute Engine instance.
- PublicKey string
- The public key used for sharing data with this instance.
- instanceId String
- The unique identifier of the Compute Engine instance.
- instanceName String
- The user-friendly name of the Compute Engine instance.
- publicKey String
- The public key used for sharing data with this instance.
- instanceId string
- The unique identifier of the Compute Engine instance.
- instanceName string
- The user-friendly name of the Compute Engine instance.
- publicKey string
- The public key used for sharing data with this instance.
- instance_id str
- The unique identifier of the Compute Engine instance.
- instance_name str
- The user-friendly name of the Compute Engine instance.
- public_key str
- The public key used for sharing data with this instance.
- instanceId String
- The unique identifier of the Compute Engine instance.
- instanceName String
- The user-friendly name of the Compute Engine instance.
- publicKey String
- The public key used for sharing data with this instance.
JobScheduling, JobSchedulingArgs
- MaxFailuresPerHour int
- Optional. Maximum number of times per hour a driver may be restarted as a result of driver terminating with non-zero code before job is reported failed.A job may be reported as thrashing if driver exits with non-zero code 4 times within 10 minute window.Maximum value is 10.
- MaxFailuresTotal int
- Optional. Maximum number of times in total a driver may be restarted as a result of driver exiting with non-zero code before job is reported failed. Maximum value is 240.
- MaxFailuresPerHour int
- Optional. Maximum number of times per hour a driver may be restarted as a result of driver terminating with non-zero code before job is reported failed.A job may be reported as thrashing if driver exits with non-zero code 4 times within 10 minute window.Maximum value is 10.
- MaxFailuresTotal int
- Optional. Maximum number of times in total a driver may be restarted as a result of driver exiting with non-zero code before job is reported failed. Maximum value is 240.
- maxFailuresPerHour Integer
- Optional. Maximum number of times per hour a driver may be restarted as a result of driver terminating with non-zero code before job is reported failed.A job may be reported as thrashing if driver exits with non-zero code 4 times within 10 minute window.Maximum value is 10.
- maxFailuresTotal Integer
- Optional. Maximum number of times in total a driver may be restarted as a result of driver exiting with non-zero code before job is reported failed. Maximum value is 240.
- maxFailuresPerHour number
- Optional. Maximum number of times per hour a driver may be restarted as a result of driver terminating with non-zero code before job is reported failed.A job may be reported as thrashing if driver exits with non-zero code 4 times within 10 minute window.Maximum value is 10.
- maxFailuresTotal number
- Optional. Maximum number of times in total a driver may be restarted as a result of driver exiting with non-zero code before job is reported failed. Maximum value is 240.
- max_failures_per_hour int
- Optional. Maximum number of times per hour a driver may be restarted as a result of driver terminating with non-zero code before job is reported failed.A job may be reported as thrashing if driver exits with non-zero code 4 times within 10 minute window.Maximum value is 10.
- max_failures_total int
- Optional. Maximum number of times in total a driver may be restarted as a result of driver exiting with non-zero code before job is reported failed. Maximum value is 240.
- maxFailuresPerHour Number
- Optional. Maximum number of times per hour a driver may be restarted as a result of driver terminating with non-zero code before job is reported failed.A job may be reported as thrashing if driver exits with non-zero code 4 times within 10 minute window.Maximum value is 10.
- maxFailuresTotal Number
- Optional. Maximum number of times in total a driver may be restarted as a result of driver exiting with non-zero code before job is reported failed. Maximum value is 240.
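JobScheduling is attached per entry in the template's jobs list, through the scheduling field of an ordered job. An illustrative TypeScript fragment with placeholder step id, query, and retry limits:

// Illustrative ordered-job entry for the template's `jobs` list.
const retriedJob = {
    stepId: "nightly-aggregate",                        // placeholder step id
    hiveJob: { queryList: { queries: ["SELECT 1"] } },  // placeholder query
    scheduling: {
        maxFailuresPerHour: 5,  // up to 5 driver restarts per hour (API maximum is 10)
        maxFailuresTotal: 20,   // up to 20 restarts overall (API maximum is 240)
    },
};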
JobSchedulingResponse, JobSchedulingResponseArgs
- MaxFailuresPerHour int
- Optional. Maximum number of times per hour a driver may be restarted as a result of driver terminating with non-zero code before job is reported failed.A job may be reported as thrashing if driver exits with non-zero code 4 times within 10 minute window.Maximum value is 10.
- MaxFailuresTotal int
- Optional. Maximum number of times in total a driver may be restarted as a result of driver exiting with non-zero code before job is reported failed. Maximum value is 240.
- MaxFailuresPerHour int
- Optional. Maximum number of times per hour a driver may be restarted as a result of driver terminating with non-zero code before job is reported failed.A job may be reported as thrashing if driver exits with non-zero code 4 times within 10 minute window.Maximum value is 10.
- MaxFailuresTotal int
- Optional. Maximum number of times in total a driver may be restarted as a result of driver exiting with non-zero code before job is reported failed. Maximum value is 240.
- maxFailuresPerHour Integer
- Optional. Maximum number of times per hour a driver may be restarted as a result of driver terminating with non-zero code before job is reported failed.A job may be reported as thrashing if driver exits with non-zero code 4 times within 10 minute window.Maximum value is 10.
- maxFailuresTotal Integer
- Optional. Maximum number of times in total a driver may be restarted as a result of driver exiting with non-zero code before job is reported failed. Maximum value is 240.
- maxFailuresPerHour number
- Optional. Maximum number of times per hour a driver may be restarted as a result of driver terminating with non-zero code before job is reported failed.A job may be reported as thrashing if driver exits with non-zero code 4 times within 10 minute window.Maximum value is 10.
- maxFailuresTotal number
- Optional. Maximum number of times in total a driver may be restarted as a result of driver exiting with non-zero code before job is reported failed. Maximum value is 240.
- max_failures_per_hour int
- Optional. Maximum number of times per hour a driver may be restarted as a result of driver terminating with non-zero code before job is reported failed.A job may be reported as thrashing if driver exits with non-zero code 4 times within 10 minute window.Maximum value is 10.
- max_failures_total int
- Optional. Maximum number of times in total a driver may be restarted as a result of driver exiting with non-zero code before job is reported failed. Maximum value is 240.
- maxFailuresPerHour Number
- Optional. Maximum number of times per hour a driver may be restarted as a result of driver terminating with non-zero code before job is reported failed.A job may be reported as thrashing if driver exits with non-zero code 4 times within 10 minute window.Maximum value is 10.
- maxFailuresTotal Number
- Optional. Maximum number of times in total a driver may be restarted as a result of driver exiting with non-zero code before job is reported failed. Maximum value is 240.
KerberosConfig, KerberosConfigArgs
- Cross
Realm stringTrust Admin Server - Optional. The admin server (IP or hostname) for the remote trusted realm in a cross realm trust relationship.
- Cross
Realm stringTrust Kdc - Optional. The KDC (IP or hostname) for the remote trusted realm in a cross realm trust relationship.
- Cross
Realm stringTrust Realm - Optional. The remote realm the Dataproc on-cluster KDC will trust, should the user enable cross realm trust.
- CrossRealmTrustSharedPasswordUri string
- Optional. The Cloud Storage URI of a KMS encrypted file containing the shared password between the on-cluster Kerberos realm and the remote trusted realm, in a cross realm trust relationship.
- Enable
Kerberos bool - Optional. Flag to indicate whether to Kerberize the cluster (default: false). Set this field to true to enable Kerberos on a cluster.
- Kdc
Db stringKey Uri - Optional. The Cloud Storage URI of a KMS encrypted file containing the master key of the KDC database.
- Key
Password stringUri - Optional. The Cloud Storage URI of a KMS encrypted file containing the password to the user provided key. For the self-signed certificate, this password is generated by Dataproc.
- Keystore
Password stringUri - Optional. The Cloud Storage URI of a KMS encrypted file containing the password to the user provided keystore. For the self-signed certificate, this password is generated by Dataproc.
- Keystore
Uri string - Optional. The Cloud Storage URI of the keystore file used for SSL encryption. If not provided, Dataproc will provide a self-signed certificate.
- Kms
Key stringUri - Optional. The uri of the KMS key used to encrypt various sensitive files.
- Realm string
- Optional. The name of the on-cluster Kerberos realm. If not specified, the uppercased domain of hostnames will be the realm.
- Root
Principal stringPassword Uri - Optional. The Cloud Storage URI of a KMS encrypted file containing the root principal password.
- Tgt
Lifetime intHours - Optional. The lifetime of the ticket granting ticket, in hours. If not specified, or user specifies 0, then default value 10 will be used.
- Truststore
Password stringUri - Optional. The Cloud Storage URI of a KMS encrypted file containing the password to the user provided truststore. For the self-signed certificate, this password is generated by Dataproc.
- Truststore
Uri string - Optional. The Cloud Storage URI of the truststore file used for SSL encryption. If not provided, Dataproc will provide a self-signed certificate.
- Cross
Realm stringTrust Admin Server - Optional. The admin server (IP or hostname) for the remote trusted realm in a cross realm trust relationship.
- Cross
Realm stringTrust Kdc - Optional. The KDC (IP or hostname) for the remote trusted realm in a cross realm trust relationship.
- Cross
Realm stringTrust Realm - Optional. The remote realm the Dataproc on-cluster KDC will trust, should the user enable cross realm trust.
- CrossRealmTrustSharedPasswordUri string
- Optional. The Cloud Storage URI of a KMS encrypted file containing the shared password between the on-cluster Kerberos realm and the remote trusted realm, in a cross realm trust relationship.
- Enable
Kerberos bool - Optional. Flag to indicate whether to Kerberize the cluster (default: false). Set this field to true to enable Kerberos on a cluster.
- Kdc
Db stringKey Uri - Optional. The Cloud Storage URI of a KMS encrypted file containing the master key of the KDC database.
- Key
Password stringUri - Optional. The Cloud Storage URI of a KMS encrypted file containing the password to the user provided key. For the self-signed certificate, this password is generated by Dataproc.
- Keystore
Password stringUri - Optional. The Cloud Storage URI of a KMS encrypted file containing the password to the user provided keystore. For the self-signed certificate, this password is generated by Dataproc.
- Keystore
Uri string - Optional. The Cloud Storage URI of the keystore file used for SSL encryption. If not provided, Dataproc will provide a self-signed certificate.
- Kms
Key stringUri - Optional. The uri of the KMS key used to encrypt various sensitive files.
- Realm string
- Optional. The name of the on-cluster Kerberos realm. If not specified, the uppercased domain of hostnames will be the realm.
- Root
Principal stringPassword Uri - Optional. The Cloud Storage URI of a KMS encrypted file containing the root principal password.
- Tgt
Lifetime intHours - Optional. The lifetime of the ticket granting ticket, in hours. If not specified, or user specifies 0, then default value 10 will be used.
- Truststore
Password stringUri - Optional. The Cloud Storage URI of a KMS encrypted file containing the password to the user provided truststore. For the self-signed certificate, this password is generated by Dataproc.
- Truststore
Uri string - Optional. The Cloud Storage URI of the truststore file used for SSL encryption. If not provided, Dataproc will provide a self-signed certificate.
- cross
Realm StringTrust Admin Server - Optional. The admin server (IP or hostname) for the remote trusted realm in a cross realm trust relationship.
- cross
Realm StringTrust Kdc - Optional. The KDC (IP or hostname) for the remote trusted realm in a cross realm trust relationship.
- cross
Realm StringTrust Realm - Optional. The remote realm the Dataproc on-cluster KDC will trust, should the user enable cross realm trust.
- crossRealmTrustSharedPasswordUri String
- Optional. The Cloud Storage URI of a KMS encrypted file containing the shared password between the on-cluster Kerberos realm and the remote trusted realm, in a cross realm trust relationship.
- enable
Kerberos Boolean - Optional. Flag to indicate whether to Kerberize the cluster (default: false). Set this field to true to enable Kerberos on a cluster.
- kdc
Db StringKey Uri - Optional. The Cloud Storage URI of a KMS encrypted file containing the master key of the KDC database.
- key
Password StringUri - Optional. The Cloud Storage URI of a KMS encrypted file containing the password to the user provided key. For the self-signed certificate, this password is generated by Dataproc.
- keystore
Password StringUri - Optional. The Cloud Storage URI of a KMS encrypted file containing the password to the user provided keystore. For the self-signed certificate, this password is generated by Dataproc.
- keystore
Uri String - Optional. The Cloud Storage URI of the keystore file used for SSL encryption. If not provided, Dataproc will provide a self-signed certificate.
- kms
Key StringUri - Optional. The uri of the KMS key used to encrypt various sensitive files.
- realm String
- Optional. The name of the on-cluster Kerberos realm. If not specified, the uppercased domain of hostnames will be the realm.
- root
Principal StringPassword Uri - Optional. The Cloud Storage URI of a KMS encrypted file containing the root principal password.
- tgt
Lifetime IntegerHours - Optional. The lifetime of the ticket granting ticket, in hours. If not specified, or user specifies 0, then default value 10 will be used.
- truststore
Password StringUri - Optional. The Cloud Storage URI of a KMS encrypted file containing the password to the user provided truststore. For the self-signed certificate, this password is generated by Dataproc.
- truststore
Uri String - Optional. The Cloud Storage URI of the truststore file used for SSL encryption. If not provided, Dataproc will provide a self-signed certificate.
- cross
Realm stringTrust Admin Server - Optional. The admin server (IP or hostname) for the remote trusted realm in a cross realm trust relationship.
- cross
Realm stringTrust Kdc - Optional. The KDC (IP or hostname) for the remote trusted realm in a cross realm trust relationship.
- cross
Realm stringTrust Realm - Optional. The remote realm the Dataproc on-cluster KDC will trust, should the user enable cross realm trust.
- crossRealmTrustSharedPasswordUri string
- Optional. The Cloud Storage URI of a KMS encrypted file containing the shared password between the on-cluster Kerberos realm and the remote trusted realm, in a cross realm trust relationship.
- enable
Kerberos boolean - Optional. Flag to indicate whether to Kerberize the cluster (default: false). Set this field to true to enable Kerberos on a cluster.
- kdc
Db stringKey Uri - Optional. The Cloud Storage URI of a KMS encrypted file containing the master key of the KDC database.
- key
Password stringUri - Optional. The Cloud Storage URI of a KMS encrypted file containing the password to the user provided key. For the self-signed certificate, this password is generated by Dataproc.
- keystore
Password stringUri - Optional. The Cloud Storage URI of a KMS encrypted file containing the password to the user provided keystore. For the self-signed certificate, this password is generated by Dataproc.
- keystore
Uri string - Optional. The Cloud Storage URI of the keystore file used for SSL encryption. If not provided, Dataproc will provide a self-signed certificate.
- kms
Key stringUri - Optional. The uri of the KMS key used to encrypt various sensitive files.
- realm string
- Optional. The name of the on-cluster Kerberos realm. If not specified, the uppercased domain of hostnames will be the realm.
- root
Principal stringPassword Uri - Optional. The Cloud Storage URI of a KMS encrypted file containing the root principal password.
- tgt
Lifetime numberHours - Optional. The lifetime of the ticket granting ticket, in hours. If not specified, or user specifies 0, then default value 10 will be used.
- truststore
Password stringUri - Optional. The Cloud Storage URI of a KMS encrypted file containing the password to the user provided truststore. For the self-signed certificate, this password is generated by Dataproc.
- truststore
Uri string - Optional. The Cloud Storage URI of the truststore file used for SSL encryption. If not provided, Dataproc will provide a self-signed certificate.
- cross_
realm_ strtrust_ admin_ server - Optional. The admin server (IP or hostname) for the remote trusted realm in a cross realm trust relationship.
- cross_
realm_ strtrust_ kdc - Optional. The KDC (IP or hostname) for the remote trusted realm in a cross realm trust relationship.
- cross_
realm_ strtrust_ realm - Optional. The remote realm the Dataproc on-cluster KDC will trust, should the user enable cross realm trust.
- cross_realm_trust_shared_password_uri str
- Optional. The Cloud Storage URI of a KMS encrypted file containing the shared password between the on-cluster Kerberos realm and the remote trusted realm, in a cross realm trust relationship.
- enable_
kerberos bool - Optional. Flag to indicate whether to Kerberize the cluster (default: false). Set this field to true to enable Kerberos on a cluster.
- kdc_
db_ strkey_ uri - Optional. The Cloud Storage URI of a KMS encrypted file containing the master key of the KDC database.
- key_
password_ struri - Optional. The Cloud Storage URI of a KMS encrypted file containing the password to the user provided key. For the self-signed certificate, this password is generated by Dataproc.
- keystore_
password_ struri - Optional. The Cloud Storage URI of a KMS encrypted file containing the password to the user provided keystore. For the self-signed certificate, this password is generated by Dataproc.
- keystore_
uri str - Optional. The Cloud Storage URI of the keystore file used for SSL encryption. If not provided, Dataproc will provide a self-signed certificate.
- kms_
key_ struri - Optional. The uri of the KMS key used to encrypt various sensitive files.
- realm str
- Optional. The name of the on-cluster Kerberos realm. If not specified, the uppercased domain of hostnames will be the realm.
- root_
principal_ strpassword_ uri - Optional. The Cloud Storage URI of a KMS encrypted file containing the root principal password.
- tgt_
lifetime_ inthours - Optional. The lifetime of the ticket granting ticket, in hours. If not specified, or user specifies 0, then default value 10 will be used.
- truststore_
password_ struri - Optional. The Cloud Storage URI of a KMS encrypted file containing the password to the user provided truststore. For the self-signed certificate, this password is generated by Dataproc.
- truststore_
uri str - Optional. The Cloud Storage URI of the truststore file used for SSL encryption. If not provided, Dataproc will provide a self-signed certificate.
- cross
Realm StringTrust Admin Server - Optional. The admin server (IP or hostname) for the remote trusted realm in a cross realm trust relationship.
- cross
Realm StringTrust Kdc - Optional. The KDC (IP or hostname) for the remote trusted realm in a cross realm trust relationship.
- cross
Realm StringTrust Realm - Optional. The remote realm the Dataproc on-cluster KDC will trust, should the user enable cross realm trust.
- crossRealmTrustSharedPasswordUri String
- Optional. The Cloud Storage URI of a KMS encrypted file containing the shared password between the on-cluster Kerberos realm and the remote trusted realm, in a cross realm trust relationship.
- enable
Kerberos Boolean - Optional. Flag to indicate whether to Kerberize the cluster (default: false). Set this field to true to enable Kerberos on a cluster.
- kdc
Db StringKey Uri - Optional. The Cloud Storage URI of a KMS encrypted file containing the master key of the KDC database.
- key
Password StringUri - Optional. The Cloud Storage URI of a KMS encrypted file containing the password to the user provided key. For the self-signed certificate, this password is generated by Dataproc.
- keystore
Password StringUri - Optional. The Cloud Storage URI of a KMS encrypted file containing the password to the user provided keystore. For the self-signed certificate, this password is generated by Dataproc.
- keystore
Uri String - Optional. The Cloud Storage URI of the keystore file used for SSL encryption. If not provided, Dataproc will provide a self-signed certificate.
- kms
Key StringUri - Optional. The uri of the KMS key used to encrypt various sensitive files.
- realm String
- Optional. The name of the on-cluster Kerberos realm. If not specified, the uppercased domain of hostnames will be the realm.
- root
Principal StringPassword Uri - Optional. The Cloud Storage URI of a KMS encrypted file containing the root principal password.
- tgt
Lifetime NumberHours - Optional. The lifetime of the ticket granting ticket, in hours. If not specified, or user specifies 0, then default value 10 will be used.
- truststore
Password StringUri - Optional. The Cloud Storage URI of a KMS encrypted file containing the password to the user provided truststore. For the self-signed certificate, this password is generated by Dataproc.
- truststore
Uri String - Optional. The Cloud Storage URI of the truststore file used for SSL encryption. If not provided, Dataproc will provide a self-signed certificate.
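KerberosConfig sits under the managed cluster's securityConfig. A hedged TypeScript sketch of enabling it; the KMS key and the KMS-encrypted root principal password object are placeholders that would have to exist already:

// Illustrative fragment for placement.managedCluster.config.
const secureConfig = {
    securityConfig: {
        kerberosConfig: {
            enableKerberos: true,
            kmsKeyUri: "projects/my-project/locations/global/keyRings/my-ring/cryptoKeys/my-key", // placeholder KMS key
            rootPrincipalPasswordUri: "gs://my-bucket/kerberos/root-password.encrypted",          // placeholder, KMS-encrypted object
            tgtLifetimeHours: 10, // default used when unset or 0
        },
    },
};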
KerberosConfigResponse, KerberosConfigResponseArgs
- Cross
Realm stringTrust Admin Server - Optional. The admin server (IP or hostname) for the remote trusted realm in a cross realm trust relationship.
- Cross
Realm stringTrust Kdc - Optional. The KDC (IP or hostname) for the remote trusted realm in a cross realm trust relationship.
- Cross
Realm stringTrust Realm - Optional. The remote realm the Dataproc on-cluster KDC will trust, should the user enable cross realm trust.
- CrossRealmTrustSharedPasswordUri string
- Optional. The Cloud Storage URI of a KMS encrypted file containing the shared password between the on-cluster Kerberos realm and the remote trusted realm, in a cross realm trust relationship.
- Enable
Kerberos bool - Optional. Flag to indicate whether to Kerberize the cluster (default: false). Set this field to true to enable Kerberos on a cluster.
- Kdc
Db stringKey Uri - Optional. The Cloud Storage URI of a KMS encrypted file containing the master key of the KDC database.
- Key
Password stringUri - Optional. The Cloud Storage URI of a KMS encrypted file containing the password to the user provided key. For the self-signed certificate, this password is generated by Dataproc.
- Keystore
Password stringUri - Optional. The Cloud Storage URI of a KMS encrypted file containing the password to the user provided keystore. For the self-signed certificate, this password is generated by Dataproc.
- Keystore
Uri string - Optional. The Cloud Storage URI of the keystore file used for SSL encryption. If not provided, Dataproc will provide a self-signed certificate.
- Kms
Key stringUri - Optional. The uri of the KMS key used to encrypt various sensitive files.
- Realm string
- Optional. The name of the on-cluster Kerberos realm. If not specified, the uppercased domain of hostnames will be the realm.
- Root
Principal stringPassword Uri - Optional. The Cloud Storage URI of a KMS encrypted file containing the root principal password.
- Tgt
Lifetime intHours - Optional. The lifetime of the ticket granting ticket, in hours. If not specified, or user specifies 0, then default value 10 will be used.
- Truststore
Password stringUri - Optional. The Cloud Storage URI of a KMS encrypted file containing the password to the user provided truststore. For the self-signed certificate, this password is generated by Dataproc.
- Truststore
Uri string - Optional. The Cloud Storage URI of the truststore file used for SSL encryption. If not provided, Dataproc will provide a self-signed certificate.
- Cross
Realm stringTrust Admin Server - Optional. The admin server (IP or hostname) for the remote trusted realm in a cross realm trust relationship.
- Cross
Realm stringTrust Kdc - Optional. The KDC (IP or hostname) for the remote trusted realm in a cross realm trust relationship.
- Cross
Realm stringTrust Realm - Optional. The remote realm the Dataproc on-cluster KDC will trust, should the user enable cross realm trust.
- CrossRealmTrustSharedPasswordUri string
- Optional. The Cloud Storage URI of a KMS encrypted file containing the shared password between the on-cluster Kerberos realm and the remote trusted realm, in a cross realm trust relationship.
- Enable
Kerberos bool - Optional. Flag to indicate whether to Kerberize the cluster (default: false). Set this field to true to enable Kerberos on a cluster.
- Kdc
Db stringKey Uri - Optional. The Cloud Storage URI of a KMS encrypted file containing the master key of the KDC database.
- Key
Password stringUri - Optional. The Cloud Storage URI of a KMS encrypted file containing the password to the user provided key. For the self-signed certificate, this password is generated by Dataproc.
- Keystore
Password stringUri - Optional. The Cloud Storage URI of a KMS encrypted file containing the password to the user provided keystore. For the self-signed certificate, this password is generated by Dataproc.
- Keystore
Uri string - Optional. The Cloud Storage URI of the keystore file used for SSL encryption. If not provided, Dataproc will provide a self-signed certificate.
- Kms
Key stringUri - Optional. The uri of the KMS key used to encrypt various sensitive files.
- Realm string
- Optional. The name of the on-cluster Kerberos realm. If not specified, the uppercased domain of hostnames will be the realm.
- Root
Principal stringPassword Uri - Optional. The Cloud Storage URI of a KMS encrypted file containing the root principal password.
- Tgt
Lifetime intHours - Optional. The lifetime of the ticket granting ticket, in hours. If not specified, or user specifies 0, then default value 10 will be used.
- Truststore
Password stringUri - Optional. The Cloud Storage URI of a KMS encrypted file containing the password to the user provided truststore. For the self-signed certificate, this password is generated by Dataproc.
- Truststore
Uri string - Optional. The Cloud Storage URI of the truststore file used for SSL encryption. If not provided, Dataproc will provide a self-signed certificate.
- cross
Realm StringTrust Admin Server - Optional. The admin server (IP or hostname) for the remote trusted realm in a cross realm trust relationship.
- cross
Realm StringTrust Kdc - Optional. The KDC (IP or hostname) for the remote trusted realm in a cross realm trust relationship.
- cross
Realm StringTrust Realm - Optional. The remote realm the Dataproc on-cluster KDC will trust, should the user enable cross realm trust.
- crossRealmTrustSharedPasswordUri String
- Optional. The Cloud Storage URI of a KMS encrypted file containing the shared password between the on-cluster Kerberos realm and the remote trusted realm, in a cross realm trust relationship.
- enable
Kerberos Boolean - Optional. Flag to indicate whether to Kerberize the cluster (default: false). Set this field to true to enable Kerberos on a cluster.
- kdc
Db StringKey Uri - Optional. The Cloud Storage URI of a KMS encrypted file containing the master key of the KDC database.
- key
Password StringUri - Optional. The Cloud Storage URI of a KMS encrypted file containing the password to the user provided key. For the self-signed certificate, this password is generated by Dataproc.
- keystore
Password StringUri - Optional. The Cloud Storage URI of a KMS encrypted file containing the password to the user provided keystore. For the self-signed certificate, this password is generated by Dataproc.
- keystore
Uri String - Optional. The Cloud Storage URI of the keystore file used for SSL encryption. If not provided, Dataproc will provide a self-signed certificate.
- kms
Key StringUri - Optional. The uri of the KMS key used to encrypt various sensitive files.
- realm String
- Optional. The name of the on-cluster Kerberos realm. If not specified, the uppercased domain of hostnames will be the realm.
- root
Principal StringPassword Uri - Optional. The Cloud Storage URI of a KMS encrypted file containing the root principal password.
- tgt
Lifetime IntegerHours - Optional. The lifetime of the ticket granting ticket, in hours. If not specified, or user specifies 0, then default value 10 will be used.
- truststore
Password StringUri - Optional. The Cloud Storage URI of a KMS encrypted file containing the password to the user provided truststore. For the self-signed certificate, this password is generated by Dataproc.
- truststore
Uri String - Optional. The Cloud Storage URI of the truststore file used for SSL encryption. If not provided, Dataproc will provide a self-signed certificate.
- crossRealmTrustAdminServer string - Optional. The admin server (IP or hostname) for the remote trusted realm in a cross realm trust relationship.
- crossRealmTrustKdc string - Optional. The KDC (IP or hostname) for the remote trusted realm in a cross realm trust relationship.
- crossRealmTrustRealm string - Optional. The remote realm the Dataproc on-cluster KDC will trust, should the user enable cross realm trust.
- crossRealmTrustSharedPasswordUri string - Optional. The Cloud Storage URI of a KMS encrypted file containing the shared password between the on-cluster Kerberos realm and the remote trusted realm, in a cross realm trust relationship.
- enableKerberos boolean - Optional. Flag to indicate whether to Kerberize the cluster (default: false). Set this field to true to enable Kerberos on a cluster.
- kdcDbKeyUri string - Optional. The Cloud Storage URI of a KMS encrypted file containing the master key of the KDC database.
- keyPasswordUri string - Optional. The Cloud Storage URI of a KMS encrypted file containing the password to the user provided key. For the self-signed certificate, this password is generated by Dataproc.
- keystorePasswordUri string - Optional. The Cloud Storage URI of a KMS encrypted file containing the password to the user provided keystore. For the self-signed certificate, this password is generated by Dataproc.
- keystoreUri string - Optional. The Cloud Storage URI of the keystore file used for SSL encryption. If not provided, Dataproc will provide a self-signed certificate.
- kmsKeyUri string - Optional. The uri of the KMS key used to encrypt various sensitive files.
- realm string - Optional. The name of the on-cluster Kerberos realm. If not specified, the uppercased domain of hostnames will be the realm.
- rootPrincipalPasswordUri string - Optional. The Cloud Storage URI of a KMS encrypted file containing the root principal password.
- tgtLifetimeHours number - Optional. The lifetime of the ticket granting ticket, in hours. If not specified, or user specifies 0, then default value 10 will be used.
- truststorePasswordUri string - Optional. The Cloud Storage URI of a KMS encrypted file containing the password to the user provided truststore. For the self-signed certificate, this password is generated by Dataproc.
- truststoreUri string - Optional. The Cloud Storage URI of the truststore file used for SSL encryption. If not provided, Dataproc will provide a self-signed certificate.
- cross_realm_trust_admin_server str - Optional. The admin server (IP or hostname) for the remote trusted realm in a cross realm trust relationship.
- cross_realm_trust_kdc str - Optional. The KDC (IP or hostname) for the remote trusted realm in a cross realm trust relationship.
- cross_realm_trust_realm str - Optional. The remote realm the Dataproc on-cluster KDC will trust, should the user enable cross realm trust.
- cross_realm_trust_shared_password_uri str - Optional. The Cloud Storage URI of a KMS encrypted file containing the shared password between the on-cluster Kerberos realm and the remote trusted realm, in a cross realm trust relationship.
- enable_kerberos bool - Optional. Flag to indicate whether to Kerberize the cluster (default: false). Set this field to true to enable Kerberos on a cluster.
- kdc_db_key_uri str - Optional. The Cloud Storage URI of a KMS encrypted file containing the master key of the KDC database.
- key_password_uri str - Optional. The Cloud Storage URI of a KMS encrypted file containing the password to the user provided key. For the self-signed certificate, this password is generated by Dataproc.
- keystore_password_uri str - Optional. The Cloud Storage URI of a KMS encrypted file containing the password to the user provided keystore. For the self-signed certificate, this password is generated by Dataproc.
- keystore_uri str - Optional. The Cloud Storage URI of the keystore file used for SSL encryption. If not provided, Dataproc will provide a self-signed certificate.
- kms_key_uri str - Optional. The uri of the KMS key used to encrypt various sensitive files.
- realm str - Optional. The name of the on-cluster Kerberos realm. If not specified, the uppercased domain of hostnames will be the realm.
- root_principal_password_uri str - Optional. The Cloud Storage URI of a KMS encrypted file containing the root principal password.
- tgt_lifetime_hours int - Optional. The lifetime of the ticket granting ticket, in hours. If not specified, or user specifies 0, then default value 10 will be used.
- truststore_password_uri str - Optional. The Cloud Storage URI of a KMS encrypted file containing the password to the user provided truststore. For the self-signed certificate, this password is generated by Dataproc.
- truststore_uri str - Optional. The Cloud Storage URI of the truststore file used for SSL encryption. If not provided, Dataproc will provide a self-signed certificate.
- crossRealmTrustAdminServer String - Optional. The admin server (IP or hostname) for the remote trusted realm in a cross realm trust relationship.
- crossRealmTrustKdc String - Optional. The KDC (IP or hostname) for the remote trusted realm in a cross realm trust relationship.
- crossRealmTrustRealm String - Optional. The remote realm the Dataproc on-cluster KDC will trust, should the user enable cross realm trust.
- crossRealmTrustSharedPasswordUri String - Optional. The Cloud Storage URI of a KMS encrypted file containing the shared password between the on-cluster Kerberos realm and the remote trusted realm, in a cross realm trust relationship.
- enableKerberos Boolean - Optional. Flag to indicate whether to Kerberize the cluster (default: false). Set this field to true to enable Kerberos on a cluster.
- kdcDbKeyUri String - Optional. The Cloud Storage URI of a KMS encrypted file containing the master key of the KDC database.
- keyPasswordUri String - Optional. The Cloud Storage URI of a KMS encrypted file containing the password to the user provided key. For the self-signed certificate, this password is generated by Dataproc.
- keystorePasswordUri String - Optional. The Cloud Storage URI of a KMS encrypted file containing the password to the user provided keystore. For the self-signed certificate, this password is generated by Dataproc.
- keystoreUri String - Optional. The Cloud Storage URI of the keystore file used for SSL encryption. If not provided, Dataproc will provide a self-signed certificate.
- kmsKeyUri String - Optional. The uri of the KMS key used to encrypt various sensitive files.
- realm String - Optional. The name of the on-cluster Kerberos realm. If not specified, the uppercased domain of hostnames will be the realm.
- rootPrincipalPasswordUri String - Optional. The Cloud Storage URI of a KMS encrypted file containing the root principal password.
- tgtLifetimeHours Number - Optional. The lifetime of the ticket granting ticket, in hours. If not specified, or user specifies 0, then default value 10 will be used.
- truststorePasswordUri String - Optional. The Cloud Storage URI of a KMS encrypted file containing the password to the user provided truststore. For the self-signed certificate, this password is generated by Dataproc.
- truststoreUri String - Optional. The Cloud Storage URI of the truststore file used for SSL encryption. If not provided, Dataproc will provide a self-signed certificate.
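For orientation, a minimal TypeScript sketch of how these fields might be supplied through a template's managed cluster follows. It assumes the kerberosConfig sits under the cluster config's securityConfig, as in the underlying Dataproc v1beta2 API; the project, bucket, and KMS key values are placeholders, not values from this page.
import * as google_native from "@pulumi/google-native";

// Sketch only: Kerberize the workflow template's managed cluster.
// All names, buckets, and the KMS key below are hypothetical.
const template = new google_native.dataproc.v1beta2.WorkflowTemplate("kerberized-template", {
    id: "kerberized-template",
    location: "us-central1",
    jobs: [{ stepId: "noop", sparkSqlJob: { queryList: { queries: ["SELECT 1"] } } }],
    placement: {
        managedCluster: {
            clusterName: "krb-cluster",
            config: {
                securityConfig: {
                    kerberosConfig: {
                        enableKerberos: true,
                        rootPrincipalPasswordUri: "gs://my-bucket/root-password.encrypted",
                        kmsKeyUri: "projects/my-project/locations/global/keyRings/my-ring/cryptoKeys/my-key",
                        tgtLifetimeHours: 10,
                    },
                },
            },
        },
    },
});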
LifecycleConfig, LifecycleConfigArgs
- AutoDeleteTime string - Optional. The time when cluster will be auto-deleted. (see JSON representation of Timestamp (https://developers.google.com/protocol-buffers/docs/proto3#json)).
- AutoDeleteTtl string - Optional. The lifetime duration of cluster. The cluster will be auto-deleted at the end of this period. Minimum value is 10 minutes; maximum value is 14 days (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)).
- IdleDeleteTtl string - Optional. The duration to keep the cluster alive while idling (when no jobs are running). Passing this threshold will cause the cluster to be deleted. Minimum value is 5 minutes; maximum value is 14 days (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)).
- AutoDeleteTime string - Optional. The time when cluster will be auto-deleted. (see JSON representation of Timestamp (https://developers.google.com/protocol-buffers/docs/proto3#json)).
- AutoDeleteTtl string - Optional. The lifetime duration of cluster. The cluster will be auto-deleted at the end of this period. Minimum value is 10 minutes; maximum value is 14 days (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)).
- IdleDeleteTtl string - Optional. The duration to keep the cluster alive while idling (when no jobs are running). Passing this threshold will cause the cluster to be deleted. Minimum value is 5 minutes; maximum value is 14 days (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)).
- autoDeleteTime String - Optional. The time when cluster will be auto-deleted. (see JSON representation of Timestamp (https://developers.google.com/protocol-buffers/docs/proto3#json)).
- autoDeleteTtl String - Optional. The lifetime duration of cluster. The cluster will be auto-deleted at the end of this period. Minimum value is 10 minutes; maximum value is 14 days (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)).
- idleDeleteTtl String - Optional. The duration to keep the cluster alive while idling (when no jobs are running). Passing this threshold will cause the cluster to be deleted. Minimum value is 5 minutes; maximum value is 14 days (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)).
- autoDeleteTime string - Optional. The time when cluster will be auto-deleted. (see JSON representation of Timestamp (https://developers.google.com/protocol-buffers/docs/proto3#json)).
- autoDeleteTtl string - Optional. The lifetime duration of cluster. The cluster will be auto-deleted at the end of this period. Minimum value is 10 minutes; maximum value is 14 days (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)).
- idleDeleteTtl string - Optional. The duration to keep the cluster alive while idling (when no jobs are running). Passing this threshold will cause the cluster to be deleted. Minimum value is 5 minutes; maximum value is 14 days (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)).
- auto_delete_time str - Optional. The time when cluster will be auto-deleted. (see JSON representation of Timestamp (https://developers.google.com/protocol-buffers/docs/proto3#json)).
- auto_delete_ttl str - Optional. The lifetime duration of cluster. The cluster will be auto-deleted at the end of this period. Minimum value is 10 minutes; maximum value is 14 days (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)).
- idle_delete_ttl str - Optional. The duration to keep the cluster alive while idling (when no jobs are running). Passing this threshold will cause the cluster to be deleted. Minimum value is 5 minutes; maximum value is 14 days (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)).
- autoDeleteTime String - Optional. The time when cluster will be auto-deleted. (see JSON representation of Timestamp (https://developers.google.com/protocol-buffers/docs/proto3#json)).
- autoDeleteTtl String - Optional. The lifetime duration of cluster. The cluster will be auto-deleted at the end of this period. Minimum value is 10 minutes; maximum value is 14 days (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)).
- idleDeleteTtl String - Optional. The duration to keep the cluster alive while idling (when no jobs are running). Passing this threshold will cause the cluster to be deleted. Minimum value is 5 minutes; maximum value is 14 days (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)).
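As a concrete illustration, the TTL fields take proto3 Duration strings. The fragment below is a sketch of a lifecycleConfig that could be placed under placement.managedCluster.config; the values are arbitrary.
// Sketch: delete the managed cluster after 2 idle hours, or 24 hours after creation at the latest.
const lifecycleConfig = {
    idleDeleteTtl: "7200s",   // 2 hours with no running jobs
    autoDeleteTtl: "86400s",  // hard upper bound of 1 day
};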
LifecycleConfigResponse, LifecycleConfigResponseArgs
- AutoDeleteTime string - Optional. The time when cluster will be auto-deleted. (see JSON representation of Timestamp (https://developers.google.com/protocol-buffers/docs/proto3#json)).
- AutoDeleteTtl string - Optional. The lifetime duration of cluster. The cluster will be auto-deleted at the end of this period. Minimum value is 10 minutes; maximum value is 14 days (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)).
- IdleDeleteTtl string - Optional. The duration to keep the cluster alive while idling (when no jobs are running). Passing this threshold will cause the cluster to be deleted. Minimum value is 5 minutes; maximum value is 14 days (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)).
- IdleStartTime string - The time when cluster became idle (most recent job finished) and became eligible for deletion due to idleness (see JSON representation of Timestamp (https://developers.google.com/protocol-buffers/docs/proto3#json)).
- AutoDeleteTime string - Optional. The time when cluster will be auto-deleted. (see JSON representation of Timestamp (https://developers.google.com/protocol-buffers/docs/proto3#json)).
- AutoDeleteTtl string - Optional. The lifetime duration of cluster. The cluster will be auto-deleted at the end of this period. Minimum value is 10 minutes; maximum value is 14 days (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)).
- IdleDeleteTtl string - Optional. The duration to keep the cluster alive while idling (when no jobs are running). Passing this threshold will cause the cluster to be deleted. Minimum value is 5 minutes; maximum value is 14 days (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)).
- IdleStartTime string - The time when cluster became idle (most recent job finished) and became eligible for deletion due to idleness (see JSON representation of Timestamp (https://developers.google.com/protocol-buffers/docs/proto3#json)).
- autoDeleteTime String - Optional. The time when cluster will be auto-deleted. (see JSON representation of Timestamp (https://developers.google.com/protocol-buffers/docs/proto3#json)).
- autoDeleteTtl String - Optional. The lifetime duration of cluster. The cluster will be auto-deleted at the end of this period. Minimum value is 10 minutes; maximum value is 14 days (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)).
- idleDeleteTtl String - Optional. The duration to keep the cluster alive while idling (when no jobs are running). Passing this threshold will cause the cluster to be deleted. Minimum value is 5 minutes; maximum value is 14 days (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)).
- idleStartTime String - The time when cluster became idle (most recent job finished) and became eligible for deletion due to idleness (see JSON representation of Timestamp (https://developers.google.com/protocol-buffers/docs/proto3#json)).
- autoDeleteTime string - Optional. The time when cluster will be auto-deleted. (see JSON representation of Timestamp (https://developers.google.com/protocol-buffers/docs/proto3#json)).
- autoDeleteTtl string - Optional. The lifetime duration of cluster. The cluster will be auto-deleted at the end of this period. Minimum value is 10 minutes; maximum value is 14 days (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)).
- idleDeleteTtl string - Optional. The duration to keep the cluster alive while idling (when no jobs are running). Passing this threshold will cause the cluster to be deleted. Minimum value is 5 minutes; maximum value is 14 days (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)).
- idleStartTime string - The time when cluster became idle (most recent job finished) and became eligible for deletion due to idleness (see JSON representation of Timestamp (https://developers.google.com/protocol-buffers/docs/proto3#json)).
- auto_delete_time str - Optional. The time when cluster will be auto-deleted. (see JSON representation of Timestamp (https://developers.google.com/protocol-buffers/docs/proto3#json)).
- auto_delete_ttl str - Optional. The lifetime duration of cluster. The cluster will be auto-deleted at the end of this period. Minimum value is 10 minutes; maximum value is 14 days (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)).
- idle_delete_ttl str - Optional. The duration to keep the cluster alive while idling (when no jobs are running). Passing this threshold will cause the cluster to be deleted. Minimum value is 5 minutes; maximum value is 14 days (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)).
- idle_start_time str - The time when cluster became idle (most recent job finished) and became eligible for deletion due to idleness (see JSON representation of Timestamp (https://developers.google.com/protocol-buffers/docs/proto3#json)).
- autoDeleteTime String - Optional. The time when cluster will be auto-deleted. (see JSON representation of Timestamp (https://developers.google.com/protocol-buffers/docs/proto3#json)).
- autoDeleteTtl String - Optional. The lifetime duration of cluster. The cluster will be auto-deleted at the end of this period. Minimum value is 10 minutes; maximum value is 14 days (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)).
- idleDeleteTtl String - Optional. The duration to keep the cluster alive while idling (when no jobs are running). Passing this threshold will cause the cluster to be deleted. Minimum value is 5 minutes; maximum value is 14 days (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)).
- idleStartTime String - The time when cluster became idle (most recent job finished) and became eligible for deletion due to idleness (see JSON representation of Timestamp (https://developers.google.com/protocol-buffers/docs/proto3#json)).
LoggingConfig, LoggingConfigArgs
- DriverLogLevels Dictionary<string, string> - The per-package log levels for the driver. This may include "root" package name to configure rootLogger. Examples: 'com.google = FATAL', 'root = INFO', 'org.apache = DEBUG'
- DriverLogLevels map[string]string - The per-package log levels for the driver. This may include "root" package name to configure rootLogger. Examples: 'com.google = FATAL', 'root = INFO', 'org.apache = DEBUG'
- driverLogLevels Map<String,String> - The per-package log levels for the driver. This may include "root" package name to configure rootLogger. Examples: 'com.google = FATAL', 'root = INFO', 'org.apache = DEBUG'
- driverLogLevels {[key: string]: string} - The per-package log levels for the driver. This may include "root" package name to configure rootLogger. Examples: 'com.google = FATAL', 'root = INFO', 'org.apache = DEBUG'
- driver_log_levels Mapping[str, str] - The per-package log levels for the driver. This may include "root" package name to configure rootLogger. Examples: 'com.google = FATAL', 'root = INFO', 'org.apache = DEBUG'
- driverLogLevels Map<String> - The per-package log levels for the driver. This may include "root" package name to configure rootLogger. Examples: 'com.google = FATAL', 'root = INFO', 'org.apache = DEBUG'
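The driver log levels are a simple package-to-level map. A hedged TypeScript sketch (the application package name is hypothetical):
// Sketch of a driver logging config usable in any of the job types above.
const loggingConfig = {
    driverLogLevels: {
        "root": "INFO",                  // configures rootLogger
        "org.apache.spark": "DEBUG",
        "com.example.pipeline": "FATAL", // hypothetical application package
    },
};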
LoggingConfigResponse, LoggingConfigResponseArgs
- DriverLogLevels Dictionary<string, string> - The per-package log levels for the driver. This may include "root" package name to configure rootLogger. Examples: 'com.google = FATAL', 'root = INFO', 'org.apache = DEBUG'
- DriverLogLevels map[string]string - The per-package log levels for the driver. This may include "root" package name to configure rootLogger. Examples: 'com.google = FATAL', 'root = INFO', 'org.apache = DEBUG'
- driverLogLevels Map<String,String> - The per-package log levels for the driver. This may include "root" package name to configure rootLogger. Examples: 'com.google = FATAL', 'root = INFO', 'org.apache = DEBUG'
- driverLogLevels {[key: string]: string} - The per-package log levels for the driver. This may include "root" package name to configure rootLogger. Examples: 'com.google = FATAL', 'root = INFO', 'org.apache = DEBUG'
- driver_log_levels Mapping[str, str] - The per-package log levels for the driver. This may include "root" package name to configure rootLogger. Examples: 'com.google = FATAL', 'root = INFO', 'org.apache = DEBUG'
- driverLogLevels Map<String> - The per-package log levels for the driver. This may include "root" package name to configure rootLogger. Examples: 'com.google = FATAL', 'root = INFO', 'org.apache = DEBUG'
ManagedCluster, ManagedClusterArgs
- ClusterName string - The cluster name prefix. A unique cluster name will be formed by appending a random suffix.The name must contain only lower-case letters (a-z), numbers (0-9), and hyphens (-). Must begin with a letter. Cannot begin or end with hyphen. Must consist of between 2 and 35 characters.
- Config Pulumi.GoogleNative.Dataproc.V1Beta2.Inputs.ClusterConfig - The cluster configuration.
- Labels Dictionary<string, string> - Optional. The labels to associate with this cluster.Label keys must be between 1 and 63 characters long, and must conform to the following PCRE regular expression: \p{Ll}\p{Lo}{0,62}Label values must be between 1 and 63 characters long, and must conform to the following PCRE regular expression: \p{Ll}\p{Lo}\p{N}_-{0,63}No more than 32 labels can be associated with a given cluster.
- ClusterName string - The cluster name prefix. A unique cluster name will be formed by appending a random suffix.The name must contain only lower-case letters (a-z), numbers (0-9), and hyphens (-). Must begin with a letter. Cannot begin or end with hyphen. Must consist of between 2 and 35 characters.
- Config ClusterConfig - The cluster configuration.
- Labels map[string]string - Optional. The labels to associate with this cluster.Label keys must be between 1 and 63 characters long, and must conform to the following PCRE regular expression: \p{Ll}\p{Lo}{0,62}Label values must be between 1 and 63 characters long, and must conform to the following PCRE regular expression: \p{Ll}\p{Lo}\p{N}_-{0,63}No more than 32 labels can be associated with a given cluster.
- clusterName String - The cluster name prefix. A unique cluster name will be formed by appending a random suffix.The name must contain only lower-case letters (a-z), numbers (0-9), and hyphens (-). Must begin with a letter. Cannot begin or end with hyphen. Must consist of between 2 and 35 characters.
- config ClusterConfig - The cluster configuration.
- labels Map<String,String> - Optional. The labels to associate with this cluster.Label keys must be between 1 and 63 characters long, and must conform to the following PCRE regular expression: \p{Ll}\p{Lo}{0,62}Label values must be between 1 and 63 characters long, and must conform to the following PCRE regular expression: \p{Ll}\p{Lo}\p{N}_-{0,63}No more than 32 labels can be associated with a given cluster.
- clusterName string - The cluster name prefix. A unique cluster name will be formed by appending a random suffix.The name must contain only lower-case letters (a-z), numbers (0-9), and hyphens (-). Must begin with a letter. Cannot begin or end with hyphen. Must consist of between 2 and 35 characters.
- config ClusterConfig - The cluster configuration.
- labels {[key: string]: string} - Optional. The labels to associate with this cluster.Label keys must be between 1 and 63 characters long, and must conform to the following PCRE regular expression: \p{Ll}\p{Lo}{0,62}Label values must be between 1 and 63 characters long, and must conform to the following PCRE regular expression: \p{Ll}\p{Lo}\p{N}_-{0,63}No more than 32 labels can be associated with a given cluster.
- cluster_name str - The cluster name prefix. A unique cluster name will be formed by appending a random suffix.The name must contain only lower-case letters (a-z), numbers (0-9), and hyphens (-). Must begin with a letter. Cannot begin or end with hyphen. Must consist of between 2 and 35 characters.
- config ClusterConfig - The cluster configuration.
- labels Mapping[str, str] - Optional. The labels to associate with this cluster.Label keys must be between 1 and 63 characters long, and must conform to the following PCRE regular expression: \p{Ll}\p{Lo}{0,62}Label values must be between 1 and 63 characters long, and must conform to the following PCRE regular expression: \p{Ll}\p{Lo}\p{N}_-{0,63}No more than 32 labels can be associated with a given cluster.
- clusterName String - The cluster name prefix. A unique cluster name will be formed by appending a random suffix.The name must contain only lower-case letters (a-z), numbers (0-9), and hyphens (-). Must begin with a letter. Cannot begin or end with hyphen. Must consist of between 2 and 35 characters.
- config Property Map - The cluster configuration.
- labels Map<String> - Optional. The labels to associate with this cluster.Label keys must be between 1 and 63 characters long, and must conform to the following PCRE regular expression: \p{Ll}\p{Lo}{0,62}Label values must be between 1 and 63 characters long, and must conform to the following PCRE regular expression: \p{Ll}\p{Lo}\p{N}_-{0,63}No more than 32 labels can be associated with a given cluster.
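Putting the three fields together, a sketch of a managedCluster block for a template's placement follows; the zone, machine types, and labels are placeholders, and the nested config fields are assumed to match the ClusterConfig inputs documented elsewhere on this page.
// Sketch: a small ephemeral cluster definition for placement.managedCluster.
const managedCluster = {
    clusterName: "wf-cluster",             // prefix; Dataproc appends a random suffix
    labels: { env: "dev", team: "data" },  // must satisfy the label regexes above
    config: {
        gceClusterConfig: { zoneUri: "us-central1-a" },
        masterConfig: { numInstances: 1, machineTypeUri: "n1-standard-4" },
        workerConfig: { numInstances: 2, machineTypeUri: "n1-standard-4" },
    },
};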
ManagedClusterResponse, ManagedClusterResponseArgs
- ClusterName string - The cluster name prefix. A unique cluster name will be formed by appending a random suffix.The name must contain only lower-case letters (a-z), numbers (0-9), and hyphens (-). Must begin with a letter. Cannot begin or end with hyphen. Must consist of between 2 and 35 characters.
- Config Pulumi.GoogleNative.Dataproc.V1Beta2.Inputs.ClusterConfigResponse - The cluster configuration.
- Labels Dictionary<string, string> - Optional. The labels to associate with this cluster.Label keys must be between 1 and 63 characters long, and must conform to the following PCRE regular expression: \p{Ll}\p{Lo}{0,62}Label values must be between 1 and 63 characters long, and must conform to the following PCRE regular expression: \p{Ll}\p{Lo}\p{N}_-{0,63}No more than 32 labels can be associated with a given cluster.
- ClusterName string - The cluster name prefix. A unique cluster name will be formed by appending a random suffix.The name must contain only lower-case letters (a-z), numbers (0-9), and hyphens (-). Must begin with a letter. Cannot begin or end with hyphen. Must consist of between 2 and 35 characters.
- Config ClusterConfigResponse - The cluster configuration.
- Labels map[string]string - Optional. The labels to associate with this cluster.Label keys must be between 1 and 63 characters long, and must conform to the following PCRE regular expression: \p{Ll}\p{Lo}{0,62}Label values must be between 1 and 63 characters long, and must conform to the following PCRE regular expression: \p{Ll}\p{Lo}\p{N}_-{0,63}No more than 32 labels can be associated with a given cluster.
- clusterName String - The cluster name prefix. A unique cluster name will be formed by appending a random suffix.The name must contain only lower-case letters (a-z), numbers (0-9), and hyphens (-). Must begin with a letter. Cannot begin or end with hyphen. Must consist of between 2 and 35 characters.
- config ClusterConfigResponse - The cluster configuration.
- labels Map<String,String> - Optional. The labels to associate with this cluster.Label keys must be between 1 and 63 characters long, and must conform to the following PCRE regular expression: \p{Ll}\p{Lo}{0,62}Label values must be between 1 and 63 characters long, and must conform to the following PCRE regular expression: \p{Ll}\p{Lo}\p{N}_-{0,63}No more than 32 labels can be associated with a given cluster.
- clusterName string - The cluster name prefix. A unique cluster name will be formed by appending a random suffix.The name must contain only lower-case letters (a-z), numbers (0-9), and hyphens (-). Must begin with a letter. Cannot begin or end with hyphen. Must consist of between 2 and 35 characters.
- config ClusterConfigResponse - The cluster configuration.
- labels {[key: string]: string} - Optional. The labels to associate with this cluster.Label keys must be between 1 and 63 characters long, and must conform to the following PCRE regular expression: \p{Ll}\p{Lo}{0,62}Label values must be between 1 and 63 characters long, and must conform to the following PCRE regular expression: \p{Ll}\p{Lo}\p{N}_-{0,63}No more than 32 labels can be associated with a given cluster.
- cluster_name str - The cluster name prefix. A unique cluster name will be formed by appending a random suffix.The name must contain only lower-case letters (a-z), numbers (0-9), and hyphens (-). Must begin with a letter. Cannot begin or end with hyphen. Must consist of between 2 and 35 characters.
- config ClusterConfigResponse - The cluster configuration.
- labels Mapping[str, str] - Optional. The labels to associate with this cluster.Label keys must be between 1 and 63 characters long, and must conform to the following PCRE regular expression: \p{Ll}\p{Lo}{0,62}Label values must be between 1 and 63 characters long, and must conform to the following PCRE regular expression: \p{Ll}\p{Lo}\p{N}_-{0,63}No more than 32 labels can be associated with a given cluster.
- clusterName String - The cluster name prefix. A unique cluster name will be formed by appending a random suffix.The name must contain only lower-case letters (a-z), numbers (0-9), and hyphens (-). Must begin with a letter. Cannot begin or end with hyphen. Must consist of between 2 and 35 characters.
- config Property Map - The cluster configuration.
- labels Map<String> - Optional. The labels to associate with this cluster.Label keys must be between 1 and 63 characters long, and must conform to the following PCRE regular expression: \p{Ll}\p{Lo}{0,62}Label values must be between 1 and 63 characters long, and must conform to the following PCRE regular expression: \p{Ll}\p{Lo}\p{N}_-{0,63}No more than 32 labels can be associated with a given cluster.
ManagedGroupConfigResponse, ManagedGroupConfigResponseArgs
- InstanceGroupManagerName string - The name of the Instance Group Manager for this group.
- InstanceTemplateName string - The name of the Instance Template used for the Managed Instance Group.
- InstanceGroupManagerName string - The name of the Instance Group Manager for this group.
- InstanceTemplateName string - The name of the Instance Template used for the Managed Instance Group.
- instanceGroupManagerName String - The name of the Instance Group Manager for this group.
- instanceTemplateName String - The name of the Instance Template used for the Managed Instance Group.
- instanceGroupManagerName string - The name of the Instance Group Manager for this group.
- instanceTemplateName string - The name of the Instance Template used for the Managed Instance Group.
- instance_group_manager_name str - The name of the Instance Group Manager for this group.
- instance_template_name str - The name of the Instance Template used for the Managed Instance Group.
- instanceGroupManagerName String - The name of the Instance Group Manager for this group.
- instanceTemplateName String - The name of the Instance Template used for the Managed Instance Group.
MetastoreConfig, MetastoreConfigArgs
- DataprocMetastoreService string - Resource name of an existing Dataproc Metastore service.Example: projects/[project_id]/locations/[dataproc_region]/services/[service-name]
- DataprocMetastoreService string - Resource name of an existing Dataproc Metastore service.Example: projects/[project_id]/locations/[dataproc_region]/services/[service-name]
- dataprocMetastoreService String - Resource name of an existing Dataproc Metastore service.Example: projects/[project_id]/locations/[dataproc_region]/services/[service-name]
- dataprocMetastoreService string - Resource name of an existing Dataproc Metastore service.Example: projects/[project_id]/locations/[dataproc_region]/services/[service-name]
- dataproc_metastore_service str - Resource name of an existing Dataproc Metastore service.Example: projects/[project_id]/locations/[dataproc_region]/services/[service-name]
- dataprocMetastoreService String - Resource name of an existing Dataproc Metastore service.Example: projects/[project_id]/locations/[dataproc_region]/services/[service-name]
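For example, a sketch pointing the cluster at an existing metastore (all identifiers are placeholders):
// Sketch: attach a pre-existing Dataproc Metastore service to the cluster config.
const metastoreConfig = {
    dataprocMetastoreService:
        "projects/my-project/locations/us-central1/services/my-metastore",
};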
MetastoreConfigResponse, MetastoreConfigResponseArgs
- DataprocMetastoreService string - Resource name of an existing Dataproc Metastore service.Example: projects/[project_id]/locations/[dataproc_region]/services/[service-name]
- DataprocMetastoreService string - Resource name of an existing Dataproc Metastore service.Example: projects/[project_id]/locations/[dataproc_region]/services/[service-name]
- dataprocMetastoreService String - Resource name of an existing Dataproc Metastore service.Example: projects/[project_id]/locations/[dataproc_region]/services/[service-name]
- dataprocMetastoreService string - Resource name of an existing Dataproc Metastore service.Example: projects/[project_id]/locations/[dataproc_region]/services/[service-name]
- dataproc_metastore_service str - Resource name of an existing Dataproc Metastore service.Example: projects/[project_id]/locations/[dataproc_region]/services/[service-name]
- dataprocMetastoreService String - Resource name of an existing Dataproc Metastore service.Example: projects/[project_id]/locations/[dataproc_region]/services/[service-name]
NamespacedGkeDeploymentTarget, NamespacedGkeDeploymentTargetArgs
- ClusterNamespace string - Optional. A namespace within the GKE cluster to deploy into.
- TargetGkeCluster string - Optional. The target GKE cluster to deploy to. Format: 'projects/{project}/locations/{location}/clusters/{cluster_id}'
- ClusterNamespace string - Optional. A namespace within the GKE cluster to deploy into.
- TargetGkeCluster string - Optional. The target GKE cluster to deploy to. Format: 'projects/{project}/locations/{location}/clusters/{cluster_id}'
- clusterNamespace String - Optional. A namespace within the GKE cluster to deploy into.
- targetGkeCluster String - Optional. The target GKE cluster to deploy to. Format: 'projects/{project}/locations/{location}/clusters/{cluster_id}'
- clusterNamespace string - Optional. A namespace within the GKE cluster to deploy into.
- targetGkeCluster string - Optional. The target GKE cluster to deploy to. Format: 'projects/{project}/locations/{location}/clusters/{cluster_id}'
- cluster_namespace str - Optional. A namespace within the GKE cluster to deploy into.
- target_gke_cluster str - Optional. The target GKE cluster to deploy to. Format: 'projects/{project}/locations/{location}/clusters/{cluster_id}'
- clusterNamespace String - Optional. A namespace within the GKE cluster to deploy into.
- targetGkeCluster String - Optional. The target GKE cluster to deploy to. Format: 'projects/{project}/locations/{location}/clusters/{cluster_id}'
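A sketch of a deployment target, assuming it is supplied under the cluster config's gkeClusterConfig as in the Dataproc v1beta2 API; the project, cluster, and namespace names are placeholders:
// Sketch: run the Dataproc cluster inside a specific namespace of an existing GKE cluster.
const namespacedGkeDeploymentTarget = {
    targetGkeCluster: "projects/my-project/locations/us-central1/clusters/my-gke-cluster",
    clusterNamespace: "dataproc-jobs",
};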
NamespacedGkeDeploymentTargetResponse, NamespacedGkeDeploymentTargetResponseArgs
- ClusterNamespace string - Optional. A namespace within the GKE cluster to deploy into.
- TargetGkeCluster string - Optional. The target GKE cluster to deploy to. Format: 'projects/{project}/locations/{location}/clusters/{cluster_id}'
- ClusterNamespace string - Optional. A namespace within the GKE cluster to deploy into.
- TargetGkeCluster string - Optional. The target GKE cluster to deploy to. Format: 'projects/{project}/locations/{location}/clusters/{cluster_id}'
- clusterNamespace String - Optional. A namespace within the GKE cluster to deploy into.
- targetGkeCluster String - Optional. The target GKE cluster to deploy to. Format: 'projects/{project}/locations/{location}/clusters/{cluster_id}'
- clusterNamespace string - Optional. A namespace within the GKE cluster to deploy into.
- targetGkeCluster string - Optional. The target GKE cluster to deploy to. Format: 'projects/{project}/locations/{location}/clusters/{cluster_id}'
- cluster_namespace str - Optional. A namespace within the GKE cluster to deploy into.
- target_gke_cluster str - Optional. The target GKE cluster to deploy to. Format: 'projects/{project}/locations/{location}/clusters/{cluster_id}'
- clusterNamespace String - Optional. A namespace within the GKE cluster to deploy into.
- targetGkeCluster String - Optional. The target GKE cluster to deploy to. Format: 'projects/{project}/locations/{location}/clusters/{cluster_id}'
NodeGroupAffinity, NodeGroupAffinityArgs
- NodeGroupUri string - The URI of a sole-tenant node group resource (https://cloud.google.com/compute/docs/reference/rest/v1/nodeGroups) that the cluster will be created on.A full URL, partial URI, or node group name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/us-central1-a/nodeGroups/node-group-1 projects/[project_id]/zones/us-central1-a/nodeGroups/node-group-1 node-group-1
- NodeGroupUri string - The URI of a sole-tenant node group resource (https://cloud.google.com/compute/docs/reference/rest/v1/nodeGroups) that the cluster will be created on.A full URL, partial URI, or node group name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/us-central1-a/nodeGroups/node-group-1 projects/[project_id]/zones/us-central1-a/nodeGroups/node-group-1 node-group-1
- nodeGroupUri String - The URI of a sole-tenant node group resource (https://cloud.google.com/compute/docs/reference/rest/v1/nodeGroups) that the cluster will be created on.A full URL, partial URI, or node group name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/us-central1-a/nodeGroups/node-group-1 projects/[project_id]/zones/us-central1-a/nodeGroups/node-group-1 node-group-1
- nodeGroupUri string - The URI of a sole-tenant node group resource (https://cloud.google.com/compute/docs/reference/rest/v1/nodeGroups) that the cluster will be created on.A full URL, partial URI, or node group name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/us-central1-a/nodeGroups/node-group-1 projects/[project_id]/zones/us-central1-a/nodeGroups/node-group-1 node-group-1
- node_group_uri str - The URI of a sole-tenant node group resource (https://cloud.google.com/compute/docs/reference/rest/v1/nodeGroups) that the cluster will be created on.A full URL, partial URI, or node group name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/us-central1-a/nodeGroups/node-group-1 projects/[project_id]/zones/us-central1-a/nodeGroups/node-group-1 node-group-1
- nodeGroupUri String - The URI of a sole-tenant node group resource (https://cloud.google.com/compute/docs/reference/rest/v1/nodeGroups) that the cluster will be created on.A full URL, partial URI, or node group name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/us-central1-a/nodeGroups/node-group-1 projects/[project_id]/zones/us-central1-a/nodeGroups/node-group-1 node-group-1
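All three URI forms listed above refer to the same node group. A sketch using the short name, assumed to sit under config.gceClusterConfig.nodeGroupAffinity:
// Sketch: pin cluster VMs to a sole-tenant node group by its short name.
const nodeGroupAffinity = {
    nodeGroupUri: "node-group-1",  // a full or partial URI would also be accepted
};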
NodeGroupAffinityResponse, NodeGroupAffinityResponseArgs
- NodeGroupUri string - The URI of a sole-tenant node group resource (https://cloud.google.com/compute/docs/reference/rest/v1/nodeGroups) that the cluster will be created on.A full URL, partial URI, or node group name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/us-central1-a/nodeGroups/node-group-1 projects/[project_id]/zones/us-central1-a/nodeGroups/node-group-1 node-group-1
- NodeGroupUri string - The URI of a sole-tenant node group resource (https://cloud.google.com/compute/docs/reference/rest/v1/nodeGroups) that the cluster will be created on.A full URL, partial URI, or node group name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/us-central1-a/nodeGroups/node-group-1 projects/[project_id]/zones/us-central1-a/nodeGroups/node-group-1 node-group-1
- nodeGroupUri String - The URI of a sole-tenant node group resource (https://cloud.google.com/compute/docs/reference/rest/v1/nodeGroups) that the cluster will be created on.A full URL, partial URI, or node group name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/us-central1-a/nodeGroups/node-group-1 projects/[project_id]/zones/us-central1-a/nodeGroups/node-group-1 node-group-1
- nodeGroupUri string - The URI of a sole-tenant node group resource (https://cloud.google.com/compute/docs/reference/rest/v1/nodeGroups) that the cluster will be created on.A full URL, partial URI, or node group name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/us-central1-a/nodeGroups/node-group-1 projects/[project_id]/zones/us-central1-a/nodeGroups/node-group-1 node-group-1
- node_group_uri str - The URI of a sole-tenant node group resource (https://cloud.google.com/compute/docs/reference/rest/v1/nodeGroups) that the cluster will be created on.A full URL, partial URI, or node group name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/us-central1-a/nodeGroups/node-group-1 projects/[project_id]/zones/us-central1-a/nodeGroups/node-group-1 node-group-1
- nodeGroupUri String - The URI of a sole-tenant node group resource (https://cloud.google.com/compute/docs/reference/rest/v1/nodeGroups) that the cluster will be created on.A full URL, partial URI, or node group name are valid. Examples: https://www.googleapis.com/compute/v1/projects/[project_id]/zones/us-central1-a/nodeGroups/node-group-1 projects/[project_id]/zones/us-central1-a/nodeGroups/node-group-1 node-group-1
NodeInitializationAction, NodeInitializationActionArgs
- ExecutableFile string - Cloud Storage URI of executable file.
- ExecutionTimeout string - Optional. Amount of time executable has to complete. Default is 10 minutes (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)).Cluster creation fails with an explanatory error message (the name of the executable that caused the error and the exceeded timeout period) if the executable is not completed at end of the timeout period.
- ExecutableFile string - Cloud Storage URI of executable file.
- ExecutionTimeout string - Optional. Amount of time executable has to complete. Default is 10 minutes (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)).Cluster creation fails with an explanatory error message (the name of the executable that caused the error and the exceeded timeout period) if the executable is not completed at end of the timeout period.
- executableFile String - Cloud Storage URI of executable file.
- executionTimeout String - Optional. Amount of time executable has to complete. Default is 10 minutes (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)).Cluster creation fails with an explanatory error message (the name of the executable that caused the error and the exceeded timeout period) if the executable is not completed at end of the timeout period.
- executableFile string - Cloud Storage URI of executable file.
- executionTimeout string - Optional. Amount of time executable has to complete. Default is 10 minutes (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)).Cluster creation fails with an explanatory error message (the name of the executable that caused the error and the exceeded timeout period) if the executable is not completed at end of the timeout period.
- executable_file str - Cloud Storage URI of executable file.
- execution_timeout str - Optional. Amount of time executable has to complete. Default is 10 minutes (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)).Cluster creation fails with an explanatory error message (the name of the executable that caused the error and the exceeded timeout period) if the executable is not completed at end of the timeout period.
- executableFile String - Cloud Storage URI of executable file.
- executionTimeout String - Optional. Amount of time executable has to complete. Default is 10 minutes (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)).Cluster creation fails with an explanatory error message (the name of the executable that caused the error and the exceeded timeout period) if the executable is not completed at end of the timeout period.
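A sketch of a single initialization action (config.initializationActions takes a list of these); the script path is a placeholder:
// Sketch: run a bootstrap script on each node, failing cluster creation after 10 minutes.
const initializationAction = {
    executableFile: "gs://my-bucket/scripts/bootstrap.sh",
    executionTimeout: "600s",  // proto3 Duration string
};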
NodeInitializationActionResponse, NodeInitializationActionResponseArgs
- ExecutableFile string - Cloud Storage URI of executable file.
- ExecutionTimeout string - Optional. Amount of time executable has to complete. Default is 10 minutes (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)).Cluster creation fails with an explanatory error message (the name of the executable that caused the error and the exceeded timeout period) if the executable is not completed at end of the timeout period.
- ExecutableFile string - Cloud Storage URI of executable file.
- ExecutionTimeout string - Optional. Amount of time executable has to complete. Default is 10 minutes (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)).Cluster creation fails with an explanatory error message (the name of the executable that caused the error and the exceeded timeout period) if the executable is not completed at end of the timeout period.
- executableFile String - Cloud Storage URI of executable file.
- executionTimeout String - Optional. Amount of time executable has to complete. Default is 10 minutes (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)).Cluster creation fails with an explanatory error message (the name of the executable that caused the error and the exceeded timeout period) if the executable is not completed at end of the timeout period.
- executableFile string - Cloud Storage URI of executable file.
- executionTimeout string - Optional. Amount of time executable has to complete. Default is 10 minutes (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)).Cluster creation fails with an explanatory error message (the name of the executable that caused the error and the exceeded timeout period) if the executable is not completed at end of the timeout period.
- executable_file str - Cloud Storage URI of executable file.
- execution_timeout str - Optional. Amount of time executable has to complete. Default is 10 minutes (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)).Cluster creation fails with an explanatory error message (the name of the executable that caused the error and the exceeded timeout period) if the executable is not completed at end of the timeout period.
- executableFile String - Cloud Storage URI of executable file.
- executionTimeout String - Optional. Amount of time executable has to complete. Default is 10 minutes (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)).Cluster creation fails with an explanatory error message (the name of the executable that caused the error and the exceeded timeout period) if the executable is not completed at end of the timeout period.
OrderedJob, OrderedJobArgs
- Step
Id string - The step id. The id must be unique among all jobs within the template.The step id is used as prefix for job id, as job goog-dataproc-workflow-step-id label, and in prerequisiteStepIds field from other steps.The id must contain only letters (a-z, A-Z), numbers (0-9), underscores (_), and hyphens (-). Cannot begin or end with underscore or hyphen. Must consist of between 3 and 50 characters.
- Hadoop
Job Pulumi.Google Native. Dataproc. V1Beta2. Inputs. Hadoop Job - Optional. Job is a Hadoop job.
- Hive
Job Pulumi.Google Native. Dataproc. V1Beta2. Inputs. Hive Job - Optional. Job is a Hive job.
- Labels Dictionary<string, string>
- Optional. The labels to associate with this job.Label keys must be between 1 and 63 characters long, and must conform to the following regular expression: \p{Ll}\p{Lo}{0,62}Label values must be between 1 and 63 characters long, and must conform to the following regular expression: \p{Ll}\p{Lo}\p{N}_-{0,63}No more than 32 labels can be associated with a given job.
- Pig
Job Pulumi.Google Native. Dataproc. V1Beta2. Inputs. Pig Job - Optional. Job is a Pig job.
- Prerequisite
Step List<string>Ids - Optional. The optional list of prerequisite job step_ids. If not specified, the job will start at the beginning of workflow.
- Presto
Job Pulumi.Google Native. Dataproc. V1Beta2. Inputs. Presto Job - Optional. Job is a Presto job.
- Pyspark
Job Pulumi.Google Native. Dataproc. V1Beta2. Inputs. Py Spark Job - Optional. Job is a PySpark job.
- Scheduling
Pulumi.
Google Native. Dataproc. V1Beta2. Inputs. Job Scheduling - Optional. Job scheduling configuration.
- Spark
Job Pulumi.Google Native. Dataproc. V1Beta2. Inputs. Spark Job - Optional. Job is a Spark job.
- Spark
RJob Pulumi.Google Native. Dataproc. V1Beta2. Inputs. Spark RJob - Optional. Job is a SparkR job.
- Spark
Sql Pulumi.Job Google Native. Dataproc. V1Beta2. Inputs. Spark Sql Job - Optional. Job is a SparkSql job.
- Step
Id string - The step id. The id must be unique among all jobs within the template.The step id is used as prefix for job id, as job goog-dataproc-workflow-step-id label, and in prerequisiteStepIds field from other steps.The id must contain only letters (a-z, A-Z), numbers (0-9), underscores (_), and hyphens (-). Cannot begin or end with underscore or hyphen. Must consist of between 3 and 50 characters.
- Hadoop
Job HadoopJob - Optional. Job is a Hadoop job.
- Hive
Job HiveJob - Optional. Job is a Hive job.
- Labels map[string]string
- Optional. The labels to associate with this job.Label keys must be between 1 and 63 characters long, and must conform to the following regular expression: \p{Ll}\p{Lo}{0,62}Label values must be between 1 and 63 characters long, and must conform to the following regular expression: \p{Ll}\p{Lo}\p{N}_-{0,63}No more than 32 labels can be associated with a given job.
- Pig
Job PigJob - Optional. Job is a Pig job.
- Prerequisite
Step []stringIds - Optional. The optional list of prerequisite job step_ids. If not specified, the job will start at the beginning of workflow.
- Presto
Job PrestoJob - Optional. Job is a Presto job.
- Pyspark
Job PySpark Job - Optional. Job is a PySpark job.
- Scheduling
Job
Scheduling - Optional. Job scheduling configuration.
- Spark
Job SparkJob - Optional. Job is a Spark job.
- Spark
RJob SparkRJob - Optional. Job is a SparkR job.
- Spark
Sql SparkJob Sql Job - Optional. Job is a SparkSql job.
- step
Id String - The step id. The id must be unique among all jobs within the template.The step id is used as prefix for job id, as job goog-dataproc-workflow-step-id label, and in prerequisiteStepIds field from other steps.The id must contain only letters (a-z, A-Z), numbers (0-9), underscores (_), and hyphens (-). Cannot begin or end with underscore or hyphen. Must consist of between 3 and 50 characters.
- hadoop
Job HadoopJob - Optional. Job is a Hadoop job.
- hive
Job HiveJob - Optional. Job is a Hive job.
- labels Map<String,String>
- Optional. The labels to associate with this job.Label keys must be between 1 and 63 characters long, and must conform to the following regular expression: \p{Ll}\p{Lo}{0,62}Label values must be between 1 and 63 characters long, and must conform to the following regular expression: \p{Ll}\p{Lo}\p{N}_-{0,63}No more than 32 labels can be associated with a given job.
- pig
Job PigJob - Optional. Job is a Pig job.
- prerequisite
Step List<String>Ids - Optional. The optional list of prerequisite job step_ids. If not specified, the job will start at the beginning of workflow.
- presto
Job PrestoJob - Optional. Job is a Presto job.
- pyspark
Job PySpark Job - Optional. Job is a PySpark job.
- scheduling
Job
Scheduling - Optional. Job scheduling configuration.
- spark
Job SparkJob - Optional. Job is a Spark job.
- spark
RJob SparkRJob - Optional. Job is a SparkR job.
- spark
Sql SparkJob Sql Job - Optional. Job is a SparkSql job.
- step
Id string
- The step id. The id must be unique among all jobs within the template. The step id is used as prefix for job id, as job goog-dataproc-workflow-step-id label, and in prerequisiteStepIds field from other steps. The id must contain only letters (a-z, A-Z), numbers (0-9), underscores (_), and hyphens (-). Cannot begin or end with underscore or hyphen. Must consist of between 3 and 50 characters.
- hadoopJob HadoopJob
- Optional. Job is a Hadoop job.
- hiveJob HiveJob
- Optional. Job is a Hive job.
- labels {[key: string]: string}
- Optional. The labels to associate with this job. Label keys must be between 1 and 63 characters long, and must conform to the following regular expression: \p{Ll}\p{Lo}{0,62}. Label values must be between 1 and 63 characters long, and must conform to the following regular expression: \p{Ll}\p{Lo}\p{N}_-{0,63}. No more than 32 labels can be associated with a given job.
- pigJob PigJob
- Optional. Job is a Pig job.
- prerequisiteStepIds string[]
- Optional. The optional list of prerequisite job step_ids. If not specified, the job will start at the beginning of workflow.
- prestoJob PrestoJob
- Optional. Job is a Presto job.
- pysparkJob PySparkJob
- Optional. Job is a PySpark job.
- scheduling JobScheduling
- Optional. Job scheduling configuration.
- sparkJob SparkJob
- Optional. Job is a Spark job.
- sparkRJob SparkRJob
- Optional. Job is a SparkR job.
- sparkSqlJob SparkSqlJob
- Optional. Job is a SparkSql job.
- step_id str
- The step id. The id must be unique among all jobs within the template. The step id is used as prefix for job id, as job goog-dataproc-workflow-step-id label, and in prerequisiteStepIds field from other steps. The id must contain only letters (a-z, A-Z), numbers (0-9), underscores (_), and hyphens (-). Cannot begin or end with underscore or hyphen. Must consist of between 3 and 50 characters.
- hadoop_job HadoopJob
- Optional. Job is a Hadoop job.
- hive_job HiveJob
- Optional. Job is a Hive job.
- labels Mapping[str, str]
- Optional. The labels to associate with this job. Label keys must be between 1 and 63 characters long, and must conform to the following regular expression: \p{Ll}\p{Lo}{0,62}. Label values must be between 1 and 63 characters long, and must conform to the following regular expression: \p{Ll}\p{Lo}\p{N}_-{0,63}. No more than 32 labels can be associated with a given job.
- pig_job PigJob
- Optional. Job is a Pig job.
- prerequisite_step_ids Sequence[str]
- Optional. The optional list of prerequisite job step_ids. If not specified, the job will start at the beginning of workflow.
- presto_job PrestoJob
- Optional. Job is a Presto job.
- pyspark_job PySparkJob
- Optional. Job is a PySpark job.
- scheduling JobScheduling
- Optional. Job scheduling configuration.
- spark_job SparkJob
- Optional. Job is a Spark job.
- spark_r_job SparkRJob
- Optional. Job is a SparkR job.
- spark_sql_job SparkSqlJob
- Optional. Job is a SparkSql job.
- stepId String
- The step id. The id must be unique among all jobs within the template. The step id is used as prefix for job id, as job goog-dataproc-workflow-step-id label, and in prerequisiteStepIds field from other steps. The id must contain only letters (a-z, A-Z), numbers (0-9), underscores (_), and hyphens (-). Cannot begin or end with underscore or hyphen. Must consist of between 3 and 50 characters.
- hadoopJob Property Map
- Optional. Job is a Hadoop job.
- hiveJob Property Map
- Optional. Job is a Hive job.
- labels Map<String>
- Optional. The labels to associate with this job. Label keys must be between 1 and 63 characters long, and must conform to the following regular expression: \p{Ll}\p{Lo}{0,62}. Label values must be between 1 and 63 characters long, and must conform to the following regular expression: \p{Ll}\p{Lo}\p{N}_-{0,63}. No more than 32 labels can be associated with a given job.
- pigJob Property Map
- Optional. Job is a Pig job.
- prerequisiteStepIds List<String>
- Optional. The optional list of prerequisite job step_ids. If not specified, the job will start at the beginning of workflow.
- prestoJob Property Map
- Optional. Job is a Presto job.
- pysparkJob Property Map
- Optional. Job is a PySpark job.
- scheduling Property Map
- Optional. Job scheduling configuration.
- sparkJob Property Map
- Optional. Job is a Spark job.
- sparkRJob Property Map
- Optional. Job is a SparkR job.
- sparkSqlJob Property Map
- Optional. Job is a SparkSql job.
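An OrderedJob is one node of the workflow DAG: exactly one of the job fields (hadoopJob, pigJob, pysparkJob, and so on) is set, and prerequisiteStepIds names the steps it waits for. A minimal Python sketch of two chained steps, assuming the input classes shown below are exported from pulumi_google_native.dataproc.v1beta2 and using placeholder bucket paths:

import pulumi_google_native.dataproc.v1beta2 as dataproc

# First step: a Hadoop job with no prerequisites, so it starts when the workflow does.
ingest = dataproc.OrderedJobArgs(
    step_id="ingest",
    hadoop_job=dataproc.HadoopJobArgs(
        main_jar_file_uri="gs://my-bucket/jars/ingest.jar",  # placeholder jar
        args=["--input", "gs://my-bucket/raw/"],
    ),
    labels={"team": "data-eng"},
)

# Second step: a Pig job that runs only after "ingest" succeeds.
transform = dataproc.OrderedJobArgs(
    step_id="transform",
    prerequisite_step_ids=["ingest"],
    pig_job=dataproc.PigJobArgs(
        query_file_uri="gs://my-bucket/scripts/transform.pig",  # placeholder script
    ),
)

jobs = [ingest, transform]  # value for the WorkflowTemplate jobs property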
OrderedJobResponse, OrderedJobResponseArgs
- HadoopJob Pulumi.GoogleNative.Dataproc.V1Beta2.Inputs.HadoopJobResponse
- Optional. Job is a Hadoop job.
- HiveJob Pulumi.GoogleNative.Dataproc.V1Beta2.Inputs.HiveJobResponse
- Optional. Job is a Hive job.
- Labels Dictionary<string, string>
- Optional. The labels to associate with this job. Label keys must be between 1 and 63 characters long, and must conform to the following regular expression: \p{Ll}\p{Lo}{0,62}. Label values must be between 1 and 63 characters long, and must conform to the following regular expression: \p{Ll}\p{Lo}\p{N}_-{0,63}. No more than 32 labels can be associated with a given job.
- PigJob Pulumi.GoogleNative.Dataproc.V1Beta2.Inputs.PigJobResponse
- Optional. Job is a Pig job.
- PrerequisiteStepIds List<string>
- Optional. The optional list of prerequisite job step_ids. If not specified, the job will start at the beginning of workflow.
- PrestoJob Pulumi.GoogleNative.Dataproc.V1Beta2.Inputs.PrestoJobResponse
- Optional. Job is a Presto job.
- PysparkJob Pulumi.GoogleNative.Dataproc.V1Beta2.Inputs.PySparkJobResponse
- Optional. Job is a PySpark job.
- Scheduling Pulumi.GoogleNative.Dataproc.V1Beta2.Inputs.JobSchedulingResponse
- Optional. Job scheduling configuration.
- SparkJob Pulumi.GoogleNative.Dataproc.V1Beta2.Inputs.SparkJobResponse
- Optional. Job is a Spark job.
- SparkRJob Pulumi.GoogleNative.Dataproc.V1Beta2.Inputs.SparkRJobResponse
- Optional. Job is a SparkR job.
- SparkSqlJob Pulumi.GoogleNative.Dataproc.V1Beta2.Inputs.SparkSqlJobResponse
- Optional. Job is a SparkSql job.
- StepId string
- The step id. The id must be unique among all jobs within the template. The step id is used as prefix for job id, as job goog-dataproc-workflow-step-id label, and in prerequisiteStepIds field from other steps. The id must contain only letters (a-z, A-Z), numbers (0-9), underscores (_), and hyphens (-). Cannot begin or end with underscore or hyphen. Must consist of between 3 and 50 characters.
- HadoopJob HadoopJobResponse
- Optional. Job is a Hadoop job.
- HiveJob HiveJobResponse
- Optional. Job is a Hive job.
- Labels map[string]string
- Optional. The labels to associate with this job. Label keys must be between 1 and 63 characters long, and must conform to the following regular expression: \p{Ll}\p{Lo}{0,62}. Label values must be between 1 and 63 characters long, and must conform to the following regular expression: \p{Ll}\p{Lo}\p{N}_-{0,63}. No more than 32 labels can be associated with a given job.
- PigJob PigJobResponse
- Optional. Job is a Pig job.
- PrerequisiteStepIds []string
- Optional. The optional list of prerequisite job step_ids. If not specified, the job will start at the beginning of workflow.
- PrestoJob PrestoJobResponse
- Optional. Job is a Presto job.
- PysparkJob PySparkJobResponse
- Optional. Job is a PySpark job.
- Scheduling JobSchedulingResponse
- Optional. Job scheduling configuration.
- SparkJob SparkJobResponse
- Optional. Job is a Spark job.
- SparkRJob SparkRJobResponse
- Optional. Job is a SparkR job.
- SparkSqlJob SparkSqlJobResponse
- Optional. Job is a SparkSql job.
- StepId string
- The step id. The id must be unique among all jobs within the template. The step id is used as prefix for job id, as job goog-dataproc-workflow-step-id label, and in prerequisiteStepIds field from other steps. The id must contain only letters (a-z, A-Z), numbers (0-9), underscores (_), and hyphens (-). Cannot begin or end with underscore or hyphen. Must consist of between 3 and 50 characters.
- hadoopJob HadoopJobResponse
- Optional. Job is a Hadoop job.
- hiveJob HiveJobResponse
- Optional. Job is a Hive job.
- labels Map<String,String>
- Optional. The labels to associate with this job. Label keys must be between 1 and 63 characters long, and must conform to the following regular expression: \p{Ll}\p{Lo}{0,62}. Label values must be between 1 and 63 characters long, and must conform to the following regular expression: \p{Ll}\p{Lo}\p{N}_-{0,63}. No more than 32 labels can be associated with a given job.
- pigJob PigJobResponse
- Optional. Job is a Pig job.
- prerequisiteStepIds List<String>
- Optional. The optional list of prerequisite job step_ids. If not specified, the job will start at the beginning of workflow.
- prestoJob PrestoJobResponse
- Optional. Job is a Presto job.
- pysparkJob PySparkJobResponse
- Optional. Job is a PySpark job.
- scheduling JobSchedulingResponse
- Optional. Job scheduling configuration.
- sparkJob SparkJobResponse
- Optional. Job is a Spark job.
- sparkRJob SparkRJobResponse
- Optional. Job is a SparkR job.
- sparkSqlJob SparkSqlJobResponse
- Optional. Job is a SparkSql job.
- stepId String
- The step id. The id must be unique among all jobs within the template. The step id is used as prefix for job id, as job goog-dataproc-workflow-step-id label, and in prerequisiteStepIds field from other steps. The id must contain only letters (a-z, A-Z), numbers (0-9), underscores (_), and hyphens (-). Cannot begin or end with underscore or hyphen. Must consist of between 3 and 50 characters.
- hadoopJob HadoopJobResponse
- Optional. Job is a Hadoop job.
- hiveJob HiveJobResponse
- Optional. Job is a Hive job.
- labels {[key: string]: string}
- Optional. The labels to associate with this job. Label keys must be between 1 and 63 characters long, and must conform to the following regular expression: \p{Ll}\p{Lo}{0,62}. Label values must be between 1 and 63 characters long, and must conform to the following regular expression: \p{Ll}\p{Lo}\p{N}_-{0,63}. No more than 32 labels can be associated with a given job.
- pigJob PigJobResponse
- Optional. Job is a Pig job.
- prerequisiteStepIds string[]
- Optional. The optional list of prerequisite job step_ids. If not specified, the job will start at the beginning of workflow.
- prestoJob PrestoJobResponse
- Optional. Job is a Presto job.
- pysparkJob PySparkJobResponse
- Optional. Job is a PySpark job.
- scheduling JobSchedulingResponse
- Optional. Job scheduling configuration.
- sparkJob SparkJobResponse
- Optional. Job is a Spark job.
- sparkRJob SparkRJobResponse
- Optional. Job is a SparkR job.
- sparkSqlJob SparkSqlJobResponse
- Optional. Job is a SparkSql job.
- stepId string
- The step id. The id must be unique among all jobs within the template. The step id is used as prefix for job id, as job goog-dataproc-workflow-step-id label, and in prerequisiteStepIds field from other steps. The id must contain only letters (a-z, A-Z), numbers (0-9), underscores (_), and hyphens (-). Cannot begin or end with underscore or hyphen. Must consist of between 3 and 50 characters.
- hadoop_job HadoopJobResponse
- Optional. Job is a Hadoop job.
- hive_job HiveJobResponse
- Optional. Job is a Hive job.
- labels Mapping[str, str]
- Optional. The labels to associate with this job. Label keys must be between 1 and 63 characters long, and must conform to the following regular expression: \p{Ll}\p{Lo}{0,62}. Label values must be between 1 and 63 characters long, and must conform to the following regular expression: \p{Ll}\p{Lo}\p{N}_-{0,63}. No more than 32 labels can be associated with a given job.
- pig_job PigJobResponse
- Optional. Job is a Pig job.
- prerequisite_step_ids Sequence[str]
- Optional. The optional list of prerequisite job step_ids. If not specified, the job will start at the beginning of workflow.
- presto_job PrestoJobResponse
- Optional. Job is a Presto job.
- pyspark_job PySparkJobResponse
- Optional. Job is a PySpark job.
- scheduling JobSchedulingResponse
- Optional. Job scheduling configuration.
- spark_job SparkJobResponse
- Optional. Job is a Spark job.
- spark_r_job SparkRJobResponse
- Optional. Job is a SparkR job.
- spark_sql_job SparkSqlJobResponse
- Optional. Job is a SparkSql job.
- step_id str
- The step id. The id must be unique among all jobs within the template. The step id is used as prefix for job id, as job goog-dataproc-workflow-step-id label, and in prerequisiteStepIds field from other steps. The id must contain only letters (a-z, A-Z), numbers (0-9), underscores (_), and hyphens (-). Cannot begin or end with underscore or hyphen. Must consist of between 3 and 50 characters.
- hadoopJob Property Map
- Optional. Job is a Hadoop job.
- hiveJob Property Map
- Optional. Job is a Hive job.
- labels Map<String>
- Optional. The labels to associate with this job. Label keys must be between 1 and 63 characters long, and must conform to the following regular expression: \p{Ll}\p{Lo}{0,62}. Label values must be between 1 and 63 characters long, and must conform to the following regular expression: \p{Ll}\p{Lo}\p{N}_-{0,63}. No more than 32 labels can be associated with a given job.
- pigJob Property Map
- Optional. Job is a Pig job.
- prerequisiteStepIds List<String>
- Optional. The optional list of prerequisite job step_ids. If not specified, the job will start at the beginning of workflow.
- prestoJob Property Map
- Optional. Job is a Presto job.
- pysparkJob Property Map
- Optional. Job is a PySpark job.
- scheduling Property Map
- Optional. Job scheduling configuration.
- sparkJob Property Map
- Optional. Job is a Spark job.
- sparkRJob Property Map
- Optional. Job is a SparkR job.
- sparkSqlJob Property Map
- Optional. Job is a SparkSql job.
- stepId String
- The step id. The id must be unique among all jobs within the template. The step id is used as prefix for job id, as job goog-dataproc-workflow-step-id label, and in prerequisiteStepIds field from other steps. The id must contain only letters (a-z, A-Z), numbers (0-9), underscores (_), and hyphens (-). Cannot begin or end with underscore or hyphen. Must consist of between 3 and 50 characters.
ParameterValidation, ParameterValidationArgs
- Regex Pulumi.GoogleNative.Dataproc.V1Beta2.Inputs.RegexValidation
- Validation based on regular expressions.
- Values Pulumi.GoogleNative.Dataproc.V1Beta2.Inputs.ValueValidation
- Validation based on a list of allowed values.
- Regex RegexValidation
- Validation based on regular expressions.
- Values ValueValidation
- Validation based on a list of allowed values.
- regex RegexValidation
- Validation based on regular expressions.
- values ValueValidation
- Validation based on a list of allowed values.
- regex RegexValidation
- Validation based on regular expressions.
- values ValueValidation
- Validation based on a list of allowed values.
- regex RegexValidation
- Validation based on regular expressions.
- values ValueValidation
- Validation based on a list of allowed values.
- regex Property Map
- Validation based on regular expressions.
- values Property Map
- Validation based on a list of allowed values.
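ParameterValidation is effectively a one-of: a template parameter carries either regex-based or value-list validation, not both. A small sketch of each variant in Python, assuming the classes below are exported from pulumi_google_native.dataproc.v1beta2 and using placeholder patterns and values:

import pulumi_google_native.dataproc.v1beta2 as dataproc

# Regex validation: the substituted value must match one of the RE2 patterns in its entirety.
zone_validation = dataproc.ParameterValidationArgs(
    regex=dataproc.RegexValidationArgs(regexes=[r"us-central1-[abcf]"]),
)

# Value-list validation: the substituted value must be one of the allowed values.
env_validation = dataproc.ParameterValidationArgs(
    values=dataproc.ValueValidationArgs(values=["dev", "staging", "prod"]),
)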
ParameterValidationResponse, ParameterValidationResponseArgs
- Regex Pulumi.GoogleNative.Dataproc.V1Beta2.Inputs.RegexValidationResponse
- Validation based on regular expressions.
- Values Pulumi.GoogleNative.Dataproc.V1Beta2.Inputs.ValueValidationResponse
- Validation based on a list of allowed values.
- Regex RegexValidationResponse
- Validation based on regular expressions.
- Values ValueValidationResponse
- Validation based on a list of allowed values.
- regex RegexValidationResponse
- Validation based on regular expressions.
- values ValueValidationResponse
- Validation based on a list of allowed values.
- regex RegexValidationResponse
- Validation based on regular expressions.
- values ValueValidationResponse
- Validation based on a list of allowed values.
- regex RegexValidationResponse
- Validation based on regular expressions.
- values ValueValidationResponse
- Validation based on a list of allowed values.
- regex Property Map
- Validation based on regular expressions.
- values Property Map
- Validation based on a list of allowed values.
PigJob, PigJobArgs
- ContinueOnFailure bool
- Optional. Whether to continue executing queries if a query fails. The default value is false. Setting to true can be useful when executing independent parallel queries.
- JarFileUris List<string>
- Optional. HCFS URIs of jar files to add to the CLASSPATH of the Pig Client and Hadoop MapReduce (MR) tasks. Can contain Pig UDFs.
- LoggingConfig Pulumi.GoogleNative.Dataproc.V1Beta2.Inputs.LoggingConfig
- Optional. The runtime log config for job execution.
- Properties Dictionary<string, string>
- Optional. A mapping of property names to values, used to configure Pig. Properties that conflict with values set by the Dataproc API may be overwritten. Can include properties set in /etc/hadoop/conf/*-site.xml, /etc/pig/conf/pig.properties, and classes in user code.
- QueryFileUri string
- The HCFS URI of the script that contains the Pig queries.
- QueryList Pulumi.GoogleNative.Dataproc.V1Beta2.Inputs.QueryList
- A list of queries.
- ScriptVariables Dictionary<string, string>
- Optional. Mapping of query variable names to values (equivalent to the Pig command: name=[value]).
- ContinueOnFailure bool
- Optional. Whether to continue executing queries if a query fails. The default value is false. Setting to true can be useful when executing independent parallel queries.
- JarFileUris []string
- Optional. HCFS URIs of jar files to add to the CLASSPATH of the Pig Client and Hadoop MapReduce (MR) tasks. Can contain Pig UDFs.
- LoggingConfig LoggingConfig
- Optional. The runtime log config for job execution.
- Properties map[string]string
- Optional. A mapping of property names to values, used to configure Pig. Properties that conflict with values set by the Dataproc API may be overwritten. Can include properties set in /etc/hadoop/conf/*-site.xml, /etc/pig/conf/pig.properties, and classes in user code.
- QueryFileUri string
- The HCFS URI of the script that contains the Pig queries.
- QueryList QueryList
- A list of queries.
- ScriptVariables map[string]string
- Optional. Mapping of query variable names to values (equivalent to the Pig command: name=[value]).
- continueOnFailure Boolean
- Optional. Whether to continue executing queries if a query fails. The default value is false. Setting to true can be useful when executing independent parallel queries.
- jarFileUris List<String>
- Optional. HCFS URIs of jar files to add to the CLASSPATH of the Pig Client and Hadoop MapReduce (MR) tasks. Can contain Pig UDFs.
- loggingConfig LoggingConfig
- Optional. The runtime log config for job execution.
- properties Map<String,String>
- Optional. A mapping of property names to values, used to configure Pig. Properties that conflict with values set by the Dataproc API may be overwritten. Can include properties set in /etc/hadoop/conf/*-site.xml, /etc/pig/conf/pig.properties, and classes in user code.
- queryFileUri String
- The HCFS URI of the script that contains the Pig queries.
- queryList QueryList
- A list of queries.
- scriptVariables Map<String,String>
- Optional. Mapping of query variable names to values (equivalent to the Pig command: name=[value]).
- continueOnFailure boolean
- Optional. Whether to continue executing queries if a query fails. The default value is false. Setting to true can be useful when executing independent parallel queries.
- jarFileUris string[]
- Optional. HCFS URIs of jar files to add to the CLASSPATH of the Pig Client and Hadoop MapReduce (MR) tasks. Can contain Pig UDFs.
- loggingConfig LoggingConfig
- Optional. The runtime log config for job execution.
- properties {[key: string]: string}
- Optional. A mapping of property names to values, used to configure Pig. Properties that conflict with values set by the Dataproc API may be overwritten. Can include properties set in /etc/hadoop/conf/*-site.xml, /etc/pig/conf/pig.properties, and classes in user code.
- queryFileUri string
- The HCFS URI of the script that contains the Pig queries.
- queryList QueryList
- A list of queries.
- scriptVariables {[key: string]: string}
- Optional. Mapping of query variable names to values (equivalent to the Pig command: name=[value]).
- continue_on_failure bool
- Optional. Whether to continue executing queries if a query fails. The default value is false. Setting to true can be useful when executing independent parallel queries.
- jar_file_uris Sequence[str]
- Optional. HCFS URIs of jar files to add to the CLASSPATH of the Pig Client and Hadoop MapReduce (MR) tasks. Can contain Pig UDFs.
- logging_config LoggingConfig
- Optional. The runtime log config for job execution.
- properties Mapping[str, str]
- Optional. A mapping of property names to values, used to configure Pig. Properties that conflict with values set by the Dataproc API may be overwritten. Can include properties set in /etc/hadoop/conf/*-site.xml, /etc/pig/conf/pig.properties, and classes in user code.
- query_file_uri str
- The HCFS URI of the script that contains the Pig queries.
- query_list QueryList
- A list of queries.
- script_variables Mapping[str, str]
- Optional. Mapping of query variable names to values (equivalent to the Pig command: name=[value]).
- continueOnFailure Boolean
- Optional. Whether to continue executing queries if a query fails. The default value is false. Setting to true can be useful when executing independent parallel queries.
- jarFileUris List<String>
- Optional. HCFS URIs of jar files to add to the CLASSPATH of the Pig Client and Hadoop MapReduce (MR) tasks. Can contain Pig UDFs.
- loggingConfig Property Map
- Optional. The runtime log config for job execution.
- properties Map<String>
- Optional. A mapping of property names to values, used to configure Pig. Properties that conflict with values set by the Dataproc API may be overwritten. Can include properties set in /etc/hadoop/conf/*-site.xml, /etc/pig/conf/pig.properties, and classes in user code.
- queryFileUri String
- The HCFS URI of the script that contains the Pig queries.
- queryList Property Map
- A list of queries.
- scriptVariables Map<String>
- Optional. Mapping of query variable names to values (equivalent to the Pig command: name=[value]).
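A Pig job takes its queries either from queryFileUri or from an inline queryList (one or the other), with scriptVariables providing Pig parameter substitution. A minimal Python sketch, assuming the input classes come from pulumi_google_native.dataproc.v1beta2 and using placeholder paths:

import pulumi_google_native.dataproc.v1beta2 as dataproc

pig_job = dataproc.PigJobArgs(
    # Inline queries; query_file_uri would be the alternative to this.
    query_list=dataproc.QueryListArgs(queries=[
        "raw = LOAD 'gs://my-bucket/raw/$run_date' USING PigStorage(',');",
        "STORE raw INTO 'gs://my-bucket/staged/$run_date';",
    ]),
    script_variables={"run_date": "2021-01-01"},  # substituted for $run_date, like name=[value] on the Pig command line
    jar_file_uris=["gs://my-bucket/udfs/pig-udfs.jar"],  # placeholder UDF jar
    continue_on_failure=False,
)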
PigJobResponse, PigJobResponseArgs
- ContinueOnFailure bool
- Optional. Whether to continue executing queries if a query fails. The default value is false. Setting to true can be useful when executing independent parallel queries.
- JarFileUris List<string>
- Optional. HCFS URIs of jar files to add to the CLASSPATH of the Pig Client and Hadoop MapReduce (MR) tasks. Can contain Pig UDFs.
- LoggingConfig Pulumi.GoogleNative.Dataproc.V1Beta2.Inputs.LoggingConfigResponse
- Optional. The runtime log config for job execution.
- Properties Dictionary<string, string>
- Optional. A mapping of property names to values, used to configure Pig. Properties that conflict with values set by the Dataproc API may be overwritten. Can include properties set in /etc/hadoop/conf/*-site.xml, /etc/pig/conf/pig.properties, and classes in user code.
- QueryFileUri string
- The HCFS URI of the script that contains the Pig queries.
- QueryList Pulumi.GoogleNative.Dataproc.V1Beta2.Inputs.QueryListResponse
- A list of queries.
- ScriptVariables Dictionary<string, string>
- Optional. Mapping of query variable names to values (equivalent to the Pig command: name=[value]).
- ContinueOnFailure bool
- Optional. Whether to continue executing queries if a query fails. The default value is false. Setting to true can be useful when executing independent parallel queries.
- JarFileUris []string
- Optional. HCFS URIs of jar files to add to the CLASSPATH of the Pig Client and Hadoop MapReduce (MR) tasks. Can contain Pig UDFs.
- LoggingConfig LoggingConfigResponse
- Optional. The runtime log config for job execution.
- Properties map[string]string
- Optional. A mapping of property names to values, used to configure Pig. Properties that conflict with values set by the Dataproc API may be overwritten. Can include properties set in /etc/hadoop/conf/*-site.xml, /etc/pig/conf/pig.properties, and classes in user code.
- QueryFileUri string
- The HCFS URI of the script that contains the Pig queries.
- QueryList QueryListResponse
- A list of queries.
- ScriptVariables map[string]string
- Optional. Mapping of query variable names to values (equivalent to the Pig command: name=[value]).
- continueOnFailure Boolean
- Optional. Whether to continue executing queries if a query fails. The default value is false. Setting to true can be useful when executing independent parallel queries.
- jarFileUris List<String>
- Optional. HCFS URIs of jar files to add to the CLASSPATH of the Pig Client and Hadoop MapReduce (MR) tasks. Can contain Pig UDFs.
- loggingConfig LoggingConfigResponse
- Optional. The runtime log config for job execution.
- properties Map<String,String>
- Optional. A mapping of property names to values, used to configure Pig. Properties that conflict with values set by the Dataproc API may be overwritten. Can include properties set in /etc/hadoop/conf/*-site.xml, /etc/pig/conf/pig.properties, and classes in user code.
- queryFileUri String
- The HCFS URI of the script that contains the Pig queries.
- queryList QueryListResponse
- A list of queries.
- scriptVariables Map<String,String>
- Optional. Mapping of query variable names to values (equivalent to the Pig command: name=[value]).
- continueOnFailure boolean
- Optional. Whether to continue executing queries if a query fails. The default value is false. Setting to true can be useful when executing independent parallel queries.
- jarFileUris string[]
- Optional. HCFS URIs of jar files to add to the CLASSPATH of the Pig Client and Hadoop MapReduce (MR) tasks. Can contain Pig UDFs.
- loggingConfig LoggingConfigResponse
- Optional. The runtime log config for job execution.
- properties {[key: string]: string}
- Optional. A mapping of property names to values, used to configure Pig. Properties that conflict with values set by the Dataproc API may be overwritten. Can include properties set in /etc/hadoop/conf/*-site.xml, /etc/pig/conf/pig.properties, and classes in user code.
- queryFileUri string
- The HCFS URI of the script that contains the Pig queries.
- queryList QueryListResponse
- A list of queries.
- scriptVariables {[key: string]: string}
- Optional. Mapping of query variable names to values (equivalent to the Pig command: name=[value]).
- continue_on_failure bool
- Optional. Whether to continue executing queries if a query fails. The default value is false. Setting to true can be useful when executing independent parallel queries.
- jar_file_uris Sequence[str]
- Optional. HCFS URIs of jar files to add to the CLASSPATH of the Pig Client and Hadoop MapReduce (MR) tasks. Can contain Pig UDFs.
- logging_config LoggingConfigResponse
- Optional. The runtime log config for job execution.
- properties Mapping[str, str]
- Optional. A mapping of property names to values, used to configure Pig. Properties that conflict with values set by the Dataproc API may be overwritten. Can include properties set in /etc/hadoop/conf/*-site.xml, /etc/pig/conf/pig.properties, and classes in user code.
- query_file_uri str
- The HCFS URI of the script that contains the Pig queries.
- query_list QueryListResponse
- A list of queries.
- script_variables Mapping[str, str]
- Optional. Mapping of query variable names to values (equivalent to the Pig command: name=[value]).
- continueOnFailure Boolean
- Optional. Whether to continue executing queries if a query fails. The default value is false. Setting to true can be useful when executing independent parallel queries.
- jarFileUris List<String>
- Optional. HCFS URIs of jar files to add to the CLASSPATH of the Pig Client and Hadoop MapReduce (MR) tasks. Can contain Pig UDFs.
- loggingConfig Property Map
- Optional. The runtime log config for job execution.
- properties Map<String>
- Optional. A mapping of property names to values, used to configure Pig. Properties that conflict with values set by the Dataproc API may be overwritten. Can include properties set in /etc/hadoop/conf/*-site.xml, /etc/pig/conf/pig.properties, and classes in user code.
- queryFileUri String
- The HCFS URI of the script that contains the Pig queries.
- queryList Property Map
- A list of queries.
- scriptVariables Map<String>
- Optional. Mapping of query variable names to values (equivalent to the Pig command: name=[value]).
PrestoJob, PrestoJobArgs
- ClientTags List<string>
- Optional. Presto client tags to attach to this query
- ContinueOnFailure bool
- Optional. Whether to continue executing queries if a query fails. The default value is false. Setting to true can be useful when executing independent parallel queries.
- LoggingConfig Pulumi.GoogleNative.Dataproc.V1Beta2.Inputs.LoggingConfig
- Optional. The runtime log config for job execution.
- OutputFormat string
- Optional. The format in which query output will be displayed. See the Presto documentation for supported output formats
- Properties Dictionary<string, string>
- Optional. A mapping of property names to values. Used to set Presto session properties (https://prestodb.io/docs/current/sql/set-session.html). Equivalent to using the --session flag in the Presto CLI
- QueryFileUri string
- The HCFS URI of the script that contains SQL queries.
- QueryList Pulumi.GoogleNative.Dataproc.V1Beta2.Inputs.QueryList
- A list of queries.
- ClientTags []string
- Optional. Presto client tags to attach to this query
- ContinueOnFailure bool
- Optional. Whether to continue executing queries if a query fails. The default value is false. Setting to true can be useful when executing independent parallel queries.
- LoggingConfig LoggingConfig
- Optional. The runtime log config for job execution.
- OutputFormat string
- Optional. The format in which query output will be displayed. See the Presto documentation for supported output formats
- Properties map[string]string
- Optional. A mapping of property names to values. Used to set Presto session properties (https://prestodb.io/docs/current/sql/set-session.html). Equivalent to using the --session flag in the Presto CLI
- QueryFileUri string
- The HCFS URI of the script that contains SQL queries.
- QueryList QueryList
- A list of queries.
- clientTags List<String>
- Optional. Presto client tags to attach to this query
- continueOnFailure Boolean
- Optional. Whether to continue executing queries if a query fails. The default value is false. Setting to true can be useful when executing independent parallel queries.
- loggingConfig LoggingConfig
- Optional. The runtime log config for job execution.
- outputFormat String
- Optional. The format in which query output will be displayed. See the Presto documentation for supported output formats
- properties Map<String,String>
- Optional. A mapping of property names to values. Used to set Presto session properties (https://prestodb.io/docs/current/sql/set-session.html). Equivalent to using the --session flag in the Presto CLI
- queryFileUri String
- The HCFS URI of the script that contains SQL queries.
- queryList QueryList
- A list of queries.
- clientTags string[]
- Optional. Presto client tags to attach to this query
- continueOnFailure boolean
- Optional. Whether to continue executing queries if a query fails. The default value is false. Setting to true can be useful when executing independent parallel queries.
- loggingConfig LoggingConfig
- Optional. The runtime log config for job execution.
- outputFormat string
- Optional. The format in which query output will be displayed. See the Presto documentation for supported output formats
- properties {[key: string]: string}
- Optional. A mapping of property names to values. Used to set Presto session properties (https://prestodb.io/docs/current/sql/set-session.html). Equivalent to using the --session flag in the Presto CLI
- queryFileUri string
- The HCFS URI of the script that contains SQL queries.
- queryList QueryList
- A list of queries.
- client_tags Sequence[str]
- Optional. Presto client tags to attach to this query
- continue_on_failure bool
- Optional. Whether to continue executing queries if a query fails. The default value is false. Setting to true can be useful when executing independent parallel queries.
- logging_config LoggingConfig
- Optional. The runtime log config for job execution.
- output_format str
- Optional. The format in which query output will be displayed. See the Presto documentation for supported output formats
- properties Mapping[str, str]
- Optional. A mapping of property names to values. Used to set Presto session properties (https://prestodb.io/docs/current/sql/set-session.html). Equivalent to using the --session flag in the Presto CLI
- query_file_uri str
- The HCFS URI of the script that contains SQL queries.
- query_list QueryList
- A list of queries.
- clientTags List<String>
- Optional. Presto client tags to attach to this query
- continueOnFailure Boolean
- Optional. Whether to continue executing queries if a query fails. The default value is false. Setting to true can be useful when executing independent parallel queries.
- loggingConfig Property Map
- Optional. The runtime log config for job execution.
- outputFormat String
- Optional. The format in which query output will be displayed. See the Presto documentation for supported output formats
- properties Map<String>
- Optional. A mapping of property names to values. Used to set Presto session properties (https://prestodb.io/docs/current/sql/set-session.html). Equivalent to using the --session flag in the Presto CLI
- queryFileUri String
- The HCFS URI of the script that contains SQL queries.
- queryList Property Map
- A list of queries.
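Unlike the Pig job, a Presto job has no script variables; tuning happens through Presto session properties, while clientTags and outputFormat control how the query is reported and rendered. A small Python sketch with placeholder values (the session property name and query are examples, not part of this resource's schema):

import pulumi_google_native.dataproc.v1beta2 as dataproc

presto_job = dataproc.PrestoJobArgs(
    query_list=dataproc.QueryListArgs(queries=["SELECT count(*) FROM hive.default.events"]),
    client_tags=["nightly", "reporting"],        # surfaced in Presto's query metadata
    output_format="CSV",                         # must be a format the Presto CLI supports
    properties={"query_max_run_time": "30m"},    # session property, like --session on the CLI
    continue_on_failure=False,
)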
PrestoJobResponse, PrestoJobResponseArgs
- ClientTags List<string>
- Optional. Presto client tags to attach to this query
- ContinueOnFailure bool
- Optional. Whether to continue executing queries if a query fails. The default value is false. Setting to true can be useful when executing independent parallel queries.
- LoggingConfig Pulumi.GoogleNative.Dataproc.V1Beta2.Inputs.LoggingConfigResponse
- Optional. The runtime log config for job execution.
- OutputFormat string
- Optional. The format in which query output will be displayed. See the Presto documentation for supported output formats
- Properties Dictionary<string, string>
- Optional. A mapping of property names to values. Used to set Presto session properties (https://prestodb.io/docs/current/sql/set-session.html). Equivalent to using the --session flag in the Presto CLI
- QueryFileUri string
- The HCFS URI of the script that contains SQL queries.
- QueryList Pulumi.GoogleNative.Dataproc.V1Beta2.Inputs.QueryListResponse
- A list of queries.
- ClientTags []string
- Optional. Presto client tags to attach to this query
- ContinueOnFailure bool
- Optional. Whether to continue executing queries if a query fails. The default value is false. Setting to true can be useful when executing independent parallel queries.
- LoggingConfig LoggingConfigResponse
- Optional. The runtime log config for job execution.
- OutputFormat string
- Optional. The format in which query output will be displayed. See the Presto documentation for supported output formats
- Properties map[string]string
- Optional. A mapping of property names to values. Used to set Presto session properties (https://prestodb.io/docs/current/sql/set-session.html). Equivalent to using the --session flag in the Presto CLI
- QueryFileUri string
- The HCFS URI of the script that contains SQL queries.
- QueryList QueryListResponse
- A list of queries.
- clientTags List<String>
- Optional. Presto client tags to attach to this query
- continueOnFailure Boolean
- Optional. Whether to continue executing queries if a query fails. The default value is false. Setting to true can be useful when executing independent parallel queries.
- loggingConfig LoggingConfigResponse
- Optional. The runtime log config for job execution.
- outputFormat String
- Optional. The format in which query output will be displayed. See the Presto documentation for supported output formats
- properties Map<String,String>
- Optional. A mapping of property names to values. Used to set Presto session properties (https://prestodb.io/docs/current/sql/set-session.html). Equivalent to using the --session flag in the Presto CLI
- queryFileUri String
- The HCFS URI of the script that contains SQL queries.
- queryList QueryListResponse
- A list of queries.
- clientTags string[]
- Optional. Presto client tags to attach to this query
- continueOnFailure boolean
- Optional. Whether to continue executing queries if a query fails. The default value is false. Setting to true can be useful when executing independent parallel queries.
- loggingConfig LoggingConfigResponse
- Optional. The runtime log config for job execution.
- outputFormat string
- Optional. The format in which query output will be displayed. See the Presto documentation for supported output formats
- properties {[key: string]: string}
- Optional. A mapping of property names to values. Used to set Presto session properties (https://prestodb.io/docs/current/sql/set-session.html). Equivalent to using the --session flag in the Presto CLI
- queryFileUri string
- The HCFS URI of the script that contains SQL queries.
- queryList QueryListResponse
- A list of queries.
- client_tags Sequence[str]
- Optional. Presto client tags to attach to this query
- continue_on_failure bool
- Optional. Whether to continue executing queries if a query fails. The default value is false. Setting to true can be useful when executing independent parallel queries.
- logging_config LoggingConfigResponse
- Optional. The runtime log config for job execution.
- output_format str
- Optional. The format in which query output will be displayed. See the Presto documentation for supported output formats
- properties Mapping[str, str]
- Optional. A mapping of property names to values. Used to set Presto session properties (https://prestodb.io/docs/current/sql/set-session.html). Equivalent to using the --session flag in the Presto CLI
- query_file_uri str
- The HCFS URI of the script that contains SQL queries.
- query_list QueryListResponse
- A list of queries.
- clientTags List<String>
- Optional. Presto client tags to attach to this query
- continueOnFailure Boolean
- Optional. Whether to continue executing queries if a query fails. The default value is false. Setting to true can be useful when executing independent parallel queries.
- loggingConfig Property Map
- Optional. The runtime log config for job execution.
- outputFormat String
- Optional. The format in which query output will be displayed. See the Presto documentation for supported output formats
- properties Map<String>
- Optional. A mapping of property names to values. Used to set Presto session properties (https://prestodb.io/docs/current/sql/set-session.html). Equivalent to using the --session flag in the Presto CLI
- queryFileUri String
- The HCFS URI of the script that contains SQL queries.
- queryList Property Map
- A list of queries.
PySparkJob, PySparkJobArgs
- MainPythonFileUri string
- The HCFS URI of the main Python file to use as the driver. Must be a .py file.
- ArchiveUris List<string>
- Optional. HCFS URIs of archives to be extracted into the working directory of each executor. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip.
- Args List<string>
- Optional. The arguments to pass to the driver. Do not include arguments, such as --conf, that can be set as job properties, since a collision may occur that causes an incorrect job submission.
- FileUris List<string>
- Optional. HCFS URIs of files to be placed in the working directory of each executor. Useful for naively parallel tasks.
- JarFileUris List<string>
- Optional. HCFS URIs of jar files to add to the CLASSPATHs of the Python driver and tasks.
- LoggingConfig Pulumi.GoogleNative.Dataproc.V1Beta2.Inputs.LoggingConfig
- Optional. The runtime log config for job execution.
- Properties Dictionary<string, string>
- Optional. A mapping of property names to values, used to configure PySpark. Properties that conflict with values set by the Dataproc API may be overwritten. Can include properties set in /etc/spark/conf/spark-defaults.conf and classes in user code.
- PythonFileUris List<string>
- Optional. HCFS file URIs of Python files to pass to the PySpark framework. Supported file types: .py, .egg, and .zip.
- MainPythonFileUri string
- The HCFS URI of the main Python file to use as the driver. Must be a .py file.
- ArchiveUris []string
- Optional. HCFS URIs of archives to be extracted into the working directory of each executor. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip.
- Args []string
- Optional. The arguments to pass to the driver. Do not include arguments, such as --conf, that can be set as job properties, since a collision may occur that causes an incorrect job submission.
- FileUris []string
- Optional. HCFS URIs of files to be placed in the working directory of each executor. Useful for naively parallel tasks.
- JarFileUris []string
- Optional. HCFS URIs of jar files to add to the CLASSPATHs of the Python driver and tasks.
- LoggingConfig LoggingConfig
- Optional. The runtime log config for job execution.
- Properties map[string]string
- Optional. A mapping of property names to values, used to configure PySpark. Properties that conflict with values set by the Dataproc API may be overwritten. Can include properties set in /etc/spark/conf/spark-defaults.conf and classes in user code.
- PythonFileUris []string
- Optional. HCFS file URIs of Python files to pass to the PySpark framework. Supported file types: .py, .egg, and .zip.
- mainPythonFileUri String
- The HCFS URI of the main Python file to use as the driver. Must be a .py file.
- archiveUris List<String>
- Optional. HCFS URIs of archives to be extracted into the working directory of each executor. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip.
- args List<String>
- Optional. The arguments to pass to the driver. Do not include arguments, such as --conf, that can be set as job properties, since a collision may occur that causes an incorrect job submission.
- fileUris List<String>
- Optional. HCFS URIs of files to be placed in the working directory of each executor. Useful for naively parallel tasks.
- jarFileUris List<String>
- Optional. HCFS URIs of jar files to add to the CLASSPATHs of the Python driver and tasks.
- loggingConfig LoggingConfig
- Optional. The runtime log config for job execution.
- properties Map<String,String>
- Optional. A mapping of property names to values, used to configure PySpark. Properties that conflict with values set by the Dataproc API may be overwritten. Can include properties set in /etc/spark/conf/spark-defaults.conf and classes in user code.
- pythonFileUris List<String>
- Optional. HCFS file URIs of Python files to pass to the PySpark framework. Supported file types: .py, .egg, and .zip.
- mainPythonFileUri string
- The HCFS URI of the main Python file to use as the driver. Must be a .py file.
- archiveUris string[]
- Optional. HCFS URIs of archives to be extracted into the working directory of each executor. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip.
- args string[]
- Optional. The arguments to pass to the driver. Do not include arguments, such as --conf, that can be set as job properties, since a collision may occur that causes an incorrect job submission.
- fileUris string[]
- Optional. HCFS URIs of files to be placed in the working directory of each executor. Useful for naively parallel tasks.
- jarFileUris string[]
- Optional. HCFS URIs of jar files to add to the CLASSPATHs of the Python driver and tasks.
- loggingConfig LoggingConfig
- Optional. The runtime log config for job execution.
- properties {[key: string]: string}
- Optional. A mapping of property names to values, used to configure PySpark. Properties that conflict with values set by the Dataproc API may be overwritten. Can include properties set in /etc/spark/conf/spark-defaults.conf and classes in user code.
- pythonFileUris string[]
- Optional. HCFS file URIs of Python files to pass to the PySpark framework. Supported file types: .py, .egg, and .zip.
- main_python_file_uri str
- The HCFS URI of the main Python file to use as the driver. Must be a .py file.
- archive_uris Sequence[str]
- Optional. HCFS URIs of archives to be extracted into the working directory of each executor. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip.
- args Sequence[str]
- Optional. The arguments to pass to the driver. Do not include arguments, such as --conf, that can be set as job properties, since a collision may occur that causes an incorrect job submission.
- file_uris Sequence[str]
- Optional. HCFS URIs of files to be placed in the working directory of each executor. Useful for naively parallel tasks.
- jar_file_uris Sequence[str]
- Optional. HCFS URIs of jar files to add to the CLASSPATHs of the Python driver and tasks.
- logging_config LoggingConfig
- Optional. The runtime log config for job execution.
- properties Mapping[str, str]
- Optional. A mapping of property names to values, used to configure PySpark. Properties that conflict with values set by the Dataproc API may be overwritten. Can include properties set in /etc/spark/conf/spark-defaults.conf and classes in user code.
- python_file_uris Sequence[str]
- Optional. HCFS file URIs of Python files to pass to the PySpark framework. Supported file types: .py, .egg, and .zip.
- mainPythonFileUri String
- The HCFS URI of the main Python file to use as the driver. Must be a .py file.
- archiveUris List<String>
- Optional. HCFS URIs of archives to be extracted into the working directory of each executor. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip.
- args List<String>
- Optional. The arguments to pass to the driver. Do not include arguments, such as --conf, that can be set as job properties, since a collision may occur that causes an incorrect job submission.
- fileUris List<String>
- Optional. HCFS URIs of files to be placed in the working directory of each executor. Useful for naively parallel tasks.
- jarFileUris List<String>
- Optional. HCFS URIs of jar files to add to the CLASSPATHs of the Python driver and tasks.
- loggingConfig Property Map
- Optional. The runtime log config for job execution.
- properties Map<String>
- Optional. A mapping of property names to values, used to configure PySpark. Properties that conflict with values set by the Dataproc API may be overwritten. Can include properties set in /etc/spark/conf/spark-defaults.conf and classes in user code.
- pythonFileUris List<String>
- Optional. HCFS file URIs of Python files to pass to the PySpark framework. Supported file types: .py, .egg, and .zip.
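Only mainPythonFileUri is required for a PySpark job; the remaining fields distribute supporting files to executors or set Spark properties. A minimal Python sketch with placeholder GCS paths, assuming the input classes come from pulumi_google_native.dataproc.v1beta2:

import pulumi_google_native.dataproc.v1beta2 as dataproc

pyspark_job = dataproc.PySparkJobArgs(
    main_python_file_uri="gs://my-bucket/jobs/aggregate.py",   # must be a .py file
    python_file_uris=["gs://my-bucket/jobs/helpers.zip"],      # extra modules shipped to the job
    args=["--date", "2021-01-01"],                             # plain driver args; Spark conf belongs in properties
    properties={"spark.executor.memory": "4g"},                # merged into the job's Spark configuration
    logging_config=dataproc.LoggingConfigArgs(
        driver_log_levels={"root": "INFO"},
    ),
)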
PySparkJobResponse, PySparkJobResponseArgs
- ArchiveUris List<string>
- Optional. HCFS URIs of archives to be extracted into the working directory of each executor. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip.
- Args List<string>
- Optional. The arguments to pass to the driver. Do not include arguments, such as --conf, that can be set as job properties, since a collision may occur that causes an incorrect job submission.
- FileUris List<string>
- Optional. HCFS URIs of files to be placed in the working directory of each executor. Useful for naively parallel tasks.
- JarFileUris List<string>
- Optional. HCFS URIs of jar files to add to the CLASSPATHs of the Python driver and tasks.
- LoggingConfig Pulumi.GoogleNative.Dataproc.V1Beta2.Inputs.LoggingConfigResponse
- Optional. The runtime log config for job execution.
- MainPythonFileUri string
- The HCFS URI of the main Python file to use as the driver. Must be a .py file.
- Properties Dictionary<string, string>
- Optional. A mapping of property names to values, used to configure PySpark. Properties that conflict with values set by the Dataproc API may be overwritten. Can include properties set in /etc/spark/conf/spark-defaults.conf and classes in user code.
- PythonFileUris List<string>
- Optional. HCFS file URIs of Python files to pass to the PySpark framework. Supported file types: .py, .egg, and .zip.
- ArchiveUris []string
- Optional. HCFS URIs of archives to be extracted into the working directory of each executor. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip.
- Args []string
- Optional. The arguments to pass to the driver. Do not include arguments, such as --conf, that can be set as job properties, since a collision may occur that causes an incorrect job submission.
- FileUris []string
- Optional. HCFS URIs of files to be placed in the working directory of each executor. Useful for naively parallel tasks.
- JarFileUris []string
- Optional. HCFS URIs of jar files to add to the CLASSPATHs of the Python driver and tasks.
- LoggingConfig LoggingConfigResponse
- Optional. The runtime log config for job execution.
- MainPythonFileUri string
- The HCFS URI of the main Python file to use as the driver. Must be a .py file.
- Properties map[string]string
- Optional. A mapping of property names to values, used to configure PySpark. Properties that conflict with values set by the Dataproc API may be overwritten. Can include properties set in /etc/spark/conf/spark-defaults.conf and classes in user code.
- PythonFileUris []string
- Optional. HCFS file URIs of Python files to pass to the PySpark framework. Supported file types: .py, .egg, and .zip.
- archiveUris List<String>
- Optional. HCFS URIs of archives to be extracted into the working directory of each executor. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip.
- args List<String>
- Optional. The arguments to pass to the driver. Do not include arguments, such as --conf, that can be set as job properties, since a collision may occur that causes an incorrect job submission.
- fileUris List<String>
- Optional. HCFS URIs of files to be placed in the working directory of each executor. Useful for naively parallel tasks.
- jarFileUris List<String>
- Optional. HCFS URIs of jar files to add to the CLASSPATHs of the Python driver and tasks.
- loggingConfig LoggingConfigResponse
- Optional. The runtime log config for job execution.
- mainPythonFileUri String
- The HCFS URI of the main Python file to use as the driver. Must be a .py file.
- properties Map<String,String>
- Optional. A mapping of property names to values, used to configure PySpark. Properties that conflict with values set by the Dataproc API may be overwritten. Can include properties set in /etc/spark/conf/spark-defaults.conf and classes in user code.
- pythonFileUris List<String>
- Optional. HCFS file URIs of Python files to pass to the PySpark framework. Supported file types: .py, .egg, and .zip.
- archiveUris string[]
- Optional. HCFS URIs of archives to be extracted into the working directory of each executor. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip.
- args string[]
- Optional. The arguments to pass to the driver. Do not include arguments, such as --conf, that can be set as job properties, since a collision may occur that causes an incorrect job submission.
- fileUris string[]
- Optional. HCFS URIs of files to be placed in the working directory of each executor. Useful for naively parallel tasks.
- jarFileUris string[]
- Optional. HCFS URIs of jar files to add to the CLASSPATHs of the Python driver and tasks.
- loggingConfig LoggingConfigResponse
- Optional. The runtime log config for job execution.
- mainPythonFileUri string
- The HCFS URI of the main Python file to use as the driver. Must be a .py file.
- properties {[key: string]: string}
- Optional. A mapping of property names to values, used to configure PySpark. Properties that conflict with values set by the Dataproc API may be overwritten. Can include properties set in /etc/spark/conf/spark-defaults.conf and classes in user code.
- pythonFileUris string[]
- Optional. HCFS file URIs of Python files to pass to the PySpark framework. Supported file types: .py, .egg, and .zip.
- archive_uris Sequence[str]
- Optional. HCFS URIs of archives to be extracted into the working directory of each executor. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip.
- args Sequence[str]
- Optional. The arguments to pass to the driver. Do not include arguments, such as --conf, that can be set as job properties, since a collision may occur that causes an incorrect job submission.
- file_uris Sequence[str]
- Optional. HCFS URIs of files to be placed in the working directory of each executor. Useful for naively parallel tasks.
- jar_file_uris Sequence[str]
- Optional. HCFS URIs of jar files to add to the CLASSPATHs of the Python driver and tasks.
- logging_config LoggingConfigResponse
- Optional. The runtime log config for job execution.
- main_python_file_uri str
- The HCFS URI of the main Python file to use as the driver. Must be a .py file.
- properties Mapping[str, str]
- Optional. A mapping of property names to values, used to configure PySpark. Properties that conflict with values set by the Dataproc API may be overwritten. Can include properties set in /etc/spark/conf/spark-defaults.conf and classes in user code.
- python_file_uris Sequence[str]
- Optional. HCFS file URIs of Python files to pass to the PySpark framework. Supported file types: .py, .egg, and .zip.
- archiveUris List<String>
- Optional. HCFS URIs of archives to be extracted into the working directory of each executor. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip.
- args List<String>
- Optional. The arguments to pass to the driver. Do not include arguments, such as --conf, that can be set as job properties, since a collision may occur that causes an incorrect job submission.
- fileUris List<String>
- Optional. HCFS URIs of files to be placed in the working directory of each executor. Useful for naively parallel tasks.
- jarFileUris List<String>
- Optional. HCFS URIs of jar files to add to the CLASSPATHs of the Python driver and tasks.
- loggingConfig Property Map
- Optional. The runtime log config for job execution.
- mainPythonFileUri String
- The HCFS URI of the main Python file to use as the driver. Must be a .py file.
- properties Map<String>
- Optional. A mapping of property names to values, used to configure PySpark. Properties that conflict with values set by the Dataproc API may be overwritten. Can include properties set in /etc/spark/conf/spark-defaults.conf and classes in user code.
- pythonFileUris List<String>
- Optional. HCFS file URIs of Python files to pass to the PySpark framework. Supported file types: .py, .egg, and .zip.
QueryList, QueryListArgs
- Queries List<string>
- The queries to execute. You do not need to end a query expression with a semicolon. Multiple queries can be specified in one string by separating each with a semicolon. Here is an example of a Dataproc API snippet that uses a QueryList to specify a HiveJob: "hiveJob": { "queryList": { "queries": [ "query1", "query2", "query3;query4", ] } }
- Queries []string
- The queries to execute. You do not need to end a query expression with a semicolon. Multiple queries can be specified in one string by separating each with a semicolon. Here is an example of a Dataproc API snippet that uses a QueryList to specify a HiveJob: "hiveJob": { "queryList": { "queries": [ "query1", "query2", "query3;query4", ] } }
- queries List<String>
- The queries to execute. You do not need to end a query expression with a semicolon. Multiple queries can be specified in one string by separating each with a semicolon. Here is an example of a Dataproc API snippet that uses a QueryList to specify a HiveJob: "hiveJob": { "queryList": { "queries": [ "query1", "query2", "query3;query4", ] } }
- queries string[]
- The queries to execute. You do not need to end a query expression with a semicolon. Multiple queries can be specified in one string by separating each with a semicolon. Here is an example of a Dataproc API snippet that uses a QueryList to specify a HiveJob: "hiveJob": { "queryList": { "queries": [ "query1", "query2", "query3;query4", ] } }
- queries Sequence[str]
- The queries to execute. You do not need to end a query expression with a semicolon. Multiple queries can be specified in one string by separating each with a semicolon. Here is an example of a Dataproc API snippet that uses a QueryList to specify a HiveJob: "hiveJob": { "queryList": { "queries": [ "query1", "query2", "query3;query4", ] } }
- queries List<String>
- The queries to execute. You do not need to end a query expression with a semicolon. Multiple queries can be specified in one string by separating each with a semicolon. Here is an example of a Dataproc API snippet that uses a QueryList to specify a HiveJob: "hiveJob": { "queryList": { "queries": [ "query1", "query2", "query3;query4", ] } }
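The JSON fragment in the description maps directly onto this type; each list element may itself bundle several semicolon-separated statements. Using the same placeholder queries in Python (assuming QueryListArgs is exported from pulumi_google_native.dataproc.v1beta2):

import pulumi_google_native.dataproc.v1beta2 as dataproc

query_list = dataproc.QueryListArgs(queries=[
    "query1",
    "query2",
    "query3;query4",  # two statements in a single list element, split on the semicolon
])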
QueryListResponse, QueryListResponseArgs
- Queries List<string>
- The queries to execute. You do not need to end a query expression with a semicolon. Multiple queries can be specified in one string by separating each with a semicolon. Here is an example of a Dataproc API snippet that uses a QueryList to specify a HiveJob: "hiveJob": { "queryList": { "queries": [ "query1", "query2", "query3;query4", ] } }
- Queries []string
- The queries to execute. You do not need to end a query expression with a semicolon. Multiple queries can be specified in one string by separating each with a semicolon. Here is an example of a Dataproc API snippet that uses a QueryList to specify a HiveJob: "hiveJob": { "queryList": { "queries": [ "query1", "query2", "query3;query4", ] } }
- queries List<String>
- The queries to execute. You do not need to end a query expression with a semicolon. Multiple queries can be specified in one string by separating each with a semicolon. Here is an example of a Dataproc API snippet that uses a QueryList to specify a HiveJob: "hiveJob": { "queryList": { "queries": [ "query1", "query2", "query3;query4", ] } }
- queries string[]
- The queries to execute. You do not need to end a query expression with a semicolon. Multiple queries can be specified in one string by separating each with a semicolon. Here is an example of a Dataproc API snippet that uses a QueryList to specify a HiveJob: "hiveJob": { "queryList": { "queries": [ "query1", "query2", "query3;query4", ] } }
- queries Sequence[str]
- The queries to execute. You do not need to end a query expression with a semicolon. Multiple queries can be specified in one string by separating each with a semicolon. Here is an example of a Dataproc API snippet that uses a QueryList to specify a HiveJob: "hiveJob": { "queryList": { "queries": [ "query1", "query2", "query3;query4", ] } }
- queries List<String>
- The queries to execute. You do not need to end a query expression with a semicolon. Multiple queries can be specified in one string by separating each with a semicolon. Here is an example of a Dataproc API snippet that uses a QueryList to specify a HiveJob: "hiveJob": { "queryList": { "queries": [ "query1", "query2", "query3;query4", ] } }
RegexValidation, RegexValidationArgs
- Regexes List<string>
- RE2 regular expressions used to validate the parameter's value. The value must match the regex in its entirety (substring matches are not sufficient).
- Regexes []string
- RE2 regular expressions used to validate the parameter's value. The value must match the regex in its entirety (substring matches are not sufficient).
- regexes List<String>
- RE2 regular expressions used to validate the parameter's value. The value must match the regex in its entirety (substring matches are not sufficient).
- regexes string[]
- RE2 regular expressions used to validate the parameter's value. The value must match the regex in its entirety (substring matches are not sufficient).
- regexes Sequence[str]
- RE2 regular expressions used to validate the parameter's value. The value must match the regex in its entirety (substring matches are not sufficient).
- regexes List<String>
- RE2 regular expressions used to validate the parameter's value. The value must match the regex in its entirety (substring matches are not sufficient).
RegexValidationResponse, RegexValidationResponseArgs
- Regexes List<string>
- RE2 regular expressions used to validate the parameter's value. The value must match the regex in its entirety (substring matches are not sufficient).
- Regexes []string
- RE2 regular expressions used to validate the parameter's value. The value must match the regex in its entirety (substring matches are not sufficient).
- regexes List<String>
- RE2 regular expressions used to validate the parameter's value. The value must match the regex in its entirety (substring matches are not sufficient).
- regexes string[]
- RE2 regular expressions used to validate the parameter's value. The value must match the regex in its entirety (substring matches are not sufficient).
- regexes Sequence[str]
- RE2 regular expressions used to validate the parameter's value. The value must match the regex in its entirety (substring matches are not sufficient).
- regexes List<String>
- RE2 regular expressions used to validate the parameter's value. The value must match the regex in its entirety (substring matches are not sufficient).
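As a hedged illustration, the sketch below shows a template parameter whose substituted value is checked against an RE2 pattern; the parameter name, target field path, and regex are assumptions chosen for the example, not values taken from this reference.
// Sketch only: a template parameter validated by a regex that must match the whole value.
const zoneParameter = {
    name: "ZONE",
    fields: ["placement.managedCluster.config.gceClusterConfig.zoneUri"],  // illustrative field path
    validation: {
        regex: {
            regexes: ["us-central1-[a-f]"],   // substring matches are not sufficient
        },
    },
};
// zoneParameter would be included in the template's `parameters` input.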
ReservationAffinity, ReservationAffinityArgs
- ConsumeReservationType Pulumi.GoogleNative.Dataproc.V1Beta2.ReservationAffinityConsumeReservationType - Optional. Type of reservation to consume
- Key string
- Optional. Corresponds to the label key of reservation resource.
- Values List<string>
- Optional. Corresponds to the label values of reservation resource.
- ConsumeReservationType ReservationAffinityConsumeReservationType - Optional. Type of reservation to consume
- Key string
- Optional. Corresponds to the label key of reservation resource.
- Values []string
- Optional. Corresponds to the label values of reservation resource.
- consumeReservationType ReservationAffinityConsumeReservationType - Optional. Type of reservation to consume
- key String
- Optional. Corresponds to the label key of reservation resource.
- values List<String>
- Optional. Corresponds to the label values of reservation resource.
- consumeReservationType ReservationAffinityConsumeReservationType - Optional. Type of reservation to consume
- key string
- Optional. Corresponds to the label key of reservation resource.
- values string[]
- Optional. Corresponds to the label values of reservation resource.
- consume_reservation_type ReservationAffinityConsumeReservationType - Optional. Type of reservation to consume
- key str
- Optional. Corresponds to the label key of reservation resource.
- values Sequence[str]
- Optional. Corresponds to the label values of reservation resource.
- consumeReservationType "TYPE_UNSPECIFIED" | "NO_RESERVATION" | "ANY_RESERVATION" | "SPECIFIC_RESERVATION" - Optional. Type of reservation to consume
- key String
- Optional. Corresponds to the label key of reservation resource.
- values List<String>
- Optional. Corresponds to the label values of reservation resource.
ReservationAffinityConsumeReservationType, ReservationAffinityConsumeReservationTypeArgs
- TypeUnspecified - TYPE_UNSPECIFIED
- NoReservation - NO_RESERVATION: Do not consume from any allocated capacity.
- AnyReservation - ANY_RESERVATION: Consume any reservation available.
- SpecificReservation - SPECIFIC_RESERVATION: Must consume from a specific reservation. Must specify key value fields for specifying the reservations.
- ReservationAffinityConsumeReservationTypeTypeUnspecified - TYPE_UNSPECIFIED
- ReservationAffinityConsumeReservationTypeNoReservation - NO_RESERVATION: Do not consume from any allocated capacity.
- ReservationAffinityConsumeReservationTypeAnyReservation - ANY_RESERVATION: Consume any reservation available.
- ReservationAffinityConsumeReservationTypeSpecificReservation - SPECIFIC_RESERVATION: Must consume from a specific reservation. Must specify key value fields for specifying the reservations.
- TypeUnspecified - TYPE_UNSPECIFIED
- NoReservation - NO_RESERVATION: Do not consume from any allocated capacity.
- AnyReservation - ANY_RESERVATION: Consume any reservation available.
- SpecificReservation - SPECIFIC_RESERVATION: Must consume from a specific reservation. Must specify key value fields for specifying the reservations.
- TypeUnspecified - TYPE_UNSPECIFIED
- NoReservation - NO_RESERVATION: Do not consume from any allocated capacity.
- AnyReservation - ANY_RESERVATION: Consume any reservation available.
- SpecificReservation - SPECIFIC_RESERVATION: Must consume from a specific reservation. Must specify key value fields for specifying the reservations.
- TYPE_UNSPECIFIED - TYPE_UNSPECIFIED
- NO_RESERVATION - NO_RESERVATION: Do not consume from any allocated capacity.
- ANY_RESERVATION - ANY_RESERVATION: Consume any reservation available.
- SPECIFIC_RESERVATION - SPECIFIC_RESERVATION: Must consume from a specific reservation. Must specify key value fields for specifying the reservations.
- "TYPE_UNSPECIFIED" - TYPE_UNSPECIFIED
- "NO_RESERVATION" - NO_RESERVATION: Do not consume from any allocated capacity.
- "ANY_RESERVATION" - ANY_RESERVATION: Consume any reservation available.
- "SPECIFIC_RESERVATION" - SPECIFIC_RESERVATION: Must consume from a specific reservation. Must specify key value fields for specifying the reservations.
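The following TypeScript sketch shows one plausible way to pin the template's managed-cluster VMs to a specific reservation; it assumes reservationAffinity is nested under the managed cluster's gceClusterConfig, and the cluster name, label key, and reservation name are placeholders.
// Sketch only: consume capacity from a specific reservation for managed-cluster VMs.
const placementWithReservation = {
    managedCluster: {
        clusterName: "wt-cluster",                                     // placeholder
        config: {
            gceClusterConfig: {
                reservationAffinity: {
                    consumeReservationType: "SPECIFIC_RESERVATION",
                    key: "compute.googleapis.com/reservation-name",   // assumed label key
                    values: ["my-reservation"],                        // placeholder
                },
            },
        },
    },
};
// placementWithReservation would be passed as the template's `placement` input.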
ReservationAffinityResponse, ReservationAffinityResponseArgs
- ConsumeReservationType string - Optional. Type of reservation to consume
- Key string
- Optional. Corresponds to the label key of reservation resource.
- Values List<string>
- Optional. Corresponds to the label values of reservation resource.
- ConsumeReservationType string - Optional. Type of reservation to consume
- Key string
- Optional. Corresponds to the label key of reservation resource.
- Values []string
- Optional. Corresponds to the label values of reservation resource.
- consumeReservationType String - Optional. Type of reservation to consume
- key String
- Optional. Corresponds to the label key of reservation resource.
- values List<String>
- Optional. Corresponds to the label values of reservation resource.
- consumeReservationType string - Optional. Type of reservation to consume
- key string
- Optional. Corresponds to the label key of reservation resource.
- values string[]
- Optional. Corresponds to the label values of reservation resource.
- consume_reservation_type str - Optional. Type of reservation to consume
- key str
- Optional. Corresponds to the label key of reservation resource.
- values Sequence[str]
- Optional. Corresponds to the label values of reservation resource.
- consumeReservationType String - Optional. Type of reservation to consume
- key String
- Optional. Corresponds to the label key of reservation resource.
- values List<String>
- Optional. Corresponds to the label values of reservation resource.
SecurityConfig, SecurityConfigArgs
- KerberosConfig Pulumi.GoogleNative.Dataproc.V1Beta2.Inputs.KerberosConfig - Optional. Kerberos related configuration.
- KerberosConfig KerberosConfig - Optional. Kerberos related configuration.
- kerberosConfig KerberosConfig - Optional. Kerberos related configuration.
- kerberosConfig KerberosConfig - Optional. Kerberos related configuration.
- kerberos_config KerberosConfig - Optional. Kerberos related configuration.
- kerberosConfig Property Map - Optional. Kerberos related configuration.
SecurityConfigResponse, SecurityConfigResponseArgs
- KerberosConfig Pulumi.GoogleNative.Dataproc.V1Beta2.Inputs.KerberosConfigResponse - Optional. Kerberos related configuration.
- KerberosConfig KerberosConfigResponse - Optional. Kerberos related configuration.
- kerberosConfig KerberosConfigResponse - Optional. Kerberos related configuration.
- kerberosConfig KerberosConfigResponse - Optional. Kerberos related configuration.
- kerberos_config KerberosConfigResponse - Optional. Kerberos related configuration.
- kerberosConfig Property Map - Optional. Kerberos related configuration.
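As a rough sketch, a SecurityConfig simply wraps a KerberosConfig; the snippet below enables Kerberos on the managed cluster, with the KMS key and root-password URIs as placeholders whose exact field names are assumed from the KerberosConfig inputs documented earlier in this reference.
// Sketch only: Kerberos enabled on the template's managed cluster.
const clusterSecurityConfig = {
    kerberosConfig: {
        enableKerberos: true,
        kmsKeyUri: "projects/my-project/locations/global/keyRings/kr/cryptoKeys/key",  // placeholder
        rootPrincipalPasswordUri: "gs://my-bucket/kerberos-root-password.encrypted",   // placeholder
    },
};
// clusterSecurityConfig would be set as managedCluster.config.securityConfig.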
ShieldedInstanceConfig, ShieldedInstanceConfigArgs
- EnableIntegrityMonitoring bool - Optional. Defines whether instances have integrity monitoring enabled.
- EnableSecureBoot bool - Optional. Defines whether instances have Secure Boot enabled.
- EnableVtpm bool - Optional. Defines whether instances have the vTPM enabled.
- EnableIntegrityMonitoring bool - Optional. Defines whether instances have integrity monitoring enabled.
- EnableSecureBoot bool - Optional. Defines whether instances have Secure Boot enabled.
- EnableVtpm bool - Optional. Defines whether instances have the vTPM enabled.
- enableIntegrityMonitoring Boolean - Optional. Defines whether instances have integrity monitoring enabled.
- enableSecureBoot Boolean - Optional. Defines whether instances have Secure Boot enabled.
- enableVtpm Boolean - Optional. Defines whether instances have the vTPM enabled.
- enableIntegrityMonitoring boolean - Optional. Defines whether instances have integrity monitoring enabled.
- enableSecureBoot boolean - Optional. Defines whether instances have Secure Boot enabled.
- enableVtpm boolean - Optional. Defines whether instances have the vTPM enabled.
- enable_integrity_monitoring bool - Optional. Defines whether instances have integrity monitoring enabled.
- enable_secure_boot bool - Optional. Defines whether instances have Secure Boot enabled.
- enable_vtpm bool - Optional. Defines whether instances have the vTPM enabled.
- enableIntegrityMonitoring Boolean - Optional. Defines whether instances have integrity monitoring enabled.
- enableSecureBoot Boolean - Optional. Defines whether instances have Secure Boot enabled.
- enableVtpm Boolean - Optional. Defines whether instances have the vTPM enabled.
ShieldedInstanceConfigResponse, ShieldedInstanceConfigResponseArgs
- EnableIntegrityMonitoring bool - Optional. Defines whether instances have integrity monitoring enabled.
- EnableSecureBoot bool - Optional. Defines whether instances have Secure Boot enabled.
- EnableVtpm bool - Optional. Defines whether instances have the vTPM enabled.
- EnableIntegrityMonitoring bool - Optional. Defines whether instances have integrity monitoring enabled.
- EnableSecureBoot bool - Optional. Defines whether instances have Secure Boot enabled.
- EnableVtpm bool - Optional. Defines whether instances have the vTPM enabled.
- enableIntegrityMonitoring Boolean - Optional. Defines whether instances have integrity monitoring enabled.
- enableSecureBoot Boolean - Optional. Defines whether instances have Secure Boot enabled.
- enableVtpm Boolean - Optional. Defines whether instances have the vTPM enabled.
- enableIntegrityMonitoring boolean - Optional. Defines whether instances have integrity monitoring enabled.
- enableSecureBoot boolean - Optional. Defines whether instances have Secure Boot enabled.
- enableVtpm boolean - Optional. Defines whether instances have the vTPM enabled.
- enable_integrity_monitoring bool - Optional. Defines whether instances have integrity monitoring enabled.
- enable_secure_boot bool - Optional. Defines whether instances have Secure Boot enabled.
- enable_vtpm bool - Optional. Defines whether instances have the vTPM enabled.
- enableIntegrityMonitoring Boolean - Optional. Defines whether instances have integrity monitoring enabled.
- enableSecureBoot Boolean - Optional. Defines whether instances have Secure Boot enabled.
- enableVtpm Boolean - Optional. Defines whether instances have the vTPM enabled.
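A short sketch of the Shielded VM switches follows; it assumes shieldedInstanceConfig sits under the managed cluster's gceClusterConfig, which is where the equivalent REST field lives.
// Sketch only: Shielded VM features for managed-cluster instances.
const shieldedGceClusterConfig = {
    shieldedInstanceConfig: {
        enableSecureBoot: true,
        enableVtpm: true,
        enableIntegrityMonitoring: true,
    },
};
// This fragment would be merged into managedCluster.config.gceClusterConfig.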
SoftwareConfig, SoftwareConfigArgs
- ImageVersion string - Optional. The version of software inside the cluster. It must be one of the supported Dataproc Versions (https://cloud.google.com/dataproc/docs/concepts/versioning/dataproc-versions#supported_dataproc_versions), such as "1.2" (including a subminor version, such as "1.2.29"), or the "preview" version (https://cloud.google.com/dataproc/docs/concepts/versioning/dataproc-versions#other_versions). If unspecified, it defaults to the latest Debian version.
- OptionalComponents List<Pulumi.GoogleNative.Dataproc.V1Beta2.SoftwareConfigOptionalComponentsItem> - The set of optional components to activate on the cluster.
- Properties Dictionary<string, string> - Optional. The properties to set on daemon config files. Property keys are specified in prefix:property format, for example core:hadoop.tmp.dir. The following are supported prefixes and their mappings: capacity-scheduler: capacity-scheduler.xml core: core-site.xml distcp: distcp-default.xml hdfs: hdfs-site.xml hive: hive-site.xml mapred: mapred-site.xml pig: pig.properties spark: spark-defaults.conf yarn: yarn-site.xml. For more information, see Cluster properties (https://cloud.google.com/dataproc/docs/concepts/cluster-properties).
- ImageVersion string - Optional. The version of software inside the cluster. It must be one of the supported Dataproc Versions (https://cloud.google.com/dataproc/docs/concepts/versioning/dataproc-versions#supported_dataproc_versions), such as "1.2" (including a subminor version, such as "1.2.29"), or the "preview" version (https://cloud.google.com/dataproc/docs/concepts/versioning/dataproc-versions#other_versions). If unspecified, it defaults to the latest Debian version.
- OptionalComponents []SoftwareConfigOptionalComponentsItem - The set of optional components to activate on the cluster.
- Properties map[string]string - Optional. The properties to set on daemon config files. Property keys are specified in prefix:property format, for example core:hadoop.tmp.dir. The following are supported prefixes and their mappings: capacity-scheduler: capacity-scheduler.xml core: core-site.xml distcp: distcp-default.xml hdfs: hdfs-site.xml hive: hive-site.xml mapred: mapred-site.xml pig: pig.properties spark: spark-defaults.conf yarn: yarn-site.xml. For more information, see Cluster properties (https://cloud.google.com/dataproc/docs/concepts/cluster-properties).
- imageVersion String - Optional. The version of software inside the cluster. It must be one of the supported Dataproc Versions (https://cloud.google.com/dataproc/docs/concepts/versioning/dataproc-versions#supported_dataproc_versions), such as "1.2" (including a subminor version, such as "1.2.29"), or the "preview" version (https://cloud.google.com/dataproc/docs/concepts/versioning/dataproc-versions#other_versions). If unspecified, it defaults to the latest Debian version.
- optionalComponents List<SoftwareConfigOptionalComponentsItem> - The set of optional components to activate on the cluster.
- properties Map<String,String> - Optional. The properties to set on daemon config files. Property keys are specified in prefix:property format, for example core:hadoop.tmp.dir. The following are supported prefixes and their mappings: capacity-scheduler: capacity-scheduler.xml core: core-site.xml distcp: distcp-default.xml hdfs: hdfs-site.xml hive: hive-site.xml mapred: mapred-site.xml pig: pig.properties spark: spark-defaults.conf yarn: yarn-site.xml. For more information, see Cluster properties (https://cloud.google.com/dataproc/docs/concepts/cluster-properties).
- imageVersion string - Optional. The version of software inside the cluster. It must be one of the supported Dataproc Versions (https://cloud.google.com/dataproc/docs/concepts/versioning/dataproc-versions#supported_dataproc_versions), such as "1.2" (including a subminor version, such as "1.2.29"), or the "preview" version (https://cloud.google.com/dataproc/docs/concepts/versioning/dataproc-versions#other_versions). If unspecified, it defaults to the latest Debian version.
- optionalComponents SoftwareConfigOptionalComponentsItem[] - The set of optional components to activate on the cluster.
- properties {[key: string]: string} - Optional. The properties to set on daemon config files. Property keys are specified in prefix:property format, for example core:hadoop.tmp.dir. The following are supported prefixes and their mappings: capacity-scheduler: capacity-scheduler.xml core: core-site.xml distcp: distcp-default.xml hdfs: hdfs-site.xml hive: hive-site.xml mapred: mapred-site.xml pig: pig.properties spark: spark-defaults.conf yarn: yarn-site.xml. For more information, see Cluster properties (https://cloud.google.com/dataproc/docs/concepts/cluster-properties).
- image_version str - Optional. The version of software inside the cluster. It must be one of the supported Dataproc Versions (https://cloud.google.com/dataproc/docs/concepts/versioning/dataproc-versions#supported_dataproc_versions), such as "1.2" (including a subminor version, such as "1.2.29"), or the "preview" version (https://cloud.google.com/dataproc/docs/concepts/versioning/dataproc-versions#other_versions). If unspecified, it defaults to the latest Debian version.
- optional_components Sequence[SoftwareConfigOptionalComponentsItem] - The set of optional components to activate on the cluster.
- properties Mapping[str, str] - Optional. The properties to set on daemon config files. Property keys are specified in prefix:property format, for example core:hadoop.tmp.dir. The following are supported prefixes and their mappings: capacity-scheduler: capacity-scheduler.xml core: core-site.xml distcp: distcp-default.xml hdfs: hdfs-site.xml hive: hive-site.xml mapred: mapred-site.xml pig: pig.properties spark: spark-defaults.conf yarn: yarn-site.xml. For more information, see Cluster properties (https://cloud.google.com/dataproc/docs/concepts/cluster-properties).
- imageVersion String - Optional. The version of software inside the cluster. It must be one of the supported Dataproc Versions (https://cloud.google.com/dataproc/docs/concepts/versioning/dataproc-versions#supported_dataproc_versions), such as "1.2" (including a subminor version, such as "1.2.29"), or the "preview" version (https://cloud.google.com/dataproc/docs/concepts/versioning/dataproc-versions#other_versions). If unspecified, it defaults to the latest Debian version.
- optionalComponents List<"COMPONENT_UNSPECIFIED" | "ANACONDA" | "DOCKER" | "DRUID" | "FLINK" | "HBASE" | "HIVE_WEBHCAT" | "JUPYTER" | "KERBEROS" | "PRESTO" | "RANGER" | "SOLR" | "ZEPPELIN" | "ZOOKEEPER"> - The set of optional components to activate on the cluster.
- properties Map<String> - Optional. The properties to set on daemon config files. Property keys are specified in prefix:property format, for example core:hadoop.tmp.dir. The following are supported prefixes and their mappings: capacity-scheduler: capacity-scheduler.xml core: core-site.xml distcp: distcp-default.xml hdfs: hdfs-site.xml hive: hive-site.xml mapred: mapred-site.xml pig: pig.properties spark: spark-defaults.conf yarn: yarn-site.xml. For more information, see Cluster properties (https://cloud.google.com/dataproc/docs/concepts/cluster-properties).
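The sketch below puts the three SoftwareConfig fields together for a managed cluster; the image version, component names, and property values are illustrative only.
// Sketch only: image version, optional components, and daemon properties.
const clusterSoftwareConfig = {
    imageVersion: "1.5-debian10",                      // example supported version
    optionalComponents: ["JUPYTER", "ZEPPELIN"],
    properties: {
        "core:hadoop.tmp.dir": "/tmp/hadoop",          // prefix:property form
        "spark:spark.executor.memory": "4g",
    },
};
// clusterSoftwareConfig would be set as managedCluster.config.softwareConfig.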
SoftwareConfigOptionalComponentsItem, SoftwareConfigOptionalComponentsItemArgs
- ComponentUnspecified - COMPONENT_UNSPECIFIED: Unspecified component. Specifying this will cause Cluster creation to fail.
- Anaconda - ANACONDA: The Anaconda python distribution. The Anaconda component is not supported in the Dataproc 2.0 image. The 2.0 image is pre-installed with Miniconda.
- Docker - DOCKER: Docker
- Druid - DRUID: The Druid query engine.
- Flink - FLINK: Flink
- Hbase - HBASE: HBase.
- HiveWebhcat - HIVE_WEBHCAT: The Hive Web HCatalog (the REST service for accessing HCatalog).
- Jupyter - JUPYTER: The Jupyter Notebook.
- Kerberos - KERBEROS: The Kerberos security feature.
- Presto - PRESTO: The Presto query engine.
- Ranger - RANGER: The Ranger service.
- Solr - SOLR: The Solr service.
- Zeppelin - ZEPPELIN: The Zeppelin notebook.
- Zookeeper - ZOOKEEPER: The Zookeeper service.
- SoftwareConfigOptionalComponentsItemComponentUnspecified - COMPONENT_UNSPECIFIED: Unspecified component. Specifying this will cause Cluster creation to fail.
- SoftwareConfigOptionalComponentsItemAnaconda - ANACONDA: The Anaconda python distribution. The Anaconda component is not supported in the Dataproc 2.0 image. The 2.0 image is pre-installed with Miniconda.
- SoftwareConfigOptionalComponentsItemDocker - DOCKER: Docker
- SoftwareConfigOptionalComponentsItemDruid - DRUID: The Druid query engine.
- SoftwareConfigOptionalComponentsItemFlink - FLINK: Flink
- SoftwareConfigOptionalComponentsItemHbase - HBASE: HBase.
- SoftwareConfigOptionalComponentsItemHiveWebhcat - HIVE_WEBHCAT: The Hive Web HCatalog (the REST service for accessing HCatalog).
- SoftwareConfigOptionalComponentsItemJupyter - JUPYTER: The Jupyter Notebook.
- SoftwareConfigOptionalComponentsItemKerberos - KERBEROS: The Kerberos security feature.
- SoftwareConfigOptionalComponentsItemPresto - PRESTO: The Presto query engine.
- SoftwareConfigOptionalComponentsItemRanger - RANGER: The Ranger service.
- SoftwareConfigOptionalComponentsItemSolr - SOLR: The Solr service.
- SoftwareConfigOptionalComponentsItemZeppelin - ZEPPELIN: The Zeppelin notebook.
- SoftwareConfigOptionalComponentsItemZookeeper - ZOOKEEPER: The Zookeeper service.
- ComponentUnspecified - COMPONENT_UNSPECIFIED: Unspecified component. Specifying this will cause Cluster creation to fail.
- Anaconda - ANACONDA: The Anaconda python distribution. The Anaconda component is not supported in the Dataproc 2.0 image. The 2.0 image is pre-installed with Miniconda.
- Docker - DOCKER: Docker
- Druid - DRUID: The Druid query engine.
- Flink - FLINK: Flink
- Hbase - HBASE: HBase.
- HiveWebhcat - HIVE_WEBHCAT: The Hive Web HCatalog (the REST service for accessing HCatalog).
- Jupyter - JUPYTER: The Jupyter Notebook.
- Kerberos - KERBEROS: The Kerberos security feature.
- Presto - PRESTO: The Presto query engine.
- Ranger - RANGER: The Ranger service.
- Solr - SOLR: The Solr service.
- Zeppelin - ZEPPELIN: The Zeppelin notebook.
- Zookeeper - ZOOKEEPER: The Zookeeper service.
- ComponentUnspecified - COMPONENT_UNSPECIFIED: Unspecified component. Specifying this will cause Cluster creation to fail.
- Anaconda - ANACONDA: The Anaconda python distribution. The Anaconda component is not supported in the Dataproc 2.0 image. The 2.0 image is pre-installed with Miniconda.
- Docker - DOCKER: Docker
- Druid - DRUID: The Druid query engine.
- Flink - FLINK: Flink
- Hbase - HBASE: HBase.
- HiveWebhcat - HIVE_WEBHCAT: The Hive Web HCatalog (the REST service for accessing HCatalog).
- Jupyter - JUPYTER: The Jupyter Notebook.
- Kerberos - KERBEROS: The Kerberos security feature.
- Presto - PRESTO: The Presto query engine.
- Ranger - RANGER: The Ranger service.
- Solr - SOLR: The Solr service.
- Zeppelin - ZEPPELIN: The Zeppelin notebook.
- Zookeeper - ZOOKEEPER: The Zookeeper service.
- COMPONENT_UNSPECIFIED - COMPONENT_UNSPECIFIED: Unspecified component. Specifying this will cause Cluster creation to fail.
- ANACONDA - ANACONDA: The Anaconda python distribution. The Anaconda component is not supported in the Dataproc 2.0 image. The 2.0 image is pre-installed with Miniconda.
- DOCKER - DOCKER: Docker
- DRUID - DRUID: The Druid query engine.
- FLINK - FLINK: Flink
- HBASE - HBASE: HBase.
- HIVE_WEBHCAT - HIVE_WEBHCAT: The Hive Web HCatalog (the REST service for accessing HCatalog).
- JUPYTER - JUPYTER: The Jupyter Notebook.
- KERBEROS - KERBEROS: The Kerberos security feature.
- PRESTO - PRESTO: The Presto query engine.
- RANGER - RANGER: The Ranger service.
- SOLR - SOLR: The Solr service.
- ZEPPELIN - ZEPPELIN: The Zeppelin notebook.
- ZOOKEEPER - ZOOKEEPER: The Zookeeper service.
- "COMPONENT_UNSPECIFIED" - COMPONENT_UNSPECIFIED: Unspecified component. Specifying this will cause Cluster creation to fail.
- "ANACONDA" - ANACONDA: The Anaconda python distribution. The Anaconda component is not supported in the Dataproc 2.0 image. The 2.0 image is pre-installed with Miniconda.
- "DOCKER" - DOCKER: Docker
- "DRUID" - DRUID: The Druid query engine.
- "FLINK" - FLINK: Flink
- "HBASE" - HBASE: HBase.
- "HIVE_WEBHCAT" - HIVE_WEBHCAT: The Hive Web HCatalog (the REST service for accessing HCatalog).
- "JUPYTER" - JUPYTER: The Jupyter Notebook.
- "KERBEROS" - KERBEROS: The Kerberos security feature.
- "PRESTO" - PRESTO: The Presto query engine.
- "RANGER" - RANGER: The Ranger service.
- "SOLR" - SOLR: The Solr service.
- "ZEPPELIN" - ZEPPELIN: The Zeppelin notebook.
- "ZOOKEEPER" - ZOOKEEPER: The Zookeeper service.
SoftwareConfigResponse, SoftwareConfigResponseArgs
- ImageVersion string - Optional. The version of software inside the cluster. It must be one of the supported Dataproc Versions (https://cloud.google.com/dataproc/docs/concepts/versioning/dataproc-versions#supported_dataproc_versions), such as "1.2" (including a subminor version, such as "1.2.29"), or the "preview" version (https://cloud.google.com/dataproc/docs/concepts/versioning/dataproc-versions#other_versions). If unspecified, it defaults to the latest Debian version.
- OptionalComponents List<string> - The set of optional components to activate on the cluster.
- Properties Dictionary<string, string> - Optional. The properties to set on daemon config files. Property keys are specified in prefix:property format, for example core:hadoop.tmp.dir. The following are supported prefixes and their mappings: capacity-scheduler: capacity-scheduler.xml core: core-site.xml distcp: distcp-default.xml hdfs: hdfs-site.xml hive: hive-site.xml mapred: mapred-site.xml pig: pig.properties spark: spark-defaults.conf yarn: yarn-site.xml. For more information, see Cluster properties (https://cloud.google.com/dataproc/docs/concepts/cluster-properties).
- ImageVersion string - Optional. The version of software inside the cluster. It must be one of the supported Dataproc Versions (https://cloud.google.com/dataproc/docs/concepts/versioning/dataproc-versions#supported_dataproc_versions), such as "1.2" (including a subminor version, such as "1.2.29"), or the "preview" version (https://cloud.google.com/dataproc/docs/concepts/versioning/dataproc-versions#other_versions). If unspecified, it defaults to the latest Debian version.
- OptionalComponents []string - The set of optional components to activate on the cluster.
- Properties map[string]string - Optional. The properties to set on daemon config files. Property keys are specified in prefix:property format, for example core:hadoop.tmp.dir. The following are supported prefixes and their mappings: capacity-scheduler: capacity-scheduler.xml core: core-site.xml distcp: distcp-default.xml hdfs: hdfs-site.xml hive: hive-site.xml mapred: mapred-site.xml pig: pig.properties spark: spark-defaults.conf yarn: yarn-site.xml. For more information, see Cluster properties (https://cloud.google.com/dataproc/docs/concepts/cluster-properties).
- imageVersion String - Optional. The version of software inside the cluster. It must be one of the supported Dataproc Versions (https://cloud.google.com/dataproc/docs/concepts/versioning/dataproc-versions#supported_dataproc_versions), such as "1.2" (including a subminor version, such as "1.2.29"), or the "preview" version (https://cloud.google.com/dataproc/docs/concepts/versioning/dataproc-versions#other_versions). If unspecified, it defaults to the latest Debian version.
- optionalComponents List<String> - The set of optional components to activate on the cluster.
- properties Map<String,String> - Optional. The properties to set on daemon config files. Property keys are specified in prefix:property format, for example core:hadoop.tmp.dir. The following are supported prefixes and their mappings: capacity-scheduler: capacity-scheduler.xml core: core-site.xml distcp: distcp-default.xml hdfs: hdfs-site.xml hive: hive-site.xml mapred: mapred-site.xml pig: pig.properties spark: spark-defaults.conf yarn: yarn-site.xml. For more information, see Cluster properties (https://cloud.google.com/dataproc/docs/concepts/cluster-properties).
- imageVersion string - Optional. The version of software inside the cluster. It must be one of the supported Dataproc Versions (https://cloud.google.com/dataproc/docs/concepts/versioning/dataproc-versions#supported_dataproc_versions), such as "1.2" (including a subminor version, such as "1.2.29"), or the "preview" version (https://cloud.google.com/dataproc/docs/concepts/versioning/dataproc-versions#other_versions). If unspecified, it defaults to the latest Debian version.
- optionalComponents string[] - The set of optional components to activate on the cluster.
- properties {[key: string]: string} - Optional. The properties to set on daemon config files. Property keys are specified in prefix:property format, for example core:hadoop.tmp.dir. The following are supported prefixes and their mappings: capacity-scheduler: capacity-scheduler.xml core: core-site.xml distcp: distcp-default.xml hdfs: hdfs-site.xml hive: hive-site.xml mapred: mapred-site.xml pig: pig.properties spark: spark-defaults.conf yarn: yarn-site.xml. For more information, see Cluster properties (https://cloud.google.com/dataproc/docs/concepts/cluster-properties).
- image_version str - Optional. The version of software inside the cluster. It must be one of the supported Dataproc Versions (https://cloud.google.com/dataproc/docs/concepts/versioning/dataproc-versions#supported_dataproc_versions), such as "1.2" (including a subminor version, such as "1.2.29"), or the "preview" version (https://cloud.google.com/dataproc/docs/concepts/versioning/dataproc-versions#other_versions). If unspecified, it defaults to the latest Debian version.
- optional_components Sequence[str] - The set of optional components to activate on the cluster.
- properties Mapping[str, str] - Optional. The properties to set on daemon config files. Property keys are specified in prefix:property format, for example core:hadoop.tmp.dir. The following are supported prefixes and their mappings: capacity-scheduler: capacity-scheduler.xml core: core-site.xml distcp: distcp-default.xml hdfs: hdfs-site.xml hive: hive-site.xml mapred: mapred-site.xml pig: pig.properties spark: spark-defaults.conf yarn: yarn-site.xml. For more information, see Cluster properties (https://cloud.google.com/dataproc/docs/concepts/cluster-properties).
- imageVersion String - Optional. The version of software inside the cluster. It must be one of the supported Dataproc Versions (https://cloud.google.com/dataproc/docs/concepts/versioning/dataproc-versions#supported_dataproc_versions), such as "1.2" (including a subminor version, such as "1.2.29"), or the "preview" version (https://cloud.google.com/dataproc/docs/concepts/versioning/dataproc-versions#other_versions). If unspecified, it defaults to the latest Debian version.
- optionalComponents List<String> - The set of optional components to activate on the cluster.
- properties Map<String> - Optional. The properties to set on daemon config files. Property keys are specified in prefix:property format, for example core:hadoop.tmp.dir. The following are supported prefixes and their mappings: capacity-scheduler: capacity-scheduler.xml core: core-site.xml distcp: distcp-default.xml hdfs: hdfs-site.xml hive: hive-site.xml mapred: mapred-site.xml pig: pig.properties spark: spark-defaults.conf yarn: yarn-site.xml. For more information, see Cluster properties (https://cloud.google.com/dataproc/docs/concepts/cluster-properties).
SparkJob, SparkJobArgs
- ArchiveUris List<string> - Optional. HCFS URIs of archives to be extracted into the working directory of each executor. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip.
- Args List<string> - Optional. The arguments to pass to the driver. Do not include arguments, such as --conf, that can be set as job properties, since a collision may occur that causes an incorrect job submission.
- FileUris List<string> - Optional. HCFS URIs of files to be placed in the working directory of each executor. Useful for naively parallel tasks.
- JarFileUris List<string> - Optional. HCFS URIs of jar files to add to the CLASSPATHs of the Spark driver and tasks.
- LoggingConfig Pulumi.GoogleNative.Dataproc.V1Beta2.Inputs.LoggingConfig - Optional. The runtime log config for job execution.
- MainClass string - The name of the driver's main class. The jar file that contains the class must be in the default CLASSPATH or specified in jar_file_uris.
- MainJarFileUri string - The HCFS URI of the jar file that contains the main class.
- Properties Dictionary<string, string> - Optional. A mapping of property names to values, used to configure Spark. Properties that conflict with values set by the Dataproc API may be overwritten. Can include properties set in /etc/spark/conf/spark-defaults.conf and classes in user code.
- ArchiveUris []string - Optional. HCFS URIs of archives to be extracted into the working directory of each executor. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip.
- Args []string - Optional. The arguments to pass to the driver. Do not include arguments, such as --conf, that can be set as job properties, since a collision may occur that causes an incorrect job submission.
- FileUris []string - Optional. HCFS URIs of files to be placed in the working directory of each executor. Useful for naively parallel tasks.
- JarFileUris []string - Optional. HCFS URIs of jar files to add to the CLASSPATHs of the Spark driver and tasks.
- LoggingConfig LoggingConfig - Optional. The runtime log config for job execution.
- MainClass string - The name of the driver's main class. The jar file that contains the class must be in the default CLASSPATH or specified in jar_file_uris.
- MainJarFileUri string - The HCFS URI of the jar file that contains the main class.
- Properties map[string]string - Optional. A mapping of property names to values, used to configure Spark. Properties that conflict with values set by the Dataproc API may be overwritten. Can include properties set in /etc/spark/conf/spark-defaults.conf and classes in user code.
- archiveUris List<String> - Optional. HCFS URIs of archives to be extracted into the working directory of each executor. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip.
- args List<String> - Optional. The arguments to pass to the driver. Do not include arguments, such as --conf, that can be set as job properties, since a collision may occur that causes an incorrect job submission.
- fileUris List<String> - Optional. HCFS URIs of files to be placed in the working directory of each executor. Useful for naively parallel tasks.
- jarFileUris List<String> - Optional. HCFS URIs of jar files to add to the CLASSPATHs of the Spark driver and tasks.
- loggingConfig LoggingConfig - Optional. The runtime log config for job execution.
- mainClass String - The name of the driver's main class. The jar file that contains the class must be in the default CLASSPATH or specified in jar_file_uris.
- mainJarFileUri String - The HCFS URI of the jar file that contains the main class.
- properties Map<String,String> - Optional. A mapping of property names to values, used to configure Spark. Properties that conflict with values set by the Dataproc API may be overwritten. Can include properties set in /etc/spark/conf/spark-defaults.conf and classes in user code.
- archiveUris string[] - Optional. HCFS URIs of archives to be extracted into the working directory of each executor. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip.
- args string[] - Optional. The arguments to pass to the driver. Do not include arguments, such as --conf, that can be set as job properties, since a collision may occur that causes an incorrect job submission.
- fileUris string[] - Optional. HCFS URIs of files to be placed in the working directory of each executor. Useful for naively parallel tasks.
- jarFileUris string[] - Optional. HCFS URIs of jar files to add to the CLASSPATHs of the Spark driver and tasks.
- loggingConfig LoggingConfig - Optional. The runtime log config for job execution.
- mainClass string - The name of the driver's main class. The jar file that contains the class must be in the default CLASSPATH or specified in jar_file_uris.
- mainJarFileUri string - The HCFS URI of the jar file that contains the main class.
- properties {[key: string]: string} - Optional. A mapping of property names to values, used to configure Spark. Properties that conflict with values set by the Dataproc API may be overwritten. Can include properties set in /etc/spark/conf/spark-defaults.conf and classes in user code.
- archive_uris Sequence[str] - Optional. HCFS URIs of archives to be extracted into the working directory of each executor. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip.
- args Sequence[str] - Optional. The arguments to pass to the driver. Do not include arguments, such as --conf, that can be set as job properties, since a collision may occur that causes an incorrect job submission.
- file_uris Sequence[str] - Optional. HCFS URIs of files to be placed in the working directory of each executor. Useful for naively parallel tasks.
- jar_file_uris Sequence[str] - Optional. HCFS URIs of jar files to add to the CLASSPATHs of the Spark driver and tasks.
- logging_config LoggingConfig - Optional. The runtime log config for job execution.
- main_class str - The name of the driver's main class. The jar file that contains the class must be in the default CLASSPATH or specified in jar_file_uris.
- main_jar_file_uri str - The HCFS URI of the jar file that contains the main class.
- properties Mapping[str, str] - Optional. A mapping of property names to values, used to configure Spark. Properties that conflict with values set by the Dataproc API may be overwritten. Can include properties set in /etc/spark/conf/spark-defaults.conf and classes in user code.
- archiveUris List<String> - Optional. HCFS URIs of archives to be extracted into the working directory of each executor. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip.
- args List<String> - Optional. The arguments to pass to the driver. Do not include arguments, such as --conf, that can be set as job properties, since a collision may occur that causes an incorrect job submission.
- fileUris List<String> - Optional. HCFS URIs of files to be placed in the working directory of each executor. Useful for naively parallel tasks.
- jarFileUris List<String> - Optional. HCFS URIs of jar files to add to the CLASSPATHs of the Spark driver and tasks.
- loggingConfig Property Map - Optional. The runtime log config for job execution.
- mainClass String - The name of the driver's main class. The jar file that contains the class must be in the default CLASSPATH or specified in jar_file_uris.
- mainJarFileUri String - The HCFS URI of the jar file that contains the main class.
- properties Map<String> - Optional. A mapping of property names to values, used to configure Spark. Properties that conflict with values set by the Dataproc API may be overwritten. Can include properties set in /etc/spark/conf/spark-defaults.conf and classes in user code.
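For context, here is a hedged TypeScript sketch of an ordered Spark step; the jar URI, class name, and property values are placeholders, and in practice either mainClass or mainJarFileUri would be set, not both.
// Sketch only: a Spark step within the template's `jobs` list.
const sparkStep = {
    stepId: "spark-pi",                                                        // illustrative step name
    sparkJob: {
        mainClass: "org.apache.spark.examples.SparkPi",
        jarFileUris: ["file:///usr/lib/spark/examples/jars/spark-examples.jar"],
        args: ["1000"],
        properties: { "spark.executor.cores": "2" },
        loggingConfig: { driverLogLevels: { root: "INFO" } },
    },
};
// sparkStep would be appended to the template's `jobs` input.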
SparkJobResponse, SparkJobResponseArgs
- ArchiveUris List<string> - Optional. HCFS URIs of archives to be extracted into the working directory of each executor. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip.
- Args List<string> - Optional. The arguments to pass to the driver. Do not include arguments, such as --conf, that can be set as job properties, since a collision may occur that causes an incorrect job submission.
- FileUris List<string> - Optional. HCFS URIs of files to be placed in the working directory of each executor. Useful for naively parallel tasks.
- JarFileUris List<string> - Optional. HCFS URIs of jar files to add to the CLASSPATHs of the Spark driver and tasks.
- LoggingConfig Pulumi.GoogleNative.Dataproc.V1Beta2.Inputs.LoggingConfigResponse - Optional. The runtime log config for job execution.
- MainClass string - The name of the driver's main class. The jar file that contains the class must be in the default CLASSPATH or specified in jar_file_uris.
- MainJarFileUri string - The HCFS URI of the jar file that contains the main class.
- Properties Dictionary<string, string> - Optional. A mapping of property names to values, used to configure Spark. Properties that conflict with values set by the Dataproc API may be overwritten. Can include properties set in /etc/spark/conf/spark-defaults.conf and classes in user code.
- ArchiveUris []string - Optional. HCFS URIs of archives to be extracted into the working directory of each executor. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip.
- Args []string - Optional. The arguments to pass to the driver. Do not include arguments, such as --conf, that can be set as job properties, since a collision may occur that causes an incorrect job submission.
- FileUris []string - Optional. HCFS URIs of files to be placed in the working directory of each executor. Useful for naively parallel tasks.
- JarFileUris []string - Optional. HCFS URIs of jar files to add to the CLASSPATHs of the Spark driver and tasks.
- LoggingConfig LoggingConfigResponse - Optional. The runtime log config for job execution.
- MainClass string - The name of the driver's main class. The jar file that contains the class must be in the default CLASSPATH or specified in jar_file_uris.
- MainJarFileUri string - The HCFS URI of the jar file that contains the main class.
- Properties map[string]string - Optional. A mapping of property names to values, used to configure Spark. Properties that conflict with values set by the Dataproc API may be overwritten. Can include properties set in /etc/spark/conf/spark-defaults.conf and classes in user code.
- archiveUris List<String> - Optional. HCFS URIs of archives to be extracted into the working directory of each executor. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip.
- args List<String> - Optional. The arguments to pass to the driver. Do not include arguments, such as --conf, that can be set as job properties, since a collision may occur that causes an incorrect job submission.
- fileUris List<String> - Optional. HCFS URIs of files to be placed in the working directory of each executor. Useful for naively parallel tasks.
- jarFileUris List<String> - Optional. HCFS URIs of jar files to add to the CLASSPATHs of the Spark driver and tasks.
- loggingConfig LoggingConfigResponse - Optional. The runtime log config for job execution.
- mainClass String - The name of the driver's main class. The jar file that contains the class must be in the default CLASSPATH or specified in jar_file_uris.
- mainJarFileUri String - The HCFS URI of the jar file that contains the main class.
- properties Map<String,String> - Optional. A mapping of property names to values, used to configure Spark. Properties that conflict with values set by the Dataproc API may be overwritten. Can include properties set in /etc/spark/conf/spark-defaults.conf and classes in user code.
- archiveUris string[] - Optional. HCFS URIs of archives to be extracted into the working directory of each executor. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip.
- args string[] - Optional. The arguments to pass to the driver. Do not include arguments, such as --conf, that can be set as job properties, since a collision may occur that causes an incorrect job submission.
- fileUris string[] - Optional. HCFS URIs of files to be placed in the working directory of each executor. Useful for naively parallel tasks.
- jarFileUris string[] - Optional. HCFS URIs of jar files to add to the CLASSPATHs of the Spark driver and tasks.
- loggingConfig LoggingConfigResponse - Optional. The runtime log config for job execution.
- mainClass string - The name of the driver's main class. The jar file that contains the class must be in the default CLASSPATH or specified in jar_file_uris.
- mainJarFileUri string - The HCFS URI of the jar file that contains the main class.
- properties {[key: string]: string} - Optional. A mapping of property names to values, used to configure Spark. Properties that conflict with values set by the Dataproc API may be overwritten. Can include properties set in /etc/spark/conf/spark-defaults.conf and classes in user code.
- archive_uris Sequence[str] - Optional. HCFS URIs of archives to be extracted into the working directory of each executor. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip.
- args Sequence[str] - Optional. The arguments to pass to the driver. Do not include arguments, such as --conf, that can be set as job properties, since a collision may occur that causes an incorrect job submission.
- file_uris Sequence[str] - Optional. HCFS URIs of files to be placed in the working directory of each executor. Useful for naively parallel tasks.
- jar_file_uris Sequence[str] - Optional. HCFS URIs of jar files to add to the CLASSPATHs of the Spark driver and tasks.
- logging_config LoggingConfigResponse - Optional. The runtime log config for job execution.
- main_class str - The name of the driver's main class. The jar file that contains the class must be in the default CLASSPATH or specified in jar_file_uris.
- main_jar_file_uri str - The HCFS URI of the jar file that contains the main class.
- properties Mapping[str, str] - Optional. A mapping of property names to values, used to configure Spark. Properties that conflict with values set by the Dataproc API may be overwritten. Can include properties set in /etc/spark/conf/spark-defaults.conf and classes in user code.
- archiveUris List<String> - Optional. HCFS URIs of archives to be extracted into the working directory of each executor. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip.
- args List<String> - Optional. The arguments to pass to the driver. Do not include arguments, such as --conf, that can be set as job properties, since a collision may occur that causes an incorrect job submission.
- fileUris List<String> - Optional. HCFS URIs of files to be placed in the working directory of each executor. Useful for naively parallel tasks.
- jarFileUris List<String> - Optional. HCFS URIs of jar files to add to the CLASSPATHs of the Spark driver and tasks.
- loggingConfig Property Map - Optional. The runtime log config for job execution.
- mainClass String - The name of the driver's main class. The jar file that contains the class must be in the default CLASSPATH or specified in jar_file_uris.
- mainJarFileUri String - The HCFS URI of the jar file that contains the main class.
- properties Map<String> - Optional. A mapping of property names to values, used to configure Spark. Properties that conflict with values set by the Dataproc API may be overwritten. Can include properties set in /etc/spark/conf/spark-defaults.conf and classes in user code.
SparkRJob, SparkRJobArgs
- MainRFileUri string - The HCFS URI of the main R file to use as the driver. Must be a .R file.
- ArchiveUris List<string> - Optional. HCFS URIs of archives to be extracted into the working directory of each executor. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip.
- Args List<string> - Optional. The arguments to pass to the driver. Do not include arguments, such as --conf, that can be set as job properties, since a collision may occur that causes an incorrect job submission.
- FileUris List<string> - Optional. HCFS URIs of files to be placed in the working directory of each executor. Useful for naively parallel tasks.
- LoggingConfig Pulumi.GoogleNative.Dataproc.V1Beta2.Inputs.LoggingConfig - Optional. The runtime log config for job execution.
- Properties Dictionary<string, string> - Optional. A mapping of property names to values, used to configure SparkR. Properties that conflict with values set by the Dataproc API may be overwritten. Can include properties set in /etc/spark/conf/spark-defaults.conf and classes in user code.
- MainRFileUri string - The HCFS URI of the main R file to use as the driver. Must be a .R file.
- ArchiveUris []string - Optional. HCFS URIs of archives to be extracted into the working directory of each executor. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip.
- Args []string - Optional. The arguments to pass to the driver. Do not include arguments, such as --conf, that can be set as job properties, since a collision may occur that causes an incorrect job submission.
- FileUris []string - Optional. HCFS URIs of files to be placed in the working directory of each executor. Useful for naively parallel tasks.
- LoggingConfig LoggingConfig - Optional. The runtime log config for job execution.
- Properties map[string]string - Optional. A mapping of property names to values, used to configure SparkR. Properties that conflict with values set by the Dataproc API may be overwritten. Can include properties set in /etc/spark/conf/spark-defaults.conf and classes in user code.
- mainRFileUri String - The HCFS URI of the main R file to use as the driver. Must be a .R file.
- archiveUris List<String> - Optional. HCFS URIs of archives to be extracted into the working directory of each executor. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip.
- args List<String> - Optional. The arguments to pass to the driver. Do not include arguments, such as --conf, that can be set as job properties, since a collision may occur that causes an incorrect job submission.
- fileUris List<String> - Optional. HCFS URIs of files to be placed in the working directory of each executor. Useful for naively parallel tasks.
- loggingConfig LoggingConfig - Optional. The runtime log config for job execution.
- properties Map<String,String> - Optional. A mapping of property names to values, used to configure SparkR. Properties that conflict with values set by the Dataproc API may be overwritten. Can include properties set in /etc/spark/conf/spark-defaults.conf and classes in user code.
- mainRFileUri string - The HCFS URI of the main R file to use as the driver. Must be a .R file.
- archiveUris string[] - Optional. HCFS URIs of archives to be extracted into the working directory of each executor. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip.
- args string[] - Optional. The arguments to pass to the driver. Do not include arguments, such as --conf, that can be set as job properties, since a collision may occur that causes an incorrect job submission.
- fileUris string[] - Optional. HCFS URIs of files to be placed in the working directory of each executor. Useful for naively parallel tasks.
- loggingConfig LoggingConfig - Optional. The runtime log config for job execution.
- properties {[key: string]: string} - Optional. A mapping of property names to values, used to configure SparkR. Properties that conflict with values set by the Dataproc API may be overwritten. Can include properties set in /etc/spark/conf/spark-defaults.conf and classes in user code.
- main_r_file_uri str - The HCFS URI of the main R file to use as the driver. Must be a .R file.
- archive_uris Sequence[str] - Optional. HCFS URIs of archives to be extracted into the working directory of each executor. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip.
- args Sequence[str] - Optional. The arguments to pass to the driver. Do not include arguments, such as --conf, that can be set as job properties, since a collision may occur that causes an incorrect job submission.
- file_uris Sequence[str] - Optional. HCFS URIs of files to be placed in the working directory of each executor. Useful for naively parallel tasks.
- logging_config LoggingConfig - Optional. The runtime log config for job execution.
- properties Mapping[str, str] - Optional. A mapping of property names to values, used to configure SparkR. Properties that conflict with values set by the Dataproc API may be overwritten. Can include properties set in /etc/spark/conf/spark-defaults.conf and classes in user code.
- mainRFileUri String - The HCFS URI of the main R file to use as the driver. Must be a .R file.
- archiveUris List<String> - Optional. HCFS URIs of archives to be extracted into the working directory of each executor. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip.
- args List<String> - Optional. The arguments to pass to the driver. Do not include arguments, such as --conf, that can be set as job properties, since a collision may occur that causes an incorrect job submission.
- fileUris List<String> - Optional. HCFS URIs of files to be placed in the working directory of each executor. Useful for naively parallel tasks.
- loggingConfig Property Map - Optional. The runtime log config for job execution.
- properties Map<String> - Optional. A mapping of property names to values, used to configure SparkR. Properties that conflict with values set by the Dataproc API may be overwritten. Can include properties set in /etc/spark/conf/spark-defaults.conf and classes in user code.
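A comparable sketch for a SparkR step follows; the Cloud Storage paths and arguments are placeholders.
// Sketch only: a SparkR step driven by an R script in Cloud Storage.
const sparkRStep = {
    stepId: "sparkr-analysis",                       // illustrative step name
    sparkRJob: {
        mainRFileUri: "gs://my-bucket/jobs/analysis.R",
        args: ["--rows", "10000"],
        fileUris: ["gs://my-bucket/data/input.csv"],
    },
};
// sparkRStep would be appended to the template's `jobs` input.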
SparkRJobResponse, SparkRJobResponseArgs
- Archive
Uris List<string> - Optional. HCFS URIs of archives to be extracted into the working directory of each executor. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip.
- Args List<string>
- Optional. The arguments to pass to the driver. Do not include arguments, such as --conf, that can be set as job properties, since a collision may occur that causes an incorrect job submission.
- File
Uris List<string> - Optional. HCFS URIs of files to be placed in the working directory of each executor. Useful for naively parallel tasks.
- Logging
Config Pulumi.Google Native. Dataproc. V1Beta2. Inputs. Logging Config Response - Optional. The runtime log config for job execution.
- Main
RFile stringUri - The HCFS URI of the main R file to use as the driver. Must be a .R file.
- Properties Dictionary<string, string>
- Optional. A mapping of property names to values, used to configure SparkR. Properties that conflict with values set by the Dataproc API may be overwritten. Can include properties set in /etc/spark/conf/spark-defaults.conf and classes in user code.
- Archive
Uris []string - Optional. HCFS URIs of archives to be extracted into the working directory of each executor. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip.
- Args []string
- Optional. The arguments to pass to the driver. Do not include arguments, such as --conf, that can be set as job properties, since a collision may occur that causes an incorrect job submission.
- File
Uris []string - Optional. HCFS URIs of files to be placed in the working directory of each executor. Useful for naively parallel tasks.
- Logging
Config LoggingConfig Response - Optional. The runtime log config for job execution.
- Main
RFile stringUri - The HCFS URI of the main R file to use as the driver. Must be a .R file.
- Properties map[string]string
- Optional. A mapping of property names to values, used to configure SparkR. Properties that conflict with values set by the Dataproc API may be overwritten. Can include properties set in /etc/spark/conf/spark-defaults.conf and classes in user code.
- archive
Uris List<String> - Optional. HCFS URIs of archives to be extracted into the working directory of each executor. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip.
- args List<String>
- Optional. The arguments to pass to the driver. Do not include arguments, such as --conf, that can be set as job properties, since a collision may occur that causes an incorrect job submission.
- file
Uris List<String> - Optional. HCFS URIs of files to be placed in the working directory of each executor. Useful for naively parallel tasks.
- logging
Config LoggingConfig Response - Optional. The runtime log config for job execution.
- main
RFile StringUri - The HCFS URI of the main R file to use as the driver. Must be a .R file.
- properties Map<String,String>
- Optional. A mapping of property names to values, used to configure SparkR. Properties that conflict with values set by the Dataproc API may be overwritten. Can include properties set in /etc/spark/conf/spark-defaults.conf and classes in user code.
- archive
Uris string[] - Optional. HCFS URIs of archives to be extracted into the working directory of each executor. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip.
- args string[]
- Optional. The arguments to pass to the driver. Do not include arguments, such as --conf, that can be set as job properties, since a collision may occur that causes an incorrect job submission.
- file
Uris string[] - Optional. HCFS URIs of files to be placed in the working directory of each executor. Useful for naively parallel tasks.
- logging
Config LoggingConfig Response - Optional. The runtime log config for job execution.
- main
RFile stringUri - The HCFS URI of the main R file to use as the driver. Must be a .R file.
- properties {[key: string]: string}
- Optional. A mapping of property names to values, used to configure SparkR. Properties that conflict with values set by the Dataproc API may be overwritten. Can include properties set in /etc/spark/conf/spark-defaults.conf and classes in user code.
- archive_uris Sequence[str]
- Optional. HCFS URIs of archives to be extracted into the working directory of each executor. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip.
- args Sequence[str]
- Optional. The arguments to pass to the driver. Do not include arguments, such as --conf, that can be set as job properties, since a collision may occur that causes an incorrect job submission.
- file_uris Sequence[str]
- Optional. HCFS URIs of files to be placed in the working directory of each executor. Useful for naively parallel tasks.
- logging_config LoggingConfigResponse
- Optional. The runtime log config for job execution.
- main_r_file_uri str
- The HCFS URI of the main R file to use as the driver. Must be a .R file.
- properties Mapping[str, str]
- Optional. A mapping of property names to values, used to configure SparkR. Properties that conflict with values set by the Dataproc API may be overwritten. Can include properties set in /etc/spark/conf/spark-defaults.conf and classes in user code.
- archiveUris List<String>
- Optional. HCFS URIs of archives to be extracted into the working directory of each executor. Supported file types: .jar, .tar, .tar.gz, .tgz, and .zip.
- args List<String>
- Optional. The arguments to pass to the driver. Do not include arguments, such as --conf, that can be set as job properties, since a collision may occur that causes an incorrect job submission.
- fileUris List<String>
- Optional. HCFS URIs of files to be placed in the working directory of each executor. Useful for naively parallel tasks.
- loggingConfig Property Map
- Optional. The runtime log config for job execution.
- mainRFileUri String
- The HCFS URI of the main R file to use as the driver. Must be a .R file.
- properties Map<String>
- Optional. A mapping of property names to values, used to configure SparkR. Properties that conflict with values set by the Dataproc API may be overwritten. Can include properties set in /etc/spark/conf/spark-defaults.conf and classes in user code.
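The SparkR job fields above correspond to the sparkRJob block of an OrderedJob entry in the template's jobs list. The following TypeScript sketch shows one plausible way to wire them together; the template id, region, bucket paths, and cluster label are placeholder values rather than part of this reference.

import * as google from "@pulumi/google-native";

// Minimal sketch: a workflow template whose single step runs a SparkR driver
// on whichever existing cluster carries the given label (placeholder values).
const sparkRTemplate = new google.dataproc.v1beta2.WorkflowTemplate("sparkr-template", {
    id: "sparkr-template",
    location: "us-central1",
    placement: {
        clusterSelector: {
            clusterLabels: { env: "analytics" },
        },
    },
    jobs: [{
        stepId: "score-model",
        sparkRJob: {
            mainRFileUri: "gs://my-bucket/jobs/score.R",          // must be a .R file
            args: ["--input", "gs://my-bucket/data/events.csv"],  // plain args only; no --conf here
            fileUris: ["gs://my-bucket/data/lookup.csv"],
            archiveUris: ["gs://my-bucket/deps/r-packages.zip"],
            properties: { "spark.executor.memory": "4g" },
        },
    }],
});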
SparkSqlJob, SparkSqlJobArgs
- JarFileUris List<string>
- Optional. HCFS URIs of jar files to be added to the Spark CLASSPATH.
- LoggingConfig Pulumi.GoogleNative.Dataproc.V1Beta2.Inputs.LoggingConfig
- Optional. The runtime log config for job execution.
- Properties Dictionary<string, string>
- Optional. A mapping of property names to values, used to configure Spark SQL's SparkConf. Properties that conflict with values set by the Dataproc API may be overwritten.
- QueryFileUri string
- The HCFS URI of the script that contains SQL queries.
- QueryList Pulumi.GoogleNative.Dataproc.V1Beta2.Inputs.QueryList
- A list of queries.
- ScriptVariables Dictionary<string, string>
- Optional. Mapping of query variable names to values (equivalent to the Spark SQL command: SET name="value";).
- JarFileUris []string
- Optional. HCFS URIs of jar files to be added to the Spark CLASSPATH.
- LoggingConfig LoggingConfig
- Optional. The runtime log config for job execution.
- Properties map[string]string
- Optional. A mapping of property names to values, used to configure Spark SQL's SparkConf. Properties that conflict with values set by the Dataproc API may be overwritten.
- QueryFileUri string
- The HCFS URI of the script that contains SQL queries.
- QueryList QueryList
- A list of queries.
- ScriptVariables map[string]string
- Optional. Mapping of query variable names to values (equivalent to the Spark SQL command: SET name="value";).
- jarFileUris List<String>
- Optional. HCFS URIs of jar files to be added to the Spark CLASSPATH.
- loggingConfig LoggingConfig
- Optional. The runtime log config for job execution.
- properties Map<String,String>
- Optional. A mapping of property names to values, used to configure Spark SQL's SparkConf. Properties that conflict with values set by the Dataproc API may be overwritten.
- queryFileUri String
- The HCFS URI of the script that contains SQL queries.
- queryList QueryList
- A list of queries.
- scriptVariables Map<String,String>
- Optional. Mapping of query variable names to values (equivalent to the Spark SQL command: SET name="value";).
- jarFileUris string[]
- Optional. HCFS URIs of jar files to be added to the Spark CLASSPATH.
- loggingConfig LoggingConfig
- Optional. The runtime log config for job execution.
- properties {[key: string]: string}
- Optional. A mapping of property names to values, used to configure Spark SQL's SparkConf. Properties that conflict with values set by the Dataproc API may be overwritten.
- queryFileUri string
- The HCFS URI of the script that contains SQL queries.
- queryList QueryList
- A list of queries.
- scriptVariables {[key: string]: string}
- Optional. Mapping of query variable names to values (equivalent to the Spark SQL command: SET name="value";).
- jar_file_uris Sequence[str]
- Optional. HCFS URIs of jar files to be added to the Spark CLASSPATH.
- logging_config LoggingConfig
- Optional. The runtime log config for job execution.
- properties Mapping[str, str]
- Optional. A mapping of property names to values, used to configure Spark SQL's SparkConf. Properties that conflict with values set by the Dataproc API may be overwritten.
- query_file_uri str
- The HCFS URI of the script that contains SQL queries.
- query_list QueryList
- A list of queries.
- script_variables Mapping[str, str]
- Optional. Mapping of query variable names to values (equivalent to the Spark SQL command: SET name="value";).
- jarFileUris List<String>
- Optional. HCFS URIs of jar files to be added to the Spark CLASSPATH.
- loggingConfig Property Map
- Optional. The runtime log config for job execution.
- properties Map<String>
- Optional. A mapping of property names to values, used to configure Spark SQL's SparkConf. Properties that conflict with values set by the Dataproc API may be overwritten.
- queryFileUri String
- The HCFS URI of the script that contains SQL queries.
- queryList Property Map
- A list of queries.
- scriptVariables Map<String>
- Optional. Mapping of query variable names to values (equivalent to the Spark SQL command: SET name="value";).
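As a rough illustration of the SparkSqlJob inputs listed above, the TypeScript fragment below defines a single OrderedJob step (to be placed in the same jobs array as in the earlier sketch) that runs inline queries with a substituted script variable; the query text, the env variable, and the jar URI are illustrative assumptions.

// Sketch of a Spark SQL step. scriptVariables behaves like `SET name="value";`
// before the queries run, so ${env} below resolves to "prod".
// queryFileUri is the alternative to queryList for a script stored in GCS.
const sqlStep = {
    stepId: "refresh-reporting",
    sparkSqlJob: {
        queryList: {
            queries: [
                "CREATE DATABASE IF NOT EXISTS reporting_${env}",
                "INSERT OVERWRITE TABLE reporting_${env}.daily SELECT * FROM raw.events",
            ],
        },
        scriptVariables: { env: "prod" },
        jarFileUris: ["gs://my-bucket/jars/custom-udfs.jar"],  // extra UDF jars on the CLASSPATH
        loggingConfig: { driverLogLevels: { root: "INFO" } },
    },
};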
SparkSqlJobResponse, SparkSqlJobResponseArgs
- JarFileUris List<string>
- Optional. HCFS URIs of jar files to be added to the Spark CLASSPATH.
- LoggingConfig Pulumi.GoogleNative.Dataproc.V1Beta2.Inputs.LoggingConfigResponse
- Optional. The runtime log config for job execution.
- Properties Dictionary<string, string>
- Optional. A mapping of property names to values, used to configure Spark SQL's SparkConf. Properties that conflict with values set by the Dataproc API may be overwritten.
- QueryFileUri string
- The HCFS URI of the script that contains SQL queries.
- QueryList Pulumi.GoogleNative.Dataproc.V1Beta2.Inputs.QueryListResponse
- A list of queries.
- ScriptVariables Dictionary<string, string>
- Optional. Mapping of query variable names to values (equivalent to the Spark SQL command: SET name="value";).
- JarFileUris []string
- Optional. HCFS URIs of jar files to be added to the Spark CLASSPATH.
- LoggingConfig LoggingConfigResponse
- Optional. The runtime log config for job execution.
- Properties map[string]string
- Optional. A mapping of property names to values, used to configure Spark SQL's SparkConf. Properties that conflict with values set by the Dataproc API may be overwritten.
- QueryFileUri string
- The HCFS URI of the script that contains SQL queries.
- QueryList QueryListResponse
- A list of queries.
- ScriptVariables map[string]string
- Optional. Mapping of query variable names to values (equivalent to the Spark SQL command: SET name="value";).
- jarFileUris List<String>
- Optional. HCFS URIs of jar files to be added to the Spark CLASSPATH.
- loggingConfig LoggingConfigResponse
- Optional. The runtime log config for job execution.
- properties Map<String,String>
- Optional. A mapping of property names to values, used to configure Spark SQL's SparkConf. Properties that conflict with values set by the Dataproc API may be overwritten.
- queryFileUri String
- The HCFS URI of the script that contains SQL queries.
- queryList QueryListResponse
- A list of queries.
- scriptVariables Map<String,String>
- Optional. Mapping of query variable names to values (equivalent to the Spark SQL command: SET name="value";).
- jarFileUris string[]
- Optional. HCFS URIs of jar files to be added to the Spark CLASSPATH.
- loggingConfig LoggingConfigResponse
- Optional. The runtime log config for job execution.
- properties {[key: string]: string}
- Optional. A mapping of property names to values, used to configure Spark SQL's SparkConf. Properties that conflict with values set by the Dataproc API may be overwritten.
- queryFileUri string
- The HCFS URI of the script that contains SQL queries.
- queryList QueryListResponse
- A list of queries.
- scriptVariables {[key: string]: string}
- Optional. Mapping of query variable names to values (equivalent to the Spark SQL command: SET name="value";).
- jar_file_uris Sequence[str]
- Optional. HCFS URIs of jar files to be added to the Spark CLASSPATH.
- logging_config LoggingConfigResponse
- Optional. The runtime log config for job execution.
- properties Mapping[str, str]
- Optional. A mapping of property names to values, used to configure Spark SQL's SparkConf. Properties that conflict with values set by the Dataproc API may be overwritten.
- query_file_uri str
- The HCFS URI of the script that contains SQL queries.
- query_list QueryListResponse
- A list of queries.
- script_variables Mapping[str, str]
- Optional. Mapping of query variable names to values (equivalent to the Spark SQL command: SET name="value";).
- jarFileUris List<String>
- Optional. HCFS URIs of jar files to be added to the Spark CLASSPATH.
- loggingConfig Property Map
- Optional. The runtime log config for job execution.
- properties Map<String>
- Optional. A mapping of property names to values, used to configure Spark SQL's SparkConf. Properties that conflict with values set by the Dataproc API may be overwritten.
- queryFileUri String
- The HCFS URI of the script that contains SQL queries.
- queryList Property Map
- A list of queries.
- scriptVariables Map<String>
- Optional. Mapping of query variable names to values (equivalent to the Spark SQL command: SET name="value";).
TemplateParameter, TemplateParameterArgs
- Fields List<string>
- Paths to all fields that the parameter replaces. A field is allowed to appear in at most one parameter's list of field paths. A field path is similar in syntax to a google.protobuf.FieldMask. For example, a field path that references the zone field of a workflow template's cluster selector would be specified as placement.clusterSelector.zone. Field paths can also reference fields using the following syntax: Values in maps can be referenced by key: labels['key'], placement.clusterSelector.clusterLabels['key'], placement.managedCluster.labels['key'], jobs['step-id'].labels['key']. Jobs in the jobs list can be referenced by step-id: jobs['step-id'].hadoopJob.mainJarFileUri, jobs['step-id'].hiveJob.queryFileUri, jobs['step-id'].pySparkJob.mainPythonFileUri, jobs['step-id'].hadoopJob.jarFileUris[0], jobs['step-id'].hadoopJob.archiveUris[0], jobs['step-id'].hadoopJob.fileUris[0], jobs['step-id'].pySparkJob.pythonFileUris[0]. Items in repeated fields can be referenced by a zero-based index: jobs['step-id'].sparkJob.args[0]. Other examples: jobs['step-id'].hadoopJob.properties['key'], jobs['step-id'].hadoopJob.args[0], jobs['step-id'].hiveJob.scriptVariables['key'], jobs['step-id'].hadoopJob.mainJarFileUri, placement.clusterSelector.zone. It may not be possible to parameterize maps and repeated fields in their entirety, since only individual map values and individual items in repeated fields can be referenced. For example, the following field paths are invalid: placement.clusterSelector.clusterLabels, jobs['step-id'].sparkJob.args
- Name string
- Parameter name. The parameter name is used as the key, and paired with the parameter value, which are passed to the template when the template is instantiated. The name must contain only capital letters (A-Z), numbers (0-9), and underscores (_), and must not start with a number. The maximum length is 40 characters.
- Description string
- Optional. Brief description of the parameter. Must not exceed 1024 characters.
- Validation Pulumi.GoogleNative.Dataproc.V1Beta2.Inputs.ParameterValidation
- Optional. Validation rules to be applied to this parameter's value.
- Fields []string
- Paths to all fields that the parameter replaces. A field is allowed to appear in at most one parameter's list of field paths. A field path is similar in syntax to a google.protobuf.FieldMask. For example, a field path that references the zone field of a workflow template's cluster selector would be specified as placement.clusterSelector.zone. Field paths can also reference fields using the following syntax: Values in maps can be referenced by key: labels['key'], placement.clusterSelector.clusterLabels['key'], placement.managedCluster.labels['key'], jobs['step-id'].labels['key']. Jobs in the jobs list can be referenced by step-id: jobs['step-id'].hadoopJob.mainJarFileUri, jobs['step-id'].hiveJob.queryFileUri, jobs['step-id'].pySparkJob.mainPythonFileUri, jobs['step-id'].hadoopJob.jarFileUris[0], jobs['step-id'].hadoopJob.archiveUris[0], jobs['step-id'].hadoopJob.fileUris[0], jobs['step-id'].pySparkJob.pythonFileUris[0]. Items in repeated fields can be referenced by a zero-based index: jobs['step-id'].sparkJob.args[0]. Other examples: jobs['step-id'].hadoopJob.properties['key'], jobs['step-id'].hadoopJob.args[0], jobs['step-id'].hiveJob.scriptVariables['key'], jobs['step-id'].hadoopJob.mainJarFileUri, placement.clusterSelector.zone. It may not be possible to parameterize maps and repeated fields in their entirety, since only individual map values and individual items in repeated fields can be referenced. For example, the following field paths are invalid: placement.clusterSelector.clusterLabels, jobs['step-id'].sparkJob.args
- Name string
- Parameter name. The parameter name is used as the key, and paired with the parameter value, which are passed to the template when the template is instantiated. The name must contain only capital letters (A-Z), numbers (0-9), and underscores (_), and must not start with a number. The maximum length is 40 characters.
- Description string
- Optional. Brief description of the parameter. Must not exceed 1024 characters.
- Validation ParameterValidation
- Optional. Validation rules to be applied to this parameter's value.
- fields List<String>
- Paths to all fields that the parameter replaces. A field is allowed to appear in at most one parameter's list of field paths. A field path is similar in syntax to a google.protobuf.FieldMask. For example, a field path that references the zone field of a workflow template's cluster selector would be specified as placement.clusterSelector.zone. Field paths can also reference fields using the following syntax: Values in maps can be referenced by key: labels['key'], placement.clusterSelector.clusterLabels['key'], placement.managedCluster.labels['key'], jobs['step-id'].labels['key']. Jobs in the jobs list can be referenced by step-id: jobs['step-id'].hadoopJob.mainJarFileUri, jobs['step-id'].hiveJob.queryFileUri, jobs['step-id'].pySparkJob.mainPythonFileUri, jobs['step-id'].hadoopJob.jarFileUris[0], jobs['step-id'].hadoopJob.archiveUris[0], jobs['step-id'].hadoopJob.fileUris[0], jobs['step-id'].pySparkJob.pythonFileUris[0]. Items in repeated fields can be referenced by a zero-based index: jobs['step-id'].sparkJob.args[0]. Other examples: jobs['step-id'].hadoopJob.properties['key'], jobs['step-id'].hadoopJob.args[0], jobs['step-id'].hiveJob.scriptVariables['key'], jobs['step-id'].hadoopJob.mainJarFileUri, placement.clusterSelector.zone. It may not be possible to parameterize maps and repeated fields in their entirety, since only individual map values and individual items in repeated fields can be referenced. For example, the following field paths are invalid: placement.clusterSelector.clusterLabels, jobs['step-id'].sparkJob.args
- name String
- Parameter name. The parameter name is used as the key, and paired with the parameter value, which are passed to the template when the template is instantiated. The name must contain only capital letters (A-Z), numbers (0-9), and underscores (_), and must not start with a number. The maximum length is 40 characters.
- description String
- Optional. Brief description of the parameter. Must not exceed 1024 characters.
- validation ParameterValidation
- Optional. Validation rules to be applied to this parameter's value.
- fields string[]
- Paths to all fields that the parameter replaces. A field is allowed to appear in at most one parameter's list of field paths. A field path is similar in syntax to a google.protobuf.FieldMask. For example, a field path that references the zone field of a workflow template's cluster selector would be specified as placement.clusterSelector.zone. Field paths can also reference fields using the following syntax: Values in maps can be referenced by key: labels['key'], placement.clusterSelector.clusterLabels['key'], placement.managedCluster.labels['key'], jobs['step-id'].labels['key']. Jobs in the jobs list can be referenced by step-id: jobs['step-id'].hadoopJob.mainJarFileUri, jobs['step-id'].hiveJob.queryFileUri, jobs['step-id'].pySparkJob.mainPythonFileUri, jobs['step-id'].hadoopJob.jarFileUris[0], jobs['step-id'].hadoopJob.archiveUris[0], jobs['step-id'].hadoopJob.fileUris[0], jobs['step-id'].pySparkJob.pythonFileUris[0]. Items in repeated fields can be referenced by a zero-based index: jobs['step-id'].sparkJob.args[0]. Other examples: jobs['step-id'].hadoopJob.properties['key'], jobs['step-id'].hadoopJob.args[0], jobs['step-id'].hiveJob.scriptVariables['key'], jobs['step-id'].hadoopJob.mainJarFileUri, placement.clusterSelector.zone. It may not be possible to parameterize maps and repeated fields in their entirety, since only individual map values and individual items in repeated fields can be referenced. For example, the following field paths are invalid: placement.clusterSelector.clusterLabels, jobs['step-id'].sparkJob.args
- name string
- Parameter name. The parameter name is used as the key, and paired with the parameter value, which are passed to the template when the template is instantiated. The name must contain only capital letters (A-Z), numbers (0-9), and underscores (_), and must not start with a number. The maximum length is 40 characters.
- description string
- Optional. Brief description of the parameter. Must not exceed 1024 characters.
- validation ParameterValidation
- Optional. Validation rules to be applied to this parameter's value.
- fields Sequence[str]
- Paths to all fields that the parameter replaces. A field is allowed to appear in at most one parameter's list of field paths. A field path is similar in syntax to a google.protobuf.FieldMask. For example, a field path that references the zone field of a workflow template's cluster selector would be specified as placement.clusterSelector.zone. Field paths can also reference fields using the following syntax: Values in maps can be referenced by key: labels['key'], placement.clusterSelector.clusterLabels['key'], placement.managedCluster.labels['key'], jobs['step-id'].labels['key']. Jobs in the jobs list can be referenced by step-id: jobs['step-id'].hadoopJob.mainJarFileUri, jobs['step-id'].hiveJob.queryFileUri, jobs['step-id'].pySparkJob.mainPythonFileUri, jobs['step-id'].hadoopJob.jarFileUris[0], jobs['step-id'].hadoopJob.archiveUris[0], jobs['step-id'].hadoopJob.fileUris[0], jobs['step-id'].pySparkJob.pythonFileUris[0]. Items in repeated fields can be referenced by a zero-based index: jobs['step-id'].sparkJob.args[0]. Other examples: jobs['step-id'].hadoopJob.properties['key'], jobs['step-id'].hadoopJob.args[0], jobs['step-id'].hiveJob.scriptVariables['key'], jobs['step-id'].hadoopJob.mainJarFileUri, placement.clusterSelector.zone. It may not be possible to parameterize maps and repeated fields in their entirety, since only individual map values and individual items in repeated fields can be referenced. For example, the following field paths are invalid: placement.clusterSelector.clusterLabels, jobs['step-id'].sparkJob.args
- name str
- Parameter name. The parameter name is used as the key, and paired with the parameter value, which are passed to the template when the template is instantiated. The name must contain only capital letters (A-Z), numbers (0-9), and underscores (_), and must not start with a number. The maximum length is 40 characters.
- description str
- Optional. Brief description of the parameter. Must not exceed 1024 characters.
- validation ParameterValidation
- Optional. Validation rules to be applied to this parameter's value.
- fields List<String>
- Paths to all fields that the parameter replaces. A field is allowed to appear in at most one parameter's list of field paths. A field path is similar in syntax to a google.protobuf.FieldMask. For example, a field path that references the zone field of a workflow template's cluster selector would be specified as placement.clusterSelector.zone. Field paths can also reference fields using the following syntax: Values in maps can be referenced by key: labels['key'], placement.clusterSelector.clusterLabels['key'], placement.managedCluster.labels['key'], jobs['step-id'].labels['key']. Jobs in the jobs list can be referenced by step-id: jobs['step-id'].hadoopJob.mainJarFileUri, jobs['step-id'].hiveJob.queryFileUri, jobs['step-id'].pySparkJob.mainPythonFileUri, jobs['step-id'].hadoopJob.jarFileUris[0], jobs['step-id'].hadoopJob.archiveUris[0], jobs['step-id'].hadoopJob.fileUris[0], jobs['step-id'].pySparkJob.pythonFileUris[0]. Items in repeated fields can be referenced by a zero-based index: jobs['step-id'].sparkJob.args[0]. Other examples: jobs['step-id'].hadoopJob.properties['key'], jobs['step-id'].hadoopJob.args[0], jobs['step-id'].hiveJob.scriptVariables['key'], jobs['step-id'].hadoopJob.mainJarFileUri, placement.clusterSelector.zone. It may not be possible to parameterize maps and repeated fields in their entirety, since only individual map values and individual items in repeated fields can be referenced. For example, the following field paths are invalid: placement.clusterSelector.clusterLabels, jobs['step-id'].sparkJob.args
- name String
- Parameter name. The parameter name is used as the key, and paired with the parameter value, which are passed to the template when the template is instantiated. The name must contain only capital letters (A-Z), numbers (0-9), and underscores (_), and must not start with a number. The maximum length is 40 characters.
- description String
- Optional. Brief description of the parameter. Must not exceed 1024 characters.
- validation Property Map
- Optional. Validation rules to be applied to this parameter's value.
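A short TypeScript sketch of two TemplateParameter entries follows, using the bracketed field-path syntax described above; the step id score-model, the parameter names, and the regex are assumptions made for illustration, and the array would be passed as the template's parameters input.

// Parameters swapped in at instantiation time: the SparkR driver URI and the zone.
const parameters = [
    {
        name: "MAIN_R_FILE",
        description: "HCFS URI of the SparkR driver to run.",
        fields: ["jobs['score-model'].sparkRJob.mainRFileUri"],
    },
    {
        name: "ZONE",
        fields: ["placement.clusterSelector.zone"],
        validation: {
            regex: { regexes: ["us-central1-[a-f]"] },  // supplied value must match the regex
        },
    },
];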
TemplateParameterResponse, TemplateParameterResponseArgs
- Description string
- Optional. Brief description of the parameter. Must not exceed 1024 characters.
- Fields List<string>
- Paths to all fields that the parameter replaces. A field is allowed to appear in at most one parameter's list of field paths. A field path is similar in syntax to a google.protobuf.FieldMask. For example, a field path that references the zone field of a workflow template's cluster selector would be specified as placement.clusterSelector.zone. Field paths can also reference fields using the following syntax: Values in maps can be referenced by key: labels['key'], placement.clusterSelector.clusterLabels['key'], placement.managedCluster.labels['key'], jobs['step-id'].labels['key']. Jobs in the jobs list can be referenced by step-id: jobs['step-id'].hadoopJob.mainJarFileUri, jobs['step-id'].hiveJob.queryFileUri, jobs['step-id'].pySparkJob.mainPythonFileUri, jobs['step-id'].hadoopJob.jarFileUris[0], jobs['step-id'].hadoopJob.archiveUris[0], jobs['step-id'].hadoopJob.fileUris[0], jobs['step-id'].pySparkJob.pythonFileUris[0]. Items in repeated fields can be referenced by a zero-based index: jobs['step-id'].sparkJob.args[0]. Other examples: jobs['step-id'].hadoopJob.properties['key'], jobs['step-id'].hadoopJob.args[0], jobs['step-id'].hiveJob.scriptVariables['key'], jobs['step-id'].hadoopJob.mainJarFileUri, placement.clusterSelector.zone. It may not be possible to parameterize maps and repeated fields in their entirety, since only individual map values and individual items in repeated fields can be referenced. For example, the following field paths are invalid: placement.clusterSelector.clusterLabels, jobs['step-id'].sparkJob.args
- Name string
- Parameter name. The parameter name is used as the key, and paired with the parameter value, which are passed to the template when the template is instantiated. The name must contain only capital letters (A-Z), numbers (0-9), and underscores (_), and must not start with a number. The maximum length is 40 characters.
- Validation Pulumi.GoogleNative.Dataproc.V1Beta2.Inputs.ParameterValidationResponse
- Optional. Validation rules to be applied to this parameter's value.
- Description string
- Optional. Brief description of the parameter. Must not exceed 1024 characters.
- Fields []string
- Paths to all fields that the parameter replaces. A field is allowed to appear in at most one parameter's list of field paths. A field path is similar in syntax to a google.protobuf.FieldMask. For example, a field path that references the zone field of a workflow template's cluster selector would be specified as placement.clusterSelector.zone. Field paths can also reference fields using the following syntax: Values in maps can be referenced by key: labels['key'], placement.clusterSelector.clusterLabels['key'], placement.managedCluster.labels['key'], jobs['step-id'].labels['key']. Jobs in the jobs list can be referenced by step-id: jobs['step-id'].hadoopJob.mainJarFileUri, jobs['step-id'].hiveJob.queryFileUri, jobs['step-id'].pySparkJob.mainPythonFileUri, jobs['step-id'].hadoopJob.jarFileUris[0], jobs['step-id'].hadoopJob.archiveUris[0], jobs['step-id'].hadoopJob.fileUris[0], jobs['step-id'].pySparkJob.pythonFileUris[0]. Items in repeated fields can be referenced by a zero-based index: jobs['step-id'].sparkJob.args[0]. Other examples: jobs['step-id'].hadoopJob.properties['key'], jobs['step-id'].hadoopJob.args[0], jobs['step-id'].hiveJob.scriptVariables['key'], jobs['step-id'].hadoopJob.mainJarFileUri, placement.clusterSelector.zone. It may not be possible to parameterize maps and repeated fields in their entirety, since only individual map values and individual items in repeated fields can be referenced. For example, the following field paths are invalid: placement.clusterSelector.clusterLabels, jobs['step-id'].sparkJob.args
- Name string
- Parameter name. The parameter name is used as the key, and paired with the parameter value, which are passed to the template when the template is instantiated. The name must contain only capital letters (A-Z), numbers (0-9), and underscores (_), and must not start with a number. The maximum length is 40 characters.
- Validation ParameterValidationResponse
- Optional. Validation rules to be applied to this parameter's value.
- description String
- Optional. Brief description of the parameter. Must not exceed 1024 characters.
- fields List<String>
- Paths to all fields that the parameter replaces. A field is allowed to appear in at most one parameter's list of field paths. A field path is similar in syntax to a google.protobuf.FieldMask. For example, a field path that references the zone field of a workflow template's cluster selector would be specified as placement.clusterSelector.zone. Field paths can also reference fields using the following syntax: Values in maps can be referenced by key: labels['key'], placement.clusterSelector.clusterLabels['key'], placement.managedCluster.labels['key'], jobs['step-id'].labels['key']. Jobs in the jobs list can be referenced by step-id: jobs['step-id'].hadoopJob.mainJarFileUri, jobs['step-id'].hiveJob.queryFileUri, jobs['step-id'].pySparkJob.mainPythonFileUri, jobs['step-id'].hadoopJob.jarFileUris[0], jobs['step-id'].hadoopJob.archiveUris[0], jobs['step-id'].hadoopJob.fileUris[0], jobs['step-id'].pySparkJob.pythonFileUris[0]. Items in repeated fields can be referenced by a zero-based index: jobs['step-id'].sparkJob.args[0]. Other examples: jobs['step-id'].hadoopJob.properties['key'], jobs['step-id'].hadoopJob.args[0], jobs['step-id'].hiveJob.scriptVariables['key'], jobs['step-id'].hadoopJob.mainJarFileUri, placement.clusterSelector.zone. It may not be possible to parameterize maps and repeated fields in their entirety, since only individual map values and individual items in repeated fields can be referenced. For example, the following field paths are invalid: placement.clusterSelector.clusterLabels, jobs['step-id'].sparkJob.args
- name String
- Parameter name. The parameter name is used as the key, and paired with the parameter value, which are passed to the template when the template is instantiated. The name must contain only capital letters (A-Z), numbers (0-9), and underscores (_), and must not start with a number. The maximum length is 40 characters.
- validation ParameterValidationResponse
- Optional. Validation rules to be applied to this parameter's value.
- description string
- Optional. Brief description of the parameter. Must not exceed 1024 characters.
- fields string[]
- Paths to all fields that the parameter replaces. A field is allowed to appear in at most one parameter's list of field paths. A field path is similar in syntax to a google.protobuf.FieldMask. For example, a field path that references the zone field of a workflow template's cluster selector would be specified as placement.clusterSelector.zone. Field paths can also reference fields using the following syntax: Values in maps can be referenced by key: labels['key'], placement.clusterSelector.clusterLabels['key'], placement.managedCluster.labels['key'], jobs['step-id'].labels['key']. Jobs in the jobs list can be referenced by step-id: jobs['step-id'].hadoopJob.mainJarFileUri, jobs['step-id'].hiveJob.queryFileUri, jobs['step-id'].pySparkJob.mainPythonFileUri, jobs['step-id'].hadoopJob.jarFileUris[0], jobs['step-id'].hadoopJob.archiveUris[0], jobs['step-id'].hadoopJob.fileUris[0], jobs['step-id'].pySparkJob.pythonFileUris[0]. Items in repeated fields can be referenced by a zero-based index: jobs['step-id'].sparkJob.args[0]. Other examples: jobs['step-id'].hadoopJob.properties['key'], jobs['step-id'].hadoopJob.args[0], jobs['step-id'].hiveJob.scriptVariables['key'], jobs['step-id'].hadoopJob.mainJarFileUri, placement.clusterSelector.zone. It may not be possible to parameterize maps and repeated fields in their entirety, since only individual map values and individual items in repeated fields can be referenced. For example, the following field paths are invalid: placement.clusterSelector.clusterLabels, jobs['step-id'].sparkJob.args
- name string
- Parameter name. The parameter name is used as the key, and paired with the parameter value, which are passed to the template when the template is instantiated. The name must contain only capital letters (A-Z), numbers (0-9), and underscores (_), and must not start with a number. The maximum length is 40 characters.
- validation ParameterValidationResponse
- Optional. Validation rules to be applied to this parameter's value.
- description str
- Optional. Brief description of the parameter. Must not exceed 1024 characters.
- fields Sequence[str]
- Paths to all fields that the parameter replaces. A field is allowed to appear in at most one parameter's list of field paths. A field path is similar in syntax to a google.protobuf.FieldMask. For example, a field path that references the zone field of a workflow template's cluster selector would be specified as placement.clusterSelector.zone. Field paths can also reference fields using the following syntax: Values in maps can be referenced by key: labels['key'], placement.clusterSelector.clusterLabels['key'], placement.managedCluster.labels['key'], jobs['step-id'].labels['key']. Jobs in the jobs list can be referenced by step-id: jobs['step-id'].hadoopJob.mainJarFileUri, jobs['step-id'].hiveJob.queryFileUri, jobs['step-id'].pySparkJob.mainPythonFileUri, jobs['step-id'].hadoopJob.jarFileUris[0], jobs['step-id'].hadoopJob.archiveUris[0], jobs['step-id'].hadoopJob.fileUris[0], jobs['step-id'].pySparkJob.pythonFileUris[0]. Items in repeated fields can be referenced by a zero-based index: jobs['step-id'].sparkJob.args[0]. Other examples: jobs['step-id'].hadoopJob.properties['key'], jobs['step-id'].hadoopJob.args[0], jobs['step-id'].hiveJob.scriptVariables['key'], jobs['step-id'].hadoopJob.mainJarFileUri, placement.clusterSelector.zone. It may not be possible to parameterize maps and repeated fields in their entirety, since only individual map values and individual items in repeated fields can be referenced. For example, the following field paths are invalid: placement.clusterSelector.clusterLabels, jobs['step-id'].sparkJob.args
- name str
- Parameter name. The parameter name is used as the key, and paired with the parameter value, which are passed to the template when the template is instantiated. The name must contain only capital letters (A-Z), numbers (0-9), and underscores (_), and must not start with a number. The maximum length is 40 characters.
- validation ParameterValidationResponse
- Optional. Validation rules to be applied to this parameter's value.
- description String
- Optional. Brief description of the parameter. Must not exceed 1024 characters.
- fields List<String>
- Paths to all fields that the parameter replaces. A field is allowed to appear in at most one parameter's list of field paths. A field path is similar in syntax to a google.protobuf.FieldMask. For example, a field path that references the zone field of a workflow template's cluster selector would be specified as placement.clusterSelector.zone. Field paths can also reference fields using the following syntax: Values in maps can be referenced by key: labels['key'], placement.clusterSelector.clusterLabels['key'], placement.managedCluster.labels['key'], jobs['step-id'].labels['key']. Jobs in the jobs list can be referenced by step-id: jobs['step-id'].hadoopJob.mainJarFileUri, jobs['step-id'].hiveJob.queryFileUri, jobs['step-id'].pySparkJob.mainPythonFileUri, jobs['step-id'].hadoopJob.jarFileUris[0], jobs['step-id'].hadoopJob.archiveUris[0], jobs['step-id'].hadoopJob.fileUris[0], jobs['step-id'].pySparkJob.pythonFileUris[0]. Items in repeated fields can be referenced by a zero-based index: jobs['step-id'].sparkJob.args[0]. Other examples: jobs['step-id'].hadoopJob.properties['key'], jobs['step-id'].hadoopJob.args[0], jobs['step-id'].hiveJob.scriptVariables['key'], jobs['step-id'].hadoopJob.mainJarFileUri, placement.clusterSelector.zone. It may not be possible to parameterize maps and repeated fields in their entirety, since only individual map values and individual items in repeated fields can be referenced. For example, the following field paths are invalid: placement.clusterSelector.clusterLabels, jobs['step-id'].sparkJob.args
- name String
- Parameter name. The parameter name is used as the key, and paired with the parameter value, which are passed to the template when the template is instantiated. The name must contain only capital letters (A-Z), numbers (0-9), and underscores (_), and must not start with a number. The maximum length is 40 characters.
- validation Property Map
- Optional. Validation rules to be applied to this parameter's value.
ValueValidation, ValueValidationArgs
- Values List<string>
- List of allowed values for the parameter.
- Values []string
- List of allowed values for the parameter.
- values List<String>
- List of allowed values for the parameter.
- values string[]
- List of allowed values for the parameter.
- values Sequence[str]
- List of allowed values for the parameter.
- values List<String>
- List of allowed values for the parameter.
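For example, a ValueValidation can pin a parameter to an explicit allow-list; any value supplied at instantiation that is not in the list is rejected. The TypeScript sketch below reuses the hypothetical refresh-reporting step and env script variable from the earlier example.

// Allow-list validation for a parameter that fills a Spark SQL script variable.
const envParameter = {
    name: "ENV",
    fields: ["jobs['refresh-reporting'].sparkSqlJob.scriptVariables['env']"],
    validation: {
        values: { values: ["dev", "staging", "prod"] },  // only these values are accepted
    },
};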
ValueValidationResponse, ValueValidationResponseArgs
- Values List<string>
- List of allowed values for the parameter.
- Values []string
- List of allowed values for the parameter.
- values List<String>
- List of allowed values for the parameter.
- values string[]
- List of allowed values for the parameter.
- values Sequence[str]
- List of allowed values for the parameter.
- values List<String>
- List of allowed values for the parameter.
WorkflowTemplatePlacement, WorkflowTemplatePlacementArgs
- ClusterSelector Pulumi.GoogleNative.Dataproc.V1Beta2.Inputs.ClusterSelector
- Optional. A selector that chooses target cluster for jobs based on metadata. The selector is evaluated at the time each job is submitted.
- ManagedCluster Pulumi.GoogleNative.Dataproc.V1Beta2.Inputs.ManagedCluster
- Optional. A cluster that is managed by the workflow.
- ClusterSelector ClusterSelector
- Optional. A selector that chooses target cluster for jobs based on metadata. The selector is evaluated at the time each job is submitted.
- ManagedCluster ManagedCluster
- Optional. A cluster that is managed by the workflow.
- clusterSelector ClusterSelector
- Optional. A selector that chooses target cluster for jobs based on metadata. The selector is evaluated at the time each job is submitted.
- managedCluster ManagedCluster
- Optional. A cluster that is managed by the workflow.
- clusterSelector ClusterSelector
- Optional. A selector that chooses target cluster for jobs based on metadata. The selector is evaluated at the time each job is submitted.
- managedCluster ManagedCluster
- Optional. A cluster that is managed by the workflow.
- cluster_selector ClusterSelector
- Optional. A selector that chooses target cluster for jobs based on metadata. The selector is evaluated at the time each job is submitted.
- managed_cluster ManagedCluster
- Optional. A cluster that is managed by the workflow.
- clusterSelector Property Map
- Optional. A selector that chooses target cluster for jobs based on metadata. The selector is evaluated at the time each job is submitted.
- managedCluster Property Map
- Optional. A cluster that is managed by the workflow.
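The two placement fields are alternatives: clusterSelector routes jobs to an existing cluster, while managedCluster has the workflow create an ephemeral cluster, run the jobs, and delete it. A TypeScript sketch of the managed-cluster form follows; the machine types, worker count, and names are placeholder assumptions, and the object would be passed as the template's placement input.

// Managed-cluster placement: the workflow provisions this cluster for the run.
const placement = {
    managedCluster: {
        clusterName: "ephemeral-workflow-cluster",
        config: {
            gceClusterConfig: { zoneUri: "us-central1-b" },
            masterConfig: { numInstances: 1, machineTypeUri: "n1-standard-4" },
            workerConfig: { numInstances: 2, machineTypeUri: "n1-standard-4" },
        },
        labels: { team: "data-platform" },
    },
};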
WorkflowTemplatePlacementResponse, WorkflowTemplatePlacementResponseArgs
- ClusterSelector Pulumi.GoogleNative.Dataproc.V1Beta2.Inputs.ClusterSelectorResponse
- Optional. A selector that chooses target cluster for jobs based on metadata. The selector is evaluated at the time each job is submitted.
- ManagedCluster Pulumi.GoogleNative.Dataproc.V1Beta2.Inputs.ManagedClusterResponse
- Optional. A cluster that is managed by the workflow.
- ClusterSelector ClusterSelectorResponse
- Optional. A selector that chooses target cluster for jobs based on metadata. The selector is evaluated at the time each job is submitted.
- ManagedCluster ManagedClusterResponse
- Optional. A cluster that is managed by the workflow.
- clusterSelector ClusterSelectorResponse
- Optional. A selector that chooses target cluster for jobs based on metadata. The selector is evaluated at the time each job is submitted.
- managedCluster ManagedClusterResponse
- Optional. A cluster that is managed by the workflow.
- clusterSelector ClusterSelectorResponse
- Optional. A selector that chooses target cluster for jobs based on metadata. The selector is evaluated at the time each job is submitted.
- managedCluster ManagedClusterResponse
- Optional. A cluster that is managed by the workflow.
- cluster_selector ClusterSelectorResponse
- Optional. A selector that chooses target cluster for jobs based on metadata. The selector is evaluated at the time each job is submitted.
- managed_cluster ManagedClusterResponse
- Optional. A cluster that is managed by the workflow.
- clusterSelector Property Map
- Optional. A selector that chooses target cluster for jobs based on metadata. The selector is evaluated at the time each job is submitted.
- managedCluster Property Map
- Optional. A cluster that is managed by the workflow.
Package Details
- Repository
- Google Cloud Native pulumi/pulumi-google-native
- License
- Apache-2.0