Google Cloud Native is in preview. Google Cloud Classic is fully supported.
google-native.dataflow/v1b3.getJob
Gets the state of the specified Cloud Dataflow job. To get the state of a job, we recommend using projects.locations.jobs.get with a [regional endpoint](https://cloud.google.com/dataflow/docs/concepts/regional-endpoints). Using projects.jobs.get is not recommended, as you can only get the state of jobs that are running in us-central1.
Using getJob
Two invocation forms are available. The direct form accepts plain arguments and either blocks until the result value is available, or returns a Promise-wrapped result. The output form accepts Input-wrapped arguments and returns an Output-wrapped result.
function getJob(args: GetJobArgs, opts?: InvokeOptions): Promise<GetJobResult>
function getJobOutput(args: GetJobOutputArgs, opts?: InvokeOptions): Output<GetJobResult>
def get_job(job_id: Optional[str] = None,
location: Optional[str] = None,
project: Optional[str] = None,
view: Optional[str] = None,
opts: Optional[InvokeOptions] = None) -> GetJobResult
def get_job_output(job_id: Optional[pulumi.Input[str]] = None,
location: Optional[pulumi.Input[str]] = None,
project: Optional[pulumi.Input[str]] = None,
view: Optional[pulumi.Input[str]] = None,
opts: Optional[InvokeOptions] = None) -> Output[GetJobResult]
func LookupJob(ctx *Context, args *LookupJobArgs, opts ...InvokeOption) (*LookupJobResult, error)
func LookupJobOutput(ctx *Context, args *LookupJobOutputArgs, opts ...InvokeOption) LookupJobResultOutput
> Note: This function is named LookupJob in the Go SDK.
public static class GetJob
{
public static Task<GetJobResult> InvokeAsync(GetJobArgs args, InvokeOptions? opts = null)
public static Output<GetJobResult> Invoke(GetJobInvokeArgs args, InvokeOptions? opts = null)
}
public static CompletableFuture<GetJobResult> getJob(GetJobArgs args, InvokeOptions options)
// Output-based functions aren't available in Java yet
fn::invoke:
function: google-native:dataflow/v1b3:getJob
arguments:
# arguments dictionary
The following arguments are supported (types follow the function signatures above):
- JobId string
- Location string
- Project string
- View string
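For example, a minimal sketch of both invocation forms in TypeScript (the job ID, location, and project values below are placeholders):

```typescript
import * as google_native from "@pulumi/google-native";

// Direct form: plain arguments, Promise-wrapped result.
google_native.dataflow.v1b3.getJob({
    jobId: "2023-01-01_00_00_00-1234567890123456789", // placeholder job ID
    location: "us-central1",
    project: "my-project",                            // placeholder project
}).then(job => console.log(job.currentState));

// Output form: Input-wrapped arguments, Output-wrapped result,
// usable alongside other resources in a Pulumi program.
const job = google_native.dataflow.v1b3.getJobOutput({
    jobId: "2023-01-01_00_00_00-1234567890123456789",
    location: "us-central1",
    project: "my-project",
});
export const currentState = job.currentState;
```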
getJob Result
The following output properties are available:
- ClientRequestId string - The client's unique identifier of the job, re-used across retried attempts. If this field is set, the service will ensure its uniqueness. The request to create a job will fail if the service has knowledge of a previously submitted job with the same client's ID and job name. The caller may use this field to ensure idempotence of job creation across retried attempts to create a job. By default, the field is empty and, in that case, the service ignores it.
- CreateTime string - The timestamp when the job was initially created. Immutable and set by the Cloud Dataflow service.
- CreatedFromSnapshotId string - If this is specified, the job's initial state is populated from the given snapshot.
- CurrentState string - The current state of the job. Jobs are created in the JOB_STATE_STOPPED state unless otherwise specified. A job in the JOB_STATE_RUNNING state may asynchronously enter a terminal state. After a job has reached a terminal state, no further state updates may be made. This field may be mutated by the Cloud Dataflow service; callers cannot mutate it.
- CurrentStateTime string - The timestamp associated with the current state.
- Environment Pulumi.GoogleNative.Dataflow.V1b3.Outputs.EnvironmentResponse - The environment for the job.
- ExecutionInfo Pulumi.GoogleNative.Dataflow.V1b3.Outputs.JobExecutionInfoResponse - Deprecated.
- JobMetadata Pulumi.GoogleNative.Dataflow.V1b3.Outputs.JobMetadataResponse - This field is populated by the Dataflow service to support filtering jobs by the metadata values provided here. Populated for ListJobs and all GetJob views SUMMARY and higher.
- Labels Dictionary<string, string> - User-defined labels for this job. The labels map can contain no more than 64 entries. Entries of the labels map are UTF8 strings that comply with the following restrictions: * Keys must conform to regexp: \p{Ll}\p{Lo}{0,62} * Values must conform to regexp: [\p{Ll}\p{Lo}\p{N}_-]{0,63} * Both keys and values are additionally constrained to be <= 128 bytes in size.
- Location string - The [regional endpoint](https://cloud.google.com/dataflow/docs/concepts/regional-endpoints) that contains this job.
- Name string - The user-specified Cloud Dataflow job name. Only one Job with a given name can exist in a project within one region at any given time. Jobs in different regions can have the same name. If a caller attempts to create a Job with the same name as an already-existing Job, the attempt returns the existing Job. The name must match the regular expression [a-z]([-a-z0-9]{0,1022}[a-z0-9])?
- PipelineDescription Pulumi.GoogleNative.Dataflow.V1b3.Outputs.PipelineDescriptionResponse - Preliminary field: The format of this data may change at any time. A description of the user pipeline and stages through which it is executed. Created by Cloud Dataflow service. Only retrieved with JOB_VIEW_DESCRIPTION or JOB_VIEW_ALL.
- Project string - The ID of the Cloud Platform project that the job belongs to.
- ReplaceJobId string - If this job is an update of an existing job, this field is the job ID of the job it replaced. When sending a CreateJobRequest, you can update a job by specifying it here. The job named here is stopped, and its intermediate state is transferred to this job.
- ReplacedByJobId string - If another job is an update of this job (and thus, this job is in JOB_STATE_UPDATED), this field contains the ID of that job.
- RequestedState string - The job's requested state. Applies to UpdateJob requests. Set requested_state with UpdateJob requests to switch between the states JOB_STATE_STOPPED and JOB_STATE_RUNNING. You can also use UpdateJob requests to change a job's state from JOB_STATE_RUNNING to JOB_STATE_CANCELLED, JOB_STATE_DONE, or JOB_STATE_DRAINED. These states irrevocably terminate the job if it hasn't already reached a terminal state. This field has no effect on CreateJob requests.
- RuntimeUpdatableParams Pulumi.GoogleNative.Dataflow.V1b3.Outputs.RuntimeUpdatableParamsResponse - This field may ONLY be modified at runtime using the projects.jobs.update method to adjust job behavior. This field has no effect when specified at job creation.
- SatisfiesPzi bool - Reserved for future use. This field is set only in responses from the server; it is ignored if it is set in any requests.
- SatisfiesPzs bool - Reserved for future use. This field is set only in responses from the server; it is ignored if it is set in any requests.
- StageStates List<Pulumi.GoogleNative.Dataflow.V1b3.Outputs.ExecutionStageStateResponse> - This field may be mutated by the Cloud Dataflow service; callers cannot mutate it.
- StartTime string - The timestamp when the job was started (transitioned to JOB_STATE_PENDING). Flexible resource scheduling jobs are started with some delay after job creation, so start_time is unset before start and is updated when the job is started by the Cloud Dataflow service. For other jobs, start_time always equals create_time and is immutable and set by the Cloud Dataflow service.
- Steps List<Pulumi.GoogleNative.Dataflow.V1b3.Outputs.StepResponse> - Exactly one of step or steps_location should be specified. The top-level steps that constitute the entire job. Only retrieved with JOB_VIEW_ALL.
- StepsLocation string - The Cloud Storage location where the steps are stored.
- TempFiles List<string> - A set of files the system should be aware of that are used for temporary storage. These temporary files will be removed on job completion. No duplicates are allowed. No file patterns are supported. The supported files are: Google Cloud Storage: storage.googleapis.com/{bucket}/{object} bucket.storage.googleapis.com/{object}
- TransformNameMapping Dictionary<string, string> - The map of transform name prefixes of the job to be replaced to the corresponding name prefixes of the new job.
- Type string - The type of Cloud Dataflow job.
- ClientRequestId string - The client's unique identifier of the job, re-used across retried attempts. If this field is set, the service will ensure its uniqueness. The request to create a job will fail if the service has knowledge of a previously submitted job with the same client's ID and job name. The caller may use this field to ensure idempotence of job creation across retried attempts to create a job. By default, the field is empty and, in that case, the service ignores it.
- CreateTime string - The timestamp when the job was initially created. Immutable and set by the Cloud Dataflow service.
- CreatedFromSnapshotId string - If this is specified, the job's initial state is populated from the given snapshot.
- CurrentState string - The current state of the job. Jobs are created in the JOB_STATE_STOPPED state unless otherwise specified. A job in the JOB_STATE_RUNNING state may asynchronously enter a terminal state. After a job has reached a terminal state, no further state updates may be made. This field may be mutated by the Cloud Dataflow service; callers cannot mutate it.
- CurrentStateTime string - The timestamp associated with the current state.
- Environment EnvironmentResponse - The environment for the job.
- ExecutionInfo JobExecutionInfoResponse - Deprecated.
- JobMetadata JobMetadataResponse - This field is populated by the Dataflow service to support filtering jobs by the metadata values provided here. Populated for ListJobs and all GetJob views SUMMARY and higher.
- Labels map[string]string - User-defined labels for this job. The labels map can contain no more than 64 entries. Entries of the labels map are UTF8 strings that comply with the following restrictions: * Keys must conform to regexp: \p{Ll}\p{Lo}{0,62} * Values must conform to regexp: [\p{Ll}\p{Lo}\p{N}_-]{0,63} * Both keys and values are additionally constrained to be <= 128 bytes in size.
- Location string - The [regional endpoint](https://cloud.google.com/dataflow/docs/concepts/regional-endpoints) that contains this job.
- Name string - The user-specified Cloud Dataflow job name. Only one Job with a given name can exist in a project within one region at any given time. Jobs in different regions can have the same name. If a caller attempts to create a Job with the same name as an already-existing Job, the attempt returns the existing Job. The name must match the regular expression [a-z]([-a-z0-9]{0,1022}[a-z0-9])?
- PipelineDescription PipelineDescriptionResponse - Preliminary field: The format of this data may change at any time. A description of the user pipeline and stages through which it is executed. Created by Cloud Dataflow service. Only retrieved with JOB_VIEW_DESCRIPTION or JOB_VIEW_ALL.
- Project string - The ID of the Cloud Platform project that the job belongs to.
- ReplaceJobId string - If this job is an update of an existing job, this field is the job ID of the job it replaced. When sending a CreateJobRequest, you can update a job by specifying it here. The job named here is stopped, and its intermediate state is transferred to this job.
- ReplacedByJobId string - If another job is an update of this job (and thus, this job is in JOB_STATE_UPDATED), this field contains the ID of that job.
- RequestedState string - The job's requested state. Applies to UpdateJob requests. Set requested_state with UpdateJob requests to switch between the states JOB_STATE_STOPPED and JOB_STATE_RUNNING. You can also use UpdateJob requests to change a job's state from JOB_STATE_RUNNING to JOB_STATE_CANCELLED, JOB_STATE_DONE, or JOB_STATE_DRAINED. These states irrevocably terminate the job if it hasn't already reached a terminal state. This field has no effect on CreateJob requests.
- RuntimeUpdatableParams RuntimeUpdatableParamsResponse - This field may ONLY be modified at runtime using the projects.jobs.update method to adjust job behavior. This field has no effect when specified at job creation.
- SatisfiesPzi bool - Reserved for future use. This field is set only in responses from the server; it is ignored if it is set in any requests.
- SatisfiesPzs bool - Reserved for future use. This field is set only in responses from the server; it is ignored if it is set in any requests.
- StageStates []ExecutionStageStateResponse - This field may be mutated by the Cloud Dataflow service; callers cannot mutate it.
- StartTime string - The timestamp when the job was started (transitioned to JOB_STATE_PENDING). Flexible resource scheduling jobs are started with some delay after job creation, so start_time is unset before start and is updated when the job is started by the Cloud Dataflow service. For other jobs, start_time always equals create_time and is immutable and set by the Cloud Dataflow service.
- Steps []StepResponse - Exactly one of step or steps_location should be specified. The top-level steps that constitute the entire job. Only retrieved with JOB_VIEW_ALL.
- StepsLocation string - The Cloud Storage location where the steps are stored.
- TempFiles []string - A set of files the system should be aware of that are used for temporary storage. These temporary files will be removed on job completion. No duplicates are allowed. No file patterns are supported. The supported files are: Google Cloud Storage: storage.googleapis.com/{bucket}/{object} bucket.storage.googleapis.com/{object}
- TransformNameMapping map[string]string - The map of transform name prefixes of the job to be replaced to the corresponding name prefixes of the new job.
- Type string - The type of Cloud Dataflow job.
- clientRequestId String - The client's unique identifier of the job, re-used across retried attempts. If this field is set, the service will ensure its uniqueness. The request to create a job will fail if the service has knowledge of a previously submitted job with the same client's ID and job name. The caller may use this field to ensure idempotence of job creation across retried attempts to create a job. By default, the field is empty and, in that case, the service ignores it.
- createTime String - The timestamp when the job was initially created. Immutable and set by the Cloud Dataflow service.
- createdFromSnapshotId String - If this is specified, the job's initial state is populated from the given snapshot.
- currentState String - The current state of the job. Jobs are created in the JOB_STATE_STOPPED state unless otherwise specified. A job in the JOB_STATE_RUNNING state may asynchronously enter a terminal state. After a job has reached a terminal state, no further state updates may be made. This field may be mutated by the Cloud Dataflow service; callers cannot mutate it.
- currentStateTime String - The timestamp associated with the current state.
- environment EnvironmentResponse - The environment for the job.
- executionInfo JobExecutionInfoResponse - Deprecated.
- jobMetadata JobMetadataResponse - This field is populated by the Dataflow service to support filtering jobs by the metadata values provided here. Populated for ListJobs and all GetJob views SUMMARY and higher.
- labels Map<String,String> - User-defined labels for this job. The labels map can contain no more than 64 entries. Entries of the labels map are UTF8 strings that comply with the following restrictions: * Keys must conform to regexp: \p{Ll}\p{Lo}{0,62} * Values must conform to regexp: [\p{Ll}\p{Lo}\p{N}_-]{0,63} * Both keys and values are additionally constrained to be <= 128 bytes in size.
- location String - The [regional endpoint](https://cloud.google.com/dataflow/docs/concepts/regional-endpoints) that contains this job.
- name String - The user-specified Cloud Dataflow job name. Only one Job with a given name can exist in a project within one region at any given time. Jobs in different regions can have the same name. If a caller attempts to create a Job with the same name as an already-existing Job, the attempt returns the existing Job. The name must match the regular expression [a-z]([-a-z0-9]{0,1022}[a-z0-9])?
- pipelineDescription PipelineDescriptionResponse - Preliminary field: The format of this data may change at any time. A description of the user pipeline and stages through which it is executed. Created by Cloud Dataflow service. Only retrieved with JOB_VIEW_DESCRIPTION or JOB_VIEW_ALL.
- project String - The ID of the Cloud Platform project that the job belongs to.
- replaceJobId String - If this job is an update of an existing job, this field is the job ID of the job it replaced. When sending a CreateJobRequest, you can update a job by specifying it here. The job named here is stopped, and its intermediate state is transferred to this job.
- replacedByJobId String - If another job is an update of this job (and thus, this job is in JOB_STATE_UPDATED), this field contains the ID of that job.
- requestedState String - The job's requested state. Applies to UpdateJob requests. Set requested_state with UpdateJob requests to switch between the states JOB_STATE_STOPPED and JOB_STATE_RUNNING. You can also use UpdateJob requests to change a job's state from JOB_STATE_RUNNING to JOB_STATE_CANCELLED, JOB_STATE_DONE, or JOB_STATE_DRAINED. These states irrevocably terminate the job if it hasn't already reached a terminal state. This field has no effect on CreateJob requests.
- runtimeUpdatableParams RuntimeUpdatableParamsResponse - This field may ONLY be modified at runtime using the projects.jobs.update method to adjust job behavior. This field has no effect when specified at job creation.
- satisfiesPzi Boolean - Reserved for future use. This field is set only in responses from the server; it is ignored if it is set in any requests.
- satisfiesPzs Boolean - Reserved for future use. This field is set only in responses from the server; it is ignored if it is set in any requests.
- stageStates List<ExecutionStageStateResponse> - This field may be mutated by the Cloud Dataflow service; callers cannot mutate it.
- startTime String - The timestamp when the job was started (transitioned to JOB_STATE_PENDING). Flexible resource scheduling jobs are started with some delay after job creation, so start_time is unset before start and is updated when the job is started by the Cloud Dataflow service. For other jobs, start_time always equals create_time and is immutable and set by the Cloud Dataflow service.
- steps List<StepResponse> - Exactly one of step or steps_location should be specified. The top-level steps that constitute the entire job. Only retrieved with JOB_VIEW_ALL.
- stepsLocation String - The Cloud Storage location where the steps are stored.
- tempFiles List<String> - A set of files the system should be aware of that are used for temporary storage. These temporary files will be removed on job completion. No duplicates are allowed. No file patterns are supported. The supported files are: Google Cloud Storage: storage.googleapis.com/{bucket}/{object} bucket.storage.googleapis.com/{object}
- transformNameMapping Map<String,String> - The map of transform name prefixes of the job to be replaced to the corresponding name prefixes of the new job.
- type String - The type of Cloud Dataflow job.
- clientRequestId string - The client's unique identifier of the job, re-used across retried attempts. If this field is set, the service will ensure its uniqueness. The request to create a job will fail if the service has knowledge of a previously submitted job with the same client's ID and job name. The caller may use this field to ensure idempotence of job creation across retried attempts to create a job. By default, the field is empty and, in that case, the service ignores it.
- createTime string - The timestamp when the job was initially created. Immutable and set by the Cloud Dataflow service.
- createdFromSnapshotId string - If this is specified, the job's initial state is populated from the given snapshot.
- currentState string - The current state of the job. Jobs are created in the JOB_STATE_STOPPED state unless otherwise specified. A job in the JOB_STATE_RUNNING state may asynchronously enter a terminal state. After a job has reached a terminal state, no further state updates may be made. This field may be mutated by the Cloud Dataflow service; callers cannot mutate it.
- currentStateTime string - The timestamp associated with the current state.
- environment EnvironmentResponse - The environment for the job.
- executionInfo JobExecutionInfoResponse - Deprecated.
- jobMetadata JobMetadataResponse - This field is populated by the Dataflow service to support filtering jobs by the metadata values provided here. Populated for ListJobs and all GetJob views SUMMARY and higher.
- labels {[key: string]: string} - User-defined labels for this job. The labels map can contain no more than 64 entries. Entries of the labels map are UTF8 strings that comply with the following restrictions: * Keys must conform to regexp: \p{Ll}\p{Lo}{0,62} * Values must conform to regexp: [\p{Ll}\p{Lo}\p{N}_-]{0,63} * Both keys and values are additionally constrained to be <= 128 bytes in size.
- location string - The [regional endpoint](https://cloud.google.com/dataflow/docs/concepts/regional-endpoints) that contains this job.
- name string - The user-specified Cloud Dataflow job name. Only one Job with a given name can exist in a project within one region at any given time. Jobs in different regions can have the same name. If a caller attempts to create a Job with the same name as an already-existing Job, the attempt returns the existing Job. The name must match the regular expression [a-z]([-a-z0-9]{0,1022}[a-z0-9])?
- pipelineDescription PipelineDescriptionResponse - Preliminary field: The format of this data may change at any time. A description of the user pipeline and stages through which it is executed. Created by Cloud Dataflow service. Only retrieved with JOB_VIEW_DESCRIPTION or JOB_VIEW_ALL.
- project string - The ID of the Cloud Platform project that the job belongs to.
- replaceJobId string - If this job is an update of an existing job, this field is the job ID of the job it replaced. When sending a CreateJobRequest, you can update a job by specifying it here. The job named here is stopped, and its intermediate state is transferred to this job.
- replacedByJobId string - If another job is an update of this job (and thus, this job is in JOB_STATE_UPDATED), this field contains the ID of that job.
- requestedState string - The job's requested state. Applies to UpdateJob requests. Set requested_state with UpdateJob requests to switch between the states JOB_STATE_STOPPED and JOB_STATE_RUNNING. You can also use UpdateJob requests to change a job's state from JOB_STATE_RUNNING to JOB_STATE_CANCELLED, JOB_STATE_DONE, or JOB_STATE_DRAINED. These states irrevocably terminate the job if it hasn't already reached a terminal state. This field has no effect on CreateJob requests.
- runtimeUpdatableParams RuntimeUpdatableParamsResponse - This field may ONLY be modified at runtime using the projects.jobs.update method to adjust job behavior. This field has no effect when specified at job creation.
- satisfiesPzi boolean - Reserved for future use. This field is set only in responses from the server; it is ignored if it is set in any requests.
- satisfiesPzs boolean - Reserved for future use. This field is set only in responses from the server; it is ignored if it is set in any requests.
- stageStates ExecutionStageStateResponse[] - This field may be mutated by the Cloud Dataflow service; callers cannot mutate it.
- startTime string - The timestamp when the job was started (transitioned to JOB_STATE_PENDING). Flexible resource scheduling jobs are started with some delay after job creation, so start_time is unset before start and is updated when the job is started by the Cloud Dataflow service. For other jobs, start_time always equals create_time and is immutable and set by the Cloud Dataflow service.
- steps StepResponse[] - Exactly one of step or steps_location should be specified. The top-level steps that constitute the entire job. Only retrieved with JOB_VIEW_ALL.
- stepsLocation string - The Cloud Storage location where the steps are stored.
- tempFiles string[] - A set of files the system should be aware of that are used for temporary storage. These temporary files will be removed on job completion. No duplicates are allowed. No file patterns are supported. The supported files are: Google Cloud Storage: storage.googleapis.com/{bucket}/{object} bucket.storage.googleapis.com/{object}
- transformNameMapping {[key: string]: string} - The map of transform name prefixes of the job to be replaced to the corresponding name prefixes of the new job.
- type string - The type of Cloud Dataflow job.
- client_request_id str - The client's unique identifier of the job, re-used across retried attempts. If this field is set, the service will ensure its uniqueness. The request to create a job will fail if the service has knowledge of a previously submitted job with the same client's ID and job name. The caller may use this field to ensure idempotence of job creation across retried attempts to create a job. By default, the field is empty and, in that case, the service ignores it.
- create_time str - The timestamp when the job was initially created. Immutable and set by the Cloud Dataflow service.
- created_from_snapshot_id str - If this is specified, the job's initial state is populated from the given snapshot.
- current_state str - The current state of the job. Jobs are created in the JOB_STATE_STOPPED state unless otherwise specified. A job in the JOB_STATE_RUNNING state may asynchronously enter a terminal state. After a job has reached a terminal state, no further state updates may be made. This field may be mutated by the Cloud Dataflow service; callers cannot mutate it.
- current_state_time str - The timestamp associated with the current state.
- environment EnvironmentResponse - The environment for the job.
- execution_info JobExecutionInfoResponse - Deprecated.
- job_metadata JobMetadataResponse - This field is populated by the Dataflow service to support filtering jobs by the metadata values provided here. Populated for ListJobs and all GetJob views SUMMARY and higher.
- labels Mapping[str, str] - User-defined labels for this job. The labels map can contain no more than 64 entries. Entries of the labels map are UTF8 strings that comply with the following restrictions: * Keys must conform to regexp: \p{Ll}\p{Lo}{0,62} * Values must conform to regexp: [\p{Ll}\p{Lo}\p{N}_-]{0,63} * Both keys and values are additionally constrained to be <= 128 bytes in size.
- location str - The [regional endpoint](https://cloud.google.com/dataflow/docs/concepts/regional-endpoints) that contains this job.
- name str - The user-specified Cloud Dataflow job name. Only one Job with a given name can exist in a project within one region at any given time. Jobs in different regions can have the same name. If a caller attempts to create a Job with the same name as an already-existing Job, the attempt returns the existing Job. The name must match the regular expression [a-z]([-a-z0-9]{0,1022}[a-z0-9])?
- pipeline_description PipelineDescriptionResponse - Preliminary field: The format of this data may change at any time. A description of the user pipeline and stages through which it is executed. Created by Cloud Dataflow service. Only retrieved with JOB_VIEW_DESCRIPTION or JOB_VIEW_ALL.
- project str - The ID of the Cloud Platform project that the job belongs to.
- replace_job_id str - If this job is an update of an existing job, this field is the job ID of the job it replaced. When sending a CreateJobRequest, you can update a job by specifying it here. The job named here is stopped, and its intermediate state is transferred to this job.
- replaced_by_job_id str - If another job is an update of this job (and thus, this job is in JOB_STATE_UPDATED), this field contains the ID of that job.
- requested_state str - The job's requested state. Applies to UpdateJob requests. Set requested_state with UpdateJob requests to switch between the states JOB_STATE_STOPPED and JOB_STATE_RUNNING. You can also use UpdateJob requests to change a job's state from JOB_STATE_RUNNING to JOB_STATE_CANCELLED, JOB_STATE_DONE, or JOB_STATE_DRAINED. These states irrevocably terminate the job if it hasn't already reached a terminal state. This field has no effect on CreateJob requests.
- runtime_updatable_params RuntimeUpdatableParamsResponse - This field may ONLY be modified at runtime using the projects.jobs.update method to adjust job behavior. This field has no effect when specified at job creation.
- satisfies_pzi bool - Reserved for future use. This field is set only in responses from the server; it is ignored if it is set in any requests.
- satisfies_pzs bool - Reserved for future use. This field is set only in responses from the server; it is ignored if it is set in any requests.
- stage_states Sequence[ExecutionStageStateResponse] - This field may be mutated by the Cloud Dataflow service; callers cannot mutate it.
- start_time str - The timestamp when the job was started (transitioned to JOB_STATE_PENDING). Flexible resource scheduling jobs are started with some delay after job creation, so start_time is unset before start and is updated when the job is started by the Cloud Dataflow service. For other jobs, start_time always equals create_time and is immutable and set by the Cloud Dataflow service.
- steps Sequence[StepResponse] - Exactly one of step or steps_location should be specified. The top-level steps that constitute the entire job. Only retrieved with JOB_VIEW_ALL.
- steps_location str - The Cloud Storage location where the steps are stored.
- temp_files Sequence[str] - A set of files the system should be aware of that are used for temporary storage. These temporary files will be removed on job completion. No duplicates are allowed. No file patterns are supported. The supported files are: Google Cloud Storage: storage.googleapis.com/{bucket}/{object} bucket.storage.googleapis.com/{object}
- transform_name_mapping Mapping[str, str] - The map of transform name prefixes of the job to be replaced to the corresponding name prefixes of the new job.
- type str - The type of Cloud Dataflow job.
- clientRequestId String - The client's unique identifier of the job, re-used across retried attempts. If this field is set, the service will ensure its uniqueness. The request to create a job will fail if the service has knowledge of a previously submitted job with the same client's ID and job name. The caller may use this field to ensure idempotence of job creation across retried attempts to create a job. By default, the field is empty and, in that case, the service ignores it.
- createTime String - The timestamp when the job was initially created. Immutable and set by the Cloud Dataflow service.
- createdFromSnapshotId String - If this is specified, the job's initial state is populated from the given snapshot.
- currentState String - The current state of the job. Jobs are created in the JOB_STATE_STOPPED state unless otherwise specified. A job in the JOB_STATE_RUNNING state may asynchronously enter a terminal state. After a job has reached a terminal state, no further state updates may be made. This field may be mutated by the Cloud Dataflow service; callers cannot mutate it.
- currentStateTime String - The timestamp associated with the current state.
- environment Property Map - The environment for the job.
- executionInfo Property Map - Deprecated.
- jobMetadata Property Map - This field is populated by the Dataflow service to support filtering jobs by the metadata values provided here. Populated for ListJobs and all GetJob views SUMMARY and higher.
- labels Map<String> - User-defined labels for this job. The labels map can contain no more than 64 entries. Entries of the labels map are UTF8 strings that comply with the following restrictions: * Keys must conform to regexp: \p{Ll}\p{Lo}{0,62} * Values must conform to regexp: [\p{Ll}\p{Lo}\p{N}_-]{0,63} * Both keys and values are additionally constrained to be <= 128 bytes in size.
- location String - The [regional endpoint](https://cloud.google.com/dataflow/docs/concepts/regional-endpoints) that contains this job.
- name String - The user-specified Cloud Dataflow job name. Only one Job with a given name can exist in a project within one region at any given time. Jobs in different regions can have the same name. If a caller attempts to create a Job with the same name as an already-existing Job, the attempt returns the existing Job. The name must match the regular expression [a-z]([-a-z0-9]{0,1022}[a-z0-9])?
- pipelineDescription Property Map - Preliminary field: The format of this data may change at any time. A description of the user pipeline and stages through which it is executed. Created by Cloud Dataflow service. Only retrieved with JOB_VIEW_DESCRIPTION or JOB_VIEW_ALL.
- project String - The ID of the Cloud Platform project that the job belongs to.
- replaceJobId String - If this job is an update of an existing job, this field is the job ID of the job it replaced. When sending a CreateJobRequest, you can update a job by specifying it here. The job named here is stopped, and its intermediate state is transferred to this job.
- replacedByJobId String - If another job is an update of this job (and thus, this job is in JOB_STATE_UPDATED), this field contains the ID of that job.
- requestedState String - The job's requested state. Applies to UpdateJob requests. Set requested_state with UpdateJob requests to switch between the states JOB_STATE_STOPPED and JOB_STATE_RUNNING. You can also use UpdateJob requests to change a job's state from JOB_STATE_RUNNING to JOB_STATE_CANCELLED, JOB_STATE_DONE, or JOB_STATE_DRAINED. These states irrevocably terminate the job if it hasn't already reached a terminal state. This field has no effect on CreateJob requests.
- runtimeUpdatableParams Property Map - This field may ONLY be modified at runtime using the projects.jobs.update method to adjust job behavior. This field has no effect when specified at job creation.
- satisfiesPzi Boolean - Reserved for future use. This field is set only in responses from the server; it is ignored if it is set in any requests.
- satisfiesPzs Boolean - Reserved for future use. This field is set only in responses from the server; it is ignored if it is set in any requests.
- stageStates List<Property Map> - This field may be mutated by the Cloud Dataflow service; callers cannot mutate it.
- startTime String - The timestamp when the job was started (transitioned to JOB_STATE_PENDING). Flexible resource scheduling jobs are started with some delay after job creation, so start_time is unset before start and is updated when the job is started by the Cloud Dataflow service. For other jobs, start_time always equals create_time and is immutable and set by the Cloud Dataflow service.
- steps List<Property Map> - Exactly one of step or steps_location should be specified. The top-level steps that constitute the entire job. Only retrieved with JOB_VIEW_ALL.
- stepsLocation String - The Cloud Storage location where the steps are stored.
- tempFiles List<String> - A set of files the system should be aware of that are used for temporary storage. These temporary files will be removed on job completion. No duplicates are allowed. No file patterns are supported. The supported files are: Google Cloud Storage: storage.googleapis.com/{bucket}/{object} bucket.storage.googleapis.com/{object}
- transformNameMapping Map<String> - The map of transform name prefixes of the job to be replaced to the corresponding name prefixes of the new job.
- type String - The type of Cloud Dataflow job.
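As an illustration of consuming these output properties, a small TypeScript sketch (the job ID, location, and project values are placeholders):

```typescript
import * as google_native from "@pulumi/google-native";

const result = google_native.dataflow.v1b3.getJobOutput({
    jobId: "2023-01-01_00_00_00-1234567890123456789", // placeholder job ID
    location: "us-central1",
    project: "my-project",                            // placeholder project
});

// Scalar properties lift directly off the Output-wrapped result.
export const currentState = result.currentState;
export const startTime = result.startTime;

// Map-typed properties such as labels can be transformed with apply().
export const labelCount = result.labels.apply(l => Object.keys(l ?? {}).length);

// Nested response objects (see Supporting Types below) are reached the same way.
export const environment = result.environment;
```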
Supporting Types
AutoscalingSettingsResponse
- Algorithm string - The algorithm to use for autoscaling.
- MaxNumWorkers int - The maximum number of workers to cap scaling at.
- Algorithm string - The algorithm to use for autoscaling.
- MaxNumWorkers int - The maximum number of workers to cap scaling at.
- algorithm String - The algorithm to use for autoscaling.
- maxNumWorkers Integer - The maximum number of workers to cap scaling at.
- algorithm string - The algorithm to use for autoscaling.
- maxNumWorkers number - The maximum number of workers to cap scaling at.
- algorithm str - The algorithm to use for autoscaling.
- max_num_workers int - The maximum number of workers to cap scaling at.
- algorithm String - The algorithm to use for autoscaling.
- maxNumWorkers Number - The maximum number of workers to cap scaling at.
BigQueryIODetailsResponse
BigTableIODetailsResponse
- InstanceId string - InstanceId accessed in the connection.
- Project string - ProjectId accessed in the connection.
- TableId string - TableId accessed in the connection.
- InstanceId string - InstanceId accessed in the connection.
- Project string - ProjectId accessed in the connection.
- TableId string - TableId accessed in the connection.
- instanceId String - InstanceId accessed in the connection.
- project String - ProjectId accessed in the connection.
- tableId String - TableId accessed in the connection.
- instanceId string - InstanceId accessed in the connection.
- project string - ProjectId accessed in the connection.
- tableId string - TableId accessed in the connection.
- instance_id str - InstanceId accessed in the connection.
- project str - ProjectId accessed in the connection.
- table_id str - TableId accessed in the connection.
- instanceId String - InstanceId accessed in the connection.
- project String - ProjectId accessed in the connection.
- tableId String - TableId accessed in the connection.
ComponentSourceResponse
- Name string - Dataflow service generated name for this source.
- OriginalTransformOrCollection string - User name for the original user transform or collection with which this source is most closely associated.
- UserName string - Human-readable name for this transform; may be user or system generated.
- Name string - Dataflow service generated name for this source.
- OriginalTransformOrCollection string - User name for the original user transform or collection with which this source is most closely associated.
- UserName string - Human-readable name for this transform; may be user or system generated.
- name String - Dataflow service generated name for this source.
- originalTransformOrCollection String - User name for the original user transform or collection with which this source is most closely associated.
- userName String - Human-readable name for this transform; may be user or system generated.
- name string - Dataflow service generated name for this source.
- originalTransformOrCollection string - User name for the original user transform or collection with which this source is most closely associated.
- userName string - Human-readable name for this transform; may be user or system generated.
- name str - Dataflow service generated name for this source.
- original_transform_or_collection str - User name for the original user transform or collection with which this source is most closely associated.
- user_name str - Human-readable name for this transform; may be user or system generated.
- name String - Dataflow service generated name for this source.
- originalTransformOrCollection String - User name for the original user transform or collection with which this source is most closely associated.
- userName String - Human-readable name for this transform; may be user or system generated.
ComponentTransformResponse
- Name string - Dataflow service generated name for this source.
- OriginalTransform string - User name for the original user transform with which this transform is most closely associated.
- UserName string - Human-readable name for this transform; may be user or system generated.
- Name string - Dataflow service generated name for this source.
- OriginalTransform string - User name for the original user transform with which this transform is most closely associated.
- UserName string - Human-readable name for this transform; may be user or system generated.
- name String - Dataflow service generated name for this source.
- originalTransform String - User name for the original user transform with which this transform is most closely associated.
- userName String - Human-readable name for this transform; may be user or system generated.
- name string - Dataflow service generated name for this source.
- originalTransform string - User name for the original user transform with which this transform is most closely associated.
- userName string - Human-readable name for this transform; may be user or system generated.
- name str - Dataflow service generated name for this source.
- original_transform str - User name for the original user transform with which this transform is most closely associated.
- user_name str - Human-readable name for this transform; may be user or system generated.
- name String - Dataflow service generated name for this source.
- originalTransform String - User name for the original user transform with which this transform is most closely associated.
- userName String - Human-readable name for this transform; may be user or system generated.
DataSamplingConfigResponse
- Behaviors List<string> - List of given sampling behaviors to enable. For example, specifying behaviors = [ALWAYS_ON] samples in-flight elements but does not sample exceptions. Multiple behaviors can be combined, like behaviors = [ALWAYS_ON, EXCEPTIONS], to specify both periodic sampling and exception sampling. If DISABLED is in the list, then sampling will be disabled and the other given behaviors ignored. Ordering does not matter.
- Behaviors []string - List of given sampling behaviors to enable. For example, specifying behaviors = [ALWAYS_ON] samples in-flight elements but does not sample exceptions. Multiple behaviors can be combined, like behaviors = [ALWAYS_ON, EXCEPTIONS], to specify both periodic sampling and exception sampling. If DISABLED is in the list, then sampling will be disabled and the other given behaviors ignored. Ordering does not matter.
- behaviors List<String> - List of given sampling behaviors to enable. For example, specifying behaviors = [ALWAYS_ON] samples in-flight elements but does not sample exceptions. Multiple behaviors can be combined, like behaviors = [ALWAYS_ON, EXCEPTIONS], to specify both periodic sampling and exception sampling. If DISABLED is in the list, then sampling will be disabled and the other given behaviors ignored. Ordering does not matter.
- behaviors string[] - List of given sampling behaviors to enable. For example, specifying behaviors = [ALWAYS_ON] samples in-flight elements but does not sample exceptions. Multiple behaviors can be combined, like behaviors = [ALWAYS_ON, EXCEPTIONS], to specify both periodic sampling and exception sampling. If DISABLED is in the list, then sampling will be disabled and the other given behaviors ignored. Ordering does not matter.
- behaviors Sequence[str] - List of given sampling behaviors to enable. For example, specifying behaviors = [ALWAYS_ON] samples in-flight elements but does not sample exceptions. Multiple behaviors can be combined, like behaviors = [ALWAYS_ON, EXCEPTIONS], to specify both periodic sampling and exception sampling. If DISABLED is in the list, then sampling will be disabled and the other given behaviors ignored. Ordering does not matter.
- behaviors List<String> - List of given sampling behaviors to enable. For example, specifying behaviors = [ALWAYS_ON] samples in-flight elements but does not sample exceptions. Multiple behaviors can be combined, like behaviors = [ALWAYS_ON, EXCEPTIONS], to specify both periodic sampling and exception sampling. If DISABLED is in the list, then sampling will be disabled and the other given behaviors ignored. Ordering does not matter.
DatastoreIODetailsResponse
DebugOptionsResponse
- DataSampling Pulumi.GoogleNative.Dataflow.V1b3.Inputs.DataSamplingConfigResponse - Configuration options for sampling elements from a running pipeline.
- EnableHotKeyLogging bool - When true, enables the logging of the literal hot key to the user's Cloud Logging.
- DataSampling DataSamplingConfigResponse - Configuration options for sampling elements from a running pipeline.
- EnableHotKeyLogging bool - When true, enables the logging of the literal hot key to the user's Cloud Logging.
- dataSampling DataSamplingConfigResponse - Configuration options for sampling elements from a running pipeline.
- enableHotKeyLogging Boolean - When true, enables the logging of the literal hot key to the user's Cloud Logging.
- dataSampling DataSamplingConfigResponse - Configuration options for sampling elements from a running pipeline.
- enableHotKeyLogging boolean - When true, enables the logging of the literal hot key to the user's Cloud Logging.
- data_sampling DataSamplingConfigResponse - Configuration options for sampling elements from a running pipeline.
- enable_hot_key_logging bool - When true, enables the logging of the literal hot key to the user's Cloud Logging.
- dataSampling Property Map - Configuration options for sampling elements from a running pipeline.
- enableHotKeyLogging Boolean - When true, enables the logging of the literal hot key to the user's Cloud Logging.
DiskResponse
- DiskType string - Disk storage type, as defined by Google Compute Engine. This must be a disk type appropriate to the project and zone in which the workers will run. If unknown or unspecified, the service will attempt to choose a reasonable default. For example, the standard persistent disk type is a resource name typically ending in "pd-standard". If SSD persistent disks are available, the resource name typically ends with "pd-ssd". The actual valid values are defined by the Google Compute Engine API, not by the Cloud Dataflow API; consult the Google Compute Engine documentation for more information about determining the set of available disk types for a particular project and zone. Google Compute Engine disk types are local to a particular project in a particular zone, and so the resource name will typically look something like this: compute.googleapis.com/projects/project-id/zones/zone/diskTypes/pd-standard
- MountPoint string - Directory in a VM where disk is mounted.
- SizeGb int - Size of disk in GB. If zero or unspecified, the service will attempt to choose a reasonable default.
- DiskType string - Disk storage type, as defined by Google Compute Engine. This must be a disk type appropriate to the project and zone in which the workers will run. If unknown or unspecified, the service will attempt to choose a reasonable default. For example, the standard persistent disk type is a resource name typically ending in "pd-standard". If SSD persistent disks are available, the resource name typically ends with "pd-ssd". The actual valid values are defined by the Google Compute Engine API, not by the Cloud Dataflow API; consult the Google Compute Engine documentation for more information about determining the set of available disk types for a particular project and zone. Google Compute Engine disk types are local to a particular project in a particular zone, and so the resource name will typically look something like this: compute.googleapis.com/projects/project-id/zones/zone/diskTypes/pd-standard
- MountPoint string - Directory in a VM where disk is mounted.
- SizeGb int - Size of disk in GB. If zero or unspecified, the service will attempt to choose a reasonable default.
- diskType String - Disk storage type, as defined by Google Compute Engine. This must be a disk type appropriate to the project and zone in which the workers will run. If unknown or unspecified, the service will attempt to choose a reasonable default. For example, the standard persistent disk type is a resource name typically ending in "pd-standard". If SSD persistent disks are available, the resource name typically ends with "pd-ssd". The actual valid values are defined by the Google Compute Engine API, not by the Cloud Dataflow API; consult the Google Compute Engine documentation for more information about determining the set of available disk types for a particular project and zone. Google Compute Engine disk types are local to a particular project in a particular zone, and so the resource name will typically look something like this: compute.googleapis.com/projects/project-id/zones/zone/diskTypes/pd-standard
- mountPoint String - Directory in a VM where disk is mounted.
- sizeGb Integer - Size of disk in GB. If zero or unspecified, the service will attempt to choose a reasonable default.
- diskType string - Disk storage type, as defined by Google Compute Engine. This must be a disk type appropriate to the project and zone in which the workers will run. If unknown or unspecified, the service will attempt to choose a reasonable default. For example, the standard persistent disk type is a resource name typically ending in "pd-standard". If SSD persistent disks are available, the resource name typically ends with "pd-ssd". The actual valid values are defined by the Google Compute Engine API, not by the Cloud Dataflow API; consult the Google Compute Engine documentation for more information about determining the set of available disk types for a particular project and zone. Google Compute Engine disk types are local to a particular project in a particular zone, and so the resource name will typically look something like this: compute.googleapis.com/projects/project-id/zones/zone/diskTypes/pd-standard
- mountPoint string - Directory in a VM where disk is mounted.
- sizeGb number - Size of disk in GB. If zero or unspecified, the service will attempt to choose a reasonable default.
- disk_type str - Disk storage type, as defined by Google Compute Engine. This must be a disk type appropriate to the project and zone in which the workers will run. If unknown or unspecified, the service will attempt to choose a reasonable default. For example, the standard persistent disk type is a resource name typically ending in "pd-standard". If SSD persistent disks are available, the resource name typically ends with "pd-ssd". The actual valid values are defined by the Google Compute Engine API, not by the Cloud Dataflow API; consult the Google Compute Engine documentation for more information about determining the set of available disk types for a particular project and zone. Google Compute Engine disk types are local to a particular project in a particular zone, and so the resource name will typically look something like this: compute.googleapis.com/projects/project-id/zones/zone/diskTypes/pd-standard
- mount_point str - Directory in a VM where disk is mounted.
- size_gb int - Size of disk in GB. If zero or unspecified, the service will attempt to choose a reasonable default.
- diskType String - Disk storage type, as defined by Google Compute Engine. This must be a disk type appropriate to the project and zone in which the workers will run. If unknown or unspecified, the service will attempt to choose a reasonable default. For example, the standard persistent disk type is a resource name typically ending in "pd-standard". If SSD persistent disks are available, the resource name typically ends with "pd-ssd". The actual valid values are defined by the Google Compute Engine API, not by the Cloud Dataflow API; consult the Google Compute Engine documentation for more information about determining the set of available disk types for a particular project and zone. Google Compute Engine disk types are local to a particular project in a particular zone, and so the resource name will typically look something like this: compute.googleapis.com/projects/project-id/zones/zone/diskTypes/pd-standard
- mountPoint String - Directory in a VM where disk is mounted.
- sizeGb Number - Size of disk in GB. If zero or unspecified, the service will attempt to choose a reasonable default.
DisplayDataResponse
- Bool
Value bool - Contains value if the data is of a boolean type.
- Duration
Value string - Contains value if the data is of duration type.
- Float
Value double - Contains value if the data is of float type.
- Int64Value string
- Contains value if the data is of int64 type.
- Java
Class stringValue - Contains value if the data is of java class type.
- Key string
- The key identifying the display data. This is intended to be used as a label for the display data when viewed in a dax monitoring system.
- Label string
- An optional label to display in a dax UI for the element.
- Namespace string
- The namespace for the key. This is usually a class name or programming language namespace (i.e. python module) which defines the display data. This allows a dax monitoring system to specially handle the data and perform custom rendering.
- Short
Str stringValue - A possible additional shorter value to display. For example a java_class_name_value of com.mypackage.MyDoFn will be stored with MyDoFn as the short_str_value and com.mypackage.MyDoFn as the java_class_name value. short_str_value can be displayed and java_class_name_value will be displayed as a tooltip.
- Str
Value string - Contains value if the data is of string type.
- Timestamp
Value string - Contains value if the data is of timestamp type.
- Url string
- An optional full URL.
- Bool
Value bool - Contains value if the data is of a boolean type.
- Duration
Value string - Contains value if the data is of duration type.
- Float
Value float64 - Contains value if the data is of float type.
- Int64Value string
- Contains value if the data is of int64 type.
- Java
Class stringValue - Contains value if the data is of java class type.
- Key string
- The key identifying the display data. This is intended to be used as a label for the display data when viewed in a dax monitoring system.
- Label string
- An optional label to display in a dax UI for the element.
- Namespace string
- The namespace for the key. This is usually a class name or programming language namespace (i.e. python module) which defines the display data. This allows a dax monitoring system to specially handle the data and perform custom rendering.
- Short
Str stringValue - A possible additional shorter value to display. For example a java_class_name_value of com.mypackage.MyDoFn will be stored with MyDoFn as the short_str_value and com.mypackage.MyDoFn as the java_class_name value. short_str_value can be displayed and java_class_name_value will be displayed as a tooltip.
- Str
Value string - Contains value if the data is of string type.
- Timestamp
Value string - Contains value if the data is of timestamp type.
- Url string
- An optional full URL.
- bool
Value Boolean - Contains value if the data is of a boolean type.
- duration
Value String - Contains value if the data is of duration type.
- float
Value Double - Contains value if the data is of float type.
- int64Value String
- Contains value if the data is of int64 type.
- java
Class StringValue - Contains value if the data is of java class type.
- key String
- The key identifying the display data. This is intended to be used as a label for the display data when viewed in a dax monitoring system.
- label String
- An optional label to display in a dax UI for the element.
- namespace String
- The namespace for the key. This is usually a class name or programming language namespace (i.e. python module) which defines the display data. This allows a dax monitoring system to specially handle the data and perform custom rendering.
- short
Str StringValue - A possible additional shorter value to display. For example a java_class_name_value of com.mypackage.MyDoFn will be stored with MyDoFn as the short_str_value and com.mypackage.MyDoFn as the java_class_name value. short_str_value can be displayed and java_class_name_value will be displayed as a tooltip.
- str
Value String - Contains value if the data is of string type.
- timestamp
Value String - Contains value if the data is of timestamp type.
- url String
- An optional full URL.
- boolValue boolean
- Contains value if the data is of a boolean type.
- durationValue string
- Contains value if the data is of duration type.
- floatValue number
- Contains value if the data is of float type.
- int64Value string
- Contains value if the data is of int64 type.
- javaClassValue string
- Contains value if the data is of java class type.
- key string
- The key identifying the display data. This is intended to be used as a label for the display data when viewed in a dax monitoring system.
- label string
- An optional label to display in a dax UI for the element.
- namespace string
- The namespace for the key. This is usually a class name or programming language namespace (i.e. python module) which defines the display data. This allows a dax monitoring system to specially handle the data and perform custom rendering.
- shortStrValue string
- A possible additional shorter value to display. For example a java_class_name_value of com.mypackage.MyDoFn will be stored with MyDoFn as the short_str_value and com.mypackage.MyDoFn as the java_class_name value. short_str_value can be displayed and java_class_name_value will be displayed as a tooltip.
- strValue string
- Contains value if the data is of string type.
- timestampValue string
- Contains value if the data is of timestamp type.
- url string
- An optional full URL.
- bool_value bool
- Contains value if the data is of a boolean type.
- duration_value str
- Contains value if the data is of duration type.
- float_value float
- Contains value if the data is of float type.
- int64_value str
- Contains value if the data is of int64 type.
- java_class_value str
- Contains value if the data is of java class type.
- key str
- The key identifying the display data. This is intended to be used as a label for the display data when viewed in a dax monitoring system.
- label str
- An optional label to display in a dax UI for the element.
- namespace str
- The namespace for the key. This is usually a class name or programming language namespace (i.e. python module) which defines the display data. This allows a dax monitoring system to specially handle the data and perform custom rendering.
- short_str_value str
- A possible additional shorter value to display. For example a java_class_name_value of com.mypackage.MyDoFn will be stored with MyDoFn as the short_str_value and com.mypackage.MyDoFn as the java_class_name value. short_str_value can be displayed and java_class_name_value will be displayed as a tooltip.
- str_value str
- Contains value if the data is of string type.
- timestamp_value str
- Contains value if the data is of timestamp type.
- url str
- An optional full URL.
- boolValue Boolean
- Contains value if the data is of a boolean type.
- durationValue String
- Contains value if the data is of duration type.
- floatValue Number
- Contains value if the data is of float type.
- int64Value String
- Contains value if the data is of int64 type.
- javaClassValue String
- Contains value if the data is of java class type.
- key String
- The key identifying the display data. This is intended to be used as a label for the display data when viewed in a dax monitoring system.
- label String
- An optional label to display in a dax UI for the element.
- namespace String
- The namespace for the key. This is usually a class name or programming language namespace (i.e. python module) which defines the display data. This allows a dax monitoring system to specially handle the data and perform custom rendering.
- shortStrValue String
- A possible additional shorter value to display. For example a java_class_name_value of com.mypackage.MyDoFn will be stored with MyDoFn as the short_str_value and com.mypackage.MyDoFn as the java_class_name value. short_str_value can be displayed and java_class_name_value will be displayed as a tooltip.
- strValue String
- Contains value if the data is of string type.
- timestampValue String
- Contains value if the data is of timestamp type.
- url String
- An optional full URL.
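Exactly one of the value fields above is populated for a given display datum, according to its type. As a hedged TypeScript sketch (the project, location, and job ID are placeholders, and it assumes the job result exposes a pipelineDescription property as listed later in this reference), pipeline-level display data can be flattened like so:

```typescript
import * as googlenative from "@pulumi/google-native";

// Placeholder identifiers; substitute your own project, region, and job ID.
const job = googlenative.dataflow.v1b3.getJobOutput({
    project: "my-project",
    location: "us-central1",
    jobId: "<your-job-id>",
    view: "JOB_VIEW_ALL", // the full view includes the pipeline description
});

// Coalesce each DisplayData entry's single populated value field.
export const pipelineDisplayData = job.pipelineDescription.apply(desc =>
    (desc?.displayData ?? []).map(d => ({
        key: `${d.namespace}/${d.key}`,
        value: d.shortStrValue || d.strValue || d.int64Value || d.timestampValue ||
            d.durationValue || d.javaClassValue || d.url ||
            String(d.boolValue ?? d.floatValue ?? ""),
    })),
);
```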
EnvironmentResponse
- ClusterManagerApiService string
- The type of cluster manager API to use. If unknown or unspecified, the service will attempt to choose a reasonable default. This should be in the form of the API service name, e.g. "compute.googleapis.com".
- Dataset string
- The dataset for the current project where various workflow related tables are stored. The supported resource type is: Google BigQuery: bigquery.googleapis.com/{dataset}
- DebugOptions Pulumi.GoogleNative.Dataflow.V1b3.Inputs.DebugOptionsResponse
- Any debugging options to be supplied to the job.
- Experiments List<string>
- The list of experiments to enable. This field should be used for SDK related experiments and not for service related experiments. The proper field for service related experiments is service_options.
- FlexResourceSchedulingGoal string
- Which Flexible Resource Scheduling mode to run in.
- InternalExperiments Dictionary<string, string>
- Experimental settings.
- SdkPipelineOptions Dictionary<string, string>
- The Cloud Dataflow SDK pipeline options specified by the user. These options are passed through the service and are used to recreate the SDK pipeline options on the worker in a language agnostic and platform independent way.
- ServiceAccountEmail string
- Identity to run virtual machines as. Defaults to the default account.
- ServiceKmsKeyName string
- If set, contains the Cloud KMS key identifier used to encrypt data at rest, AKA a Customer Managed Encryption Key (CMEK). Format: projects/PROJECT_ID/locations/LOCATION/keyRings/KEY_RING/cryptoKeys/KEY
- ServiceOptions List<string>
- The list of service options to enable. This field should be used for service related experiments only. These experiments, when graduating to GA, should be replaced by dedicated fields or become default (i.e. always on).
- ShuffleMode string
- The shuffle mode used for the job.
- TempStoragePrefix string
- The prefix of the resources the system should use for temporary storage. The system will append the suffix "/temp-{JOBNAME}" to this resource prefix, where {JOBNAME} is the value of the job_name field. The resulting bucket and object prefix is used as the prefix of the resources used to store temporary data needed during the job execution. NOTE: This will override the value in taskrunner_settings. The supported resource type is: Google Cloud Storage: storage.googleapis.com/{bucket}/{object} bucket.storage.googleapis.com/{object}
- UseStreamingEngineResourceBasedBilling bool
- Whether the job uses the new streaming engine billing model based on resource usage.
- UserAgent Dictionary<string, string>
- A description of the process that generated the request.
- Version Dictionary<string, string>
- A structure describing which components and their versions of the service are required in order to run the job.
- WorkerPools List<Pulumi.GoogleNative.Dataflow.V1b3.Inputs.WorkerPoolResponse>
- The worker pools. At least one "harness" worker pool must be specified in order for the job to have workers.
- WorkerRegion string
- The Compute Engine region (https://cloud.google.com/compute/docs/regions-zones/regions-zones) in which worker processing should occur, e.g. "us-west1". Mutually exclusive with worker_zone. If neither worker_region nor worker_zone is specified, defaults to the control plane's region.
- WorkerZone string
- The Compute Engine zone (https://cloud.google.com/compute/docs/regions-zones/regions-zones) in which worker processing should occur, e.g. "us-west1-a". Mutually exclusive with worker_region. If neither worker_region nor worker_zone is specified, a zone in the control plane's region is chosen based on available capacity.
- ClusterManagerApiService string
- The type of cluster manager API to use. If unknown or unspecified, the service will attempt to choose a reasonable default. This should be in the form of the API service name, e.g. "compute.googleapis.com".
- Dataset string
- The dataset for the current project where various workflow related tables are stored. The supported resource type is: Google BigQuery: bigquery.googleapis.com/{dataset}
- DebugOptions DebugOptionsResponse
- Any debugging options to be supplied to the job.
- Experiments []string
- The list of experiments to enable. This field should be used for SDK related experiments and not for service related experiments. The proper field for service related experiments is service_options.
- FlexResourceSchedulingGoal string
- Which Flexible Resource Scheduling mode to run in.
- InternalExperiments map[string]string
- Experimental settings.
- SdkPipelineOptions map[string]string
- The Cloud Dataflow SDK pipeline options specified by the user. These options are passed through the service and are used to recreate the SDK pipeline options on the worker in a language agnostic and platform independent way.
- ServiceAccountEmail string
- Identity to run virtual machines as. Defaults to the default account.
- ServiceKmsKeyName string
- If set, contains the Cloud KMS key identifier used to encrypt data at rest, AKA a Customer Managed Encryption Key (CMEK). Format: projects/PROJECT_ID/locations/LOCATION/keyRings/KEY_RING/cryptoKeys/KEY
- ServiceOptions []string
- The list of service options to enable. This field should be used for service related experiments only. These experiments, when graduating to GA, should be replaced by dedicated fields or become default (i.e. always on).
- ShuffleMode string
- The shuffle mode used for the job.
- TempStoragePrefix string
- The prefix of the resources the system should use for temporary storage. The system will append the suffix "/temp-{JOBNAME}" to this resource prefix, where {JOBNAME} is the value of the job_name field. The resulting bucket and object prefix is used as the prefix of the resources used to store temporary data needed during the job execution. NOTE: This will override the value in taskrunner_settings. The supported resource type is: Google Cloud Storage: storage.googleapis.com/{bucket}/{object} bucket.storage.googleapis.com/{object}
- UseStreamingEngineResourceBasedBilling bool
- Whether the job uses the new streaming engine billing model based on resource usage.
- UserAgent map[string]string
- A description of the process that generated the request.
- Version map[string]string
- A structure describing which components and their versions of the service are required in order to run the job.
- WorkerPools []WorkerPoolResponse
- The worker pools. At least one "harness" worker pool must be specified in order for the job to have workers.
- WorkerRegion string
- The Compute Engine region (https://cloud.google.com/compute/docs/regions-zones/regions-zones) in which worker processing should occur, e.g. "us-west1". Mutually exclusive with worker_zone. If neither worker_region nor worker_zone is specified, defaults to the control plane's region.
- WorkerZone string
- The Compute Engine zone (https://cloud.google.com/compute/docs/regions-zones/regions-zones) in which worker processing should occur, e.g. "us-west1-a". Mutually exclusive with worker_region. If neither worker_region nor worker_zone is specified, a zone in the control plane's region is chosen based on available capacity.
- clusterManagerApiService String
- The type of cluster manager API to use. If unknown or unspecified, the service will attempt to choose a reasonable default. This should be in the form of the API service name, e.g. "compute.googleapis.com".
- dataset String
- The dataset for the current project where various workflow related tables are stored. The supported resource type is: Google BigQuery: bigquery.googleapis.com/{dataset}
- debugOptions DebugOptionsResponse
- Any debugging options to be supplied to the job.
- experiments List<String>
- The list of experiments to enable. This field should be used for SDK related experiments and not for service related experiments. The proper field for service related experiments is service_options.
- flexResourceSchedulingGoal String
- Which Flexible Resource Scheduling mode to run in.
- internalExperiments Map<String,String>
- Experimental settings.
- sdkPipelineOptions Map<String,String>
- The Cloud Dataflow SDK pipeline options specified by the user. These options are passed through the service and are used to recreate the SDK pipeline options on the worker in a language agnostic and platform independent way.
- serviceAccountEmail String
- Identity to run virtual machines as. Defaults to the default account.
- serviceKmsKeyName String
- If set, contains the Cloud KMS key identifier used to encrypt data at rest, AKA a Customer Managed Encryption Key (CMEK). Format: projects/PROJECT_ID/locations/LOCATION/keyRings/KEY_RING/cryptoKeys/KEY
- serviceOptions List<String>
- The list of service options to enable. This field should be used for service related experiments only. These experiments, when graduating to GA, should be replaced by dedicated fields or become default (i.e. always on).
- shuffleMode String
- The shuffle mode used for the job.
- tempStoragePrefix String
- The prefix of the resources the system should use for temporary storage. The system will append the suffix "/temp-{JOBNAME}" to this resource prefix, where {JOBNAME} is the value of the job_name field. The resulting bucket and object prefix is used as the prefix of the resources used to store temporary data needed during the job execution. NOTE: This will override the value in taskrunner_settings. The supported resource type is: Google Cloud Storage: storage.googleapis.com/{bucket}/{object} bucket.storage.googleapis.com/{object}
- useStreamingEngineResourceBasedBilling Boolean
- Whether the job uses the new streaming engine billing model based on resource usage.
- userAgent Map<String,String>
- A description of the process that generated the request.
- version Map<String,String>
- A structure describing which components and their versions of the service are required in order to run the job.
- workerPools List<WorkerPoolResponse>
- The worker pools. At least one "harness" worker pool must be specified in order for the job to have workers.
- workerRegion String
- The Compute Engine region (https://cloud.google.com/compute/docs/regions-zones/regions-zones) in which worker processing should occur, e.g. "us-west1". Mutually exclusive with worker_zone. If neither worker_region nor worker_zone is specified, defaults to the control plane's region.
- workerZone String
- The Compute Engine zone (https://cloud.google.com/compute/docs/regions-zones/regions-zones) in which worker processing should occur, e.g. "us-west1-a". Mutually exclusive with worker_region. If neither worker_region nor worker_zone is specified, a zone in the control plane's region is chosen based on available capacity.
- clusterManagerApiService string
- The type of cluster manager API to use. If unknown or unspecified, the service will attempt to choose a reasonable default. This should be in the form of the API service name, e.g. "compute.googleapis.com".
- dataset string
- The dataset for the current project where various workflow related tables are stored. The supported resource type is: Google BigQuery: bigquery.googleapis.com/{dataset}
- debugOptions DebugOptionsResponse
- Any debugging options to be supplied to the job.
- experiments string[]
- The list of experiments to enable. This field should be used for SDK related experiments and not for service related experiments. The proper field for service related experiments is service_options.
- flexResourceSchedulingGoal string
- Which Flexible Resource Scheduling mode to run in.
- internalExperiments {[key: string]: string}
- Experimental settings.
- sdkPipelineOptions {[key: string]: string}
- The Cloud Dataflow SDK pipeline options specified by the user. These options are passed through the service and are used to recreate the SDK pipeline options on the worker in a language agnostic and platform independent way.
- serviceAccountEmail string
- Identity to run virtual machines as. Defaults to the default account.
- serviceKmsKeyName string
- If set, contains the Cloud KMS key identifier used to encrypt data at rest, AKA a Customer Managed Encryption Key (CMEK). Format: projects/PROJECT_ID/locations/LOCATION/keyRings/KEY_RING/cryptoKeys/KEY
- serviceOptions string[]
- The list of service options to enable. This field should be used for service related experiments only. These experiments, when graduating to GA, should be replaced by dedicated fields or become default (i.e. always on).
- shuffleMode string
- The shuffle mode used for the job.
- tempStoragePrefix string
- The prefix of the resources the system should use for temporary storage. The system will append the suffix "/temp-{JOBNAME}" to this resource prefix, where {JOBNAME} is the value of the job_name field. The resulting bucket and object prefix is used as the prefix of the resources used to store temporary data needed during the job execution. NOTE: This will override the value in taskrunner_settings. The supported resource type is: Google Cloud Storage: storage.googleapis.com/{bucket}/{object} bucket.storage.googleapis.com/{object}
- useStreamingEngineResourceBasedBilling boolean
- Whether the job uses the new streaming engine billing model based on resource usage.
- userAgent {[key: string]: string}
- A description of the process that generated the request.
- version {[key: string]: string}
- A structure describing which components and their versions of the service are required in order to run the job.
- workerPools WorkerPoolResponse[]
- The worker pools. At least one "harness" worker pool must be specified in order for the job to have workers.
- workerRegion string
- The Compute Engine region (https://cloud.google.com/compute/docs/regions-zones/regions-zones) in which worker processing should occur, e.g. "us-west1". Mutually exclusive with worker_zone. If neither worker_region nor worker_zone is specified, defaults to the control plane's region.
- workerZone string
- The Compute Engine zone (https://cloud.google.com/compute/docs/regions-zones/regions-zones) in which worker processing should occur, e.g. "us-west1-a". Mutually exclusive with worker_region. If neither worker_region nor worker_zone is specified, a zone in the control plane's region is chosen based on available capacity.
- cluster_manager_api_service str
- The type of cluster manager API to use. If unknown or unspecified, the service will attempt to choose a reasonable default. This should be in the form of the API service name, e.g. "compute.googleapis.com".
- dataset str
- The dataset for the current project where various workflow related tables are stored. The supported resource type is: Google BigQuery: bigquery.googleapis.com/{dataset}
- debug_options DebugOptionsResponse
- Any debugging options to be supplied to the job.
- experiments Sequence[str]
- The list of experiments to enable. This field should be used for SDK related experiments and not for service related experiments. The proper field for service related experiments is service_options.
- flex_resource_scheduling_goal str
- Which Flexible Resource Scheduling mode to run in.
- internal_experiments Mapping[str, str]
- Experimental settings.
- sdk_pipeline_options Mapping[str, str]
- The Cloud Dataflow SDK pipeline options specified by the user. These options are passed through the service and are used to recreate the SDK pipeline options on the worker in a language agnostic and platform independent way.
- service_account_email str
- Identity to run virtual machines as. Defaults to the default account.
- service_kms_key_name str
- If set, contains the Cloud KMS key identifier used to encrypt data at rest, AKA a Customer Managed Encryption Key (CMEK). Format: projects/PROJECT_ID/locations/LOCATION/keyRings/KEY_RING/cryptoKeys/KEY
- service_options Sequence[str]
- The list of service options to enable. This field should be used for service related experiments only. These experiments, when graduating to GA, should be replaced by dedicated fields or become default (i.e. always on).
- shuffle_mode str
- The shuffle mode used for the job.
- temp_storage_prefix str
- The prefix of the resources the system should use for temporary storage. The system will append the suffix "/temp-{JOBNAME}" to this resource prefix, where {JOBNAME} is the value of the job_name field. The resulting bucket and object prefix is used as the prefix of the resources used to store temporary data needed during the job execution. NOTE: This will override the value in taskrunner_settings. The supported resource type is: Google Cloud Storage: storage.googleapis.com/{bucket}/{object} bucket.storage.googleapis.com/{object}
- use_streaming_engine_resource_based_billing bool
- Whether the job uses the new streaming engine billing model based on resource usage.
- user_agent Mapping[str, str]
- A description of the process that generated the request.
- version Mapping[str, str]
- A structure describing which components and their versions of the service are required in order to run the job.
- worker_pools Sequence[WorkerPoolResponse]
- The worker pools. At least one "harness" worker pool must be specified in order for the job to have workers.
- worker_region str
- The Compute Engine region (https://cloud.google.com/compute/docs/regions-zones/regions-zones) in which worker processing should occur, e.g. "us-west1". Mutually exclusive with worker_zone. If neither worker_region nor worker_zone is specified, defaults to the control plane's region.
- worker_zone str
- The Compute Engine zone (https://cloud.google.com/compute/docs/regions-zones/regions-zones) in which worker processing should occur, e.g. "us-west1-a". Mutually exclusive with worker_region. If neither worker_region nor worker_zone is specified, a zone in the control plane's region is chosen based on available capacity.
- clusterManagerApiService String
- The type of cluster manager API to use. If unknown or unspecified, the service will attempt to choose a reasonable default. This should be in the form of the API service name, e.g. "compute.googleapis.com".
- dataset String
- The dataset for the current project where various workflow related tables are stored. The supported resource type is: Google BigQuery: bigquery.googleapis.com/{dataset}
- debugOptions Property Map
- Any debugging options to be supplied to the job.
- experiments List<String>
- The list of experiments to enable. This field should be used for SDK related experiments and not for service related experiments. The proper field for service related experiments is service_options.
- flexResourceSchedulingGoal String
- Which Flexible Resource Scheduling mode to run in.
- internalExperiments Map<String>
- Experimental settings.
- sdkPipelineOptions Map<String>
- The Cloud Dataflow SDK pipeline options specified by the user. These options are passed through the service and are used to recreate the SDK pipeline options on the worker in a language agnostic and platform independent way.
- serviceAccountEmail String
- Identity to run virtual machines as. Defaults to the default account.
- serviceKmsKeyName String
- If set, contains the Cloud KMS key identifier used to encrypt data at rest, AKA a Customer Managed Encryption Key (CMEK). Format: projects/PROJECT_ID/locations/LOCATION/keyRings/KEY_RING/cryptoKeys/KEY
- serviceOptions List<String>
- The list of service options to enable. This field should be used for service related experiments only. These experiments, when graduating to GA, should be replaced by dedicated fields or become default (i.e. always on).
- shuffleMode String
- The shuffle mode used for the job.
- tempStoragePrefix String
- The prefix of the resources the system should use for temporary storage. The system will append the suffix "/temp-{JOBNAME}" to this resource prefix, where {JOBNAME} is the value of the job_name field. The resulting bucket and object prefix is used as the prefix of the resources used to store temporary data needed during the job execution. NOTE: This will override the value in taskrunner_settings. The supported resource type is: Google Cloud Storage: storage.googleapis.com/{bucket}/{object} bucket.storage.googleapis.com/{object}
- useStreamingEngineResourceBasedBilling Boolean
- Whether the job uses the new streaming engine billing model based on resource usage.
- userAgent Map<String>
- A description of the process that generated the request.
- version Map<String>
- A structure describing which components and their versions of the service are required in order to run the job.
- workerPools List<Property Map>
- The worker pools. At least one "harness" worker pool must be specified in order for the job to have workers.
- workerRegion String
- The Compute Engine region (https://cloud.google.com/compute/docs/regions-zones/regions-zones) in which worker processing should occur, e.g. "us-west1". Mutually exclusive with worker_zone. If neither worker_region nor worker_zone is specified, defaults to the control plane's region.
- workerZone String
- The Compute Engine zone (https://cloud.google.com/compute/docs/regions-zones/regions-zones) in which worker processing should occur, e.g. "us-west1-a". Mutually exclusive with worker_region. If neither worker_region nor worker_zone is specified, a zone in the control plane's region is chosen based on available capacity.
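As a quick way to see where, and as which identity, a fetched job's workers run, here is a minimal sketch (placeholder project, location, and job ID):

```typescript
import * as googlenative from "@pulumi/google-native";

const job = googlenative.dataflow.v1b3.getJobOutput({
    project: "my-project",   // placeholder
    location: "us-central1", // placeholder
    jobId: "<your-job-id>",  // placeholder
});

// worker_region and worker_zone are mutually exclusive; at most one is set.
export const workerPlacement = job.environment.apply(env => ({
    serviceAccount: env?.serviceAccountEmail,
    region: env?.workerRegion,
    zone: env?.workerZone,
    shuffleMode: env?.shuffleMode,
}));
```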
ExecutionStageStateResponse
- CurrentStateTime string
- The time at which the stage transitioned to this state.
- ExecutionStageName string
- The name of the execution stage.
- ExecutionStageState string
- Execution stage states allow the same set of values as JobState.
- CurrentStateTime string
- The time at which the stage transitioned to this state.
- ExecutionStageName string
- The name of the execution stage.
- ExecutionStageState string
- Execution stage states allow the same set of values as JobState.
- currentStateTime String
- The time at which the stage transitioned to this state.
- executionStageName String
- The name of the execution stage.
- executionStageState String
- Execution stage states allow the same set of values as JobState.
- currentStateTime string
- The time at which the stage transitioned to this state.
- executionStageName string
- The name of the execution stage.
- executionStageState string
- Execution stage states allow the same set of values as JobState.
- current_state_time str
- The time at which the stage transitioned to this state.
- execution_stage_name str
- The name of the execution stage.
- execution_stage_state str
- Execution stage states allow the same set of values as JobState.
- currentStateTime String
- The time at which the stage transitioned to this state.
- executionStageName String
- The name of the execution stage.
- executionStageState String
- Execution stage states allow the same set of values as JobState.
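Because stage states draw from the same value set as JobState, the usual terminal-state names (e.g. JOB_STATE_DONE, JOB_STATE_FAILED) apply per stage. A hedged sketch that lists unfinished stages, assuming the job result surfaces these under a stageStates property as in the underlying Dataflow API:

```typescript
import * as googlenative from "@pulumi/google-native";

const job = googlenative.dataflow.v1b3.getJobOutput({
    project: "my-project",   // placeholder
    location: "us-central1", // placeholder
    jobId: "<your-job-id>",  // placeholder
});

const terminal = ["JOB_STATE_DONE", "JOB_STATE_FAILED", "JOB_STATE_CANCELLED"];

// Report stages that have not yet reached a terminal state.
export const unfinishedStages = job.stageStates.apply(states =>
    (states ?? [])
        .filter(s => !terminal.includes(s.executionStageState))
        .map(s => `${s.executionStageName}: ${s.executionStageState}`),
);
```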
ExecutionStageSummaryResponse
- ComponentSource List<Pulumi.GoogleNative.Dataflow.V1b3.Inputs.ComponentSourceResponse>
- Collections produced and consumed by component transforms of this stage.
- ComponentTransform List<Pulumi.GoogleNative.Dataflow.V1b3.Inputs.ComponentTransformResponse>
- Transforms that comprise this execution stage.
- InputSource List<Pulumi.GoogleNative.Dataflow.V1b3.Inputs.StageSourceResponse>
- Input sources for this stage.
- Kind string
- Type of transform this stage is executing.
- Name string
- Dataflow service generated name for this stage.
- OutputSource List<Pulumi.GoogleNative.Dataflow.V1b3.Inputs.StageSourceResponse>
- Output sources for this stage.
- PrerequisiteStage List<string>
- Other stages that must complete before this stage can run.
- ComponentSource []ComponentSourceResponse
- Collections produced and consumed by component transforms of this stage.
- ComponentTransform []ComponentTransformResponse
- Transforms that comprise this execution stage.
- InputSource []StageSourceResponse
- Input sources for this stage.
- Kind string
- Type of transform this stage is executing.
- Name string
- Dataflow service generated name for this stage.
- OutputSource []StageSourceResponse
- Output sources for this stage.
- PrerequisiteStage []string
- Other stages that must complete before this stage can run.
- componentSource List<ComponentSourceResponse>
- Collections produced and consumed by component transforms of this stage.
- componentTransform List<ComponentTransformResponse>
- Transforms that comprise this execution stage.
- inputSource List<StageSourceResponse>
- Input sources for this stage.
- kind String
- Type of transform this stage is executing.
- name String
- Dataflow service generated name for this stage.
- outputSource List<StageSourceResponse>
- Output sources for this stage.
- prerequisiteStage List<String>
- Other stages that must complete before this stage can run.
- componentSource ComponentSourceResponse[]
- Collections produced and consumed by component transforms of this stage.
- componentTransform ComponentTransformResponse[]
- Transforms that comprise this execution stage.
- inputSource StageSourceResponse[]
- Input sources for this stage.
- kind string
- Type of transform this stage is executing.
- name string
- Dataflow service generated name for this stage.
- outputSource StageSourceResponse[]
- Output sources for this stage.
- prerequisiteStage string[]
- Other stages that must complete before this stage can run.
- component_source Sequence[ComponentSourceResponse]
- Collections produced and consumed by component transforms of this stage.
- component_transform Sequence[ComponentTransformResponse]
- Transforms that comprise this execution stage.
- input_source Sequence[StageSourceResponse]
- Input sources for this stage.
- kind str
- Type of transform this stage is executing.
- name str
- Dataflow service generated name for this stage.
- output_source Sequence[StageSourceResponse]
- Output sources for this stage.
- prerequisite_stage Sequence[str]
- Other stages that must complete before this stage can run.
- componentSource List<Property Map>
- Collections produced and consumed by component transforms of this stage.
- componentTransform List<Property Map>
- Transforms that comprise this execution stage.
- inputSource List<Property Map>
- Input sources for this stage.
- kind String
- Type of transform this stage is executing.
- name String
- Dataflow service generated name for this stage.
- outputSource List<Property Map>
- Output sources for this stage.
- prerequisiteStage List<String>
- Other stages that must complete before this stage can run.
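The prerequisite stage field makes the stage listing a small dependency graph. A hedged sketch that builds a name-to-prerequisites map from the execution stages of the pipeline description (placeholders as before):

```typescript
import * as googlenative from "@pulumi/google-native";

const job = googlenative.dataflow.v1b3.getJobOutput({
    project: "my-project",   // placeholder
    location: "us-central1", // placeholder
    jobId: "<your-job-id>",  // placeholder
    view: "JOB_VIEW_ALL",
});

// Map each execution stage to the stages that must complete before it runs.
export const stageDependencies = job.pipelineDescription.apply(desc =>
    Object.fromEntries(
        (desc?.executionPipelineStage ?? []).map(stage => [
            stage.name,
            stage.prerequisiteStage ?? [],
        ]),
    ),
);
```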
FileIODetailsResponse
- FilePattern string
- File Pattern used to access files by the connector.
- FilePattern string
- File Pattern used to access files by the connector.
- filePattern String
- File Pattern used to access files by the connector.
- filePattern string
- File Pattern used to access files by the connector.
- file_pattern str
- File Pattern used to access files by the connector.
- filePattern String
- File Pattern used to access files by the connector.
JobExecutionInfoResponse
- Stages Dictionary<string, string>
- A mapping from each stage to the information about that stage.
- Stages map[string]string
- A mapping from each stage to the information about that stage.
- stages Map<String,String>
- A mapping from each stage to the information about that stage.
- stages {[key: string]: string}
- A mapping from each stage to the information about that stage.
- stages Mapping[str, str]
- A mapping from each stage to the information about that stage.
- stages Map<String>
- A mapping from each stage to the information about that stage.
JobMetadataResponse
- BigTableDetails List<Pulumi.GoogleNative.Dataflow.V1b3.Inputs.BigTableIODetailsResponse>
- Identification of a Cloud Bigtable source used in the Dataflow job.
- BigqueryDetails List<Pulumi.GoogleNative.Dataflow.V1b3.Inputs.BigQueryIODetailsResponse>
- Identification of a BigQuery source used in the Dataflow job.
- DatastoreDetails List<Pulumi.GoogleNative.Dataflow.V1b3.Inputs.DatastoreIODetailsResponse>
- Identification of a Datastore source used in the Dataflow job.
- FileDetails List<Pulumi.GoogleNative.Dataflow.V1b3.Inputs.FileIODetailsResponse>
- Identification of a File source used in the Dataflow job.
- PubsubDetails List<Pulumi.GoogleNative.Dataflow.V1b3.Inputs.PubSubIODetailsResponse>
- Identification of a Pub/Sub source used in the Dataflow job.
- SdkVersion Pulumi.GoogleNative.Dataflow.V1b3.Inputs.SdkVersionResponse
- The SDK version used to run the job.
- SpannerDetails List<Pulumi.GoogleNative.Dataflow.V1b3.Inputs.SpannerIODetailsResponse>
- Identification of a Spanner source used in the Dataflow job.
- UserDisplayProperties Dictionary<string, string>
- List of display properties to help UI filter jobs.
- BigTableDetails []BigTableIODetailsResponse
- Identification of a Cloud Bigtable source used in the Dataflow job.
- BigqueryDetails []BigQueryIODetailsResponse
- Identification of a BigQuery source used in the Dataflow job.
- DatastoreDetails []DatastoreIODetailsResponse
- Identification of a Datastore source used in the Dataflow job.
- FileDetails []FileIODetailsResponse
- Identification of a File source used in the Dataflow job.
- PubsubDetails []PubSubIODetailsResponse
- Identification of a Pub/Sub source used in the Dataflow job.
- SdkVersion SdkVersionResponse
- The SDK version used to run the job.
- SpannerDetails []SpannerIODetailsResponse
- Identification of a Spanner source used in the Dataflow job.
- UserDisplayProperties map[string]string
- List of display properties to help UI filter jobs.
- bigTableDetails List<BigTableIODetailsResponse>
- Identification of a Cloud Bigtable source used in the Dataflow job.
- bigqueryDetails List<BigQueryIODetailsResponse>
- Identification of a BigQuery source used in the Dataflow job.
- datastoreDetails List<DatastoreIODetailsResponse>
- Identification of a Datastore source used in the Dataflow job.
- fileDetails List<FileIODetailsResponse>
- Identification of a File source used in the Dataflow job.
- pubsubDetails List<PubSubIODetailsResponse>
- Identification of a Pub/Sub source used in the Dataflow job.
- sdkVersion SdkVersionResponse
- The SDK version used to run the job.
- spannerDetails List<SpannerIODetailsResponse>
- Identification of a Spanner source used in the Dataflow job.
- userDisplayProperties Map<String,String>
- List of display properties to help UI filter jobs.
- bigTableDetails BigTableIODetailsResponse[]
- Identification of a Cloud Bigtable source used in the Dataflow job.
- bigqueryDetails BigQueryIODetailsResponse[]
- Identification of a BigQuery source used in the Dataflow job.
- datastoreDetails DatastoreIODetailsResponse[]
- Identification of a Datastore source used in the Dataflow job.
- fileDetails FileIODetailsResponse[]
- Identification of a File source used in the Dataflow job.
- pubsubDetails PubSubIODetailsResponse[]
- Identification of a Pub/Sub source used in the Dataflow job.
- sdkVersion SdkVersionResponse
- The SDK version used to run the job.
- spannerDetails SpannerIODetailsResponse[]
- Identification of a Spanner source used in the Dataflow job.
- userDisplayProperties {[key: string]: string}
- List of display properties to help UI filter jobs.
- big_table_details Sequence[BigTableIODetailsResponse]
- Identification of a Cloud Bigtable source used in the Dataflow job.
- bigquery_details Sequence[BigQueryIODetailsResponse]
- Identification of a BigQuery source used in the Dataflow job.
- datastore_details Sequence[DatastoreIODetailsResponse]
- Identification of a Datastore source used in the Dataflow job.
- file_details Sequence[FileIODetailsResponse]
- Identification of a File source used in the Dataflow job.
- pubsub_details Sequence[PubSubIODetailsResponse]
- Identification of a Pub/Sub source used in the Dataflow job.
- sdk_version SdkVersionResponse
- The SDK version used to run the job.
- spanner_details Sequence[SpannerIODetailsResponse]
- Identification of a Spanner source used in the Dataflow job.
- user_display_properties Mapping[str, str]
- List of display properties to help UI filter jobs.
- bigTableDetails List<Property Map>
- Identification of a Cloud Bigtable source used in the Dataflow job.
- bigqueryDetails List<Property Map>
- Identification of a BigQuery source used in the Dataflow job.
- datastoreDetails List<Property Map>
- Identification of a Datastore source used in the Dataflow job.
- fileDetails List<Property Map>
- Identification of a File source used in the Dataflow job.
- pubsubDetails List<Property Map>
- Identification of a Pub/Sub source used in the Dataflow job.
- sdkVersion Property Map
- The SDK version used to run the job.
- spannerDetails List<Property Map>
- Identification of a Spanner source used in the Dataflow job.
- userDisplayProperties Map<String>
- List of display properties to help UI filter jobs.
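Job metadata is mostly a catalog of the sources a job touches. A hedged sketch that condenses it into a short summary (placeholder identifiers; it assumes the job result exposes a jobMetadata property with the fields listed above):

```typescript
import * as googlenative from "@pulumi/google-native";

const job = googlenative.dataflow.v1b3.getJobOutput({
    project: "my-project",   // placeholder
    location: "us-central1", // placeholder
    jobId: "<your-job-id>",  // placeholder
});

// Summarize which external sources the job reads from or writes to.
export const ioSummary = job.jobMetadata.apply(md => ({
    sdk: `${md?.sdkVersion?.versionDisplayName} (${md?.sdkVersion?.sdkSupportStatus})`,
    pubsubTopics: (md?.pubsubDetails ?? []).map(p => p.topic),
    bigqueryTableCount: (md?.bigqueryDetails ?? []).length,
    spannerDatabases: (md?.spannerDetails ?? []).map(s => s.databaseId),
}));
```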
PackageResponse
PipelineDescriptionResponse
- DisplayData List<Pulumi.GoogleNative.Dataflow.V1b3.Inputs.DisplayDataResponse>
- Pipeline level display data.
- ExecutionPipelineStage List<Pulumi.GoogleNative.Dataflow.V1b3.Inputs.ExecutionStageSummaryResponse>
- Description of each stage of execution of the pipeline.
- OriginalPipelineTransform List<Pulumi.GoogleNative.Dataflow.V1b3.Inputs.TransformSummaryResponse>
- Description of each transform in the pipeline and collections between them.
- StepNamesHash string
- A hash value of the submitted pipeline portable graph step names, if it exists.
- DisplayData []DisplayDataResponse
- Pipeline level display data.
- ExecutionPipelineStage []ExecutionStageSummaryResponse
- Description of each stage of execution of the pipeline.
- OriginalPipelineTransform []TransformSummaryResponse
- Description of each transform in the pipeline and collections between them.
- StepNamesHash string
- A hash value of the submitted pipeline portable graph step names, if it exists.
- displayData List<DisplayDataResponse>
- Pipeline level display data.
- executionPipelineStage List<ExecutionStageSummaryResponse>
- Description of each stage of execution of the pipeline.
- originalPipelineTransform List<TransformSummaryResponse>
- Description of each transform in the pipeline and collections between them.
- stepNamesHash String
- A hash value of the submitted pipeline portable graph step names, if it exists.
- displayData DisplayDataResponse[]
- Pipeline level display data.
- executionPipelineStage ExecutionStageSummaryResponse[]
- Description of each stage of execution of the pipeline.
- originalPipelineTransform TransformSummaryResponse[]
- Description of each transform in the pipeline and collections between them.
- stepNamesHash string
- A hash value of the submitted pipeline portable graph step names, if it exists.
- display_data Sequence[DisplayDataResponse]
- Pipeline level display data.
- execution_pipeline_stage Sequence[ExecutionStageSummaryResponse]
- Description of each stage of execution of the pipeline.
- original_pipeline_transform Sequence[TransformSummaryResponse]
- Description of each transform in the pipeline and collections between them.
- step_names_hash str
- A hash value of the submitted pipeline portable graph step names, if it exists.
- displayData List<Property Map>
- Pipeline level display data.
- executionPipelineStage List<Property Map>
- Description of each stage of execution of the pipeline.
- originalPipelineTransform List<Property Map>
- Description of each transform in the pipeline and collections between them.
- stepNamesHash String
- A hash value of the submitted pipeline portable graph step names, if it exists.
PubSubIODetailsResponse
- Subscription string
- Subscription used in the connection.
- Topic string
- Topic accessed in the connection.
- Subscription string
- Subscription used in the connection.
- Topic string
- Topic accessed in the connection.
- subscription String
- Subscription used in the connection.
- topic String
- Topic accessed in the connection.
- subscription string
- Subscription used in the connection.
- topic string
- Topic accessed in the connection.
- subscription str
- Subscription used in the connection.
- topic str
- Topic accessed in the connection.
- subscription String
- Subscription used in the connection.
- topic String
- Topic accessed in the connection.
RuntimeUpdatableParamsResponse
- MaxNumWorkers int
- The maximum number of workers to cap autoscaling at. This field is currently only supported for Streaming Engine jobs.
- MinNumWorkers int
- The minimum number of workers to scale down to. This field is currently only supported for Streaming Engine jobs.
- MaxNumWorkers int
- The maximum number of workers to cap autoscaling at. This field is currently only supported for Streaming Engine jobs.
- MinNumWorkers int
- The minimum number of workers to scale down to. This field is currently only supported for Streaming Engine jobs.
- maxNumWorkers Integer
- The maximum number of workers to cap autoscaling at. This field is currently only supported for Streaming Engine jobs.
- minNumWorkers Integer
- The minimum number of workers to scale down to. This field is currently only supported for Streaming Engine jobs.
- maxNumWorkers number
- The maximum number of workers to cap autoscaling at. This field is currently only supported for Streaming Engine jobs.
- minNumWorkers number
- The minimum number of workers to scale down to. This field is currently only supported for Streaming Engine jobs.
- max_num_workers int
- The maximum number of workers to cap autoscaling at. This field is currently only supported for Streaming Engine jobs.
- min_num_workers int
- The minimum number of workers to scale down to. This field is currently only supported for Streaming Engine jobs.
- maxNumWorkers Number
- The maximum number of workers to cap autoscaling at. This field is currently only supported for Streaming Engine jobs.
- minNumWorkers Number
- The minimum number of workers to scale down to. This field is currently only supported for Streaming Engine jobs.
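On a fetched job these bounds report the autoscaling range currently in effect. A minimal sketch (placeholder identifiers; it assumes the job result exposes a runtimeUpdatableParams property mirroring the field of the same name in the Dataflow Job resource):

```typescript
import * as googlenative from "@pulumi/google-native";

const job = googlenative.dataflow.v1b3.getJobOutput({
    project: "my-project",   // placeholder
    location: "us-central1", // placeholder
    jobId: "<your-job-id>",  // placeholder
});

// The autoscaling range currently in effect (Streaming Engine jobs only).
export const autoscalingRange = job.runtimeUpdatableParams.apply(p =>
    p ? `${p.minNumWorkers}..${p.maxNumWorkers} workers` : "not set",
);
```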
SdkBugResponse
SdkHarnessContainerImageResponse
- Capabilities List<string>
- The set of capabilities enumerated in the above Environment proto. See also beam_runner_api.proto
- ContainerImage string
- A docker container image that resides in Google Container Registry.
- EnvironmentId string
- Environment ID for the Beam runner API proto Environment that corresponds to the current SDK Harness.
- UseSingleCorePerContainer bool
- If true, recommends the Dataflow service to use only one core per SDK container instance with this image. If false (or unset) recommends using more than one core per SDK container instance with this image for efficiency. Note that Dataflow service may choose to override this property if needed.
- Capabilities []string
- The set of capabilities enumerated in the above Environment proto. See also beam_runner_api.proto
- ContainerImage string
- A docker container image that resides in Google Container Registry.
- EnvironmentId string
- Environment ID for the Beam runner API proto Environment that corresponds to the current SDK Harness.
- UseSingleCorePerContainer bool
- If true, recommends the Dataflow service to use only one core per SDK container instance with this image. If false (or unset) recommends using more than one core per SDK container instance with this image for efficiency. Note that Dataflow service may choose to override this property if needed.
- capabilities List<String>
- The set of capabilities enumerated in the above Environment proto. See also beam_runner_api.proto
- containerImage String
- A docker container image that resides in Google Container Registry.
- environmentId String
- Environment ID for the Beam runner API proto Environment that corresponds to the current SDK Harness.
- useSingleCorePerContainer Boolean
- If true, recommends the Dataflow service to use only one core per SDK container instance with this image. If false (or unset) recommends using more than one core per SDK container instance with this image for efficiency. Note that Dataflow service may choose to override this property if needed.
- capabilities string[]
- The set of capabilities enumerated in the above Environment proto. See also beam_runner_api.proto
- containerImage string
- A docker container image that resides in Google Container Registry.
- environmentId string
- Environment ID for the Beam runner API proto Environment that corresponds to the current SDK Harness.
- useSingleCorePerContainer boolean
- If true, recommends the Dataflow service to use only one core per SDK container instance with this image. If false (or unset) recommends using more than one core per SDK container instance with this image for efficiency. Note that Dataflow service may choose to override this property if needed.
- capabilities Sequence[str]
- The set of capabilities enumerated in the above Environment proto. See also beam_runner_api.proto
- container_image str
- A docker container image that resides in Google Container Registry.
- environment_id str
- Environment ID for the Beam runner API proto Environment that corresponds to the current SDK Harness.
- use_single_core_per_container bool
- If true, recommends the Dataflow service to use only one core per SDK container instance with this image. If false (or unset) recommends using more than one core per SDK container instance with this image for efficiency. Note that Dataflow service may choose to override this property if needed.
- capabilities List<String>
- The set of capabilities enumerated in the above Environment proto. See also beam_runner_api.proto
- containerImage String
- A docker container image that resides in Google Container Registry.
- environmentId String
- Environment ID for the Beam runner API proto Environment that corresponds to the current SDK Harness.
- useSingleCorePerContainer Boolean
- If true, recommends the Dataflow service to use only one core per SDK container instance with this image. If false (or unset) recommends using more than one core per SDK container instance with this image for efficiency. Note that Dataflow service may choose to override this property if needed.
SdkVersionResponse
- Bugs List<Pulumi.GoogleNative.Dataflow.V1b3.Inputs.SdkBugResponse>
- Known bugs found in this SDK version.
- SdkSupportStatus string
- The support status for this SDK version.
- Version string
- The version of the SDK used to run the job.
- VersionDisplayName string
- A readable string describing the version of the SDK.
- Bugs []SdkBugResponse
- Known bugs found in this SDK version.
- SdkSupportStatus string
- The support status for this SDK version.
- Version string
- The version of the SDK used to run the job.
- VersionDisplayName string
- A readable string describing the version of the SDK.
- bugs List<SdkBugResponse>
- Known bugs found in this SDK version.
- sdkSupportStatus String
- The support status for this SDK version.
- version String
- The version of the SDK used to run the job.
- versionDisplayName String
- A readable string describing the version of the SDK.
- bugs SdkBugResponse[]
- Known bugs found in this SDK version.
- sdkSupportStatus string
- The support status for this SDK version.
- version string
- The version of the SDK used to run the job.
- versionDisplayName string
- A readable string describing the version of the SDK.
- bugs Sequence[SdkBugResponse]
- Known bugs found in this SDK version.
- sdk_support_status str
- The support status for this SDK version.
- version str
- The version of the SDK used to run the job.
- version_display_name str
- A readable string describing the version of the SDK.
- bugs List<Property Map>
- Known bugs found in this SDK version.
- sdkSupportStatus String
- The support status for this SDK version.
- version String
- The version of the SDK used to run the job.
- versionDisplayName String
- A readable string describing the version of the SDK.
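The support status is useful for flagging jobs that run on stale or deprecated SDKs. A hedged sketch (placeholder identifiers; "SUPPORTED" is one of the SdkSupportStatus enum values):

```typescript
import * as googlenative from "@pulumi/google-native";

const job = googlenative.dataflow.v1b3.getJobOutput({
    project: "my-project",   // placeholder
    location: "us-central1", // placeholder
    jobId: "<your-job-id>",  // placeholder
});

// Flag jobs whose SDK is no longer fully supported.
export const sdkWarning = job.jobMetadata.apply(md => {
    const sdk = md?.sdkVersion;
    return sdk && sdk.sdkSupportStatus !== "SUPPORTED"
        ? `SDK ${sdk.version} is ${sdk.sdkSupportStatus}; consider upgrading.`
        : undefined;
});
```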
SpannerIODetailsResponse
- DatabaseId string
- DatabaseId accessed in the connection.
- InstanceId string
- InstanceId accessed in the connection.
- Project string
- ProjectId accessed in the connection.
- DatabaseId string
- DatabaseId accessed in the connection.
- InstanceId string
- InstanceId accessed in the connection.
- Project string
- ProjectId accessed in the connection.
- databaseId String
- DatabaseId accessed in the connection.
- instanceId String
- InstanceId accessed in the connection.
- project String
- ProjectId accessed in the connection.
- databaseId string
- DatabaseId accessed in the connection.
- instanceId string
- InstanceId accessed in the connection.
- project string
- ProjectId accessed in the connection.
- database_id str
- DatabaseId accessed in the connection.
- instance_id str
- InstanceId accessed in the connection.
- project str
- ProjectId accessed in the connection.
- databaseId String
- DatabaseId accessed in the connection.
- instanceId String
- InstanceId accessed in the connection.
- project String
- ProjectId accessed in the connection.
StageSourceResponse
- Name string
- Dataflow service generated name for this source.
- OriginalTransformOrCollection string
- User name for the original user transform or collection with which this source is most closely associated.
- SizeBytes string
- Size of the source, if measurable.
- UserName string
- Human-readable name for this source; may be user or system generated.
- Name string
- Dataflow service generated name for this source.
- OriginalTransformOrCollection string
- User name for the original user transform or collection with which this source is most closely associated.
- SizeBytes string
- Size of the source, if measurable.
- UserName string
- Human-readable name for this source; may be user or system generated.
- name String
- Dataflow service generated name for this source.
- originalTransformOrCollection String
- User name for the original user transform or collection with which this source is most closely associated.
- sizeBytes String
- Size of the source, if measurable.
- userName String
- Human-readable name for this source; may be user or system generated.
- name string
- Dataflow service generated name for this source.
- originalTransformOrCollection string
- User name for the original user transform or collection with which this source is most closely associated.
- sizeBytes string
- Size of the source, if measurable.
- userName string
- Human-readable name for this source; may be user or system generated.
- name str
- Dataflow service generated name for this source.
- original_transform_or_collection str
- User name for the original user transform or collection with which this source is most closely associated.
- size_bytes str
- Size of the source, if measurable.
- user_name str
- Human-readable name for this source; may be user or system generated.
- name String
- Dataflow service generated name for this source.
- originalTransformOrCollection String
- User name for the original user transform or collection with which this source is most closely associated.
- sizeBytes String
- Size of the source, if measurable.
- userName String
- Human-readable name for this source; may be user or system generated.
StepResponse
- Kind string
- The kind of step in the Cloud Dataflow job.
- Name string
- The name that identifies the step. This must be unique for each step with respect to all other steps in the Cloud Dataflow job.
- Properties Dictionary<string, string>
- Named properties associated with the step. Each kind of predefined step has its own required set of properties. Must be provided on Create. Only retrieved with JOB_VIEW_ALL.
- Kind string
- The kind of step in the Cloud Dataflow job.
- Name string
- The name that identifies the step. This must be unique for each step with respect to all other steps in the Cloud Dataflow job.
- Properties map[string]string
- Named properties associated with the step. Each kind of predefined step has its own required set of properties. Must be provided on Create. Only retrieved with JOB_VIEW_ALL.
- kind String
- The kind of step in the Cloud Dataflow job.
- name String
- The name that identifies the step. This must be unique for each step with respect to all other steps in the Cloud Dataflow job.
- properties Map<String,String>
- Named properties associated with the step. Each kind of predefined step has its own required set of properties. Must be provided on Create. Only retrieved with JOB_VIEW_ALL.
- kind string
- The kind of step in the Cloud Dataflow job.
- name string
- The name that identifies the step. This must be unique for each step with respect to all other steps in the Cloud Dataflow job.
- properties {[key: string]: string}
- Named properties associated with the step. Each kind of predefined step has its own required set of properties. Must be provided on Create. Only retrieved with JOB_VIEW_ALL.
- kind str
- The kind of step in the Cloud Dataflow job.
- name str
- The name that identifies the step. This must be unique for each step with respect to all other steps in the Cloud Dataflow job.
- properties Mapping[str, str]
- Named properties associated with the step. Each kind of predefined step has its own required set of properties. Must be provided on Create. Only retrieved with JOB_VIEW_ALL.
- kind String
- The kind of step in the Cloud Dataflow job.
- name String
- The name that identifies the step. This must be unique for each step with respect to all other steps in the Cloud Dataflow job.
- properties Map<String>
- Named properties associated with the step. Each kind of predefined step has its own required set of properties. Must be provided on Create. Only retrieved with JOB_VIEW_ALL.
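Since step properties are only retrieved with JOB_VIEW_ALL, any inspection of steps must request the full view. A hedged sketch that tallies step kinds (placeholder identifiers; the example kinds in the comment are illustrative):

```typescript
import * as googlenative from "@pulumi/google-native";

const job = googlenative.dataflow.v1b3.getJobOutput({
    project: "my-project",   // placeholder
    location: "us-central1", // placeholder
    jobId: "<your-job-id>",  // placeholder
    view: "JOB_VIEW_ALL",    // steps are only retrieved with the full view
});

// Count how many steps of each kind (e.g. ParallelDo, GroupByKey) the job has.
export const stepKindCounts = job.steps.apply(steps =>
    (steps ?? []).reduce<Record<string, number>>((acc, s) => {
        acc[s.kind] = (acc[s.kind] ?? 0) + 1;
        return acc;
    }, {}),
);
```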
TaskRunnerSettingsResponse
- Alsologtostderr bool
- Whether to also send taskrunner log info to stderr.
- BaseTaskDir string
- The location on the worker for task-specific subdirectories.
- BaseUrl string
- The base URL for the taskrunner to use when accessing Google Cloud APIs. When workers access Google Cloud APIs, they logically do so via relative URLs. If this field is specified, it supplies the base URL to use for resolving these relative URLs. The normative algorithm used is defined by RFC 1808, "Relative Uniform Resource Locators". If not specified, the default value is "http://www.googleapis.com/".
- CommandlinesFileName string
- The file to store preprocessing commands in.
- ContinueOnException bool
- Whether to continue taskrunner if an exception is hit.
- DataflowApiVersion string
- The API version of the endpoint, e.g. "v1b3".
- HarnessCommand string
- The command to launch the worker harness.
- LanguageHint string
- The suggested backend language.
- LogDir string
- The directory on the VM to store logs.
- LogToSerialconsole bool
- Whether to send taskrunner log info to Google Compute Engine VM serial console.
- LogUploadLocation string
- Indicates where to put logs. If this is not specified, the logs will not be uploaded. The supported resource type is: Google Cloud Storage: storage.googleapis.com/{bucket}/{object} bucket.storage.googleapis.com/{object}
- OauthScopes List<string>
- The OAuth2 scopes to be requested by the taskrunner in order to access the Cloud Dataflow API.
- ParallelWorkerSettings Pulumi.GoogleNative.Dataflow.V1b3.Inputs.WorkerSettingsResponse
- The settings to pass to the parallel worker harness.
- StreamingWorkerMainClass string
- The streaming worker main class name.
- TaskGroup string
- The UNIX group ID on the worker VM to use for tasks launched by taskrunner; e.g. "wheel".
- TaskUser string
- The UNIX user ID on the worker VM to use for tasks launched by taskrunner; e.g. "root".
- TempStoragePrefix string
- The prefix of the resources the taskrunner should use for temporary storage. The supported resource type is: Google Cloud Storage: storage.googleapis.com/{bucket}/{object} bucket.storage.googleapis.com/{object}
- VmId string
- The ID string of the VM.
- WorkflowFileName string
- The file to store the workflow in.
- Alsologtostderr bool
- Whether to also send taskrunner log info to stderr.
- BaseTaskDir string
- The location on the worker for task-specific subdirectories.
- BaseUrl string
- The base URL for the taskrunner to use when accessing Google Cloud APIs. When workers access Google Cloud APIs, they logically do so via relative URLs. If this field is specified, it supplies the base URL to use for resolving these relative URLs. The normative algorithm used is defined by RFC 1808, "Relative Uniform Resource Locators". If not specified, the default value is "http://www.googleapis.com/".
- CommandlinesFileName string
- The file to store preprocessing commands in.
- ContinueOnException bool
- Whether to continue taskrunner if an exception is hit.
- DataflowApiVersion string
- The API version of the endpoint, e.g. "v1b3".
- HarnessCommand string
- The command to launch the worker harness.
- LanguageHint string
- The suggested backend language.
- LogDir string
- The directory on the VM to store logs.
- LogToSerialconsole bool
- Whether to send taskrunner log info to Google Compute Engine VM serial console.
- LogUploadLocation string
- Indicates where to put logs. If this is not specified, the logs will not be uploaded. The supported resource type is: Google Cloud Storage: storage.googleapis.com/{bucket}/{object} bucket.storage.googleapis.com/{object}
- OauthScopes []string
- The OAuth2 scopes to be requested by the taskrunner in order to access the Cloud Dataflow API.
- ParallelWorkerSettings WorkerSettingsResponse
- The settings to pass to the parallel worker harness.
- StreamingWorkerMainClass string
- The streaming worker main class name.
- TaskGroup string
- The UNIX group ID on the worker VM to use for tasks launched by taskrunner; e.g. "wheel".
- TaskUser string
- The UNIX user ID on the worker VM to use for tasks launched by taskrunner; e.g. "root".
- TempStoragePrefix string
- The prefix of the resources the taskrunner should use for temporary storage. The supported resource type is: Google Cloud Storage: storage.googleapis.com/{bucket}/{object} bucket.storage.googleapis.com/{object}
- VmId string
- The ID string of the VM.
- WorkflowFileName string
- The file to store the workflow in.
- alsologtostderr Boolean
- Whether to also send taskrunner log info to stderr.
- baseTaskDir String
- The location on the worker for task-specific subdirectories.
- baseUrl String
- The base URL for the taskrunner to use when accessing Google Cloud APIs. When workers access Google Cloud APIs, they logically do so via relative URLs. If this field is specified, it supplies the base URL to use for resolving these relative URLs. The normative algorithm used is defined by RFC 1808, "Relative Uniform Resource Locators". If not specified, the default value is "http://www.googleapis.com/".
- commandlinesFileName String
- The file to store preprocessing commands in.
- continueOnException Boolean
- Whether to continue taskrunner if an exception is hit.
- dataflowApiVersion String
- The API version of the endpoint, e.g. "v1b3".
- harnessCommand String
- The command to launch the worker harness.
- languageHint String
- The suggested backend language.
- logDir String
- The directory on the VM to store logs.
- logToSerialconsole Boolean
- Whether to send taskrunner log info to the Google Compute Engine VM serial console.
- logUploadLocation String
- Indicates where to put logs. If this is not specified, the logs will not be uploaded. The supported resource type is: Google Cloud Storage: storage.googleapis.com/{bucket}/{object} bucket.storage.googleapis.com/{object}
- oauthScopes List<String>
- The OAuth2 scopes to be requested by the taskrunner in order to access the Cloud Dataflow API.
- parallelWorkerSettings WorkerSettingsResponse
- The settings to pass to the parallel worker harness.
- streamingWorkerMainClass String
- The streaming worker main class name.
- taskGroup String
- The UNIX group ID on the worker VM to use for tasks launched by taskrunner; e.g. "wheel".
- taskUser String
- The UNIX user ID on the worker VM to use for tasks launched by taskrunner; e.g. "root".
- tempStoragePrefix String
- The prefix of the resources the taskrunner should use for temporary storage. The supported resource type is: Google Cloud Storage: storage.googleapis.com/{bucket}/{object} bucket.storage.googleapis.com/{object}
- vmId String
- The ID string of the VM.
- workflowFileName String
- The file to store the workflow in.
- alsologtostderr boolean
- Whether to also send taskrunner log info to stderr.
- baseTaskDir string
- The location on the worker for task-specific subdirectories.
- baseUrl string
- The base URL for the taskrunner to use when accessing Google Cloud APIs. When workers access Google Cloud APIs, they logically do so via relative URLs. If this field is specified, it supplies the base URL to use for resolving these relative URLs. The normative algorithm used is defined by RFC 1808, "Relative Uniform Resource Locators". If not specified, the default value is "http://www.googleapis.com/".
- commandlinesFileName string
- The file to store preprocessing commands in.
- continueOnException boolean
- Whether to continue taskrunner if an exception is hit.
- dataflowApiVersion string
- The API version of the endpoint, e.g. "v1b3".
- harnessCommand string
- The command to launch the worker harness.
- languageHint string
- The suggested backend language.
- logDir string
- The directory on the VM to store logs.
- logToSerialconsole boolean
- Whether to send taskrunner log info to the Google Compute Engine VM serial console.
- logUploadLocation string
- Indicates where to put logs. If this is not specified, the logs will not be uploaded. The supported resource type is: Google Cloud Storage: storage.googleapis.com/{bucket}/{object} bucket.storage.googleapis.com/{object}
- oauthScopes string[]
- The OAuth2 scopes to be requested by the taskrunner in order to access the Cloud Dataflow API.
- parallelWorkerSettings WorkerSettingsResponse
- The settings to pass to the parallel worker harness.
- streamingWorkerMainClass string
- The streaming worker main class name.
- taskGroup string
- The UNIX group ID on the worker VM to use for tasks launched by taskrunner; e.g. "wheel".
- taskUser string
- The UNIX user ID on the worker VM to use for tasks launched by taskrunner; e.g. "root".
- tempStoragePrefix string
- The prefix of the resources the taskrunner should use for temporary storage. The supported resource type is: Google Cloud Storage: storage.googleapis.com/{bucket}/{object} bucket.storage.googleapis.com/{object}
- vmId string
- The ID string of the VM.
- workflowFileName string
- The file to store the workflow in.
- alsologtostderr bool
- Whether to also send taskrunner log info to stderr.
- base_task_dir str
- The location on the worker for task-specific subdirectories.
- base_url str
- The base URL for the taskrunner to use when accessing Google Cloud APIs. When workers access Google Cloud APIs, they logically do so via relative URLs. If this field is specified, it supplies the base URL to use for resolving these relative URLs. The normative algorithm used is defined by RFC 1808, "Relative Uniform Resource Locators". If not specified, the default value is "http://www.googleapis.com/".
- commandlines_file_name str
- The file to store preprocessing commands in.
- continue_on_exception bool
- Whether to continue taskrunner if an exception is hit.
- dataflow_api_version str
- The API version of the endpoint, e.g. "v1b3".
- harness_command str
- The command to launch the worker harness.
- language_hint str
- The suggested backend language.
- log_dir str
- The directory on the VM to store logs.
- log_to_serialconsole bool
- Whether to send taskrunner log info to the Google Compute Engine VM serial console.
- log_upload_location str
- Indicates where to put logs. If this is not specified, the logs will not be uploaded. The supported resource type is: Google Cloud Storage: storage.googleapis.com/{bucket}/{object} bucket.storage.googleapis.com/{object}
- oauth_scopes Sequence[str]
- The OAuth2 scopes to be requested by the taskrunner in order to access the Cloud Dataflow API.
- parallel_worker_settings WorkerSettingsResponse
- The settings to pass to the parallel worker harness.
- streaming_worker_main_class str
- The streaming worker main class name.
- task_group str
- The UNIX group ID on the worker VM to use for tasks launched by taskrunner; e.g. "wheel".
- task_user str
- The UNIX user ID on the worker VM to use for tasks launched by taskrunner; e.g. "root".
- temp_storage_prefix str
- The prefix of the resources the taskrunner should use for temporary storage. The supported resource type is: Google Cloud Storage: storage.googleapis.com/{bucket}/{object} bucket.storage.googleapis.com/{object}
- vm_id str
- The ID string of the VM.
- workflow_file_name str
- The file to store the workflow in.
- alsologtostderr Boolean
- Whether to also send taskrunner log info to stderr.
- baseTaskDir String
- The location on the worker for task-specific subdirectories.
- baseUrl String
- The base URL for the taskrunner to use when accessing Google Cloud APIs. When workers access Google Cloud APIs, they logically do so via relative URLs. If this field is specified, it supplies the base URL to use for resolving these relative URLs. The normative algorithm used is defined by RFC 1808, "Relative Uniform Resource Locators". If not specified, the default value is "http://www.googleapis.com/".
- commandlinesFileName String
- The file to store preprocessing commands in.
- continueOnException Boolean
- Whether to continue taskrunner if an exception is hit.
- dataflowApiVersion String
- The API version of the endpoint, e.g. "v1b3".
- harnessCommand String
- The command to launch the worker harness.
- languageHint String
- The suggested backend language.
- logDir String
- The directory on the VM to store logs.
- logToSerialconsole Boolean
- Whether to send taskrunner log info to the Google Compute Engine VM serial console.
- logUploadLocation String
- Indicates where to put logs. If this is not specified, the logs will not be uploaded. The supported resource type is: Google Cloud Storage: storage.googleapis.com/{bucket}/{object} bucket.storage.googleapis.com/{object}
- oauthScopes List<String>
- The OAuth2 scopes to be requested by the taskrunner in order to access the Cloud Dataflow API.
- parallelWorkerSettings Property Map
- The settings to pass to the parallel worker harness.
- streamingWorkerMainClass String
- The streaming worker main class name.
- taskGroup String
- The UNIX group ID on the worker VM to use for tasks launched by taskrunner; e.g. "wheel".
- taskUser String
- The UNIX user ID on the worker VM to use for tasks launched by taskrunner; e.g. "root".
- tempStoragePrefix String
- The prefix of the resources the taskrunner should use for temporary storage. The supported resource type is: Google Cloud Storage: storage.googleapis.com/{bucket}/{object} bucket.storage.googleapis.com/{object}
- vmId String
- The ID string of the VM.
- workflowFileName String
- The file to store the workflow in.
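These task runner settings are populated by the Dataflow service rather than supplied by callers. As a hedged continuation of the sketch above (property paths assumed from the tables in this section), the nested settings can still be read from the job's environment:
// Inspect where each worker pool's task runner uploads logs.
// taskrunnerSettings is service-managed; treat these values as read-only diagnostics.
export const logUploadLocations = job.environment.apply(env =>
    (env.workerPools ?? []).map(p => p.taskrunnerSettings?.logUploadLocation));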
TransformSummaryResponse
- DisplayData List<Pulumi.GoogleNative.Dataflow.V1b3.Inputs.DisplayDataResponse>
- Transform-specific display data.
- InputCollectionName List<string>
- User names for all collection inputs to this transform.
- Kind string
- Type of transform.
- Name string
- User-provided name for this transform instance.
- OutputCollectionName List<string>
- User names for all collection outputs to this transform.
- DisplayData []DisplayDataResponse
- Transform-specific display data.
- InputCollectionName []string
- User names for all collection inputs to this transform.
- Kind string
- Type of transform.
- Name string
- User-provided name for this transform instance.
- OutputCollectionName []string
- User names for all collection outputs to this transform.
- displayData List<DisplayDataResponse>
- Transform-specific display data.
- inputCollectionName List<String>
- User names for all collection inputs to this transform.
- kind String
- Type of transform.
- name String
- User-provided name for this transform instance.
- outputCollectionName List<String>
- User names for all collection outputs to this transform.
- displayData DisplayDataResponse[]
- Transform-specific display data.
- inputCollectionName string[]
- User names for all collection inputs to this transform.
- kind string
- Type of transform.
- name string
- User-provided name for this transform instance.
- outputCollectionName string[]
- User names for all collection outputs to this transform.
- display_data Sequence[DisplayDataResponse]
- Transform-specific display data.
- input_collection_name Sequence[str]
- User names for all collection inputs to this transform.
- kind str
- Type of transform.
- name str
- User-provided name for this transform instance.
- output_collection_name Sequence[str]
- User names for all collection outputs to this transform.
- displayData List<Property Map>
- Transform-specific display data.
- inputCollectionName List<String>
- User names for all collection inputs to this transform.
- kind String
- Type of transform.
- name String
- User-provided name for this transform instance.
- outputCollectionName List<String>
- User names for all collection outputs to this transform.
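To see transform summaries in practice, a hedged sketch (assuming the result exposes them under pipelineDescription.originalPipelineTransform, continuing from the lookup above):
// List each transform in the original pipeline as "kind: name".
export const transforms = job.pipelineDescription.apply(pd =>
    (pd.originalPipelineTransform ?? []).map(t => `${t.kind}: ${t.name}`));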
WorkerPoolResponse
- AutoscalingSettings Pulumi.GoogleNative.Dataflow.V1b3.Inputs.AutoscalingSettingsResponse
- Settings for autoscaling of this WorkerPool.
- DataDisks List<Pulumi.GoogleNative.Dataflow.V1b3.Inputs.DiskResponse>
- Data disks that are used by a VM in this workflow.
- DefaultPackageSet string
- The default package set to install. This allows the service to select a default set of packages which are useful to worker harnesses written in a particular language.
- DiskSizeGb int
- Size of root disk for VMs, in GB. If zero or unspecified, the service will attempt to choose a reasonable default.
- DiskSourceImage string
- Fully qualified source image for disks.
- DiskType string
- Type of root disk for VMs. If empty or unspecified, the service will attempt to choose a reasonable default.
- IpConfiguration string
- Configuration for VM IPs.
- Kind string
- The kind of the worker pool; currently only harness and shuffle are supported.
- MachineType string
- Machine type (e.g. "n1-standard-1"). If empty or unspecified, the service will attempt to choose a reasonable default.
- Metadata Dictionary<string, string>
- Metadata to set on the Google Compute Engine VMs.
- Network string
- Network to which VMs will be assigned. If empty or unspecified, the service will use the network "default".
- NumThreadsPerWorker int
- The number of threads per worker harness. If empty or unspecified, the service will choose a number of threads (according to the number of cores on the selected machine type for batch, or 1 by convention for streaming).
- NumWorkers int
- Number of Google Compute Engine workers in this pool needed to execute the job. If zero or unspecified, the service will attempt to choose a reasonable default.
- OnHostMaintenance string
- The action to take on host maintenance, as defined by the Google Compute Engine API.
- Packages List<Pulumi.GoogleNative.Dataflow.V1b3.Inputs.PackageResponse>
- Packages to be installed on workers.
- PoolArgs Dictionary<string, string>
- Extra arguments for this worker pool.
- SdkHarnessContainerImages List<Pulumi.GoogleNative.Dataflow.V1b3.Inputs.SdkHarnessContainerImageResponse>
- Set of SDK harness containers needed to execute this pipeline. This will only be set in the Fn API path. For non-cross-language pipelines this should have only one entry. Cross-language pipelines will have two or more entries.
- Subnetwork string
- Subnetwork to which VMs will be assigned, if desired. Expected to be of the form "regions/REGION/subnetworks/SUBNETWORK".
- TaskrunnerSettings Pulumi.GoogleNative.Dataflow.V1b3.Inputs.TaskRunnerSettingsResponse
- Settings passed through to Google Compute Engine workers when using the standard Dataflow task runner. Users should ignore this field.
- TeardownPolicy string
- Sets the policy for determining when to turn down the worker pool. Allowed values are: TEARDOWN_ALWAYS, TEARDOWN_ON_SUCCESS, and TEARDOWN_NEVER. TEARDOWN_ALWAYS means workers are always torn down regardless of whether the job succeeds. TEARDOWN_ON_SUCCESS means workers are torn down if the job succeeds. TEARDOWN_NEVER means the workers are never torn down. If the workers are not torn down by the service, they will continue to run and use Google Compute Engine VM resources in the user's project until they are explicitly terminated by the user. Because of this, Google recommends using the TEARDOWN_ALWAYS policy except for small, manually supervised test jobs. If unknown or unspecified, the service will attempt to choose a reasonable default.
- WorkerHarnessContainerImage string
- Docker container image that executes the Cloud Dataflow worker harness, residing in Google Container Registry. Deprecated for the Fn API path. Use sdk_harness_container_images instead.
- Zone string
- Zone to run the worker pools in. If empty or unspecified, the service will attempt to choose a reasonable default.
- AutoscalingSettings AutoscalingSettingsResponse
- Settings for autoscaling of this WorkerPool.
- DataDisks []DiskResponse
- Data disks that are used by a VM in this workflow.
- DefaultPackageSet string
- The default package set to install. This allows the service to select a default set of packages which are useful to worker harnesses written in a particular language.
- DiskSizeGb int
- Size of root disk for VMs, in GB. If zero or unspecified, the service will attempt to choose a reasonable default.
- DiskSourceImage string
- Fully qualified source image for disks.
- DiskType string
- Type of root disk for VMs. If empty or unspecified, the service will attempt to choose a reasonable default.
- IpConfiguration string
- Configuration for VM IPs.
- Kind string
- The kind of the worker pool; currently only harness and shuffle are supported.
- MachineType string
- Machine type (e.g. "n1-standard-1"). If empty or unspecified, the service will attempt to choose a reasonable default.
- Metadata map[string]string
- Metadata to set on the Google Compute Engine VMs.
- Network string
- Network to which VMs will be assigned. If empty or unspecified, the service will use the network "default".
- NumThreadsPerWorker int
- The number of threads per worker harness. If empty or unspecified, the service will choose a number of threads (according to the number of cores on the selected machine type for batch, or 1 by convention for streaming).
- NumWorkers int
- Number of Google Compute Engine workers in this pool needed to execute the job. If zero or unspecified, the service will attempt to choose a reasonable default.
- OnHostMaintenance string
- The action to take on host maintenance, as defined by the Google Compute Engine API.
- Packages []PackageResponse
- Packages to be installed on workers.
- PoolArgs map[string]string
- Extra arguments for this worker pool.
- SdkHarnessContainerImages []SdkHarnessContainerImageResponse
- Set of SDK harness containers needed to execute this pipeline. This will only be set in the Fn API path. For non-cross-language pipelines this should have only one entry. Cross-language pipelines will have two or more entries.
- Subnetwork string
- Subnetwork to which VMs will be assigned, if desired. Expected to be of the form "regions/REGION/subnetworks/SUBNETWORK".
- TaskrunnerSettings TaskRunnerSettingsResponse
- Settings passed through to Google Compute Engine workers when using the standard Dataflow task runner. Users should ignore this field.
- TeardownPolicy string
- Sets the policy for determining when to turn down the worker pool. Allowed values are: TEARDOWN_ALWAYS, TEARDOWN_ON_SUCCESS, and TEARDOWN_NEVER. TEARDOWN_ALWAYS means workers are always torn down regardless of whether the job succeeds. TEARDOWN_ON_SUCCESS means workers are torn down if the job succeeds. TEARDOWN_NEVER means the workers are never torn down. If the workers are not torn down by the service, they will continue to run and use Google Compute Engine VM resources in the user's project until they are explicitly terminated by the user. Because of this, Google recommends using the TEARDOWN_ALWAYS policy except for small, manually supervised test jobs. If unknown or unspecified, the service will attempt to choose a reasonable default.
- WorkerHarnessContainerImage string
- Docker container image that executes the Cloud Dataflow worker harness, residing in Google Container Registry. Deprecated for the Fn API path. Use sdk_harness_container_images instead.
- Zone string
- Zone to run the worker pools in. If empty or unspecified, the service will attempt to choose a reasonable default.
- autoscalingSettings AutoscalingSettingsResponse
- Settings for autoscaling of this WorkerPool.
- dataDisks List<DiskResponse>
- Data disks that are used by a VM in this workflow.
- defaultPackageSet String
- The default package set to install. This allows the service to select a default set of packages which are useful to worker harnesses written in a particular language.
- diskSizeGb Integer
- Size of root disk for VMs, in GB. If zero or unspecified, the service will attempt to choose a reasonable default.
- diskSourceImage String
- Fully qualified source image for disks.
- diskType String
- Type of root disk for VMs. If empty or unspecified, the service will attempt to choose a reasonable default.
- ipConfiguration String
- Configuration for VM IPs.
- kind String
- The kind of the worker pool; currently only harness and shuffle are supported.
- machineType String
- Machine type (e.g. "n1-standard-1"). If empty or unspecified, the service will attempt to choose a reasonable default.
- metadata Map<String,String>
- Metadata to set on the Google Compute Engine VMs.
- network String
- Network to which VMs will be assigned. If empty or unspecified, the service will use the network "default".
- numThreadsPerWorker Integer
- The number of threads per worker harness. If empty or unspecified, the service will choose a number of threads (according to the number of cores on the selected machine type for batch, or 1 by convention for streaming).
- numWorkers Integer
- Number of Google Compute Engine workers in this pool needed to execute the job. If zero or unspecified, the service will attempt to choose a reasonable default.
- onHostMaintenance String
- The action to take on host maintenance, as defined by the Google Compute Engine API.
- packages List<PackageResponse>
- Packages to be installed on workers.
- poolArgs Map<String,String>
- Extra arguments for this worker pool.
- sdkHarnessContainerImages List<SdkHarnessContainerImageResponse>
- Set of SDK harness containers needed to execute this pipeline. This will only be set in the Fn API path. For non-cross-language pipelines this should have only one entry. Cross-language pipelines will have two or more entries.
- subnetwork String
- Subnetwork to which VMs will be assigned, if desired. Expected to be of the form "regions/REGION/subnetworks/SUBNETWORK".
- taskrunnerSettings TaskRunnerSettingsResponse
- Settings passed through to Google Compute Engine workers when using the standard Dataflow task runner. Users should ignore this field.
- teardownPolicy String
- Sets the policy for determining when to turn down the worker pool. Allowed values are: TEARDOWN_ALWAYS, TEARDOWN_ON_SUCCESS, and TEARDOWN_NEVER. TEARDOWN_ALWAYS means workers are always torn down regardless of whether the job succeeds. TEARDOWN_ON_SUCCESS means workers are torn down if the job succeeds. TEARDOWN_NEVER means the workers are never torn down. If the workers are not torn down by the service, they will continue to run and use Google Compute Engine VM resources in the user's project until they are explicitly terminated by the user. Because of this, Google recommends using the TEARDOWN_ALWAYS policy except for small, manually supervised test jobs. If unknown or unspecified, the service will attempt to choose a reasonable default.
- workerHarnessContainerImage String
- Docker container image that executes the Cloud Dataflow worker harness, residing in Google Container Registry. Deprecated for the Fn API path. Use sdk_harness_container_images instead.
- zone String
- Zone to run the worker pools in. If empty or unspecified, the service will attempt to choose a reasonable default.
- autoscalingSettings AutoscalingSettingsResponse
- Settings for autoscaling of this WorkerPool.
- dataDisks DiskResponse[]
- Data disks that are used by a VM in this workflow.
- defaultPackageSet string
- The default package set to install. This allows the service to select a default set of packages which are useful to worker harnesses written in a particular language.
- diskSizeGb number
- Size of root disk for VMs, in GB. If zero or unspecified, the service will attempt to choose a reasonable default.
- diskSourceImage string
- Fully qualified source image for disks.
- diskType string
- Type of root disk for VMs. If empty or unspecified, the service will attempt to choose a reasonable default.
- ipConfiguration string
- Configuration for VM IPs.
- kind string
- The kind of the worker pool; currently only harness and shuffle are supported.
- machineType string
- Machine type (e.g. "n1-standard-1"). If empty or unspecified, the service will attempt to choose a reasonable default.
- metadata {[key: string]: string}
- Metadata to set on the Google Compute Engine VMs.
- network string
- Network to which VMs will be assigned. If empty or unspecified, the service will use the network "default".
- numThreadsPerWorker number
- The number of threads per worker harness. If empty or unspecified, the service will choose a number of threads (according to the number of cores on the selected machine type for batch, or 1 by convention for streaming).
- numWorkers number
- Number of Google Compute Engine workers in this pool needed to execute the job. If zero or unspecified, the service will attempt to choose a reasonable default.
- onHostMaintenance string
- The action to take on host maintenance, as defined by the Google Compute Engine API.
- packages PackageResponse[]
- Packages to be installed on workers.
- poolArgs {[key: string]: string}
- Extra arguments for this worker pool.
- sdkHarnessContainerImages SdkHarnessContainerImageResponse[]
- Set of SDK harness containers needed to execute this pipeline. This will only be set in the Fn API path. For non-cross-language pipelines this should have only one entry. Cross-language pipelines will have two or more entries.
- subnetwork string
- Subnetwork to which VMs will be assigned, if desired. Expected to be of the form "regions/REGION/subnetworks/SUBNETWORK".
- taskrunnerSettings TaskRunnerSettingsResponse
- Settings passed through to Google Compute Engine workers when using the standard Dataflow task runner. Users should ignore this field.
- teardownPolicy string
- Sets the policy for determining when to turn down the worker pool. Allowed values are: TEARDOWN_ALWAYS, TEARDOWN_ON_SUCCESS, and TEARDOWN_NEVER. TEARDOWN_ALWAYS means workers are always torn down regardless of whether the job succeeds. TEARDOWN_ON_SUCCESS means workers are torn down if the job succeeds. TEARDOWN_NEVER means the workers are never torn down. If the workers are not torn down by the service, they will continue to run and use Google Compute Engine VM resources in the user's project until they are explicitly terminated by the user. Because of this, Google recommends using the TEARDOWN_ALWAYS policy except for small, manually supervised test jobs. If unknown or unspecified, the service will attempt to choose a reasonable default.
- workerHarnessContainerImage string
- Docker container image that executes the Cloud Dataflow worker harness, residing in Google Container Registry. Deprecated for the Fn API path. Use sdk_harness_container_images instead.
- zone string
- Zone to run the worker pools in. If empty or unspecified, the service will attempt to choose a reasonable default.
- autoscaling_settings AutoscalingSettingsResponse
- Settings for autoscaling of this WorkerPool.
- data_disks Sequence[DiskResponse]
- Data disks that are used by a VM in this workflow.
- default_package_set str
- The default package set to install. This allows the service to select a default set of packages which are useful to worker harnesses written in a particular language.
- disk_size_gb int
- Size of root disk for VMs, in GB. If zero or unspecified, the service will attempt to choose a reasonable default.
- disk_source_image str
- Fully qualified source image for disks.
- disk_type str
- Type of root disk for VMs. If empty or unspecified, the service will attempt to choose a reasonable default.
- ip_configuration str
- Configuration for VM IPs.
- kind str
- The kind of the worker pool; currently only harness and shuffle are supported.
- machine_type str
- Machine type (e.g. "n1-standard-1"). If empty or unspecified, the service will attempt to choose a reasonable default.
- metadata Mapping[str, str]
- Metadata to set on the Google Compute Engine VMs.
- network str
- Network to which VMs will be assigned. If empty or unspecified, the service will use the network "default".
- num_threads_per_worker int
- The number of threads per worker harness. If empty or unspecified, the service will choose a number of threads (according to the number of cores on the selected machine type for batch, or 1 by convention for streaming).
- num_workers int
- Number of Google Compute Engine workers in this pool needed to execute the job. If zero or unspecified, the service will attempt to choose a reasonable default.
- on_host_maintenance str
- The action to take on host maintenance, as defined by the Google Compute Engine API.
- packages Sequence[PackageResponse]
- Packages to be installed on workers.
- pool_args Mapping[str, str]
- Extra arguments for this worker pool.
- sdk_harness_container_images Sequence[SdkHarnessContainerImageResponse]
- Set of SDK harness containers needed to execute this pipeline. This will only be set in the Fn API path. For non-cross-language pipelines this should have only one entry. Cross-language pipelines will have two or more entries.
- subnetwork str
- Subnetwork to which VMs will be assigned, if desired. Expected to be of the form "regions/REGION/subnetworks/SUBNETWORK".
- taskrunner_settings TaskRunnerSettingsResponse
- Settings passed through to Google Compute Engine workers when using the standard Dataflow task runner. Users should ignore this field.
- teardown_policy str
- Sets the policy for determining when to turn down the worker pool. Allowed values are: TEARDOWN_ALWAYS, TEARDOWN_ON_SUCCESS, and TEARDOWN_NEVER. TEARDOWN_ALWAYS means workers are always torn down regardless of whether the job succeeds. TEARDOWN_ON_SUCCESS means workers are torn down if the job succeeds. TEARDOWN_NEVER means the workers are never torn down. If the workers are not torn down by the service, they will continue to run and use Google Compute Engine VM resources in the user's project until they are explicitly terminated by the user. Because of this, Google recommends using the TEARDOWN_ALWAYS policy except for small, manually supervised test jobs. If unknown or unspecified, the service will attempt to choose a reasonable default.
- worker_harness_container_image str
- Docker container image that executes the Cloud Dataflow worker harness, residing in Google Container Registry. Deprecated for the Fn API path. Use sdk_harness_container_images instead.
- zone str
- Zone to run the worker pools in. If empty or unspecified, the service will attempt to choose a reasonable default.
- autoscalingSettings Property Map
- Settings for autoscaling of this WorkerPool.
- dataDisks List<Property Map>
- Data disks that are used by a VM in this workflow.
- defaultPackageSet String
- The default package set to install. This allows the service to select a default set of packages which are useful to worker harnesses written in a particular language.
- diskSizeGb Number
- Size of root disk for VMs, in GB. If zero or unspecified, the service will attempt to choose a reasonable default.
- diskSourceImage String
- Fully qualified source image for disks.
- diskType String
- Type of root disk for VMs. If empty or unspecified, the service will attempt to choose a reasonable default.
- ipConfiguration String
- Configuration for VM IPs.
- kind String
- The kind of the worker pool; currently only harness and shuffle are supported.
- machineType String
- Machine type (e.g. "n1-standard-1"). If empty or unspecified, the service will attempt to choose a reasonable default.
- metadata Map<String>
- Metadata to set on the Google Compute Engine VMs.
- network String
- Network to which VMs will be assigned. If empty or unspecified, the service will use the network "default".
- numThreadsPerWorker Number
- The number of threads per worker harness. If empty or unspecified, the service will choose a number of threads (according to the number of cores on the selected machine type for batch, or 1 by convention for streaming).
- numWorkers Number
- Number of Google Compute Engine workers in this pool needed to execute the job. If zero or unspecified, the service will attempt to choose a reasonable default.
- onHostMaintenance String
- The action to take on host maintenance, as defined by the Google Compute Engine API.
- packages List<Property Map>
- Packages to be installed on workers.
- poolArgs Map<String>
- Extra arguments for this worker pool.
- sdkHarnessContainerImages List<Property Map>
- Set of SDK harness containers needed to execute this pipeline. This will only be set in the Fn API path. For non-cross-language pipelines this should have only one entry. Cross-language pipelines will have two or more entries.
- subnetwork String
- Subnetwork to which VMs will be assigned, if desired. Expected to be of the form "regions/REGION/subnetworks/SUBNETWORK".
- taskrunnerSettings Property Map
- Settings passed through to Google Compute Engine workers when using the standard Dataflow task runner. Users should ignore this field.
- teardownPolicy String
- Sets the policy for determining when to turn down the worker pool. Allowed values are: TEARDOWN_ALWAYS, TEARDOWN_ON_SUCCESS, and TEARDOWN_NEVER. TEARDOWN_ALWAYS means workers are always torn down regardless of whether the job succeeds. TEARDOWN_ON_SUCCESS means workers are torn down if the job succeeds. TEARDOWN_NEVER means the workers are never torn down. If the workers are not torn down by the service, they will continue to run and use Google Compute Engine VM resources in the user's project until they are explicitly terminated by the user. Because of this, Google recommends using the TEARDOWN_ALWAYS policy except for small, manually supervised test jobs. If unknown or unspecified, the service will attempt to choose a reasonable default.
- workerHarnessContainerImage String
- Docker container image that executes the Cloud Dataflow worker harness, residing in Google Container Registry. Deprecated for the Fn API path. Use sdk_harness_container_images instead.
- zone String
- Zone to run the worker pools in. If empty or unspecified, the service will attempt to choose a reasonable default.
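Continuing the sketch above, worker pool details such as machine type, worker count, and teardown policy can be surfaced from the job's environment (property names taken from the tables in this section):
// Summarize sizing and teardown behavior for each worker pool.
export const poolSummaries = job.environment.apply(env =>
    (env.workerPools ?? []).map(p => ({
        machineType: p.machineType,
        numWorkers: p.numWorkers,
        teardownPolicy: p.teardownPolicy, // e.g. "TEARDOWN_ALWAYS"
    })));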
WorkerSettingsResponse
- BaseUrl string
- The base URL for accessing Google Cloud APIs. When workers access Google Cloud APIs, they logically do so via relative URLs. If this field is specified, it supplies the base URL to use for resolving these relative URLs. The normative algorithm used is defined by RFC 1808, "Relative Uniform Resource Locators". If not specified, the default value is "http://www.googleapis.com/".
- ReportingEnabled bool
- Whether to send work progress updates to the service.
- ServicePath string
- The Cloud Dataflow service path relative to the root URL, for example, "dataflow/v1b3/projects".
- ShuffleServicePath string
- The Shuffle service path relative to the root URL, for example, "shuffle/v1beta1".
- TempStoragePrefix string
- The prefix of the resources the system should use for temporary storage. The supported resource type is: Google Cloud Storage: storage.googleapis.com/{bucket}/{object} bucket.storage.googleapis.com/{object}
- WorkerId string
- The ID of the worker running this pipeline.
- BaseUrl string
- The base URL for accessing Google Cloud APIs. When workers access Google Cloud APIs, they logically do so via relative URLs. If this field is specified, it supplies the base URL to use for resolving these relative URLs. The normative algorithm used is defined by RFC 1808, "Relative Uniform Resource Locators". If not specified, the default value is "http://www.googleapis.com/".
- ReportingEnabled bool
- Whether to send work progress updates to the service.
- ServicePath string
- The Cloud Dataflow service path relative to the root URL, for example, "dataflow/v1b3/projects".
- ShuffleServicePath string
- The Shuffle service path relative to the root URL, for example, "shuffle/v1beta1".
- TempStoragePrefix string
- The prefix of the resources the system should use for temporary storage. The supported resource type is: Google Cloud Storage: storage.googleapis.com/{bucket}/{object} bucket.storage.googleapis.com/{object}
- WorkerId string
- The ID of the worker running this pipeline.
- baseUrl String
- The base URL for accessing Google Cloud APIs. When workers access Google Cloud APIs, they logically do so via relative URLs. If this field is specified, it supplies the base URL to use for resolving these relative URLs. The normative algorithm used is defined by RFC 1808, "Relative Uniform Resource Locators". If not specified, the default value is "http://www.googleapis.com/".
- reportingEnabled Boolean
- Whether to send work progress updates to the service.
- servicePath String
- The Cloud Dataflow service path relative to the root URL, for example, "dataflow/v1b3/projects".
- shuffleServicePath String
- The Shuffle service path relative to the root URL, for example, "shuffle/v1beta1".
- tempStoragePrefix String
- The prefix of the resources the system should use for temporary storage. The supported resource type is: Google Cloud Storage: storage.googleapis.com/{bucket}/{object} bucket.storage.googleapis.com/{object}
- workerId String
- The ID of the worker running this pipeline.
- baseUrl string
- The base URL for accessing Google Cloud APIs. When workers access Google Cloud APIs, they logically do so via relative URLs. If this field is specified, it supplies the base URL to use for resolving these relative URLs. The normative algorithm used is defined by RFC 1808, "Relative Uniform Resource Locators". If not specified, the default value is "http://www.googleapis.com/".
- reportingEnabled boolean
- Whether to send work progress updates to the service.
- servicePath string
- The Cloud Dataflow service path relative to the root URL, for example, "dataflow/v1b3/projects".
- shuffleServicePath string
- The Shuffle service path relative to the root URL, for example, "shuffle/v1beta1".
- tempStoragePrefix string
- The prefix of the resources the system should use for temporary storage. The supported resource type is: Google Cloud Storage: storage.googleapis.com/{bucket}/{object} bucket.storage.googleapis.com/{object}
- workerId string
- The ID of the worker running this pipeline.
- base_url str
- The base URL for accessing Google Cloud APIs. When workers access Google Cloud APIs, they logically do so via relative URLs. If this field is specified, it supplies the base URL to use for resolving these relative URLs. The normative algorithm used is defined by RFC 1808, "Relative Uniform Resource Locators". If not specified, the default value is "http://www.googleapis.com/".
- reporting_enabled bool
- Whether to send work progress updates to the service.
- service_path str
- The Cloud Dataflow service path relative to the root URL, for example, "dataflow/v1b3/projects".
- shuffle_service_path str
- The Shuffle service path relative to the root URL, for example, "shuffle/v1beta1".
- temp_storage_prefix str
- The prefix of the resources the system should use for temporary storage. The supported resource type is: Google Cloud Storage: storage.googleapis.com/{bucket}/{object} bucket.storage.googleapis.com/{object}
- worker_id str
- The ID of the worker running this pipeline.
- baseUrl String
- The base URL for accessing Google Cloud APIs. When workers access Google Cloud APIs, they logically do so via relative URLs. If this field is specified, it supplies the base URL to use for resolving these relative URLs. The normative algorithm used is defined by RFC 1808, "Relative Uniform Resource Locators". If not specified, the default value is "http://www.googleapis.com/".
- reportingEnabled Boolean
- Whether to send work progress updates to the service.
- servicePath String
- The Cloud Dataflow service path relative to the root URL, for example, "dataflow/v1b3/projects".
- shuffleServicePath String
- The Shuffle service path relative to the root URL, for example, "shuffle/v1beta1".
- tempStoragePrefix String
- The prefix of the resources the system should use for temporary storage. The supported resource type is: Google Cloud Storage: storage.googleapis.com/{bucket}/{object} bucket.storage.googleapis.com/{object}
- workerId String
- The ID of the worker running this pipeline.
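WorkerSettingsResponse appears nested inside TaskRunnerSettingsResponse as parallelWorkerSettings. A hedged sketch of reading it, continuing from the lookup above (the property path is assumed from the tables in this document):
// Worker settings sit two levels deep: workerPool -> taskrunnerSettings -> parallelWorkerSettings.
export const workerBaseUrls = job.environment.apply(env =>
    (env.workerPools ?? []).map(p => p.taskrunnerSettings?.parallelWorkerSettings?.baseUrl));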
Package Details
- Repository
- Google Cloud Native pulumi/pulumi-google-native
- License
- Apache-2.0