Google Cloud Native is in preview. Google Cloud Classic is fully supported.
Google Cloud Native v0.32.0 published on Wednesday, Nov 29, 2023 by Pulumi
google-native.notebooks/v1.getExecution
Gets details of an execution.
Using getExecution
Two invocation forms are available. The direct form accepts plain arguments and either blocks until the result value is available, or returns a Promise-wrapped result. The output form accepts Input-wrapped arguments and returns an Output-wrapped result.
function getExecution(args: GetExecutionArgs, opts?: InvokeOptions): Promise<GetExecutionResult>
function getExecutionOutput(args: GetExecutionOutputArgs, opts?: InvokeOptions): Output<GetExecutionResult>
def get_execution(execution_id: Optional[str] = None,
                  location: Optional[str] = None,
                  project: Optional[str] = None,
                  opts: Optional[InvokeOptions] = None) -> GetExecutionResult
def get_execution_output(execution_id: Optional[pulumi.Input[str]] = None,
                         location: Optional[pulumi.Input[str]] = None,
                         project: Optional[pulumi.Input[str]] = None,
                         opts: Optional[InvokeOptions] = None) -> Output[GetExecutionResult]
func LookupExecution(ctx *Context, args *LookupExecutionArgs, opts ...InvokeOption) (*LookupExecutionResult, error)
func LookupExecutionOutput(ctx *Context, args *LookupExecutionOutputArgs, opts ...InvokeOption) LookupExecutionResultOutput
> Note: This function is named LookupExecution in the Go SDK.
public static class GetExecution
{
public static Task<GetExecutionResult> InvokeAsync(GetExecutionArgs args, InvokeOptions? opts = null)
public static Output<GetExecutionResult> Invoke(GetExecutionInvokeArgs args, InvokeOptions? opts = null)
}
public static CompletableFuture<GetExecutionResult> getExecution(GetExecutionArgs args, InvokeOptions options)
// Output-based functions aren't available in Java yet
fn::invoke:
  function: google-native:notebooks/v1:getExecution
  arguments:
    # arguments dictionary
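For illustration, here is a minimal TypeScript sketch of the output form (getExecutionOutput); the direct form (getExecution) accepts the same arguments and returns a Promise instead. The execution ID, location, and project values are placeholders you would replace with your own:

import * as google_native from "@pulumi/google-native";

// Output form: Input-wrapped arguments, Output-wrapped result.
const execution = google_native.notebooks.v1.getExecutionOutput({
    executionId: "my-execution",   // placeholder execution ID
    location: "us-central1",       // placeholder location
    project: "my-gcp-project",     // placeholder; omit to use the provider's default project
});

// Stack outputs resolve once the lookup completes.
export const executionState = execution.state;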
The following arguments are supported. Property names follow each language's conventions (for example, executionId in TypeScript, C#, and Java; execution_id in Python):

- executionId string
- location string
- project string
getExecution Result

The following output properties are available. Property names follow each language's conventions (for example, createTime in TypeScript, create_time in Python):

- createTime string - Time the Execution was instantiated.
- description string - A brief description of this execution.
- displayName string - Name used for UI purposes. The name can only contain alphanumeric characters and underscores '_'.
- executionTemplate ExecutionTemplateResponse - Execute metadata, including name, hardware spec, region, labels, etc.
- jobUri string - The URI of the external job used to execute the notebook.
- name string - The resource name of the execution. Format: projects/{project_id}/locations/{location}/executions/{execution_id}
- outputNotebookFile string - Output notebook file generated by this execution.
- state string - State of the underlying AI Platform job.
- updateTime string - Time the Execution was last updated.
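As a sketch of consuming these properties with the direct (Promise-wrapped) form in TypeScript, using placeholder argument values:

import * as google_native from "@pulumi/google-native";

// Direct form: plain arguments, Promise-wrapped result.
google_native.notebooks.v1.getExecution({
    executionId: "my-execution",   // placeholder
    location: "us-central1",       // placeholder
}).then((exec) => {
    // Fields mirror the output properties listed above.
    console.log(`name:   ${exec.name}`);
    console.log(`state:  ${exec.state}`);
    console.log(`jobUri: ${exec.jobUri}`);
});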
Supporting Types

Property names and types in the supporting types below are shown as in the TypeScript SDK; other languages follow their own conventions (for example, snake_case names and Mapping[str, str] in Python).
DataprocParametersResponse
- cluster string - URI for cluster used to run Dataproc execution. Format: projects/{PROJECT_ID}/regions/{REGION}/clusters/{CLUSTER_NAME}
ExecutionTemplateResponse
- acceleratorConfig SchedulerAcceleratorConfigResponse - Configuration (count and accelerator type) for hardware running notebook execution.
- containerImageUri string - Container Image URI to a DLVM. Example: 'gcr.io/deeplearning-platform-release/base-cu100'. More examples can be found at: https://cloud.google.com/ai-platform/deep-learning-containers/docs/choosing-container
- dataprocParameters DataprocParametersResponse - Parameters used in Dataproc JobType executions.
- inputNotebookFile string - Path to the notebook file to execute. Must be in a Google Cloud Storage bucket. Format: gs://{bucket_name}/{folder}/{notebook_file_name} Example: gs://notebook_user/scheduled_notebooks/sentiment_notebook.ipynb
- jobType string - The type of Job to be used on this execution.
- kernelSpec string - Name of the kernel spec to use. This must be specified if the kernel spec name on the execution target does not match the name in the input notebook file.
- labels {[key: string]: string} - Labels for execution. If the execution is scheduled, the included field will be 'nbs-scheduled'; otherwise it is an immediate execution and the included field will be 'nbs-immediate'. Use these fields to efficiently index between the various types of executions.
- masterType string - Specifies the type of virtual machine to use for your training job's master worker. You must specify this field when scaleTier is set to CUSTOM. You can use certain Compute Engine machine types directly in this field. The following types are supported: n1-standard-4, n1-standard-8, n1-standard-16, n1-standard-32, n1-standard-64, n1-standard-96, n1-highmem-2, n1-highmem-4, n1-highmem-8, n1-highmem-16, n1-highmem-32, n1-highmem-64, n1-highmem-96, n1-highcpu-16, n1-highcpu-32, n1-highcpu-64, n1-highcpu-96. Alternatively, you can use the following legacy machine types: standard, large_model, complex_model_s, complex_model_m, complex_model_l, standard_gpu, complex_model_m_gpu, complex_model_l_gpu, standard_p100, complex_model_m_p100, standard_v100, large_model_v100, complex_model_m_v100, complex_model_l_v100. Finally, if you want to use a TPU for training, specify cloud_tpu in this field. Learn more about the special configuration options for training with TPUs.
- outputNotebookFolder string - Path to the notebook folder to write to. Must be a Google Cloud Storage bucket path. Format: gs://{bucket_name}/{folder} Example: gs://notebook_user/scheduled_notebooks
- parameters string - Parameters used within the 'input_notebook_file' notebook.
- paramsYamlFile string - Parameters to be overridden in the notebook during execution. See https://papermill.readthedocs.io/en/latest/usage-parameterize.html for how to specify parameters in the input notebook; pass them here in a YAML file. Example: gs://notebook_user/scheduled_notebooks/sentiment_notebook_params.yaml
- scaleTier string - Scale tier of the hardware used for notebook execution. DEPRECATED: will be discontinued; currently only CUSTOM is supported.
- serviceAccount string - The email address of a service account to use when running the execution. You must have the iam.serviceAccounts.actAs permission for the specified service account.
- tensorboard string - The name of a Vertex AI [Tensorboard] resource to which this execution will upload Tensorboard logs. Format: projects/{project}/locations/{location}/tensorboards/{tensorboard}
- vertexAiParameters VertexAIParametersResponse - Parameters used in Vertex AI JobType executions.
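The nested executionTemplate can be read through Pulumi's lifted property access; a hedged TypeScript sketch with placeholder arguments:

import * as google_native from "@pulumi/google-native";

const execution = google_native.notebooks.v1.getExecutionOutput({
    executionId: "my-execution",   // placeholder
    location: "us-central1",       // placeholder
});

// Each field below corresponds to an ExecutionTemplateResponse property listed above.
export const inputNotebook = execution.executionTemplate.inputNotebookFile;
export const machineType = execution.executionTemplate.masterType;
export const templateLabels = execution.executionTemplate.labels;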
SchedulerAcceleratorConfigResponse
- coreCount string - Count of cores of this accelerator.
- type string - Type of this accelerator.
VertexAIParametersResponse
- env {[key: string]: string} - Environment variables. At most 100 environment variables can be specified, and each must be unique. Example: GCP_BUCKET=gs://my-bucket/samples/
- network string - The full name of the Compute Engine network to which the Job should be peered. For example, projects/12345/global/networks/myVPC. The format is projects/{project}/global/networks/{network}, where {project} is a project number (as in 12345) and {network} is a network name. Private services access must already be configured for the network. If left unspecified, the job is not peered with any network.
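The Vertex AI parameters are reached the same way through the template. A brief self-contained sketch, assuming placeholder arguments and the TypeScript property name vertexAiParameters listed above:

import * as google_native from "@pulumi/google-native";

const execution = google_native.notebooks.v1.getExecutionOutput({
    executionId: "my-execution",   // placeholder
    location: "us-central1",       // placeholder
});

// apply() unwraps the template so the nested Vertex AI fields can be read.
export const peeredNetwork = execution.executionTemplate.apply(t => t.vertexAiParameters?.network);
export const jobEnv = execution.executionTemplate.apply(t => t.vertexAiParameters?.env);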
Package Details
- Repository: Google Cloud Native pulumi/pulumi-google-native
- License: Apache-2.0