Google Cloud Native is in preview. Google Cloud Classic is fully supported.
Google Cloud Native v0.32.0 published on Wednesday, Nov 29, 2023 by Pulumi
google-native.notebooks/v1.getSchedule
Gets details of a schedule.
Using getSchedule
Two invocation forms are available. The direct form accepts plain arguments and either blocks until the result value is available, or returns a Promise-wrapped result. The output form accepts Input-wrapped arguments and returns an Output-wrapped result.
function getSchedule(args: GetScheduleArgs, opts?: InvokeOptions): Promise<GetScheduleResult>
function getScheduleOutput(args: GetScheduleOutputArgs, opts?: InvokeOptions): Output<GetScheduleResult>
def get_schedule(location: Optional[str] = None,
                 project: Optional[str] = None,
                 schedule_id: Optional[str] = None,
                 opts: Optional[InvokeOptions] = None) -> GetScheduleResult
def get_schedule_output(location: Optional[pulumi.Input[str]] = None,
                        project: Optional[pulumi.Input[str]] = None,
                        schedule_id: Optional[pulumi.Input[str]] = None,
                        opts: Optional[InvokeOptions] = None) -> Output[GetScheduleResult]
func LookupSchedule(ctx *Context, args *LookupScheduleArgs, opts ...InvokeOption) (*LookupScheduleResult, error)
func LookupScheduleOutput(ctx *Context, args *LookupScheduleOutputArgs, opts ...InvokeOption) LookupScheduleResultOutput
> Note: This function is named LookupSchedule in the Go SDK.
public static class GetSchedule
{
public static Task<GetScheduleResult> InvokeAsync(GetScheduleArgs args, InvokeOptions? opts = null)
public static Output<GetScheduleResult> Invoke(GetScheduleInvokeArgs args, InvokeOptions? opts = null)
}
public static CompletableFuture<GetScheduleResult> getSchedule(GetScheduleArgs args, InvokeOptions options)
// Output-based functions aren't available in Java yet
fn::invoke:
function: google-native:notebooks/v1:getSchedule
arguments:
# arguments dictionary
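For the YAML form above, a filled-in invocation might look like the following sketch; the project, location, and schedule ID values are hypothetical placeholders, not real resources:

```yaml
variables:
  mySchedule:
    fn::invoke:
      function: google-native:notebooks/v1:getSchedule
      arguments:
        project: my-gcp-project    # hypothetical project ID
        location: us-central1      # hypothetical location
        scheduleId: daily-report   # hypothetical schedule ID
```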
The following arguments are supported:
- Location string
- ScheduleId string
- Project string
- Location string
- ScheduleId string
- Project string
- location String
- scheduleId String
- project String
- location string
- scheduleId string
- project string
- location str
- schedule_id str
- project str
- location String
- scheduleId String
- project String
getSchedule Result
The following output properties are available:
- CreateTime string - Time the schedule was created.
- CronSchedule string - Cron-tab formatted schedule by which the job will execute. Format: minute, hour, day of month, month, day of week, e.g. 0 0 * * WED = every Wednesday. More examples: https://crontab.guru/examples.html
- Description string - A brief description of this environment.
- DisplayName string - Display name used for UI purposes. Name can only contain alphanumeric characters, hyphens '-', and underscores '_'.
- ExecutionTemplate Pulumi.GoogleNative.Notebooks.V1.Outputs.ExecutionTemplateResponse - Notebook Execution Template corresponding to this schedule.
- Name string - The name of this schedule. Format: projects/{project_id}/locations/{location}/schedules/{schedule_id}
- RecentExecutions List<Pulumi.GoogleNative.Notebooks.V1.Outputs.ExecutionResponse> - The most recent execution names triggered from this schedule and their corresponding states.
- State string
- TimeZone string - Timezone on which the cron_schedule runs. The value of this field must be a time zone name from the tz database (https://en.wikipedia.org/wiki/List_of_tz_database_time_zones). Note that some time zones include a provision for daylight saving time. The rules for daylight saving time are determined by the chosen tz. For UTC use the string "utc". If a time zone is not specified, the default will be UTC (also known as GMT).
- UpdateTime string - Time the schedule was last updated.
- CreateTime string - Time the schedule was created.
- CronSchedule string - Cron-tab formatted schedule by which the job will execute. Format: minute, hour, day of month, month, day of week, e.g. 0 0 * * WED = every Wednesday. More examples: https://crontab.guru/examples.html
- Description string - A brief description of this environment.
- DisplayName string - Display name used for UI purposes. Name can only contain alphanumeric characters, hyphens '-', and underscores '_'.
- ExecutionTemplate ExecutionTemplateResponse - Notebook Execution Template corresponding to this schedule.
- Name string - The name of this schedule. Format: projects/{project_id}/locations/{location}/schedules/{schedule_id}
- RecentExecutions []ExecutionResponse - The most recent execution names triggered from this schedule and their corresponding states.
- State string
- TimeZone string - Timezone on which the cron_schedule runs. The value of this field must be a time zone name from the tz database (https://en.wikipedia.org/wiki/List_of_tz_database_time_zones). Note that some time zones include a provision for daylight saving time. The rules for daylight saving time are determined by the chosen tz. For UTC use the string "utc". If a time zone is not specified, the default will be UTC (also known as GMT).
- UpdateTime string - Time the schedule was last updated.
- createTime String - Time the schedule was created.
- cronSchedule String - Cron-tab formatted schedule by which the job will execute. Format: minute, hour, day of month, month, day of week, e.g. 0 0 * * WED = every Wednesday. More examples: https://crontab.guru/examples.html
- description String - A brief description of this environment.
- displayName String - Display name used for UI purposes. Name can only contain alphanumeric characters, hyphens '-', and underscores '_'.
- executionTemplate ExecutionTemplateResponse - Notebook Execution Template corresponding to this schedule.
- name String - The name of this schedule. Format: projects/{project_id}/locations/{location}/schedules/{schedule_id}
- recentExecutions List<ExecutionResponse> - The most recent execution names triggered from this schedule and their corresponding states.
- state String
- timeZone String - Timezone on which the cron_schedule runs. The value of this field must be a time zone name from the tz database (https://en.wikipedia.org/wiki/List_of_tz_database_time_zones). Note that some time zones include a provision for daylight saving time. The rules for daylight saving time are determined by the chosen tz. For UTC use the string "utc". If a time zone is not specified, the default will be UTC (also known as GMT).
- updateTime String - Time the schedule was last updated.
- createTime string - Time the schedule was created.
- cronSchedule string - Cron-tab formatted schedule by which the job will execute. Format: minute, hour, day of month, month, day of week, e.g. 0 0 * * WED = every Wednesday. More examples: https://crontab.guru/examples.html
- description string - A brief description of this environment.
- displayName string - Display name used for UI purposes. Name can only contain alphanumeric characters, hyphens '-', and underscores '_'.
- executionTemplate ExecutionTemplateResponse - Notebook Execution Template corresponding to this schedule.
- name string - The name of this schedule. Format: projects/{project_id}/locations/{location}/schedules/{schedule_id}
- recentExecutions ExecutionResponse[] - The most recent execution names triggered from this schedule and their corresponding states.
- state string
- timeZone string - Timezone on which the cron_schedule runs. The value of this field must be a time zone name from the tz database (https://en.wikipedia.org/wiki/List_of_tz_database_time_zones). Note that some time zones include a provision for daylight saving time. The rules for daylight saving time are determined by the chosen tz. For UTC use the string "utc". If a time zone is not specified, the default will be UTC (also known as GMT).
- updateTime string - Time the schedule was last updated.
- create_time str - Time the schedule was created.
- cron_schedule str - Cron-tab formatted schedule by which the job will execute. Format: minute, hour, day of month, month, day of week, e.g. 0 0 * * WED = every Wednesday. More examples: https://crontab.guru/examples.html
- description str - A brief description of this environment.
- display_name str - Display name used for UI purposes. Name can only contain alphanumeric characters, hyphens '-', and underscores '_'.
- execution_template ExecutionTemplateResponse - Notebook Execution Template corresponding to this schedule.
- name str - The name of this schedule. Format: projects/{project_id}/locations/{location}/schedules/{schedule_id}
- recent_executions Sequence[ExecutionResponse] - The most recent execution names triggered from this schedule and their corresponding states.
- state str
- time_zone str - Timezone on which the cron_schedule runs. The value of this field must be a time zone name from the tz database (https://en.wikipedia.org/wiki/List_of_tz_database_time_zones). Note that some time zones include a provision for daylight saving time. The rules for daylight saving time are determined by the chosen tz. For UTC use the string "utc". If a time zone is not specified, the default will be UTC (also known as GMT).
- update_time str - Time the schedule was last updated.
- createTime String - Time the schedule was created.
- cronSchedule String - Cron-tab formatted schedule by which the job will execute. Format: minute, hour, day of month, month, day of week, e.g. 0 0 * * WED = every Wednesday. More examples: https://crontab.guru/examples.html
- description String - A brief description of this environment.
- displayName String - Display name used for UI purposes. Name can only contain alphanumeric characters, hyphens '-', and underscores '_'.
- executionTemplate Property Map - Notebook Execution Template corresponding to this schedule.
- name String - The name of this schedule. Format: projects/{project_id}/locations/{location}/schedules/{schedule_id}
- recentExecutions List<Property Map> - The most recent execution names triggered from this schedule and their corresponding states.
- state String
- timeZone String - Timezone on which the cron_schedule runs. The value of this field must be a time zone name from the tz database (https://en.wikipedia.org/wiki/List_of_tz_database_time_zones). Note that some time zones include a provision for daylight saving time. The rules for daylight saving time are determined by the chosen tz. For UTC use the string "utc". If a time zone is not specified, the default will be UTC (also known as GMT).
- updateTime String - Time the schedule was last updated.
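The name and cronSchedule fields above follow fixed textual formats, so both can be pulled apart client-side with ordinary string handling. A minimal Python sketch; the sample resource name is hypothetical:

```python
def parse_schedule_name(name: str) -> dict:
    """Split a schedule resource name of the form
    projects/{project_id}/locations/{location}/schedules/{schedule_id}
    into its components."""
    parts = name.split("/")
    if len(parts) != 6 or parts[0::2] != ["projects", "locations", "schedules"]:
        raise ValueError(f"not a schedule resource name: {name!r}")
    return {"project": parts[1], "location": parts[3], "schedule_id": parts[5]}

CRON_FIELDS = ("minute", "hour", "day_of_month", "month", "day_of_week")

def parse_cron(expr: str) -> dict:
    """Split a five-field cronSchedule expression into named fields."""
    values = expr.split()
    if len(values) != 5:
        raise ValueError(f"expected 5 cron fields, got {len(values)}")
    return dict(zip(CRON_FIELDS, values))

# Hypothetical schedule name and the cron example from the description above:
info = parse_schedule_name("projects/my-proj/locations/us-central1/schedules/nightly")
fields = parse_cron("0 0 * * WED")  # every Wednesday at midnight
```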
Supporting Types
DataprocParametersResponse
- Cluster string
- URI for cluster used to run Dataproc execution. Format:
projects/{PROJECT_ID}/regions/{REGION}/clusters/{CLUSTER_NAME}
- Cluster string
- URI for cluster used to run Dataproc execution. Format:
projects/{PROJECT_ID}/regions/{REGION}/clusters/{CLUSTER_NAME}
- cluster String
- URI for cluster used to run Dataproc execution. Format:
projects/{PROJECT_ID}/regions/{REGION}/clusters/{CLUSTER_NAME}
- cluster string
- URI for cluster used to run Dataproc execution. Format:
projects/{PROJECT_ID}/regions/{REGION}/clusters/{CLUSTER_NAME}
- cluster str
- URI for cluster used to run Dataproc execution. Format:
projects/{PROJECT_ID}/regions/{REGION}/clusters/{CLUSTER_NAME}
- cluster String
- URI for cluster used to run Dataproc execution. Format:
projects/{PROJECT_ID}/regions/{REGION}/clusters/{CLUSTER_NAME}
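Since the cluster field follows a fixed path format, the URI can be assembled programmatically rather than written by hand. A small sketch; the helper name and all argument values are hypothetical:

```python
def dataproc_cluster_uri(project_id: str, region: str, cluster_name: str) -> str:
    """Build a Dataproc cluster URI in the documented format:
    projects/{PROJECT_ID}/regions/{REGION}/clusters/{CLUSTER_NAME}"""
    return f"projects/{project_id}/regions/{region}/clusters/{cluster_name}"

# Hypothetical values:
uri = dataproc_cluster_uri("my-proj", "us-central1", "notebooks-cluster")
```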
ExecutionResponse
- CreateTime string - Time the Execution was instantiated.
- Description string - A brief description of this execution.
- DisplayName string - Name used for UI purposes. Name can only contain alphanumeric characters and underscores '_'.
- ExecutionTemplate Pulumi.GoogleNative.Notebooks.V1.Inputs.ExecutionTemplateResponse - Execute metadata including name, hardware spec, region, labels, etc.
- JobUri string - The URI of the external job used to execute the notebook.
- Name string - The resource name of the execution. Format: projects/{project_id}/locations/{location}/executions/{execution_id}
- OutputNotebookFile string - Output notebook file generated by this execution.
- State string - State of the underlying AI Platform job.
- UpdateTime string - Time the Execution was last updated.
- CreateTime string - Time the Execution was instantiated.
- Description string - A brief description of this execution.
- DisplayName string - Name used for UI purposes. Name can only contain alphanumeric characters and underscores '_'.
- ExecutionTemplate ExecutionTemplateResponse - Execute metadata including name, hardware spec, region, labels, etc.
- JobUri string - The URI of the external job used to execute the notebook.
- Name string - The resource name of the execution. Format: projects/{project_id}/locations/{location}/executions/{execution_id}
- OutputNotebookFile string - Output notebook file generated by this execution.
- State string - State of the underlying AI Platform job.
- UpdateTime string - Time the Execution was last updated.
- createTime String - Time the Execution was instantiated.
- description String - A brief description of this execution.
- displayName String - Name used for UI purposes. Name can only contain alphanumeric characters and underscores '_'.
- executionTemplate ExecutionTemplateResponse - Execute metadata including name, hardware spec, region, labels, etc.
- jobUri String - The URI of the external job used to execute the notebook.
- name String - The resource name of the execution. Format: projects/{project_id}/locations/{location}/executions/{execution_id}
- outputNotebookFile String - Output notebook file generated by this execution.
- state String - State of the underlying AI Platform job.
- updateTime String - Time the Execution was last updated.
- createTime string - Time the Execution was instantiated.
- description string - A brief description of this execution.
- displayName string - Name used for UI purposes. Name can only contain alphanumeric characters and underscores '_'.
- executionTemplate ExecutionTemplateResponse - Execute metadata including name, hardware spec, region, labels, etc.
- jobUri string - The URI of the external job used to execute the notebook.
- name string - The resource name of the execution. Format: projects/{project_id}/locations/{location}/executions/{execution_id}
- outputNotebookFile string - Output notebook file generated by this execution.
- state string - State of the underlying AI Platform job.
- updateTime string - Time the Execution was last updated.
- create_time str - Time the Execution was instantiated.
- description str - A brief description of this execution.
- display_name str - Name used for UI purposes. Name can only contain alphanumeric characters and underscores '_'.
- execution_template ExecutionTemplateResponse - Execute metadata including name, hardware spec, region, labels, etc.
- job_uri str - The URI of the external job used to execute the notebook.
- name str - The resource name of the execution. Format: projects/{project_id}/locations/{location}/executions/{execution_id}
- output_notebook_file str - Output notebook file generated by this execution.
- state str - State of the underlying AI Platform job.
- update_time str - Time the Execution was last updated.
- createTime String - Time the Execution was instantiated.
- description String - A brief description of this execution.
- displayName String - Name used for UI purposes. Name can only contain alphanumeric characters and underscores '_'.
- executionTemplate Property Map - Execute metadata including name, hardware spec, region, labels, etc.
- jobUri String - The URI of the external job used to execute the notebook.
- name String - The resource name of the execution. Format: projects/{project_id}/locations/{location}/executions/{execution_id}
- outputNotebookFile String - Output notebook file generated by this execution.
- state String - State of the underlying AI Platform job.
- updateTime String - Time the Execution was last updated.
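A schedule's recentExecutions list can be scanned client-side, for example to tally execution states. A sketch over plain dicts shaped like ExecutionResponse (the field names follow the list above; the sample data is hypothetical):

```python
from collections import Counter

def state_counts(recent_executions: list) -> Counter:
    """Tally the `state` field across ExecutionResponse-shaped dicts."""
    return Counter(e.get("state", "UNKNOWN") for e in recent_executions)

# Hypothetical recent executions:
counts = state_counts([
    {"state": "SUCCEEDED"},
    {"state": "FAILED"},
    {"state": "SUCCEEDED"},
])
```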
ExecutionTemplateResponse
- AcceleratorConfig Pulumi.GoogleNative.Notebooks.V1.Inputs.SchedulerAcceleratorConfigResponse - Configuration (count and accelerator type) for hardware running notebook execution.
- ContainerImageUri string - Container Image URI to a DLVM. Example: 'gcr.io/deeplearning-platform-release/base-cu100'. More examples can be found at: https://cloud.google.com/ai-platform/deep-learning-containers/docs/choosing-container
- DataprocParameters Pulumi.GoogleNative.Notebooks.V1.Inputs.DataprocParametersResponse - Parameters used in Dataproc JobType executions.
- InputNotebookFile string - Path to the notebook file to execute. Must be in a Google Cloud Storage bucket. Format: gs://{bucket_name}/{folder}/{notebook_file_name} Ex: gs://notebook_user/scheduled_notebooks/sentiment_notebook.ipynb
- JobType string - The type of Job to be used on this execution.
- KernelSpec string - Name of the kernel spec to use. This must be specified if the kernel spec name on the execution target does not match the name in the input notebook file.
- Labels Dictionary<string, string> - Labels for execution. If execution is scheduled, a field included will be 'nbs-scheduled'. Otherwise, it is an immediate execution, and an included field will be 'nbs-immediate'. Use fields to efficiently index between various types of executions.
- MasterType string - Specifies the type of virtual machine to use for your training job's master worker. You must specify this field when scaleTier is set to CUSTOM. You can use certain Compute Engine machine types directly in this field. The following types are supported: n1-standard-4, n1-standard-8, n1-standard-16, n1-standard-32, n1-standard-64, n1-standard-96, n1-highmem-2, n1-highmem-4, n1-highmem-8, n1-highmem-16, n1-highmem-32, n1-highmem-64, n1-highmem-96, n1-highcpu-16, n1-highcpu-32, n1-highcpu-64, n1-highcpu-96. Alternatively, you can use the following legacy machine types: standard, large_model, complex_model_s, complex_model_m, complex_model_l, standard_gpu, complex_model_m_gpu, complex_model_l_gpu, standard_p100, complex_model_m_p100, standard_v100, large_model_v100, complex_model_m_v100, complex_model_l_v100. Finally, if you want to use a TPU for training, specify cloud_tpu in this field. Learn more about the special configuration options for training with TPU.
- OutputNotebookFolder string - Path to the notebook folder to write to. Must be in a Google Cloud Storage bucket path. Format: gs://{bucket_name}/{folder} Ex: gs://notebook_user/scheduled_notebooks
- Parameters string - Parameters used within the 'input_notebook_file' notebook.
- ParamsYamlFile string - Parameters to be overridden in the notebook during execution. See https://papermill.readthedocs.io/en/latest/usage-parameterize.html for how to specify parameters in the input notebook and pass them here in a YAML file. Ex: gs://notebook_user/scheduled_notebooks/sentiment_notebook_params.yaml
- ScaleTier string - Scale tier of the hardware used for notebook execution. Deprecated: will be discontinued; currently only CUSTOM is supported.
- ServiceAccount string - The email address of a service account to use when running the execution. You must have the iam.serviceAccounts.actAs permission for the specified service account.
- Tensorboard string - The name of a Vertex AI [Tensorboard] resource to which this execution will upload Tensorboard logs. Format: projects/{project}/locations/{location}/tensorboards/{tensorboard}
- VertexAiParameters Pulumi.GoogleNative.Notebooks.V1.Inputs.VertexAIParametersResponse - Parameters used in Vertex AI JobType executions.
- AcceleratorConfig SchedulerAcceleratorConfigResponse - Configuration (count and accelerator type) for hardware running notebook execution.
- ContainerImageUri string - Container Image URI to a DLVM. Example: 'gcr.io/deeplearning-platform-release/base-cu100'. More examples can be found at: https://cloud.google.com/ai-platform/deep-learning-containers/docs/choosing-container
- DataprocParameters DataprocParametersResponse - Parameters used in Dataproc JobType executions.
- InputNotebookFile string - Path to the notebook file to execute. Must be in a Google Cloud Storage bucket. Format: gs://{bucket_name}/{folder}/{notebook_file_name} Ex: gs://notebook_user/scheduled_notebooks/sentiment_notebook.ipynb
- JobType string - The type of Job to be used on this execution.
- KernelSpec string - Name of the kernel spec to use. This must be specified if the kernel spec name on the execution target does not match the name in the input notebook file.
- Labels map[string]string - Labels for execution. If execution is scheduled, a field included will be 'nbs-scheduled'. Otherwise, it is an immediate execution, and an included field will be 'nbs-immediate'. Use fields to efficiently index between various types of executions.
- MasterType string - Specifies the type of virtual machine to use for your training job's master worker. You must specify this field when scaleTier is set to CUSTOM. You can use certain Compute Engine machine types directly in this field. The following types are supported: n1-standard-4, n1-standard-8, n1-standard-16, n1-standard-32, n1-standard-64, n1-standard-96, n1-highmem-2, n1-highmem-4, n1-highmem-8, n1-highmem-16, n1-highmem-32, n1-highmem-64, n1-highmem-96, n1-highcpu-16, n1-highcpu-32, n1-highcpu-64, n1-highcpu-96. Alternatively, you can use the following legacy machine types: standard, large_model, complex_model_s, complex_model_m, complex_model_l, standard_gpu, complex_model_m_gpu, complex_model_l_gpu, standard_p100, complex_model_m_p100, standard_v100, large_model_v100, complex_model_m_v100, complex_model_l_v100. Finally, if you want to use a TPU for training, specify cloud_tpu in this field. Learn more about the special configuration options for training with TPU.
- OutputNotebookFolder string - Path to the notebook folder to write to. Must be in a Google Cloud Storage bucket path. Format: gs://{bucket_name}/{folder} Ex: gs://notebook_user/scheduled_notebooks
- Parameters string - Parameters used within the 'input_notebook_file' notebook.
- ParamsYamlFile string - Parameters to be overridden in the notebook during execution. See https://papermill.readthedocs.io/en/latest/usage-parameterize.html for how to specify parameters in the input notebook and pass them here in a YAML file. Ex: gs://notebook_user/scheduled_notebooks/sentiment_notebook_params.yaml
- ScaleTier string - Scale tier of the hardware used for notebook execution. Deprecated: will be discontinued; currently only CUSTOM is supported.
- ServiceAccount string - The email address of a service account to use when running the execution. You must have the iam.serviceAccounts.actAs permission for the specified service account.
- Tensorboard string - The name of a Vertex AI [Tensorboard] resource to which this execution will upload Tensorboard logs. Format: projects/{project}/locations/{location}/tensorboards/{tensorboard}
- VertexAiParameters VertexAIParametersResponse - Parameters used in Vertex AI JobType executions.
- acceleratorConfig SchedulerAcceleratorConfigResponse - Configuration (count and accelerator type) for hardware running notebook execution.
- containerImageUri String - Container Image URI to a DLVM. Example: 'gcr.io/deeplearning-platform-release/base-cu100'. More examples can be found at: https://cloud.google.com/ai-platform/deep-learning-containers/docs/choosing-container
- dataprocParameters DataprocParametersResponse - Parameters used in Dataproc JobType executions.
- inputNotebookFile String - Path to the notebook file to execute. Must be in a Google Cloud Storage bucket. Format: gs://{bucket_name}/{folder}/{notebook_file_name} Ex: gs://notebook_user/scheduled_notebooks/sentiment_notebook.ipynb
- jobType String - The type of Job to be used on this execution.
- kernelSpec String - Name of the kernel spec to use. This must be specified if the kernel spec name on the execution target does not match the name in the input notebook file.
- labels Map<String,String> - Labels for execution. If execution is scheduled, a field included will be 'nbs-scheduled'. Otherwise, it is an immediate execution, and an included field will be 'nbs-immediate'. Use fields to efficiently index between various types of executions.
- masterType String - Specifies the type of virtual machine to use for your training job's master worker. You must specify this field when scaleTier is set to CUSTOM. You can use certain Compute Engine machine types directly in this field. The following types are supported: n1-standard-4, n1-standard-8, n1-standard-16, n1-standard-32, n1-standard-64, n1-standard-96, n1-highmem-2, n1-highmem-4, n1-highmem-8, n1-highmem-16, n1-highmem-32, n1-highmem-64, n1-highmem-96, n1-highcpu-16, n1-highcpu-32, n1-highcpu-64, n1-highcpu-96. Alternatively, you can use the following legacy machine types: standard, large_model, complex_model_s, complex_model_m, complex_model_l, standard_gpu, complex_model_m_gpu, complex_model_l_gpu, standard_p100, complex_model_m_p100, standard_v100, large_model_v100, complex_model_m_v100, complex_model_l_v100. Finally, if you want to use a TPU for training, specify cloud_tpu in this field. Learn more about the special configuration options for training with TPU.
- outputNotebookFolder String - Path to the notebook folder to write to. Must be in a Google Cloud Storage bucket path. Format: gs://{bucket_name}/{folder} Ex: gs://notebook_user/scheduled_notebooks
- parameters String - Parameters used within the 'input_notebook_file' notebook.
- paramsYamlFile String - Parameters to be overridden in the notebook during execution. See https://papermill.readthedocs.io/en/latest/usage-parameterize.html for how to specify parameters in the input notebook and pass them here in a YAML file. Ex: gs://notebook_user/scheduled_notebooks/sentiment_notebook_params.yaml
- scaleTier String - Scale tier of the hardware used for notebook execution. Deprecated: will be discontinued; currently only CUSTOM is supported.
- serviceAccount String - The email address of a service account to use when running the execution. You must have the iam.serviceAccounts.actAs permission for the specified service account.
- tensorboard String - The name of a Vertex AI [Tensorboard] resource to which this execution will upload Tensorboard logs. Format: projects/{project}/locations/{location}/tensorboards/{tensorboard}
- vertexAiParameters VertexAIParametersResponse - Parameters used in Vertex AI JobType executions.
- acceleratorConfig SchedulerAcceleratorConfigResponse - Configuration (count and accelerator type) for hardware running notebook execution.
- containerImageUri string - Container Image URI to a DLVM. Example: 'gcr.io/deeplearning-platform-release/base-cu100'. More examples can be found at: https://cloud.google.com/ai-platform/deep-learning-containers/docs/choosing-container
- dataprocParameters DataprocParametersResponse - Parameters used in Dataproc JobType executions.
- inputNotebookFile string - Path to the notebook file to execute. Must be in a Google Cloud Storage bucket. Format: gs://{bucket_name}/{folder}/{notebook_file_name} Ex: gs://notebook_user/scheduled_notebooks/sentiment_notebook.ipynb
- jobType string - The type of Job to be used on this execution.
- kernelSpec string - Name of the kernel spec to use. This must be specified if the kernel spec name on the execution target does not match the name in the input notebook file.
- labels {[key: string]: string} - Labels for execution. If execution is scheduled, a field included will be 'nbs-scheduled'. Otherwise, it is an immediate execution, and an included field will be 'nbs-immediate'. Use fields to efficiently index between various types of executions.
- masterType string - Specifies the type of virtual machine to use for your training job's master worker. You must specify this field when scaleTier is set to CUSTOM. You can use certain Compute Engine machine types directly in this field. The following types are supported: n1-standard-4, n1-standard-8, n1-standard-16, n1-standard-32, n1-standard-64, n1-standard-96, n1-highmem-2, n1-highmem-4, n1-highmem-8, n1-highmem-16, n1-highmem-32, n1-highmem-64, n1-highmem-96, n1-highcpu-16, n1-highcpu-32, n1-highcpu-64, n1-highcpu-96. Alternatively, you can use the following legacy machine types: standard, large_model, complex_model_s, complex_model_m, complex_model_l, standard_gpu, complex_model_m_gpu, complex_model_l_gpu, standard_p100, complex_model_m_p100, standard_v100, large_model_v100, complex_model_m_v100, complex_model_l_v100. Finally, if you want to use a TPU for training, specify cloud_tpu in this field. Learn more about the special configuration options for training with TPU.
- outputNotebookFolder string - Path to the notebook folder to write to. Must be in a Google Cloud Storage bucket path. Format: gs://{bucket_name}/{folder} Ex: gs://notebook_user/scheduled_notebooks
- parameters string - Parameters used within the 'input_notebook_file' notebook.
- paramsYamlFile string - Parameters to be overridden in the notebook during execution. See https://papermill.readthedocs.io/en/latest/usage-parameterize.html for how to specify parameters in the input notebook and pass them here in a YAML file. Ex: gs://notebook_user/scheduled_notebooks/sentiment_notebook_params.yaml
- scaleTier string - Scale tier of the hardware used for notebook execution. Deprecated: will be discontinued; currently only CUSTOM is supported.
- serviceAccount string - The email address of a service account to use when running the execution. You must have the iam.serviceAccounts.actAs permission for the specified service account.
- tensorboard string - The name of a Vertex AI [Tensorboard] resource to which this execution will upload Tensorboard logs. Format: projects/{project}/locations/{location}/tensorboards/{tensorboard}
- vertexAiParameters VertexAIParametersResponse - Parameters used in Vertex AI JobType executions.
- accelerator_
config SchedulerAccelerator Config Response - Configuration (count and accelerator type) for hardware running notebook execution.
- container_
image_ struri - Container Image URI to a DLVM Example: 'gcr.io/deeplearning-platform-release/base-cu100' More examples can be found at: https://cloud.google.com/ai-platform/deep-learning-containers/docs/choosing-container
- dataproc_
parameters DataprocParameters Response - Parameters used in Dataproc JobType executions.
- input_
notebook_ strfile - Path to the notebook file to execute. Must be in a Google Cloud Storage bucket. Format:
gs://{bucket_name}/{folder}/{notebook_file_name}
Ex:gs://notebook_user/scheduled_notebooks/sentiment_notebook.ipynb
- job_
type str - The type of Job to be used on this execution.
- kernel_
spec str - Name of the kernel spec to use. This must be specified if the kernel spec name on the execution target does not match the name in the input notebook file.
- labels Mapping[str, str]
- Labels for execution. If execution is scheduled, a field included will be 'nbs-scheduled'. Otherwise, it is an immediate execution, and an included field will be 'nbs-immediate'. Use fields to efficiently index between various types of executions.
- master_type str
- Specifies the type of virtual machine to use for your training job's master worker. You must specify this field when scaleTier is set to CUSTOM. You can use certain Compute Engine machine types directly in this field. The following types are supported: n1-standard-4, n1-standard-8, n1-standard-16, n1-standard-32, n1-standard-64, n1-standard-96, n1-highmem-2, n1-highmem-4, n1-highmem-8, n1-highmem-16, n1-highmem-32, n1-highmem-64, n1-highmem-96, n1-highcpu-16, n1-highcpu-32, n1-highcpu-64, n1-highcpu-96. Alternatively, you can use the following legacy machine types: standard, large_model, complex_model_s, complex_model_m, complex_model_l, standard_gpu, complex_model_m_gpu, complex_model_l_gpu, standard_p100, complex_model_m_p100, standard_v100, large_model_v100, complex_model_m_v100, complex_model_l_v100. Finally, if you want to use a TPU for training, specify cloud_tpu in this field. Learn more about the special configuration options for training with TPU.
- output_notebook_folder str
- Path to the notebook folder to write to. Must be in a Google Cloud Storage bucket path. Format:
gs://{bucket_name}/{folder}
Ex: gs://notebook_user/scheduled_notebooks
- parameters str
- Parameters used within the 'input_notebook_file' notebook.
- params_yaml_file str
- Parameters to be overridden in the notebook during execution. See https://papermill.readthedocs.io/en/latest/usage-parameterize.html for how to specify parameters in the input notebook and pass them here in a YAML file. Ex:
gs://notebook_user/scheduled_notebooks/sentiment_notebook_params.yaml
- scale_tier str
- Scale tier of the hardware used for notebook execution. DEPRECATED: will be discontinued. Currently only CUSTOM is supported.
- service_account str
- The email address of a service account to use when running the execution. You must have the iam.serviceAccounts.actAs permission for the specified service account.
- tensorboard str
- The name of a Vertex AI [Tensorboard] resource to which this execution will upload Tensorboard logs. Format:
projects/{project}/locations/{location}/tensorboards/{tensorboard}
- vertex_ai_parameters VertexAIParametersResponse
- Parameters used in Vertex AI JobType executions.
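The GCS path formats required by input_notebook_file and output_notebook_folder above can be sanity-checked before scheduling. A minimal sketch, assuming simple regex validation; these helper names are hypothetical and not part of the SDK:

```python
import re

# Hypothetical helpers (not part of the pulumi_google_native SDK):
# sanity-check the path formats documented for input_notebook_file
# (gs://{bucket_name}/{folder}/{notebook_file_name}) and
# output_notebook_folder (gs://{bucket_name}/{folder}).
NOTEBOOK_FILE_RE = re.compile(r"^gs://[^/]+(?:/[^/]+)*/[^/]+\.ipynb$")
NOTEBOOK_FOLDER_RE = re.compile(r"^gs://[^/]+(?:/[^/]+)+$")

def is_valid_input_notebook_file(path: str) -> bool:
    """True if path looks like gs://{bucket}/{folder}/{notebook}.ipynb."""
    return bool(NOTEBOOK_FILE_RE.match(path))

def is_valid_output_notebook_folder(path: str) -> bool:
    """True if path looks like gs://{bucket}/{folder} (no trailing slash)."""
    return bool(NOTEBOOK_FOLDER_RE.match(path))
```

For example, the documented sample path gs://notebook_user/scheduled_notebooks/sentiment_notebook.ipynb passes the first check, while a local filesystem path does not.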
- acceleratorConfig Property Map
- Configuration (count and accelerator type) for hardware running notebook execution.
- containerImageUri String
- Container Image URI to a DLVM. Example: 'gcr.io/deeplearning-platform-release/base-cu100'. More examples can be found at: https://cloud.google.com/ai-platform/deep-learning-containers/docs/choosing-container
- dataprocParameters Property Map
- Parameters used in Dataproc JobType executions.
- inputNotebookFile String
- Path to the notebook file to execute. Must be in a Google Cloud Storage bucket. Format:
gs://{bucket_name}/{folder}/{notebook_file_name}
Ex: gs://notebook_user/scheduled_notebooks/sentiment_notebook.ipynb
- jobType String
- The type of Job to be used on this execution.
- kernelSpec String
- Name of the kernel spec to use. This must be specified if the kernel spec name on the execution target does not match the name in the input notebook file.
- labels Map<String>
- Labels for execution. If the execution is scheduled, a field included will be 'nbs-scheduled'. Otherwise, it is an immediate execution, and an included field will be 'nbs-immediate'. Use fields to efficiently index between various types of executions.
- masterType String
- Specifies the type of virtual machine to use for your training job's master worker. You must specify this field when scaleTier is set to CUSTOM. You can use certain Compute Engine machine types directly in this field. The following types are supported: n1-standard-4, n1-standard-8, n1-standard-16, n1-standard-32, n1-standard-64, n1-standard-96, n1-highmem-2, n1-highmem-4, n1-highmem-8, n1-highmem-16, n1-highmem-32, n1-highmem-64, n1-highmem-96, n1-highcpu-16, n1-highcpu-32, n1-highcpu-64, n1-highcpu-96. Alternatively, you can use the following legacy machine types: standard, large_model, complex_model_s, complex_model_m, complex_model_l, standard_gpu, complex_model_m_gpu, complex_model_l_gpu, standard_p100, complex_model_m_p100, standard_v100, large_model_v100, complex_model_m_v100, complex_model_l_v100. Finally, if you want to use a TPU for training, specify cloud_tpu in this field. Learn more about the special configuration options for training with TPU.
- outputNotebookFolder String
- Path to the notebook folder to write to. Must be in a Google Cloud Storage bucket path. Format:
gs://{bucket_name}/{folder}
Ex: gs://notebook_user/scheduled_notebooks
- parameters String
- Parameters used within the 'input_notebook_file' notebook.
- paramsYamlFile String
- Parameters to be overridden in the notebook during execution. See https://papermill.readthedocs.io/en/latest/usage-parameterize.html for how to specify parameters in the input notebook and pass them here in a YAML file. Ex:
gs://notebook_user/scheduled_notebooks/sentiment_notebook_params.yaml
- scaleTier String
- Scale tier of the hardware used for notebook execution. DEPRECATED: will be discontinued. Currently only CUSTOM is supported.
- serviceAccount String
- The email address of a service account to use when running the execution. You must have the iam.serviceAccounts.actAs permission for the specified service account.
- tensorboard String
- The name of a Vertex AI [Tensorboard] resource to which this execution will upload Tensorboard logs. Format:
projects/{project}/locations/{location}/tensorboards/{tensorboard}
- vertexAiParameters Property Map
- Parameters used in Vertex AI JobType executions.
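The masterType description above enumerates the accepted Compute Engine and legacy machine types and ties the field to scaleTier CUSTOM. A hedged sketch of a client-side pre-check, assuming the allow-list simply mirrors that description (the service remains the source of truth; the function name is hypothetical):

```python
# Hypothetical client-side check; the allowed sets below mirror the
# masterType description in this reference, not a live API call.
N1_MACHINE_TYPES = (
    {f"n1-standard-{n}" for n in (4, 8, 16, 32, 64, 96)}
    | {f"n1-highmem-{n}" for n in (2, 4, 8, 16, 32, 64, 96)}
    | {f"n1-highcpu-{n}" for n in (16, 32, 64, 96)}
)

LEGACY_MACHINE_TYPES = {
    "standard", "large_model", "complex_model_s", "complex_model_m",
    "complex_model_l", "standard_gpu", "complex_model_m_gpu",
    "complex_model_l_gpu", "standard_p100", "complex_model_m_p100",
    "standard_v100", "large_model_v100", "complex_model_m_v100",
    "complex_model_l_v100",
}

def validate_master_type(master_type: str, scale_tier: str) -> None:
    """Raise ValueError if master_type is not usable with the given scaleTier."""
    if scale_tier != "CUSTOM":
        raise ValueError("masterType may only be set when scaleTier is CUSTOM")
    allowed = N1_MACHINE_TYPES | LEGACY_MACHINE_TYPES | {"cloud_tpu"}
    if master_type not in allowed:
        raise ValueError(f"unsupported masterType: {master_type!r}")
```

Catching an invalid combination locally avoids a round trip that would otherwise fail server-side.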
SchedulerAcceleratorConfigResponse
- core_count str
- Count of cores of this accelerator.
- type str
- Type of this accelerator.
VertexAIParametersResponse
- Env Dictionary<string, string>
- Environment variables. At most 100 environment variables can be specified, and they must be unique. Example: GCP_BUCKET=gs://my-bucket/samples/
- Network string
- The full name of the Compute Engine network to which the Job should be peered. For example, projects/12345/global/networks/myVPC. Format is of the form projects/{project}/global/networks/{network}, where {project} is a project number, as in 12345, and {network} is a network name. Private services access must already be configured for the network. If left unspecified, the job is not peered with any network.
- Env map[string]string
- Environment variables. At most 100 environment variables can be specified, and they must be unique. Example: GCP_BUCKET=gs://my-bucket/samples/
- Network string
- The full name of the Compute Engine network to which the Job should be peered. For example, projects/12345/global/networks/myVPC. Format is of the form projects/{project}/global/networks/{network}, where {project} is a project number, as in 12345, and {network} is a network name. Private services access must already be configured for the network. If left unspecified, the job is not peered with any network.
- env Map<String,String>
- Environment variables. At most 100 environment variables can be specified, and they must be unique. Example: GCP_BUCKET=gs://my-bucket/samples/
- network String
- The full name of the Compute Engine network to which the Job should be peered. For example, projects/12345/global/networks/myVPC. Format is of the form projects/{project}/global/networks/{network}, where {project} is a project number, as in 12345, and {network} is a network name. Private services access must already be configured for the network. If left unspecified, the job is not peered with any network.
- env {[key: string]: string}
- Environment variables. At most 100 environment variables can be specified, and they must be unique. Example: GCP_BUCKET=gs://my-bucket/samples/
- network string
- The full name of the Compute Engine network to which the Job should be peered. For example, projects/12345/global/networks/myVPC. Format is of the form projects/{project}/global/networks/{network}, where {project} is a project number, as in 12345, and {network} is a network name. Private services access must already be configured for the network. If left unspecified, the job is not peered with any network.
- env Mapping[str, str]
- Environment variables. At most 100 environment variables can be specified, and they must be unique. Example: GCP_BUCKET=gs://my-bucket/samples/
- network str
- The full name of the Compute Engine network to which the Job should be peered. For example, projects/12345/global/networks/myVPC. Format is of the form projects/{project}/global/networks/{network}, where {project} is a project number, as in 12345, and {network} is a network name. Private services access must already be configured for the network. If left unspecified, the job is not peered with any network.
- env Map<String>
- Environment variables. At most 100 environment variables can be specified, and they must be unique. Example: GCP_BUCKET=gs://my-bucket/samples/
- network String
- The full name of the Compute Engine network to which the Job should be peered. For example, projects/12345/global/networks/myVPC. Format is of the form projects/{project}/global/networks/{network}, where {project} is a project number, as in 12345, and {network} is a network name. Private services access must already be configured for the network. If left unspecified, the job is not peered with any network.
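The VertexAIParametersResponse fields above constrain the network name to projects/{project}/global/networks/{network} (with a numeric project) and cap env at 100 unique variables. A minimal sketch of enforcing both, with hypothetical helper names not drawn from the SDK:

```python
import re

# Hypothetical helpers reflecting the VertexAIParametersResponse
# constraints documented above.
NETWORK_RE = re.compile(r"^projects/(\d+)/global/networks/([^/]+)$")

def parse_network(name: str) -> tuple:
    """Return (project_number, network_name) or raise ValueError."""
    m = NETWORK_RE.match(name)
    if not m:
        raise ValueError(f"not a valid network name: {name!r}")
    return m.group(1), m.group(2)

def check_env(env: dict) -> None:
    """Enforce the documented cap of 100 environment variables.

    Uniqueness is implied by using a dict, whose keys cannot repeat.
    """
    if len(env) > 100:
        raise ValueError("at most 100 environment variables can be specified")
```

The documented example projects/12345/global/networks/myVPC parses to project number "12345" and network name "myVPC".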
Package Details
- Repository
- Google Cloud Native pulumi/pulumi-google-native
- License
- Apache-2.0