Google Cloud Native is in preview. Google Cloud Classic is fully supported.
google-native.dataproc/v1.Session
Create an interactive session asynchronously.
Create Session Resource
Resources are created with functions called constructors. To learn more about declaring and configuring resources, see Resources.
Constructor syntax
new Session(name: string, args: SessionArgs, opts?: CustomResourceOptions);
@overload
def Session(resource_name: str,
            args: SessionArgs,
            opts: Optional[ResourceOptions] = None)
@overload
def Session(resource_name: str,
            opts: Optional[ResourceOptions] = None,
            session_id: Optional[str] = None,
            environment_config: Optional[EnvironmentConfigArgs] = None,
            jupyter_session: Optional[JupyterConfigArgs] = None,
            labels: Optional[Mapping[str, str]] = None,
            location: Optional[str] = None,
            name: Optional[str] = None,
            project: Optional[str] = None,
            request_id: Optional[str] = None,
            runtime_config: Optional[RuntimeConfigArgs] = None,
            session_template: Optional[str] = None,
            user: Optional[str] = None)
func NewSession(ctx *Context, name string, args SessionArgs, opts ...ResourceOption) (*Session, error)
public Session(string name, SessionArgs args, CustomResourceOptions? opts = null)
public Session(String name, SessionArgs args)
public Session(String name, SessionArgs args, CustomResourceOptions options)
type: google-native:dataproc/v1:Session
properties: # The arguments to resource properties.
options: # Bag of options to control resource's behavior.
Parameters
- name string
- The unique name of the resource.
- args SessionArgs
- The arguments to resource properties.
- opts CustomResourceOptions
- Bag of options to control resource's behavior.
- resource_name str
- The unique name of the resource.
- args SessionArgs
- The arguments to resource properties.
- opts ResourceOptions
- Bag of options to control resource's behavior.
- ctx Context
- Context object for the current deployment.
- name string
- The unique name of the resource.
- args SessionArgs
- The arguments to resource properties.
- opts ResourceOption
- Bag of options to control resource's behavior.
- name string
- The unique name of the resource.
- args SessionArgs
- The arguments to resource properties.
- opts CustomResourceOptions
- Bag of options to control resource's behavior.
- name String
- The unique name of the resource.
- args SessionArgs
- The arguments to resource properties.
- options CustomResourceOptions
- Bag of options to control resource's behavior.
Constructor example
The following reference example uses placeholder values for all input properties.
var sessionResource = new GoogleNative.Dataproc.V1.Session("sessionResource", new()
{
SessionId = "string",
EnvironmentConfig = new GoogleNative.Dataproc.V1.Inputs.EnvironmentConfigArgs
{
ExecutionConfig = new GoogleNative.Dataproc.V1.Inputs.ExecutionConfigArgs
{
IdleTtl = "string",
KmsKey = "string",
NetworkTags = new[]
{
"string",
},
NetworkUri = "string",
ServiceAccount = "string",
StagingBucket = "string",
SubnetworkUri = "string",
Ttl = "string",
},
PeripheralsConfig = new GoogleNative.Dataproc.V1.Inputs.PeripheralsConfigArgs
{
MetastoreService = "string",
SparkHistoryServerConfig = new GoogleNative.Dataproc.V1.Inputs.SparkHistoryServerConfigArgs
{
DataprocCluster = "string",
},
},
},
JupyterSession = new GoogleNative.Dataproc.V1.Inputs.JupyterConfigArgs
{
DisplayName = "string",
Kernel = GoogleNative.Dataproc.V1.JupyterConfigKernel.KernelUnspecified,
},
Labels =
{
{ "string", "string" },
},
Location = "string",
Name = "string",
Project = "string",
RequestId = "string",
RuntimeConfig = new GoogleNative.Dataproc.V1.Inputs.RuntimeConfigArgs
{
ContainerImage = "string",
Properties =
{
{ "string", "string" },
},
RepositoryConfig = new GoogleNative.Dataproc.V1.Inputs.RepositoryConfigArgs
{
PypiRepositoryConfig = new GoogleNative.Dataproc.V1.Inputs.PyPiRepositoryConfigArgs
{
PypiRepository = "string",
},
},
Version = "string",
},
SessionTemplate = "string",
User = "string",
});
example, err := dataproc.NewSession(ctx, "sessionResource", &dataproc.SessionArgs{
SessionId: pulumi.String("string"),
EnvironmentConfig: &dataproc.EnvironmentConfigArgs{
ExecutionConfig: &dataproc.ExecutionConfigArgs{
IdleTtl: pulumi.String("string"),
KmsKey: pulumi.String("string"),
NetworkTags: pulumi.StringArray{
pulumi.String("string"),
},
NetworkUri: pulumi.String("string"),
ServiceAccount: pulumi.String("string"),
StagingBucket: pulumi.String("string"),
SubnetworkUri: pulumi.String("string"),
Ttl: pulumi.String("string"),
},
PeripheralsConfig: &dataproc.PeripheralsConfigArgs{
MetastoreService: pulumi.String("string"),
SparkHistoryServerConfig: &dataproc.SparkHistoryServerConfigArgs{
DataprocCluster: pulumi.String("string"),
},
},
},
JupyterSession: &dataproc.JupyterConfigArgs{
DisplayName: pulumi.String("string"),
Kernel: dataproc.JupyterConfigKernelKernelUnspecified,
},
Labels: pulumi.StringMap{
"string": pulumi.String("string"),
},
Location: pulumi.String("string"),
Name: pulumi.String("string"),
Project: pulumi.String("string"),
RequestId: pulumi.String("string"),
RuntimeConfig: &dataproc.RuntimeConfigArgs{
ContainerImage: pulumi.String("string"),
Properties: pulumi.StringMap{
"string": pulumi.String("string"),
},
RepositoryConfig: &dataproc.RepositoryConfigArgs{
PypiRepositoryConfig: &dataproc.PyPiRepositoryConfigArgs{
PypiRepository: pulumi.String("string"),
},
},
Version: pulumi.String("string"),
},
SessionTemplate: pulumi.String("string"),
User: pulumi.String("string"),
})
var sessionResource = new Session("sessionResource", SessionArgs.builder()
.sessionId("string")
.environmentConfig(EnvironmentConfigArgs.builder()
.executionConfig(ExecutionConfigArgs.builder()
.idleTtl("string")
.kmsKey("string")
.networkTags("string")
.networkUri("string")
.serviceAccount("string")
.stagingBucket("string")
.subnetworkUri("string")
.ttl("string")
.build())
.peripheralsConfig(PeripheralsConfigArgs.builder()
.metastoreService("string")
.sparkHistoryServerConfig(SparkHistoryServerConfigArgs.builder()
.dataprocCluster("string")
.build())
.build())
.build())
.jupyterSession(JupyterConfigArgs.builder()
.displayName("string")
.kernel("KERNEL_UNSPECIFIED")
.build())
.labels(Map.of("string", "string"))
.location("string")
.name("string")
.project("string")
.requestId("string")
.runtimeConfig(RuntimeConfigArgs.builder()
.containerImage("string")
.properties(Map.of("string", "string"))
.repositoryConfig(RepositoryConfigArgs.builder()
.pypiRepositoryConfig(PyPiRepositoryConfigArgs.builder()
.pypiRepository("string")
.build())
.build())
.version("string")
.build())
.sessionTemplate("string")
.user("string")
.build());
session_resource = google_native.dataproc.v1.Session("sessionResource",
session_id="string",
environment_config=google_native.dataproc.v1.EnvironmentConfigArgs(
execution_config=google_native.dataproc.v1.ExecutionConfigArgs(
idle_ttl="string",
kms_key="string",
network_tags=["string"],
network_uri="string",
service_account="string",
staging_bucket="string",
subnetwork_uri="string",
ttl="string",
),
peripherals_config=google_native.dataproc.v1.PeripheralsConfigArgs(
metastore_service="string",
spark_history_server_config=google_native.dataproc.v1.SparkHistoryServerConfigArgs(
dataproc_cluster="string",
),
),
),
jupyter_session=google_native.dataproc.v1.JupyterConfigArgs(
display_name="string",
kernel=google_native.dataproc.v1.JupyterConfigKernel.KERNEL_UNSPECIFIED,
),
labels={
"string": "string",
},
location="string",
name="string",
project="string",
request_id="string",
runtime_config=google_native.dataproc.v1.RuntimeConfigArgs(
container_image="string",
properties={
"string": "string",
},
repository_config=google_native.dataproc.v1.RepositoryConfigArgs(
pypi_repository_config=google_native.dataproc.v1.PyPiRepositoryConfigArgs(
pypi_repository="string",
),
),
version="string",
),
session_template="string",
user="string")
const sessionResource = new google_native.dataproc.v1.Session("sessionResource", {
sessionId: "string",
environmentConfig: {
executionConfig: {
idleTtl: "string",
kmsKey: "string",
networkTags: ["string"],
networkUri: "string",
serviceAccount: "string",
stagingBucket: "string",
subnetworkUri: "string",
ttl: "string",
},
peripheralsConfig: {
metastoreService: "string",
sparkHistoryServerConfig: {
dataprocCluster: "string",
},
},
},
jupyterSession: {
displayName: "string",
kernel: google_native.dataproc.v1.JupyterConfigKernel.KernelUnspecified,
},
labels: {
string: "string",
},
location: "string",
name: "string",
project: "string",
requestId: "string",
runtimeConfig: {
containerImage: "string",
properties: {
string: "string",
},
repositoryConfig: {
pypiRepositoryConfig: {
pypiRepository: "string",
},
},
version: "string",
},
sessionTemplate: "string",
user: "string",
});
type: google-native:dataproc/v1:Session
properties:
environmentConfig:
executionConfig:
idleTtl: string
kmsKey: string
networkTags:
- string
networkUri: string
serviceAccount: string
stagingBucket: string
subnetworkUri: string
ttl: string
peripheralsConfig:
metastoreService: string
sparkHistoryServerConfig:
dataprocCluster: string
jupyterSession:
displayName: string
kernel: KERNEL_UNSPECIFIED
labels:
string: string
location: string
name: string
project: string
requestId: string
runtimeConfig:
containerImage: string
properties:
string: string
repositoryConfig:
pypiRepositoryConfig:
pypiRepository: string
version: string
sessionId: string
sessionTemplate: string
user: string
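Beyond the placeholder reference above, a short TypeScript sketch shows more realistic values for a Jupyter-backed interactive session. The project, subnetwork, label, and session names here are illustrative assumptions, not values required by the API.
import * as google_native from "@pulumi/google-native";

// A Jupyter-backed interactive session; all concrete values are illustrative.
const notebookSession = new google_native.dataproc.v1.Session("notebookSession", {
    sessionId: "notebook-session", // becomes the final component of the resource name
    project: "my-project",
    location: "us-central1",
    jupyterSession: {
        displayName: "Ad-hoc PySpark notebook",
        kernel: google_native.dataproc.v1.JupyterConfigKernel.Python,
    },
    environmentConfig: {
        executionConfig: {
            subnetworkUri: "my-subnet",
            idleTtl: "3600s", // terminate after one hour of inactivity
        },
    },
    labels: {
        team: "data-science",
    },
});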
Session Resource Properties
To learn more about resource properties and how to use them, see Inputs and Outputs in the Architecture and Concepts docs.
Inputs
The Session resource accepts the following input properties:
- SessionId string
- Required. The ID to use for the session, which becomes the final component of the session's resource name. This value must be 4-63 characters. Valid characters are /a-z-/.
- EnvironmentConfig Pulumi.GoogleNative.Dataproc.V1.Inputs.EnvironmentConfig
- Optional. Environment configuration for the session execution.
- JupyterSession Pulumi.GoogleNative.Dataproc.V1.Inputs.JupyterConfig
- Optional. Jupyter session config.
- Labels Dictionary<string, string>
- Optional. The labels to associate with the session. Label keys must contain 1 to 63 characters, and must conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). Label values may be empty, but, if present, must contain 1 to 63 characters, and must conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). No more than 32 labels can be associated with a session.
- Location string
- Name string
- The resource name of the session.
- Project string
- RequestId string
- Optional. A unique ID used to identify the request. If the service receives two CreateSessionRequests (https://cloud.google.com/dataproc/docs/reference/rpc/google.cloud.dataproc.v1#google.cloud.dataproc.v1.CreateSessionRequest) with the same ID, the second request is ignored, and the first Session is created and stored in the backend. Recommendation: Set this value to a UUID (https://en.wikipedia.org/wiki/Universally_unique_identifier). The value must contain only letters (a-z, A-Z), numbers (0-9), underscores (_), and hyphens (-). The maximum length is 40 characters.
- RuntimeConfig Pulumi.GoogleNative.Dataproc.V1.Inputs.RuntimeConfig
- Optional. Runtime configuration for the session execution.
- SessionTemplate string
- Optional. The session template used by the session. Only resource names, including project ID and location, are valid. Example: * https://www.googleapis.com/compute/v1/projects/[project_id]/locations/[dataproc_region]/sessionTemplates/[template_id] * projects/[project_id]/locations/[dataproc_region]/sessionTemplates/[template_id]. The template must be in the same project and Dataproc region as the session.
- User string
- Optional. The email address of the user who owns the session.
- SessionId string
- Required. The ID to use for the session, which becomes the final component of the session's resource name. This value must be 4-63 characters. Valid characters are /a-z-/.
- EnvironmentConfig EnvironmentConfigArgs
- Optional. Environment configuration for the session execution.
- JupyterSession JupyterConfigArgs
- Optional. Jupyter session config.
- Labels map[string]string
- Optional. The labels to associate with the session. Label keys must contain 1 to 63 characters, and must conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). Label values may be empty, but, if present, must contain 1 to 63 characters, and must conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). No more than 32 labels can be associated with a session.
- Location string
- Name string
- The resource name of the session.
- Project string
- RequestId string
- Optional. A unique ID used to identify the request. If the service receives two CreateSessionRequests (https://cloud.google.com/dataproc/docs/reference/rpc/google.cloud.dataproc.v1#google.cloud.dataproc.v1.CreateSessionRequest) with the same ID, the second request is ignored, and the first Session is created and stored in the backend. Recommendation: Set this value to a UUID (https://en.wikipedia.org/wiki/Universally_unique_identifier). The value must contain only letters (a-z, A-Z), numbers (0-9), underscores (_), and hyphens (-). The maximum length is 40 characters.
- RuntimeConfig RuntimeConfigArgs
- Optional. Runtime configuration for the session execution.
- SessionTemplate string
- Optional. The session template used by the session. Only resource names, including project ID and location, are valid. Example: * https://www.googleapis.com/compute/v1/projects/[project_id]/locations/[dataproc_region]/sessionTemplates/[template_id] * projects/[project_id]/locations/[dataproc_region]/sessionTemplates/[template_id]. The template must be in the same project and Dataproc region as the session.
- User string
- Optional. The email address of the user who owns the session.
- sessionId String
- Required. The ID to use for the session, which becomes the final component of the session's resource name. This value must be 4-63 characters. Valid characters are /a-z-/.
- environmentConfig EnvironmentConfig
- Optional. Environment configuration for the session execution.
- jupyterSession JupyterConfig
- Optional. Jupyter session config.
- labels Map<String,String>
- Optional. The labels to associate with the session. Label keys must contain 1 to 63 characters, and must conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). Label values may be empty, but, if present, must contain 1 to 63 characters, and must conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). No more than 32 labels can be associated with a session.
- location String
- name String
- The resource name of the session.
- project String
- requestId String
- Optional. A unique ID used to identify the request. If the service receives two CreateSessionRequests (https://cloud.google.com/dataproc/docs/reference/rpc/google.cloud.dataproc.v1#google.cloud.dataproc.v1.CreateSessionRequest) with the same ID, the second request is ignored, and the first Session is created and stored in the backend. Recommendation: Set this value to a UUID (https://en.wikipedia.org/wiki/Universally_unique_identifier). The value must contain only letters (a-z, A-Z), numbers (0-9), underscores (_), and hyphens (-). The maximum length is 40 characters.
- runtimeConfig RuntimeConfig
- Optional. Runtime configuration for the session execution.
- sessionTemplate String
- Optional. The session template used by the session. Only resource names, including project ID and location, are valid. Example: * https://www.googleapis.com/compute/v1/projects/[project_id]/locations/[dataproc_region]/sessionTemplates/[template_id] * projects/[project_id]/locations/[dataproc_region]/sessionTemplates/[template_id]. The template must be in the same project and Dataproc region as the session.
- user String
- Optional. The email address of the user who owns the session.
- sessionId string
- Required. The ID to use for the session, which becomes the final component of the session's resource name. This value must be 4-63 characters. Valid characters are /a-z-/.
- environmentConfig EnvironmentConfig
- Optional. Environment configuration for the session execution.
- jupyterSession JupyterConfig
- Optional. Jupyter session config.
- labels {[key: string]: string}
- Optional. The labels to associate with the session. Label keys must contain 1 to 63 characters, and must conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). Label values may be empty, but, if present, must contain 1 to 63 characters, and must conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). No more than 32 labels can be associated with a session.
- location string
- name string
- The resource name of the session.
- project string
- requestId string
- Optional. A unique ID used to identify the request. If the service receives two CreateSessionRequests (https://cloud.google.com/dataproc/docs/reference/rpc/google.cloud.dataproc.v1#google.cloud.dataproc.v1.CreateSessionRequest) with the same ID, the second request is ignored, and the first Session is created and stored in the backend. Recommendation: Set this value to a UUID (https://en.wikipedia.org/wiki/Universally_unique_identifier). The value must contain only letters (a-z, A-Z), numbers (0-9), underscores (_), and hyphens (-). The maximum length is 40 characters.
- runtimeConfig RuntimeConfig
- Optional. Runtime configuration for the session execution.
- sessionTemplate string
- Optional. The session template used by the session. Only resource names, including project ID and location, are valid. Example: * https://www.googleapis.com/compute/v1/projects/[project_id]/locations/[dataproc_region]/sessionTemplates/[template_id] * projects/[project_id]/locations/[dataproc_region]/sessionTemplates/[template_id]. The template must be in the same project and Dataproc region as the session.
- user string
- Optional. The email address of the user who owns the session.
- session_id str
- Required. The ID to use for the session, which becomes the final component of the session's resource name. This value must be 4-63 characters. Valid characters are /a-z-/.
- environment_config EnvironmentConfigArgs
- Optional. Environment configuration for the session execution.
- jupyter_session JupyterConfigArgs
- Optional. Jupyter session config.
- labels Mapping[str, str]
- Optional. The labels to associate with the session. Label keys must contain 1 to 63 characters, and must conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). Label values may be empty, but, if present, must contain 1 to 63 characters, and must conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). No more than 32 labels can be associated with a session.
- location str
- name str
- The resource name of the session.
- project str
- request_id str
- Optional. A unique ID used to identify the request. If the service receives two CreateSessionRequests (https://cloud.google.com/dataproc/docs/reference/rpc/google.cloud.dataproc.v1#google.cloud.dataproc.v1.CreateSessionRequest) with the same ID, the second request is ignored, and the first Session is created and stored in the backend. Recommendation: Set this value to a UUID (https://en.wikipedia.org/wiki/Universally_unique_identifier). The value must contain only letters (a-z, A-Z), numbers (0-9), underscores (_), and hyphens (-). The maximum length is 40 characters.
- runtime_config RuntimeConfigArgs
- Optional. Runtime configuration for the session execution.
- session_template str
- Optional. The session template used by the session. Only resource names, including project ID and location, are valid. Example: * https://www.googleapis.com/compute/v1/projects/[project_id]/locations/[dataproc_region]/sessionTemplates/[template_id] * projects/[project_id]/locations/[dataproc_region]/sessionTemplates/[template_id]. The template must be in the same project and Dataproc region as the session.
- user str
- Optional. The email address of the user who owns the session.
- sessionId String
- Required. The ID to use for the session, which becomes the final component of the session's resource name. This value must be 4-63 characters. Valid characters are /a-z-/.
- environmentConfig Property Map
- Optional. Environment configuration for the session execution.
- jupyterSession Property Map
- Optional. Jupyter session config.
- labels Map<String>
- Optional. The labels to associate with the session. Label keys must contain 1 to 63 characters, and must conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). Label values may be empty, but, if present, must contain 1 to 63 characters, and must conform to RFC 1035 (https://www.ietf.org/rfc/rfc1035.txt). No more than 32 labels can be associated with a session.
- location String
- name String
- The resource name of the session.
- project String
- requestId String
- Optional. A unique ID used to identify the request. If the service receives two CreateSessionRequests (https://cloud.google.com/dataproc/docs/reference/rpc/google.cloud.dataproc.v1#google.cloud.dataproc.v1.CreateSessionRequest) with the same ID, the second request is ignored, and the first Session is created and stored in the backend. Recommendation: Set this value to a UUID (https://en.wikipedia.org/wiki/Universally_unique_identifier). The value must contain only letters (a-z, A-Z), numbers (0-9), underscores (_), and hyphens (-). The maximum length is 40 characters.
- runtimeConfig Property Map
- Optional. Runtime configuration for the session execution.
- sessionTemplate String
- Optional. The session template used by the session. Only resource names, including project ID and location, are valid. Example: * https://www.googleapis.com/compute/v1/projects/[project_id]/locations/[dataproc_region]/sessionTemplates/[template_id] * projects/[project_id]/locations/[dataproc_region]/sessionTemplates/[template_id]. The template must be in the same project and Dataproc region as the session.
- user String
- Optional. The email address of the user who owns the session.
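Because requestId deduplicates create calls, a common pattern is to set it to a UUID, as the description above recommends. A TypeScript sketch; the UUID and resource names are illustrative:
import * as google_native from "@pulumi/google-native";

// Retried creates that carry the same requestId are ignored by the service,
// so the session is created at most once.
const idempotentSession = new google_native.dataproc.v1.Session("idempotentSession", {
    sessionId: "idempotent-session",
    location: "us-central1",
    requestId: "b6e305f1-2f34-4b52-a0c2-1d6a6b1c9e7d", // any unique UUID, 40 chars max
});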
Outputs
All input properties are implicitly available as output properties. Additionally, the Session resource produces the following output properties:
- CreateTime string
- The time when the session was created.
- Creator string
- The email address of the user who created the session.
- Id string
- The provider-assigned unique ID for this managed resource.
- RuntimeInfo Pulumi.GoogleNative.Dataproc.V1.Outputs.RuntimeInfoResponse
- Runtime information about session execution.
- State string
- The state of the session.
- StateHistory List<Pulumi.GoogleNative.Dataproc.V1.Outputs.SessionStateHistoryResponse>
- Historical state information for the session.
- StateMessage string
- Session state details, such as the failure description if the state is FAILED.
- StateTime string
- The time when the session entered the current state.
- Uuid string
- A session UUID (Universally Unique Identifier). The service generates this value when it creates the session.
- CreateTime string
- The time when the session was created.
- Creator string
- The email address of the user who created the session.
- Id string
- The provider-assigned unique ID for this managed resource.
- RuntimeInfo RuntimeInfoResponse
- Runtime information about session execution.
- State string
- The state of the session.
- StateHistory []SessionStateHistoryResponse
- Historical state information for the session.
- StateMessage string
- Session state details, such as the failure description if the state is FAILED.
- StateTime string
- The time when the session entered the current state.
- Uuid string
- A session UUID (Universally Unique Identifier). The service generates this value when it creates the session.
- createTime String
- The time when the session was created.
- creator String
- The email address of the user who created the session.
- id String
- The provider-assigned unique ID for this managed resource.
- runtimeInfo RuntimeInfoResponse
- Runtime information about session execution.
- state String
- The state of the session.
- stateHistory List<SessionStateHistoryResponse>
- Historical state information for the session.
- stateMessage String
- Session state details, such as the failure description if the state is FAILED.
- stateTime String
- The time when the session entered the current state.
- uuid String
- A session UUID (Universally Unique Identifier). The service generates this value when it creates the session.
- createTime string
- The time when the session was created.
- creator string
- The email address of the user who created the session.
- id string
- The provider-assigned unique ID for this managed resource.
- runtimeInfo RuntimeInfoResponse
- Runtime information about session execution.
- state string
- The state of the session.
- stateHistory SessionStateHistoryResponse[]
- Historical state information for the session.
- stateMessage string
- Session state details, such as the failure description if the state is FAILED.
- stateTime string
- The time when the session entered the current state.
- uuid string
- A session UUID (Universally Unique Identifier). The service generates this value when it creates the session.
- create_time str
- The time when the session was created.
- creator str
- The email address of the user who created the session.
- id str
- The provider-assigned unique ID for this managed resource.
- runtime_info RuntimeInfoResponse
- Runtime information about session execution.
- state str
- The state of the session.
- state_history Sequence[SessionStateHistoryResponse]
- Historical state information for the session.
- state_message str
- Session state details, such as the failure description if the state is FAILED.
- state_time str
- The time when the session entered the current state.
- uuid str
- A session UUID (Universally Unique Identifier). The service generates this value when it creates the session.
- createTime String
- The time when the session was created.
- creator String
- The email address of the user who created the session.
- id String
- The provider-assigned unique ID for this managed resource.
- runtimeInfo Property Map
- Runtime information about session execution.
- state String
- The state of the session.
- stateHistory List<Property Map>
- Historical state information for the session.
- stateMessage String
- Session state details, such as the failure description if the state is FAILED.
- stateTime String
- The time when the session entered the current state.
- uuid String
- A session UUID (Universally Unique Identifier). The service generates this value when it creates the session.
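These outputs can be surfaced as stack outputs like any other Pulumi resource property. A TypeScript sketch, assuming a session declared as notebookSession in the earlier example:
// Server-generated values become available once the session is created.
export const sessionState = notebookSession.state; // session state, e.g. FAILED on error
export const sessionUuid = notebookSession.uuid; // service-generated UUID
export const sessionCreateTime = notebookSession.createTime;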
Supporting Types
EnvironmentConfig, EnvironmentConfigArgs
- ExecutionConfig Pulumi.GoogleNative.Dataproc.V1.Inputs.ExecutionConfig
- Optional. Execution configuration for a workload.
- PeripheralsConfig Pulumi.GoogleNative.Dataproc.V1.Inputs.PeripheralsConfig
- Optional. Peripherals configuration that the workload has access to.
- ExecutionConfig ExecutionConfig
- Optional. Execution configuration for a workload.
- PeripheralsConfig PeripheralsConfig
- Optional. Peripherals configuration that the workload has access to.
- executionConfig ExecutionConfig
- Optional. Execution configuration for a workload.
- peripheralsConfig PeripheralsConfig
- Optional. Peripherals configuration that the workload has access to.
- executionConfig ExecutionConfig
- Optional. Execution configuration for a workload.
- peripheralsConfig PeripheralsConfig
- Optional. Peripherals configuration that the workload has access to.
- execution_config ExecutionConfig
- Optional. Execution configuration for a workload.
- peripherals_config PeripheralsConfig
- Optional. Peripherals configuration that the workload has access to.
- executionConfig Property Map
- Optional. Execution configuration for a workload.
- peripheralsConfig Property Map
- Optional. Peripherals configuration that the workload has access to.
EnvironmentConfigResponse, EnvironmentConfigResponseArgs
- ExecutionConfig Pulumi.GoogleNative.Dataproc.V1.Inputs.ExecutionConfigResponse
- Optional. Execution configuration for a workload.
- PeripheralsConfig Pulumi.GoogleNative.Dataproc.V1.Inputs.PeripheralsConfigResponse
- Optional. Peripherals configuration that the workload has access to.
- ExecutionConfig ExecutionConfigResponse
- Optional. Execution configuration for a workload.
- PeripheralsConfig PeripheralsConfigResponse
- Optional. Peripherals configuration that the workload has access to.
- executionConfig ExecutionConfigResponse
- Optional. Execution configuration for a workload.
- peripheralsConfig PeripheralsConfigResponse
- Optional. Peripherals configuration that the workload has access to.
- executionConfig ExecutionConfigResponse
- Optional. Execution configuration for a workload.
- peripheralsConfig PeripheralsConfigResponse
- Optional. Peripherals configuration that the workload has access to.
- execution_config ExecutionConfigResponse
- Optional. Execution configuration for a workload.
- peripherals_config PeripheralsConfigResponse
- Optional. Peripherals configuration that the workload has access to.
- executionConfig Property Map
- Optional. Execution configuration for a workload.
- peripheralsConfig Property Map
- Optional. Peripherals configuration that the workload has access to.
ExecutionConfig, ExecutionConfigArgs
- IdleTtl string
- Optional. Applies to sessions only. The duration to keep the session alive while it's idling. Exceeding this threshold causes the session to terminate. This field cannot be set on a batch workload. Minimum value is 10 minutes; maximum value is 14 days (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)). Defaults to 1 hour if not set. If both ttl and idle_ttl are specified for an interactive session, the conditions are treated as OR conditions: the workload will be terminated when it has been idle for idle_ttl or when ttl has been exceeded, whichever occurs first.
- KmsKey string
- Optional. The Cloud KMS key to use for encryption.
- NetworkTags List<string>
- Optional. Tags used for network traffic control.
- NetworkUri string
- Optional. Network URI to connect workload to.
- ServiceAccount string
- Optional. Service account used to execute the workload.
- StagingBucket string
- Optional. A Cloud Storage bucket used to stage workload dependencies, config files, and store workload output and other ephemeral data, such as Spark history files. If you do not specify a staging bucket, Cloud Dataproc will determine a Cloud Storage location according to the region where your workload is running, and then create and manage project-level, per-location staging and temporary buckets. This field requires a Cloud Storage bucket name, not a gs://... URI to a Cloud Storage bucket.
- SubnetworkUri string
- Optional. Subnetwork URI to connect workload to.
- Ttl string
- Optional. The duration after which the workload will be terminated, specified as the JSON representation for Duration (https://protobuf.dev/programming-guides/proto3/#json). When the workload exceeds this duration, it will be unconditionally terminated without waiting for ongoing work to finish. If ttl is not specified for a batch workload, the workload will be allowed to run until it exits naturally (or run forever without exiting). If ttl is not specified for an interactive session, it defaults to 24 hours. If ttl is not specified for a batch that uses 2.1+ runtime version, it defaults to 4 hours. Minimum value is 10 minutes; maximum value is 14 days. If both ttl and idle_ttl are specified (for an interactive session), the conditions are treated as OR conditions: the workload will be terminated when it has been idle for idle_ttl or when ttl has been exceeded, whichever occurs first.
- IdleTtl string
- Optional. Applies to sessions only. The duration to keep the session alive while it's idling. Exceeding this threshold causes the session to terminate. This field cannot be set on a batch workload. Minimum value is 10 minutes; maximum value is 14 days (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)). Defaults to 1 hour if not set. If both ttl and idle_ttl are specified for an interactive session, the conditions are treated as OR conditions: the workload will be terminated when it has been idle for idle_ttl or when ttl has been exceeded, whichever occurs first.
- KmsKey string
- Optional. The Cloud KMS key to use for encryption.
- NetworkTags []string
- Optional. Tags used for network traffic control.
- NetworkUri string
- Optional. Network URI to connect workload to.
- ServiceAccount string
- Optional. Service account used to execute the workload.
- StagingBucket string
- Optional. A Cloud Storage bucket used to stage workload dependencies, config files, and store workload output and other ephemeral data, such as Spark history files. If you do not specify a staging bucket, Cloud Dataproc will determine a Cloud Storage location according to the region where your workload is running, and then create and manage project-level, per-location staging and temporary buckets. This field requires a Cloud Storage bucket name, not a gs://... URI to a Cloud Storage bucket.
- SubnetworkUri string
- Optional. Subnetwork URI to connect workload to.
- Ttl string
- Optional. The duration after which the workload will be terminated, specified as the JSON representation for Duration (https://protobuf.dev/programming-guides/proto3/#json). When the workload exceeds this duration, it will be unconditionally terminated without waiting for ongoing work to finish. If ttl is not specified for a batch workload, the workload will be allowed to run until it exits naturally (or run forever without exiting). If ttl is not specified for an interactive session, it defaults to 24 hours. If ttl is not specified for a batch that uses 2.1+ runtime version, it defaults to 4 hours. Minimum value is 10 minutes; maximum value is 14 days. If both ttl and idle_ttl are specified (for an interactive session), the conditions are treated as OR conditions: the workload will be terminated when it has been idle for idle_ttl or when ttl has been exceeded, whichever occurs first.
- idleTtl String
- Optional. Applies to sessions only. The duration to keep the session alive while it's idling. Exceeding this threshold causes the session to terminate. This field cannot be set on a batch workload. Minimum value is 10 minutes; maximum value is 14 days (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)). Defaults to 1 hour if not set. If both ttl and idle_ttl are specified for an interactive session, the conditions are treated as OR conditions: the workload will be terminated when it has been idle for idle_ttl or when ttl has been exceeded, whichever occurs first.
- kmsKey String
- Optional. The Cloud KMS key to use for encryption.
- networkTags List<String>
- Optional. Tags used for network traffic control.
- networkUri String
- Optional. Network URI to connect workload to.
- serviceAccount String
- Optional. Service account used to execute the workload.
- stagingBucket String
- Optional. A Cloud Storage bucket used to stage workload dependencies, config files, and store workload output and other ephemeral data, such as Spark history files. If you do not specify a staging bucket, Cloud Dataproc will determine a Cloud Storage location according to the region where your workload is running, and then create and manage project-level, per-location staging and temporary buckets. This field requires a Cloud Storage bucket name, not a gs://... URI to a Cloud Storage bucket.
- subnetworkUri String
- Optional. Subnetwork URI to connect workload to.
- ttl String
- Optional. The duration after which the workload will be terminated, specified as the JSON representation for Duration (https://protobuf.dev/programming-guides/proto3/#json). When the workload exceeds this duration, it will be unconditionally terminated without waiting for ongoing work to finish. If ttl is not specified for a batch workload, the workload will be allowed to run until it exits naturally (or run forever without exiting). If ttl is not specified for an interactive session, it defaults to 24 hours. If ttl is not specified for a batch that uses 2.1+ runtime version, it defaults to 4 hours. Minimum value is 10 minutes; maximum value is 14 days. If both ttl and idle_ttl are specified (for an interactive session), the conditions are treated as OR conditions: the workload will be terminated when it has been idle for idle_ttl or when ttl has been exceeded, whichever occurs first.
- idleTtl string
- Optional. Applies to sessions only. The duration to keep the session alive while it's idling. Exceeding this threshold causes the session to terminate. This field cannot be set on a batch workload. Minimum value is 10 minutes; maximum value is 14 days (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)). Defaults to 1 hour if not set. If both ttl and idle_ttl are specified for an interactive session, the conditions are treated as OR conditions: the workload will be terminated when it has been idle for idle_ttl or when ttl has been exceeded, whichever occurs first.
- kmsKey string
- Optional. The Cloud KMS key to use for encryption.
- networkTags string[]
- Optional. Tags used for network traffic control.
- networkUri string
- Optional. Network URI to connect workload to.
- serviceAccount string
- Optional. Service account used to execute the workload.
- stagingBucket string
- Optional. A Cloud Storage bucket used to stage workload dependencies, config files, and store workload output and other ephemeral data, such as Spark history files. If you do not specify a staging bucket, Cloud Dataproc will determine a Cloud Storage location according to the region where your workload is running, and then create and manage project-level, per-location staging and temporary buckets. This field requires a Cloud Storage bucket name, not a gs://... URI to a Cloud Storage bucket.
- subnetworkUri string
- Optional. Subnetwork URI to connect workload to.
- ttl string
- Optional. The duration after which the workload will be terminated, specified as the JSON representation for Duration (https://protobuf.dev/programming-guides/proto3/#json). When the workload exceeds this duration, it will be unconditionally terminated without waiting for ongoing work to finish. If ttl is not specified for a batch workload, the workload will be allowed to run until it exits naturally (or run forever without exiting). If ttl is not specified for an interactive session, it defaults to 24 hours. If ttl is not specified for a batch that uses 2.1+ runtime version, it defaults to 4 hours. Minimum value is 10 minutes; maximum value is 14 days. If both ttl and idle_ttl are specified (for an interactive session), the conditions are treated as OR conditions: the workload will be terminated when it has been idle for idle_ttl or when ttl has been exceeded, whichever occurs first.
- idle_ttl str
- Optional. Applies to sessions only. The duration to keep the session alive while it's idling. Exceeding this threshold causes the session to terminate. This field cannot be set on a batch workload. Minimum value is 10 minutes; maximum value is 14 days (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)). Defaults to 1 hour if not set. If both ttl and idle_ttl are specified for an interactive session, the conditions are treated as OR conditions: the workload will be terminated when it has been idle for idle_ttl or when ttl has been exceeded, whichever occurs first.
- kms_key str
- Optional. The Cloud KMS key to use for encryption.
- network_tags Sequence[str]
- Optional. Tags used for network traffic control.
- network_uri str
- Optional. Network URI to connect workload to.
- service_account str
- Optional. Service account used to execute the workload.
- staging_bucket str
- Optional. A Cloud Storage bucket used to stage workload dependencies, config files, and store workload output and other ephemeral data, such as Spark history files. If you do not specify a staging bucket, Cloud Dataproc will determine a Cloud Storage location according to the region where your workload is running, and then create and manage project-level, per-location staging and temporary buckets. This field requires a Cloud Storage bucket name, not a gs://... URI to a Cloud Storage bucket.
- subnetwork_uri str
- Optional. Subnetwork URI to connect workload to.
- ttl str
- Optional. The duration after which the workload will be terminated, specified as the JSON representation for Duration (https://protobuf.dev/programming-guides/proto3/#json). When the workload exceeds this duration, it will be unconditionally terminated without waiting for ongoing work to finish. If ttl is not specified for a batch workload, the workload will be allowed to run until it exits naturally (or run forever without exiting). If ttl is not specified for an interactive session, it defaults to 24 hours. If ttl is not specified for a batch that uses 2.1+ runtime version, it defaults to 4 hours. Minimum value is 10 minutes; maximum value is 14 days. If both ttl and idle_ttl are specified (for an interactive session), the conditions are treated as OR conditions: the workload will be terminated when it has been idle for idle_ttl or when ttl has been exceeded, whichever occurs first.
- idleTtl String
- Optional. Applies to sessions only. The duration to keep the session alive while it's idling. Exceeding this threshold causes the session to terminate. This field cannot be set on a batch workload. Minimum value is 10 minutes; maximum value is 14 days (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)). Defaults to 1 hour if not set. If both ttl and idle_ttl are specified for an interactive session, the conditions are treated as OR conditions: the workload will be terminated when it has been idle for idle_ttl or when ttl has been exceeded, whichever occurs first.
- kmsKey String
- Optional. The Cloud KMS key to use for encryption.
- networkTags List<String>
- Optional. Tags used for network traffic control.
- networkUri String
- Optional. Network URI to connect workload to.
- serviceAccount String
- Optional. Service account used to execute the workload.
- stagingBucket String
- Optional. A Cloud Storage bucket used to stage workload dependencies, config files, and store workload output and other ephemeral data, such as Spark history files. If you do not specify a staging bucket, Cloud Dataproc will determine a Cloud Storage location according to the region where your workload is running, and then create and manage project-level, per-location staging and temporary buckets. This field requires a Cloud Storage bucket name, not a gs://... URI to a Cloud Storage bucket.
- subnetworkUri String
- Optional. Subnetwork URI to connect workload to.
- ttl String
- Optional. The duration after which the workload will be terminated, specified as the JSON representation for Duration (https://protobuf.dev/programming-guides/proto3/#json). When the workload exceeds this duration, it will be unconditionally terminated without waiting for ongoing work to finish. If ttl is not specified for a batch workload, the workload will be allowed to run until it exits naturally (or run forever without exiting). If ttl is not specified for an interactive session, it defaults to 24 hours. If ttl is not specified for a batch that uses 2.1+ runtime version, it defaults to 4 hours. Minimum value is 10 minutes; maximum value is 14 days. If both ttl and idle_ttl are specified (for an interactive session), the conditions are treated as OR conditions: the workload will be terminated when it has been idle for idle_ttl or when ttl has been exceeded, whichever occurs first.
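To make the combined TTL semantics concrete, here is a TypeScript sketch with illustrative durations; the session below terminates after 30 idle minutes or 8 hours of total runtime, whichever occurs first:
import * as google_native from "@pulumi/google-native";

// Durations use the protobuf JSON form: a decimal number of seconds with an "s" suffix.
const boundedSession = new google_native.dataproc.v1.Session("boundedSession", {
    sessionId: "bounded-session",
    location: "us-central1",
    environmentConfig: {
        executionConfig: {
            idleTtl: "1800s",  // 30 minutes idle
            ttl: "28800s",     // 8 hours absolute
        },
    },
});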
ExecutionConfigResponse, ExecutionConfigResponseArgs
- IdleTtl string
- Optional. Applies to sessions only. The duration to keep the session alive while it's idling. Exceeding this threshold causes the session to terminate. This field cannot be set on a batch workload. Minimum value is 10 minutes; maximum value is 14 days (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)). Defaults to 1 hour if not set. If both ttl and idle_ttl are specified for an interactive session, the conditions are treated as OR conditions: the workload will be terminated when it has been idle for idle_ttl or when ttl has been exceeded, whichever occurs first.
- KmsKey string
- Optional. The Cloud KMS key to use for encryption.
- NetworkTags List<string>
- Optional. Tags used for network traffic control.
- NetworkUri string
- Optional. Network URI to connect workload to.
- ServiceAccount string
- Optional. Service account used to execute the workload.
- StagingBucket string
- Optional. A Cloud Storage bucket used to stage workload dependencies, config files, and store workload output and other ephemeral data, such as Spark history files. If you do not specify a staging bucket, Cloud Dataproc will determine a Cloud Storage location according to the region where your workload is running, and then create and manage project-level, per-location staging and temporary buckets. This field requires a Cloud Storage bucket name, not a gs://... URI to a Cloud Storage bucket.
- SubnetworkUri string
- Optional. Subnetwork URI to connect workload to.
- Ttl string
- Optional. The duration after which the workload will be terminated, specified as the JSON representation for Duration (https://protobuf.dev/programming-guides/proto3/#json). When the workload exceeds this duration, it will be unconditionally terminated without waiting for ongoing work to finish. If ttl is not specified for a batch workload, the workload will be allowed to run until it exits naturally (or run forever without exiting). If ttl is not specified for an interactive session, it defaults to 24 hours. If ttl is not specified for a batch that uses 2.1+ runtime version, it defaults to 4 hours. Minimum value is 10 minutes; maximum value is 14 days. If both ttl and idle_ttl are specified (for an interactive session), the conditions are treated as OR conditions: the workload will be terminated when it has been idle for idle_ttl or when ttl has been exceeded, whichever occurs first.
- IdleTtl string
- Optional. Applies to sessions only. The duration to keep the session alive while it's idling. Exceeding this threshold causes the session to terminate. This field cannot be set on a batch workload. Minimum value is 10 minutes; maximum value is 14 days (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)). Defaults to 1 hour if not set. If both ttl and idle_ttl are specified for an interactive session, the conditions are treated as OR conditions: the workload will be terminated when it has been idle for idle_ttl or when ttl has been exceeded, whichever occurs first.
- KmsKey string
- Optional. The Cloud KMS key to use for encryption.
- NetworkTags []string
- Optional. Tags used for network traffic control.
- NetworkUri string
- Optional. Network URI to connect workload to.
- ServiceAccount string
- Optional. Service account used to execute the workload.
- StagingBucket string
- Optional. A Cloud Storage bucket used to stage workload dependencies, config files, and store workload output and other ephemeral data, such as Spark history files. If you do not specify a staging bucket, Cloud Dataproc will determine a Cloud Storage location according to the region where your workload is running, and then create and manage project-level, per-location staging and temporary buckets. This field requires a Cloud Storage bucket name, not a gs://... URI to a Cloud Storage bucket.
- SubnetworkUri string
- Optional. Subnetwork URI to connect workload to.
- Ttl string
- Optional. The duration after which the workload will be terminated, specified as the JSON representation for Duration (https://protobuf.dev/programming-guides/proto3/#json). When the workload exceeds this duration, it will be unconditionally terminated without waiting for ongoing work to finish. If ttl is not specified for a batch workload, the workload will be allowed to run until it exits naturally (or run forever without exiting). If ttl is not specified for an interactive session, it defaults to 24 hours. If ttl is not specified for a batch that uses 2.1+ runtime version, it defaults to 4 hours. Minimum value is 10 minutes; maximum value is 14 days. If both ttl and idle_ttl are specified (for an interactive session), the conditions are treated as OR conditions: the workload will be terminated when it has been idle for idle_ttl or when ttl has been exceeded, whichever occurs first.
- idleTtl String
- Optional. Applies to sessions only. The duration to keep the session alive while it's idling. Exceeding this threshold causes the session to terminate. This field cannot be set on a batch workload. Minimum value is 10 minutes; maximum value is 14 days (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)). Defaults to 1 hour if not set. If both ttl and idle_ttl are specified for an interactive session, the conditions are treated as OR conditions: the workload will be terminated when it has been idle for idle_ttl or when ttl has been exceeded, whichever occurs first.
- kmsKey String
- Optional. The Cloud KMS key to use for encryption.
- networkTags List<String>
- Optional. Tags used for network traffic control.
- networkUri String
- Optional. Network URI to connect workload to.
- serviceAccount String
- Optional. Service account used to execute the workload.
- stagingBucket String
- Optional. A Cloud Storage bucket used to stage workload dependencies, config files, and store workload output and other ephemeral data, such as Spark history files. If you do not specify a staging bucket, Cloud Dataproc will determine a Cloud Storage location according to the region where your workload is running, and then create and manage project-level, per-location staging and temporary buckets. This field requires a Cloud Storage bucket name, not a gs://... URI to a Cloud Storage bucket.
- subnetworkUri String
- Optional. Subnetwork URI to connect workload to.
- ttl String
- Optional. The duration after which the workload will be terminated, specified as the JSON representation for Duration (https://protobuf.dev/programming-guides/proto3/#json). When the workload exceeds this duration, it will be unconditionally terminated without waiting for ongoing work to finish. If ttl is not specified for a batch workload, the workload will be allowed to run until it exits naturally (or run forever without exiting). If ttl is not specified for an interactive session, it defaults to 24 hours. If ttl is not specified for a batch that uses 2.1+ runtime version, it defaults to 4 hours. Minimum value is 10 minutes; maximum value is 14 days. If both ttl and idle_ttl are specified (for an interactive session), the conditions are treated as OR conditions: the workload will be terminated when it has been idle for idle_ttl or when ttl has been exceeded, whichever occurs first.
- idleTtl string
- Optional. Applies to sessions only. The duration to keep the session alive while it's idling. Exceeding this threshold causes the session to terminate. This field cannot be set on a batch workload. Minimum value is 10 minutes; maximum value is 14 days (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)). Defaults to 1 hour if not set. If both ttl and idle_ttl are specified for an interactive session, the conditions are treated as OR conditions: the workload will be terminated when it has been idle for idle_ttl or when ttl has been exceeded, whichever occurs first.
- kmsKey string
- Optional. The Cloud KMS key to use for encryption.
- networkTags string[]
- Optional. Tags used for network traffic control.
- networkUri string
- Optional. Network URI to connect workload to.
- serviceAccount string
- Optional. Service account used to execute the workload.
- stagingBucket string
- Optional. A Cloud Storage bucket used to stage workload dependencies, config files, and store workload output and other ephemeral data, such as Spark history files. If you do not specify a staging bucket, Cloud Dataproc will determine a Cloud Storage location according to the region where your workload is running, and then create and manage project-level, per-location staging and temporary buckets. This field requires a Cloud Storage bucket name, not a gs://... URI to a Cloud Storage bucket.
- subnetworkUri string
- Optional. Subnetwork URI to connect workload to.
- ttl string
- Optional. The duration after which the workload will be terminated, specified as the JSON representation for Duration (https://protobuf.dev/programming-guides/proto3/#json). When the workload exceeds this duration, it will be unconditionally terminated without waiting for ongoing work to finish. If ttl is not specified for a batch workload, the workload will be allowed to run until it exits naturally (or run forever without exiting). If ttl is not specified for an interactive session, it defaults to 24 hours. If ttl is not specified for a batch that uses 2.1+ runtime version, it defaults to 4 hours. Minimum value is 10 minutes; maximum value is 14 days. If both ttl and idle_ttl are specified (for an interactive session), the conditions are treated as OR conditions: the workload will be terminated when it has been idle for idle_ttl or when ttl has been exceeded, whichever occurs first.
- idle_ttl str
- Optional. Applies to sessions only. The duration to keep the session alive while it's idling. Exceeding this threshold causes the session to terminate. This field cannot be set on a batch workload. Minimum value is 10 minutes; maximum value is 14 days (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)). Defaults to 1 hour if not set. If both ttl and idle_ttl are specified for an interactive session, the conditions are treated as OR conditions: the workload will be terminated when it has been idle for idle_ttl or when ttl has been exceeded, whichever occurs first.
- kms_key str
- Optional. The Cloud KMS key to use for encryption.
- network_tags Sequence[str]
- Optional. Tags used for network traffic control.
- network_uri str
- Optional. Network URI to connect workload to.
- service_account str
- Optional. Service account used to execute the workload.
- staging_bucket str
- Optional. A Cloud Storage bucket used to stage workload dependencies, config files, and store workload output and other ephemeral data, such as Spark history files. If you do not specify a staging bucket, Cloud Dataproc will determine a Cloud Storage location according to the region where your workload is running, and then create and manage project-level, per-location staging and temporary buckets. This field requires a Cloud Storage bucket name, not a gs://... URI to a Cloud Storage bucket.
- subnetwork_uri str
- Optional. Subnetwork URI to connect workload to.
- ttl str
- Optional. The duration after which the workload will be terminated, specified as the JSON representation for Duration (https://protobuf.dev/programming-guides/proto3/#json). When the workload exceeds this duration, it will be unconditionally terminated without waiting for ongoing work to finish. If ttl is not specified for a batch workload, the workload will be allowed to run until it exits naturally (or run forever without exiting). If ttl is not specified for an interactive session, it defaults to 24 hours. If ttl is not specified for a batch that uses 2.1+ runtime version, it defaults to 4 hours. Minimum value is 10 minutes; maximum value is 14 days. If both ttl and idle_ttl are specified (for an interactive session), the conditions are treated as OR conditions: the workload will be terminated when it has been idle for idle_ttl or when ttl has been exceeded, whichever occurs first.
- idleTtl String
- Optional. Applies to sessions only. The duration to keep the session alive while it's idling. Exceeding this threshold causes the session to terminate. This field cannot be set on a batch workload. Minimum value is 10 minutes; maximum value is 14 days (see JSON representation of Duration (https://developers.google.com/protocol-buffers/docs/proto3#json)). Defaults to 1 hour if not set. If both ttl and idle_ttl are specified for an interactive session, the conditions are treated as OR conditions: the workload will be terminated when it has been idle for idle_ttl or when ttl has been exceeded, whichever occurs first.
- kmsKey String
- Optional. The Cloud KMS key to use for encryption.
- networkTags List<String>
- Optional. Tags used for network traffic control.
- networkUri String
- Optional. Network URI to connect workload to.
- serviceAccount String
- Optional. Service account used to execute the workload.
- stagingBucket String
- Optional. A Cloud Storage bucket used to stage workload dependencies, config files, and store workload output and other ephemeral data, such as Spark history files. If you do not specify a staging bucket, Cloud Dataproc will determine a Cloud Storage location according to the region where your workload is running, and then create and manage project-level, per-location staging and temporary buckets. This field requires a Cloud Storage bucket name, not a gs://... URI to a Cloud Storage bucket.
- subnetworkUri String
- Optional. Subnetwork URI to connect workload to.
- ttl String
- Optional. The duration after which the workload will be terminated, specified as the JSON representation for Duration (https://protobuf.dev/programming-guides/proto3/#json). When the workload exceeds this duration, it will be unconditionally terminated without waiting for ongoing work to finish. If ttl is not specified for a batch workload, the workload will be allowed to run until it exits naturally (or run forever without exiting). If ttl is not specified for an interactive session, it defaults to 24 hours. If ttl is not specified for a batch that uses 2.1+ runtime version, it defaults to 4 hours. Minimum value is 10 minutes; maximum value is 14 days. If both ttl and idle_ttl are specified (for an interactive session), the conditions are treated as OR conditions: the workload will be terminated when it has been idle for idle_ttl or when ttl has been exceeded, whichever occurs first.
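For example, the following Python sketch (the resource name, location, and durations are placeholders; it assumes the pulumi_google_native SDK whose signatures appear in the constructor section above) sets both idle_ttl and ttl, so the session is reclaimed after 2 idle hours or 8 total hours, whichever comes first:
import pulumi_google_native as google_native

# A minimal sketch of the OR semantics described above: the session
# terminates after 2 hours of idling or 8 hours of total runtime,
# whichever occurs first. Both durations use the JSON Duration form.
session = google_native.dataproc.v1.Session(
    "ttl-demo",
    session_id="ttl-demo",
    location="us-central1",
    jupyter_session=google_native.dataproc.v1.JupyterConfigArgs(
        kernel=google_native.dataproc.v1.JupyterConfigKernel.PYTHON,
    ),
    environment_config=google_native.dataproc.v1.EnvironmentConfigArgs(
        execution_config=google_native.dataproc.v1.ExecutionConfigArgs(
            idle_ttl="7200s",  # 2 hours; allowed range is 10 minutes to 14 days
            ttl="28800s",      # 8 hours of total lifetime
        ),
    ),
)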
JupyterConfig, JupyterConfigArgs
- DisplayName string
- Optional. Display name, shown in the Jupyter kernelspec card.
- Kernel Pulumi.GoogleNative.Dataproc.V1.JupyterConfigKernel
- Optional. Kernel
- DisplayName string
- Optional. Display name, shown in the Jupyter kernelspec card.
- Kernel JupyterConfigKernel
- Optional. Kernel
- displayName String
- Optional. Display name, shown in the Jupyter kernelspec card.
- kernel JupyterConfigKernel
- Optional. Kernel
- displayName string
- Optional. Display name, shown in the Jupyter kernelspec card.
- kernel JupyterConfigKernel
- Optional. Kernel
- display_name str
- Optional. Display name, shown in the Jupyter kernelspec card.
- kernel JupyterConfigKernel
- Optional. Kernel
- displayName String
- Optional. Display name, shown in the Jupyter kernelspec card.
- kernel "KERNEL_UNSPECIFIED" | "PYTHON" | "SCALA"
- Optional. Kernel
JupyterConfigKernel, JupyterConfigKernelArgs
- KernelUnspecified
- KERNEL_UNSPECIFIED: The kernel is unknown.
- Python
- PYTHON: Python kernel.
- Scala
- SCALA: Scala kernel.
- JupyterConfigKernelKernelUnspecified
- KERNEL_UNSPECIFIED: The kernel is unknown.
- JupyterConfigKernelPython
- PYTHON: Python kernel.
- JupyterConfigKernelScala
- SCALA: Scala kernel.
- KernelUnspecified
- KERNEL_UNSPECIFIED: The kernel is unknown.
- Python
- PYTHON: Python kernel.
- Scala
- SCALA: Scala kernel.
- KernelUnspecified
- KERNEL_UNSPECIFIED: The kernel is unknown.
- Python
- PYTHON: Python kernel.
- Scala
- SCALA: Scala kernel.
- KERNEL_UNSPECIFIED
- KERNEL_UNSPECIFIED: The kernel is unknown.
- PYTHON
- PYTHON: Python kernel.
- SCALA
- SCALA: Scala kernel.
- "KERNEL_UNSPECIFIED"
- KERNEL_UNSPECIFIED: The kernel is unknown.
- "PYTHON"
- PYTHON: Python kernel.
- "SCALA"
- SCALA: Scala kernel.
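A short Python sketch of selecting a kernel (the Python enum members are those listed above; the display name is an arbitrary placeholder):
import pulumi_google_native as google_native

# A sketch of a Jupyter session config; JupyterConfigKernel.PYTHON picks
# the Python kernel, JupyterConfigKernel.SCALA the Scala one.
jupyter = google_native.dataproc.v1.JupyterConfigArgs(
    display_name="ad-hoc analysis",  # shown in the Jupyter kernelspec card
    kernel=google_native.dataproc.v1.JupyterConfigKernel.PYTHON,
)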
JupyterConfigResponse, JupyterConfigResponseArgs
- DisplayName string
- Optional. Display name, shown in the Jupyter kernelspec card.
- Kernel string
- Optional. Kernel
- DisplayName string
- Optional. Display name, shown in the Jupyter kernelspec card.
- Kernel string
- Optional. Kernel
- displayName String
- Optional. Display name, shown in the Jupyter kernelspec card.
- kernel String
- Optional. Kernel
- displayName string
- Optional. Display name, shown in the Jupyter kernelspec card.
- kernel string
- Optional. Kernel
- display_name str
- Optional. Display name, shown in the Jupyter kernelspec card.
- kernel str
- Optional. Kernel
- displayName String
- Optional. Display name, shown in the Jupyter kernelspec card.
- kernel String
- Optional. Kernel
PeripheralsConfig, PeripheralsConfigArgs
- MetastoreService string
- Optional. Resource name of an existing Dataproc Metastore service. Example: projects/[project_id]/locations/[region]/services/[service_id]
- SparkHistoryServerConfig Pulumi.GoogleNative.Dataproc.V1.Inputs.SparkHistoryServerConfig
- Optional. The Spark History Server configuration for the workload.
- MetastoreService string
- Optional. Resource name of an existing Dataproc Metastore service. Example: projects/[project_id]/locations/[region]/services/[service_id]
- SparkHistoryServerConfig SparkHistoryServerConfig
- Optional. The Spark History Server configuration for the workload.
- metastoreService String
- Optional. Resource name of an existing Dataproc Metastore service. Example: projects/[project_id]/locations/[region]/services/[service_id]
- sparkHistoryServerConfig SparkHistoryServerConfig
- Optional. The Spark History Server configuration for the workload.
- metastoreService string
- Optional. Resource name of an existing Dataproc Metastore service. Example: projects/[project_id]/locations/[region]/services/[service_id]
- sparkHistoryServerConfig SparkHistoryServerConfig
- Optional. The Spark History Server configuration for the workload.
- metastore_service str
- Optional. Resource name of an existing Dataproc Metastore service. Example: projects/[project_id]/locations/[region]/services/[service_id]
- spark_history_server_config SparkHistoryServerConfig
- Optional. The Spark History Server configuration for the workload.
- metastoreService String
- Optional. Resource name of an existing Dataproc Metastore service. Example: projects/[project_id]/locations/[region]/services/[service_id]
- sparkHistoryServerConfig Property Map
- Optional. The Spark History Server configuration for the workload.
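For instance, a sketch attaching an existing Dataproc Metastore service to a session (the project, region, and service IDs are placeholders; the history-server half of this config is sketched after the SparkHistoryServerConfig section below):
import pulumi_google_native as google_native

# A sketch wiring a session to an existing Dataproc Metastore service;
# the resource name must follow the format documented above.
peripherals = google_native.dataproc.v1.PeripheralsConfigArgs(
    metastore_service=(
        "projects/my-project/locations/us-central1/services/my-metastore"
    ),
)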
PeripheralsConfigResponse, PeripheralsConfigResponseArgs
- MetastoreService string
- Optional. Resource name of an existing Dataproc Metastore service. Example: projects/[project_id]/locations/[region]/services/[service_id]
- SparkHistoryServerConfig Pulumi.GoogleNative.Dataproc.V1.Inputs.SparkHistoryServerConfigResponse
- Optional. The Spark History Server configuration for the workload.
- MetastoreService string
- Optional. Resource name of an existing Dataproc Metastore service. Example: projects/[project_id]/locations/[region]/services/[service_id]
- SparkHistoryServerConfig SparkHistoryServerConfigResponse
- Optional. The Spark History Server configuration for the workload.
- metastoreService String
- Optional. Resource name of an existing Dataproc Metastore service. Example: projects/[project_id]/locations/[region]/services/[service_id]
- sparkHistoryServerConfig SparkHistoryServerConfigResponse
- Optional. The Spark History Server configuration for the workload.
- metastoreService string
- Optional. Resource name of an existing Dataproc Metastore service. Example: projects/[project_id]/locations/[region]/services/[service_id]
- sparkHistoryServerConfig SparkHistoryServerConfigResponse
- Optional. The Spark History Server configuration for the workload.
- metastore_service str
- Optional. Resource name of an existing Dataproc Metastore service. Example: projects/[project_id]/locations/[region]/services/[service_id]
- spark_history_server_config SparkHistoryServerConfigResponse
- Optional. The Spark History Server configuration for the workload.
- metastoreService String
- Optional. Resource name of an existing Dataproc Metastore service. Example: projects/[project_id]/locations/[region]/services/[service_id]
- sparkHistoryServerConfig Property Map
- Optional. The Spark History Server configuration for the workload.
PyPiRepositoryConfig, PyPiRepositoryConfigArgs
- PypiRepository string
- Optional. PyPi repository address
- PypiRepository string
- Optional. PyPi repository address
- pypiRepository String
- Optional. PyPi repository address
- pypiRepository string
- Optional. PyPi repository address
- pypi_repository str
- Optional. PyPi repository address
- pypiRepository String
- Optional. PyPi repository address
PyPiRepositoryConfigResponse, PyPiRepositoryConfigResponseArgs
- PypiRepository string
- Optional. PyPi repository address
- PypiRepository string
- Optional. PyPi repository address
- pypiRepository String
- Optional. PyPi repository address
- pypiRepository string
- Optional. PyPi repository address
- pypi_repository str
- Optional. PyPi repository address
- pypiRepository String
- Optional. PyPi repository address
RepositoryConfig, RepositoryConfigArgs
- PypiRepositoryConfig Pulumi.GoogleNative.Dataproc.V1.Inputs.PyPiRepositoryConfig
- Optional. Configuration for PyPi repository.
- PypiRepositoryConfig PyPiRepositoryConfig
- Optional. Configuration for PyPi repository.
- pypiRepositoryConfig PyPiRepositoryConfig
- Optional. Configuration for PyPi repository.
- pypiRepositoryConfig PyPiRepositoryConfig
- Optional. Configuration for PyPi repository.
- pypi_repository_config PyPiRepositoryConfig
- Optional. Configuration for PyPi repository.
- pypiRepositoryConfig Property Map
- Optional. Configuration for PyPi repository.
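For example, a sketch pointing dependency installation at a private PyPI mirror (the mirror URL is a placeholder):
import pulumi_google_native as google_native

# A sketch of a dependency repository config; RepositoryConfigArgs wraps
# the PyPi-specific settings described in the sections above.
repos = google_native.dataproc.v1.RepositoryConfigArgs(
    pypi_repository_config=google_native.dataproc.v1.PyPiRepositoryConfigArgs(
        pypi_repository="https://pypi.example.com/simple/",  # placeholder mirror
    ),
)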
RepositoryConfigResponse, RepositoryConfigResponseArgs
- PypiRepositoryConfig Pulumi.GoogleNative.Dataproc.V1.Inputs.PyPiRepositoryConfigResponse
- Optional. Configuration for PyPi repository.
- PypiRepositoryConfig PyPiRepositoryConfigResponse
- Optional. Configuration for PyPi repository.
- pypiRepositoryConfig PyPiRepositoryConfigResponse
- Optional. Configuration for PyPi repository.
- pypiRepositoryConfig PyPiRepositoryConfigResponse
- Optional. Configuration for PyPi repository.
- pypi_repository_config PyPiRepositoryConfigResponse
- Optional. Configuration for PyPi repository.
- pypiRepositoryConfig Property Map
- Optional. Configuration for PyPi repository.
RuntimeConfig, RuntimeConfigArgs
- ContainerImage string
- Optional. Custom container image for the job runtime environment. If not specified, a default container image will be used.
- Properties Dictionary<string, string>
- Optional. A mapping of property names to values, which are used to configure workload execution.
- RepositoryConfig Pulumi.GoogleNative.Dataproc.V1.Inputs.RepositoryConfig
- Optional. Dependency repository configuration.
- Version string
- Optional. Version of the batch runtime.
- ContainerImage string
- Optional. Custom container image for the job runtime environment. If not specified, a default container image will be used.
- Properties map[string]string
- Optional. A mapping of property names to values, which are used to configure workload execution.
- RepositoryConfig RepositoryConfig
- Optional. Dependency repository configuration.
- Version string
- Optional. Version of the batch runtime.
- containerImage String
- Optional. Custom container image for the job runtime environment. If not specified, a default container image will be used.
- properties Map<String,String>
- Optional. A mapping of property names to values, which are used to configure workload execution.
- repositoryConfig RepositoryConfig
- Optional. Dependency repository configuration.
- version String
- Optional. Version of the batch runtime.
- containerImage string
- Optional. Custom container image for the job runtime environment. If not specified, a default container image will be used.
- properties {[key: string]: string}
- Optional. A mapping of property names to values, which are used to configure workload execution.
- repositoryConfig RepositoryConfig
- Optional. Dependency repository configuration.
- version string
- Optional. Version of the batch runtime.
- container_image str
- Optional. Custom container image for the job runtime environment. If not specified, a default container image will be used.
- properties Mapping[str, str]
- Optional. A mapping of property names to values, which are used to configure workload execution.
- repository_config RepositoryConfig
- Optional. Dependency repository configuration.
- version str
- Optional. Version of the batch runtime.
- containerImage String
- Optional. Custom container image for the job runtime environment. If not specified, a default container image will be used.
- properties Map<String>
- Optional. A mapping of property names to values, which are used to configure workload execution.
- repositoryConfig Property Map
- Optional. Dependency repository configuration.
- version String
- Optional. Version of the batch runtime.
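A sketch combining the runtime fields (the image URI, version string, and Spark properties below are illustrative placeholders, not required values):
import pulumi_google_native as google_native

# A sketch of a runtime config: pin the runtime version, swap in a custom
# container image, and pass Spark properties through to workload execution.
runtime = google_native.dataproc.v1.RuntimeConfigArgs(
    version="2.1",  # batch runtime version; a default is chosen if omitted
    container_image="us-docker.pkg.dev/my-project/images/custom-spark:latest",
    properties={
        "spark.executor.memory": "4g",
        "spark.dynamicAllocation.enabled": "true",
    },
)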
RuntimeConfigResponse, RuntimeConfigResponseArgs
- ContainerImage string
- Optional. Custom container image for the job runtime environment. If not specified, a default container image will be used.
- Properties Dictionary<string, string>
- Optional. A mapping of property names to values, which are used to configure workload execution.
- RepositoryConfig Pulumi.GoogleNative.Dataproc.V1.Inputs.RepositoryConfigResponse
- Optional. Dependency repository configuration.
- Version string
- Optional. Version of the batch runtime.
- ContainerImage string
- Optional. Custom container image for the job runtime environment. If not specified, a default container image will be used.
- Properties map[string]string
- Optional. A mapping of property names to values, which are used to configure workload execution.
- RepositoryConfig RepositoryConfigResponse
- Optional. Dependency repository configuration.
- Version string
- Optional. Version of the batch runtime.
- containerImage String
- Optional. Custom container image for the job runtime environment. If not specified, a default container image will be used.
- properties Map<String,String>
- Optional. A mapping of property names to values, which are used to configure workload execution.
- repositoryConfig RepositoryConfigResponse
- Optional. Dependency repository configuration.
- version String
- Optional. Version of the batch runtime.
- containerImage string
- Optional. Custom container image for the job runtime environment. If not specified, a default container image will be used.
- properties {[key: string]: string}
- Optional. A mapping of property names to values, which are used to configure workload execution.
- repositoryConfig RepositoryConfigResponse
- Optional. Dependency repository configuration.
- version string
- Optional. Version of the batch runtime.
- container_image str
- Optional. Custom container image for the job runtime environment. If not specified, a default container image will be used.
- properties Mapping[str, str]
- Optional. A mapping of property names to values, which are used to configure workload execution.
- repository_config RepositoryConfigResponse
- Optional. Dependency repository configuration.
- version str
- Optional. Version of the batch runtime.
- containerImage String
- Optional. Custom container image for the job runtime environment. If not specified, a default container image will be used.
- properties Map<String>
- Optional. A mapping of property names to values, which are used to configure workload execution.
- repositoryConfig Property Map
- Optional. Dependency repository configuration.
- version String
- Optional. Version of the batch runtime.
RuntimeInfoResponse, RuntimeInfoResponseArgs
- ApproximateUsage Pulumi.GoogleNative.Dataproc.V1.Inputs.UsageMetricsResponse
- Approximate workload resource usage, calculated when the workload completes (see Dataproc Serverless pricing (https://cloud.google.com/dataproc-serverless/pricing)). Note: This metric calculation may change in the future, for example, to capture cumulative workload resource consumption during workload execution (see the Dataproc Serverless release notes (https://cloud.google.com/dataproc-serverless/docs/release-notes) for announcements, changes, fixes and other Dataproc developments).
- CurrentUsage Pulumi.GoogleNative.Dataproc.V1.Inputs.UsageSnapshotResponse
- Snapshot of current workload resource usage.
- DiagnosticOutputUri string
- A URI pointing to the location of the diagnostics tarball.
- Endpoints Dictionary<string, string>
- Map of remote access endpoints (such as web interfaces and APIs) to their URIs.
- OutputUri string
- A URI pointing to the location of the stdout and stderr of the workload.
- ApproximateUsage UsageMetricsResponse
- Approximate workload resource usage, calculated when the workload completes (see Dataproc Serverless pricing (https://cloud.google.com/dataproc-serverless/pricing)). Note: This metric calculation may change in the future, for example, to capture cumulative workload resource consumption during workload execution (see the Dataproc Serverless release notes (https://cloud.google.com/dataproc-serverless/docs/release-notes) for announcements, changes, fixes and other Dataproc developments).
- CurrentUsage UsageSnapshotResponse
- Snapshot of current workload resource usage.
- DiagnosticOutputUri string
- A URI pointing to the location of the diagnostics tarball.
- Endpoints map[string]string
- Map of remote access endpoints (such as web interfaces and APIs) to their URIs.
- OutputUri string
- A URI pointing to the location of the stdout and stderr of the workload.
- approximateUsage UsageMetricsResponse
- Approximate workload resource usage, calculated when the workload completes (see Dataproc Serverless pricing (https://cloud.google.com/dataproc-serverless/pricing)). Note: This metric calculation may change in the future, for example, to capture cumulative workload resource consumption during workload execution (see the Dataproc Serverless release notes (https://cloud.google.com/dataproc-serverless/docs/release-notes) for announcements, changes, fixes and other Dataproc developments).
- currentUsage UsageSnapshotResponse
- Snapshot of current workload resource usage.
- diagnosticOutputUri String
- A URI pointing to the location of the diagnostics tarball.
- endpoints Map<String,String>
- Map of remote access endpoints (such as web interfaces and APIs) to their URIs.
- outputUri String
- A URI pointing to the location of the stdout and stderr of the workload.
- approximateUsage UsageMetricsResponse
- Approximate workload resource usage, calculated when the workload completes (see Dataproc Serverless pricing (https://cloud.google.com/dataproc-serverless/pricing)). Note: This metric calculation may change in the future, for example, to capture cumulative workload resource consumption during workload execution (see the Dataproc Serverless release notes (https://cloud.google.com/dataproc-serverless/docs/release-notes) for announcements, changes, fixes and other Dataproc developments).
- currentUsage UsageSnapshotResponse
- Snapshot of current workload resource usage.
- diagnosticOutputUri string
- A URI pointing to the location of the diagnostics tarball.
- endpoints {[key: string]: string}
- Map of remote access endpoints (such as web interfaces and APIs) to their URIs.
- outputUri string
- A URI pointing to the location of the stdout and stderr of the workload.
- approximate_usage UsageMetricsResponse
- Approximate workload resource usage, calculated when the workload completes (see Dataproc Serverless pricing (https://cloud.google.com/dataproc-serverless/pricing)). Note: This metric calculation may change in the future, for example, to capture cumulative workload resource consumption during workload execution (see the Dataproc Serverless release notes (https://cloud.google.com/dataproc-serverless/docs/release-notes) for announcements, changes, fixes and other Dataproc developments).
- current_usage UsageSnapshotResponse
- Snapshot of current workload resource usage.
- diagnostic_output_uri str
- A URI pointing to the location of the diagnostics tarball.
- endpoints Mapping[str, str]
- Map of remote access endpoints (such as web interfaces and APIs) to their URIs.
- output_uri str
- A URI pointing to the location of the stdout and stderr of the workload.
- approximateUsage Property Map
- Approximate workload resource usage, calculated when the workload completes (see Dataproc Serverless pricing (https://cloud.google.com/dataproc-serverless/pricing)). Note: This metric calculation may change in the future, for example, to capture cumulative workload resource consumption during workload execution (see the Dataproc Serverless release notes (https://cloud.google.com/dataproc-serverless/docs/release-notes) for announcements, changes, fixes and other Dataproc developments).
- currentUsage Property Map
- Snapshot of current workload resource usage.
- diagnosticOutputUri String
- A URI pointing to the location of the diagnostics tarball.
- endpoints Map<String>
- Map of remote access endpoints (such as web interfaces and APIs) to their URIs.
- outputUri String
- A URI pointing to the location of the stdout and stderr of the workload.
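These fields are output-only, so in a program you read them from the created resource rather than set them. A sketch under the assumption that the Session resource exposes a runtime_info output property (apply unwraps the nested response):
import pulumi
import pulumi_google_native as google_native

session = google_native.dataproc.v1.Session(
    "runtime-info-demo",
    session_id="runtime-info-demo",
    location="us-central1",
    jupyter_session=google_native.dataproc.v1.JupyterConfigArgs(
        kernel=google_native.dataproc.v1.JupyterConfigKernel.PYTHON,
    ),
)

# Surface where the workload's stdout/stderr land and its web endpoints.
pulumi.export("output_uri",
              session.runtime_info.apply(lambda ri: ri.output_uri))
pulumi.export("endpoints",
              session.runtime_info.apply(lambda ri: ri.endpoints))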
SessionStateHistoryResponse, SessionStateHistoryResponseArgs
- State string
- The state of the session at this point in the session history.
- StateMessage string
- Details about the state at this point in the session history.
- StateStartTime string
- The time when the session entered the historical state.
- State string
- The state of the session at this point in the session history.
- StateMessage string
- Details about the state at this point in the session history.
- StateStartTime string
- The time when the session entered the historical state.
- state String
- The state of the session at this point in the session history.
- stateMessage String
- Details about the state at this point in the session history.
- stateStartTime String
- The time when the session entered the historical state.
- state string
- The state of the session at this point in the session history.
- stateMessage string
- Details about the state at this point in the session history.
- stateStartTime string
- The time when the session entered the historical state.
- state str
- The state of the session at this point in the session history.
- state_message str
- Details about the state at this point in the session history.
- state_start_time str
- The time when the session entered the historical state.
- state String
- The state of the session at this point in the session history.
- stateMessage String
- Details about the state at this point in the session history.
- stateStartTime String
- The time when the session entered the historical state.
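Reusing the session resource from the runtime-info sketch above, the lifecycle can be surfaced the same way (this assumes the Session resource exposes state and state_history output properties):
import pulumi

# Each history entry records a state, an explanatory message, and the
# time the session entered that state.
pulumi.export("current_state", session.state)
pulumi.export("state_history", session.state_history)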
SparkHistoryServerConfig, SparkHistoryServerConfigArgs
- DataprocCluster string
- Optional. Resource name of an existing Dataproc Cluster to act as a Spark History Server for the workload. Example: projects/[project_id]/regions/[region]/clusters/[cluster_name]
- DataprocCluster string
- Optional. Resource name of an existing Dataproc Cluster to act as a Spark History Server for the workload. Example: projects/[project_id]/regions/[region]/clusters/[cluster_name]
- dataprocCluster String
- Optional. Resource name of an existing Dataproc Cluster to act as a Spark History Server for the workload. Example: projects/[project_id]/regions/[region]/clusters/[cluster_name]
- dataprocCluster string
- Optional. Resource name of an existing Dataproc Cluster to act as a Spark History Server for the workload. Example: projects/[project_id]/regions/[region]/clusters/[cluster_name]
- dataproc_cluster str
- Optional. Resource name of an existing Dataproc Cluster to act as a Spark History Server for the workload. Example: projects/[project_id]/regions/[region]/clusters/[cluster_name]
- dataprocCluster String
- Optional. Resource name of an existing Dataproc Cluster to act as a Spark History Server for the workload. Example: projects/[project_id]/regions/[region]/clusters/[cluster_name]
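Completing the peripherals sketch from earlier, a hypothetical history-server attachment (the cluster name is a placeholder and must refer to an existing cluster in the documented format):
import pulumi_google_native as google_native

# A sketch pointing the session's Spark history at an existing cluster
# acting as a persistent Spark History Server.
history = google_native.dataproc.v1.SparkHistoryServerConfigArgs(
    dataproc_cluster=(
        "projects/my-project/regions/us-central1/clusters/history-server"
    ),
)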
SparkHistoryServerConfigResponse, SparkHistoryServerConfigResponseArgs
- DataprocCluster string
- Optional. Resource name of an existing Dataproc Cluster to act as a Spark History Server for the workload. Example: projects/[project_id]/regions/[region]/clusters/[cluster_name]
- DataprocCluster string
- Optional. Resource name of an existing Dataproc Cluster to act as a Spark History Server for the workload. Example: projects/[project_id]/regions/[region]/clusters/[cluster_name]
- dataprocCluster String
- Optional. Resource name of an existing Dataproc Cluster to act as a Spark History Server for the workload. Example: projects/[project_id]/regions/[region]/clusters/[cluster_name]
- dataprocCluster string
- Optional. Resource name of an existing Dataproc Cluster to act as a Spark History Server for the workload. Example: projects/[project_id]/regions/[region]/clusters/[cluster_name]
- dataproc_cluster str
- Optional. Resource name of an existing Dataproc Cluster to act as a Spark History Server for the workload. Example: projects/[project_id]/regions/[region]/clusters/[cluster_name]
- dataprocCluster String
- Optional. Resource name of an existing Dataproc Cluster to act as a Spark History Server for the workload. Example: projects/[project_id]/regions/[region]/clusters/[cluster_name]
UsageMetricsResponse, UsageMetricsResponseArgs
- AcceleratorType string
- Optional. Accelerator type being used, if any.
- MilliAcceleratorSeconds string
- Optional. Accelerator usage in (milliAccelerator x seconds) (see Dataproc Serverless pricing (https://cloud.google.com/dataproc-serverless/pricing)).
- MilliDcuSeconds string
- Optional. DCU (Dataproc Compute Units) usage in (milliDCU x seconds) (see Dataproc Serverless pricing (https://cloud.google.com/dataproc-serverless/pricing)).
- ShuffleStorageGbSeconds string
- Optional. Shuffle storage usage in (GB x seconds) (see Dataproc Serverless pricing (https://cloud.google.com/dataproc-serverless/pricing)).
- AcceleratorType string
- Optional. Accelerator type being used, if any.
- MilliAcceleratorSeconds string
- Optional. Accelerator usage in (milliAccelerator x seconds) (see Dataproc Serverless pricing (https://cloud.google.com/dataproc-serverless/pricing)).
- MilliDcuSeconds string
- Optional. DCU (Dataproc Compute Units) usage in (milliDCU x seconds) (see Dataproc Serverless pricing (https://cloud.google.com/dataproc-serverless/pricing)).
- ShuffleStorageGbSeconds string
- Optional. Shuffle storage usage in (GB x seconds) (see Dataproc Serverless pricing (https://cloud.google.com/dataproc-serverless/pricing)).
- acceleratorType String
- Optional. Accelerator type being used, if any.
- milliAcceleratorSeconds String
- Optional. Accelerator usage in (milliAccelerator x seconds) (see Dataproc Serverless pricing (https://cloud.google.com/dataproc-serverless/pricing)).
- milliDcuSeconds String
- Optional. DCU (Dataproc Compute Units) usage in (milliDCU x seconds) (see Dataproc Serverless pricing (https://cloud.google.com/dataproc-serverless/pricing)).
- shuffleStorageGbSeconds String
- Optional. Shuffle storage usage in (GB x seconds) (see Dataproc Serverless pricing (https://cloud.google.com/dataproc-serverless/pricing)).
- acceleratorType string
- Optional. Accelerator type being used, if any.
- milliAcceleratorSeconds string
- Optional. Accelerator usage in (milliAccelerator x seconds) (see Dataproc Serverless pricing (https://cloud.google.com/dataproc-serverless/pricing)).
- milliDcuSeconds string
- Optional. DCU (Dataproc Compute Units) usage in (milliDCU x seconds) (see Dataproc Serverless pricing (https://cloud.google.com/dataproc-serverless/pricing)).
- shuffleStorageGbSeconds string
- Optional. Shuffle storage usage in (GB x seconds) (see Dataproc Serverless pricing (https://cloud.google.com/dataproc-serverless/pricing)).
- accelerator_type str
- Optional. Accelerator type being used, if any.
- milli_accelerator_seconds str
- Optional. Accelerator usage in (milliAccelerator x seconds) (see Dataproc Serverless pricing (https://cloud.google.com/dataproc-serverless/pricing)).
- milli_dcu_seconds str
- Optional. DCU (Dataproc Compute Units) usage in (milliDCU x seconds) (see Dataproc Serverless pricing (https://cloud.google.com/dataproc-serverless/pricing)).
- shuffle_storage_gb_seconds str
- Optional. Shuffle storage usage in (GB x seconds) (see Dataproc Serverless pricing (https://cloud.google.com/dataproc-serverless/pricing)).
- acceleratorType String
- Optional. Accelerator type being used, if any.
- milliAcceleratorSeconds String
- Optional. Accelerator usage in (milliAccelerator x seconds) (see Dataproc Serverless pricing (https://cloud.google.com/dataproc-serverless/pricing)).
- milliDcuSeconds String
- Optional. DCU (Dataproc Compute Units) usage in (milliDCU x seconds) (see Dataproc Serverless pricing (https://cloud.google.com/dataproc-serverless/pricing)).
- shuffleStorageGbSeconds String
- Optional. Shuffle storage usage in (GB x seconds) (see Dataproc Serverless pricing (https://cloud.google.com/dataproc-serverless/pricing)).
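Because the usage fields are string-encoded numbers, converting them for reporting takes a small transform. A sketch, again reusing the session resource from the runtime-info example above (approximate_usage is only populated once the workload completes, and the nested attribute names are assumed from the Python listing above):
import pulumi

def dcu_hours(metrics):
    # milli_dcu_seconds is a string-encoded integer; convert milliDCU x
    # seconds into whole DCU-hours for a rough usage figure.
    if metrics is None or metrics.milli_dcu_seconds is None:
        return 0.0
    return int(metrics.milli_dcu_seconds) / 1000.0 / 3600.0

pulumi.export(
    "approx_dcu_hours",
    session.runtime_info.apply(lambda ri: dcu_hours(ri.approximate_usage)),
)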
UsageSnapshotResponse, UsageSnapshotResponseArgs
- AcceleratorType string
- Optional. Accelerator type being used, if any.
- MilliAccelerator string
- Optional. Milli (one-thousandth) accelerator (see Dataproc Serverless pricing (https://cloud.google.com/dataproc-serverless/pricing)).
- MilliDcu string
- Optional. Milli (one-thousandth) Dataproc Compute Units (DCUs) (see Dataproc Serverless pricing (https://cloud.google.com/dataproc-serverless/pricing)).
- MilliDcuPremium string
- Optional. Milli (one-thousandth) Dataproc Compute Units (DCUs) charged at premium tier (see Dataproc Serverless pricing (https://cloud.google.com/dataproc-serverless/pricing)).
- ShuffleStorageGb string
- Optional. Shuffle Storage in gigabytes (GB) (see Dataproc Serverless pricing (https://cloud.google.com/dataproc-serverless/pricing)).
- ShuffleStorageGbPremium string
- Optional. Shuffle Storage in gigabytes (GB) charged at premium tier (see Dataproc Serverless pricing (https://cloud.google.com/dataproc-serverless/pricing)).
- SnapshotTime string
- Optional. The timestamp of the usage snapshot.
- AcceleratorType string
- Optional. Accelerator type being used, if any.
- MilliAccelerator string
- Optional. Milli (one-thousandth) accelerator (see Dataproc Serverless pricing (https://cloud.google.com/dataproc-serverless/pricing)).
- MilliDcu string
- Optional. Milli (one-thousandth) Dataproc Compute Units (DCUs) (see Dataproc Serverless pricing (https://cloud.google.com/dataproc-serverless/pricing)).
- MilliDcuPremium string
- Optional. Milli (one-thousandth) Dataproc Compute Units (DCUs) charged at premium tier (see Dataproc Serverless pricing (https://cloud.google.com/dataproc-serverless/pricing)).
- ShuffleStorageGb string
- Optional. Shuffle Storage in gigabytes (GB) (see Dataproc Serverless pricing (https://cloud.google.com/dataproc-serverless/pricing)).
- ShuffleStorageGbPremium string
- Optional. Shuffle Storage in gigabytes (GB) charged at premium tier (see Dataproc Serverless pricing (https://cloud.google.com/dataproc-serverless/pricing)).
- SnapshotTime string
- Optional. The timestamp of the usage snapshot.
- acceleratorType String
- Optional. Accelerator type being used, if any.
- milliAccelerator String
- Optional. Milli (one-thousandth) accelerator (see Dataproc Serverless pricing (https://cloud.google.com/dataproc-serverless/pricing)).
- milliDcu String
- Optional. Milli (one-thousandth) Dataproc Compute Units (DCUs) (see Dataproc Serverless pricing (https://cloud.google.com/dataproc-serverless/pricing)).
- milliDcuPremium String
- Optional. Milli (one-thousandth) Dataproc Compute Units (DCUs) charged at premium tier (see Dataproc Serverless pricing (https://cloud.google.com/dataproc-serverless/pricing)).
- shuffleStorageGb String
- Optional. Shuffle Storage in gigabytes (GB) (see Dataproc Serverless pricing (https://cloud.google.com/dataproc-serverless/pricing)).
- shuffleStorageGbPremium String
- Optional. Shuffle Storage in gigabytes (GB) charged at premium tier (see Dataproc Serverless pricing (https://cloud.google.com/dataproc-serverless/pricing)).
- snapshotTime String
- Optional. The timestamp of the usage snapshot.
- acceleratorType string
- Optional. Accelerator type being used, if any.
- milliAccelerator string
- Optional. Milli (one-thousandth) accelerator (see Dataproc Serverless pricing (https://cloud.google.com/dataproc-serverless/pricing)).
- milliDcu string
- Optional. Milli (one-thousandth) Dataproc Compute Units (DCUs) (see Dataproc Serverless pricing (https://cloud.google.com/dataproc-serverless/pricing)).
- milliDcuPremium string
- Optional. Milli (one-thousandth) Dataproc Compute Units (DCUs) charged at premium tier (see Dataproc Serverless pricing (https://cloud.google.com/dataproc-serverless/pricing)).
- shuffleStorageGb string
- Optional. Shuffle Storage in gigabytes (GB) (see Dataproc Serverless pricing (https://cloud.google.com/dataproc-serverless/pricing)).
- shuffleStorageGbPremium string
- Optional. Shuffle Storage in gigabytes (GB) charged at premium tier (see Dataproc Serverless pricing (https://cloud.google.com/dataproc-serverless/pricing)).
- snapshotTime string
- Optional. The timestamp of the usage snapshot.
- accelerator_type str
- Optional. Accelerator type being used, if any.
- milli_accelerator str
- Optional. Milli (one-thousandth) accelerator (see Dataproc Serverless pricing (https://cloud.google.com/dataproc-serverless/pricing)).
- milli_dcu str
- Optional. Milli (one-thousandth) Dataproc Compute Units (DCUs) (see Dataproc Serverless pricing (https://cloud.google.com/dataproc-serverless/pricing)).
- milli_dcu_premium str
- Optional. Milli (one-thousandth) Dataproc Compute Units (DCUs) charged at premium tier (see Dataproc Serverless pricing (https://cloud.google.com/dataproc-serverless/pricing)).
- shuffle_storage_gb str
- Optional. Shuffle Storage in gigabytes (GB) (see Dataproc Serverless pricing (https://cloud.google.com/dataproc-serverless/pricing)).
- shuffle_storage_gb_premium str
- Optional. Shuffle Storage in gigabytes (GB) charged at premium tier (see Dataproc Serverless pricing (https://cloud.google.com/dataproc-serverless/pricing)).
- snapshot_time str
- Optional. The timestamp of the usage snapshot.
- acceleratorType String
- Optional. Accelerator type being used, if any.
- milliAccelerator String
- Optional. Milli (one-thousandth) accelerator (see Dataproc Serverless pricing (https://cloud.google.com/dataproc-serverless/pricing)).
- milliDcu String
- Optional. Milli (one-thousandth) Dataproc Compute Units (DCUs) (see Dataproc Serverless pricing (https://cloud.google.com/dataproc-serverless/pricing)).
- milliDcuPremium String
- Optional. Milli (one-thousandth) Dataproc Compute Units (DCUs) charged at premium tier (see Dataproc Serverless pricing (https://cloud.google.com/dataproc-serverless/pricing)).
- shuffleStorageGb String
- Optional. Shuffle Storage in gigabytes (GB) (see Dataproc Serverless pricing (https://cloud.google.com/dataproc-serverless/pricing)).
- shuffleStorageGbPremium String
- Optional. Shuffle Storage in gigabytes (GB) charged at premium tier (see Dataproc Serverless pricing (https://cloud.google.com/dataproc-serverless/pricing)).
- snapshotTime String
- Optional. The timestamp of the usage snapshot.
Package Details
- Repository
- Google Cloud Native pulumi/pulumi-google-native
- License
- Apache-2.0