google-native.aiplatform/v1.getIndexEndpoint

Google Cloud Native is in preview. Google Cloud Classic is fully supported.

Google Cloud Native v0.32.0 published on Wednesday, Nov 29, 2023 by Pulumi

    Gets an IndexEndpoint.

    Using getIndexEndpoint

    Two invocation forms are available. The direct form accepts plain arguments and either blocks until the result value is available, or returns a Promise-wrapped result. The output form accepts Input-wrapped arguments and returns an Output-wrapped result.

    function getIndexEndpoint(args: GetIndexEndpointArgs, opts?: InvokeOptions): Promise<GetIndexEndpointResult>
    function getIndexEndpointOutput(args: GetIndexEndpointOutputArgs, opts?: InvokeOptions): Output<GetIndexEndpointResult>
    def get_index_endpoint(index_endpoint_id: Optional[str] = None,
                           location: Optional[str] = None,
                           project: Optional[str] = None,
                           opts: Optional[InvokeOptions] = None) -> GetIndexEndpointResult
    def get_index_endpoint_output(index_endpoint_id: Optional[pulumi.Input[str]] = None,
                                  location: Optional[pulumi.Input[str]] = None,
                                  project: Optional[pulumi.Input[str]] = None,
                                  opts: Optional[InvokeOptions] = None) -> Output[GetIndexEndpointResult]
    func LookupIndexEndpoint(ctx *Context, args *LookupIndexEndpointArgs, opts ...InvokeOption) (*LookupIndexEndpointResult, error)
    func LookupIndexEndpointOutput(ctx *Context, args *LookupIndexEndpointOutputArgs, opts ...InvokeOption) LookupIndexEndpointResultOutput

    > Note: This function is named LookupIndexEndpoint in the Go SDK.

    public static class GetIndexEndpoint 
    {
        public static Task<GetIndexEndpointResult> InvokeAsync(GetIndexEndpointArgs args, InvokeOptions? opts = null)
        public static Output<GetIndexEndpointResult> Invoke(GetIndexEndpointInvokeArgs args, InvokeOptions? opts = null)
    }
    public static CompletableFuture<GetIndexEndpointResult> getIndexEndpoint(GetIndexEndpointArgs args, InvokeOptions options)
    // Output-based functions aren't available in Java yet
    
    fn::invoke:
      function: google-native:aiplatform/v1:getIndexEndpoint
      arguments:
        # arguments dictionary
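
    For orientation, here is a minimal TypeScript sketch of both invocation forms shown in the signatures above. The endpoint ID, location, and project values are hypothetical placeholders.

    import * as pulumi from "@pulumi/pulumi";
    import * as google_native from "@pulumi/google-native";

    // Direct form: plain arguments, Promise-wrapped result.
    const endpoint = google_native.aiplatform.v1.getIndexEndpoint({
        indexEndpointId: "1234567890123456789", // hypothetical endpoint ID
        location: "us-central1",
        project: "my-gcp-project",              // hypothetical project ID
    });
    export const displayName = endpoint.then(e => e.displayName);

    // Output form: Input-wrapped arguments, Output-wrapped result with lifted properties.
    const endpointOutput = google_native.aiplatform.v1.getIndexEndpointOutput({
        indexEndpointId: pulumi.output("1234567890123456789"),
        location: "us-central1",
        project: "my-gcp-project",
    });
    export const publicDomain = endpointOutput.publicEndpointDomainName;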

    The following arguments are supported:

    IndexEndpointId string
    Location string
    Project string
    IndexEndpointId string
    Location string
    Project string
    indexEndpointId String
    location String
    project String
    indexEndpointId string
    location string
    project string
    indexEndpointId String
    location String
    project String

    getIndexEndpoint Result

    The following output properties are available:

    CreateTime string
    Timestamp when this IndexEndpoint was created.
    DeployedIndexes List<Pulumi.GoogleNative.Aiplatform.V1.Outputs.GoogleCloudAiplatformV1DeployedIndexResponse>
    The indexes deployed in this endpoint.
    Description string
    The description of the IndexEndpoint.
    DisplayName string
    The display name of the IndexEndpoint. The name can be up to 128 characters long and can consist of any UTF-8 characters.
    EnablePrivateServiceConnect bool
    Optional. Deprecated: If true, expose the IndexEndpoint via private service connect. Only one of the fields, network or enable_private_service_connect, can be set.
    EncryptionSpec Pulumi.GoogleNative.Aiplatform.V1.Outputs.GoogleCloudAiplatformV1EncryptionSpecResponse
    Immutable. Customer-managed encryption key spec for an IndexEndpoint. If set, this IndexEndpoint and all sub-resources of this IndexEndpoint will be secured by this key.
    Etag string
    Used to perform consistent read-modify-write updates. If not set, a blind "overwrite" update happens.
    Labels Dictionary<string, string>
    The labels with user-defined metadata to organize your IndexEndpoints. Label keys and values can be no longer than 64 characters (Unicode codepoints), can only contain lowercase letters, numeric characters, underscores and dashes. International characters are allowed. See https://goo.gl/xmQnxf for more information and examples of labels.
    Name string
    The resource name of the IndexEndpoint.
    Network string
    Optional. The full name of the Google Compute Engine network to which the IndexEndpoint should be peered. Private services access must already be configured for the network. If left unspecified, the Endpoint is not peered with any network. network and private_service_connect_config are mutually exclusive. Format: projects/{project}/global/networks/{network}, where {project} is a project number, as in '12345', and {network} is a network name.
    PrivateServiceConnectConfig Pulumi.GoogleNative.Aiplatform.V1.Outputs.GoogleCloudAiplatformV1PrivateServiceConnectConfigResponse
    Optional. Configuration for private service connect. network and private_service_connect_config are mutually exclusive.
    PublicEndpointDomainName string
    If public_endpoint_enabled is true, this field will be populated with the domain name to use for this index endpoint.
    PublicEndpointEnabled bool
    Optional. If true, the deployed index will be accessible through public endpoint.
    UpdateTime string
    Timestamp when this IndexEndpoint was last updated. This timestamp is not updated when the endpoint's DeployedIndexes are updated, e.g. due to updates of the original Indexes they are the deployments of.
    CreateTime string
    Timestamp when this IndexEndpoint was created.
    DeployedIndexes []GoogleCloudAiplatformV1DeployedIndexResponse
    The indexes deployed in this endpoint.
    Description string
    The description of the IndexEndpoint.
    DisplayName string
    The display name of the IndexEndpoint. The name can be up to 128 characters long and can consist of any UTF-8 characters.
    EnablePrivateServiceConnect bool
    Optional. Deprecated: If true, expose the IndexEndpoint via private service connect. Only one of the fields, network or enable_private_service_connect, can be set.
    EncryptionSpec GoogleCloudAiplatformV1EncryptionSpecResponse
    Immutable. Customer-managed encryption key spec for an IndexEndpoint. If set, this IndexEndpoint and all sub-resources of this IndexEndpoint will be secured by this key.
    Etag string
    Used to perform consistent read-modify-write updates. If not set, a blind "overwrite" update happens.
    Labels map[string]string
    The labels with user-defined metadata to organize your IndexEndpoints. Label keys and values can be no longer than 64 characters (Unicode codepoints), can only contain lowercase letters, numeric characters, underscores and dashes. International characters are allowed. See https://goo.gl/xmQnxf for more information and examples of labels.
    Name string
    The resource name of the IndexEndpoint.
    Network string
    Optional. The full name of the Google Compute Engine network to which the IndexEndpoint should be peered. Private services access must already be configured for the network. If left unspecified, the Endpoint is not peered with any network. network and private_service_connect_config are mutually exclusive. Format: projects/{project}/global/networks/{network}, where {project} is a project number, as in '12345', and {network} is a network name.
    PrivateServiceConnectConfig GoogleCloudAiplatformV1PrivateServiceConnectConfigResponse
    Optional. Configuration for private service connect. network and private_service_connect_config are mutually exclusive.
    PublicEndpointDomainName string
    If public_endpoint_enabled is true, this field will be populated with the domain name to use for this index endpoint.
    PublicEndpointEnabled bool
    Optional. If true, the deployed index will be accessible through public endpoint.
    UpdateTime string
    Timestamp when this IndexEndpoint was last updated. This timestamp is not updated when the endpoint's DeployedIndexes are updated, e.g. due to updates of the original Indexes they are the deployments of.
    createTime String
    Timestamp when this IndexEndpoint was created.
    deployedIndexes List<GoogleCloudAiplatformV1DeployedIndexResponse>
    The indexes deployed in this endpoint.
    description String
    The description of the IndexEndpoint.
    displayName String
    The display name of the IndexEndpoint. The name can be up to 128 characters long and can consist of any UTF-8 characters.
    enablePrivateServiceConnect Boolean
    Optional. Deprecated: If true, expose the IndexEndpoint via private service connect. Only one of the fields, network or enable_private_service_connect, can be set.
    encryptionSpec GoogleCloudAiplatformV1EncryptionSpecResponse
    Immutable. Customer-managed encryption key spec for an IndexEndpoint. If set, this IndexEndpoint and all sub-resources of this IndexEndpoint will be secured by this key.
    etag String
    Used to perform consistent read-modify-write updates. If not set, a blind "overwrite" update happens.
    labels Map<String,String>
    The labels with user-defined metadata to organize your IndexEndpoints. Label keys and values can be no longer than 64 characters (Unicode codepoints), can only contain lowercase letters, numeric characters, underscores and dashes. International characters are allowed. See https://goo.gl/xmQnxf for more information and examples of labels.
    name String
    The resource name of the IndexEndpoint.
    network String
    Optional. The full name of the Google Compute Engine network to which the IndexEndpoint should be peered. Private services access must already be configured for the network. If left unspecified, the Endpoint is not peered with any network. network and private_service_connect_config are mutually exclusive. Format: projects/{project}/global/networks/{network}, where {project} is a project number, as in '12345', and {network} is a network name.
    privateServiceConnectConfig GoogleCloudAiplatformV1PrivateServiceConnectConfigResponse
    Optional. Configuration for private service connect. network and private_service_connect_config are mutually exclusive.
    publicEndpointDomainName String
    If public_endpoint_enabled is true, this field will be populated with the domain name to use for this index endpoint.
    publicEndpointEnabled Boolean
    Optional. If true, the deployed index will be accessible through public endpoint.
    updateTime String
    Timestamp when this IndexEndpoint was last updated. This timestamp is not updated when the endpoint's DeployedIndexes are updated, e.g. due to updates of the original Indexes they are the deployments of.
    createTime string
    Timestamp when this IndexEndpoint was created.
    deployedIndexes GoogleCloudAiplatformV1DeployedIndexResponse[]
    The indexes deployed in this endpoint.
    description string
    The description of the IndexEndpoint.
    displayName string
    The display name of the IndexEndpoint. The name can be up to 128 characters long and can consist of any UTF-8 characters.
    enablePrivateServiceConnect boolean
    Optional. Deprecated: If true, expose the IndexEndpoint via private service connect. Only one of the fields, network or enable_private_service_connect, can be set.
    encryptionSpec GoogleCloudAiplatformV1EncryptionSpecResponse
    Immutable. Customer-managed encryption key spec for an IndexEndpoint. If set, this IndexEndpoint and all sub-resources of this IndexEndpoint will be secured by this key.
    etag string
    Used to perform consistent read-modify-write updates. If not set, a blind "overwrite" update happens.
    labels {[key: string]: string}
    The labels with user-defined metadata to organize your IndexEndpoints. Label keys and values can be no longer than 64 characters (Unicode codepoints), can only contain lowercase letters, numeric characters, underscores and dashes. International characters are allowed. See https://goo.gl/xmQnxf for more information and examples of labels.
    name string
    The resource name of the IndexEndpoint.
    network string
    Optional. The full name of the Google Compute Engine network to which the IndexEndpoint should be peered. Private services access must already be configured for the network. If left unspecified, the Endpoint is not peered with any network. network and private_service_connect_config are mutually exclusive. Format: projects/{project}/global/networks/{network}, where {project} is a project number, as in '12345', and {network} is a network name.
    privateServiceConnectConfig GoogleCloudAiplatformV1PrivateServiceConnectConfigResponse
    Optional. Configuration for private service connect. network and private_service_connect_config are mutually exclusive.
    publicEndpointDomainName string
    If public_endpoint_enabled is true, this field will be populated with the domain name to use for this index endpoint.
    publicEndpointEnabled boolean
    Optional. If true, the deployed index will be accessible through public endpoint.
    updateTime string
    Timestamp when this IndexEndpoint was last updated. This timestamp is not updated when the endpoint's DeployedIndexes are updated, e.g. due to updates of the original Indexes they are the deployments of.
    create_time str
    Timestamp when this IndexEndpoint was created.
    deployed_indexes Sequence[GoogleCloudAiplatformV1DeployedIndexResponse]
    The indexes deployed in this endpoint.
    description str
    The description of the IndexEndpoint.
    display_name str
    The display name of the IndexEndpoint. The name can be up to 128 characters long and can consist of any UTF-8 characters.
    enable_private_service_connect bool
    Optional. Deprecated: If true, expose the IndexEndpoint via private service connect. Only one of the fields, network or enable_private_service_connect, can be set.
    encryption_spec GoogleCloudAiplatformV1EncryptionSpecResponse
    Immutable. Customer-managed encryption key spec for an IndexEndpoint. If set, this IndexEndpoint and all sub-resources of this IndexEndpoint will be secured by this key.
    etag str
    Used to perform consistent read-modify-write updates. If not set, a blind "overwrite" update happens.
    labels Mapping[str, str]
    The labels with user-defined metadata to organize your IndexEndpoints. Label keys and values can be no longer than 64 characters (Unicode codepoints), can only contain lowercase letters, numeric characters, underscores and dashes. International characters are allowed. See https://goo.gl/xmQnxf for more information and examples of labels.
    name str
    The resource name of the IndexEndpoint.
    network str
    Optional. The full name of the Google Compute Engine network to which the IndexEndpoint should be peered. Private services access must already be configured for the network. If left unspecified, the Endpoint is not peered with any network. network and private_service_connect_config are mutually exclusive. Format: projects/{project}/global/networks/{network}, where {project} is a project number, as in '12345', and {network} is a network name.
    private_service_connect_config GoogleCloudAiplatformV1PrivateServiceConnectConfigResponse
    Optional. Configuration for private service connect. network and private_service_connect_config are mutually exclusive.
    public_endpoint_domain_name str
    If public_endpoint_enabled is true, this field will be populated with the domain name to use for this index endpoint.
    public_endpoint_enabled bool
    Optional. If true, the deployed index will be accessible through public endpoint.
    update_time str
    Timestamp when this IndexEndpoint was last updated. This timestamp is not updated when the endpoint's DeployedIndexes are updated, e.g. due to updates of the original Indexes they are the deployments of.
    createTime String
    Timestamp when this IndexEndpoint was created.
    deployedIndexes List<Property Map>
    The indexes deployed in this endpoint.
    description String
    The description of the IndexEndpoint.
    displayName String
    The display name of the IndexEndpoint. The name can be up to 128 characters long and can consist of any UTF-8 characters.
    enablePrivateServiceConnect Boolean
    Optional. Deprecated: If true, expose the IndexEndpoint via private service connect. Only one of the fields, network or enable_private_service_connect, can be set.
    encryptionSpec Property Map
    Immutable. Customer-managed encryption key spec for an IndexEndpoint. If set, this IndexEndpoint and all sub-resources of this IndexEndpoint will be secured by this key.
    etag String
    Used to perform consistent read-modify-write updates. If not set, a blind "overwrite" update happens.
    labels Map<String>
    The labels with user-defined metadata to organize your IndexEndpoints. Label keys and values can be no longer than 64 characters (Unicode codepoints), can only contain lowercase letters, numeric characters, underscores and dashes. International characters are allowed. See https://goo.gl/xmQnxf for more information and examples of labels.
    name String
    The resource name of the IndexEndpoint.
    network String
    Optional. The full name of the Google Compute Engine network to which the IndexEndpoint should be peered. Private services access must already be configured for the network. If left unspecified, the Endpoint is not peered with any network. network and private_service_connect_config are mutually exclusive. Format: projects/{project}/global/networks/{network}, where {project} is a project number, as in '12345', and {network} is a network name.
    privateServiceConnectConfig Property Map
    Optional. Configuration for private service connect. network and private_service_connect_config are mutually exclusive.
    publicEndpointDomainName String
    If public_endpoint_enabled is true, this field will be populated with the domain name to use for this index endpoint.
    publicEndpointEnabled Boolean
    Optional. If true, the deployed index will be accessible through public endpoint.
    updateTime String
    Timestamp when this IndexEndpoint was last updated. This timestamp is not updated when the endpoint's DeployedIndexes are updated, e.g. due to updates of the original Indexes they are the deployments of.
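
    To show how these output properties surface in a program, the following is a hedged TypeScript sketch that exports a few of them; the lookup arguments are hypothetical placeholders.

    import * as google_native from "@pulumi/google-native";

    const endpoint = google_native.aiplatform.v1.getIndexEndpointOutput({
        indexEndpointId: "1234567890123456789", // hypothetical
        location: "us-central1",
        project: "my-gcp-project",              // hypothetical
    });

    // Scalar properties are lifted directly onto the Output-wrapped result.
    export const endpointName = endpoint.name;
    export const createdAt = endpoint.createTime;
    export const publicDomain = endpoint.publicEndpointDomainName;

    // Nested properties such as deployedIndexes are plain values inside apply().
    export const deployedIndexNames = endpoint.deployedIndexes.apply(
        indexes => indexes.map(d => d.index));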

    Supporting Types

    GoogleCloudAiplatformV1AutomaticResourcesResponse

    MaxReplicaCount int
    Immutable. The maximum number of replicas this DeployedModel may be deployed on when the traffic against it increases. If the requested value is too large, the deployment will error, but if deployment succeeds then the ability to scale the model to that many replicas is guaranteed (barring service outages). If traffic against the DeployedModel increases beyond what its replicas at maximum may handle, a portion of the traffic will be dropped. If this value is not provided, no upper bound for scaling under heavy traffic is assumed, though Vertex AI may be unable to scale beyond a certain replica count.
    MinReplicaCount int
    Immutable. The minimum number of replicas this DeployedModel will be always deployed on. If traffic against it increases, it may dynamically be deployed onto more replicas up to max_replica_count, and as traffic decreases, some of these extra replicas may be freed. If the requested value is too large, the deployment will error.
    MaxReplicaCount int
    Immutable. The maximum number of replicas this DeployedModel may be deployed on when the traffic against it increases. If the requested value is too large, the deployment will error, but if deployment succeeds then the ability to scale the model to that many replicas is guaranteed (barring service outages). If traffic against the DeployedModel increases beyond what its replicas at maximum may handle, a portion of the traffic will be dropped. If this value is not provided, no upper bound for scaling under heavy traffic is assumed, though Vertex AI may be unable to scale beyond a certain replica count.
    MinReplicaCount int
    Immutable. The minimum number of replicas this DeployedModel will be always deployed on. If traffic against it increases, it may dynamically be deployed onto more replicas up to max_replica_count, and as traffic decreases, some of these extra replicas may be freed. If the requested value is too large, the deployment will error.
    maxReplicaCount Integer
    Immutable. The maximum number of replicas this DeployedModel may be deployed on when the traffic against it increases. If the requested value is too large, the deployment will error, but if deployment succeeds then the ability to scale the model to that many replicas is guaranteed (barring service outages). If traffic against the DeployedModel increases beyond what its replicas at maximum may handle, a portion of the traffic will be dropped. If this value is not provided, no upper bound for scaling under heavy traffic is assumed, though Vertex AI may be unable to scale beyond a certain replica count.
    minReplicaCount Integer
    Immutable. The minimum number of replicas this DeployedModel will be always deployed on. If traffic against it increases, it may dynamically be deployed onto more replicas up to max_replica_count, and as traffic decreases, some of these extra replicas may be freed. If the requested value is too large, the deployment will error.
    maxReplicaCount number
    Immutable. The maximum number of replicas this DeployedModel may be deployed on when the traffic against it increases. If the requested value is too large, the deployment will error, but if deployment succeeds then the ability to scale the model to that many replicas is guaranteed (barring service outages). If traffic against the DeployedModel increases beyond what its replicas at maximum may handle, a portion of the traffic will be dropped. If this value is not provided, no upper bound for scaling under heavy traffic is assumed, though Vertex AI may be unable to scale beyond a certain replica count.
    minReplicaCount number
    Immutable. The minimum number of replicas this DeployedModel will be always deployed on. If traffic against it increases, it may dynamically be deployed onto more replicas up to max_replica_count, and as traffic decreases, some of these extra replicas may be freed. If the requested value is too large, the deployment will error.
    max_replica_count int
    Immutable. The maximum number of replicas this DeployedModel may be deployed on when the traffic against it increases. If the requested value is too large, the deployment will error, but if deployment succeeds then the ability to scale the model to that many replicas is guaranteed (barring service outages). If traffic against the DeployedModel increases beyond what its replicas at maximum may handle, a portion of the traffic will be dropped. If this value is not provided, no upper bound for scaling under heavy traffic is assumed, though Vertex AI may be unable to scale beyond a certain replica count.
    min_replica_count int
    Immutable. The minimum number of replicas this DeployedModel will be always deployed on. If traffic against it increases, it may dynamically be deployed onto more replicas up to max_replica_count, and as traffic decreases, some of these extra replicas may be freed. If the requested value is too large, the deployment will error.
    maxReplicaCount Number
    Immutable. The maximum number of replicas this DeployedModel may be deployed on when the traffic against it increases. If the requested value is too large, the deployment will error, but if deployment succeeds then the ability to scale the model to that many replicas is guaranteed (barring service outages). If traffic against the DeployedModel increases beyond what its replicas at maximum may handle, a portion of the traffic will be dropped. If this value is not provided, no upper bound for scaling under heavy traffic is assumed, though Vertex AI may be unable to scale beyond a certain replica count.
    minReplicaCount Number
    Immutable. The minimum number of replicas this DeployedModel will be always deployed on. If traffic against it increases, it may dynamically be deployed onto more replicas up to max_replica_count, and as traffic decreases, some of these extra replicas may be freed. If the requested value is too large, the deployment will error.

    GoogleCloudAiplatformV1AutoscalingMetricSpecResponse

    MetricName string
    The resource metric name. Supported metrics: * For Online Prediction: * aiplatform.googleapis.com/prediction/online/accelerator/duty_cycle * aiplatform.googleapis.com/prediction/online/cpu/utilization
    Target int
    The target resource utilization in percentage (1% - 100%) for the given metric; once the real usage deviates from the target by a certain percentage, the machine replicas change. The default value is 60 (representing 60%) if not provided.
    MetricName string
    The resource metric name. Supported metrics: * For Online Prediction: * aiplatform.googleapis.com/prediction/online/accelerator/duty_cycle * aiplatform.googleapis.com/prediction/online/cpu/utilization
    Target int
    The target resource utilization in percentage (1% - 100%) for the given metric; once the real usage deviates from the target by a certain percentage, the machine replicas change. The default value is 60 (representing 60%) if not provided.
    metricName String
    The resource metric name. Supported metrics: * For Online Prediction: * aiplatform.googleapis.com/prediction/online/accelerator/duty_cycle * aiplatform.googleapis.com/prediction/online/cpu/utilization
    target Integer
    The target resource utilization in percentage (1% - 100%) for the given metric; once the real usage deviates from the target by a certain percentage, the machine replicas change. The default value is 60 (representing 60%) if not provided.
    metricName string
    The resource metric name. Supported metrics: * For Online Prediction: * aiplatform.googleapis.com/prediction/online/accelerator/duty_cycle * aiplatform.googleapis.com/prediction/online/cpu/utilization
    target number
    The target resource utilization in percentage (1% - 100%) for the given metric; once the real usage deviates from the target by a certain percentage, the machine replicas change. The default value is 60 (representing 60%) if not provided.
    metric_name str
    The resource metric name. Supported metrics: * For Online Prediction: * aiplatform.googleapis.com/prediction/online/accelerator/duty_cycle * aiplatform.googleapis.com/prediction/online/cpu/utilization
    target int
    The target resource utilization in percentage (1% - 100%) for the given metric; once the real usage deviates from the target by a certain percentage, the machine replicas change. The default value is 60 (representing 60%) if not provided.
    metricName String
    The resource metric name. Supported metrics: * For Online Prediction: * aiplatform.googleapis.com/prediction/online/accelerator/duty_cycle * aiplatform.googleapis.com/prediction/online/cpu/utilization
    target Number
    The target resource utilization in percentage (1% - 100%) for the given metric; once the real usage deviates from the target by a certain percentage, the machine replicas change. The default value is 60 (representing 60%) if not provided.

    GoogleCloudAiplatformV1DedicatedResourcesResponse

    AutoscalingMetricSpecs List<Pulumi.GoogleNative.Aiplatform.V1.Inputs.GoogleCloudAiplatformV1AutoscalingMetricSpecResponse>
    Immutable. The metric specifications that override a resource utilization metric's (CPU utilization, accelerator's duty cycle, and so on) target value (default of 60 if not set). At most one entry is allowed per metric. If machine_spec.accelerator_count is above 0, autoscaling is based on both the CPU utilization and accelerator duty cycle metrics, scaling up when either metric exceeds its target value and scaling down when both metrics are under their target values. The default target value is 60 for both metrics. If machine_spec.accelerator_count is 0, autoscaling is based on the CPU utilization metric only, with a default target value of 60 if not explicitly set. For example, in the case of Online Prediction, if you want to override the target CPU utilization to 80, set autoscaling_metric_specs.metric_name to aiplatform.googleapis.com/prediction/online/cpu/utilization and autoscaling_metric_specs.target to 80.
    MachineSpec Pulumi.GoogleNative.Aiplatform.V1.Inputs.GoogleCloudAiplatformV1MachineSpecResponse
    Immutable. The specification of a single machine used by the prediction.
    MaxReplicaCount int
    Immutable. The maximum number of replicas this DeployedModel may be deployed on when the traffic against it increases. If the requested value is too large, the deployment will error, but if deployment succeeds then the ability to scale the model to that many replicas is guaranteed (barring service outages). If traffic against the DeployedModel increases beyond what its replicas at maximum may handle, a portion of the traffic will be dropped. If this value is not provided, min_replica_count is used as the default value. The value of this field impacts the charge against Vertex CPU and GPU quotas. Specifically, you will be charged for (max_replica_count * number of cores in the selected machine type) and (max_replica_count * number of GPUs per replica in the selected machine type).
    MinReplicaCount int
    Immutable. The minimum number of machine replicas this DeployedModel will be always deployed on. This value must be greater than or equal to 1. If traffic against the DeployedModel increases, it may dynamically be deployed onto more replicas, and as traffic decreases, some of these extra replicas may be freed.
    AutoscalingMetricSpecs []GoogleCloudAiplatformV1AutoscalingMetricSpecResponse
    Immutable. The metric specifications that override a resource utilization metric's (CPU utilization, accelerator's duty cycle, and so on) target value (default of 60 if not set). At most one entry is allowed per metric. If machine_spec.accelerator_count is above 0, autoscaling is based on both the CPU utilization and accelerator duty cycle metrics, scaling up when either metric exceeds its target value and scaling down when both metrics are under their target values. The default target value is 60 for both metrics. If machine_spec.accelerator_count is 0, autoscaling is based on the CPU utilization metric only, with a default target value of 60 if not explicitly set. For example, in the case of Online Prediction, if you want to override the target CPU utilization to 80, set autoscaling_metric_specs.metric_name to aiplatform.googleapis.com/prediction/online/cpu/utilization and autoscaling_metric_specs.target to 80.
    MachineSpec GoogleCloudAiplatformV1MachineSpecResponse
    Immutable. The specification of a single machine used by the prediction.
    MaxReplicaCount int
    Immutable. The maximum number of replicas this DeployedModel may be deployed on when the traffic against it increases. If the requested value is too large, the deployment will error, but if deployment succeeds then the ability to scale the model to that many replicas is guaranteed (barring service outages). If traffic against the DeployedModel increases beyond what its replicas at maximum may handle, a portion of the traffic will be dropped. If this value is not provided, min_replica_count is used as the default value. The value of this field impacts the charge against Vertex CPU and GPU quotas. Specifically, you will be charged for (max_replica_count * number of cores in the selected machine type) and (max_replica_count * number of GPUs per replica in the selected machine type).
    MinReplicaCount int
    Immutable. The minimum number of machine replicas this DeployedModel will be always deployed on. This value must be greater than or equal to 1. If traffic against the DeployedModel increases, it may dynamically be deployed onto more replicas, and as traffic decreases, some of these extra replicas may be freed.
    autoscalingMetricSpecs List<GoogleCloudAiplatformV1AutoscalingMetricSpecResponse>
    Immutable. The metric specifications that override a resource utilization metric's (CPU utilization, accelerator's duty cycle, and so on) target value (default of 60 if not set). At most one entry is allowed per metric. If machine_spec.accelerator_count is above 0, autoscaling is based on both the CPU utilization and accelerator duty cycle metrics, scaling up when either metric exceeds its target value and scaling down when both metrics are under their target values. The default target value is 60 for both metrics. If machine_spec.accelerator_count is 0, autoscaling is based on the CPU utilization metric only, with a default target value of 60 if not explicitly set. For example, in the case of Online Prediction, if you want to override the target CPU utilization to 80, set autoscaling_metric_specs.metric_name to aiplatform.googleapis.com/prediction/online/cpu/utilization and autoscaling_metric_specs.target to 80.
    machineSpec GoogleCloudAiplatformV1MachineSpecResponse
    Immutable. The specification of a single machine used by the prediction.
    maxReplicaCount Integer
    Immutable. The maximum number of replicas this DeployedModel may be deployed on when the traffic against it increases. If the requested value is too large, the deployment will error, but if deployment succeeds then the ability to scale the model to that many replicas is guaranteed (barring service outages). If traffic against the DeployedModel increases beyond what its replicas at maximum may handle, a portion of the traffic will be dropped. If this value is not provided, min_replica_count is used as the default value. The value of this field impacts the charge against Vertex CPU and GPU quotas. Specifically, you will be charged for (max_replica_count * number of cores in the selected machine type) and (max_replica_count * number of GPUs per replica in the selected machine type).
    minReplicaCount Integer
    Immutable. The minimum number of machine replicas this DeployedModel will be always deployed on. This value must be greater than or equal to 1. If traffic against the DeployedModel increases, it may dynamically be deployed onto more replicas, and as traffic decreases, some of these extra replicas may be freed.
    autoscalingMetricSpecs GoogleCloudAiplatformV1AutoscalingMetricSpecResponse[]
    Immutable. The metric specifications that override a resource utilization metric's (CPU utilization, accelerator's duty cycle, and so on) target value (default of 60 if not set). At most one entry is allowed per metric. If machine_spec.accelerator_count is above 0, autoscaling is based on both the CPU utilization and accelerator duty cycle metrics, scaling up when either metric exceeds its target value and scaling down when both metrics are under their target values. The default target value is 60 for both metrics. If machine_spec.accelerator_count is 0, autoscaling is based on the CPU utilization metric only, with a default target value of 60 if not explicitly set. For example, in the case of Online Prediction, if you want to override the target CPU utilization to 80, set autoscaling_metric_specs.metric_name to aiplatform.googleapis.com/prediction/online/cpu/utilization and autoscaling_metric_specs.target to 80.
    machineSpec GoogleCloudAiplatformV1MachineSpecResponse
    Immutable. The specification of a single machine used by the prediction.
    maxReplicaCount number
    Immutable. The maximum number of replicas this DeployedModel may be deployed on when the traffic against it increases. If the requested value is too large, the deployment will error, but if deployment succeeds then the ability to scale the model to that many replicas is guaranteed (barring service outages). If traffic against the DeployedModel increases beyond what its replicas at maximum may handle, a portion of the traffic will be dropped. If this value is not provided, min_replica_count is used as the default value. The value of this field impacts the charge against Vertex CPU and GPU quotas. Specifically, you will be charged for (max_replica_count * number of cores in the selected machine type) and (max_replica_count * number of GPUs per replica in the selected machine type).
    minReplicaCount number
    Immutable. The minimum number of machine replicas this DeployedModel will be always deployed on. This value must be greater than or equal to 1. If traffic against the DeployedModel increases, it may dynamically be deployed onto more replicas, and as traffic decreases, some of these extra replicas may be freed.
    autoscaling_metric_specs Sequence[GoogleCloudAiplatformV1AutoscalingMetricSpecResponse]
    Immutable. The metric specifications that override a resource utilization metric's (CPU utilization, accelerator's duty cycle, and so on) target value (default of 60 if not set). At most one entry is allowed per metric. If machine_spec.accelerator_count is above 0, autoscaling is based on both the CPU utilization and accelerator duty cycle metrics, scaling up when either metric exceeds its target value and scaling down when both metrics are under their target values. The default target value is 60 for both metrics. If machine_spec.accelerator_count is 0, autoscaling is based on the CPU utilization metric only, with a default target value of 60 if not explicitly set. For example, in the case of Online Prediction, if you want to override the target CPU utilization to 80, set autoscaling_metric_specs.metric_name to aiplatform.googleapis.com/prediction/online/cpu/utilization and autoscaling_metric_specs.target to 80.
    machine_spec GoogleCloudAiplatformV1MachineSpecResponse
    Immutable. The specification of a single machine used by the prediction.
    max_replica_count int
    Immutable. The maximum number of replicas this DeployedModel may be deployed on when the traffic against it increases. If the requested value is too large, the deployment will error, but if deployment succeeds then the ability to scale the model to that many replicas is guaranteed (barring service outages). If traffic against the DeployedModel increases beyond what its replicas at maximum may handle, a portion of the traffic will be dropped. If this value is not provided, min_replica_count is used as the default value. The value of this field impacts the charge against Vertex CPU and GPU quotas. Specifically, you will be charged for (max_replica_count * number of cores in the selected machine type) and (max_replica_count * number of GPUs per replica in the selected machine type).
    min_replica_count int
    Immutable. The minimum number of machine replicas this DeployedModel will be always deployed on. This value must be greater than or equal to 1. If traffic against the DeployedModel increases, it may dynamically be deployed onto more replicas, and as traffic decreases, some of these extra replicas may be freed.
    autoscalingMetricSpecs List<Property Map>
    Immutable. The metric specifications that override a resource utilization metric's (CPU utilization, accelerator's duty cycle, and so on) target value (default of 60 if not set). At most one entry is allowed per metric. If machine_spec.accelerator_count is above 0, autoscaling is based on both the CPU utilization and accelerator duty cycle metrics, scaling up when either metric exceeds its target value and scaling down when both metrics are under their target values. The default target value is 60 for both metrics. If machine_spec.accelerator_count is 0, autoscaling is based on the CPU utilization metric only, with a default target value of 60 if not explicitly set. For example, in the case of Online Prediction, if you want to override the target CPU utilization to 80, set autoscaling_metric_specs.metric_name to aiplatform.googleapis.com/prediction/online/cpu/utilization and autoscaling_metric_specs.target to 80.
    machineSpec Property Map
    Immutable. The specification of a single machine used by the prediction.
    maxReplicaCount Number
    Immutable. The maximum number of replicas this DeployedModel may be deployed on when the traffic against it increases. If the requested value is too large, the deployment will error, but if deployment succeeds then the ability to scale the model to that many replicas is guaranteed (barring service outages). If traffic against the DeployedModel increases beyond what its replicas at maximum may handle, a portion of the traffic will be dropped. If this value is not provided, min_replica_count is used as the default value. The value of this field impacts the charge against Vertex CPU and GPU quotas. Specifically, you will be charged for (max_replica_count * number of cores in the selected machine type) and (max_replica_count * number of GPUs per replica in the selected machine type).
    minReplicaCount Number
    Immutable. The minimum number of machine replicas this DeployedModel will be always deployed on. This value must be greater than or equal to 1. If traffic against the DeployedModel increases, it may dynamically be deployed onto more replicas, and as traffic decreases, some of these extra replicas may be freed.
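
    To make the AutomaticResources and DedicatedResources shapes above concrete, here is a hedged TypeScript sketch that summarizes each deployed index's scaling mode, replica bounds, and autoscaling targets, falling back to the documented default of 60 when no target is returned. The lookup arguments are hypothetical placeholders.

    import * as google_native from "@pulumi/google-native";

    const endpoint = google_native.aiplatform.v1.getIndexEndpointOutput({
        indexEndpointId: "1234567890123456789", // hypothetical
        location: "us-central1",
        project: "my-gcp-project",              // hypothetical
    });

    // Summarize the scaling configuration per deployed index. A deployed index
    // uses either automaticResources or dedicatedResources; only the latter
    // carries machineSpec and autoscalingMetricSpecs.
    export const scalingSummary = endpoint.deployedIndexes.apply(indexes =>
        indexes.map(d => {
            const dedicated = d.dedicatedResources;
            const resources = dedicated ?? d.automaticResources;
            return {
                index: d.index,
                mode: dedicated ? "dedicated" : "automatic",
                minReplicas: resources?.minReplicaCount,
                maxReplicas: resources?.maxReplicaCount,
                // Documented default target is 60% when not explicitly set.
                autoscalingTargets: (dedicated?.autoscalingMetricSpecs ?? []).map(
                    spec => ({ metric: spec.metricName, target: spec.target || 60 })),
            };
        }));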

    GoogleCloudAiplatformV1DeployedIndexAuthConfigAuthProviderResponse

    AllowedIssuers List<string>
    A list of allowed JWT issuers. Each entry must be a valid Google service account, in the following format: service-account-name@project-id.iam.gserviceaccount.com
    Audiences List<string>
    The list of JWT audiences that are allowed to access. A JWT containing any of these audiences will be accepted.
    AllowedIssuers []string
    A list of allowed JWT issuers. Each entry must be a valid Google service account, in the following format: service-account-name@project-id.iam.gserviceaccount.com
    Audiences []string
    The list of JWT audiences that are allowed to access. A JWT containing any of these audiences will be accepted.
    allowedIssuers List<String>
    A list of allowed JWT issuers. Each entry must be a valid Google service account, in the following format: service-account-name@project-id.iam.gserviceaccount.com
    audiences List<String>
    The list of JWT audiences that are allowed to access. A JWT containing any of these audiences will be accepted.
    allowedIssuers string[]
    A list of allowed JWT issuers. Each entry must be a valid Google service account, in the following format: service-account-name@project-id.iam.gserviceaccount.com
    audiences string[]
    The list of JWT audiences that are allowed to access. A JWT containing any of these audiences will be accepted.
    allowed_issuers Sequence[str]
    A list of allowed JWT issuers. Each entry must be a valid Google service account, in the following format: service-account-name@project-id.iam.gserviceaccount.com
    audiences Sequence[str]
    The list of JWT audiences that are allowed to access. A JWT containing any of these audiences will be accepted.
    allowedIssuers List<String>
    A list of allowed JWT issuers. Each entry must be a valid Google service account, in the following format: service-account-name@project-id.iam.gserviceaccount.com
    audiences List<String>
    The list of JWT audiences that are allowed to access. A JWT containing any of these audiences will be accepted.

    GoogleCloudAiplatformV1DeployedIndexAuthConfigResponse

    AuthProvider GoogleCloudAiplatformV1DeployedIndexAuthConfigAuthProviderResponse
    Defines the authentication provider that the DeployedIndex uses.
    authProvider GoogleCloudAiplatformV1DeployedIndexAuthConfigAuthProviderResponse
    Defines the authentication provider that the DeployedIndex uses.
    authProvider GoogleCloudAiplatformV1DeployedIndexAuthConfigAuthProviderResponse
    Defines the authentication provider that the DeployedIndex uses.
    auth_provider GoogleCloudAiplatformV1DeployedIndexAuthConfigAuthProviderResponse
    Defines the authentication provider that the DeployedIndex uses.
    authProvider Property Map
    Defines the authentication provider that the DeployedIndex uses.
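
    Tying the two auth types above together, here is a hedged TypeScript sketch that reads the accepted JWT issuers and audiences back from a looked-up endpoint; the lookup arguments are hypothetical placeholders.

    import * as google_native from "@pulumi/google-native";

    const endpoint = google_native.aiplatform.v1.getIndexEndpointOutput({
        indexEndpointId: "1234567890123456789", // hypothetical
        location: "us-central1",
        project: "my-gcp-project",              // hypothetical
    });

    // Collect, per deployed index, the JWT issuers and audiences its private
    // endpoint accepts (empty when auth is not configured).
    export const authByIndex = endpoint.deployedIndexes.apply(indexes => {
        const result: Record<string, { issuers: string[]; audiences: string[] }> = {};
        for (const d of indexes) {
            const provider = d.deployedIndexAuthConfig?.authProvider;
            result[d.index] = {
                issuers: provider?.allowedIssuers ?? [],
                audiences: provider?.audiences ?? [],
            };
        }
        return result;
    });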

    GoogleCloudAiplatformV1DeployedIndexResponse

    AutomaticResources Pulumi.GoogleNative.Aiplatform.V1.Inputs.GoogleCloudAiplatformV1AutomaticResourcesResponse
    Optional. A description of resources that the DeployedIndex uses, which to a large degree are decided by Vertex AI and optionally allow only a modest amount of additional configuration. If min_replica_count is not set, the default value is 2 (we don't provide an SLA when min_replica_count=1). If max_replica_count is not set, the default value is min_replica_count. The maximum allowed replica count is 1000.
    CreateTime string
    Timestamp when the DeployedIndex was created.
    DedicatedResources Pulumi.GoogleNative.Aiplatform.V1.Inputs.GoogleCloudAiplatformV1DedicatedResourcesResponse
    Optional. A description of resources that are dedicated to the DeployedIndex, and that need a higher degree of manual configuration. The field min_replica_count must be set to a value strictly greater than 0, or else validation will fail. We don't provide SLA when min_replica_count=1. If max_replica_count is not set, the default value is min_replica_count. The max allowed replica count is 1000. Available machine types for SMALL shard: e2-standard-2 and all machine types available for MEDIUM and LARGE shard. Available machine types for MEDIUM shard: e2-standard-16 and all machine types available for LARGE shard. Available machine types for LARGE shard: e2-highmem-16, n2d-standard-32. n1-standard-16 and n1-standard-32 are still available, but we recommend e2-standard-16 and e2-highmem-16 for cost efficiency.
    DeployedIndexAuthConfig Pulumi.GoogleNative.Aiplatform.V1.Inputs.GoogleCloudAiplatformV1DeployedIndexAuthConfigResponse
    Optional. If set, the authentication is enabled for the private endpoint.
    DeploymentGroup string
    Optional. The deployment group can be no longer than 64 characters (e.g., 'test', 'prod'). If not set, we will use the 'default' deployment group. Creating deployment_groups with reserved_ip_ranges is a recommended practice when the peered network has multiple peering ranges. This creates your deployments from predictable IP spaces for easier traffic administration. Also, one deployment_group (except 'default') can only be used with the same reserved_ip_ranges, which means that if the deployment_group has been used with reserved_ip_ranges: [a, b, c], using it with [a, b] or [d, e] is disallowed. Note: we only support up to 5 deployment groups (not including 'default').
    DisplayName string
    The display name of the DeployedIndex. If not provided upon creation, the Index's display_name is used.
    EnableAccessLogging bool
    Optional. If true, private endpoint's access logs are sent to Cloud Logging. These logs are like standard server access logs, containing information like timestamp and latency for each MatchRequest. Note that logs may incur a cost, especially if the deployed index receives a high queries per second rate (QPS). Estimate your costs before enabling this option.
    Index string
    The name of the Index this is the deployment of. We may refer to this Index as the DeployedIndex's "original" Index.
    IndexSyncTime string
    The DeployedIndex may depend on various data on its original Index. Additionally, when certain changes to the original Index are being made (e.g. when what the Index contains is being changed), the DeployedIndex may be asynchronously updated in the background to reflect these changes. If this timestamp's value is at least the Index.update_time of the original Index, it means that this DeployedIndex and the original Index are in sync. If this timestamp is older, then to see which updates this DeployedIndex already contains (and which it does not), one must list the operations that are running on the original Index. Only the successfully completed Operations with update_time equal to or before this sync time are contained in this DeployedIndex.
    PrivateEndpoints Pulumi.GoogleNative.Aiplatform.V1.Inputs.GoogleCloudAiplatformV1IndexPrivateEndpointsResponse
    Provides paths for users to send requests directly to the deployed index services running on Cloud via private services access. This field is populated if network is configured.
    ReservedIpRanges List<string>
    Optional. A list of reserved ip ranges under the VPC network that can be used for this DeployedIndex. If set, we will deploy the index within the provided ip ranges. Otherwise, the index might be deployed to any ip ranges under the provided VPC network. The value should be the name of the address (https://cloud.google.com/compute/docs/reference/rest/v1/addresses) Example: ['vertex-ai-ip-range']. For more information about subnets and network IP ranges, please see https://cloud.google.com/vpc/docs/subnets#manually_created_subnet_ip_ranges.
    AutomaticResources GoogleCloudAiplatformV1AutomaticResourcesResponse
    Optional. A description of resources that the DeployedIndex uses, which to a large degree are decided by Vertex AI and optionally allow only a modest amount of additional configuration. If min_replica_count is not set, the default value is 2 (we don't provide an SLA when min_replica_count=1). If max_replica_count is not set, the default value is min_replica_count. The maximum allowed replica count is 1000.
    CreateTime string
    Timestamp when the DeployedIndex was created.
    DedicatedResources GoogleCloudAiplatformV1DedicatedResourcesResponse
    Optional. A description of resources that are dedicated to the DeployedIndex, and that need a higher degree of manual configuration. The field min_replica_count must be set to a value strictly greater than 0, or else validation will fail. We don't provide SLA when min_replica_count=1. If max_replica_count is not set, the default value is min_replica_count. The max allowed replica count is 1000. Available machine types for SMALL shard: e2-standard-2 and all machine types available for MEDIUM and LARGE shard. Available machine types for MEDIUM shard: e2-standard-16 and all machine types available for LARGE shard. Available machine types for LARGE shard: e2-highmem-16, n2d-standard-32. n1-standard-16 and n1-standard-32 are still available, but we recommend e2-standard-16 and e2-highmem-16 for cost efficiency.
    DeployedIndexAuthConfig GoogleCloudAiplatformV1DeployedIndexAuthConfigResponse
    Optional. If set, the authentication is enabled for the private endpoint.
    DeploymentGroup string
    Optional. The deployment group can be no longer than 64 characters (e.g., 'test', 'prod'). If not set, we will use the 'default' deployment group. Creating deployment_groups with reserved_ip_ranges is a recommended practice when the peered network has multiple peering ranges. This creates your deployments from predictable IP spaces for easier traffic administration. Also, one deployment_group (except 'default') can only be used with the same reserved_ip_ranges, which means that if the deployment_group has been used with reserved_ip_ranges: [a, b, c], using it with [a, b] or [d, e] is disallowed. Note: we only support up to 5 deployment groups (not including 'default').
    DisplayName string
    The display name of the DeployedIndex. If not provided upon creation, the Index's display_name is used.
    EnableAccessLogging bool
    Optional. If true, private endpoint's access logs are sent to Cloud Logging. These logs are like standard server access logs, containing information like timestamp and latency for each MatchRequest. Note that logs may incur a cost, especially if the deployed index receives a high queries per second rate (QPS). Estimate your costs before enabling this option.
    Index string
    The name of the Index this is the deployment of. We may refer to this Index as the DeployedIndex's "original" Index.
    IndexSyncTime string
    The DeployedIndex may depend on various data on its original Index. Additionally, when certain changes to the original Index are being made (e.g. when what the Index contains is being changed), the DeployedIndex may be asynchronously updated in the background to reflect these changes. If this timestamp's value is at least the Index.update_time of the original Index, it means that this DeployedIndex and the original Index are in sync. If this timestamp is older, then to see which updates this DeployedIndex already contains (and which it does not), one must list the operations that are running on the original Index. Only the successfully completed Operations with update_time equal to or before this sync time are contained in this DeployedIndex.
    PrivateEndpoints GoogleCloudAiplatformV1IndexPrivateEndpointsResponse
    Provides paths for users to send requests directly to the deployed index services running on Cloud via private services access. This field is populated if network is configured.
    ReservedIpRanges []string
    Optional. A list of reserved ip ranges under the VPC network that can be used for this DeployedIndex. If set, we will deploy the index within the provided ip ranges. Otherwise, the index might be deployed to any ip ranges under the provided VPC network. The value should be the name of the address (https://cloud.google.com/compute/docs/reference/rest/v1/addresses) Example: ['vertex-ai-ip-range']. For more information about subnets and network IP ranges, please see https://cloud.google.com/vpc/docs/subnets#manually_created_subnet_ip_ranges.
    automaticResources GoogleCloudAiplatformV1AutomaticResourcesResponse
    Optional. A description of resources that the DeployedIndex uses, which to a large degree are decided by Vertex AI and optionally allow only a modest amount of additional configuration. If min_replica_count is not set, the default value is 2 (we don't provide an SLA when min_replica_count=1). If max_replica_count is not set, the default value is min_replica_count. The maximum allowed replica count is 1000.
    createTime String
    Timestamp when the DeployedIndex was created.
    dedicatedResources GoogleCloudAiplatformV1DedicatedResourcesResponse
    Optional. A description of resources that are dedicated to the DeployedIndex, and that need a higher degree of manual configuration. The field min_replica_count must be set to a value strictly greater than 0, or else validation will fail. We don't provide SLA when min_replica_count=1. If max_replica_count is not set, the default value is min_replica_count. The max allowed replica count is 1000. Available machine types for SMALL shard: e2-standard-2 and all machine types available for MEDIUM and LARGE shard. Available machine types for MEDIUM shard: e2-standard-16 and all machine types available for LARGE shard. Available machine types for LARGE shard: e2-highmem-16, n2d-standard-32. n1-standard-16 and n1-standard-32 are still available, but we recommend e2-standard-16 and e2-highmem-16 for cost efficiency.
    deployedIndexAuthConfig GoogleCloudAiplatformV1DeployedIndexAuthConfigResponse
    Optional. If set, the authentication is enabled for the private endpoint.
    deploymentGroup String
    Optional. The deployment group can be no longer than 64 characters (e.g., 'test', 'prod'). If not set, we will use the 'default' deployment group. Creating deployment_groups with reserved_ip_ranges is a recommended practice when the peered network has multiple peering ranges. This creates your deployments from predictable IP spaces for easier traffic administration. Also, one deployment_group (except 'default') can only be used with the same reserved_ip_ranges, which means that if the deployment_group has been used with reserved_ip_ranges: [a, b, c], using it with [a, b] or [d, e] is disallowed. Note: we only support up to 5 deployment groups (not including 'default').
    displayName String
    The display name of the DeployedIndex. If not provided upon creation, the Index's display_name is used.
    enableAccessLogging Boolean
    Optional. If true, private endpoint's access logs are sent to Cloud Logging. These logs are like standard server access logs, containing information like timestamp and latency for each MatchRequest. Note that logs may incur a cost, especially if the deployed index receives a high queries per second rate (QPS). Estimate your costs before enabling this option.
    index String
    The name of the Index this is the deployment of. We may refer to this Index as the DeployedIndex's "original" Index.
    indexSyncTime String
    The DeployedIndex may depend on various data on its original Index. Additionally, when certain changes are made to the original Index (e.g. when its contents change), the DeployedIndex may be asynchronously updated in the background to reflect them. If this timestamp's value is at least the Index.update_time of the original Index, this DeployedIndex and the original Index are in sync. If this timestamp is older, then to see which updates this DeployedIndex already contains (and which it does not), list the operations running on the original Index: only successfully completed Operations with an update_time equal to or before this sync time are contained in this DeployedIndex.
    privateEndpoints GoogleCloudAiplatformV1IndexPrivateEndpointsResponse
    Provides paths for users to send requests directly to the deployed index services running on Cloud via private services access. This field is populated if network is configured.
    reservedIpRanges List<String>
    Optional. A list of reserved ip ranges under the VPC network that can be used for this DeployedIndex. If set, we will deploy the index within the provided ip ranges. Otherwise, the index might be deployed to any ip ranges under the provided VPC network. The value should be the name of the address (https://cloud.google.com/compute/docs/reference/rest/v1/addresses) Example: ['vertex-ai-ip-range']. For more information about subnets and network IP ranges, please see https://cloud.google.com/vpc/docs/subnets#manually_created_subnet_ip_ranges.
    automaticResources GoogleCloudAiplatformV1AutomaticResourcesResponse
    Optional. A description of resources that the DeployedIndex uses, which to a large degree are decided by Vertex AI and allow only modest additional configuration. If min_replica_count is not set, the default value is 2 (no SLA is provided when min_replica_count=1). If max_replica_count is not set, the default value is min_replica_count. The max allowed replica count is 1000.
    createTime string
    Timestamp when the DeployedIndex was created.
    dedicatedResources GoogleCloudAiplatformV1DedicatedResourcesResponse
    Optional. A description of resources that are dedicated to the DeployedIndex, and that need a higher degree of manual configuration. The field min_replica_count must be set to a value strictly greater than 0, or else validation will fail. We don't provide SLA when min_replica_count=1. If max_replica_count is not set, the default value is min_replica_count. The max allowed replica count is 1000. Available machine types for SMALL shard: e2-standard-2 and all machine types available for MEDIUM and LARGE shard. Available machine types for MEDIUM shard: e2-standard-16 and all machine types available for LARGE shard. Available machine types for LARGE shard: e2-highmem-16, n2d-standard-32. n1-standard-16 and n1-standard-32 are still available, but we recommend e2-standard-16 and e2-highmem-16 for cost efficiency.
    deployedIndexAuthConfig GoogleCloudAiplatformV1DeployedIndexAuthConfigResponse
    Optional. If set, the authentication is enabled for the private endpoint.
    deploymentGroup string
    Optional. The deployment group can be no longer than 64 characters (e.g. 'test', 'prod'). If not set, the 'default' deployment group is used. Creating deployment_groups with reserved_ip_ranges is a recommended practice when the peered network has multiple peering ranges, since it creates your deployments from predictable IP spaces for easier traffic administration. Also, one deployment_group (except 'default') can only be used with the same reserved_ip_ranges, which means that if a deployment_group has been used with reserved_ip_ranges [a, b, c], using it with [a, b] or [d, e] is disallowed. Note: at most 5 deployment groups (not including 'default') are supported.
    displayName string
    The display name of the DeployedIndex. If not provided upon creation, the Index's display_name is used.
    enableAccessLogging boolean
    Optional. If true, private endpoint's access logs are sent to Cloud Logging. These logs are like standard server access logs, containing information like timestamp and latency for each MatchRequest. Note that logs may incur a cost, especially if the deployed index receives a high queries per second rate (QPS). Estimate your costs before enabling this option.
    index string
    The name of the Index this is the deployment of. We may refer to this Index as the DeployedIndex's "original" Index.
    indexSyncTime string
    The DeployedIndex may depend on various data on its original Index. Additionally, when certain changes are made to the original Index (e.g. when its contents change), the DeployedIndex may be asynchronously updated in the background to reflect them. If this timestamp's value is at least the Index.update_time of the original Index, this DeployedIndex and the original Index are in sync. If this timestamp is older, then to see which updates this DeployedIndex already contains (and which it does not), list the operations running on the original Index: only successfully completed Operations with an update_time equal to or before this sync time are contained in this DeployedIndex.
    privateEndpoints GoogleCloudAiplatformV1IndexPrivateEndpointsResponse
    Provides paths for users to send requests directly to the deployed index services running on Cloud via private services access. This field is populated if network is configured.
    reservedIpRanges string[]
    Optional. A list of reserved ip ranges under the VPC network that can be used for this DeployedIndex. If set, we will deploy the index within the provided ip ranges. Otherwise, the index might be deployed to any ip ranges under the provided VPC network. The value should be the name of the address (https://cloud.google.com/compute/docs/reference/rest/v1/addresses) Example: ['vertex-ai-ip-range']. For more information about subnets and network IP ranges, please see https://cloud.google.com/vpc/docs/subnets#manually_created_subnet_ip_ranges.
    automatic_resources GoogleCloudAiplatformV1AutomaticResourcesResponse
    Optional. A description of resources that the DeployedIndex uses, which to a large degree are decided by Vertex AI and allow only modest additional configuration. If min_replica_count is not set, the default value is 2 (no SLA is provided when min_replica_count=1). If max_replica_count is not set, the default value is min_replica_count. The max allowed replica count is 1000.
    create_time str
    Timestamp when the DeployedIndex was created.
    dedicated_resources GoogleCloudAiplatformV1DedicatedResourcesResponse
    Optional. A description of resources that are dedicated to the DeployedIndex, and that need a higher degree of manual configuration. The field min_replica_count must be set to a value strictly greater than 0, or else validation will fail. We don't provide SLA when min_replica_count=1. If max_replica_count is not set, the default value is min_replica_count. The max allowed replica count is 1000. Available machine types for SMALL shard: e2-standard-2 and all machine types available for MEDIUM and LARGE shard. Available machine types for MEDIUM shard: e2-standard-16 and all machine types available for LARGE shard. Available machine types for LARGE shard: e2-highmem-16, n2d-standard-32. n1-standard-16 and n1-standard-32 are still available, but we recommend e2-standard-16 and e2-highmem-16 for cost efficiency.
    deployed_index_auth_config GoogleCloudAiplatformV1DeployedIndexAuthConfigResponse
    Optional. If set, the authentication is enabled for the private endpoint.
    deployment_group str
    Optional. The deployment group can be no longer than 64 characters (e.g. 'test', 'prod'). If not set, the 'default' deployment group is used. Creating deployment_groups with reserved_ip_ranges is a recommended practice when the peered network has multiple peering ranges, since it creates your deployments from predictable IP spaces for easier traffic administration. Also, one deployment_group (except 'default') can only be used with the same reserved_ip_ranges, which means that if a deployment_group has been used with reserved_ip_ranges [a, b, c], using it with [a, b] or [d, e] is disallowed. Note: at most 5 deployment groups (not including 'default') are supported.
    display_name str
    The display name of the DeployedIndex. If not provided upon creation, the Index's display_name is used.
    enable_access_logging bool
    Optional. If true, private endpoint's access logs are sent to Cloud Logging. These logs are like standard server access logs, containing information like timestamp and latency for each MatchRequest. Note that logs may incur a cost, especially if the deployed index receives a high queries per second rate (QPS). Estimate your costs before enabling this option.
    index str
    The name of the Index this is the deployment of. We may refer to this Index as the DeployedIndex's "original" Index.
    index_sync_time str
    The DeployedIndex may depend on various data on its original Index. Additionally, when certain changes are made to the original Index (e.g. when its contents change), the DeployedIndex may be asynchronously updated in the background to reflect them. If this timestamp's value is at least the Index.update_time of the original Index, this DeployedIndex and the original Index are in sync. If this timestamp is older, then to see which updates this DeployedIndex already contains (and which it does not), list the operations running on the original Index: only successfully completed Operations with an update_time equal to or before this sync time are contained in this DeployedIndex.
    private_endpoints GoogleCloudAiplatformV1IndexPrivateEndpointsResponse
    Provides paths for users to send requests directly to the deployed index services running on Cloud via private services access. This field is populated if network is configured.
    reserved_ip_ranges Sequence[str]
    Optional. A list of reserved ip ranges under the VPC network that can be used for this DeployedIndex. If set, we will deploy the index within the provided ip ranges. Otherwise, the index might be deployed to any ip ranges under the provided VPC network. The value should be the name of the address (https://cloud.google.com/compute/docs/reference/rest/v1/addresses) Example: ['vertex-ai-ip-range']. For more information about subnets and network IP ranges, please see https://cloud.google.com/vpc/docs/subnets#manually_created_subnet_ip_ranges.
    automaticResources Property Map
    Optional. A description of resources that the DeployedIndex uses, which to a large degree are decided by Vertex AI and allow only modest additional configuration. If min_replica_count is not set, the default value is 2 (no SLA is provided when min_replica_count=1). If max_replica_count is not set, the default value is min_replica_count. The max allowed replica count is 1000.
    createTime String
    Timestamp when the DeployedIndex was created.
    dedicatedResources Property Map
    Optional. A description of resources that are dedicated to the DeployedIndex, and that need a higher degree of manual configuration. The field min_replica_count must be set to a value strictly greater than 0, or else validation will fail. We don't provide SLA when min_replica_count=1. If max_replica_count is not set, the default value is min_replica_count. The max allowed replica count is 1000. Available machine types for SMALL shard: e2-standard-2 and all machine types available for MEDIUM and LARGE shard. Available machine types for MEDIUM shard: e2-standard-16 and all machine types available for LARGE shard. Available machine types for LARGE shard: e2-highmem-16, n2d-standard-32. n1-standard-16 and n1-standard-32 are still available, but we recommend e2-standard-16 and e2-highmem-16 for cost efficiency.
    deployedIndexAuthConfig Property Map
    Optional. If set, the authentication is enabled for the private endpoint.
    deploymentGroup String
    Optional. The deployment group can be no longer than 64 characters (e.g. 'test', 'prod'). If not set, the 'default' deployment group is used. Creating deployment_groups with reserved_ip_ranges is a recommended practice when the peered network has multiple peering ranges, since it creates your deployments from predictable IP spaces for easier traffic administration. Also, one deployment_group (except 'default') can only be used with the same reserved_ip_ranges, which means that if a deployment_group has been used with reserved_ip_ranges [a, b, c], using it with [a, b] or [d, e] is disallowed. Note: at most 5 deployment groups (not including 'default') are supported.
    displayName String
    The display name of the DeployedIndex. If not provided upon creation, the Index's display_name is used.
    enableAccessLogging Boolean
    Optional. If true, private endpoint's access logs are sent to Cloud Logging. These logs are like standard server access logs, containing information like timestamp and latency for each MatchRequest. Note that logs may incur a cost, especially if the deployed index receives a high queries per second rate (QPS). Estimate your costs before enabling this option.
    index String
    The name of the Index this is the deployment of. We may refer to this Index as the DeployedIndex's "original" Index.
    indexSyncTime String
    The DeployedIndex may depend on various data on its original Index. Additionally, when certain changes are made to the original Index (e.g. when its contents change), the DeployedIndex may be asynchronously updated in the background to reflect them. If this timestamp's value is at least the Index.update_time of the original Index, this DeployedIndex and the original Index are in sync. If this timestamp is older, then to see which updates this DeployedIndex already contains (and which it does not), list the operations running on the original Index: only successfully completed Operations with an update_time equal to or before this sync time are contained in this DeployedIndex.
    privateEndpoints Property Map
    Provides paths for users to send requests directly to the deployed index services running on Cloud via private services access. This field is populated if network is configured.
    reservedIpRanges List<String>
    Optional. A list of reserved ip ranges under the VPC network that can be used for this DeployedIndex. If set, we will deploy the index within the provided ip ranges. Otherwise, the index might be deployed to any ip ranges under the provided VPC network. The value should be the name of the address (https://cloud.google.com/compute/docs/reference/rest/v1/addresses) Example: ['vertex-ai-ip-range']. For more information about subnets and network IP ranges, please see https://cloud.google.com/vpc/docs/subnets#manually_created_subnet_ip_ranges.

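    To make the DeployedIndex fields above concrete, here is a minimal TypeScript sketch that looks up an IndexEndpoint and surfaces, for each deployment, the Index it serves, its sync timestamp, and its private match gRPC address. The project, location, and endpoint ID are placeholders, and the import path assumes the Node.js SDK layout of @pulumi/google-native; the field names follow the output properties listed above.

    import * as google_native from "@pulumi/google-native";

    // Placeholder identifiers -- substitute your own project, region, and endpoint ID.
    const endpoint = google_native.aiplatform.v1.getIndexEndpointOutput({
        project: "my-project",
        location: "us-central1",
        indexEndpointId: "1234567890",
    });

    // Summarize each DeployedIndex: which Index it serves, when it was last synced,
    // and the address for match gRPC requests (populated when a VPC network is configured).
    export const deployedIndexSummaries = endpoint.deployedIndexes.apply(indexes =>
        indexes.map(di => ({
            index: di.index,
            displayName: di.displayName,
            indexSyncTime: di.indexSyncTime,
            matchGrpcAddress: di.privateEndpoints?.matchGrpcAddress,
        })));
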
    GoogleCloudAiplatformV1EncryptionSpecResponse

    KmsKeyName string
    The Cloud KMS resource identifier of the customer managed encryption key used to protect a resource. Has the form: projects/my-project/locations/my-region/keyRings/my-kr/cryptoKeys/my-key. The key needs to be in the same region as where the compute resource is created.
    KmsKeyName string
    The Cloud KMS resource identifier of the customer managed encryption key used to protect a resource. Has the form: projects/my-project/locations/my-region/keyRings/my-kr/cryptoKeys/my-key. The key needs to be in the same region as where the compute resource is created.
    kmsKeyName String
    The Cloud KMS resource identifier of the customer managed encryption key used to protect a resource. Has the form: projects/my-project/locations/my-region/keyRings/my-kr/cryptoKeys/my-key. The key needs to be in the same region as where the compute resource is created.
    kmsKeyName string
    The Cloud KMS resource identifier of the customer managed encryption key used to protect a resource. Has the form: projects/my-project/locations/my-region/keyRings/my-kr/cryptoKeys/my-key. The key needs to be in the same region as where the compute resource is created.
    kms_key_name str
    The Cloud KMS resource identifier of the customer managed encryption key used to protect a resource. Has the form: projects/my-project/locations/my-region/keyRings/my-kr/cryptoKeys/my-key. The key needs to be in the same region as where the compute resource is created.
    kmsKeyName String
    The Cloud KMS resource identifier of the customer managed encryption key used to protect a resource. Has the form: projects/my-project/locations/my-region/keyRings/my-kr/cryptoKeys/my-key. The key needs to be in the same region as where the compute resource is created.

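    If the IndexEndpoint was created with an encryptionSpec, the customer-managed key protecting it can be read from the result as a full Cloud KMS resource name of the form shown above. A minimal sketch, again with placeholder IDs:

    import * as google_native from "@pulumi/google-native";

    const ep = google_native.aiplatform.v1.getIndexEndpointOutput({
        project: "my-project",         // placeholder
        location: "us-central1",       // placeholder
        indexEndpointId: "1234567890", // placeholder
    });

    // Full Cloud KMS resource name: projects/.../locations/.../keyRings/.../cryptoKeys/...
    export const cmekKeyName = ep.encryptionSpec.kmsKeyName;
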
    GoogleCloudAiplatformV1IndexPrivateEndpointsResponse

    MatchGrpcAddress string
    The ip address used to send match gRPC requests.
    ServiceAttachment string
    The name of the service attachment resource. Populated if private service connect is enabled.
    MatchGrpcAddress string
    The ip address used to send match gRPC requests.
    ServiceAttachment string
    The name of the service attachment resource. Populated if private service connect is enabled.
    matchGrpcAddress String
    The ip address used to send match gRPC requests.
    serviceAttachment String
    The name of the service attachment resource. Populated if private service connect is enabled.
    matchGrpcAddress string
    The ip address used to send match gRPC requests.
    serviceAttachment string
    The name of the service attachment resource. Populated if private service connect is enabled.
    match_grpc_address str
    The ip address used to send match gRPC requests.
    service_attachment str
    The name of the service attachment resource. Populated if private service connect is enabled.
    matchGrpcAddress String
    The ip address used to send match gRPC requests.
    serviceAttachment String
    The name of the service attachment resource. Populated if private service connect is enabled.

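    These private endpoints are what clients actually use to reach a DeployedIndex. The sketch below (placeholder IDs) pulls the privateEndpoints of the first deployment; matchGrpcAddress is where match gRPC requests are sent over private services access, and serviceAttachment is what a consumer-side Private Service Connect forwarding rule would target.

    import * as google_native from "@pulumi/google-native";

    const ep = google_native.aiplatform.v1.getIndexEndpointOutput({
        project: "my-project",         // placeholder
        location: "us-central1",       // placeholder
        indexEndpointId: "1234567890", // placeholder
    });

    // Private endpoints of the first DeployedIndex (undefined if nothing is deployed yet).
    export const privateEndpoints = ep.deployedIndexes.apply(d => d[0]?.privateEndpoints);
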
    GoogleCloudAiplatformV1MachineSpecResponse

    AcceleratorCount int
    The number of accelerators to attach to the machine.
    AcceleratorType string
    Immutable. The type of accelerator(s) that may be attached to the machine as per accelerator_count.
    MachineType string
    Immutable. The type of the machine. See the list of machine types supported for prediction and the list of machine types supported for custom training. For DeployedModel this field is optional, and the default value is n1-standard-2. For BatchPredictionJob or as part of WorkerPoolSpec this field is required.
    TpuTopology string
    Immutable. The topology of the TPUs. Corresponds to the TPU topologies available from GKE. (Example: tpu_topology: "2x2x1").
    AcceleratorCount int
    The number of accelerators to attach to the machine.
    AcceleratorType string
    Immutable. The type of accelerator(s) that may be attached to the machine as per accelerator_count.
    MachineType string
    Immutable. The type of the machine. See the list of machine types supported for prediction and the list of machine types supported for custom training. For DeployedModel this field is optional, and the default value is n1-standard-2. For BatchPredictionJob or as part of WorkerPoolSpec this field is required.
    TpuTopology string
    Immutable. The topology of the TPUs. Corresponds to the TPU topologies available from GKE. (Example: tpu_topology: "2x2x1").
    acceleratorCount Integer
    The number of accelerators to attach to the machine.
    acceleratorType String
    Immutable. The type of accelerator(s) that may be attached to the machine as per accelerator_count.
    machineType String
    Immutable. The type of the machine. See the list of machine types supported for prediction and the list of machine types supported for custom training. For DeployedModel this field is optional, and the default value is n1-standard-2. For BatchPredictionJob or as part of WorkerPoolSpec this field is required.
    tpuTopology String
    Immutable. The topology of the TPUs. Corresponds to the TPU topologies available from GKE. (Example: tpu_topology: "2x2x1").
    acceleratorCount number
    The number of accelerators to attach to the machine.
    acceleratorType string
    Immutable. The type of accelerator(s) that may be attached to the machine as per accelerator_count.
    machineType string
    Immutable. The type of the machine. See the list of machine types supported for prediction and the list of machine types supported for custom training. For DeployedModel this field is optional, and the default value is n1-standard-2. For BatchPredictionJob or as part of WorkerPoolSpec this field is required.
    tpuTopology string
    Immutable. The topology of the TPUs. Corresponds to the TPU topologies available from GKE. (Example: tpu_topology: "2x2x1").
    accelerator_count int
    The number of accelerators to attach to the machine.
    accelerator_type str
    Immutable. The type of accelerator(s) that may be attached to the machine as per accelerator_count.
    machine_type str
    Immutable. The type of the machine. See the list of machine types supported for prediction and the list of machine types supported for custom training. For DeployedModel this field is optional, and the default value is n1-standard-2. For BatchPredictionJob or as part of WorkerPoolSpec this field is required.
    tpu_topology str
    Immutable. The topology of the TPUs. Corresponds to the TPU topologies available from GKE. (Example: tpu_topology: "2x2x1").
    acceleratorCount Number
    The number of accelerators to attach to the machine.
    acceleratorType String
    Immutable. The type of accelerator(s) that may be attached to the machine as per accelerator_count.
    machineType String
    Immutable. The type of the machine. See the list of machine types supported for prediction and the list of machine types supported for custom training. For DeployedModel this field is optional, and the default value is n1-standard-2. For BatchPredictionJob or as part of WorkerPoolSpec this field is required.
    tpuTopology String
    Immutable. The topology of the TPUs. Corresponds to the TPU topologies available from GKE. (Example: tpu_topology: "2x2x1").

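    MachineSpec appears inside a DeployedIndex's dedicatedResources. The sketch below (placeholder IDs) reports the machine type and accelerator configuration backing each deployment that uses dedicated resources:

    import * as google_native from "@pulumi/google-native";

    const ep = google_native.aiplatform.v1.getIndexEndpointOutput({
        project: "my-project",         // placeholder
        location: "us-central1",       // placeholder
        indexEndpointId: "1234567890", // placeholder
    });

    // Machine shape per deployment; fields are undefined for deployments that
    // use automaticResources instead of dedicatedResources.
    export const machineSpecs = ep.deployedIndexes.apply(indexes =>
        indexes.map(di => ({
            displayName: di.displayName,
            machineType: di.dedicatedResources?.machineSpec?.machineType,
            acceleratorType: di.dedicatedResources?.machineSpec?.acceleratorType,
            acceleratorCount: di.dedicatedResources?.machineSpec?.acceleratorCount,
        })));
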
    GoogleCloudAiplatformV1PrivateServiceConnectConfigResponse

    EnablePrivateServiceConnect bool
    If true, expose the IndexEndpoint via private service connect.
    ProjectAllowlist List<string>
    A list of Projects from which the forwarding rule will target the service attachment.
    EnablePrivateServiceConnect bool
    If true, expose the IndexEndpoint via private service connect.
    ProjectAllowlist []string
    A list of Projects from which the forwarding rule will target the service attachment.
    enablePrivateServiceConnect Boolean
    If true, expose the IndexEndpoint via private service connect.
    projectAllowlist List<String>
    A list of Projects from which the forwarding rule will target the service attachment.
    enablePrivateServiceConnect boolean
    If true, expose the IndexEndpoint via private service connect.
    projectAllowlist string[]
    A list of Projects from which the forwarding rule will target the service attachment.
    enable_private_service_connect bool
    If true, expose the IndexEndpoint via private service connect.
    project_allowlist Sequence[str]
    A list of Projects from which the forwarding rule will target the service attachment.
    enablePrivateServiceConnect Boolean
    If true, expose the IndexEndpoint via private service connect.
    projectAllowlist List<String>
    A list of Projects from which the forwarding rule will target the service attachment.

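    Finally, the IndexEndpoint's privateServiceConnectConfig (described above) records whether the endpoint is exposed over Private Service Connect and which consumer projects may connect to it. A minimal sketch with placeholder IDs:

    import * as google_native from "@pulumi/google-native";

    const ep = google_native.aiplatform.v1.getIndexEndpointOutput({
        project: "my-project",         // placeholder
        location: "us-central1",       // placeholder
        indexEndpointId: "1234567890", // placeholder
    });

    export const pscEnabled = ep.privateServiceConnectConfig.enablePrivateServiceConnect;
    export const pscProjectAllowlist = ep.privateServiceConnectConfig.projectAllowlist;
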
    Package Details

    Repository
    Google Cloud Native pulumi/pulumi-google-native
    License
    Apache-2.0