
Google Cloud Native is in preview. Google Cloud Classic is fully supported.

Google Cloud Native v0.32.0 published on Wednesday, Nov 29, 2023 by Pulumi

google-native.compute/beta.RegionAutoscaler

    Creates an autoscaler in the specified project using the data included in the request.
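
    The following is a minimal, hedged TypeScript sketch of a typical configuration: it autoscales a hypothetical regional managed instance group on average CPU utilization. The project ID, region, and target URL are illustrative placeholders rather than values from this reference.

    import * as google_native from "@pulumi/google-native";

    // A hypothetical regional MIG is autoscaled on average CPU utilization.
    // All identifiers (project, region, target URL) are placeholders.
    const autoscaler = new google_native.compute.beta.RegionAutoscaler("web-autoscaler", {
        project: "my-project",
        region: "us-central1",
        target: "https://www.googleapis.com/compute/beta/projects/my-project/regions/us-central1/instanceGroupManagers/web-mig",
        autoscalingPolicy: {
            minNumReplicas: 2,
            maxNumReplicas: 10,
            coolDownPeriodSec: 60,
            cpuUtilization: {
                utilizationTarget: 0.6, // keep average CPU near 60%
            },
        },
    });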

    Create RegionAutoscaler Resource

    Resources are created with functions called constructors. To learn more about declaring and configuring resources, see Resources.

    Constructor syntax

    new RegionAutoscaler(name: string, args: RegionAutoscalerArgs, opts?: CustomResourceOptions);
    @overload
    def RegionAutoscaler(resource_name: str,
                         args: RegionAutoscalerArgs,
                         opts: Optional[ResourceOptions] = None)
    
    @overload
    def RegionAutoscaler(resource_name: str,
                         opts: Optional[ResourceOptions] = None,
                         region: Optional[str] = None,
                         autoscaling_policy: Optional[AutoscalingPolicyArgs] = None,
                         description: Optional[str] = None,
                         name: Optional[str] = None,
                         project: Optional[str] = None,
                         request_id: Optional[str] = None,
                         target: Optional[str] = None)
    func NewRegionAutoscaler(ctx *Context, name string, args RegionAutoscalerArgs, opts ...ResourceOption) (*RegionAutoscaler, error)
    public RegionAutoscaler(string name, RegionAutoscalerArgs args, CustomResourceOptions? opts = null)
    public RegionAutoscaler(String name, RegionAutoscalerArgs args)
    public RegionAutoscaler(String name, RegionAutoscalerArgs args, CustomResourceOptions options)
    
    type: google-native:compute/beta:RegionAutoscaler
    properties: # The arguments to resource properties.
    options: # Bag of options to control resource's behavior.
    
    

    Parameters

    name string
    The unique name of the resource.
    args RegionAutoscalerArgs
    The arguments to resource properties.
    opts CustomResourceOptions
    Bag of options to control resource's behavior.
    resource_name str
    The unique name of the resource.
    args RegionAutoscalerArgs
    The arguments to resource properties.
    opts ResourceOptions
    Bag of options to control resource's behavior.
    ctx Context
    Context object for the current deployment.
    name string
    The unique name of the resource.
    args RegionAutoscalerArgs
    The arguments to resource properties.
    opts ResourceOption
    Bag of options to control resource's behavior.
    name string
    The unique name of the resource.
    args RegionAutoscalerArgs
    The arguments to resource properties.
    opts CustomResourceOptions
    Bag of options to control resource's behavior.
    name String
    The unique name of the resource.
    args RegionAutoscalerArgs
    The arguments to resource properties.
    options CustomResourceOptions
    Bag of options to control resource's behavior.
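
    The options bag described above accepts the standard Pulumi resource options. A hedged TypeScript sketch with placeholder resource arguments, showing options such as protect and ignoreChanges:

    import * as google_native from "@pulumi/google-native";

    // Hedged sketch: resource options are passed as the final constructor argument.
    const guarded = new google_native.compute.beta.RegionAutoscaler("guarded-autoscaler", {
        region: "us-central1",
        target: "https://www.googleapis.com/compute/beta/projects/my-project/regions/us-central1/instanceGroupManagers/web-mig",
        autoscalingPolicy: { maxNumReplicas: 5 },
    }, {
        protect: true,                  // guard against accidental deletion
        ignoreChanges: ["description"], // skip diffs on this property
    });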

    Constructor example

    The following reference example uses placeholder values for all input properties.

    var google_nativeRegionAutoscalerResource = new GoogleNative.Compute.Beta.RegionAutoscaler("google-nativeRegionAutoscalerResource", new()
    {
        Region = "string",
        AutoscalingPolicy = new GoogleNative.Compute.Beta.Inputs.AutoscalingPolicyArgs
        {
            CoolDownPeriodSec = 0,
            CpuUtilization = new GoogleNative.Compute.Beta.Inputs.AutoscalingPolicyCpuUtilizationArgs
            {
                PredictiveMethod = GoogleNative.Compute.Beta.AutoscalingPolicyCpuUtilizationPredictiveMethod.None,
                UtilizationTarget = 0,
            },
            CustomMetricUtilizations = new[]
            {
                new GoogleNative.Compute.Beta.Inputs.AutoscalingPolicyCustomMetricUtilizationArgs
                {
                    Filter = "string",
                    Metric = "string",
                    SingleInstanceAssignment = 0,
                    UtilizationTarget = 0,
                    UtilizationTargetType = GoogleNative.Compute.Beta.AutoscalingPolicyCustomMetricUtilizationUtilizationTargetType.DeltaPerMinute,
                },
            },
            LoadBalancingUtilization = new GoogleNative.Compute.Beta.Inputs.AutoscalingPolicyLoadBalancingUtilizationArgs
            {
                UtilizationTarget = 0,
            },
            MaxNumReplicas = 0,
            MinNumReplicas = 0,
            Mode = GoogleNative.Compute.Beta.AutoscalingPolicyMode.Off,
            ScaleDownControl = new GoogleNative.Compute.Beta.Inputs.AutoscalingPolicyScaleDownControlArgs
            {
                MaxScaledDownReplicas = new GoogleNative.Compute.Beta.Inputs.FixedOrPercentArgs
                {
                    Fixed = 0,
                    Percent = 0,
                },
                TimeWindowSec = 0,
            },
            ScaleInControl = new GoogleNative.Compute.Beta.Inputs.AutoscalingPolicyScaleInControlArgs
            {
                MaxScaledInReplicas = new GoogleNative.Compute.Beta.Inputs.FixedOrPercentArgs
                {
                    Fixed = 0,
                    Percent = 0,
                },
                TimeWindowSec = 0,
            },
            ScalingSchedules = 
            {
                { "string", "string" },
            },
        },
        Description = "string",
        Name = "string",
        Project = "string",
        RequestId = "string",
        Target = "string",
    });
    
    example, err := computebeta.NewRegionAutoscaler(ctx, "google-nativeRegionAutoscalerResource", &computebeta.RegionAutoscalerArgs{
    	Region: pulumi.String("string"),
    	AutoscalingPolicy: &compute.AutoscalingPolicyArgs{
    		CoolDownPeriodSec: pulumi.Int(0),
    		CpuUtilization: &compute.AutoscalingPolicyCpuUtilizationArgs{
    			PredictiveMethod:  computebeta.AutoscalingPolicyCpuUtilizationPredictiveMethodNone,
    			UtilizationTarget: pulumi.Float64(0),
    		},
    		CustomMetricUtilizations: compute.AutoscalingPolicyCustomMetricUtilizationArray{
    			&compute.AutoscalingPolicyCustomMetricUtilizationArgs{
    				Filter:                   pulumi.String("string"),
    				Metric:                   pulumi.String("string"),
    				SingleInstanceAssignment: pulumi.Float64(0),
    				UtilizationTarget:        pulumi.Float64(0),
    				UtilizationTargetType:    computebeta.AutoscalingPolicyCustomMetricUtilizationUtilizationTargetTypeDeltaPerMinute,
    			},
    		},
    		LoadBalancingUtilization: &compute.AutoscalingPolicyLoadBalancingUtilizationArgs{
    			UtilizationTarget: pulumi.Float64(0),
    		},
    		MaxNumReplicas: pulumi.Int(0),
    		MinNumReplicas: pulumi.Int(0),
    		Mode:           computebeta.AutoscalingPolicyModeOff,
    		ScaleDownControl: &compute.AutoscalingPolicyScaleDownControlArgs{
    			MaxScaledDownReplicas: &compute.FixedOrPercentArgs{
    				Fixed:   pulumi.Int(0),
    				Percent: pulumi.Int(0),
    			},
    			TimeWindowSec: pulumi.Int(0),
    		},
    		ScaleInControl: &compute.AutoscalingPolicyScaleInControlArgs{
    			MaxScaledInReplicas: &compute.FixedOrPercentArgs{
    				Fixed:   pulumi.Int(0),
    				Percent: pulumi.Int(0),
    			},
    			TimeWindowSec: pulumi.Int(0),
    		},
    		ScalingSchedules: pulumi.StringMap{
    			"string": pulumi.String("string"),
    		},
    	},
    	Description: pulumi.String("string"),
    	Name:        pulumi.String("string"),
    	Project:     pulumi.String("string"),
    	RequestId:   pulumi.String("string"),
    	Target:      pulumi.String("string"),
    })
    
    var google_nativeRegionAutoscalerResource = new RegionAutoscaler("google-nativeRegionAutoscalerResource", RegionAutoscalerArgs.builder()
        .region("string")
        .autoscalingPolicy(AutoscalingPolicyArgs.builder()
            .coolDownPeriodSec(0)
            .cpuUtilization(AutoscalingPolicyCpuUtilizationArgs.builder()
                .predictiveMethod("NONE")
                .utilizationTarget(0)
                .build())
            .customMetricUtilizations(AutoscalingPolicyCustomMetricUtilizationArgs.builder()
                .filter("string")
                .metric("string")
                .singleInstanceAssignment(0)
                .utilizationTarget(0)
                .utilizationTargetType("DELTA_PER_MINUTE")
                .build())
            .loadBalancingUtilization(AutoscalingPolicyLoadBalancingUtilizationArgs.builder()
                .utilizationTarget(0)
                .build())
            .maxNumReplicas(0)
            .minNumReplicas(0)
            .mode("OFF")
            .scaleDownControl(AutoscalingPolicyScaleDownControlArgs.builder()
                .maxScaledDownReplicas(FixedOrPercentArgs.builder()
                    .fixed(0)
                    .percent(0)
                    .build())
                .timeWindowSec(0)
                .build())
            .scaleInControl(AutoscalingPolicyScaleInControlArgs.builder()
                .maxScaledInReplicas(FixedOrPercentArgs.builder()
                    .fixed(0)
                    .percent(0)
                    .build())
                .timeWindowSec(0)
                .build())
            .scalingSchedules(Map.of("string", "string"))
            .build())
        .description("string")
        .name("string")
        .project("string")
        .requestId("string")
        .target("string")
        .build());
    
    google_native_region_autoscaler_resource = google_native.compute.beta.RegionAutoscaler("google-nativeRegionAutoscalerResource",
        region="string",
        autoscaling_policy=google_native.compute.beta.AutoscalingPolicyArgs(
            cool_down_period_sec=0,
            cpu_utilization=google_native.compute.beta.AutoscalingPolicyCpuUtilizationArgs(
                predictive_method=google_native.compute.beta.AutoscalingPolicyCpuUtilizationPredictiveMethod.NONE,
                utilization_target=0,
            ),
            custom_metric_utilizations=[google_native.compute.beta.AutoscalingPolicyCustomMetricUtilizationArgs(
                filter="string",
                metric="string",
                single_instance_assignment=0,
                utilization_target=0,
                utilization_target_type=google_native.compute.beta.AutoscalingPolicyCustomMetricUtilizationUtilizationTargetType.DELTA_PER_MINUTE,
            )],
            load_balancing_utilization=google_native.compute.beta.AutoscalingPolicyLoadBalancingUtilizationArgs(
                utilization_target=0,
            ),
            max_num_replicas=0,
            min_num_replicas=0,
            mode=google_native.compute.beta.AutoscalingPolicyMode.OFF,
            scale_down_control=google_native.compute.beta.AutoscalingPolicyScaleDownControlArgs(
                max_scaled_down_replicas=google_native.compute.beta.FixedOrPercentArgs(
                    fixed=0,
                    percent=0,
                ),
                time_window_sec=0,
            ),
            scale_in_control=google_native.compute.beta.AutoscalingPolicyScaleInControlArgs(
                max_scaled_in_replicas=google_native.compute.beta.FixedOrPercentArgs(
                    fixed=0,
                    percent=0,
                ),
                time_window_sec=0,
            ),
            scaling_schedules={
                "string": "string",
            },
        ),
        description="string",
        name="string",
        project="string",
        request_id="string",
        target="string")
    
    const google_nativeRegionAutoscalerResource = new google_native.compute.beta.RegionAutoscaler("google-nativeRegionAutoscalerResource", {
        region: "string",
        autoscalingPolicy: {
            coolDownPeriodSec: 0,
            cpuUtilization: {
                predictiveMethod: google_native.compute.beta.AutoscalingPolicyCpuUtilizationPredictiveMethod.None,
                utilizationTarget: 0,
            },
            customMetricUtilizations: [{
                filter: "string",
                metric: "string",
                singleInstanceAssignment: 0,
                utilizationTarget: 0,
                utilizationTargetType: google_native.compute.beta.AutoscalingPolicyCustomMetricUtilizationUtilizationTargetType.DeltaPerMinute,
            }],
            loadBalancingUtilization: {
                utilizationTarget: 0,
            },
            maxNumReplicas: 0,
            minNumReplicas: 0,
            mode: google_native.compute.beta.AutoscalingPolicyMode.Off,
            scaleDownControl: {
                maxScaledDownReplicas: {
                    fixed: 0,
                    percent: 0,
                },
                timeWindowSec: 0,
            },
            scaleInControl: {
                maxScaledInReplicas: {
                    fixed: 0,
                    percent: 0,
                },
                timeWindowSec: 0,
            },
            scalingSchedules: {
                string: "string",
            },
        },
        description: "string",
        name: "string",
        project: "string",
        requestId: "string",
        target: "string",
    });
    
    type: google-native:compute/beta:RegionAutoscaler
    properties:
        autoscalingPolicy:
            coolDownPeriodSec: 0
            cpuUtilization:
                predictiveMethod: NONE
                utilizationTarget: 0
            customMetricUtilizations:
                - filter: string
                  metric: string
                  singleInstanceAssignment: 0
                  utilizationTarget: 0
                  utilizationTargetType: DELTA_PER_MINUTE
            loadBalancingUtilization:
                utilizationTarget: 0
            maxNumReplicas: 0
            minNumReplicas: 0
            mode: "OFF"
            scaleDownControl:
                maxScaledDownReplicas:
                    fixed: 0
                    percent: 0
                timeWindowSec: 0
            scaleInControl:
                maxScaledInReplicas:
                    fixed: 0
                    percent: 0
                timeWindowSec: 0
            scalingSchedules:
                string: string
        description: string
        name: string
        project: string
        region: string
        requestId: string
        target: string
    

    RegionAutoscaler Resource Properties

    To learn more about resource properties and how to use them, see Inputs and Outputs in the Architecture and Concepts docs.

    Inputs

    The RegionAutoscaler resource accepts the following input properties:

    Region string
    AutoscalingPolicy Pulumi.GoogleNative.Compute.Beta.Inputs.AutoscalingPolicy
    The configuration parameters for the autoscaling algorithm. You can define one or more signals for an autoscaler: cpuUtilization, customMetricUtilizations, and loadBalancingUtilization. If none of these are specified, the default will be to autoscale based on cpuUtilization to 0.6 or 60%.
    Description string
    An optional description of this resource. Provide this property when you create the resource.
    Name string
    Name of the resource. Provided by the client when the resource is created. The name must be 1-63 characters long, and comply with RFC1035. Specifically, the name must be 1-63 characters long and match the regular expression [a-z]([-a-z0-9]*[a-z0-9])? which means the first character must be a lowercase letter, and all following characters must be a dash, lowercase letter, or digit, except the last character, which cannot be a dash.
    Project string
    RequestId string
    An optional request ID to identify requests. Specify a unique request ID so that if you must retry your request, the server will know to ignore the request if it has already been completed. For example, consider a situation where you make an initial request and the request times out. If you make the request again with the same request ID, the server can check if original operation with the same request ID was received, and if so, will ignore the second request. This prevents clients from accidentally creating duplicate commitments. The request ID must be a valid UUID with the exception that zero UUID is not supported ( 00000000-0000-0000-0000-000000000000).
    Target string
    URL of the managed instance group that this autoscaler will scale. This field is required when creating an autoscaler.
    Region string
    AutoscalingPolicy AutoscalingPolicyArgs
    The configuration parameters for the autoscaling algorithm. You can define one or more signals for an autoscaler: cpuUtilization, customMetricUtilizations, and loadBalancingUtilization. If none of these are specified, the default will be to autoscale based on cpuUtilization to 0.6 or 60%.
    Description string
    An optional description of this resource. Provide this property when you create the resource.
    Name string
    Name of the resource. Provided by the client when the resource is created. The name must be 1-63 characters long, and comply with RFC1035. Specifically, the name must be 1-63 characters long and match the regular expression [a-z]([-a-z0-9]*[a-z0-9])? which means the first character must be a lowercase letter, and all following characters must be a dash, lowercase letter, or digit, except the last character, which cannot be a dash.
    Project string
    RequestId string
    An optional request ID to identify requests. Specify a unique request ID so that if you must retry your request, the server will know to ignore the request if it has already been completed. For example, consider a situation where you make an initial request and the request times out. If you make the request again with the same request ID, the server can check if original operation with the same request ID was received, and if so, will ignore the second request. This prevents clients from accidentally creating duplicate commitments. The request ID must be a valid UUID with the exception that zero UUID is not supported ( 00000000-0000-0000-0000-000000000000).
    Target string
    URL of the managed instance group that this autoscaler will scale. This field is required when creating an autoscaler.
    region String
    autoscalingPolicy AutoscalingPolicy
    The configuration parameters for the autoscaling algorithm. You can define one or more signals for an autoscaler: cpuUtilization, customMetricUtilizations, and loadBalancingUtilization. If none of these are specified, the default will be to autoscale based on cpuUtilization to 0.6 or 60%.
    description String
    An optional description of this resource. Provide this property when you create the resource.
    name String
    Name of the resource. Provided by the client when the resource is created. The name must be 1-63 characters long, and comply with RFC1035. Specifically, the name must be 1-63 characters long and match the regular expression [a-z]([-a-z0-9]*[a-z0-9])? which means the first character must be a lowercase letter, and all following characters must be a dash, lowercase letter, or digit, except the last character, which cannot be a dash.
    project String
    requestId String
    An optional request ID to identify requests. Specify a unique request ID so that if you must retry your request, the server will know to ignore the request if it has already been completed. For example, consider a situation where you make an initial request and the request times out. If you make the request again with the same request ID, the server can check if original operation with the same request ID was received, and if so, will ignore the second request. This prevents clients from accidentally creating duplicate commitments. The request ID must be a valid UUID with the exception that zero UUID is not supported ( 00000000-0000-0000-0000-000000000000).
    target String
    URL of the managed instance group that this autoscaler will scale. This field is required when creating an autoscaler.
    region string
    autoscalingPolicy AutoscalingPolicy
    The configuration parameters for the autoscaling algorithm. You can define one or more signals for an autoscaler: cpuUtilization, customMetricUtilizations, and loadBalancingUtilization. If none of these are specified, the default will be to autoscale based on cpuUtilization to 0.6 or 60%.
    description string
    An optional description of this resource. Provide this property when you create the resource.
    name string
    Name of the resource. Provided by the client when the resource is created. The name must be 1-63 characters long, and comply with RFC1035. Specifically, the name must be 1-63 characters long and match the regular expression [a-z]([-a-z0-9]*[a-z0-9])? which means the first character must be a lowercase letter, and all following characters must be a dash, lowercase letter, or digit, except the last character, which cannot be a dash.
    project string
    requestId string
    An optional request ID to identify requests. Specify a unique request ID so that if you must retry your request, the server will know to ignore the request if it has already been completed. For example, consider a situation where you make an initial request and the request times out. If you make the request again with the same request ID, the server can check if original operation with the same request ID was received, and if so, will ignore the second request. This prevents clients from accidentally creating duplicate commitments. The request ID must be a valid UUID with the exception that zero UUID is not supported ( 00000000-0000-0000-0000-000000000000).
    target string
    URL of the managed instance group that this autoscaler will scale. This field is required when creating an autoscaler.
    region str
    autoscaling_policy AutoscalingPolicyArgs
    The configuration parameters for the autoscaling algorithm. You can define one or more signals for an autoscaler: cpuUtilization, customMetricUtilizations, and loadBalancingUtilization. If none of these are specified, the default will be to autoscale based on cpuUtilization to 0.6 or 60%.
    description str
    An optional description of this resource. Provide this property when you create the resource.
    name str
    Name of the resource. Provided by the client when the resource is created. The name must be 1-63 characters long, and comply with RFC1035. Specifically, the name must be 1-63 characters long and match the regular expression [a-z]([-a-z0-9]*[a-z0-9])? which means the first character must be a lowercase letter, and all following characters must be a dash, lowercase letter, or digit, except the last character, which cannot be a dash.
    project str
    request_id str
    An optional request ID to identify requests. Specify a unique request ID so that if you must retry your request, the server will know to ignore the request if it has already been completed. For example, consider a situation where you make an initial request and the request times out. If you make the request again with the same request ID, the server can check if original operation with the same request ID was received, and if so, will ignore the second request. This prevents clients from accidentally creating duplicate commitments. The request ID must be a valid UUID with the exception that zero UUID is not supported ( 00000000-0000-0000-0000-000000000000).
    target str
    URL of the managed instance group that this autoscaler will scale. This field is required when creating an autoscaler.
    region String
    autoscalingPolicy Property Map
    The configuration parameters for the autoscaling algorithm. You can define one or more signals for an autoscaler: cpuUtilization, customMetricUtilizations, and loadBalancingUtilization. If none of these are specified, the default will be to autoscale based on cpuUtilization to 0.6 or 60%.
    description String
    An optional description of this resource. Provide this property when you create the resource.
    name String
    Name of the resource. Provided by the client when the resource is created. The name must be 1-63 characters long, and comply with RFC1035. Specifically, the name must be 1-63 characters long and match the regular expression [a-z]([-a-z0-9]*[a-z0-9])? which means the first character must be a lowercase letter, and all following characters must be a dash, lowercase letter, or digit, except the last character, which cannot be a dash.
    project String
    requestId String
    An optional request ID to identify requests. Specify a unique request ID so that if you must retry your request, the server will know to ignore the request if it has already been completed. For example, consider a situation where you make an initial request and the request times out. If you make the request again with the same request ID, the server can check if original operation with the same request ID was received, and if so, will ignore the second request. This prevents clients from accidentally creating duplicate commitments. The request ID must be a valid UUID with the exception that zero UUID is not supported ( 00000000-0000-0000-0000-000000000000).
    target String
    URL of the managed instance group that this autoscaler will scale. This field is required when creating an autoscaler.

    Outputs

    All input properties are implicitly available as output properties. Additionally, the RegionAutoscaler resource produces the following output properties:

    CreationTimestamp string
    Creation timestamp in RFC3339 text format.
    Id string
    The provider-assigned unique ID for this managed resource.
    Kind string
    Type of the resource. Always compute#autoscaler for autoscalers.
    RecommendedSize int
    Target recommended MIG size (number of instances) computed by autoscaler. Autoscaler calculates the recommended MIG size even when the autoscaling policy mode is different from ON. This field is empty when autoscaler is not connected to an existing managed instance group or autoscaler did not generate its prediction.
    ScalingScheduleStatus Dictionary<string, string>
    Status information of existing scaling schedules.
    SelfLink string
    Server-defined URL for the resource.
    Status string
    The status of the autoscaler configuration. Current set of possible values: - PENDING: Autoscaler backend hasn't read new/updated configuration. - DELETING: Configuration is being deleted. - ACTIVE: Configuration is acknowledged to be effective. Some warnings might be present in the statusDetails field. - ERROR: Configuration has errors. Actionable for users. Details are present in the statusDetails field. New values might be added in the future.
    StatusDetails List<Pulumi.GoogleNative.Compute.Beta.Outputs.AutoscalerStatusDetailsResponse>
    Human-readable details about the current state of the autoscaler. Read the documentation for Commonly returned status messages for examples of status messages you might encounter.
    Zone string
    URL of the zone where the instance group resides (for autoscalers living in zonal scope).
    CreationTimestamp string
    Creation timestamp in RFC3339 text format.
    Id string
    The provider-assigned unique ID for this managed resource.
    Kind string
    Type of the resource. Always compute#autoscaler for autoscalers.
    RecommendedSize int
    Target recommended MIG size (number of instances) computed by autoscaler. Autoscaler calculates the recommended MIG size even when the autoscaling policy mode is different from ON. This field is empty when autoscaler is not connected to an existing managed instance group or autoscaler did not generate its prediction.
    ScalingScheduleStatus map[string]string
    Status information of existing scaling schedules.
    SelfLink string
    Server-defined URL for the resource.
    Status string
    The status of the autoscaler configuration. Current set of possible values: - PENDING: Autoscaler backend hasn't read new/updated configuration. - DELETING: Configuration is being deleted. - ACTIVE: Configuration is acknowledged to be effective. Some warnings might be present in the statusDetails field. - ERROR: Configuration has errors. Actionable for users. Details are present in the statusDetails field. New values might be added in the future.
    StatusDetails []AutoscalerStatusDetailsResponse
    Human-readable details about the current state of the autoscaler. Read the documentation for Commonly returned status messages for examples of status messages you might encounter.
    Zone string
    URL of the zone where the instance group resides (for autoscalers living in zonal scope).
    creationTimestamp String
    Creation timestamp in RFC3339 text format.
    id String
    The provider-assigned unique ID for this managed resource.
    kind String
    Type of the resource. Always compute#autoscaler for autoscalers.
    recommendedSize Integer
    Target recommended MIG size (number of instances) computed by autoscaler. Autoscaler calculates the recommended MIG size even when the autoscaling policy mode is different from ON. This field is empty when autoscaler is not connected to an existing managed instance group or autoscaler did not generate its prediction.
    scalingScheduleStatus Map<String,String>
    Status information of existing scaling schedules.
    selfLink String
    Server-defined URL for the resource.
    status String
    The status of the autoscaler configuration. Current set of possible values: - PENDING: Autoscaler backend hasn't read new/updated configuration. - DELETING: Configuration is being deleted. - ACTIVE: Configuration is acknowledged to be effective. Some warnings might be present in the statusDetails field. - ERROR: Configuration has errors. Actionable for users. Details are present in the statusDetails field. New values might be added in the future.
    statusDetails List<AutoscalerStatusDetailsResponse>
    Human-readable details about the current state of the autoscaler. Read the documentation for Commonly returned status messages for examples of status messages you might encounter.
    zone String
    URL of the zone where the instance group resides (for autoscalers living in zonal scope).
    creationTimestamp string
    Creation timestamp in RFC3339 text format.
    id string
    The provider-assigned unique ID for this managed resource.
    kind string
    Type of the resource. Always compute#autoscaler for autoscalers.
    recommendedSize number
    Target recommended MIG size (number of instances) computed by autoscaler. Autoscaler calculates the recommended MIG size even when the autoscaling policy mode is different from ON. This field is empty when autoscaler is not connected to an existing managed instance group or autoscaler did not generate its prediction.
    scalingScheduleStatus {[key: string]: string}
    Status information of existing scaling schedules.
    selfLink string
    Server-defined URL for the resource.
    status string
    The status of the autoscaler configuration. Current set of possible values: - PENDING: Autoscaler backend hasn't read new/updated configuration. - DELETING: Configuration is being deleted. - ACTIVE: Configuration is acknowledged to be effective. Some warnings might be present in the statusDetails field. - ERROR: Configuration has errors. Actionable for users. Details are present in the statusDetails field. New values might be added in the future.
    statusDetails AutoscalerStatusDetailsResponse[]
    Human-readable details about the current state of the autoscaler. Read the documentation for Commonly returned status messages for examples of status messages you might encounter.
    zone string
    URL of the zone where the instance group resides (for autoscalers living in zonal scope).
    creation_timestamp str
    Creation timestamp in RFC3339 text format.
    id str
    The provider-assigned unique ID for this managed resource.
    kind str
    Type of the resource. Always compute#autoscaler for autoscalers.
    recommended_size int
    Target recommended MIG size (number of instances) computed by autoscaler. Autoscaler calculates the recommended MIG size even when the autoscaling policy mode is different from ON. This field is empty when autoscaler is not connected to an existing managed instance group or autoscaler did not generate its prediction.
    scaling_schedule_status Mapping[str, str]
    Status information of existing scaling schedules.
    self_link str
    Server-defined URL for the resource.
    status str
    The status of the autoscaler configuration. Current set of possible values: - PENDING: Autoscaler backend hasn't read new/updated configuration. - DELETING: Configuration is being deleted. - ACTIVE: Configuration is acknowledged to be effective. Some warnings might be present in the statusDetails field. - ERROR: Configuration has errors. Actionable for users. Details are present in the statusDetails field. New values might be added in the future.
    status_details Sequence[AutoscalerStatusDetailsResponse]
    Human-readable details about the current state of the autoscaler. Read the documentation for Commonly returned status messages for examples of status messages you might encounter.
    zone str
    URL of the zone where the instance group resides (for autoscalers living in zonal scope).
    creationTimestamp String
    Creation timestamp in RFC3339 text format.
    id String
    The provider-assigned unique ID for this managed resource.
    kind String
    Type of the resource. Always compute#autoscaler for autoscalers.
    recommendedSize Number
    Target recommended MIG size (number of instances) computed by autoscaler. Autoscaler calculates the recommended MIG size even when the autoscaling policy mode is different from ON. This field is empty when autoscaler is not connected to an existing managed instance group or autoscaler did not generate its prediction.
    scalingScheduleStatus Map<String>
    Status information of existing scaling schedules.
    selfLink String
    Server-defined URL for the resource.
    status String
    The status of the autoscaler configuration. Current set of possible values: - PENDING: Autoscaler backend hasn't read new/updated configuration. - DELETING: Configuration is being deleted. - ACTIVE: Configuration is acknowledged to be effective. Some warnings might be present in the statusDetails field. - ERROR: Configuration has errors. Actionable for users. Details are present in the statusDetails field. New values might be added in the future.
    statusDetails List<Property Map>
    Human-readable details about the current state of the autoscaler. Read the documentation for Commonly returned status messages for examples of status messages you might encounter.
    zone String
    URL of the zone where the instance group resides (for autoscalers living in zonal scope).
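
    For example, output properties such as selfLink, status, and recommendedSize can be exported from the stack once the resource is created (a hedged TypeScript sketch that assumes the autoscaler constant from the earlier sketch is in scope):

    // Assumes the `autoscaler` constant declared in the earlier sketch.
    export const selfLink = autoscaler.selfLink;               // server-defined URL for the resource
    export const autoscalerStatus = autoscaler.status;         // e.g. PENDING, ACTIVE, ERROR
    export const recommendedSize = autoscaler.recommendedSize; // autoscaler-computed target MIG size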

    Supporting Types

    AutoscalerStatusDetailsResponse, AutoscalerStatusDetailsResponseArgs

    Message string
    The status message.
    Type string
    The type of error, warning, or notice returned. Current set of possible values: - ALL_INSTANCES_UNHEALTHY (WARNING): All instances in the instance group are unhealthy (not in RUNNING state). - BACKEND_SERVICE_DOES_NOT_EXIST (ERROR): There is no backend service attached to the instance group. - CAPPED_AT_MAX_NUM_REPLICAS (WARNING): Autoscaler recommends a size greater than maxNumReplicas. - CUSTOM_METRIC_DATA_POINTS_TOO_SPARSE (WARNING): The custom metric samples are not exported often enough to be a credible base for autoscaling. - CUSTOM_METRIC_INVALID (ERROR): The custom metric that was specified does not exist or does not have the necessary labels. - MIN_EQUALS_MAX (WARNING): The minNumReplicas is equal to maxNumReplicas. This means the autoscaler cannot add or remove instances from the instance group. - MISSING_CUSTOM_METRIC_DATA_POINTS (WARNING): The autoscaler did not receive any data from the custom metric configured for autoscaling. - MISSING_LOAD_BALANCING_DATA_POINTS (WARNING): The autoscaler is configured to scale based on a load balancing signal but the instance group has not received any requests from the load balancer. - MODE_OFF (WARNING): Autoscaling is turned off. The number of instances in the group won't change automatically. The autoscaling configuration is preserved. - MODE_ONLY_UP (WARNING): Autoscaling is in the "Autoscale only out" mode. The autoscaler can add instances but not remove any. - MORE_THAN_ONE_BACKEND_SERVICE (ERROR): The instance group cannot be autoscaled because it has more than one backend service attached to it. - NOT_ENOUGH_QUOTA_AVAILABLE (ERROR): There is insufficient quota for the necessary resources, such as CPU or number of instances. - REGION_RESOURCE_STOCKOUT (ERROR): Shown only for regional autoscalers: there is a resource stockout in the chosen region. - SCALING_TARGET_DOES_NOT_EXIST (ERROR): The target to be scaled does not exist. - UNSUPPORTED_MAX_RATE_LOAD_BALANCING_CONFIGURATION (ERROR): Autoscaling does not work with an HTTP/S load balancer that has been configured for maxRate. - ZONE_RESOURCE_STOCKOUT (ERROR): For zonal autoscalers: there is a resource stockout in the chosen zone. For regional autoscalers: in at least one of the zones you're using there is a resource stockout. New values might be added in the future. Some of the values might not be available in all API versions.
    Message string
    The status message.
    Type string
    The type of error, warning, or notice returned. Current set of possible values: - ALL_INSTANCES_UNHEALTHY (WARNING): All instances in the instance group are unhealthy (not in RUNNING state). - BACKEND_SERVICE_DOES_NOT_EXIST (ERROR): There is no backend service attached to the instance group. - CAPPED_AT_MAX_NUM_REPLICAS (WARNING): Autoscaler recommends a size greater than maxNumReplicas. - CUSTOM_METRIC_DATA_POINTS_TOO_SPARSE (WARNING): The custom metric samples are not exported often enough to be a credible base for autoscaling. - CUSTOM_METRIC_INVALID (ERROR): The custom metric that was specified does not exist or does not have the necessary labels. - MIN_EQUALS_MAX (WARNING): The minNumReplicas is equal to maxNumReplicas. This means the autoscaler cannot add or remove instances from the instance group. - MISSING_CUSTOM_METRIC_DATA_POINTS (WARNING): The autoscaler did not receive any data from the custom metric configured for autoscaling. - MISSING_LOAD_BALANCING_DATA_POINTS (WARNING): The autoscaler is configured to scale based on a load balancing signal but the instance group has not received any requests from the load balancer. - MODE_OFF (WARNING): Autoscaling is turned off. The number of instances in the group won't change automatically. The autoscaling configuration is preserved. - MODE_ONLY_UP (WARNING): Autoscaling is in the "Autoscale only out" mode. The autoscaler can add instances but not remove any. - MORE_THAN_ONE_BACKEND_SERVICE (ERROR): The instance group cannot be autoscaled because it has more than one backend service attached to it. - NOT_ENOUGH_QUOTA_AVAILABLE (ERROR): There is insufficient quota for the necessary resources, such as CPU or number of instances. - REGION_RESOURCE_STOCKOUT (ERROR): Shown only for regional autoscalers: there is a resource stockout in the chosen region. - SCALING_TARGET_DOES_NOT_EXIST (ERROR): The target to be scaled does not exist. - UNSUPPORTED_MAX_RATE_LOAD_BALANCING_CONFIGURATION (ERROR): Autoscaling does not work with an HTTP/S load balancer that has been configured for maxRate. - ZONE_RESOURCE_STOCKOUT (ERROR): For zonal autoscalers: there is a resource stockout in the chosen zone. For regional autoscalers: in at least one of the zones you're using there is a resource stockout. New values might be added in the future. Some of the values might not be available in all API versions.
    message String
    The status message.
    type String
    The type of error, warning, or notice returned. Current set of possible values: - ALL_INSTANCES_UNHEALTHY (WARNING): All instances in the instance group are unhealthy (not in RUNNING state). - BACKEND_SERVICE_DOES_NOT_EXIST (ERROR): There is no backend service attached to the instance group. - CAPPED_AT_MAX_NUM_REPLICAS (WARNING): Autoscaler recommends a size greater than maxNumReplicas. - CUSTOM_METRIC_DATA_POINTS_TOO_SPARSE (WARNING): The custom metric samples are not exported often enough to be a credible base for autoscaling. - CUSTOM_METRIC_INVALID (ERROR): The custom metric that was specified does not exist or does not have the necessary labels. - MIN_EQUALS_MAX (WARNING): The minNumReplicas is equal to maxNumReplicas. This means the autoscaler cannot add or remove instances from the instance group. - MISSING_CUSTOM_METRIC_DATA_POINTS (WARNING): The autoscaler did not receive any data from the custom metric configured for autoscaling. - MISSING_LOAD_BALANCING_DATA_POINTS (WARNING): The autoscaler is configured to scale based on a load balancing signal but the instance group has not received any requests from the load balancer. - MODE_OFF (WARNING): Autoscaling is turned off. The number of instances in the group won't change automatically. The autoscaling configuration is preserved. - MODE_ONLY_UP (WARNING): Autoscaling is in the "Autoscale only out" mode. The autoscaler can add instances but not remove any. - MORE_THAN_ONE_BACKEND_SERVICE (ERROR): The instance group cannot be autoscaled because it has more than one backend service attached to it. - NOT_ENOUGH_QUOTA_AVAILABLE (ERROR): There is insufficient quota for the necessary resources, such as CPU or number of instances. - REGION_RESOURCE_STOCKOUT (ERROR): Shown only for regional autoscalers: there is a resource stockout in the chosen region. - SCALING_TARGET_DOES_NOT_EXIST (ERROR): The target to be scaled does not exist. - UNSUPPORTED_MAX_RATE_LOAD_BALANCING_CONFIGURATION (ERROR): Autoscaling does not work with an HTTP/S load balancer that has been configured for maxRate. - ZONE_RESOURCE_STOCKOUT (ERROR): For zonal autoscalers: there is a resource stockout in the chosen zone. For regional autoscalers: in at least one of the zones you're using there is a resource stockout. New values might be added in the future. Some of the values might not be available in all API versions.
    message string
    The status message.
    type string
    The type of error, warning, or notice returned. Current set of possible values: - ALL_INSTANCES_UNHEALTHY (WARNING): All instances in the instance group are unhealthy (not in RUNNING state). - BACKEND_SERVICE_DOES_NOT_EXIST (ERROR): There is no backend service attached to the instance group. - CAPPED_AT_MAX_NUM_REPLICAS (WARNING): Autoscaler recommends a size greater than maxNumReplicas. - CUSTOM_METRIC_DATA_POINTS_TOO_SPARSE (WARNING): The custom metric samples are not exported often enough to be a credible base for autoscaling. - CUSTOM_METRIC_INVALID (ERROR): The custom metric that was specified does not exist or does not have the necessary labels. - MIN_EQUALS_MAX (WARNING): The minNumReplicas is equal to maxNumReplicas. This means the autoscaler cannot add or remove instances from the instance group. - MISSING_CUSTOM_METRIC_DATA_POINTS (WARNING): The autoscaler did not receive any data from the custom metric configured for autoscaling. - MISSING_LOAD_BALANCING_DATA_POINTS (WARNING): The autoscaler is configured to scale based on a load balancing signal but the instance group has not received any requests from the load balancer. - MODE_OFF (WARNING): Autoscaling is turned off. The number of instances in the group won't change automatically. The autoscaling configuration is preserved. - MODE_ONLY_UP (WARNING): Autoscaling is in the "Autoscale only out" mode. The autoscaler can add instances but not remove any. - MORE_THAN_ONE_BACKEND_SERVICE (ERROR): The instance group cannot be autoscaled because it has more than one backend service attached to it. - NOT_ENOUGH_QUOTA_AVAILABLE (ERROR): There is insufficient quota for the necessary resources, such as CPU or number of instances. - REGION_RESOURCE_STOCKOUT (ERROR): Shown only for regional autoscalers: there is a resource stockout in the chosen region. - SCALING_TARGET_DOES_NOT_EXIST (ERROR): The target to be scaled does not exist. - UNSUPPORTED_MAX_RATE_LOAD_BALANCING_CONFIGURATION (ERROR): Autoscaling does not work with an HTTP/S load balancer that has been configured for maxRate. - ZONE_RESOURCE_STOCKOUT (ERROR): For zonal autoscalers: there is a resource stockout in the chosen zone. For regional autoscalers: in at least one of the zones you're using there is a resource stockout. New values might be added in the future. Some of the values might not be available in all API versions.
    message str
    The status message.
    type str
    The type of error, warning, or notice returned. Current set of possible values: - ALL_INSTANCES_UNHEALTHY (WARNING): All instances in the instance group are unhealthy (not in RUNNING state). - BACKEND_SERVICE_DOES_NOT_EXIST (ERROR): There is no backend service attached to the instance group. - CAPPED_AT_MAX_NUM_REPLICAS (WARNING): Autoscaler recommends a size greater than maxNumReplicas. - CUSTOM_METRIC_DATA_POINTS_TOO_SPARSE (WARNING): The custom metric samples are not exported often enough to be a credible base for autoscaling. - CUSTOM_METRIC_INVALID (ERROR): The custom metric that was specified does not exist or does not have the necessary labels. - MIN_EQUALS_MAX (WARNING): The minNumReplicas is equal to maxNumReplicas. This means the autoscaler cannot add or remove instances from the instance group. - MISSING_CUSTOM_METRIC_DATA_POINTS (WARNING): The autoscaler did not receive any data from the custom metric configured for autoscaling. - MISSING_LOAD_BALANCING_DATA_POINTS (WARNING): The autoscaler is configured to scale based on a load balancing signal but the instance group has not received any requests from the load balancer. - MODE_OFF (WARNING): Autoscaling is turned off. The number of instances in the group won't change automatically. The autoscaling configuration is preserved. - MODE_ONLY_UP (WARNING): Autoscaling is in the "Autoscale only out" mode. The autoscaler can add instances but not remove any. - MORE_THAN_ONE_BACKEND_SERVICE (ERROR): The instance group cannot be autoscaled because it has more than one backend service attached to it. - NOT_ENOUGH_QUOTA_AVAILABLE (ERROR): There is insufficient quota for the necessary resources, such as CPU or number of instances. - REGION_RESOURCE_STOCKOUT (ERROR): Shown only for regional autoscalers: there is a resource stockout in the chosen region. - SCALING_TARGET_DOES_NOT_EXIST (ERROR): The target to be scaled does not exist. - UNSUPPORTED_MAX_RATE_LOAD_BALANCING_CONFIGURATION (ERROR): Autoscaling does not work with an HTTP/S load balancer that has been configured for maxRate. - ZONE_RESOURCE_STOCKOUT (ERROR): For zonal autoscalers: there is a resource stockout in the chosen zone. For regional autoscalers: in at least one of the zones you're using there is a resource stockout. New values might be added in the future. Some of the values might not be available in all API versions.
    message String
    The status message.
    type String
    The type of error, warning, or notice returned. Current set of possible values: - ALL_INSTANCES_UNHEALTHY (WARNING): All instances in the instance group are unhealthy (not in RUNNING state). - BACKEND_SERVICE_DOES_NOT_EXIST (ERROR): There is no backend service attached to the instance group. - CAPPED_AT_MAX_NUM_REPLICAS (WARNING): Autoscaler recommends a size greater than maxNumReplicas. - CUSTOM_METRIC_DATA_POINTS_TOO_SPARSE (WARNING): The custom metric samples are not exported often enough to be a credible base for autoscaling. - CUSTOM_METRIC_INVALID (ERROR): The custom metric that was specified does not exist or does not have the necessary labels. - MIN_EQUALS_MAX (WARNING): The minNumReplicas is equal to maxNumReplicas. This means the autoscaler cannot add or remove instances from the instance group. - MISSING_CUSTOM_METRIC_DATA_POINTS (WARNING): The autoscaler did not receive any data from the custom metric configured for autoscaling. - MISSING_LOAD_BALANCING_DATA_POINTS (WARNING): The autoscaler is configured to scale based on a load balancing signal but the instance group has not received any requests from the load balancer. - MODE_OFF (WARNING): Autoscaling is turned off. The number of instances in the group won't change automatically. The autoscaling configuration is preserved. - MODE_ONLY_UP (WARNING): Autoscaling is in the "Autoscale only out" mode. The autoscaler can add instances but not remove any. - MORE_THAN_ONE_BACKEND_SERVICE (ERROR): The instance group cannot be autoscaled because it has more than one backend service attached to it. - NOT_ENOUGH_QUOTA_AVAILABLE (ERROR): There is insufficient quota for the necessary resources, such as CPU or number of instances. - REGION_RESOURCE_STOCKOUT (ERROR): Shown only for regional autoscalers: there is a resource stockout in the chosen region. - SCALING_TARGET_DOES_NOT_EXIST (ERROR): The target to be scaled does not exist. - UNSUPPORTED_MAX_RATE_LOAD_BALANCING_CONFIGURATION (ERROR): Autoscaling does not work with an HTTP/S load balancer that has been configured for maxRate. - ZONE_RESOURCE_STOCKOUT (ERROR): For zonal autoscalers: there is a resource stockout in the chosen zone. For regional autoscalers: in at least one of the zones you're using there is a resource stockout. New values might be added in the future. Some of the values might not be available in all API versions.

    AutoscalingPolicy, AutoscalingPolicyArgs

    CoolDownPeriodSec int
    The number of seconds that your application takes to initialize on a VM instance. This is referred to as the initialization period. Specifying an accurate initialization period improves autoscaler decisions. For example, when scaling out, the autoscaler ignores data from VMs that are still initializing because those VMs might not yet represent normal usage of your application. The default initialization period is 60 seconds. Initialization periods might vary because of numerous factors. We recommend that you test how long your application takes to initialize. To do this, create a VM and time your application's startup process.
    CpuUtilization Pulumi.GoogleNative.Compute.Beta.Inputs.AutoscalingPolicyCpuUtilization
    Defines the CPU utilization policy that allows the autoscaler to scale based on the average CPU utilization of a managed instance group.
    CustomMetricUtilizations List<Pulumi.GoogleNative.Compute.Beta.Inputs.AutoscalingPolicyCustomMetricUtilization>
    Configuration parameters of autoscaling based on a custom metric.
    LoadBalancingUtilization Pulumi.GoogleNative.Compute.Beta.Inputs.AutoscalingPolicyLoadBalancingUtilization
    Configuration parameters of autoscaling based on load balancer.
    MaxNumReplicas int
    The maximum number of instances that the autoscaler can scale out to. This is required when creating or updating an autoscaler. The maximum number of replicas must not be lower than the minimum number of replicas.
    MinNumReplicas int
    The minimum number of replicas that the autoscaler can scale in to. This cannot be less than 0. If not provided, autoscaler chooses a default value depending on maximum number of instances allowed.
    Mode Pulumi.GoogleNative.Compute.Beta.AutoscalingPolicyMode
    Defines the operating mode for this policy. The following modes are available: - OFF: Disables the autoscaler but maintains its configuration. - ONLY_SCALE_OUT: Restricts the autoscaler to add VM instances only. - ON: Enables all autoscaler activities according to its policy. For more information, see "Turning off or restricting an autoscaler"
    ScaleDownControl Pulumi.GoogleNative.Compute.Beta.Inputs.AutoscalingPolicyScaleDownControl
    ScaleInControl Pulumi.GoogleNative.Compute.Beta.Inputs.AutoscalingPolicyScaleInControl
    ScalingSchedules Dictionary<string, string>
    Scaling schedules defined for an autoscaler. Multiple schedules can be set on an autoscaler, and they can overlap. During overlapping periods the greatest min_required_replicas of all scaling schedules is applied. Up to 128 scaling schedules are allowed.
    CoolDownPeriodSec int
    The number of seconds that your application takes to initialize on a VM instance. This is referred to as the initialization period. Specifying an accurate initialization period improves autoscaler decisions. For example, when scaling out, the autoscaler ignores data from VMs that are still initializing because those VMs might not yet represent normal usage of your application. The default initialization period is 60 seconds. Initialization periods might vary because of numerous factors. We recommend that you test how long your application takes to initialize. To do this, create a VM and time your application's startup process.
    CpuUtilization AutoscalingPolicyCpuUtilization
    Defines the CPU utilization policy that allows the autoscaler to scale based on the average CPU utilization of a managed instance group.
    CustomMetricUtilizations []AutoscalingPolicyCustomMetricUtilization
    Configuration parameters of autoscaling based on a custom metric.
    LoadBalancingUtilization AutoscalingPolicyLoadBalancingUtilization
    Configuration parameters of autoscaling based on load balancer.
    MaxNumReplicas int
    The maximum number of instances that the autoscaler can scale out to. This is required when creating or updating an autoscaler. The maximum number of replicas must not be lower than the minimum number of replicas.
    MinNumReplicas int
    The minimum number of replicas that the autoscaler can scale in to. This cannot be less than 0. If not provided, autoscaler chooses a default value depending on maximum number of instances allowed.
    Mode AutoscalingPolicyMode
    Defines the operating mode for this policy. The following modes are available: - OFF: Disables the autoscaler but maintains its configuration. - ONLY_SCALE_OUT: Restricts the autoscaler to add VM instances only. - ON: Enables all autoscaler activities according to its policy. For more information, see "Turning off or restricting an autoscaler"
    ScaleDownControl AutoscalingPolicyScaleDownControl
    ScaleInControl AutoscalingPolicyScaleInControl
    ScalingSchedules map[string]string
    Scaling schedules defined for an autoscaler. Multiple schedules can be set on an autoscaler, and they can overlap. During overlapping periods the greatest min_required_replicas of all scaling schedules is applied. Up to 128 scaling schedules are allowed.
    coolDownPeriodSec Integer
    The number of seconds that your application takes to initialize on a VM instance. This is referred to as the initialization period. Specifying an accurate initialization period improves autoscaler decisions. For example, when scaling out, the autoscaler ignores data from VMs that are still initializing because those VMs might not yet represent normal usage of your application. The default initialization period is 60 seconds. Initialization periods might vary because of numerous factors. We recommend that you test how long your application takes to initialize. To do this, create a VM and time your application's startup process.
    cpuUtilization AutoscalingPolicyCpuUtilization
    Defines the CPU utilization policy that allows the autoscaler to scale based on the average CPU utilization of a managed instance group.
    customMetricUtilizations List<AutoscalingPolicyCustomMetricUtilization>
    Configuration parameters of autoscaling based on a custom metric.
    loadBalancingUtilization AutoscalingPolicyLoadBalancingUtilization
    Configuration parameters of autoscaling based on load balancer.
    maxNumReplicas Integer
    The maximum number of instances that the autoscaler can scale out to. This is required when creating or updating an autoscaler. The maximum number of replicas must not be lower than the minimum number of replicas.
    minNumReplicas Integer
    The minimum number of replicas that the autoscaler can scale in to. This cannot be less than 0. If not provided, autoscaler chooses a default value depending on maximum number of instances allowed.
    mode AutoscalingPolicyMode
    Defines the operating mode for this policy. The following modes are available: - OFF: Disables the autoscaler but maintains its configuration. - ONLY_SCALE_OUT: Restricts the autoscaler to add VM instances only. - ON: Enables all autoscaler activities according to its policy. For more information, see "Turning off or restricting an autoscaler"
    scaleDownControl AutoscalingPolicyScaleDownControl
    scaleInControl AutoscalingPolicyScaleInControl
    scalingSchedules Map<String,String>
    Scaling schedules defined for an autoscaler. Multiple schedules can be set on an autoscaler, and they can overlap. During overlapping periods the greatest min_required_replicas of all scaling schedules is applied. Up to 128 scaling schedules are allowed.
    coolDownPeriodSec number
    The number of seconds that your application takes to initialize on a VM instance. This is referred to as the initialization period. Specifying an accurate initialization period improves autoscaler decisions. For example, when scaling out, the autoscaler ignores data from VMs that are still initializing because those VMs might not yet represent normal usage of your application. The default initialization period is 60 seconds. Initialization periods might vary because of numerous factors. We recommend that you test how long your application takes to initialize. To do this, create a VM and time your application's startup process.
    cpuUtilization AutoscalingPolicyCpuUtilization
    Defines the CPU utilization policy that allows the autoscaler to scale based on the average CPU utilization of a managed instance group.
    customMetricUtilizations AutoscalingPolicyCustomMetricUtilization[]
    Configuration parameters of autoscaling based on a custom metric.
    loadBalancingUtilization AutoscalingPolicyLoadBalancingUtilization
    Configuration parameters of autoscaling based on load balancer.
    maxNumReplicas number
    The maximum number of instances that the autoscaler can scale out to. This is required when creating or updating an autoscaler. The maximum number of replicas must not be lower than the minimum number of replicas.
    minNumReplicas number
    The minimum number of replicas that the autoscaler can scale in to. This cannot be less than 0. If not provided, the autoscaler chooses a default value depending on the maximum number of instances allowed.
    mode AutoscalingPolicyMode
    Defines the operating mode for this policy. The following modes are available: - OFF: Disables the autoscaler but maintains its configuration. - ONLY_SCALE_OUT: Restricts the autoscaler to add VM instances only. - ON: Enables all autoscaler activities according to its policy. For more information, see "Turning off or restricting an autoscaler"
    scaleDownControl AutoscalingPolicyScaleDownControl
    scaleInControl AutoscalingPolicyScaleInControl
    scalingSchedules {[key: string]: string}
    Scaling schedules defined for an autoscaler. Multiple schedules can be set on an autoscaler, and they can overlap. During overlapping periods the greatest min_required_replicas of all scaling schedules is applied. Up to 128 scaling schedules are allowed.
    cool_down_period_sec int
    The number of seconds that your application takes to initialize on a VM instance. This is referred to as the initialization period. Specifying an accurate initialization period improves autoscaler decisions. For example, when scaling out, the autoscaler ignores data from VMs that are still initializing because those VMs might not yet represent normal usage of your application. The default initialization period is 60 seconds. Initialization periods might vary because of numerous factors. We recommend that you test how long your application takes to initialize. To do this, create a VM and time your application's startup process.
    cpu_utilization AutoscalingPolicyCpuUtilization
    Defines the CPU utilization policy that allows the autoscaler to scale based on the average CPU utilization of a managed instance group.
    custom_metric_utilizations Sequence[AutoscalingPolicyCustomMetricUtilization]
    Configuration parameters of autoscaling based on a custom metric.
    load_balancing_utilization AutoscalingPolicyLoadBalancingUtilization
    Configuration parameters of autoscaling based on load balancer.
    max_num_replicas int
    The maximum number of instances that the autoscaler can scale out to. This is required when creating or updating an autoscaler. The maximum number of replicas must not be lower than the minimum number of replicas.
    min_num_replicas int
    The minimum number of replicas that the autoscaler can scale in to. This cannot be less than 0. If not provided, the autoscaler chooses a default value depending on the maximum number of instances allowed.
    mode AutoscalingPolicyMode
    Defines the operating mode for this policy. The following modes are available: - OFF: Disables the autoscaler but maintains its configuration. - ONLY_SCALE_OUT: Restricts the autoscaler to add VM instances only. - ON: Enables all autoscaler activities according to its policy. For more information, see "Turning off or restricting an autoscaler"
    scale_down_control AutoscalingPolicyScaleDownControl
    scale_in_control AutoscalingPolicyScaleInControl
    scaling_schedules Mapping[str, str]
    Scaling schedules defined for an autoscaler. Multiple schedules can be set on an autoscaler, and they can overlap. During overlapping periods the greatest min_required_replicas of all scaling schedules is applied. Up to 128 scaling schedules are allowed.
    coolDownPeriodSec Number
    The number of seconds that your application takes to initialize on a VM instance. This is referred to as the initialization period. Specifying an accurate initialization period improves autoscaler decisions. For example, when scaling out, the autoscaler ignores data from VMs that are still initializing because those VMs might not yet represent normal usage of your application. The default initialization period is 60 seconds. Initialization periods might vary because of numerous factors. We recommend that you test how long your application takes to initialize. To do this, create a VM and time your application's startup process.
    cpuUtilization Property Map
    Defines the CPU utilization policy that allows the autoscaler to scale based on the average CPU utilization of a managed instance group.
    customMetricUtilizations List<Property Map>
    Configuration parameters of autoscaling based on a custom metric.
    loadBalancingUtilization Property Map
    Configuration parameters of autoscaling based on load balancer.
    maxNumReplicas Number
    The maximum number of instances that the autoscaler can scale out to. This is required when creating or updating an autoscaler. The maximum number of replicas must not be lower than the minimum number of replicas.
    minNumReplicas Number
    The minimum number of replicas that the autoscaler can scale in to. This cannot be less than 0. If not provided, the autoscaler chooses a default value depending on the maximum number of instances allowed.
    mode "OFF" | "ON" | "ONLY_SCALE_OUT" | "ONLY_UP"
    Defines the operating mode for this policy. The following modes are available: - OFF: Disables the autoscaler but maintains its configuration. - ONLY_SCALE_OUT: Restricts the autoscaler to add VM instances only. - ON: Enables all autoscaler activities according to its policy. For more information, see "Turning off or restricting an autoscaler"
    scaleDownControl Property Map
    scaleInControl Property Map
    scalingSchedules Map<String>
    Scaling schedules defined for an autoscaler. Multiple schedules can be set on an autoscaler, and they can overlap. During overlapping periods the greatest min_required_replicas of all scaling schedules is applied. Up to 128 scaling schedules are allowed.
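
    For orientation, here is a minimal TypeScript sketch of how these autoscaling policy inputs might be wired into a RegionAutoscaler. The project, region, and instance group manager URL are placeholder values, and the snippet assumes the @pulumi/google-native Node.js SDK with the camelCase property names listed above.

    import * as google_native from "@pulumi/google-native";

    // Placeholder managed instance group URL (hypothetical project/region/MIG names).
    const targetMig =
        "https://www.googleapis.com/compute/beta/projects/my-project/regions/us-central1/instanceGroupManagers/my-mig";

    const autoscaler = new google_native.compute.beta.RegionAutoscaler("my-region-autoscaler", {
        project: "my-project",
        region: "us-central1",
        target: targetMig,
        autoscalingPolicy: {
            minNumReplicas: 2,        // lower bound the autoscaler can scale in to
            maxNumReplicas: 10,       // required; must not be lower than minNumReplicas
            coolDownPeriodSec: 90,    // measured initialization period for this app
            mode: "ON",               // string form of AutoscalingPolicyMode
            cpuUtilization: {
                utilizationTarget: 0.6,   // keep average CPU near 60%
            },
        },
    });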

    AutoscalingPolicyCpuUtilization, AutoscalingPolicyCpuUtilizationArgs

    PredictiveMethod Pulumi.GoogleNative.Compute.Beta.AutoscalingPolicyCpuUtilizationPredictiveMethod
    Indicates whether predictive autoscaling based on CPU metric is enabled. Valid values are: * NONE (default). No predictive method is used. The autoscaler scales the group to meet current demand based on real-time metrics. * OPTIMIZE_AVAILABILITY. Predictive autoscaling improves availability by monitoring daily and weekly load patterns and scaling out ahead of anticipated demand.
    UtilizationTarget double
    The target CPU utilization that the autoscaler maintains. Must be a float value in the range (0, 1]. If not specified, the default is 0.6. If the CPU level is below the target utilization, the autoscaler scales in the number of instances until it reaches the minimum number of instances you specified or until the average CPU of your instances reaches the target utilization. If the average CPU is above the target utilization, the autoscaler scales out until it reaches the maximum number of instances you specified or until the average utilization reaches the target utilization.
    PredictiveMethod AutoscalingPolicyCpuUtilizationPredictiveMethod
    Indicates whether predictive autoscaling based on CPU metric is enabled. Valid values are: * NONE (default). No predictive method is used. The autoscaler scales the group to meet current demand based on real-time metrics. * OPTIMIZE_AVAILABILITY. Predictive autoscaling improves availability by monitoring daily and weekly load patterns and scaling out ahead of anticipated demand.
    UtilizationTarget float64
    The target CPU utilization that the autoscaler maintains. Must be a float value in the range (0, 1]. If not specified, the default is 0.6. If the CPU level is below the target utilization, the autoscaler scales in the number of instances until it reaches the minimum number of instances you specified or until the average CPU of your instances reaches the target utilization. If the average CPU is above the target utilization, the autoscaler scales out until it reaches the maximum number of instances you specified or until the average utilization reaches the target utilization.
    predictiveMethod AutoscalingPolicyCpuUtilizationPredictiveMethod
    Indicates whether predictive autoscaling based on CPU metric is enabled. Valid values are: * NONE (default). No predictive method is used. The autoscaler scales the group to meet current demand based on real-time metrics. * OPTIMIZE_AVAILABILITY. Predictive autoscaling improves availability by monitoring daily and weekly load patterns and scaling out ahead of anticipated demand.
    utilizationTarget Double
    The target CPU utilization that the autoscaler maintains. Must be a float value in the range (0, 1]. If not specified, the default is 0.6. If the CPU level is below the target utilization, the autoscaler scales in the number of instances until it reaches the minimum number of instances you specified or until the average CPU of your instances reaches the target utilization. If the average CPU is above the target utilization, the autoscaler scales out until it reaches the maximum number of instances you specified or until the average utilization reaches the target utilization.
    predictiveMethod AutoscalingPolicyCpuUtilizationPredictiveMethod
    Indicates whether predictive autoscaling based on CPU metric is enabled. Valid values are: * NONE (default). No predictive method is used. The autoscaler scales the group to meet current demand based on real-time metrics. * OPTIMIZE_AVAILABILITY. Predictive autoscaling improves availability by monitoring daily and weekly load patterns and scaling out ahead of anticipated demand.
    utilizationTarget number
    The target CPU utilization that the autoscaler maintains. Must be a float value in the range (0, 1]. If not specified, the default is 0.6. If the CPU level is below the target utilization, the autoscaler scales in the number of instances until it reaches the minimum number of instances you specified or until the average CPU of your instances reaches the target utilization. If the average CPU is above the target utilization, the autoscaler scales out until it reaches the maximum number of instances you specified or until the average utilization reaches the target utilization.
    predictive_method AutoscalingPolicyCpuUtilizationPredictiveMethod
    Indicates whether predictive autoscaling based on CPU metric is enabled. Valid values are: * NONE (default). No predictive method is used. The autoscaler scales the group to meet current demand based on real-time metrics. * OPTIMIZE_AVAILABILITY. Predictive autoscaling improves availability by monitoring daily and weekly load patterns and scaling out ahead of anticipated demand.
    utilization_target float
    The target CPU utilization that the autoscaler maintains. Must be a float value in the range (0, 1]. If not specified, the default is 0.6. If the CPU level is below the target utilization, the autoscaler scales in the number of instances until it reaches the minimum number of instances you specified or until the average CPU of your instances reaches the target utilization. If the average CPU is above the target utilization, the autoscaler scales out until it reaches the maximum number of instances you specified or until the average utilization reaches the target utilization.
    predictiveMethod "NONE" | "OPTIMIZE_AVAILABILITY" | "PREDICTIVE_METHOD_UNSPECIFIED"
    Indicates whether predictive autoscaling based on CPU metric is enabled. Valid values are: * NONE (default). No predictive method is used. The autoscaler scales the group to meet current demand based on real-time metrics. * OPTIMIZE_AVAILABILITY. Predictive autoscaling improves availability by monitoring daily and weekly load patterns and scaling out ahead of anticipated demand.
    utilizationTarget Number
    The target CPU utilization that the autoscaler maintains. Must be a float value in the range (0, 1]. If not specified, the default is 0.6. If the CPU level is below the target utilization, the autoscaler scales in the number of instances until it reaches the minimum number of instances you specified or until the average CPU of your instances reaches the target utilization. If the average CPU is above the target utilization, the autoscaler scales out until it reaches the maximum number of instances you specified or until the average utilization reaches the target utilization.
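
    As a small illustration of the two inputs above, the fragment below (a sketch, not tied to a specific SDK type annotation) combines a target of 0.6 with the predictive method passed as its raw string value from the enum section that follows.

    // Plain input object for autoscalingPolicy.cpuUtilization; values are illustrative.
    const cpuUtilization = {
        utilizationTarget: 0.6,                    // float in (0, 1]; 0.6 matches the documented default
        predictiveMethod: "OPTIMIZE_AVAILABILITY", // pre-scale for daily/weekly load patterns
    };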

    AutoscalingPolicyCpuUtilizationPredictiveMethod, AutoscalingPolicyCpuUtilizationPredictiveMethodArgs

    None
    NONE - No predictive method is used. The autoscaler scales the group to meet current demand based on real-time metrics.
    OptimizeAvailability
    OPTIMIZE_AVAILABILITY - Predictive autoscaling improves availability by monitoring daily and weekly load patterns and scaling out ahead of anticipated demand.
    PredictiveMethodUnspecified
    PREDICTIVE_METHOD_UNSPECIFIED
    AutoscalingPolicyCpuUtilizationPredictiveMethodNone
    NONE - No predictive method is used. The autoscaler scales the group to meet current demand based on real-time metrics.
    AutoscalingPolicyCpuUtilizationPredictiveMethodOptimizeAvailability
    OPTIMIZE_AVAILABILITY - Predictive autoscaling improves availability by monitoring daily and weekly load patterns and scaling out ahead of anticipated demand.
    AutoscalingPolicyCpuUtilizationPredictiveMethodPredictiveMethodUnspecified
    PREDICTIVE_METHOD_UNSPECIFIED
    None
    NONE - No predictive method is used. The autoscaler scales the group to meet current demand based on real-time metrics.
    OptimizeAvailability
    OPTIMIZE_AVAILABILITY - Predictive autoscaling improves availability by monitoring daily and weekly load patterns and scaling out ahead of anticipated demand.
    PredictiveMethodUnspecified
    PREDICTIVE_METHOD_UNSPECIFIED
    None
    NONE - No predictive method is used. The autoscaler scales the group to meet current demand based on real-time metrics.
    OptimizeAvailability
    OPTIMIZE_AVAILABILITY - Predictive autoscaling improves availability by monitoring daily and weekly load patterns and scaling out ahead of anticipated demand.
    PredictiveMethodUnspecified
    PREDICTIVE_METHOD_UNSPECIFIED
    NONE
    NONE - No predictive method is used. The autoscaler scales the group to meet current demand based on real-time metrics.
    OPTIMIZE_AVAILABILITY
    OPTIMIZE_AVAILABILITY - Predictive autoscaling improves availability by monitoring daily and weekly load patterns and scaling out ahead of anticipated demand.
    PREDICTIVE_METHOD_UNSPECIFIED
    PREDICTIVE_METHOD_UNSPECIFIED
    "NONE"
    NONE - No predictive method is used. The autoscaler scales the group to meet current demand based on real-time metrics.
    "OPTIMIZE_AVAILABILITY"
    OPTIMIZE_AVAILABILITY - Predictive autoscaling improves availability by monitoring daily and weekly load patterns and scaling out ahead of anticipated demand.
    "PREDICTIVE_METHOD_UNSPECIFIED"
    PREDICTIVE_METHOD_UNSPECIFIED
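
    Depending on the SDK, the predictive method can usually be supplied either through the generated enum or as the raw API string listed above. The TypeScript lines below are a sketch; the enum export path google_native.compute.beta.AutoscalingPolicyCpuUtilizationPredictiveMethod is an assumption based on the member names shown here, while the string literal form mirrors the quoted values.

    import * as google_native from "@pulumi/google-native";

    // Both forms are assumed to resolve to the same API value, "OPTIMIZE_AVAILABILITY".
    const viaEnum =
        google_native.compute.beta.AutoscalingPolicyCpuUtilizationPredictiveMethod.OptimizeAvailability;
    const viaLiteral = "OPTIMIZE_AVAILABILITY";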

    AutoscalingPolicyCpuUtilizationResponse, AutoscalingPolicyCpuUtilizationResponseArgs

    PredictiveMethod string
    Indicates whether predictive autoscaling based on CPU metric is enabled. Valid values are: * NONE (default). No predictive method is used. The autoscaler scales the group to meet current demand based on real-time metrics. * OPTIMIZE_AVAILABILITY. Predictive autoscaling improves availability by monitoring daily and weekly load patterns and scaling out ahead of anticipated demand.
    UtilizationTarget double
    The target CPU utilization that the autoscaler maintains. Must be a float value in the range (0, 1]. If not specified, the default is 0.6. If the CPU level is below the target utilization, the autoscaler scales in the number of instances until it reaches the minimum number of instances you specified or until the average CPU of your instances reaches the target utilization. If the average CPU is above the target utilization, the autoscaler scales out until it reaches the maximum number of instances you specified or until the average utilization reaches the target utilization.
    PredictiveMethod string
    Indicates whether predictive autoscaling based on CPU metric is enabled. Valid values are: * NONE (default). No predictive method is used. The autoscaler scales the group to meet current demand based on real-time metrics. * OPTIMIZE_AVAILABILITY. Predictive autoscaling improves availability by monitoring daily and weekly load patterns and scaling out ahead of anticipated demand.
    UtilizationTarget float64
    The target CPU utilization that the autoscaler maintains. Must be a float value in the range (0, 1]. If not specified, the default is 0.6. If the CPU level is below the target utilization, the autoscaler scales in the number of instances until it reaches the minimum number of instances you specified or until the average CPU of your instances reaches the target utilization. If the average CPU is above the target utilization, the autoscaler scales out until it reaches the maximum number of instances you specified or until the average utilization reaches the target utilization.
    predictiveMethod String
    Indicates whether predictive autoscaling based on CPU metric is enabled. Valid values are: * NONE (default). No predictive method is used. The autoscaler scales the group to meet current demand based on real-time metrics. * OPTIMIZE_AVAILABILITY. Predictive autoscaling improves availability by monitoring daily and weekly load patterns and scaling out ahead of anticipated demand.
    utilizationTarget Double
    The target CPU utilization that the autoscaler maintains. Must be a float value in the range (0, 1]. If not specified, the default is 0.6. If the CPU level is below the target utilization, the autoscaler scales in the number of instances until it reaches the minimum number of instances you specified or until the average CPU of your instances reaches the target utilization. If the average CPU is above the target utilization, the autoscaler scales out until it reaches the maximum number of instances you specified or until the average utilization reaches the target utilization.
    predictiveMethod string
    Indicates whether predictive autoscaling based on CPU metric is enabled. Valid values are: * NONE (default). No predictive method is used. The autoscaler scales the group to meet current demand based on real-time metrics. * OPTIMIZE_AVAILABILITY. Predictive autoscaling improves availability by monitoring daily and weekly load patterns and scaling out ahead of anticipated demand.
    utilizationTarget number
    The target CPU utilization that the autoscaler maintains. Must be a float value in the range (0, 1]. If not specified, the default is 0.6. If the CPU level is below the target utilization, the autoscaler scales in the number of instances until it reaches the minimum number of instances you specified or until the average CPU of your instances reaches the target utilization. If the average CPU is above the target utilization, the autoscaler scales out until it reaches the maximum number of instances you specified or until the average utilization reaches the target utilization.
    predictive_method str
    Indicates whether predictive autoscaling based on CPU metric is enabled. Valid values are: * NONE (default). No predictive method is used. The autoscaler scales the group to meet current demand based on real-time metrics. * OPTIMIZE_AVAILABILITY. Predictive autoscaling improves availability by monitoring daily and weekly load patterns and scaling out ahead of anticipated demand.
    utilization_target float
    The target CPU utilization that the autoscaler maintains. Must be a float value in the range (0, 1]. If not specified, the default is 0.6. If the CPU level is below the target utilization, the autoscaler scales in the number of instances until it reaches the minimum number of instances you specified or until the average CPU of your instances reaches the target utilization. If the average CPU is above the target utilization, the autoscaler scales out until it reaches the maximum number of instances you specified or until the average utilization reaches the target utilization.
    predictiveMethod String
    Indicates whether predictive autoscaling based on CPU metric is enabled. Valid values are: * NONE (default). No predictive method is used. The autoscaler scales the group to meet current demand based on real-time metrics. * OPTIMIZE_AVAILABILITY. Predictive autoscaling improves availability by monitoring daily and weekly load patterns and scaling out ahead of anticipated demand.
    utilizationTarget Number
    The target CPU utilization that the autoscaler maintains. Must be a float value in the range (0, 1]. If not specified, the default is 0.6. If the CPU level is below the target utilization, the autoscaler scales in the number of instances until it reaches the minimum number of instances you specified or until the average CPU of your instances reaches the target utilization. If the average CPU is above the target utilization, the autoscaler scales out until it reaches the maximum number of instances you specified or until the average utilization reaches the target utilization.
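
    The Response variant above is output-only: it describes what you read back from a created autoscaler rather than what you configure. A hedged TypeScript sketch, assuming the resource exposes the resolved policy as an autoscalingPolicy output:

    import * as google_native from "@pulumi/google-native";

    // Surface the resolved CPU target from a RegionAutoscaler (for example, the one
    // created in the earlier sketch). Returns a Pulumi Output of the numeric target.
    function effectiveCpuTarget(autoscaler: google_native.compute.beta.RegionAutoscaler) {
        return autoscaler.autoscalingPolicy.apply(p => p.cpuUtilization?.utilizationTarget);
    }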

    AutoscalingPolicyCustomMetricUtilization, AutoscalingPolicyCustomMetricUtilizationArgs

    Filter string
    A filter string, compatible with a Stackdriver Monitoring filter string for TimeSeries.list API call. This filter is used to select a specific TimeSeries for the purpose of autoscaling and to determine whether the metric is exporting per-instance or per-group data. For the filter to be valid for autoscaling purposes, the following rules apply: - You can only use the AND operator for joining selectors. - You can only use direct equality comparison operator (=) without any functions for each selector. - You can specify the metric in both the filter string and in the metric field. However, if specified in both places, the metric must be identical. - The monitored resource type determines what kind of values are expected for the metric. If it is a gce_instance, the autoscaler expects the metric to include a separate TimeSeries for each instance in a group. In such a case, you cannot filter on resource labels. If the resource type is any other value, the autoscaler expects this metric to contain values that apply to the entire autoscaled instance group and resource label filtering can be performed to point autoscaler at the correct TimeSeries to scale upon. This is called a per-group metric for the purpose of autoscaling. If not specified, the type defaults to gce_instance. Try to provide a filter that is selective enough to pick just one TimeSeries for the autoscaled group or for each of the instances (if you are using gce_instance resource type). If multiple TimeSeries are returned upon the query execution, the autoscaler will sum their respective values to obtain its scaling value.
    Metric string
    The identifier (type) of the Stackdriver Monitoring metric. The metric cannot have negative values. The metric must have a value type of INT64 or DOUBLE.
    SingleInstanceAssignment double
    If scaling is based on a per-group metric value that represents the total amount of work to be done or resource usage, set this value to an amount assigned for a single instance of the scaled group. The autoscaler keeps the number of instances proportional to the value of this metric. The metric itself does not change value due to group resizing. A good metric to use with the target is, for example, pubsub.googleapis.com/subscription/num_undelivered_messages or a custom metric exporting the total number of requests coming to your instances. A bad example would be a metric exporting an average or median latency; since this value can't include a chunk assignable to a single instance, it could be better used with utilization_target instead.
    UtilizationTarget double
    The target value of the metric that autoscaler maintains. This must be a positive value. A utilization metric scales number of virtual machines handling requests to increase or decrease proportionally to the metric. For example, a good metric to use as a utilization_target is https://www.googleapis.com/compute/v1/instance/network/received_bytes_count. The autoscaler works to keep this value constant for each of the instances.
    UtilizationTargetType Pulumi.GoogleNative.Compute.Beta.AutoscalingPolicyCustomMetricUtilizationUtilizationTargetType
    Defines how target utilization value is expressed for a Stackdriver Monitoring metric. Either GAUGE, DELTA_PER_SECOND, or DELTA_PER_MINUTE.
    Filter string
    A filter string, compatible with a Stackdriver Monitoring filter string for TimeSeries.list API call. This filter is used to select a specific TimeSeries for the purpose of autoscaling and to determine whether the metric is exporting per-instance or per-group data. For the filter to be valid for autoscaling purposes, the following rules apply: - You can only use the AND operator for joining selectors. - You can only use direct equality comparison operator (=) without any functions for each selector. - You can specify the metric in both the filter string and in the metric field. However, if specified in both places, the metric must be identical. - The monitored resource type determines what kind of values are expected for the metric. If it is a gce_instance, the autoscaler expects the metric to include a separate TimeSeries for each instance in a group. In such a case, you cannot filter on resource labels. If the resource type is any other value, the autoscaler expects this metric to contain values that apply to the entire autoscaled instance group and resource label filtering can be performed to point autoscaler at the correct TimeSeries to scale upon. This is called a per-group metric for the purpose of autoscaling. If not specified, the type defaults to gce_instance. Try to provide a filter that is selective enough to pick just one TimeSeries for the autoscaled group or for each of the instances (if you are using gce_instance resource type). If multiple TimeSeries are returned upon the query execution, the autoscaler will sum their respective values to obtain its scaling value.
    Metric string
    The identifier (type) of the Stackdriver Monitoring metric. The metric cannot have negative values. The metric must have a value type of INT64 or DOUBLE.
    SingleInstanceAssignment float64
    If scaling is based on a per-group metric value that represents the total amount of work to be done or resource usage, set this value to an amount assigned for a single instance of the scaled group. The autoscaler keeps the number of instances proportional to the value of this metric. The metric itself does not change value due to group resizing. A good metric to use with the target is, for example, pubsub.googleapis.com/subscription/num_undelivered_messages or a custom metric exporting the total number of requests coming to your instances. A bad example would be a metric exporting an average or median latency; since this value can't include a chunk assignable to a single instance, it could be better used with utilization_target instead.
    UtilizationTarget float64
    The target value of the metric that autoscaler maintains. This must be a positive value. A utilization metric scales number of virtual machines handling requests to increase or decrease proportionally to the metric. For example, a good metric to use as a utilization_target is https://www.googleapis.com/compute/v1/instance/network/received_bytes_count. The autoscaler works to keep this value constant for each of the instances.
    UtilizationTargetType AutoscalingPolicyCustomMetricUtilizationUtilizationTargetType
    Defines how target utilization value is expressed for a Stackdriver Monitoring metric. Either GAUGE, DELTA_PER_SECOND, or DELTA_PER_MINUTE.
    filter String
    A filter string, compatible with a Stackdriver Monitoring filter string for TimeSeries.list API call. This filter is used to select a specific TimeSeries for the purpose of autoscaling and to determine whether the metric is exporting per-instance or per-group data. For the filter to be valid for autoscaling purposes, the following rules apply: - You can only use the AND operator for joining selectors. - You can only use direct equality comparison operator (=) without any functions for each selector. - You can specify the metric in both the filter string and in the metric field. However, if specified in both places, the metric must be identical. - The monitored resource type determines what kind of values are expected for the metric. If it is a gce_instance, the autoscaler expects the metric to include a separate TimeSeries for each instance in a group. In such a case, you cannot filter on resource labels. If the resource type is any other value, the autoscaler expects this metric to contain values that apply to the entire autoscaled instance group and resource label filtering can be performed to point autoscaler at the correct TimeSeries to scale upon. This is called a per-group metric for the purpose of autoscaling. If not specified, the type defaults to gce_instance. Try to provide a filter that is selective enough to pick just one TimeSeries for the autoscaled group or for each of the instances (if you are using gce_instance resource type). If multiple TimeSeries are returned upon the query execution, the autoscaler will sum their respective values to obtain its scaling value.
    metric String
    The identifier (type) of the Stackdriver Monitoring metric. The metric cannot have negative values. The metric must have a value type of INT64 or DOUBLE.
    singleInstanceAssignment Double
    If scaling is based on a per-group metric value that represents the total amount of work to be done or resource usage, set this value to an amount assigned for a single instance of the scaled group. The autoscaler keeps the number of instances proportional to the value of this metric. The metric itself does not change value due to group resizing. A good metric to use with the target is, for example, pubsub.googleapis.com/subscription/num_undelivered_messages or a custom metric exporting the total number of requests coming to your instances. A bad example would be a metric exporting an average or median latency; since this value can't include a chunk assignable to a single instance, it could be better used with utilization_target instead.
    utilizationTarget Double
    The target value of the metric that autoscaler maintains. This must be a positive value. A utilization metric scales number of virtual machines handling requests to increase or decrease proportionally to the metric. For example, a good metric to use as a utilization_target is https://www.googleapis.com/compute/v1/instance/network/received_bytes_count. The autoscaler works to keep this value constant for each of the instances.
    utilizationTargetType AutoscalingPolicyCustomMetricUtilizationUtilizationTargetType
    Defines how target utilization value is expressed for a Stackdriver Monitoring metric. Either GAUGE, DELTA_PER_SECOND, or DELTA_PER_MINUTE.
    filter string
    A filter string, compatible with a Stackdriver Monitoring filter string for TimeSeries.list API call. This filter is used to select a specific TimeSeries for the purpose of autoscaling and to determine whether the metric is exporting per-instance or per-group data. For the filter to be valid for autoscaling purposes, the following rules apply: - You can only use the AND operator for joining selectors. - You can only use direct equality comparison operator (=) without any functions for each selector. - You can specify the metric in both the filter string and in the metric field. However, if specified in both places, the metric must be identical. - The monitored resource type determines what kind of values are expected for the metric. If it is a gce_instance, the autoscaler expects the metric to include a separate TimeSeries for each instance in a group. In such a case, you cannot filter on resource labels. If the resource type is any other value, the autoscaler expects this metric to contain values that apply to the entire autoscaled instance group and resource label filtering can be performed to point autoscaler at the correct TimeSeries to scale upon. This is called a per-group metric for the purpose of autoscaling. If not specified, the type defaults to gce_instance. Try to provide a filter that is selective enough to pick just one TimeSeries for the autoscaled group or for each of the instances (if you are using gce_instance resource type). If multiple TimeSeries are returned upon the query execution, the autoscaler will sum their respective values to obtain its scaling value.
    metric string
    The identifier (type) of the Stackdriver Monitoring metric. The metric cannot have negative values. The metric must have a value type of INT64 or DOUBLE.
    singleInstanceAssignment number
    If scaling is based on a per-group metric value that represents the total amount of work to be done or resource usage, set this value to an amount assigned for a single instance of the scaled group. The autoscaler keeps the number of instances proportional to the value of this metric. The metric itself does not change value due to group resizing. A good metric to use with the target is, for example, pubsub.googleapis.com/subscription/num_undelivered_messages or a custom metric exporting the total number of requests coming to your instances. A bad example would be a metric exporting an average or median latency; since this value can't include a chunk assignable to a single instance, it could be better used with utilization_target instead.
    utilizationTarget number
    The target value of the metric that autoscaler maintains. This must be a positive value. A utilization metric scales number of virtual machines handling requests to increase or decrease proportionally to the metric. For example, a good metric to use as a utilization_target is https://www.googleapis.com/compute/v1/instance/network/received_bytes_count. The autoscaler works to keep this value constant for each of the instances.
    utilizationTargetType AutoscalingPolicyCustomMetricUtilizationUtilizationTargetType
    Defines how target utilization value is expressed for a Stackdriver Monitoring metric. Either GAUGE, DELTA_PER_SECOND, or DELTA_PER_MINUTE.
    filter str
    A filter string, compatible with a Stackdriver Monitoring filter string for TimeSeries.list API call. This filter is used to select a specific TimeSeries for the purpose of autoscaling and to determine whether the metric is exporting per-instance or per-group data. For the filter to be valid for autoscaling purposes, the following rules apply: - You can only use the AND operator for joining selectors. - You can only use direct equality comparison operator (=) without any functions for each selector. - You can specify the metric in both the filter string and in the metric field. However, if specified in both places, the metric must be identical. - The monitored resource type determines what kind of values are expected for the metric. If it is a gce_instance, the autoscaler expects the metric to include a separate TimeSeries for each instance in a group. In such a case, you cannot filter on resource labels. If the resource type is any other value, the autoscaler expects this metric to contain values that apply to the entire autoscaled instance group and resource label filtering can be performed to point autoscaler at the correct TimeSeries to scale upon. This is called a per-group metric for the purpose of autoscaling. If not specified, the type defaults to gce_instance. Try to provide a filter that is selective enough to pick just one TimeSeries for the autoscaled group or for each of the instances (if you are using gce_instance resource type). If multiple TimeSeries are returned upon the query execution, the autoscaler will sum their respective values to obtain its scaling value.
    metric str
    The identifier (type) of the Stackdriver Monitoring metric. The metric cannot have negative values. The metric must have a value type of INT64 or DOUBLE.
    single_instance_assignment float
    If scaling is based on a per-group metric value that represents the total amount of work to be done or resource usage, set this value to an amount assigned for a single instance of the scaled group. The autoscaler keeps the number of instances proportional to the value of this metric. The metric itself does not change value due to group resizing. A good metric to use with the target is, for example, pubsub.googleapis.com/subscription/num_undelivered_messages or a custom metric exporting the total number of requests coming to your instances. A bad example would be a metric exporting an average or median latency; since this value can't include a chunk assignable to a single instance, it could be better used with utilization_target instead.
    utilization_target float
    The target value of the metric that autoscaler maintains. This must be a positive value. A utilization metric scales number of virtual machines handling requests to increase or decrease proportionally to the metric. For example, a good metric to use as a utilization_target is https://www.googleapis.com/compute/v1/instance/network/received_bytes_count. The autoscaler works to keep this value constant for each of the instances.
    utilization_target_type AutoscalingPolicyCustomMetricUtilizationUtilizationTargetType
    Defines how target utilization value is expressed for a Stackdriver Monitoring metric. Either GAUGE, DELTA_PER_SECOND, or DELTA_PER_MINUTE.
    filter String
    A filter string, compatible with a Stackdriver Monitoring filter string for TimeSeries.list API call. This filter is used to select a specific TimeSeries for the purpose of autoscaling and to determine whether the metric is exporting per-instance or per-group data. For the filter to be valid for autoscaling purposes, the following rules apply: - You can only use the AND operator for joining selectors. - You can only use direct equality comparison operator (=) without any functions for each selector. - You can specify the metric in both the filter string and in the metric field. However, if specified in both places, the metric must be identical. - The monitored resource type determines what kind of values are expected for the metric. If it is a gce_instance, the autoscaler expects the metric to include a separate TimeSeries for each instance in a group. In such a case, you cannot filter on resource labels. If the resource type is any other value, the autoscaler expects this metric to contain values that apply to the entire autoscaled instance group and resource label filtering can be performed to point autoscaler at the correct TimeSeries to scale upon. This is called a per-group metric for the purpose of autoscaling. If not specified, the type defaults to gce_instance. Try to provide a filter that is selective enough to pick just one TimeSeries for the autoscaled group or for each of the instances (if you are using gce_instance resource type). If multiple TimeSeries are returned upon the query execution, the autoscaler will sum their respective values to obtain its scaling value.
    metric String
    The identifier (type) of the Stackdriver Monitoring metric. The metric cannot have negative values. The metric must have a value type of INT64 or DOUBLE.
    singleInstanceAssignment Number
    If scaling is based on a per-group metric value that represents the total amount of work to be done or resource usage, set this value to an amount assigned for a single instance of the scaled group. The autoscaler keeps the number of instances proportional to the value of this metric. The metric itself does not change value due to group resizing. A good metric to use with the target is, for example, pubsub.googleapis.com/subscription/num_undelivered_messages or a custom metric exporting the total number of requests coming to your instances. A bad example would be a metric exporting an average or median latency; since this value can't include a chunk assignable to a single instance, it could be better used with utilization_target instead.
    utilizationTarget Number
    The target value of the metric that autoscaler maintains. This must be a positive value. A utilization metric scales number of virtual machines handling requests to increase or decrease proportionally to the metric. For example, a good metric to use as a utilization_target is https://www.googleapis.com/compute/v1/instance/network/received_bytes_count. The autoscaler works to keep this value constant for each of the instances.
    utilizationTargetType "DELTA_PER_MINUTE" | "DELTA_PER_SECOND" | "GAUGE"
    Defines how target utilization value is expressed for a Stackdriver Monitoring metric. Either GAUGE, DELTA_PER_SECOND, or DELTA_PER_MINUTE.
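
    To make the custom-metric inputs above concrete, here is a sketch of a per-instance GAUGE target. The metric name is a hypothetical placeholder, not a value taken from this page; substitute a Cloud Monitoring (Stackdriver) metric with a value type of INT64 or DOUBLE.

    // Illustrative custom metric entry for autoscalingPolicy.customMetricUtilizations.
    const customMetric = {
        metric: "custom.googleapis.com/my_app/active_sessions", // hypothetical per-instance metric
        utilizationTargetType: "GAUGE",                          // treat samples as absolute gauge values
        utilizationTarget: 100,                                  // aim for ~100 active sessions per VM
    };

    // Used inside the policy, e.g.:
    // autoscalingPolicy: { maxNumReplicas: 10, customMetricUtilizations: [customMetric] }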

    AutoscalingPolicyCustomMetricUtilizationResponse, AutoscalingPolicyCustomMetricUtilizationResponseArgs

    Filter string
    A filter string, compatible with a Stackdriver Monitoring filter string for TimeSeries.list API call. This filter is used to select a specific TimeSeries for the purpose of autoscaling and to determine whether the metric is exporting per-instance or per-group data. For the filter to be valid for autoscaling purposes, the following rules apply: - You can only use the AND operator for joining selectors. - You can only use direct equality comparison operator (=) without any functions for each selector. - You can specify the metric in both the filter string and in the metric field. However, if specified in both places, the metric must be identical. - The monitored resource type determines what kind of values are expected for the metric. If it is a gce_instance, the autoscaler expects the metric to include a separate TimeSeries for each instance in a group. In such a case, you cannot filter on resource labels. If the resource type is any other value, the autoscaler expects this metric to contain values that apply to the entire autoscaled instance group and resource label filtering can be performed to point autoscaler at the correct TimeSeries to scale upon. This is called a per-group metric for the purpose of autoscaling. If not specified, the type defaults to gce_instance. Try to provide a filter that is selective enough to pick just one TimeSeries for the autoscaled group or for each of the instances (if you are using gce_instance resource type). If multiple TimeSeries are returned upon the query execution, the autoscaler will sum their respective values to obtain its scaling value.
    Metric string
    The identifier (type) of the Stackdriver Monitoring metric. The metric cannot have negative values. The metric must have a value type of INT64 or DOUBLE.
    SingleInstanceAssignment double
    If scaling is based on a per-group metric value that represents the total amount of work to be done or resource usage, set this value to an amount assigned for a single instance of the scaled group. The autoscaler keeps the number of instances proportional to the value of this metric. The metric itself does not change value due to group resizing. A good metric to use with the target is, for example, pubsub.googleapis.com/subscription/num_undelivered_messages or a custom metric exporting the total number of requests coming to your instances. A bad example would be a metric exporting an average or median latency; since this value can't include a chunk assignable to a single instance, it could be better used with utilization_target instead.
    UtilizationTarget double
    The target value of the metric that autoscaler maintains. This must be a positive value. A utilization metric scales number of virtual machines handling requests to increase or decrease proportionally to the metric. For example, a good metric to use as a utilization_target is https://www.googleapis.com/compute/v1/instance/network/received_bytes_count. The autoscaler works to keep this value constant for each of the instances.
    UtilizationTargetType string
    Defines how target utilization value is expressed for a Stackdriver Monitoring metric. Either GAUGE, DELTA_PER_SECOND, or DELTA_PER_MINUTE.
    Filter string
    A filter string, compatible with a Stackdriver Monitoring filter string for TimeSeries.list API call. This filter is used to select a specific TimeSeries for the purpose of autoscaling and to determine whether the metric is exporting per-instance or per-group data. For the filter to be valid for autoscaling purposes, the following rules apply: - You can only use the AND operator for joining selectors. - You can only use direct equality comparison operator (=) without any functions for each selector. - You can specify the metric in both the filter string and in the metric field. However, if specified in both places, the metric must be identical. - The monitored resource type determines what kind of values are expected for the metric. If it is a gce_instance, the autoscaler expects the metric to include a separate TimeSeries for each instance in a group. In such a case, you cannot filter on resource labels. If the resource type is any other value, the autoscaler expects this metric to contain values that apply to the entire autoscaled instance group and resource label filtering can be performed to point autoscaler at the correct TimeSeries to scale upon. This is called a per-group metric for the purpose of autoscaling. If not specified, the type defaults to gce_instance. Try to provide a filter that is selective enough to pick just one TimeSeries for the autoscaled group or for each of the instances (if you are using gce_instance resource type). If multiple TimeSeries are returned upon the query execution, the autoscaler will sum their respective values to obtain its scaling value.
    Metric string
    The identifier (type) of the Stackdriver Monitoring metric. The metric cannot have negative values. The metric must have a value type of INT64 or DOUBLE.
    SingleInstanceAssignment float64
    If scaling is based on a per-group metric value that represents the total amount of work to be done or resource usage, set this value to an amount assigned for a single instance of the scaled group. The autoscaler keeps the number of instances proportional to the value of this metric. The metric itself does not change value due to group resizing. A good metric to use with the target is, for example, pubsub.googleapis.com/subscription/num_undelivered_messages or a custom metric exporting the total number of requests coming to your instances. A bad example would be a metric exporting an average or median latency; since this value can't include a chunk assignable to a single instance, it could be better used with utilization_target instead.
    UtilizationTarget float64
    The target value of the metric that autoscaler maintains. This must be a positive value. A utilization metric scales number of virtual machines handling requests to increase or decrease proportionally to the metric. For example, a good metric to use as a utilization_target is https://www.googleapis.com/compute/v1/instance/network/received_bytes_count. The autoscaler works to keep this value constant for each of the instances.
    UtilizationTargetType string
    Defines how target utilization value is expressed for a Stackdriver Monitoring metric. Either GAUGE, DELTA_PER_SECOND, or DELTA_PER_MINUTE.
    filter String
    A filter string, compatible with a Stackdriver Monitoring filter string for TimeSeries.list API call. This filter is used to select a specific TimeSeries for the purpose of autoscaling and to determine whether the metric is exporting per-instance or per-group data. For the filter to be valid for autoscaling purposes, the following rules apply: - You can only use the AND operator for joining selectors. - You can only use direct equality comparison operator (=) without any functions for each selector. - You can specify the metric in both the filter string and in the metric field. However, if specified in both places, the metric must be identical. - The monitored resource type determines what kind of values are expected for the metric. If it is a gce_instance, the autoscaler expects the metric to include a separate TimeSeries for each instance in a group. In such a case, you cannot filter on resource labels. If the resource type is any other value, the autoscaler expects this metric to contain values that apply to the entire autoscaled instance group and resource label filtering can be performed to point autoscaler at the correct TimeSeries to scale upon. This is called a per-group metric for the purpose of autoscaling. If not specified, the type defaults to gce_instance. Try to provide a filter that is selective enough to pick just one TimeSeries for the autoscaled group or for each of the instances (if you are using gce_instance resource type). If multiple TimeSeries are returned upon the query execution, the autoscaler will sum their respective values to obtain its scaling value.
    metric String
    The identifier (type) of the Stackdriver Monitoring metric. The metric cannot have negative values. The metric must have a value type of INT64 or DOUBLE.
    singleInstanceAssignment Double
    If scaling is based on a per-group metric value that represents the total amount of work to be done or resource usage, set this value to an amount assigned for a single instance of the scaled group. The autoscaler keeps the number of instances proportional to the value of this metric. The metric itself does not change value due to group resizing. A good metric to use with the target is, for example, pubsub.googleapis.com/subscription/num_undelivered_messages or a custom metric exporting the total number of requests coming to your instances. A bad example would be a metric exporting an average or median latency; since this value can't include a chunk assignable to a single instance, it could be better used with utilization_target instead.
    utilizationTarget Double
    The target value of the metric that autoscaler maintains. This must be a positive value. A utilization metric scales number of virtual machines handling requests to increase or decrease proportionally to the metric. For example, a good metric to use as a utilization_target is https://www.googleapis.com/compute/v1/instance/network/received_bytes_count. The autoscaler works to keep this value constant for each of the instances.
    utilizationTargetType String
    Defines how target utilization value is expressed for a Stackdriver Monitoring metric. Either GAUGE, DELTA_PER_SECOND, or DELTA_PER_MINUTE.
    filter string
    A filter string, compatible with a Stackdriver Monitoring filter string for TimeSeries.list API call. This filter is used to select a specific TimeSeries for the purpose of autoscaling and to determine whether the metric is exporting per-instance or per-group data. For the filter to be valid for autoscaling purposes, the following rules apply: - You can only use the AND operator for joining selectors. - You can only use direct equality comparison operator (=) without any functions for each selector. - You can specify the metric in both the filter string and in the metric field. However, if specified in both places, the metric must be identical. - The monitored resource type determines what kind of values are expected for the metric. If it is a gce_instance, the autoscaler expects the metric to include a separate TimeSeries for each instance in a group. In such a case, you cannot filter on resource labels. If the resource type is any other value, the autoscaler expects this metric to contain values that apply to the entire autoscaled instance group and resource label filtering can be performed to point autoscaler at the correct TimeSeries to scale upon. This is called a per-group metric for the purpose of autoscaling. If not specified, the type defaults to gce_instance. Try to provide a filter that is selective enough to pick just one TimeSeries for the autoscaled group or for each of the instances (if you are using gce_instance resource type). If multiple TimeSeries are returned upon the query execution, the autoscaler will sum their respective values to obtain its scaling value.
    metric string
    The identifier (type) of the Stackdriver Monitoring metric. The metric cannot have negative values. The metric must have a value type of INT64 or DOUBLE.
    singleInstanceAssignment number
    If scaling is based on a per-group metric value that represents the total amount of work to be done or resource usage, set this value to an amount assigned for a single instance of the scaled group. The autoscaler keeps the number of instances proportional to the value of this metric. The metric itself does not change value due to group resizing. A good metric to use with the target is, for example, pubsub.googleapis.com/subscription/num_undelivered_messages or a custom metric exporting the total number of requests coming to your instances. A bad example would be a metric exporting an average or median latency; since this value can't include a chunk assignable to a single instance, it could be better used with utilization_target instead.
    utilizationTarget number
    The target value of the metric that autoscaler maintains. This must be a positive value. A utilization metric scales number of virtual machines handling requests to increase or decrease proportionally to the metric. For example, a good metric to use as a utilization_target is https://www.googleapis.com/compute/v1/instance/network/received_bytes_count. The autoscaler works to keep this value constant for each of the instances.
    utilizationTargetType string
    Defines how target utilization value is expressed for a Stackdriver Monitoring metric. Either GAUGE, DELTA_PER_SECOND, or DELTA_PER_MINUTE.
    filter str
    A filter string, compatible with a Stackdriver Monitoring filter string for TimeSeries.list API call. This filter is used to select a specific TimeSeries for the purpose of autoscaling and to determine whether the metric is exporting per-instance or per-group data. For the filter to be valid for autoscaling purposes, the following rules apply: - You can only use the AND operator for joining selectors. - You can only use direct equality comparison operator (=) without any functions for each selector. - You can specify the metric in both the filter string and in the metric field. However, if specified in both places, the metric must be identical. - The monitored resource type determines what kind of values are expected for the metric. If it is a gce_instance, the autoscaler expects the metric to include a separate TimeSeries for each instance in a group. In such a case, you cannot filter on resource labels. If the resource type is any other value, the autoscaler expects this metric to contain values that apply to the entire autoscaled instance group and resource label filtering can be performed to point autoscaler at the correct TimeSeries to scale upon. This is called a per-group metric for the purpose of autoscaling. If not specified, the type defaults to gce_instance. Try to provide a filter that is selective enough to pick just one TimeSeries for the autoscaled group or for each of the instances (if you are using gce_instance resource type). If multiple TimeSeries are returned upon the query execution, the autoscaler will sum their respective values to obtain its scaling value.
    metric str
    The identifier (type) of the Stackdriver Monitoring metric. The metric cannot have negative values. The metric must have a value type of INT64 or DOUBLE.
    single_instance_assignment float
    If scaling is based on a per-group metric value that represents the total amount of work to be done or resource usage, set this value to an amount assigned for a single instance of the scaled group. The autoscaler keeps the number of instances proportional to the value of this metric. The metric itself does not change value due to group resizing. A good metric to use with the target is, for example, pubsub.googleapis.com/subscription/num_undelivered_messages, or a custom metric exporting the total number of requests coming to your instances. A bad example would be a metric exporting an average or median latency, since this value can't be divided into a chunk assignable to a single instance; such a metric is better used with utilization_target instead.
    utilization_target float
    The target value of the metric that the autoscaler maintains. This must be a positive value. A utilization metric scales the number of virtual machines handling requests to increase or decrease proportionally to the metric. For example, a good metric to use as a utilization_target is https://www.googleapis.com/compute/v1/instance/network/received_bytes_count. The autoscaler works to keep this value constant for each of the instances.
    utilization_target_type str
    Defines how the target utilization value is expressed for a Stackdriver Monitoring metric. Either GAUGE, DELTA_PER_SECOND, or DELTA_PER_MINUTE.
    filter String
    A filter string, compatible with a Stackdriver Monitoring filter string for the TimeSeries.list API call. This filter is used to select a specific TimeSeries for the purpose of autoscaling and to determine whether the metric is exporting per-instance or per-group data. For the filter to be valid for autoscaling purposes, the following rules apply: - You can only use the AND operator for joining selectors. - You can only use the direct equality comparison operator (=), without any functions, for each selector. - You can specify the metric in both the filter string and in the metric field. However, if specified in both places, the metric must be identical. - The monitored resource type determines what kind of values are expected for the metric. If it is a gce_instance, the autoscaler expects the metric to include a separate TimeSeries for each instance in a group. In such a case, you cannot filter on resource labels. If the resource type is any other value, the autoscaler expects this metric to contain values that apply to the entire autoscaled instance group, and resource label filtering can be performed to point the autoscaler at the correct TimeSeries to scale upon. This is called a per-group metric for the purpose of autoscaling. If not specified, the type defaults to gce_instance. Try to provide a filter that is selective enough to pick just one TimeSeries for the autoscaled group or for each of the instances (if you are using the gce_instance resource type). If multiple TimeSeries are returned by the query, the autoscaler sums their respective values to obtain its scaling value.
    metric String
    The identifier (type) of the Stackdriver Monitoring metric. The metric cannot have negative values. The metric must have a value type of INT64 or DOUBLE.
    singleInstanceAssignment Number
    If scaling is based on a per-group metric value that represents the total amount of work to be done or resource usage, set this value to an amount assigned for a single instance of the scaled group. The autoscaler keeps the number of instances proportional to the value of this metric. The metric itself does not change value due to group resizing. A good metric to use with the target is, for example, pubsub.googleapis.com/subscription/num_undelivered_messages, or a custom metric exporting the total number of requests coming to your instances. A bad example would be a metric exporting an average or median latency, since this value can't be divided into a chunk assignable to a single instance; such a metric is better used with utilization_target instead.
    utilizationTarget Number
    The target value of the metric that the autoscaler maintains. This must be a positive value. A utilization metric scales the number of virtual machines handling requests to increase or decrease proportionally to the metric. For example, a good metric to use as a utilization_target is https://www.googleapis.com/compute/v1/instance/network/received_bytes_count. The autoscaler works to keep this value constant for each of the instances.
    utilizationTargetType String
    Defines how the target utilization value is expressed for a Stackdriver Monitoring metric. Either GAUGE, DELTA_PER_SECOND, or DELTA_PER_MINUTE.
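    As an illustration of the per-group pattern described above (a single TimeSeries selected by a filter and divided by singleInstanceAssignment), the following minimal TypeScript sketch assumes the @pulumi/google-native Node.js SDK exposes this resource as google_native.compute.beta.RegionAutoscaler and uses the Pub/Sub backlog metric named in the description. The project, region, subscription id, and instance group manager URL are placeholders, not values from this page.

    import * as google_native from "@pulumi/google-native";

    // All names below are hypothetical placeholders (project, region, subscription, IGM URL).
    const workerAutoscaler = new google_native.compute.beta.RegionAutoscaler("worker-autoscaler", {
        region: "us-central1",
        target: "https://www.googleapis.com/compute/beta/projects/my-project/regions/us-central1/instanceGroupManagers/worker-igm",
        autoscalingPolicy: {
            minNumReplicas: 1,
            maxNumReplicas: 20,
            customMetricUtilizations: [{
                // Per-group metric: one TimeSeries for the whole group, selected by the filter.
                metric: "pubsub.googleapis.com/subscription/num_undelivered_messages",
                // Only AND and direct equality, as the rules above require.
                filter: 'resource.type = "pubsub_subscription" AND resource.labels.subscription_id = "my-subscription"',
                // Keep roughly one instance per 100 undelivered messages.
                singleInstanceAssignment: 100,
            }],
        },
    });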

    AutoscalingPolicyCustomMetricUtilizationUtilizationTargetType, AutoscalingPolicyCustomMetricUtilizationUtilizationTargetTypeArgs

    DeltaPerMinute
    DELTA_PER_MINUTE: Sets the utilization target value for a cumulative or delta metric, expressed as the rate of growth per minute.
    DeltaPerSecond
    DELTA_PER_SECOND: Sets the utilization target value for a cumulative or delta metric, expressed as the rate of growth per second.
    Gauge
    GAUGE: Sets the utilization target value for a gauge metric. The autoscaler will collect the average utilization of the virtual machines from the last couple of minutes, and compare the value to the utilization target value to perform autoscaling.
    AutoscalingPolicyCustomMetricUtilizationUtilizationTargetTypeDeltaPerMinute
    DELTA_PER_MINUTE: Sets the utilization target value for a cumulative or delta metric, expressed as the rate of growth per minute.
    AutoscalingPolicyCustomMetricUtilizationUtilizationTargetTypeDeltaPerSecond
    DELTA_PER_SECOND: Sets the utilization target value for a cumulative or delta metric, expressed as the rate of growth per second.
    AutoscalingPolicyCustomMetricUtilizationUtilizationTargetTypeGauge
    GAUGE: Sets the utilization target value for a gauge metric. The autoscaler will collect the average utilization of the virtual machines from the last couple of minutes, and compare the value to the utilization target value to perform autoscaling.
    DeltaPerMinute
    DELTA_PER_MINUTE: Sets the utilization target value for a cumulative or delta metric, expressed as the rate of growth per minute.
    DeltaPerSecond
    DELTA_PER_SECOND: Sets the utilization target value for a cumulative or delta metric, expressed as the rate of growth per second.
    Gauge
    GAUGE: Sets the utilization target value for a gauge metric. The autoscaler will collect the average utilization of the virtual machines from the last couple of minutes, and compare the value to the utilization target value to perform autoscaling.
    DeltaPerMinute
    DELTA_PER_MINUTE: Sets the utilization target value for a cumulative or delta metric, expressed as the rate of growth per minute.
    DeltaPerSecond
    DELTA_PER_SECOND: Sets the utilization target value for a cumulative or delta metric, expressed as the rate of growth per second.
    Gauge
    GAUGE: Sets the utilization target value for a gauge metric. The autoscaler will collect the average utilization of the virtual machines from the last couple of minutes, and compare the value to the utilization target value to perform autoscaling.
    DELTA_PER_MINUTE
    DELTA_PER_MINUTE: Sets the utilization target value for a cumulative or delta metric, expressed as the rate of growth per minute.
    DELTA_PER_SECOND
    DELTA_PER_SECOND: Sets the utilization target value for a cumulative or delta metric, expressed as the rate of growth per second.
    GAUGE
    GAUGE: Sets the utilization target value for a gauge metric. The autoscaler will collect the average utilization of the virtual machines from the last couple of minutes, and compare the value to the utilization target value to perform autoscaling.
    "DELTA_PER_MINUTE"
    DELTA_PER_MINUTE: Sets the utilization target value for a cumulative or delta metric, expressed as the rate of growth per minute.
    "DELTA_PER_SECOND"
    DELTA_PER_SECOND: Sets the utilization target value for a cumulative or delta metric, expressed as the rate of growth per second.
    "GAUGE"
    GAUGE: Sets the utilization target value for a gauge metric. The autoscaler will collect the average utilization of the virtual machines from the last couple of minutes, and compare the value to the utilization target value to perform autoscaling.
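    Choosing among the target types above depends on the metric kind: gauge metrics pair with GAUGE, while cumulative or delta metrics pair with one of the rate types. A hedged TypeScript sketch follows, assuming the Monitoring metric type compute.googleapis.com/instance/network/received_bytes_count corresponds to the received-bytes example mentioned earlier; all names, the project, and the target URL are placeholders, and the enum value is written as its string literal.

    import * as google_native from "@pulumi/google-native";

    // Hypothetical placeholders for project, region, and the instance group manager URL.
    const ingestAutoscaler = new google_native.compute.beta.RegionAutoscaler("ingest-autoscaler", {
        region: "us-central1",
        target: "https://www.googleapis.com/compute/beta/projects/my-project/regions/us-central1/instanceGroupManagers/ingest-igm",
        autoscalingPolicy: {
            minNumReplicas: 2,
            maxNumReplicas: 10,
            customMetricUtilizations: [{
                // Cumulative byte counter, so the target is a per-second rate per instance.
                metric: "compute.googleapis.com/instance/network/received_bytes_count",
                utilizationTarget: 10 * 1024 * 1024, // ~10 MiB/s per instance
                utilizationTargetType: "DELTA_PER_SECOND",
            }],
        },
    });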

    AutoscalingPolicyLoadBalancingUtilization, AutoscalingPolicyLoadBalancingUtilizationArgs

    UtilizationTarget double
    Fraction of backend capacity utilization (set in HTTP(S) load balancing configuration) that the autoscaler maintains. Must be a positive float value. If not defined, the default is 0.8.
    UtilizationTarget float64
    Fraction of backend capacity utilization (set in HTTP(S) load balancing configuration) that the autoscaler maintains. Must be a positive float value. If not defined, the default is 0.8.
    utilizationTarget Double
    Fraction of backend capacity utilization (set in HTTP(S) load balancing configuration) that the autoscaler maintains. Must be a positive float value. If not defined, the default is 0.8.
    utilizationTarget number
    Fraction of backend capacity utilization (set in HTTP(S) load balancing configuration) that the autoscaler maintains. Must be a positive float value. If not defined, the default is 0.8.
    utilization_target float
    Fraction of backend capacity utilization (set in HTTP(S) load balancing configuration) that the autoscaler maintains. Must be a positive float value. If not defined, the default is 0.8.
    utilizationTarget Number
    Fraction of backend capacity utilization (set in HTTP(S) load balancing configuration) that the autoscaler maintains. Must be a positive float value. If not defined, the default is 0.8.
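    For example, the load balancing signal might be wired up as in the following TypeScript sketch. It assumes the instance group manager already serves as a backend of an HTTP(S) load balancer with a configured capacity; every name and URL is a placeholder.

    import * as google_native from "@pulumi/google-native";

    // Hypothetical placeholders; the IGM is assumed to back an HTTP(S) load balancer.
    const frontendAutoscaler = new google_native.compute.beta.RegionAutoscaler("frontend-autoscaler", {
        region: "europe-west1",
        target: "https://www.googleapis.com/compute/beta/projects/my-project/regions/europe-west1/instanceGroupManagers/frontend-igm",
        autoscalingPolicy: {
            minNumReplicas: 3,
            maxNumReplicas: 30,
            loadBalancingUtilization: {
                // Keep the group at roughly 60% of the backend capacity configured on the load balancer.
                utilizationTarget: 0.6,
            },
        },
    });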

    AutoscalingPolicyLoadBalancingUtilizationResponse, AutoscalingPolicyLoadBalancingUtilizationResponseArgs

    UtilizationTarget double
    Fraction of backend capacity utilization (set in HTTP(S) load balancing configuration) that the autoscaler maintains. Must be a positive float value. If not defined, the default is 0.8.
    UtilizationTarget float64
    Fraction of backend capacity utilization (set in HTTP(S) load balancing configuration) that the autoscaler maintains. Must be a positive float value. If not defined, the default is 0.8.
    utilizationTarget Double
    Fraction of backend capacity utilization (set in HTTP(S) load balancing configuration) that the autoscaler maintains. Must be a positive float value. If not defined, the default is 0.8.
    utilizationTarget number
    Fraction of backend capacity utilization (set in HTTP(S) load balancing configuration) that the autoscaler maintains. Must be a positive float value. If not defined, the default is 0.8.
    utilization_target float
    Fraction of backend capacity utilization (set in HTTP(S) load balancing configuration) that the autoscaler maintains. Must be a positive float value. If not defined, the default is 0.8.
    utilizationTarget Number
    Fraction of backend capacity utilization (set in HTTP(S) load balancing configuration) that the autoscaler maintains. Must be a positive float value. If not defined, the default is 0.8.

    AutoscalingPolicyMode, AutoscalingPolicyModeArgs

    Off
    OFF: Do not automatically scale the MIG in or out. The recommended_size field contains the size of the MIG that would be set if the actuation mode were enabled.
    On
    ON: Automatically scale the MIG in and out according to the policy.
    OnlyScaleOut
    ONLY_SCALE_OUT: Automatically create VMs according to the policy, but do not scale the MIG in.
    OnlyUp
    ONLY_UP: Automatically create VMs according to the policy, but do not scale the MIG in.
    AutoscalingPolicyModeOff
    OFF: Do not automatically scale the MIG in or out. The recommended_size field contains the size of the MIG that would be set if the actuation mode were enabled.
    AutoscalingPolicyModeOn
    ON: Automatically scale the MIG in and out according to the policy.
    AutoscalingPolicyModeOnlyScaleOut
    ONLY_SCALE_OUT: Automatically create VMs according to the policy, but do not scale the MIG in.
    AutoscalingPolicyModeOnlyUp
    ONLY_UP: Automatically create VMs according to the policy, but do not scale the MIG in.
    Off
    OFF: Do not automatically scale the MIG in or out. The recommended_size field contains the size of the MIG that would be set if the actuation mode were enabled.
    On
    ON: Automatically scale the MIG in and out according to the policy.
    OnlyScaleOut
    ONLY_SCALE_OUT: Automatically create VMs according to the policy, but do not scale the MIG in.
    OnlyUp
    ONLY_UP: Automatically create VMs according to the policy, but do not scale the MIG in.
    Off
    OFF: Do not automatically scale the MIG in or out. The recommended_size field contains the size of the MIG that would be set if the actuation mode were enabled.
    On
    ON: Automatically scale the MIG in and out according to the policy.
    OnlyScaleOut
    ONLY_SCALE_OUT: Automatically create VMs according to the policy, but do not scale the MIG in.
    OnlyUp
    ONLY_UP: Automatically create VMs according to the policy, but do not scale the MIG in.
    OFF
    OFF: Do not automatically scale the MIG in or out. The recommended_size field contains the size of the MIG that would be set if the actuation mode were enabled.
    ON
    ON: Automatically scale the MIG in and out according to the policy.
    ONLY_SCALE_OUT
    ONLY_SCALE_OUT: Automatically create VMs according to the policy, but do not scale the MIG in.
    ONLY_UP
    ONLY_UP: Automatically create VMs according to the policy, but do not scale the MIG in.
    "OFF"
    OFF: Do not automatically scale the MIG in or out. The recommended_size field contains the size of the MIG that would be set if the actuation mode were enabled.
    "ON"
    ON: Automatically scale the MIG in and out according to the policy.
    "ONLY_SCALE_OUT"
    ONLY_SCALE_OUT: Automatically create VMs according to the policy, but do not scale the MIG in.
    "ONLY_UP"
    ONLY_UP: Automatically create VMs according to the policy, but do not scale the MIG in.

    AutoscalingPolicyResponse, AutoscalingPolicyResponseArgs

    CoolDownPeriodSec int
    The number of seconds that your application takes to initialize on a VM instance. This is referred to as the initialization period. Specifying an accurate initialization period improves autoscaler decisions. For example, when scaling out, the autoscaler ignores data from VMs that are still initializing because those VMs might not yet represent normal usage of your application. The default initialization period is 60 seconds. Initialization periods might vary because of numerous factors. We recommend that you test how long your application takes to initialize. To do this, create a VM and time your application's startup process.
    CpuUtilization Pulumi.GoogleNative.Compute.Beta.Inputs.AutoscalingPolicyCpuUtilizationResponse
    Defines the CPU utilization policy that allows the autoscaler to scale based on the average CPU utilization of a managed instance group.
    CustomMetricUtilizations List<Pulumi.GoogleNative.Compute.Beta.Inputs.AutoscalingPolicyCustomMetricUtilizationResponse>
    Configuration parameters of autoscaling based on a custom metric.
    LoadBalancingUtilization Pulumi.GoogleNative.Compute.Beta.Inputs.AutoscalingPolicyLoadBalancingUtilizationResponse
    Configuration parameters of autoscaling based on the load balancer.
    MaxNumReplicas int
    The maximum number of instances that the autoscaler can scale out to. This is required when creating or updating an autoscaler. The maximum number of replicas must not be lower than the minimal number of replicas.
    MinNumReplicas int
    The minimum number of replicas that the autoscaler can scale in to. This cannot be less than 0. If not provided, the autoscaler chooses a default value depending on the maximum number of instances allowed.
    Mode string
    Defines the operating mode for this policy. The following modes are available: - OFF: Disables the autoscaler but maintains its configuration. - ONLY_SCALE_OUT: Restricts the autoscaler to adding VM instances only. - ON: Enables all autoscaler activities according to its policy. For more information, see "Turning off or restricting an autoscaler".
    ScaleDownControl Pulumi.GoogleNative.Compute.Beta.Inputs.AutoscalingPolicyScaleDownControlResponse
    ScaleInControl Pulumi.GoogleNative.Compute.Beta.Inputs.AutoscalingPolicyScaleInControlResponse
    ScalingSchedules Dictionary<string, string>
    Scaling schedules defined for an autoscaler. Multiple schedules can be set on an autoscaler, and they can overlap. During overlapping periods the greatest min_required_replicas of all scaling schedules is applied. Up to 128 scaling schedules are allowed.
    CoolDownPeriodSec int
    The number of seconds that your application takes to initialize on a VM instance. This is referred to as the initialization period. Specifying an accurate initialization period improves autoscaler decisions. For example, when scaling out, the autoscaler ignores data from VMs that are still initializing because those VMs might not yet represent normal usage of your application. The default initialization period is 60 seconds. Initialization periods might vary because of numerous factors. We recommend that you test how long your application takes to initialize. To do this, create a VM and time your application's startup process.
    CpuUtilization AutoscalingPolicyCpuUtilizationResponse
    Defines the CPU utilization policy that allows the autoscaler to scale based on the average CPU utilization of a managed instance group.
    CustomMetricUtilizations []AutoscalingPolicyCustomMetricUtilizationResponse
    Configuration parameters of autoscaling based on a custom metric.
    LoadBalancingUtilization AutoscalingPolicyLoadBalancingUtilizationResponse
    Configuration parameters of autoscaling based on the load balancer.
    MaxNumReplicas int
    The maximum number of instances that the autoscaler can scale out to. This is required when creating or updating an autoscaler. The maximum number of replicas must not be lower than the minimal number of replicas.
    MinNumReplicas int
    The minimum number of replicas that the autoscaler can scale in to. This cannot be less than 0. If not provided, the autoscaler chooses a default value depending on the maximum number of instances allowed.
    Mode string
    Defines the operating mode for this policy. The following modes are available: - OFF: Disables the autoscaler but maintains its configuration. - ONLY_SCALE_OUT: Restricts the autoscaler to adding VM instances only. - ON: Enables all autoscaler activities according to its policy. For more information, see "Turning off or restricting an autoscaler".
    ScaleDownControl AutoscalingPolicyScaleDownControlResponse
    ScaleInControl AutoscalingPolicyScaleInControlResponse
    ScalingSchedules map[string]string
    Scaling schedules defined for an autoscaler. Multiple schedules can be set on an autoscaler, and they can overlap. During overlapping periods the greatest min_required_replicas of all scaling schedules is applied. Up to 128 scaling schedules are allowed.
    coolDownPeriodSec Integer
    The number of seconds that your application takes to initialize on a VM instance. This is referred to as the initialization period. Specifying an accurate initialization period improves autoscaler decisions. For example, when scaling out, the autoscaler ignores data from VMs that are still initializing because those VMs might not yet represent normal usage of your application. The default initialization period is 60 seconds. Initialization periods might vary because of numerous factors. We recommend that you test how long your application takes to initialize. To do this, create a VM and time your application's startup process.
    cpuUtilization AutoscalingPolicyCpuUtilizationResponse
    Defines the CPU utilization policy that allows the autoscaler to scale based on the average CPU utilization of a managed instance group.
    customMetricUtilizations List<AutoscalingPolicyCustomMetricUtilizationResponse>
    Configuration parameters of autoscaling based on a custom metric.
    loadBalancingUtilization AutoscalingPolicyLoadBalancingUtilizationResponse
    Configuration parameters of autoscaling based on the load balancer.
    maxNumReplicas Integer
    The maximum number of instances that the autoscaler can scale out to. This is required when creating or updating an autoscaler. The maximum number of replicas must not be lower than the minimal number of replicas.
    minNumReplicas Integer
    The minimum number of replicas that the autoscaler can scale in to. This cannot be less than 0. If not provided, the autoscaler chooses a default value depending on the maximum number of instances allowed.
    mode String
    Defines the operating mode for this policy. The following modes are available: - OFF: Disables the autoscaler but maintains its configuration. - ONLY_SCALE_OUT: Restricts the autoscaler to adding VM instances only. - ON: Enables all autoscaler activities according to its policy. For more information, see "Turning off or restricting an autoscaler".
    scaleDownControl AutoscalingPolicyScaleDownControlResponse
    scaleInControl AutoscalingPolicyScaleInControlResponse
    scalingSchedules Map<String,String>
    Scaling schedules defined for an autoscaler. Multiple schedules can be set on an autoscaler, and they can overlap. During overlapping periods the greatest min_required_replicas of all scaling schedules is applied. Up to 128 scaling schedules are allowed.
    coolDownPeriodSec number
    The number of seconds that your application takes to initialize on a VM instance. This is referred to as the initialization period. Specifying an accurate initialization period improves autoscaler decisions. For example, when scaling out, the autoscaler ignores data from VMs that are still initializing because those VMs might not yet represent normal usage of your application. The default initialization period is 60 seconds. Initialization periods might vary because of numerous factors. We recommend that you test how long your application takes to initialize. To do this, create a VM and time your application's startup process.
    cpuUtilization AutoscalingPolicyCpuUtilizationResponse
    Defines the CPU utilization policy that allows the autoscaler to scale based on the average CPU utilization of a managed instance group.
    customMetricUtilizations AutoscalingPolicyCustomMetricUtilizationResponse[]
    Configuration parameters of autoscaling based on a custom metric.
    loadBalancingUtilization AutoscalingPolicyLoadBalancingUtilizationResponse
    Configuration parameters of autoscaling based on the load balancer.
    maxNumReplicas number
    The maximum number of instances that the autoscaler can scale out to. This is required when creating or updating an autoscaler. The maximum number of replicas must not be lower than the minimal number of replicas.
    minNumReplicas number
    The minimum number of replicas that the autoscaler can scale in to. This cannot be less than 0. If not provided, the autoscaler chooses a default value depending on the maximum number of instances allowed.
    mode string
    Defines the operating mode for this policy. The following modes are available: - OFF: Disables the autoscaler but maintains its configuration. - ONLY_SCALE_OUT: Restricts the autoscaler to adding VM instances only. - ON: Enables all autoscaler activities according to its policy. For more information, see "Turning off or restricting an autoscaler".
    scaleDownControl AutoscalingPolicyScaleDownControlResponse
    scaleInControl AutoscalingPolicyScaleInControlResponse
    scalingSchedules {[key: string]: string}
    Scaling schedules defined for an autoscaler. Multiple schedules can be set on an autoscaler, and they can overlap. During overlapping periods the greatest min_required_replicas of all scaling schedules is applied. Up to 128 scaling schedules are allowed.
    cool_down_period_sec int
    The number of seconds that your application takes to initialize on a VM instance. This is referred to as the initialization period. Specifying an accurate initialization period improves autoscaler decisions. For example, when scaling out, the autoscaler ignores data from VMs that are still initializing because those VMs might not yet represent normal usage of your application. The default initialization period is 60 seconds. Initialization periods might vary because of numerous factors. We recommend that you test how long your application takes to initialize. To do this, create a VM and time your application's startup process.
    cpu_utilization AutoscalingPolicyCpuUtilizationResponse
    Defines the CPU utilization policy that allows the autoscaler to scale based on the average CPU utilization of a managed instance group.
    custom_metric_utilizations Sequence[AutoscalingPolicyCustomMetricUtilizationResponse]
    Configuration parameters of autoscaling based on a custom metric.
    load_balancing_utilization AutoscalingPolicyLoadBalancingUtilizationResponse
    Configuration parameters of autoscaling based on the load balancer.
    max_num_replicas int
    The maximum number of instances that the autoscaler can scale out to. This is required when creating or updating an autoscaler. The maximum number of replicas must not be lower than the minimal number of replicas.
    min_num_replicas int
    The minimum number of replicas that the autoscaler can scale in to. This cannot be less than 0. If not provided, the autoscaler chooses a default value depending on the maximum number of instances allowed.
    mode str
    Defines the operating mode for this policy. The following modes are available: - OFF: Disables the autoscaler but maintains its configuration. - ONLY_SCALE_OUT: Restricts the autoscaler to adding VM instances only. - ON: Enables all autoscaler activities according to its policy. For more information, see "Turning off or restricting an autoscaler".
    scale_down_control AutoscalingPolicyScaleDownControlResponse
    scale_in_control AutoscalingPolicyScaleInControlResponse
    scaling_schedules Mapping[str, str]
    Scaling schedules defined for an autoscaler. Multiple schedules can be set on an autoscaler, and they can overlap. During overlapping periods the greatest min_required_replicas of all scaling schedules is applied. Up to 128 scaling schedules are allowed.
    coolDownPeriodSec Number
    The number of seconds that your application takes to initialize on a VM instance. This is referred to as the initialization period. Specifying an accurate initialization period improves autoscaler decisions. For example, when scaling out, the autoscaler ignores data from VMs that are still initializing because those VMs might not yet represent normal usage of your application. The default initialization period is 60 seconds. Initialization periods might vary because of numerous factors. We recommend that you test how long your application takes to initialize. To do this, create a VM and time your application's startup process.
    cpuUtilization Property Map
    Defines the CPU utilization policy that allows the autoscaler to scale based on the average CPU utilization of a managed instance group.
    customMetricUtilizations List<Property Map>
    Configuration parameters of autoscaling based on a custom metric.
    loadBalancingUtilization Property Map
    Configuration parameters of autoscaling based on the load balancer.
    maxNumReplicas Number
    The maximum number of instances that the autoscaler can scale out to. This is required when creating or updating an autoscaler. The maximum number of replicas must not be lower than the minimal number of replicas.
    minNumReplicas Number
    The minimum number of replicas that the autoscaler can scale in to. This cannot be less than 0. If not provided, the autoscaler chooses a default value depending on the maximum number of instances allowed.
    mode String
    Defines the operating mode for this policy. The following modes are available: - OFF: Disables the autoscaler but maintains its configuration. - ONLY_SCALE_OUT: Restricts the autoscaler to adding VM instances only. - ON: Enables all autoscaler activities according to its policy. For more information, see "Turning off or restricting an autoscaler".
    scaleDownControl Property Map
    scaleInControl Property Map
    scalingSchedules Map<String>
    Scaling schedules defined for an autoscaler. Multiple schedules can be set on an autoscaler, and they can overlap. During overlapping periods the greatest min_required_replicas of all scaling schedules is applied. Up to 128 scaling schedules are allowed.
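    The same fields appear on the input side of the policy, so they can be assembled end to end. The following TypeScript sketch combines the cooldown, replica bounds, mode, and CPU target described above; every name, the project, and the target URL are placeholders.

    import * as google_native from "@pulumi/google-native";

    // Hypothetical placeholders throughout.
    const webAutoscaler = new google_native.compute.beta.RegionAutoscaler("web-autoscaler", {
        region: "us-central1",
        target: "https://www.googleapis.com/compute/beta/projects/my-project/regions/us-central1/instanceGroupManagers/web-igm",
        autoscalingPolicy: {
            coolDownPeriodSec: 120,                     // measured application start-up time
            minNumReplicas: 2,
            maxNumReplicas: 25,
            mode: "ON",
            cpuUtilization: { utilizationTarget: 0.7 }, // scale on average CPU utilization
        },
    });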

    AutoscalingPolicyScaleDownControl, AutoscalingPolicyScaleDownControlArgs

    MaxScaledDownReplicas Pulumi.GoogleNative.Compute.Beta.Inputs.FixedOrPercent
    Maximum allowed number (or %) of VMs that can be deducted from the peak recommendation during the window the autoscaler looks at when computing recommendations. Possibly all of these VMs can be deleted at once, so your service needs to be prepared to lose that many VMs in one step.
    TimeWindowSec int
    How far back autoscaling looks when computing recommendations, to include directives regarding slower scale in, as described above.
    MaxScaledDownReplicas FixedOrPercent
    Maximum allowed number (or %) of VMs that can be deducted from the peak recommendation during the window the autoscaler looks at when computing recommendations. Possibly all of these VMs can be deleted at once, so your service needs to be prepared to lose that many VMs in one step.
    TimeWindowSec int
    How far back autoscaling looks when computing recommendations, to include directives regarding slower scale in, as described above.
    maxScaledDownReplicas FixedOrPercent
    Maximum allowed number (or %) of VMs that can be deducted from the peak recommendation during the window the autoscaler looks at when computing recommendations. Possibly all of these VMs can be deleted at once, so your service needs to be prepared to lose that many VMs in one step.
    timeWindowSec Integer
    How far back autoscaling looks when computing recommendations, to include directives regarding slower scale in, as described above.
    maxScaledDownReplicas FixedOrPercent
    Maximum allowed number (or %) of VMs that can be deducted from the peak recommendation during the window the autoscaler looks at when computing recommendations. Possibly all of these VMs can be deleted at once, so your service needs to be prepared to lose that many VMs in one step.
    timeWindowSec number
    How far back autoscaling looks when computing recommendations, to include directives regarding slower scale in, as described above.
    max_scaled_down_replicas FixedOrPercent
    Maximum allowed number (or %) of VMs that can be deducted from the peak recommendation during the window the autoscaler looks at when computing recommendations. Possibly all of these VMs can be deleted at once, so your service needs to be prepared to lose that many VMs in one step.
    time_window_sec int
    How far back autoscaling looks when computing recommendations, to include directives regarding slower scale in, as described above.
    maxScaledDownReplicas Property Map
    Maximum allowed number (or %) of VMs that can be deducted from the peak recommendation during the window the autoscaler looks at when computing recommendations. Possibly all of these VMs can be deleted at once, so your service needs to be prepared to lose that many VMs in one step.
    timeWindowSec Number
    How far back autoscaling looks when computing recommendations, to include directives regarding slower scale in, as described above.

    AutoscalingPolicyScaleDownControlResponse, AutoscalingPolicyScaleDownControlResponseArgs

    MaxScaledDownReplicas Pulumi.GoogleNative.Compute.Beta.Inputs.FixedOrPercentResponse
    Maximum allowed number (or %) of VMs that can be deducted from the peak recommendation during the window the autoscaler looks at when computing recommendations. Possibly all of these VMs can be deleted at once, so your service needs to be prepared to lose that many VMs in one step.
    TimeWindowSec int
    How far back autoscaling looks when computing recommendations, to include directives regarding slower scale in, as described above.
    MaxScaledDownReplicas FixedOrPercentResponse
    Maximum allowed number (or %) of VMs that can be deducted from the peak recommendation during the window the autoscaler looks at when computing recommendations. Possibly all of these VMs can be deleted at once, so your service needs to be prepared to lose that many VMs in one step.
    TimeWindowSec int
    How far back autoscaling looks when computing recommendations, to include directives regarding slower scale in, as described above.
    maxScaledDownReplicas FixedOrPercentResponse
    Maximum allowed number (or %) of VMs that can be deducted from the peak recommendation during the window the autoscaler looks at when computing recommendations. Possibly all of these VMs can be deleted at once, so your service needs to be prepared to lose that many VMs in one step.
    timeWindowSec Integer
    How far back autoscaling looks when computing recommendations, to include directives regarding slower scale in, as described above.
    maxScaledDownReplicas FixedOrPercentResponse
    Maximum allowed number (or %) of VMs that can be deducted from the peak recommendation during the window the autoscaler looks at when computing recommendations. Possibly all of these VMs can be deleted at once, so your service needs to be prepared to lose that many VMs in one step.
    timeWindowSec number
    How far back autoscaling looks when computing recommendations, to include directives regarding slower scale in, as described above.
    max_scaled_down_replicas FixedOrPercentResponse
    Maximum allowed number (or %) of VMs that can be deducted from the peak recommendation during the window the autoscaler looks at when computing recommendations. Possibly all of these VMs can be deleted at once, so your service needs to be prepared to lose that many VMs in one step.
    time_window_sec int
    How far back autoscaling looks when computing recommendations, to include directives regarding slower scale in, as described above.
    maxScaledDownReplicas Property Map
    Maximum allowed number (or %) of VMs that can be deducted from the peak recommendation during the window the autoscaler looks at when computing recommendations. Possibly all of these VMs can be deleted at once, so your service needs to be prepared to lose that many VMs in one step.
    timeWindowSec Number
    How far back autoscaling looks when computing recommendations, to include directives regarding slower scale in, as described above.

    AutoscalingPolicyScaleInControl, AutoscalingPolicyScaleInControlArgs

    MaxScaledInReplicas Pulumi.GoogleNative.Compute.Beta.Inputs.FixedOrPercent
    Maximum allowed number (or %) of VMs that can be deducted from the peak recommendation during the window the autoscaler looks at when computing recommendations. Possibly all of these VMs can be deleted at once, so your service needs to be prepared to lose that many VMs in one step.
    TimeWindowSec int
    How far back autoscaling looks when computing recommendations, to include directives regarding slower scale in, as described above.
    MaxScaledInReplicas FixedOrPercent
    Maximum allowed number (or %) of VMs that can be deducted from the peak recommendation during the window the autoscaler looks at when computing recommendations. Possibly all of these VMs can be deleted at once, so your service needs to be prepared to lose that many VMs in one step.
    TimeWindowSec int
    How far back autoscaling looks when computing recommendations, to include directives regarding slower scale in, as described above.
    maxScaledInReplicas FixedOrPercent
    Maximum allowed number (or %) of VMs that can be deducted from the peak recommendation during the window the autoscaler looks at when computing recommendations. Possibly all of these VMs can be deleted at once, so your service needs to be prepared to lose that many VMs in one step.
    timeWindowSec Integer
    How far back autoscaling looks when computing recommendations, to include directives regarding slower scale in, as described above.
    maxScaledInReplicas FixedOrPercent
    Maximum allowed number (or %) of VMs that can be deducted from the peak recommendation during the window the autoscaler looks at when computing recommendations. Possibly all of these VMs can be deleted at once, so your service needs to be prepared to lose that many VMs in one step.
    timeWindowSec number
    How far back autoscaling looks when computing recommendations, to include directives regarding slower scale in, as described above.
    max_scaled_in_replicas FixedOrPercent
    Maximum allowed number (or %) of VMs that can be deducted from the peak recommendation during the window the autoscaler looks at when computing recommendations. Possibly all of these VMs can be deleted at once, so your service needs to be prepared to lose that many VMs in one step.
    time_window_sec int
    How far back autoscaling looks when computing recommendations, to include directives regarding slower scale in, as described above.
    maxScaledInReplicas Property Map
    Maximum allowed number (or %) of VMs that can be deducted from the peak recommendation during the window the autoscaler looks at when computing recommendations. Possibly all of these VMs can be deleted at once, so your service needs to be prepared to lose that many VMs in one step.
    timeWindowSec Number
    How far back autoscaling looks when computing recommendations, to include directives regarding slower scale in, as described above.
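    As a sketch of how this control attaches to a policy (TypeScript, placeholder names and URL throughout), the cap below limits how many VMs the autoscaler may remove within the trailing window.

    import * as google_native from "@pulumi/google-native";

    // Hypothetical placeholders; the cap limits how many VMs can be removed per window.
    const batchAutoscaler = new google_native.compute.beta.RegionAutoscaler("batch-autoscaler", {
        region: "us-central1",
        target: "https://www.googleapis.com/compute/beta/projects/my-project/regions/us-central1/instanceGroupManagers/batch-igm",
        autoscalingPolicy: {
            minNumReplicas: 5,
            maxNumReplicas: 50,
            cpuUtilization: { utilizationTarget: 0.6 },
            scaleInControl: {
                maxScaledInReplicas: { fixed: 5 }, // shed at most 5 VMs per window
                timeWindowSec: 300,                // 5-minute trailing window
            },
        },
    });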

    AutoscalingPolicyScaleInControlResponse, AutoscalingPolicyScaleInControlResponseArgs

    MaxScaledInReplicas Pulumi.GoogleNative.Compute.Beta.Inputs.FixedOrPercentResponse
    Maximum allowed number (or %) of VMs that can be deducted from the peak recommendation during the window the autoscaler looks at when computing recommendations. Possibly all of these VMs can be deleted at once, so your service needs to be prepared to lose that many VMs in one step.
    TimeWindowSec int
    How far back autoscaling looks when computing recommendations, to include directives regarding slower scale in, as described above.
    MaxScaledInReplicas FixedOrPercentResponse
    Maximum allowed number (or %) of VMs that can be deducted from the peak recommendation during the window the autoscaler looks at when computing recommendations. Possibly all of these VMs can be deleted at once, so your service needs to be prepared to lose that many VMs in one step.
    TimeWindowSec int
    How far back autoscaling looks when computing recommendations, to include directives regarding slower scale in, as described above.
    maxScaledInReplicas FixedOrPercentResponse
    Maximum allowed number (or %) of VMs that can be deducted from the peak recommendation during the window the autoscaler looks at when computing recommendations. Possibly all of these VMs can be deleted at once, so your service needs to be prepared to lose that many VMs in one step.
    timeWindowSec Integer
    How far back autoscaling looks when computing recommendations, to include directives regarding slower scale in, as described above.
    maxScaledInReplicas FixedOrPercentResponse
    Maximum allowed number (or %) of VMs that can be deducted from the peak recommendation during the window the autoscaler looks at when computing recommendations. Possibly all of these VMs can be deleted at once, so your service needs to be prepared to lose that many VMs in one step.
    timeWindowSec number
    How far back autoscaling looks when computing recommendations, to include directives regarding slower scale in, as described above.
    max_scaled_in_replicas FixedOrPercentResponse
    Maximum allowed number (or %) of VMs that can be deducted from the peak recommendation during the window the autoscaler looks at when computing recommendations. Possibly all of these VMs can be deleted at once, so your service needs to be prepared to lose that many VMs in one step.
    time_window_sec int
    How far back autoscaling looks when computing recommendations, to include directives regarding slower scale in, as described above.
    maxScaledInReplicas Property Map
    Maximum allowed number (or %) of VMs that can be deducted from the peak recommendation during the window the autoscaler looks at when computing recommendations. Possibly all of these VMs can be deleted at once, so your service needs to be prepared to lose that many VMs in one step.
    timeWindowSec Number
    How far back autoscaling looks when computing recommendations, to include directives regarding slower scale in, as described above.

    FixedOrPercent, FixedOrPercentArgs

    Fixed int
    Specifies a fixed number of VM instances. This must be a positive integer.
    Percent int
    Specifies a percentage of instances between 0 and 100%, inclusive. For example, specify 80 for 80%.
    Fixed int
    Specifies a fixed number of VM instances. This must be a positive integer.
    Percent int
    Specifies a percentage of instances between 0 and 100%, inclusive. For example, specify 80 for 80%.
    fixed Integer
    Specifies a fixed number of VM instances. This must be a positive integer.
    percent Integer
    Specifies a percentage of instances between 0 and 100%, inclusive. For example, specify 80 for 80%.
    fixed number
    Specifies a fixed number of VM instances. This must be a positive integer.
    percent number
    Specifies a percentage of instances between 0 and 100%, inclusive. For example, specify 80 for 80%.
    fixed int
    Specifies a fixed number of VM instances. This must be a positive integer.
    percent int
    Specifies a percentage of instances between 0 and 100%, inclusive. For example, specify 80 for 80%.
    fixed Number
    Specifies a fixed number of VM instances. This must be a positive integer.
    percent Number
    Specifies a percentage of instances between 0 and 100%, inclusive. For example, specify 80 for 80%.

    FixedOrPercentResponse, FixedOrPercentResponseArgs

    Calculated int
    Absolute value of VM instances calculated based on the specific mode. - If the value is fixed, then the calculated value is equal to the fixed value. - If the value is a percent, then the calculated value is percent/100 * targetSize. For example, the calculated value for 80% of a managed instance group with 150 instances would be (80/100 * 150) = 120 VM instances. If there is a remainder, the number is rounded.
    Fixed int
    Specifies a fixed number of VM instances. This must be a positive integer.
    Percent int
    Specifies a percentage of instances between 0 and 100%, inclusive. For example, specify 80 for 80%.
    Calculated int
    Absolute value of VM instances calculated based on the specific mode. - If the value is fixed, then the calculated value is equal to the fixed value. - If the value is a percent, then the calculated value is percent/100 * targetSize. For example, the calculated value for 80% of a managed instance group with 150 instances would be (80/100 * 150) = 120 VM instances. If there is a remainder, the number is rounded.
    Fixed int
    Specifies a fixed number of VM instances. This must be a positive integer.
    Percent int
    Specifies a percentage of instances between 0 and 100%, inclusive. For example, specify 80 for 80%.
    calculated Integer
    Absolute value of VM instances calculated based on the specific mode. - If the value is fixed, then the calculated value is equal to the fixed value. - If the value is a percent, then the calculated value is percent/100 * targetSize. For example, the calculated value for 80% of a managed instance group with 150 instances would be (80/100 * 150) = 120 VM instances. If there is a remainder, the number is rounded.
    fixed Integer
    Specifies a fixed number of VM instances. This must be a positive integer.
    percent Integer
    Specifies a percentage of instances between 0 and 100%, inclusive. For example, specify 80 for 80%.
    calculated number
    Absolute value of VM instances calculated based on the specific mode. - If the value is fixed, then the calculated value is equal to the fixed value. - If the value is a percent, then the calculated value is percent/100 * targetSize. For example, the calculated value for 80% of a managed instance group with 150 instances would be (80/100 * 150) = 120 VM instances. If there is a remainder, the number is rounded.
    fixed number
    Specifies a fixed number of VM instances. This must be a positive integer.
    percent number
    Specifies a percentage of instances between 0 and 100%, inclusive. For example, specify 80 for 80%.
    calculated int
    Absolute value of VM instances calculated based on the specific mode. - If the value is fixed, then the calculated value is equal to the fixed value. - If the value is a percent, then the calculated value is percent/100 * targetSize. For example, the calculated value for 80% of a managed instance group with 150 instances would be (80/100 * 150) = 120 VM instances. If there is a remainder, the number is rounded.
    fixed int
    Specifies a fixed number of VM instances. This must be a positive integer.
    percent int
    Specifies a percentage of instances between 0 and 100%, inclusive. For example, specify 80 for 80%.
    calculated Number
    Absolute value of VM instances calculated based on the specific mode. - If the value is fixed, then the calculated value is equal to the fixed value. - If the value is a percent, then the calculated value is percent/100 * targetSize. For example, the calculated value for 80% of a managed instance group with 150 instances would be (80/100 * 150) = 120 VM instances. If there is a remainder, the number is rounded.
    fixed Number
    Specifies a fixed number of VM instances. This must be a positive integer.
    percent Number
    Specifies a percentage of instances between 0 and 100%, inclusive. For example, specify 80 for 80%.
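    The calculated field described above is plain arithmetic on the chosen form. A tiny TypeScript sketch of the percent case follows; the exact rounding direction for non-integer results is not specified on this page, so Math.round is used purely for illustration.

    // Worked percent case from the description: 80% of a 150-instance group.
    const percent = 80;
    const targetSize = 150;
    // (80 / 100) * 150 = 120 instances; rounding shown here is only illustrative.
    const calculated = Math.round((percent / 100) * targetSize); // 120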

    Package Details

    Repository
    Google Cloud Native pulumi/pulumi-google-native
    License
    Apache-2.0