Google Cloud Native is in preview. Google Cloud Classic is fully supported.
Google Cloud Native v0.32.0 published on Wednesday, Nov 29, 2023 by Pulumi
google-native.datalabeling/v1beta1.getEvaluationJob
Gets an evaluation job by resource name.
Using getEvaluationJob
Two invocation forms are available. The direct form accepts plain arguments and either blocks until the result value is available, or returns a Promise-wrapped result. The output form accepts Input-wrapped arguments and returns an Output-wrapped result.
function getEvaluationJob(args: GetEvaluationJobArgs, opts?: InvokeOptions): Promise<GetEvaluationJobResult>
function getEvaluationJobOutput(args: GetEvaluationJobOutputArgs, opts?: InvokeOptions): Output<GetEvaluationJobResult>
def get_evaluation_job(evaluation_job_id: Optional[str] = None,
project: Optional[str] = None,
opts: Optional[InvokeOptions] = None) -> GetEvaluationJobResult
def get_evaluation_job_output(evaluation_job_id: Optional[pulumi.Input[str]] = None,
project: Optional[pulumi.Input[str]] = None,
opts: Optional[InvokeOptions] = None) -> Output[GetEvaluationJobResult]
func LookupEvaluationJob(ctx *Context, args *LookupEvaluationJobArgs, opts ...InvokeOption) (*LookupEvaluationJobResult, error)
func LookupEvaluationJobOutput(ctx *Context, args *LookupEvaluationJobOutputArgs, opts ...InvokeOption) LookupEvaluationJobResultOutput
> Note: This function is named LookupEvaluationJob in the Go SDK.
public static class GetEvaluationJob
{
public static Task<GetEvaluationJobResult> InvokeAsync(GetEvaluationJobArgs args, InvokeOptions? opts = null)
public static Output<GetEvaluationJobResult> Invoke(GetEvaluationJobInvokeArgs args, InvokeOptions? opts = null)
}
public static CompletableFuture<GetEvaluationJobResult> getEvaluationJob(GetEvaluationJobArgs args, InvokeOptions options)
// Output-based functions aren't available in Java yet
fn::invoke:
function: google-native:datalabeling/v1beta1:getEvaluationJob
arguments:
# arguments dictionary
The following arguments are supported (names follow each SDK's naming convention, e.g. `evaluationJobId` in Node.js and `evaluation_job_id` in Python):
- EvaluationJobId string
- Project string
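As an example, the `fn::invoke` skeleton above might be filled in as follows; the job ID and project values here are hypothetical placeholders:

```yaml
variables:
  evaluationJob:
    fn::invoke:
      function: google-native:datalabeling/v1beta1:getEvaluationJob
      arguments:
        evaluationJobId: my-evaluation-job   # hypothetical job ID
        project: my-gcp-project              # hypothetical project
```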
getEvaluationJob Result
The following output properties are available (names follow each SDK's naming convention, e.g. `createTime` in Node.js and `create_time` in Python):

- AnnotationSpecSet string - Name of the AnnotationSpecSet describing all the labels that your machine learning model outputs. You must create this resource before you create an evaluation job and provide its name in the following format: "projects/{project_id}/annotationSpecSets/{annotation_spec_set_id}"
- Attempts List<GoogleCloudDatalabelingV1beta1AttemptResponse> - Every time the evaluation job runs and an error occurs, the failed attempt is appended to this array.
- CreateTime string - Timestamp of when this evaluation job was created.
- Description string - Description of the job. The description can be up to 25,000 characters long.
- EvaluationJobConfig GoogleCloudDatalabelingV1beta1EvaluationJobConfigResponse - Configuration details for the evaluation job.
- LabelMissingGroundTruth bool - Whether you want Data Labeling Service to provide ground truth labels for prediction input. If you want the service to assign human labelers to annotate your data, set this to true. If you want to provide your own ground truth labels in the evaluation job's BigQuery table, set this to false.
- ModelVersion string - The AI Platform Prediction model version to be evaluated. Prediction input and output is sampled from this model version. When creating an evaluation job, specify the model version in the following format: "projects/{project_id}/models/{model_name}/versions/{version_name}" There can only be one evaluation job per model version.
- Name string - After you create a job, Data Labeling Service assigns a name to the job with the following format: "projects/{project_id}/evaluationJobs/{evaluation_job_id}"
- Schedule string - Describes the interval at which the job runs. This interval must be at least 1 day, and it is rounded to the nearest day. For example, if you specify a 50-hour interval, the job runs every 2 days. You can provide the schedule in crontab format or in an English-like format. Regardless of what you specify, the job will run at 10:00 AM UTC. Only the interval from this schedule is used, not the specific time of day.
- State string - Describes the current state of the job.
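The schedule rounding rule can be illustrated with a small helper; this is an illustration of the documented behavior (nearest-day rounding with a one-day minimum), not part of any SDK:

```python
def effective_interval_days(interval_hours: float) -> int:
    # The service rounds the requested interval to the nearest day and
    # enforces a minimum of one day, per the schedule description above.
    return max(1, round(interval_hours / 24))

print(effective_interval_days(50))  # a 50-hour interval runs every 2 days
print(effective_interval_days(10))  # below the minimum, clamped to 1 day
```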
Supporting Types
GoogleCloudDatalabelingV1beta1AttemptResponse
- AttemptTime string
- PartialFailures List<GoogleRpcStatusResponse> - Details of errors that occurred.
GoogleCloudDatalabelingV1beta1BigQuerySourceResponse
- InputUri string - BigQuery URI to a table, up to 2,000 characters long. If you specify the URI of a table that does not exist, Data Labeling Service creates a table at the URI with the correct schema when you create your EvaluationJob. If you specify the URI of a table that already exists, it must have the correct schema. Provide the table URI in the following format: "bq://{your_project_id}/{your_dataset_name}/{your_table_name}"
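The documented table-URI shape can be checked client-side before creating a job. The following sketch is a hypothetical helper derived only from the format string above; it is not a function provided by any SDK:

```python
import re

# Pattern assumed from the documented shape
# "bq://{your_project_id}/{your_dataset_name}/{your_table_name}".
_BQ_TABLE_RE = re.compile(r"^bq://([^/]+)/([^/]+)/([^/]+)$")

def parse_bq_table_uri(uri: str) -> dict:
    if len(uri) > 2000:
        raise ValueError("BigQuery table URIs are limited to 2,000 characters")
    m = _BQ_TABLE_RE.match(uri)
    if not m:
        raise ValueError(f"not a bq:// table URI: {uri!r}")
    project, dataset, table = m.groups()
    return {"project": project, "dataset": dataset, "table": table}

print(parse_bq_table_uri("bq://my-project/my_dataset/predictions"))
```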
GoogleCloudDatalabelingV1beta1BoundingBoxEvaluationOptionsResponse
- IouThreshold double - Minimum intersection-over-union (IOU) required for 2 bounding boxes to be considered a match. This must be a number between 0 and 1.
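For intuition about what the IOU threshold compares, here is a minimal intersection-over-union computation for axis-aligned boxes given as (x1, y1, x2, y2) tuples. This is illustrative only and makes no claim about the service's internal implementation:

```python
def iou(box_a: tuple, box_b: tuple) -> float:
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # Overlap extents; zero if the boxes do not intersect.
    inter_w = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    inter_h = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = inter_w * inter_h
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    return inter / union if union else 0.0

# Two half-overlapping unit squares: intersection 0.5, union 1.5, IOU = 1/3.
print(iou((0, 0, 1, 1), (0.5, 0, 1.5, 1)))
```

A pair of boxes counts as a match only when this value meets or exceeds the configured `iouThreshold`.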
GoogleCloudDatalabelingV1beta1BoundingPolyConfigResponse
- AnnotationSpecSet string - Annotation spec set resource name.
- InstructionMessage string - Optional. Instruction message shown in the contributors' UI.
GoogleCloudDatalabelingV1beta1ClassificationMetadataResponse
- IsMultiLabel bool - Whether the classification task is multi-label or not.
GoogleCloudDatalabelingV1beta1EvaluationConfigResponse
- BoundingBoxEvaluationOptions GoogleCloudDatalabelingV1beta1BoundingBoxEvaluationOptionsResponse - Only specify this field if the related model performs image object detection (IMAGE_BOUNDING_BOX_ANNOTATION). Describes how to evaluate bounding boxes.
GoogleCloudDatalabelingV1beta1EvaluationJobAlertConfigResponse
- Email string - An email address to send alerts to.
- MinAcceptableMeanAveragePrecision double - A number between 0 and 1 that describes a minimum mean average precision threshold. When the evaluation job runs, if it calculates that your model version's predictions from the recent interval have meanAveragePrecision below this threshold, then it sends an alert to your specified email.
GoogleCloudDatalabelingV1beta1EvaluationJobConfigResponse
- BigqueryImportKeys Dictionary<string, string> - Prediction keys that tell Data Labeling Service where to find the data for evaluation in your BigQuery table. When the service samples prediction input and output from your model version and saves it to BigQuery, the data gets stored as JSON strings in the BigQuery table. These keys tell Data Labeling Service how to parse the JSON. You can provide the following entries in this field: * data_json_key: the data key for prediction input. You must provide either this key or reference_json_key. * reference_json_key: the data reference key for prediction input. You must provide either this key or data_json_key. * label_json_key: the label key for prediction output. Required. * label_score_json_key: the score key for prediction output. Required. * bounding_box_json_key: the bounding box key for prediction output. Required if your model version performs image object detection.
- BoundingPolyConfig GoogleCloudDatalabelingV1beta1BoundingPolyConfigResponse - Specify this field if your model version performs image object detection (bounding box detection). annotationSpecSet in this configuration must match EvaluationJob.annotationSpecSet.
- EvaluationConfig GoogleCloudDatalabelingV1beta1EvaluationConfigResponse - Details for calculating evaluation metrics and creating Evaluations. If your model version performs image object detection, you must specify the boundingBoxEvaluationOptions field within this configuration. Otherwise, provide an empty object for this configuration.
- EvaluationJobAlertConfig GoogleCloudDatalabelingV1beta1EvaluationJobAlertConfigResponse - Optional. Configuration details for evaluation job alerts. Specify this field if you want to receive email alerts if the evaluation job finds that your predictions have low mean average precision during a run.
- ExampleCount int - The maximum number of predictions to sample and save to BigQuery during each evaluation interval. This limit overrides example_sample_percentage: even if the service has not sampled enough predictions to fulfill example_sample_percentage during an interval, it stops sampling predictions when it meets this limit.
- ExampleSamplePercentage double - Fraction of predictions to sample and save to BigQuery during each evaluation interval. For example, 0.1 means 10% of predictions served by your model version get saved to BigQuery.
- HumanAnnotationConfig GoogleCloudDatalabelingV1beta1HumanAnnotationConfigResponse - Optional. Details for human annotation of your data. If you set labelMissingGroundTruth to true for this evaluation job, then you must specify this field. If you plan to provide your own ground truth labels, then omit this field. Note that you must create an Instruction resource before you can specify this field. Provide the name of the instruction resource in the instruction field within this configuration.
- ImageClassificationConfig GoogleCloudDatalabelingV1beta1ImageClassificationConfigResponse - Specify this field if your model version performs image classification or general classification. annotationSpecSet in this configuration must match EvaluationJob.annotationSpecSet. allowMultiLabel in this configuration must match classificationMetadata.isMultiLabel in input_config.
- InputConfig GoogleCloudDatalabelingV1beta1InputConfigResponse - Required. Details for the sampled prediction input. Within this configuration, there are requirements for several fields: * dataType must be one of IMAGE, TEXT, or GENERAL_DATA. * annotationType must be one of IMAGE_CLASSIFICATION_ANNOTATION, TEXT_CLASSIFICATION_ANNOTATION, GENERAL_CLASSIFICATION_ANNOTATION, or IMAGE_BOUNDING_BOX_ANNOTATION (image object detection). * If your machine learning model performs classification, you must specify classificationMetadata.isMultiLabel. * You must specify bigquerySource (not gcsSource).
- TextClassificationConfig GoogleCloudDatalabelingV1beta1TextClassificationConfigResponse - Specify this field if your model version performs text classification. annotationSpecSet in this configuration must match EvaluationJob.annotationSpecSet. allowMultiLabel in this configuration must match classificationMetadata.isMultiLabel in input_config.
annotationSpecSet
in this configuration must match EvaluationJob.annotationSpecSet.allowMultiLabel
in this configuration must matchclassificationMetadata.isMultiLabel
in input_config.
- bigqueryImportKeys Map&lt;String,String&gt; - Prediction keys that tell Data Labeling Service where to find the data for evaluation in your BigQuery table. When the service samples prediction input and output from your model version and saves it to BigQuery, the data gets stored as JSON strings in the BigQuery table. These keys tell Data Labeling Service how to parse the JSON. You can provide the following entries in this field: * data_json_key: the data key for prediction input. You must provide either this key or reference_json_key. * reference_json_key: the data reference key for prediction input. You must provide either this key or data_json_key. * label_json_key: the label key for prediction output. Required. * label_score_json_key: the score key for prediction output. Required. * bounding_box_json_key: the bounding box key for prediction output. Required if your model version performs image object detection. Learn how to configure prediction keys.
- boundingPolyConfig GoogleCloudDatalabelingV1beta1BoundingPolyConfigResponse - Specify this field if your model version performs image object detection (bounding box detection). annotationSpecSet in this configuration must match EvaluationJob.annotationSpecSet.
- evaluationConfig GoogleCloudDatalabelingV1beta1EvaluationConfigResponse - Details for calculating evaluation metrics and creating Evaluations. If your model version performs image object detection, you must specify the boundingBoxEvaluationOptions field within this configuration. Otherwise, provide an empty object for this configuration.
- evaluationJobAlertConfig GoogleCloudDatalabelingV1beta1EvaluationJobAlertConfigResponse - Optional. Configuration details for evaluation job alerts. Specify this field if you want to receive email alerts if the evaluation job finds that your predictions have low mean average precision during a run.
- exampleCount Integer - The maximum number of predictions to sample and save to BigQuery during each evaluation interval. This limit overrides example_sample_percentage: even if the service has not sampled enough predictions to fulfill example_sample_percentage during an interval, it stops sampling predictions when it meets this limit.
- exampleSamplePercentage Double - Fraction of predictions to sample and save to BigQuery during each evaluation interval. For example, 0.1 means 10% of predictions served by your model version get saved to BigQuery.
- humanAnnotationConfig GoogleCloudDatalabelingV1beta1HumanAnnotationConfigResponse - Optional. Details for human annotation of your data. If you set labelMissingGroundTruth to true for this evaluation job, then you must specify this field. If you plan to provide your own ground truth labels, then omit this field. Note that you must create an Instruction resource before you can specify this field. Provide the name of the instruction resource in the instruction field within this configuration.
- imageClassificationConfig GoogleCloudDatalabelingV1beta1ImageClassificationConfigResponse - Specify this field if your model version performs image classification or general classification. annotationSpecSet in this configuration must match EvaluationJob.annotationSpecSet. allowMultiLabel in this configuration must match classificationMetadata.isMultiLabel in input_config.
- inputConfig GoogleCloudDatalabelingV1beta1InputConfigResponse - Required. Details for the sampled prediction input. Within this configuration, there are requirements for several fields: * dataType must be one of IMAGE, TEXT, or GENERAL_DATA. * annotationType must be one of IMAGE_CLASSIFICATION_ANNOTATION, TEXT_CLASSIFICATION_ANNOTATION, GENERAL_CLASSIFICATION_ANNOTATION, or IMAGE_BOUNDING_BOX_ANNOTATION (image object detection). * If your machine learning model performs classification, you must specify classificationMetadata.isMultiLabel. * You must specify bigquerySource (not gcsSource).
- textClassificationConfig GoogleCloudDatalabelingV1beta1TextClassificationConfigResponse - Specify this field if your model version performs text classification. annotationSpecSet in this configuration must match EvaluationJob.annotationSpecSet. allowMultiLabel in this configuration must match classificationMetadata.isMultiLabel in input_config.
- bigqueryImportKeys {[key: string]: string} - Prediction keys that tell Data Labeling Service where to find the data for evaluation in your BigQuery table. When the service samples prediction input and output from your model version and saves it to BigQuery, the data gets stored as JSON strings in the BigQuery table. These keys tell Data Labeling Service how to parse the JSON. You can provide the following entries in this field: * data_json_key: the data key for prediction input. You must provide either this key or reference_json_key. * reference_json_key: the data reference key for prediction input. You must provide either this key or data_json_key. * label_json_key: the label key for prediction output. Required. * label_score_json_key: the score key for prediction output. Required. * bounding_box_json_key: the bounding box key for prediction output. Required if your model version performs image object detection. Learn how to configure prediction keys.
- boundingPolyConfig GoogleCloudDatalabelingV1beta1BoundingPolyConfigResponse - Specify this field if your model version performs image object detection (bounding box detection). annotationSpecSet in this configuration must match EvaluationJob.annotationSpecSet.
- evaluationConfig GoogleCloudDatalabelingV1beta1EvaluationConfigResponse - Details for calculating evaluation metrics and creating Evaluations. If your model version performs image object detection, you must specify the boundingBoxEvaluationOptions field within this configuration. Otherwise, provide an empty object for this configuration.
- evaluationJobAlertConfig GoogleCloudDatalabelingV1beta1EvaluationJobAlertConfigResponse - Optional. Configuration details for evaluation job alerts. Specify this field if you want to receive email alerts if the evaluation job finds that your predictions have low mean average precision during a run.
- exampleCount number - The maximum number of predictions to sample and save to BigQuery during each evaluation interval. This limit overrides example_sample_percentage: even if the service has not sampled enough predictions to fulfill example_sample_percentage during an interval, it stops sampling predictions when it meets this limit.
- exampleSamplePercentage number - Fraction of predictions to sample and save to BigQuery during each evaluation interval. For example, 0.1 means 10% of predictions served by your model version get saved to BigQuery.
- humanAnnotationConfig GoogleCloudDatalabelingV1beta1HumanAnnotationConfigResponse - Optional. Details for human annotation of your data. If you set labelMissingGroundTruth to true for this evaluation job, then you must specify this field. If you plan to provide your own ground truth labels, then omit this field. Note that you must create an Instruction resource before you can specify this field. Provide the name of the instruction resource in the instruction field within this configuration.
- imageClassificationConfig GoogleCloudDatalabelingV1beta1ImageClassificationConfigResponse - Specify this field if your model version performs image classification or general classification. annotationSpecSet in this configuration must match EvaluationJob.annotationSpecSet. allowMultiLabel in this configuration must match classificationMetadata.isMultiLabel in input_config.
- inputConfig GoogleCloudDatalabelingV1beta1InputConfigResponse - Required. Details for the sampled prediction input. Within this configuration, there are requirements for several fields: * dataType must be one of IMAGE, TEXT, or GENERAL_DATA. * annotationType must be one of IMAGE_CLASSIFICATION_ANNOTATION, TEXT_CLASSIFICATION_ANNOTATION, GENERAL_CLASSIFICATION_ANNOTATION, or IMAGE_BOUNDING_BOX_ANNOTATION (image object detection). * If your machine learning model performs classification, you must specify classificationMetadata.isMultiLabel. * You must specify bigquerySource (not gcsSource).
- textClassificationConfig GoogleCloudDatalabelingV1beta1TextClassificationConfigResponse - Specify this field if your model version performs text classification. annotationSpecSet in this configuration must match EvaluationJob.annotationSpecSet. allowMultiLabel in this configuration must match classificationMetadata.isMultiLabel in input_config.
- bigquery_import_keys Mapping[str, str] - Prediction keys that tell Data Labeling Service where to find the data for evaluation in your BigQuery table. When the service samples prediction input and output from your model version and saves it to BigQuery, the data gets stored as JSON strings in the BigQuery table. These keys tell Data Labeling Service how to parse the JSON. You can provide the following entries in this field: * data_json_key: the data key for prediction input. You must provide either this key or reference_json_key. * reference_json_key: the data reference key for prediction input. You must provide either this key or data_json_key. * label_json_key: the label key for prediction output. Required. * label_score_json_key: the score key for prediction output. Required. * bounding_box_json_key: the bounding box key for prediction output. Required if your model version performs image object detection. Learn how to configure prediction keys.
- bounding_poly_config GoogleCloudDatalabelingV1beta1BoundingPolyConfigResponse - Specify this field if your model version performs image object detection (bounding box detection). annotationSpecSet in this configuration must match EvaluationJob.annotationSpecSet.
- evaluation_config GoogleCloudDatalabelingV1beta1EvaluationConfigResponse - Details for calculating evaluation metrics and creating Evaluations. If your model version performs image object detection, you must specify the boundingBoxEvaluationOptions field within this configuration. Otherwise, provide an empty object for this configuration.
- evaluation_job_alert_config GoogleCloudDatalabelingV1beta1EvaluationJobAlertConfigResponse - Optional. Configuration details for evaluation job alerts. Specify this field if you want to receive email alerts if the evaluation job finds that your predictions have low mean average precision during a run.
- example_count int - The maximum number of predictions to sample and save to BigQuery during each evaluation interval. This limit overrides example_sample_percentage: even if the service has not sampled enough predictions to fulfill example_sample_percentage during an interval, it stops sampling predictions when it meets this limit.
- example_sample_percentage float - Fraction of predictions to sample and save to BigQuery during each evaluation interval. For example, 0.1 means 10% of predictions served by your model version get saved to BigQuery.
- human_annotation_config GoogleCloudDatalabelingV1beta1HumanAnnotationConfigResponse - Optional. Details for human annotation of your data. If you set labelMissingGroundTruth to true for this evaluation job, then you must specify this field. If you plan to provide your own ground truth labels, then omit this field. Note that you must create an Instruction resource before you can specify this field. Provide the name of the instruction resource in the instruction field within this configuration.
- image_classification_config GoogleCloudDatalabelingV1beta1ImageClassificationConfigResponse - Specify this field if your model version performs image classification or general classification. annotationSpecSet in this configuration must match EvaluationJob.annotationSpecSet. allowMultiLabel in this configuration must match classificationMetadata.isMultiLabel in input_config.
- input_config GoogleCloudDatalabelingV1beta1InputConfigResponse - Required. Details for the sampled prediction input. Within this configuration, there are requirements for several fields: * dataType must be one of IMAGE, TEXT, or GENERAL_DATA. * annotationType must be one of IMAGE_CLASSIFICATION_ANNOTATION, TEXT_CLASSIFICATION_ANNOTATION, GENERAL_CLASSIFICATION_ANNOTATION, or IMAGE_BOUNDING_BOX_ANNOTATION (image object detection). * If your machine learning model performs classification, you must specify classificationMetadata.isMultiLabel. * You must specify bigquerySource (not gcsSource).
- text_classification_config GoogleCloudDatalabelingV1beta1TextClassificationConfigResponse - Specify this field if your model version performs text classification. annotationSpecSet in this configuration must match EvaluationJob.annotationSpecSet. allowMultiLabel in this configuration must match classificationMetadata.isMultiLabel in input_config.
- bigqueryImportKeys Map&lt;String&gt; - Prediction keys that tell Data Labeling Service where to find the data for evaluation in your BigQuery table. When the service samples prediction input and output from your model version and saves it to BigQuery, the data gets stored as JSON strings in the BigQuery table. These keys tell Data Labeling Service how to parse the JSON. You can provide the following entries in this field: * data_json_key: the data key for prediction input. You must provide either this key or reference_json_key. * reference_json_key: the data reference key for prediction input. You must provide either this key or data_json_key. * label_json_key: the label key for prediction output. Required. * label_score_json_key: the score key for prediction output. Required. * bounding_box_json_key: the bounding box key for prediction output. Required if your model version performs image object detection. Learn how to configure prediction keys.
- boundingPolyConfig Property Map - Specify this field if your model version performs image object detection (bounding box detection). annotationSpecSet in this configuration must match EvaluationJob.annotationSpecSet.
- evaluationConfig Property Map - Details for calculating evaluation metrics and creating Evaluations. If your model version performs image object detection, you must specify the boundingBoxEvaluationOptions field within this configuration. Otherwise, provide an empty object for this configuration.
- evaluationJobAlertConfig Property Map - Optional. Configuration details for evaluation job alerts. Specify this field if you want to receive email alerts if the evaluation job finds that your predictions have low mean average precision during a run.
- exampleCount Number - The maximum number of predictions to sample and save to BigQuery during each evaluation interval. This limit overrides example_sample_percentage: even if the service has not sampled enough predictions to fulfill example_sample_percentage during an interval, it stops sampling predictions when it meets this limit.
- exampleSamplePercentage Number - Fraction of predictions to sample and save to BigQuery during each evaluation interval. For example, 0.1 means 10% of predictions served by your model version get saved to BigQuery.
- humanAnnotationConfig Property Map - Optional. Details for human annotation of your data. If you set labelMissingGroundTruth to true for this evaluation job, then you must specify this field. If you plan to provide your own ground truth labels, then omit this field. Note that you must create an Instruction resource before you can specify this field. Provide the name of the instruction resource in the instruction field within this configuration.
- imageClassificationConfig Property Map - Specify this field if your model version performs image classification or general classification. annotationSpecSet in this configuration must match EvaluationJob.annotationSpecSet. allowMultiLabel in this configuration must match classificationMetadata.isMultiLabel in input_config.
- inputConfig Property Map - Required. Details for the sampled prediction input. Within this configuration, there are requirements for several fields: * dataType must be one of IMAGE, TEXT, or GENERAL_DATA. * annotationType must be one of IMAGE_CLASSIFICATION_ANNOTATION, TEXT_CLASSIFICATION_ANNOTATION, GENERAL_CLASSIFICATION_ANNOTATION, or IMAGE_BOUNDING_BOX_ANNOTATION (image object detection). * If your machine learning model performs classification, you must specify classificationMetadata.isMultiLabel. * You must specify bigquerySource (not gcsSource).
- textClassificationConfig Property Map - Specify this field if your model version performs text classification. annotationSpecSet in this configuration must match EvaluationJob.annotationSpecSet. allowMultiLabel in this configuration must match classificationMetadata.isMultiLabel in input_config.
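As a sketch of the parsing behavior described for bigqueryImportKeys: the service stores each sampled prediction as a JSON string in BigQuery, and the import keys say which JSON field plays which role. The helper and the field names on the right-hand side ("input", "label", "score") below are hypothetical illustrations, not fixed by the service.

```python
import json

# Hypothetical bigqueryImportKeys mapping: role -> field name in the stored JSON.
bigquery_import_keys = {
    "data_json_key": "input",
    "label_json_key": "label",
    "label_score_json_key": "score",
}

def parse_sampled_row(row_json: str, import_keys: dict) -> dict:
    """Extract prediction input/output fields from one sampled BigQuery row,
    using the configured import keys to locate each field in the JSON."""
    row = json.loads(row_json)
    return {role: row[field] for role, field in import_keys.items() if field in row}

sample = '{"input": "gs://bucket/img.png", "label": "cat", "score": 0.93}'
parsed = parse_sampled_row(sample, bigquery_import_keys)
```

Note that data_json_key and reference_json_key are mutually exclusive in the real configuration; only one would appear in the mapping.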
GoogleCloudDatalabelingV1beta1GcsSourceResponse
GoogleCloudDatalabelingV1beta1HumanAnnotationConfigResponse
- AnnotatedDatasetDescription string - Optional. A human-readable description for AnnotatedDataset. The description can be up to 10000 characters long.
- AnnotatedDatasetDisplayName string - A human-readable name for AnnotatedDataset defined by users. Maximum of 64 characters.
- ContributorEmails List&lt;string&gt; - Optional. If you want your own labeling contributors to manage and work on this labeling request, you can set these contributors here. We will give them access to the question types in crowdcompute. Note that these emails must be registered in the crowdcompute worker UI: https://crowd-compute.appspot.com/
- Instruction string - Instruction resource name.
- LabelGroup string - Optional. A human-readable label used to logically group labeling tasks. This string must match the regular expression [a-zA-Z\\d_-]{0,128}.
- LanguageCode string - Optional. The language of this question, as a BCP-47 language code. Default value is en-US. Only set this when the task is language related, for example French text classification.
- QuestionDuration string - Optional. Maximum duration for contributors to answer a question. Maximum is 3600 seconds. Default is 3600 seconds.
- ReplicaCount int - Optional. Replication of questions. Each question will be sent to up to this number of contributors to label. Aggregated answers will be returned. Default is set to 1. For image related labeling, valid values are 1, 3, and 5.
- UserEmailAddress string - Email of the user who started the labeling task and should be notified by email. If empty, no notification will be sent.
- AnnotatedDatasetDescription string - Optional. A human-readable description for AnnotatedDataset. The description can be up to 10000 characters long.
- AnnotatedDatasetDisplayName string - A human-readable name for AnnotatedDataset defined by users. Maximum of 64 characters.
- ContributorEmails []string - Optional. If you want your own labeling contributors to manage and work on this labeling request, you can set these contributors here. We will give them access to the question types in crowdcompute. Note that these emails must be registered in the crowdcompute worker UI: https://crowd-compute.appspot.com/
- Instruction string - Instruction resource name.
- LabelGroup string - Optional. A human-readable label used to logically group labeling tasks. This string must match the regular expression [a-zA-Z\\d_-]{0,128}.
- LanguageCode string - Optional. The language of this question, as a BCP-47 language code. Default value is en-US. Only set this when the task is language related, for example French text classification.
- QuestionDuration string - Optional. Maximum duration for contributors to answer a question. Maximum is 3600 seconds. Default is 3600 seconds.
- ReplicaCount int - Optional. Replication of questions. Each question will be sent to up to this number of contributors to label. Aggregated answers will be returned. Default is set to 1. For image related labeling, valid values are 1, 3, and 5.
- UserEmailAddress string - Email of the user who started the labeling task and should be notified by email. If empty, no notification will be sent.
- annotatedDatasetDescription String - Optional. A human-readable description for AnnotatedDataset. The description can be up to 10000 characters long.
- annotatedDatasetDisplayName String - A human-readable name for AnnotatedDataset defined by users. Maximum of 64 characters.
- contributorEmails List&lt;String&gt; - Optional. If you want your own labeling contributors to manage and work on this labeling request, you can set these contributors here. We will give them access to the question types in crowdcompute. Note that these emails must be registered in the crowdcompute worker UI: https://crowd-compute.appspot.com/
- instruction String - Instruction resource name.
- labelGroup String - Optional. A human-readable label used to logically group labeling tasks. This string must match the regular expression [a-zA-Z\\d_-]{0,128}.
- languageCode String - Optional. The language of this question, as a BCP-47 language code. Default value is en-US. Only set this when the task is language related, for example French text classification.
- questionDuration String - Optional. Maximum duration for contributors to answer a question. Maximum is 3600 seconds. Default is 3600 seconds.
- replicaCount Integer - Optional. Replication of questions. Each question will be sent to up to this number of contributors to label. Aggregated answers will be returned. Default is set to 1. For image related labeling, valid values are 1, 3, and 5.
- userEmailAddress String - Email of the user who started the labeling task and should be notified by email. If empty, no notification will be sent.
- annotatedDatasetDescription string - Optional. A human-readable description for AnnotatedDataset. The description can be up to 10000 characters long.
- annotatedDatasetDisplayName string - A human-readable name for AnnotatedDataset defined by users. Maximum of 64 characters.
- contributorEmails string[] - Optional. If you want your own labeling contributors to manage and work on this labeling request, you can set these contributors here. We will give them access to the question types in crowdcompute. Note that these emails must be registered in the crowdcompute worker UI: https://crowd-compute.appspot.com/
- instruction string - Instruction resource name.
- labelGroup string - Optional. A human-readable label used to logically group labeling tasks. This string must match the regular expression [a-zA-Z\\d_-]{0,128}.
- languageCode string - Optional. The language of this question, as a BCP-47 language code. Default value is en-US. Only set this when the task is language related, for example French text classification.
- questionDuration string - Optional. Maximum duration for contributors to answer a question. Maximum is 3600 seconds. Default is 3600 seconds.
- replicaCount number - Optional. Replication of questions. Each question will be sent to up to this number of contributors to label. Aggregated answers will be returned. Default is set to 1. For image related labeling, valid values are 1, 3, and 5.
- userEmailAddress string - Email of the user who started the labeling task and should be notified by email. If empty, no notification will be sent.
- annotated_dataset_description str - Optional. A human-readable description for AnnotatedDataset. The description can be up to 10000 characters long.
- annotated_dataset_display_name str - A human-readable name for AnnotatedDataset defined by users. Maximum of 64 characters.
- contributor_emails Sequence[str] - Optional. If you want your own labeling contributors to manage and work on this labeling request, you can set these contributors here. We will give them access to the question types in crowdcompute. Note that these emails must be registered in the crowdcompute worker UI: https://crowd-compute.appspot.com/
- instruction str - Instruction resource name.
- label_group str - Optional. A human-readable label used to logically group labeling tasks. This string must match the regular expression [a-zA-Z\\d_-]{0,128}.
- language_code str - Optional. The language of this question, as a BCP-47 language code. Default value is en-US. Only set this when the task is language related, for example French text classification.
- question_duration str - Optional. Maximum duration for contributors to answer a question. Maximum is 3600 seconds. Default is 3600 seconds.
- replica_count int - Optional. Replication of questions. Each question will be sent to up to this number of contributors to label. Aggregated answers will be returned. Default is set to 1. For image related labeling, valid values are 1, 3, and 5.
- user_email_address str - Email of the user who started the labeling task and should be notified by email. If empty, no notification will be sent.
- annotatedDatasetDescription String - Optional. A human-readable description for AnnotatedDataset. The description can be up to 10000 characters long.
- annotatedDatasetDisplayName String - A human-readable name for AnnotatedDataset defined by users. Maximum of 64 characters.
- contributorEmails List&lt;String&gt; - Optional. If you want your own labeling contributors to manage and work on this labeling request, you can set these contributors here. We will give them access to the question types in crowdcompute. Note that these emails must be registered in the crowdcompute worker UI: https://crowd-compute.appspot.com/
- instruction String - Instruction resource name.
- labelGroup String - Optional. A human-readable label used to logically group labeling tasks. This string must match the regular expression [a-zA-Z\\d_-]{0,128}.
- languageCode String - Optional. The language of this question, as a BCP-47 language code. Default value is en-US. Only set this when the task is language related, for example French text classification.
- questionDuration String - Optional. Maximum duration for contributors to answer a question. Maximum is 3600 seconds. Default is 3600 seconds.
- replicaCount Number - Optional. Replication of questions. Each question will be sent to up to this number of contributors to label. Aggregated answers will be returned. Default is set to 1. For image related labeling, valid values are 1, 3, and 5.
- userEmailAddress String - Email of the user who started the labeling task and should be notified by email. If empty, no notification will be sent.
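Two of the HumanAnnotationConfig constraints above lend themselves to simple client-side checks before a configuration is submitted. These helpers are illustrative sketches, not part of any SDK:

```python
import re

def is_valid_label_group(label_group: str) -> bool:
    """labelGroup must match the regular expression [a-zA-Z\\d_-]{0,128}."""
    return re.fullmatch(r"[a-zA-Z\d_-]{0,128}", label_group) is not None

def is_valid_image_replica_count(replica_count: int) -> bool:
    """For image related labeling, valid replicaCount values are 1, 3, 5."""
    return replica_count in (1, 3, 5)
```

For example, "daily_eval-run1" is a valid label group, while a string containing spaces or longer than 128 characters is not.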
GoogleCloudDatalabelingV1beta1ImageClassificationConfigResponse
- AllowMultiLabel bool - Optional. If allow_multi_label is true, contributors are able to choose multiple labels for one image.
- AnnotationSpecSet string - Annotation spec set resource name.
- AnswerAggregationType string - Optional. The type of answer aggregation to use.
- AllowMultiLabel bool - Optional. If allow_multi_label is true, contributors are able to choose multiple labels for one image.
- AnnotationSpecSet string - Annotation spec set resource name.
- AnswerAggregationType string - Optional. The type of answer aggregation to use.
- allowMultiLabel Boolean - Optional. If allow_multi_label is true, contributors are able to choose multiple labels for one image.
- annotationSpecSet String - Annotation spec set resource name.
- answerAggregationType String - Optional. The type of answer aggregation to use.
- allowMultiLabel boolean - Optional. If allow_multi_label is true, contributors are able to choose multiple labels for one image.
- annotationSpecSet string - Annotation spec set resource name.
- answerAggregationType string - Optional. The type of answer aggregation to use.
- allow_multi_label bool - Optional. If allow_multi_label is true, contributors are able to choose multiple labels for one image.
- annotation_spec_set str - Annotation spec set resource name.
- answer_aggregation_type str - Optional. The type of answer aggregation to use.
- allowMultiLabel Boolean - Optional. If allow_multi_label is true, contributors are able to choose multiple labels for one image.
- annotationSpecSet String - Annotation spec set resource name.
- answerAggregationType String - Optional. The type of answer aggregation to use.
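The answerAggregationType above, together with replicaCount in HumanAnnotationConfig, determines how the replicated contributors' answers are combined into one label. This page does not enumerate the available aggregation types, but a plain majority vote, sketched here as an assumption, illustrates the idea:

```python
from collections import Counter

def aggregate_answers(answers: list) -> str:
    """Combine replicated contributor answers by majority vote.

    Illustrative only: the service's actual answerAggregationType values
    and tie-breaking rules are defined by the API, not by this sketch.
    """
    label, _count = Counter(answers).most_common(1)[0]
    return label
```

With replicaCount set to 3, three contributors might answer ["cat", "cat", "dog"], which aggregates to "cat".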
GoogleCloudDatalabelingV1beta1InputConfigResponse
- AnnotationType string - Optional. The type of annotation to be performed on this data. You must specify this field if you are using this InputConfig in an EvaluationJob.
- BigquerySource Pulumi.GoogleNative.DataLabeling.V1Beta1.Inputs.GoogleCloudDatalabelingV1beta1BigQuerySourceResponse - Source located in BigQuery. You must specify this field if you are using this InputConfig in an EvaluationJob.
- ClassificationMetadata Pulumi.GoogleNative.DataLabeling.V1Beta1.Inputs.GoogleCloudDatalabelingV1beta1ClassificationMetadataResponse - Optional. Metadata about annotations for the input. You must specify this field if you are using this InputConfig in an EvaluationJob for a model version that performs classification.
- DataType string - Data type must be specified when a user tries to import data.
- GcsSource Pulumi.GoogleNative.DataLabeling.V1Beta1.Inputs.GoogleCloudDatalabelingV1beta1GcsSourceResponse - Source located in Cloud Storage.
- TextMetadata Pulumi.GoogleNative.DataLabeling.V1Beta1.Inputs.GoogleCloudDatalabelingV1beta1TextMetadataResponse - Required for text import, as language code must be specified.
- AnnotationType string - Optional. The type of annotation to be performed on this data. You must specify this field if you are using this InputConfig in an EvaluationJob.
- BigquerySource GoogleCloudDatalabelingV1beta1BigQuerySourceResponse - Source located in BigQuery. You must specify this field if you are using this InputConfig in an EvaluationJob.
- ClassificationMetadata GoogleCloudDatalabelingV1beta1ClassificationMetadataResponse - Optional. Metadata about annotations for the input. You must specify this field if you are using this InputConfig in an EvaluationJob for a model version that performs classification.
- DataType string - Data type must be specified when a user tries to import data.
- GcsSource GoogleCloudDatalabelingV1beta1GcsSourceResponse - Source located in Cloud Storage.
- TextMetadata GoogleCloudDatalabelingV1beta1TextMetadataResponse - Required for text import, as language code must be specified.
- annotation
Type String - Optional. The type of annotation to be performed on this data. You must specify this field if you are using this InputConfig in an EvaluationJob.
- bigquery
Source GoogleCloud Datalabeling V1beta1Big Query Source Response - Source located in BigQuery. You must specify this field if you are using this InputConfig in an EvaluationJob.
- classification
Metadata GoogleCloud Datalabeling V1beta1Classification Metadata Response - Optional. Metadata about annotations for the input. You must specify this field if you are using this InputConfig in an EvaluationJob for a model version that performs classification.
- data
Type String - Data type must be specifed when user tries to import data.
- gcs
Source GoogleCloud Datalabeling V1beta1Gcs Source Response - Source located in Cloud Storage.
- text
Metadata GoogleCloud Datalabeling V1beta1Text Metadata Response - Required for text import, as language code must be specified.
- annotation
Type string - Optional. The type of annotation to be performed on this data. You must specify this field if you are using this InputConfig in an EvaluationJob.
- bigquery
Source GoogleCloud Datalabeling V1beta1Big Query Source Response - Source located in BigQuery. You must specify this field if you are using this InputConfig in an EvaluationJob.
- classification
Metadata GoogleCloud Datalabeling V1beta1Classification Metadata Response - Optional. Metadata about annotations for the input. You must specify this field if you are using this InputConfig in an EvaluationJob for a model version that performs classification.
- data
Type string - Data type must be specifed when user tries to import data.
- gcs
Source GoogleCloud Datalabeling V1beta1Gcs Source Response - Source located in Cloud Storage.
- text
Metadata GoogleCloud Datalabeling V1beta1Text Metadata Response - Required for text import, as language code must be specified.
- annotation_
type str - Optional. The type of annotation to be performed on this data. You must specify this field if you are using this InputConfig in an EvaluationJob.
- bigquery_
source GoogleCloud Datalabeling V1beta1Big Query Source Response - Source located in BigQuery. You must specify this field if you are using this InputConfig in an EvaluationJob.
- classification_
metadata GoogleCloud Datalabeling V1beta1Classification Metadata Response - Optional. Metadata about annotations for the input. You must specify this field if you are using this InputConfig in an EvaluationJob for a model version that performs classification.
- data_
type str - Data type must be specifed when user tries to import data.
- gcs_
source GoogleCloud Datalabeling V1beta1Gcs Source Response - Source located in Cloud Storage.
- text_
metadata GoogleCloud Datalabeling V1beta1Text Metadata Response - Required for text import, as language code must be specified.
- annotation
Type String - Optional. The type of annotation to be performed on this data. You must specify this field if you are using this InputConfig in an EvaluationJob.
- bigquery
Source Property Map - Source located in BigQuery. You must specify this field if you are using this InputConfig in an EvaluationJob.
- classification
Metadata Property Map - Optional. Metadata about annotations for the input. You must specify this field if you are using this InputConfig in an EvaluationJob for a model version that performs classification.
- data
Type String - Data type must be specifed when user tries to import data.
- gcs
Source Property Map - Source located in Cloud Storage.
- text
Metadata Property Map - Required for text import, as language code must be specified.
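As the descriptions above note, an InputConfig used in an EvaluationJob names its data source in either `bigquery_source` or `gcs_source`. The sketch below, using the Python SDK's snake_case field names, shows one way to inspect a plain-dict rendering of such a response; the dict shape and the `describe_input_source` helper are assumptions for illustration, not part of the SDK.

```python
# Hypothetical sketch: inspecting a plain-dict rendering of an
# InputConfigResponse returned by get_evaluation_job. Field names follow
# the Python SDK properties listed above; the dict shape is assumed.

def describe_input_source(input_config: dict) -> str:
    """Report which data source an evaluation job's InputConfig points at.

    When the InputConfig is used in an EvaluationJob, exactly one of
    bigquery_source or gcs_source is expected to be populated.
    """
    bq = input_config.get("bigquery_source") or {}
    gcs = input_config.get("gcs_source") or {}
    if bq.get("input_uri"):
        return f"BigQuery: {bq['input_uri']}"
    if gcs.get("input_uri"):
        return f"Cloud Storage: {gcs['input_uri']}"
    return "no source configured"

print(describe_input_source({
    "annotation_type": "IMAGE_CLASSIFICATION_ANNOTATION",
    "bigquery_source": {"input_uri": "bq://my-project.my_dataset.my_table"},
}))
# → BigQuery: bq://my-project.my_dataset.my_table
```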
GoogleCloudDatalabelingV1beta1SentimentConfigResponse
- EnableLabelSentimentSelection bool - If set to true, contributors will have the option to select the sentiment of the label they selected, marking it as a negative or positive label. Default is false.
- EnableLabelSentimentSelection bool - If set to true, contributors will have the option to select the sentiment of the label they selected, marking it as a negative or positive label. Default is false.
- enableLabelSentimentSelection Boolean - If set to true, contributors will have the option to select the sentiment of the label they selected, marking it as a negative or positive label. Default is false.
- enableLabelSentimentSelection boolean - If set to true, contributors will have the option to select the sentiment of the label they selected, marking it as a negative or positive label. Default is false.
- enable_label_sentiment_selection bool - If set to true, contributors will have the option to select the sentiment of the label they selected, marking it as a negative or positive label. Default is false.
- enableLabelSentimentSelection Boolean - If set to true, contributors will have the option to select the sentiment of the label they selected, marking it as a negative or positive label. Default is false.
GoogleCloudDatalabelingV1beta1TextClassificationConfigResponse
- AllowMultiLabel bool - Optional. If allow_multi_label is true, contributors are able to choose multiple labels for one text segment.
- AnnotationSpecSet string - Annotation spec set resource name.
- SentimentConfig Pulumi.GoogleNative.DataLabeling.V1Beta1.Inputs.GoogleCloudDatalabelingV1beta1SentimentConfigResponse - Optional. Configs for sentiment selection. Sentiment analysis on the Data Labeling side is deprecated, as it is incompatible with uCAIP.
- AllowMultiLabel bool - Optional. If allow_multi_label is true, contributors are able to choose multiple labels for one text segment.
- AnnotationSpecSet string - Annotation spec set resource name.
- SentimentConfig GoogleCloudDatalabelingV1beta1SentimentConfigResponse - Optional. Configs for sentiment selection. Sentiment analysis on the Data Labeling side is deprecated, as it is incompatible with uCAIP.
- allowMultiLabel Boolean - Optional. If allow_multi_label is true, contributors are able to choose multiple labels for one text segment.
- annotationSpecSet String - Annotation spec set resource name.
- sentimentConfig GoogleCloudDatalabelingV1beta1SentimentConfigResponse - Optional. Configs for sentiment selection. Sentiment analysis on the Data Labeling side is deprecated, as it is incompatible with uCAIP.
- allowMultiLabel boolean - Optional. If allow_multi_label is true, contributors are able to choose multiple labels for one text segment.
- annotationSpecSet string - Annotation spec set resource name.
- sentimentConfig GoogleCloudDatalabelingV1beta1SentimentConfigResponse - Optional. Configs for sentiment selection. Sentiment analysis on the Data Labeling side is deprecated, as it is incompatible with uCAIP.
- allow_multi_label bool - Optional. If allow_multi_label is true, contributors are able to choose multiple labels for one text segment.
- annotation_spec_set str - Annotation spec set resource name.
- sentiment_config GoogleCloudDatalabelingV1beta1SentimentConfigResponse - Optional. Configs for sentiment selection. Sentiment analysis on the Data Labeling side is deprecated, as it is incompatible with uCAIP.
- allowMultiLabel Boolean - Optional. If allow_multi_label is true, contributors are able to choose multiple labels for one text segment.
- annotationSpecSet String - Annotation spec set resource name.
- sentimentConfig Property Map - Optional. Configs for sentiment selection. Sentiment analysis on the Data Labeling side is deprecated, as it is incompatible with uCAIP.
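The allow_multi_label flag can be sketched as a simple validation rule on a contributor's selection for one text segment. The `enforce_label_count` helper below is a hypothetical illustration of that semantics, not part of the SDK.

```python
# Illustrative sketch only: how allow_multi_label constrains labeling.
# enforce_label_count is a hypothetical helper, not an SDK function.

def enforce_label_count(labels: list, allow_multi_label: bool) -> list:
    """Validate a contributor's label selection for one text segment."""
    if not allow_multi_label and len(labels) > 1:
        raise ValueError("allow_multi_label is false: choose exactly one label")
    return labels

print(enforce_label_count(["sports", "politics"], allow_multi_label=True))
# → ['sports', 'politics']
```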
GoogleCloudDatalabelingV1beta1TextMetadataResponse
- LanguageCode string - The language of this text, as a BCP-47 language code. Default value is en-US.
- LanguageCode string - The language of this text, as a BCP-47 language code. Default value is en-US.
- languageCode String - The language of this text, as a BCP-47 language code. Default value is en-US.
- languageCode string - The language of this text, as a BCP-47 language code. Default value is en-US.
- language_code str - The language of this text, as a BCP-47 language code. Default value is en-US.
- languageCode String - The language of this text, as a BCP-47 language code. Default value is en-US.
GoogleRpcStatusResponse
- Code int - The status code, which should be an enum value of google.rpc.Code.
- Details List<ImmutableDictionary<string, string>> - A list of messages that carry the error details. There is a common set of message types for APIs to use.
- Message string - A developer-facing error message, which should be in English. Any user-facing error message should be localized and sent in the google.rpc.Status.details field, or localized by the client.
- Code int - The status code, which should be an enum value of google.rpc.Code.
- Details []map[string]string - A list of messages that carry the error details. There is a common set of message types for APIs to use.
- Message string - A developer-facing error message, which should be in English. Any user-facing error message should be localized and sent in the google.rpc.Status.details field, or localized by the client.
- code Integer - The status code, which should be an enum value of google.rpc.Code.
- details List<Map<String,String>> - A list of messages that carry the error details. There is a common set of message types for APIs to use.
- message String - A developer-facing error message, which should be in English. Any user-facing error message should be localized and sent in the google.rpc.Status.details field, or localized by the client.
- code number - The status code, which should be an enum value of google.rpc.Code.
- details {[key: string]: string}[] - A list of messages that carry the error details. There is a common set of message types for APIs to use.
- message string - A developer-facing error message, which should be in English. Any user-facing error message should be localized and sent in the google.rpc.Status.details field, or localized by the client.
- code int - The status code, which should be an enum value of google.rpc.Code.
- details Sequence[Mapping[str, str]] - A list of messages that carry the error details. There is a common set of message types for APIs to use.
- message str - A developer-facing error message, which should be in English. Any user-facing error message should be localized and sent in the google.rpc.Status.details field, or localized by the client.
- code Number - The status code, which should be an enum value of google.rpc.Code.
- details List<Map<String>> - A list of messages that carry the error details. There is a common set of message types for APIs to use.
- message String - A developer-facing error message, which should be in English. Any user-facing error message should be localized and sent in the google.rpc.Status.details field, or localized by the client.
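The `code` field holds a standard google.rpc.Code enum value. A minimal sketch of turning a status dict of this shape into a readable string, using the well-known canonical code numbers (the `format_status` helper and dict shape are assumptions for illustration):

```python
# Standard google.rpc.Code enum values (canonical gRPC status codes).
GOOGLE_RPC_CODES = {
    0: "OK", 1: "CANCELLED", 2: "UNKNOWN", 3: "INVALID_ARGUMENT",
    4: "DEADLINE_EXCEEDED", 5: "NOT_FOUND", 6: "ALREADY_EXISTS",
    7: "PERMISSION_DENIED", 8: "RESOURCE_EXHAUSTED", 9: "FAILED_PRECONDITION",
    10: "ABORTED", 11: "OUT_OF_RANGE", 12: "UNIMPLEMENTED", 13: "INTERNAL",
    14: "UNAVAILABLE", 15: "DATA_LOSS", 16: "UNAUTHENTICATED",
}

def format_status(status: dict) -> str:
    """Render a GoogleRpcStatusResponse-style dict as 'CODE_NAME: message'."""
    name = GOOGLE_RPC_CODES.get(status.get("code", 2), "UNKNOWN")
    return f"{name}: {status.get('message', '')}"

print(format_status({"code": 5, "message": "Evaluation job not found", "details": []}))
# → NOT_FOUND: Evaluation job not found
```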
Package Details
- Repository
- Google Cloud Native pulumi/pulumi-google-native
- License
- Apache-2.0