Oracle Cloud Infrastructure v2.11.0 published on Thursday, Sep 19, 2024 by Pulumi
oci.AiLanguage.getModels
This data source provides the list of Models in the Oracle Cloud Infrastructure AI Language service.
Returns a list of models.
Example Usage
package generated_program;

import com.pulumi.Context;
import com.pulumi.Pulumi;
import com.pulumi.core.Output;
import com.pulumi.oci.AiLanguage.AiLanguageFunctions;
import com.pulumi.oci.AiLanguage.inputs.GetModelsArgs;
import java.util.List;
import java.util.ArrayList;
import java.util.Map;
import java.io.File;
import java.nio.file.Files;
import java.nio.file.Paths;

public class App {
    public static void main(String[] args) {
        Pulumi.run(App::stack);
    }

    public static void stack(Context ctx) {
        // compartmentId, modelDisplayName, and modelState are assumed to come from
        // configuration; testModel and testProject refer to resources defined elsewhere.
        final var testModels = AiLanguageFunctions.getModels(GetModelsArgs.builder()
            .compartmentId(compartmentId)
            .displayName(modelDisplayName)
            .modelId(testModel.id())
            .projectId(testProject.id())
            .state(modelState)
            .build());
    }
}
variables:
  testModels:
    fn::invoke:
      Function: oci:AiLanguage:getModels
      Arguments:
        compartmentId: ${compartmentId}
        displayName: ${modelDisplayName}
        modelId: ${testModel.id}
        projectId: ${testProject.id}
        state: ${modelState}
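The examples published for this data source currently cover Java and YAML only. The following is a minimal TypeScript sketch of the same lookup; it assumes the compartment OCID, display name, and state come from stack configuration, and it omits the modelId and projectId arguments that in the examples above reference a testModel and testProject defined elsewhere.

import * as pulumi from "@pulumi/pulumi";
import * as oci from "@pulumi/oci";

const config = new pulumi.Config();
// Hypothetical configuration values standing in for the placeholders used above.
const compartmentId = config.require("compartmentId");
const modelDisplayName = config.get("modelDisplayName");
const modelState = config.get("modelState");

// Look up models in the compartment, optionally narrowed by display name and state.
const testModels = oci.AiLanguage.getModels({
    compartmentId: compartmentId,
    displayName: modelDisplayName,
    state: modelState,
});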
Using getModels
Two invocation forms are available. The direct form accepts plain arguments and either blocks until the result value is available, or returns a Promise-wrapped result. The output form accepts Input-wrapped arguments and returns an Output-wrapped result.
function getModels(args: GetModelsArgs, opts?: InvokeOptions): Promise<GetModelsResult>
function getModelsOutput(args: GetModelsOutputArgs, opts?: InvokeOptions): Output<GetModelsResult>
def get_models(compartment_id: Optional[str] = None,
               display_name: Optional[str] = None,
               filters: Optional[Sequence[_ailanguage.GetModelsFilter]] = None,
               id: Optional[str] = None,
               project_id: Optional[str] = None,
               state: Optional[str] = None,
               opts: Optional[InvokeOptions] = None) -> GetModelsResult
def get_models_output(compartment_id: Optional[pulumi.Input[str]] = None,
                      display_name: Optional[pulumi.Input[str]] = None,
                      filters: Optional[pulumi.Input[Sequence[pulumi.Input[_ailanguage.GetModelsFilterArgs]]]] = None,
                      id: Optional[pulumi.Input[str]] = None,
                      project_id: Optional[pulumi.Input[str]] = None,
                      state: Optional[pulumi.Input[str]] = None,
                      opts: Optional[InvokeOptions] = None) -> Output[GetModelsResult]
func GetModels(ctx *Context, args *GetModelsArgs, opts ...InvokeOption) (*GetModelsResult, error)
func GetModelsOutput(ctx *Context, args *GetModelsOutputArgs, opts ...InvokeOption) GetModelsResultOutput
> Note: This function is named GetModels in the Go SDK.
public static class GetModels
{
    public static Task<GetModelsResult> InvokeAsync(GetModelsArgs args, InvokeOptions? opts = null)
    public static Output<GetModelsResult> Invoke(GetModelsInvokeArgs args, InvokeOptions? opts = null)
}
public static CompletableFuture<GetModelsResult> getModels(GetModelsArgs args, InvokeOptions options)
// Output-based functions aren't available in Java yet
fn::invoke:
  function: oci:AiLanguage/getModels:getModels
  arguments:
    # arguments dictionary
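As described above, the two invocation forms differ only in how arguments and results are wrapped. A brief TypeScript sketch of the difference, using a placeholder compartment OCID:

import * as pulumi from "@pulumi/pulumi";
import * as oci from "@pulumi/oci";

// Direct form: plain arguments in, Promise-wrapped result out.
const direct = oci.AiLanguage.getModels({
    compartmentId: "ocid1.compartment.oc1..exampleuniqueid", // placeholder OCID
});
export const modelCollectionCount = direct.then(r => r.modelCollections.length);

// Output form: Input-wrapped arguments in, Output-wrapped result out. Useful when an
// argument is itself an Output, e.g. a property of another resource.
const compartmentId = pulumi.output("ocid1.compartment.oc1..exampleuniqueid");
const viaOutput = oci.AiLanguage.getModelsOutput({
    compartmentId: compartmentId,
});
export const lookupState = viaOutput.state;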
The following arguments are supported:
C#
- CompartmentId string - The ID of the compartment in which to list resources.
- DisplayName string - A filter to return only resources that match the entire display name given.
- Filters List<GetModelsFilter>
- Id string - Unique identifier model OCID of a model that is immutable on creation
- ProjectId string - The ID of the project for which to list the objects.
- State string - Filter results by the specified lifecycle state. Must be a valid state for the resource type.
Go
- CompartmentId string - The ID of the compartment in which to list resources.
- DisplayName string - A filter to return only resources that match the entire display name given.
- Filters []GetModelsFilter
- Id string - Unique identifier model OCID of a model that is immutable on creation
- ProjectId string - The ID of the project for which to list the objects.
- State string - Filter results by the specified lifecycle state. Must be a valid state for the resource type.
Java
- compartmentId String - The ID of the compartment in which to list resources.
- displayName String - A filter to return only resources that match the entire display name given.
- filters List<GetModelsFilter>
- id String - Unique identifier model OCID of a model that is immutable on creation
- projectId String - The ID of the project for which to list the objects.
- state String - Filter results by the specified lifecycle state. Must be a valid state for the resource type.
TypeScript
- compartmentId string - The ID of the compartment in which to list resources.
- displayName string - A filter to return only resources that match the entire display name given.
- filters GetModelsFilter[]
- id string - Unique identifier model OCID of a model that is immutable on creation
- projectId string - The ID of the project for which to list the objects.
- state string - Filter results by the specified lifecycle state. Must be a valid state for the resource type.
Python
- compartment_id str - The ID of the compartment in which to list resources.
- display_name str - A filter to return only resources that match the entire display name given.
- filters Sequence[ailanguage.GetModelsFilter]
- id str - Unique identifier model OCID of a model that is immutable on creation
- project_id str - The ID of the project for which to list the objects.
- state str - Filter results by the specified lifecycle state. Must be a valid state for the resource type.
YAML
- compartmentId String - The ID of the compartment in which to list resources.
- displayName String - A filter to return only resources that match the entire display name given.
- filters List<Property Map>
- id String - Unique identifier model OCID of a model that is immutable on creation
- projectId String - The ID of the project for which to list the objects.
- state String - Filter results by the specified lifecycle state. Must be a valid state for the resource type.
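The filters argument takes a list of GetModelsFilter values whose fields are not spelled out on this page; OCI data sources conventionally accept a filter name, a list of values, and an optional regex flag. A hedged TypeScript sketch under that assumption:

import * as oci from "@pulumi/oci";

// Assumed filter shape (name/values, optional regex), following the usual OCI
// data-source convention; GetModelsFilter's fields are not documented on this page.
const activeModels = oci.AiLanguage.getModels({
    compartmentId: "ocid1.compartment.oc1..exampleuniqueid", // placeholder OCID
    filters: [{
        name: "state",           // illustrative filter field
        values: ["ACTIVE"],
    }],
});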
getModels Result
The following output properties are available:
C#
- CompartmentId string - The OCID for the model's compartment.
- ModelCollections List<GetModelsModelCollection> - The list of model_collection.
- DisplayName string - A user-friendly display name for the resource. It does not have to be unique and can be modified. Avoid entering confidential information.
- Filters List<GetModelsFilter>
- Id string - Unique identifier model OCID of a model that is immutable on creation
- ProjectId string - The OCID of the project to associate with the model.
- State string - The state of the model.
Go
- CompartmentId string - The OCID for the model's compartment.
- ModelCollections []GetModelsModelCollection - The list of model_collection.
- DisplayName string - A user-friendly display name for the resource. It does not have to be unique and can be modified. Avoid entering confidential information.
- Filters []GetModelsFilter
- Id string - Unique identifier model OCID of a model that is immutable on creation
- ProjectId string - The OCID of the project to associate with the model.
- State string - The state of the model.
Java
- compartmentId String - The OCID for the model's compartment.
- modelCollections List<GetModelsModelCollection> - The list of model_collection.
- displayName String - A user-friendly display name for the resource. It does not have to be unique and can be modified. Avoid entering confidential information.
- filters List<GetModelsFilter>
- id String - Unique identifier model OCID of a model that is immutable on creation
- projectId String - The OCID of the project to associate with the model.
- state String - The state of the model.
TypeScript
- compartmentId string - The OCID for the model's compartment.
- modelCollections GetModelsModelCollection[] - The list of model_collection.
- displayName string - A user-friendly display name for the resource. It does not have to be unique and can be modified. Avoid entering confidential information.
- filters GetModelsFilter[]
- id string - Unique identifier model OCID of a model that is immutable on creation
- projectId string - The OCID of the project to associate with the model.
- state string - The state of the model.
Python
- compartment_id str - The OCID for the model's compartment.
- model_collections Sequence[ailanguage.GetModelsModelCollection] - The list of model_collection.
- display_name str - A user-friendly display name for the resource. It does not have to be unique and can be modified. Avoid entering confidential information.
- filters Sequence[ailanguage.GetModelsFilter]
- id str - Unique identifier model OCID of a model that is immutable on creation
- project_id str - The OCID of the project to associate with the model.
- state str - The state of the model.
YAML
- compartmentId String - The OCID for the model's compartment.
- modelCollections List<Property Map> - The list of model_collection.
- displayName String - A user-friendly display name for the resource. It does not have to be unique and can be modified. Avoid entering confidential information.
- filters List<Property Map>
- id String - Unique identifier model OCID of a model that is immutable on creation
- projectId String - The OCID of the project to associate with the model.
- state String - The state of the model.
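A short TypeScript sketch of consuming the result. The fields of GetModelsModelCollection are not listed on this page; the items property name is an assumption inferred from the GetModelsModelCollectionItem type documented under Supporting Types below.

import * as oci from "@pulumi/oci";

export const modelSummaries = oci.AiLanguage.getModels({
    compartmentId: "ocid1.compartment.oc1..exampleuniqueid", // placeholder OCID
}).then(result =>
    // `items` on each collection is assumed; see the note above.
    result.modelCollections.flatMap(collection =>
        (collection.items ?? []).map(item => ({
            id: item.id,
            displayName: item.displayName,
            state: item.state,
            timeCreated: item.timeCreated,
        }))
    )
);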
Supporting Types
GetModelsFilter
GetModelsModelCollection
GetModelsModelCollectionItem
C#
- CompartmentId string - The ID of the compartment in which to list resources.
- DefinedTags Dictionary<string, string> - Defined tags for this resource. Each key is predefined and scoped to a namespace. Example: {"foo-namespace.bar-key": "value"}
- Description string - A short description of the Model.
- DisplayName string - A filter to return only resources that match the entire display name given.
- EvaluationResults List<GetModelsModelCollectionItemEvaluationResult> - model training results of different models
- FreeformTags Dictionary<string, string> - Simple key-value pair that is applied without any predefined name, type or scope. Exists for cross-compatibility only. Example: {"bar-key": "value"}
- Id string - Unique identifier model OCID of a model that is immutable on creation
- LifecycleDetails string - A message describing the current state in more detail. For example, can be used to provide actionable information for a resource in failed state.
- ModelDetails List<GetModelsModelCollectionItemModelDetail> - Possible model types
- ProjectId string - The ID of the project for which to list the objects.
- State string - Filter results by the specified lifecycle state. Must be a valid state for the resource type.
- SystemTags Dictionary<string, string> - Usage of system tag keys. These predefined keys are scoped to namespaces. Example: {"orcl-cloud.free-tier-retained": "true"}
- TestStrategies List<GetModelsModelCollectionItemTestStrategy> - Possible strategy as testing and validation(optional) dataset.
- TimeCreated string - The time the model was created. An RFC3339 formatted datetime string.
- TimeUpdated string - The time the model was updated. An RFC3339 formatted datetime string.
- TrainingDatasets List<GetModelsModelCollectionItemTrainingDataset> - Possible data set type
- Version string - For pre-trained models this identifies the model type version used for model creation. For custom models, identifying the model by model ID is difficult; this parameter provides ease of use for the end customer. Format: <>::<>_<>::<>, e.g. ai-lang::NER_V1::CUSTOM-V0
Go
- CompartmentId string - The ID of the compartment in which to list resources.
- DefinedTags map[string]string - Defined tags for this resource. Each key is predefined and scoped to a namespace. Example: {"foo-namespace.bar-key": "value"}
- Description string - A short description of the Model.
- DisplayName string - A filter to return only resources that match the entire display name given.
- EvaluationResults []GetModelsModelCollectionItemEvaluationResult - model training results of different models
- FreeformTags map[string]string - Simple key-value pair that is applied without any predefined name, type or scope. Exists for cross-compatibility only. Example: {"bar-key": "value"}
- Id string - Unique identifier model OCID of a model that is immutable on creation
- LifecycleDetails string - A message describing the current state in more detail. For example, can be used to provide actionable information for a resource in failed state.
- ModelDetails []GetModelsModelCollectionItemModelDetail - Possible model types
- ProjectId string - The ID of the project for which to list the objects.
- State string - Filter results by the specified lifecycle state. Must be a valid state for the resource type.
- SystemTags map[string]string - Usage of system tag keys. These predefined keys are scoped to namespaces. Example: {"orcl-cloud.free-tier-retained": "true"}
- TestStrategies []GetModelsModelCollectionItemTestStrategy - Possible strategy as testing and validation(optional) dataset.
- TimeCreated string - The time the model was created. An RFC3339 formatted datetime string.
- TimeUpdated string - The time the model was updated. An RFC3339 formatted datetime string.
- TrainingDatasets []GetModelsModelCollectionItemTrainingDataset - Possible data set type
- Version string - For pre-trained models this identifies the model type version used for model creation. For custom models, identifying the model by model ID is difficult; this parameter provides ease of use for the end customer. Format: <>::<>_<>::<>, e.g. ai-lang::NER_V1::CUSTOM-V0
Java
- compartmentId String - The ID of the compartment in which to list resources.
- definedTags Map<String,String> - Defined tags for this resource. Each key is predefined and scoped to a namespace. Example: {"foo-namespace.bar-key": "value"}
- description String - A short description of the Model.
- displayName String - A filter to return only resources that match the entire display name given.
- evaluationResults List<GetModelsModelCollectionItemEvaluationResult> - model training results of different models
- freeformTags Map<String,String> - Simple key-value pair that is applied without any predefined name, type or scope. Exists for cross-compatibility only. Example: {"bar-key": "value"}
- id String - Unique identifier model OCID of a model that is immutable on creation
- lifecycleDetails String - A message describing the current state in more detail. For example, can be used to provide actionable information for a resource in failed state.
- modelDetails List<GetModelsModelCollectionItemModelDetail> - Possible model types
- projectId String - The ID of the project for which to list the objects.
- state String - Filter results by the specified lifecycle state. Must be a valid state for the resource type.
- systemTags Map<String,String> - Usage of system tag keys. These predefined keys are scoped to namespaces. Example: {"orcl-cloud.free-tier-retained": "true"}
- testStrategies List<GetModelsModelCollectionItemTestStrategy> - Possible strategy as testing and validation(optional) dataset.
- timeCreated String - The time the model was created. An RFC3339 formatted datetime string.
- timeUpdated String - The time the model was updated. An RFC3339 formatted datetime string.
- trainingDatasets List<GetModelsModelCollectionItemTrainingDataset> - Possible data set type
- version String - For pre-trained models this identifies the model type version used for model creation. For custom models, identifying the model by model ID is difficult; this parameter provides ease of use for the end customer. Format: <>::<>_<>::<>, e.g. ai-lang::NER_V1::CUSTOM-V0
TypeScript
- compartmentId string - The ID of the compartment in which to list resources.
- definedTags {[key: string]: string} - Defined tags for this resource. Each key is predefined and scoped to a namespace. Example: {"foo-namespace.bar-key": "value"}
- description string - A short description of the Model.
- displayName string - A filter to return only resources that match the entire display name given.
- evaluationResults GetModelsModelCollectionItemEvaluationResult[] - model training results of different models
- freeformTags {[key: string]: string} - Simple key-value pair that is applied without any predefined name, type or scope. Exists for cross-compatibility only. Example: {"bar-key": "value"}
- id string - Unique identifier model OCID of a model that is immutable on creation
- lifecycleDetails string - A message describing the current state in more detail. For example, can be used to provide actionable information for a resource in failed state.
- modelDetails GetModelsModelCollectionItemModelDetail[] - Possible model types
- projectId string - The ID of the project for which to list the objects.
- state string - Filter results by the specified lifecycle state. Must be a valid state for the resource type.
- systemTags {[key: string]: string} - Usage of system tag keys. These predefined keys are scoped to namespaces. Example: {"orcl-cloud.free-tier-retained": "true"}
- testStrategies GetModelsModelCollectionItemTestStrategy[] - Possible strategy as testing and validation(optional) dataset.
- timeCreated string - The time the model was created. An RFC3339 formatted datetime string.
- timeUpdated string - The time the model was updated. An RFC3339 formatted datetime string.
- trainingDatasets GetModelsModelCollectionItemTrainingDataset[] - Possible data set type
- version string - For pre-trained models this identifies the model type version used for model creation. For custom models, identifying the model by model ID is difficult; this parameter provides ease of use for the end customer. Format: <>::<>_<>::<>, e.g. ai-lang::NER_V1::CUSTOM-V0
Python
- compartment_id str - The ID of the compartment in which to list resources.
- defined_tags Mapping[str, str] - Defined tags for this resource. Each key is predefined and scoped to a namespace. Example: {"foo-namespace.bar-key": "value"}
- description str - A short description of the Model.
- display_name str - A filter to return only resources that match the entire display name given.
- evaluation_results Sequence[ailanguage.GetModelsModelCollectionItemEvaluationResult] - model training results of different models
- freeform_tags Mapping[str, str] - Simple key-value pair that is applied without any predefined name, type or scope. Exists for cross-compatibility only. Example: {"bar-key": "value"}
- id str - Unique identifier model OCID of a model that is immutable on creation
- lifecycle_details str - A message describing the current state in more detail. For example, can be used to provide actionable information for a resource in failed state.
- model_details Sequence[ailanguage.GetModelsModelCollectionItemModelDetail] - Possible model types
- project_id str - The ID of the project for which to list the objects.
- state str - Filter results by the specified lifecycle state. Must be a valid state for the resource type.
- system_tags Mapping[str, str] - Usage of system tag keys. These predefined keys are scoped to namespaces. Example: {"orcl-cloud.free-tier-retained": "true"}
- test_strategies Sequence[ailanguage.GetModelsModelCollectionItemTestStrategy] - Possible strategy as testing and validation(optional) dataset.
- time_created str - The time the model was created. An RFC3339 formatted datetime string.
- time_updated str - The time the model was updated. An RFC3339 formatted datetime string.
- training_datasets Sequence[ailanguage.GetModelsModelCollectionItemTrainingDataset] - Possible data set type
- version str - For pre-trained models this identifies the model type version used for model creation. For custom models, identifying the model by model ID is difficult; this parameter provides ease of use for the end customer. Format: <>::<>_<>::<>, e.g. ai-lang::NER_V1::CUSTOM-V0
YAML
- compartmentId String - The ID of the compartment in which to list resources.
- definedTags Map<String> - Defined tags for this resource. Each key is predefined and scoped to a namespace. Example: {"foo-namespace.bar-key": "value"}
- description String - A short description of the Model.
- displayName String - A filter to return only resources that match the entire display name given.
- evaluationResults List<Property Map> - model training results of different models
- freeformTags Map<String> - Simple key-value pair that is applied without any predefined name, type or scope. Exists for cross-compatibility only. Example: {"bar-key": "value"}
- id String - Unique identifier model OCID of a model that is immutable on creation
- lifecycleDetails String - A message describing the current state in more detail. For example, can be used to provide actionable information for a resource in failed state.
- modelDetails List<Property Map> - Possible model types
- projectId String - The ID of the project for which to list the objects.
- state String - Filter results by the specified lifecycle state. Must be a valid state for the resource type.
- systemTags Map<String> - Usage of system tag keys. These predefined keys are scoped to namespaces. Example: {"orcl-cloud.free-tier-retained": "true"}
- testStrategies List<Property Map> - Possible strategy as testing and validation(optional) dataset.
- timeCreated String - The time the model was created. An RFC3339 formatted datetime string.
- timeUpdated String - The time the model was updated. An RFC3339 formatted datetime string.
- trainingDatasets List<Property Map> - Possible data set type
- version String - For pre-trained models this identifies the model type version used for model creation. For custom models, identifying the model by model ID is difficult; this parameter provides ease of use for the end customer. Format: <>::<>_<>::<>, e.g. ai-lang::NER_V1::CUSTOM-V0
GetModelsModelCollectionItemEvaluationResult
C#
- ClassMetrics List<GetModelsModelCollectionItemEvaluationResultClassMetric> - List of text classification metrics
- ConfusionMatrix string - class level confusion matrix
- EntityMetrics List<GetModelsModelCollectionItemEvaluationResultEntityMetric> - List of entity metrics
- Labels List<string> - labels
- Metrics List<GetModelsModelCollectionItemEvaluationResultMetric> - Model level named entity recognition metrics
- ModelType string - Model type
Go
- ClassMetrics []GetModelsModelCollectionItemEvaluationResultClassMetric - List of text classification metrics
- ConfusionMatrix string - class level confusion matrix
- EntityMetrics []GetModelsModelCollectionItemEvaluationResultEntityMetric - List of entity metrics
- Labels []string - labels
- Metrics []GetModelsModelCollectionItemEvaluationResultMetric - Model level named entity recognition metrics
- ModelType string - Model type
Java
- classMetrics List<GetModelsModelCollectionItemEvaluationResultClassMetric> - List of text classification metrics
- confusionMatrix String - class level confusion matrix
- entityMetrics List<GetModelsModelCollectionItemEvaluationResultEntityMetric> - List of entity metrics
- labels List<String> - labels
- metrics List<GetModelsModelCollectionItemEvaluationResultMetric> - Model level named entity recognition metrics
- modelType String - Model type
TypeScript
- classMetrics GetModelsModelCollectionItemEvaluationResultClassMetric[] - List of text classification metrics
- confusionMatrix string - class level confusion matrix
- entityMetrics GetModelsModelCollectionItemEvaluationResultEntityMetric[] - List of entity metrics
- labels string[] - labels
- metrics GetModelsModelCollectionItemEvaluationResultMetric[] - Model level named entity recognition metrics
- modelType string - Model type
Python
- class_metrics Sequence[ailanguage.GetModelsModelCollectionItemEvaluationResultClassMetric] - List of text classification metrics
- confusion_matrix str - class level confusion matrix
- entity_metrics Sequence[ailanguage.GetModelsModelCollectionItemEvaluationResultEntityMetric] - List of entity metrics
- labels Sequence[str] - labels
- metrics Sequence[ailanguage.GetModelsModelCollectionItemEvaluationResultMetric] - Model level named entity recognition metrics
- model_type str - Model type
YAML
- classMetrics List<Property Map> - List of text classification metrics
- confusionMatrix String - class level confusion matrix
- entityMetrics List<Property Map> - List of entity metrics
- labels List<String> - labels
- metrics List<Property Map> - Model level named entity recognition metrics
- modelType String - Model type
GetModelsModelCollectionItemEvaluationResultClassMetric
C#
- F1 double - F1-score is a measure of a model’s accuracy on a dataset
- Label string - Entity label
- Precision double - Precision refers to the number of true positives divided by the total number of positive predictions (i.e., the number of true positives plus the number of false positives)
- Recall double - Measures the model's ability to predict actual positive classes. It is the ratio between the predicted true positives and what was actually tagged. The recall metric reveals how many of the predicted classes are correct.
- Support double - number of samples in the test set
Go
- F1 float64 - F1-score is a measure of a model’s accuracy on a dataset
- Label string - Entity label
- Precision float64 - Precision refers to the number of true positives divided by the total number of positive predictions (i.e., the number of true positives plus the number of false positives)
- Recall float64 - Measures the model's ability to predict actual positive classes. It is the ratio between the predicted true positives and what was actually tagged. The recall metric reveals how many of the predicted classes are correct.
- Support float64 - number of samples in the test set
Java
- f1 Double - F1-score is a measure of a model’s accuracy on a dataset
- label String - Entity label
- precision Double - Precision refers to the number of true positives divided by the total number of positive predictions (i.e., the number of true positives plus the number of false positives)
- recall Double - Measures the model's ability to predict actual positive classes. It is the ratio between the predicted true positives and what was actually tagged. The recall metric reveals how many of the predicted classes are correct.
- support Double - number of samples in the test set
TypeScript
- f1 number - F1-score is a measure of a model’s accuracy on a dataset
- label string - Entity label
- precision number - Precision refers to the number of true positives divided by the total number of positive predictions (i.e., the number of true positives plus the number of false positives)
- recall number - Measures the model's ability to predict actual positive classes. It is the ratio between the predicted true positives and what was actually tagged. The recall metric reveals how many of the predicted classes are correct.
- support number - number of samples in the test set
Python
- f1 float - F1-score is a measure of a model’s accuracy on a dataset
- label str - Entity label
- precision float - Precision refers to the number of true positives divided by the total number of positive predictions (i.e., the number of true positives plus the number of false positives)
- recall float - Measures the model's ability to predict actual positive classes. It is the ratio between the predicted true positives and what was actually tagged. The recall metric reveals how many of the predicted classes are correct.
- support float - number of samples in the test set
YAML
- f1 Number - F1-score is a measure of a model’s accuracy on a dataset
- label String - Entity label
- precision Number - Precision refers to the number of true positives divided by the total number of positive predictions (i.e., the number of true positives plus the number of false positives)
- recall Number - Measures the model's ability to predict actual positive classes. It is the ratio between the predicted true positives and what was actually tagged. The recall metric reveals how many of the predicted classes are correct.
- support Number - number of samples in the test set
GetModelsModelCollectionItemEvaluationResultEntityMetric
C#
- F1 double - F1-score is a measure of a model’s accuracy on a dataset
- Label string - Entity label
- Precision double - Precision refers to the number of true positives divided by the total number of positive predictions (i.e., the number of true positives plus the number of false positives)
- Recall double - Measures the model's ability to predict actual positive classes. It is the ratio between the predicted true positives and what was actually tagged. The recall metric reveals how many of the predicted classes are correct.
Go
- F1 float64 - F1-score is a measure of a model’s accuracy on a dataset
- Label string - Entity label
- Precision float64 - Precision refers to the number of true positives divided by the total number of positive predictions (i.e., the number of true positives plus the number of false positives)
- Recall float64 - Measures the model's ability to predict actual positive classes. It is the ratio between the predicted true positives and what was actually tagged. The recall metric reveals how many of the predicted classes are correct.
Java
- f1 Double - F1-score is a measure of a model’s accuracy on a dataset
- label String - Entity label
- precision Double - Precision refers to the number of true positives divided by the total number of positive predictions (i.e., the number of true positives plus the number of false positives)
- recall Double - Measures the model's ability to predict actual positive classes. It is the ratio between the predicted true positives and what was actually tagged. The recall metric reveals how many of the predicted classes are correct.
TypeScript
- f1 number - F1-score is a measure of a model’s accuracy on a dataset
- label string - Entity label
- precision number - Precision refers to the number of true positives divided by the total number of positive predictions (i.e., the number of true positives plus the number of false positives)
- recall number - Measures the model's ability to predict actual positive classes. It is the ratio between the predicted true positives and what was actually tagged. The recall metric reveals how many of the predicted classes are correct.
Python
- f1 float - F1-score is a measure of a model’s accuracy on a dataset
- label str - Entity label
- precision float - Precision refers to the number of true positives divided by the total number of positive predictions (i.e., the number of true positives plus the number of false positives)
- recall float - Measures the model's ability to predict actual positive classes. It is the ratio between the predicted true positives and what was actually tagged. The recall metric reveals how many of the predicted classes are correct.
YAML
- f1 Number - F1-score is a measure of a model’s accuracy on a dataset
- label String - Entity label
- precision Number - Precision refers to the number of true positives divided by the total number of positive predictions (i.e., the number of true positives plus the number of false positives)
- recall Number - Measures the model's ability to predict actual positive classes. It is the ratio between the predicted true positives and what was actually tagged. The recall metric reveals how many of the predicted classes are correct.
GetModelsModelCollectionItemEvaluationResultMetric
C#
- Accuracy double - The fraction of the labels that were correctly recognised.
- MacroF1 double - F1-score is a measure of a model’s accuracy on a dataset
- MacroPrecision double - Precision refers to the number of true positives divided by the total number of positive predictions (i.e., the number of true positives plus the number of false positives)
- MacroRecall double - Measures the model's ability to predict actual positive classes. It is the ratio between the predicted true positives and what was actually tagged. The recall metric reveals how many of the predicted classes are correct.
- MicroF1 double - F1-score is a measure of a model’s accuracy on a dataset
- MicroPrecision double - Precision refers to the number of true positives divided by the total number of positive predictions (i.e., the number of true positives plus the number of false positives)
- MicroRecall double - Measures the model's ability to predict actual positive classes. It is the ratio between the predicted true positives and what was actually tagged. The recall metric reveals how many of the predicted classes are correct.
- WeightedF1 double - F1-score is a measure of a model’s accuracy on a dataset
- WeightedPrecision double - Precision refers to the number of true positives divided by the total number of positive predictions (i.e., the number of true positives plus the number of false positives)
- WeightedRecall double - Measures the model's ability to predict actual positive classes. It is the ratio between the predicted true positives and what was actually tagged. The recall metric reveals how many of the predicted classes are correct.
Go
- Accuracy float64 - The fraction of the labels that were correctly recognised.
- MacroF1 float64 - F1-score is a measure of a model’s accuracy on a dataset
- MacroPrecision float64 - Precision refers to the number of true positives divided by the total number of positive predictions (i.e., the number of true positives plus the number of false positives)
- MacroRecall float64 - Measures the model's ability to predict actual positive classes. It is the ratio between the predicted true positives and what was actually tagged. The recall metric reveals how many of the predicted classes are correct.
- MicroF1 float64 - F1-score is a measure of a model’s accuracy on a dataset
- MicroPrecision float64 - Precision refers to the number of true positives divided by the total number of positive predictions (i.e., the number of true positives plus the number of false positives)
- MicroRecall float64 - Measures the model's ability to predict actual positive classes. It is the ratio between the predicted true positives and what was actually tagged. The recall metric reveals how many of the predicted classes are correct.
- WeightedF1 float64 - F1-score is a measure of a model’s accuracy on a dataset
- WeightedPrecision float64 - Precision refers to the number of true positives divided by the total number of positive predictions (i.e., the number of true positives plus the number of false positives)
- WeightedRecall float64 - Measures the model's ability to predict actual positive classes. It is the ratio between the predicted true positives and what was actually tagged. The recall metric reveals how many of the predicted classes are correct.
Java
- accuracy Double - The fraction of the labels that were correctly recognised.
- macroF1 Double - F1-score is a measure of a model’s accuracy on a dataset
- macroPrecision Double - Precision refers to the number of true positives divided by the total number of positive predictions (i.e., the number of true positives plus the number of false positives)
- macroRecall Double - Measures the model's ability to predict actual positive classes. It is the ratio between the predicted true positives and what was actually tagged. The recall metric reveals how many of the predicted classes are correct.
- microF1 Double - F1-score is a measure of a model’s accuracy on a dataset
- microPrecision Double - Precision refers to the number of true positives divided by the total number of positive predictions (i.e., the number of true positives plus the number of false positives)
- microRecall Double - Measures the model's ability to predict actual positive classes. It is the ratio between the predicted true positives and what was actually tagged. The recall metric reveals how many of the predicted classes are correct.
- weightedF1 Double - F1-score is a measure of a model’s accuracy on a dataset
- weightedPrecision Double - Precision refers to the number of true positives divided by the total number of positive predictions (i.e., the number of true positives plus the number of false positives)
- weightedRecall Double - Measures the model's ability to predict actual positive classes. It is the ratio between the predicted true positives and what was actually tagged. The recall metric reveals how many of the predicted classes are correct.
TypeScript
- accuracy number - The fraction of the labels that were correctly recognised.
- macroF1 number - F1-score is a measure of a model’s accuracy on a dataset
- macroPrecision number - Precision refers to the number of true positives divided by the total number of positive predictions (i.e., the number of true positives plus the number of false positives)
- macroRecall number - Measures the model's ability to predict actual positive classes. It is the ratio between the predicted true positives and what was actually tagged. The recall metric reveals how many of the predicted classes are correct.
- microF1 number - F1-score is a measure of a model’s accuracy on a dataset
- microPrecision number - Precision refers to the number of true positives divided by the total number of positive predictions (i.e., the number of true positives plus the number of false positives)
- microRecall number - Measures the model's ability to predict actual positive classes. It is the ratio between the predicted true positives and what was actually tagged. The recall metric reveals how many of the predicted classes are correct.
- weightedF1 number - F1-score is a measure of a model’s accuracy on a dataset
- weightedPrecision number - Precision refers to the number of true positives divided by the total number of positive predictions (i.e., the number of true positives plus the number of false positives)
- weightedRecall number - Measures the model's ability to predict actual positive classes. It is the ratio between the predicted true positives and what was actually tagged. The recall metric reveals how many of the predicted classes are correct.
Python
- accuracy float - The fraction of the labels that were correctly recognised.
- macro_f1 float - F1-score is a measure of a model’s accuracy on a dataset
- macro_precision float - Precision refers to the number of true positives divided by the total number of positive predictions (i.e., the number of true positives plus the number of false positives)
- macro_recall float - Measures the model's ability to predict actual positive classes. It is the ratio between the predicted true positives and what was actually tagged. The recall metric reveals how many of the predicted classes are correct.
- micro_f1 float - F1-score is a measure of a model’s accuracy on a dataset
- micro_precision float - Precision refers to the number of true positives divided by the total number of positive predictions (i.e., the number of true positives plus the number of false positives)
- micro_recall float - Measures the model's ability to predict actual positive classes. It is the ratio between the predicted true positives and what was actually tagged. The recall metric reveals how many of the predicted classes are correct.
- weighted_f1 float - F1-score is a measure of a model’s accuracy on a dataset
- weighted_precision float - Precision refers to the number of true positives divided by the total number of positive predictions (i.e., the number of true positives plus the number of false positives)
- weighted_recall float - Measures the model's ability to predict actual positive classes. It is the ratio between the predicted true positives and what was actually tagged. The recall metric reveals how many of the predicted classes are correct.
YAML
- accuracy Number - The fraction of the labels that were correctly recognised.
- macroF1 Number - F1-score is a measure of a model’s accuracy on a dataset
- macroPrecision Number - Precision refers to the number of true positives divided by the total number of positive predictions (i.e., the number of true positives plus the number of false positives)
- macroRecall Number - Measures the model's ability to predict actual positive classes. It is the ratio between the predicted true positives and what was actually tagged. The recall metric reveals how many of the predicted classes are correct.
- microF1 Number - F1-score is a measure of a model’s accuracy on a dataset
- microPrecision Number - Precision refers to the number of true positives divided by the total number of positive predictions (i.e., the number of true positives plus the number of false positives)
- microRecall Number - Measures the model's ability to predict actual positive classes. It is the ratio between the predicted true positives and what was actually tagged. The recall metric reveals how many of the predicted classes are correct.
- weightedF1 Number - F1-score is a measure of a model’s accuracy on a dataset
- weightedPrecision Number - Precision refers to the number of true positives divided by the total number of positive predictions (i.e., the number of true positives plus the number of false positives)
- weightedRecall Number - Measures the model's ability to predict actual positive classes. It is the ratio between the predicted true positives and what was actually tagged. The recall metric reveals how many of the predicted classes are correct.
GetModelsModelCollectionItemModelDetail
C#
- ClassificationModes List<GetModelsModelCollectionItemModelDetailClassificationMode> - classification Modes
- LanguageCode string - Supported language; default value is en.
- ModelType string - Model type
- Version string - For pre-trained models this identifies the model type version used for model creation. For custom models, identifying the model by model ID is difficult; this parameter provides ease of use for the end customer. Format: <>::<>_<>::<>, e.g. ai-lang::NER_V1::CUSTOM-V0
Go
- ClassificationModes []GetModelsModelCollectionItemModelDetailClassificationMode - classification Modes
- LanguageCode string - Supported language; default value is en.
- ModelType string - Model type
- Version string - For pre-trained models this identifies the model type version used for model creation. For custom models, identifying the model by model ID is difficult; this parameter provides ease of use for the end customer. Format: <>::<>_<>::<>, e.g. ai-lang::NER_V1::CUSTOM-V0
Java
- classificationModes List<GetModelsModelCollectionItemModelDetailClassificationMode> - classification Modes
- languageCode String - Supported language; default value is en.
- modelType String - Model type
- version String - For pre-trained models this identifies the model type version used for model creation. For custom models, identifying the model by model ID is difficult; this parameter provides ease of use for the end customer. Format: <>::<>_<>::<>, e.g. ai-lang::NER_V1::CUSTOM-V0
TypeScript
- classificationModes GetModelsModelCollectionItemModelDetailClassificationMode[] - classification Modes
- languageCode string - Supported language; default value is en.
- modelType string - Model type
- version string - For pre-trained models this identifies the model type version used for model creation. For custom models, identifying the model by model ID is difficult; this parameter provides ease of use for the end customer. Format: <>::<>_<>::<>, e.g. ai-lang::NER_V1::CUSTOM-V0
Python
- classification_modes Sequence[ailanguage.GetModelsModelCollectionItemModelDetailClassificationMode] - classification Modes
- language_code str - Supported language; default value is en.
- model_type str - Model type
- version str - For pre-trained models this identifies the model type version used for model creation. For custom models, identifying the model by model ID is difficult; this parameter provides ease of use for the end customer. Format: <>::<>_<>::<>, e.g. ai-lang::NER_V1::CUSTOM-V0
YAML
- classificationModes List<Property Map> - classification Modes
- languageCode String - Supported language; default value is en.
- modelType String - Model type
- version String - For pre-trained models this identifies the model type version used for model creation. For custom models, identifying the model by model ID is difficult; this parameter provides ease of use for the end customer. Format: <>::<>_<>::<>, e.g. ai-lang::NER_V1::CUSTOM-V0
GetModelsModelCollectionItemModelDetailClassificationMode
C#
- ClassificationMode string - classification Modes
- Version string - For pre-trained models this identifies the model type version used for model creation. For custom models, identifying the model by model ID is difficult; this parameter provides ease of use for the end customer. Format: <>::<>_<>::<>, e.g. ai-lang::NER_V1::CUSTOM-V0
Go
- ClassificationMode string - classification Modes
- Version string - For pre-trained models this identifies the model type version used for model creation. For custom models, identifying the model by model ID is difficult; this parameter provides ease of use for the end customer. Format: <>::<>_<>::<>, e.g. ai-lang::NER_V1::CUSTOM-V0
Java
- classificationMode String - classification Modes
- version String - For pre-trained models this identifies the model type version used for model creation. For custom models, identifying the model by model ID is difficult; this parameter provides ease of use for the end customer. Format: <>::<>_<>::<>, e.g. ai-lang::NER_V1::CUSTOM-V0
TypeScript
- classificationMode string - classification Modes
- version string - For pre-trained models this identifies the model type version used for model creation. For custom models, identifying the model by model ID is difficult; this parameter provides ease of use for the end customer. Format: <>::<>_<>::<>, e.g. ai-lang::NER_V1::CUSTOM-V0
Python
- classification_mode str - classification Modes
- version str - For pre-trained models this identifies the model type version used for model creation. For custom models, identifying the model by model ID is difficult; this parameter provides ease of use for the end customer. Format: <>::<>_<>::<>, e.g. ai-lang::NER_V1::CUSTOM-V0
YAML
- classificationMode String - classification Modes
- version String - For pre-trained models this identifies the model type version used for model creation. For custom models, identifying the model by model ID is difficult; this parameter provides ease of use for the end customer. Format: <>::<>_<>::<>, e.g. ai-lang::NER_V1::CUSTOM-V0
GetModelsModelCollectionItemTestStrategy
C#
- StrategyType string - Defines the test strategy, i.e. the separate datasets used for testing and (optional) validation.
- TestingDatasets List<GetModelsModelCollectionItemTestStrategyTestingDataset> - Possible data set type
- ValidationDatasets List<GetModelsModelCollectionItemTestStrategyValidationDataset> - Possible data set type
Go
- StrategyType string - Defines the test strategy, i.e. the separate datasets used for testing and (optional) validation.
- TestingDatasets []GetModelsModelCollectionItemTestStrategyTestingDataset - Possible data set type
- ValidationDatasets []GetModelsModelCollectionItemTestStrategyValidationDataset - Possible data set type
Java
- strategyType String - Defines the test strategy, i.e. the separate datasets used for testing and (optional) validation.
- testingDatasets List<GetModelsModelCollectionItemTestStrategyTestingDataset> - Possible data set type
- validationDatasets List<GetModelsModelCollectionItemTestStrategyValidationDataset> - Possible data set type
TypeScript
- strategyType string - Defines the test strategy, i.e. the separate datasets used for testing and (optional) validation.
- testingDatasets GetModelsModelCollectionItemTestStrategyTestingDataset[] - Possible data set type
- validationDatasets GetModelsModelCollectionItemTestStrategyValidationDataset[] - Possible data set type
Python
- strategy_type str - Defines the test strategy, i.e. the separate datasets used for testing and (optional) validation.
- testing_datasets Sequence[ailanguage.GetModelsModelCollectionItemTestStrategyTestingDataset] - Possible data set type
- validation_datasets Sequence[ailanguage.GetModelsModelCollectionItemTestStrategyValidationDataset] - Possible data set type
YAML
- strategyType String - Defines the test strategy, i.e. the separate datasets used for testing and (optional) validation.
- testingDatasets List<Property Map> - Possible data set type
- validationDatasets List<Property Map> - Possible data set type
GetModelsModelCollectionItemTestStrategyTestingDataset
C#
- DatasetId string - Data Science Labelling Service OCID
- DatasetType string - Possible data sets
- LocationDetails List<GetModelsModelCollectionItemTestStrategyTestingDatasetLocationDetail> - Possible object storage location types
Go
- DatasetId string - Data Science Labelling Service OCID
- DatasetType string - Possible data sets
- LocationDetails []GetModelsModelCollectionItemTestStrategyTestingDatasetLocationDetail - Possible object storage location types
Java
- datasetId String - Data Science Labelling Service OCID
- datasetType String - Possible data sets
- locationDetails List<GetModelsModelCollectionItemTestStrategyTestingDatasetLocationDetail> - Possible object storage location types
TypeScript
- datasetId string - Data Science Labelling Service OCID
- datasetType string - Possible data sets
- locationDetails GetModelsModelCollectionItemTestStrategyTestingDatasetLocationDetail[] - Possible object storage location types
Python
- dataset_id str - Data Science Labelling Service OCID
- dataset_type str - Possible data sets
- location_details Sequence[ailanguage.GetModelsModelCollectionItemTestStrategyTestingDatasetLocationDetail] - Possible object storage location types
YAML
- datasetId String - Data Science Labelling Service OCID
- datasetType String - Possible data sets
- locationDetails List<Property Map> - Possible object storage location types
GetModelsModelCollectionItemTestStrategyTestingDatasetLocationDetail
C#
- Bucket string - Object storage bucket name
- LocationType string - Possible object storage location types
- Namespace string - Object storage namespace
- ObjectNames List<string> - Array of files which need to be processed in the bucket
Go
- Bucket string - Object storage bucket name
- LocationType string - Possible object storage location types
- Namespace string - Object storage namespace
- ObjectNames []string - Array of files which need to be processed in the bucket
Java
- bucket String - Object storage bucket name
- locationType String - Possible object storage location types
- namespace String - Object storage namespace
- objectNames List<String> - Array of files which need to be processed in the bucket
TypeScript
- bucket string - Object storage bucket name
- locationType string - Possible object storage location types
- namespace string - Object storage namespace
- objectNames string[] - Array of files which need to be processed in the bucket
Python
- bucket str - Object storage bucket name
- location_type str - Possible object storage location types
- namespace str - Object storage namespace
- object_names Sequence[str] - Array of files which need to be processed in the bucket
YAML
- bucket String - Object storage bucket name
- locationType String - Possible object storage location types
- namespace String - Object storage namespace
- objectNames List<String> - Array of files which need to be processed in the bucket
GetModelsModelCollectionItemTestStrategyValidationDataset
C#
- DatasetId string - Data Science Labelling Service OCID
- DatasetType string - Possible data sets
- LocationDetails List<GetModelsModelCollectionItemTestStrategyValidationDatasetLocationDetail> - Possible object storage location types
Go
- DatasetId string - Data Science Labelling Service OCID
- DatasetType string - Possible data sets
- LocationDetails []GetModelsModelCollectionItemTestStrategyValidationDatasetLocationDetail - Possible object storage location types
Java
- datasetId String - Data Science Labelling Service OCID
- datasetType String - Possible data sets
- locationDetails List<GetModelsModelCollectionItemTestStrategyValidationDatasetLocationDetail> - Possible object storage location types
TypeScript
- datasetId string - Data Science Labelling Service OCID
- datasetType string - Possible data sets
- locationDetails GetModelsModelCollectionItemTestStrategyValidationDatasetLocationDetail[] - Possible object storage location types
Python
- dataset_id str - Data Science Labelling Service OCID
- dataset_type str - Possible data sets
- location_details Sequence[ailanguage.GetModelsModelCollectionItemTestStrategyValidationDatasetLocationDetail] - Possible object storage location types
YAML
- datasetId String - Data Science Labelling Service OCID
- datasetType String - Possible data sets
- locationDetails List<Property Map> - Possible object storage location types
GetModelsModelCollectionItemTestStrategyValidationDatasetLocationDetail
C#
- Bucket string - Object storage bucket name
- LocationType string - Possible object storage location types
- Namespace string - Object storage namespace
- ObjectNames List<string> - Array of files which need to be processed in the bucket
Go
- Bucket string - Object storage bucket name
- LocationType string - Possible object storage location types
- Namespace string - Object storage namespace
- ObjectNames []string - Array of files which need to be processed in the bucket
Java
- bucket String - Object storage bucket name
- locationType String - Possible object storage location types
- namespace String - Object storage namespace
- objectNames List<String> - Array of files which need to be processed in the bucket
TypeScript
- bucket string - Object storage bucket name
- locationType string - Possible object storage location types
- namespace string - Object storage namespace
- objectNames string[] - Array of files which need to be processed in the bucket
Python
- bucket str - Object storage bucket name
- location_type str - Possible object storage location types
- namespace str - Object storage namespace
- object_names Sequence[str] - Array of files which need to be processed in the bucket
YAML
- bucket String - Object storage bucket name
- locationType String - Possible object storage location types
- namespace String - Object storage namespace
- objectNames List<String> - Array of files which need to be processed in the bucket
GetModelsModelCollectionItemTrainingDataset
C#
- DatasetId string - Data Science Labelling Service OCID
- DatasetType string - Possible data sets
- LocationDetails List<GetModelsModelCollectionItemTrainingDatasetLocationDetail> - Possible object storage location types
Go
- DatasetId string - Data Science Labelling Service OCID
- DatasetType string - Possible data sets
- LocationDetails []GetModelsModelCollectionItemTrainingDatasetLocationDetail - Possible object storage location types
Java
- datasetId String - Data Science Labelling Service OCID
- datasetType String - Possible data sets
- locationDetails List<GetModelsModelCollectionItemTrainingDatasetLocationDetail> - Possible object storage location types
TypeScript
- datasetId string - Data Science Labelling Service OCID
- datasetType string - Possible data sets
- locationDetails GetModelsModelCollectionItemTrainingDatasetLocationDetail[] - Possible object storage location types
Python
- dataset_id str - Data Science Labelling Service OCID
- dataset_type str - Possible data sets
- location_details Sequence[ailanguage.GetModelsModelCollectionItemTrainingDatasetLocationDetail] - Possible object storage location types
YAML
- datasetId String - Data Science Labelling Service OCID
- datasetType String - Possible data sets
- locationDetails List<Property Map> - Possible object storage location types
GetModelsModelCollectionItemTrainingDatasetLocationDetail
C#
- Bucket string - Object storage bucket name
- LocationType string - Possible object storage location types
- Namespace string - Object storage namespace
- ObjectNames List<string> - Array of files which need to be processed in the bucket
Go
- Bucket string - Object storage bucket name
- LocationType string - Possible object storage location types
- Namespace string - Object storage namespace
- ObjectNames []string - Array of files which need to be processed in the bucket
Java
- bucket String - Object storage bucket name
- locationType String - Possible object storage location types
- namespace String - Object storage namespace
- objectNames List<String> - Array of files which need to be processed in the bucket
TypeScript
- bucket string - Object storage bucket name
- locationType string - Possible object storage location types
- namespace string - Object storage namespace
- objectNames string[] - Array of files which need to be processed in the bucket
Python
- bucket str - Object storage bucket name
- location_type str - Possible object storage location types
- namespace str - Object storage namespace
- object_names Sequence[str] - Array of files which need to be processed in the bucket
YAML
- bucket String - Object storage bucket name
- locationType String - Possible object storage location types
- namespace String - Object storage namespace
- objectNames List<String> - Array of files which need to be processed in the bucket
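To tie the nested supporting types together, here is a hedged TypeScript sketch that surfaces the Object Storage location of each model's training data (again assuming each model collection exposes an items list, as noted earlier):

import * as oci from "@pulumi/oci";

export const trainingDataLocations = oci.AiLanguage.getModels({
    compartmentId: "ocid1.compartment.oc1..exampleuniqueid", // placeholder OCID
}).then(result =>
    result.modelCollections.flatMap(collection =>
        (collection.items ?? []).flatMap(item =>
            (item.trainingDatasets ?? []).flatMap(dataset =>
                (dataset.locationDetails ?? []).map(location => ({
                    model: item.displayName,
                    bucket: location.bucket,
                    namespace: location.namespace,
                    objectNames: location.objectNames,
                }))
            )
        )
    )
);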
Package Details
- Repository: oci pulumi/pulumi-oci
- License: Apache-2.0
- Notes: This Pulumi package is based on the oci Terraform Provider.