AutoML Server API. The resource names are assigned by the server. The server never reuses a name after the resource that carried it has been deleted. The ID of a resource is the last element of its resource name. For example, for `projects/{project_id}/locations/{location_id}/datasets/{dataset_id}`, the ID of the item is `{dataset_id}`. Currently the only supported `location_id` is "us-central1". On any input that is documented to expect a string parameter in snake_case or kebab-case, either of those cases is accepted.
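A minimal sketch of composing and decomposing these resource names in Python; the project and the dataset/model IDs below are hypothetical placeholders:

```python
project_id = "my-project"      # hypothetical project
location_id = "us-central1"    # currently the only supported location

# Resource names are hierarchical; the ID is always the last path element.
dataset_name = f"projects/{project_id}/locations/{location_id}/datasets/TST8765"
model_name = f"projects/{project_id}/locations/{location_id}/models/TCN4321"

dataset_id = dataset_name.split("/")[-1]  # -> "TST8765"
```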
Creates a dataset.
Request message for [AutoMl.CreateDataset][google.cloud.automl.v1.AutoMl.CreateDataset].
The resource name of the project to create the dataset for.
The dataset to create.
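As a rough sketch, creating a dataset with the `google-cloud-automl` Python client could look like the following; the project ID and the choice of metadata type are assumptions, and `create_dataset` returns a long-running operation:

```python
from google.cloud import automl_v1 as automl

client = automl.AutoMlClient()
parent = "projects/my-project/locations/us-central1"  # hypothetical project

dataset = automl.Dataset(
    display_name="my_text_dataset",
    text_classification_dataset_metadata=automl.TextClassificationDatasetMetadata(
        classification_type=automl.ClassificationType.MULTICLASS
    ),
)

# CreateDataset is a long-running operation; result() blocks until it is done.
operation = client.create_dataset(parent=parent, dataset=dataset)
created = operation.result(timeout=300)
print(created.name)  # .../datasets/{dataset_id}
```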
Gets a dataset.
Request message for [AutoMl.GetDataset][google.cloud.automl.v1.AutoMl.GetDataset].
The resource name of the dataset to retrieve.
Lists datasets in a project.
Request message for [AutoMl.ListDatasets][google.cloud.automl.v1.AutoMl.ListDatasets].
The resource name of the project from which to list datasets.
An expression for filtering the results of the request.
* `dataset_metadata` - for existence of the case (e.g. `image_classification_dataset_metadata:*`).
Some examples of using the filter are:
* `translation_dataset_metadata:*` --> The dataset has `translation_dataset_metadata`.
Requested page size. Server may return fewer results than requested. If unspecified, server will pick a default size.
A token identifying a page of results for the server to return. Typically obtained via [ListDatasetsResponse.next_page_token][google.cloud.automl.v1.ListDatasetsResponse.next_page_token] of the previous [AutoMl.ListDatasets][google.cloud.automl.v1.AutoMl.ListDatasets] call.
Response message for [AutoMl.ListDatasets][google.cloud.automl.v1.AutoMl.ListDatasets].
The datasets read.
A token to retrieve the next page of results. Pass to [ListDatasetsRequest.page_token][google.cloud.automl.v1.ListDatasetsRequest.page_token] to obtain that page.
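A sketch of listing datasets with a filter, reusing the hypothetical `client` and `parent` from the create-dataset sketch; the returned pager follows `next_page_token` automatically, so `page_token` rarely needs manual handling:

```python
# List only datasets that carry translation metadata.
request = automl.ListDatasetsRequest(
    parent=parent,
    filter="translation_dataset_metadata:*",
)
for dataset in client.list_datasets(request=request):
    print(dataset.name, dataset.display_name)
```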
Updates a dataset.
Request message for [AutoMl.UpdateDataset][google.cloud.automl.v1.AutoMl.UpdateDataset].
The dataset which replaces the resource on the server.
Required. The update mask applies to the resource.
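A sketch of a masked update under the same assumptions; only the fields named in the update mask are written, and the dataset ID here is a placeholder:

```python
from google.protobuf import field_mask_pb2

dataset = automl.Dataset(
    name=f"{parent}/datasets/TST0000000000000000000",  # hypothetical ID
    description="Updated description",
)
updated = client.update_dataset(
    dataset=dataset,
    update_mask=field_mask_pb2.FieldMask(paths=["description"]),
)
```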
Deletes a dataset and all of its contents. Returns an empty response in the [response][google.longrunning.Operation.response] field when it completes, and `delete_details` in the [metadata][google.longrunning.Operation.metadata] field.
Request message for [AutoMl.DeleteDataset][google.cloud.automl.v1.AutoMl.DeleteDataset].
The resource name of the dataset to delete.
Imports data into a dataset.
Request message for [AutoMl.ImportData][google.cloud.automl.v1.AutoMl.ImportData].
Required. Dataset name. Dataset must already exist. All imported annotations and examples will be added.
Required. The desired input location and its domain specific semantics, if any.
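A sketch of importing CSV-referenced training data from Google Cloud Storage into an existing dataset; the bucket, file, and dataset IDs are hypothetical, and the operation resolves to `google.protobuf.Empty` on success:

```python
input_config = automl.InputConfig(
    gcs_source=automl.GcsSource(input_uris=["gs://my-bucket/train.csv"])
)
operation = client.import_data(
    name=f"{parent}/datasets/TST0000000000000000000",
    input_config=input_config,
)
operation.result(timeout=3600)  # imports can take a while
```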
Exports dataset's data to the provided output location. Returns an empty response in the [response][google.longrunning.Operation.response] field when it completes.
Request message for [AutoMl.ExportData][google.cloud.automl.v1.AutoMl.ExportData].
Required. The resource name of the dataset.
Required. The desired output location.
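Exporting is symmetrical; a sketch under the same assumptions, writing the dataset's contents beneath a hypothetical GCS prefix:

```python
output_config = automl.OutputConfig(
    gcs_destination=automl.GcsDestination(output_uri_prefix="gs://my-bucket/export/")
)
operation = client.export_data(
    name=f"{parent}/datasets/TST0000000000000000000",
    output_config=output_config,
)
operation.result(timeout=3600)
```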
Gets an annotation spec.
Request message for [AutoMl.GetAnnotationSpec][google.cloud.automl.v1.AutoMl.GetAnnotationSpec].
The resource name of the annotation spec to retrieve.
A definition of an annotation spec.
Output only. Resource name of the annotation spec. Form: 'projects/{project_id}/locations/{location_id}/datasets/{dataset_id}/annotationSpecs/{annotation_spec_id}'
Required. The name of the annotation spec to show in the interface. The name can be up to 32 characters long and must match the regexp `[a-zA-Z0-9_]+`.
Output only. The number of examples in the parent dataset labeled by the annotation spec.
Creates a model. Returns a Model in the [response][google.longrunning.Operation.response] field when it completes. When you create a model, several model evaluations are created for it: a global evaluation, and one evaluation for each annotation spec.
Request message for [AutoMl.CreateModel][google.cloud.automl.v1.AutoMl.CreateModel].
Resource name of the parent project where the model is being created.
The model to create.
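A sketch of kicking off training under the same assumptions. Training commonly runs for hours, so storing the operation name and polling later via GetOperation may be preferable to blocking on `result()`:

```python
model = automl.Model(
    display_name="my_text_model",
    dataset_id="TST0000000000000000000",  # hypothetical dataset ID
    text_classification_model_metadata=automl.TextClassificationModelMetadata(),
)
operation = client.create_model(parent=parent, model=model)
print(operation.operation.name)  # poll via GetOperation, or block on operation.result()
```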
Gets a model.
Request message for [AutoMl.GetModel][google.cloud.automl.v1.AutoMl.GetModel].
Resource name of the model.
Lists models.
Request message for [AutoMl.ListModels][google.cloud.automl.v1.AutoMl.ListModels].
Resource name of the project from which to list the models.
An expression for filtering the results of the request.
* `model_metadata` - for existence of the case (e.g. `image_classification_model_metadata:*`).
* `dataset_id` - for = or !=.
Some examples of using the filter are:
* `image_classification_model_metadata:*` --> The model has `image_classification_model_metadata`.
* `dataset_id=5` --> The model was created from a dataset with ID 5.
Requested page size.
A token identifying a page of results for the server to return. Typically obtained via [ListModelsResponse.next_page_token][google.cloud.automl.v1.ListModelsResponse.next_page_token] of the previous [AutoMl.ListModels][google.cloud.automl.v1.AutoMl.ListModels] call.
Response message for [AutoMl.ListModels][google.cloud.automl.v1.AutoMl.ListModels].
List of models in the requested page.
A token to retrieve the next page of results. Pass to [ListModelsRequest.page_token][google.cloud.automl.v1.ListModelsRequest.page_token] to obtain that page.
Deletes a model. Returns `google.protobuf.Empty` in the [response][google.longrunning.Operation.response] field when it completes, and `delete_details` in the [metadata][google.longrunning.Operation.metadata] field.
Request message for [AutoMl.DeleteModel][google.cloud.automl.v1.AutoMl.DeleteModel].
Resource name of the model being deleted.
Updates a model.
Request message for [AutoMl.UpdateModel][google.cloud.automl.v1.AutoMl.UpdateModel].
The model which replaces the resource on the server.
Required. The update mask applies to the resource.
Deploys a model. If a model is already deployed, deploying it with the same parameters has no effect. Deploying with different parameters (e.g. changing [node_count][google.cloud.automl.v1.ImageObjectDetectionModelDeploymentMetadata.node_count]) will reset the deployment state without pausing the model's availability. Only applicable for Text Classification and Image Object Detection; all other domains manage deployment automatically. Returns an empty response in the [response][google.longrunning.Operation.response] field when it completes.
Request message for [AutoMl.DeployModel][google.cloud.automl.v1.AutoMl.DeployModel].
The per-domain specific deployment parameters.
Model deployment metadata specific to Image Object Detection.
Model deployment metadata specific to Image Classification.
Resource name of the model to deploy.
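A sketch of deploying an image object detection model on two nodes; the model ID and `node_count` are placeholders. For domains without deployment metadata, a `DeployModelRequest` carrying only `name` suffices:

```python
request = automl.DeployModelRequest(
    name=f"{parent}/models/IOD0000000000000000000",
    image_object_detection_model_deployment_metadata=(
        automl.ImageObjectDetectionModelDeploymentMetadata(node_count=2)
    ),
)
client.deploy_model(request=request).result(timeout=1800)
```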
Undeploys a model. If the model is not deployed, this method has no effect. Only applicable for Text Classification and Image Object Detection; all other domains manage deployment automatically. Returns an empty response in the [response][google.longrunning.Operation.response] field when it completes.
Request message for [AutoMl.UndeployModel][google.cloud.automl.v1.AutoMl.UndeployModel].
Resource name of the model to undeploy.
Exports a trained, exportable model to a user-specified Google Cloud Storage location. A model is considered exportable if and only if it has an export format defined for it in [ModelExportOutputConfig][google.cloud.automl.v1.ModelExportOutputConfig]. Returns an empty response in the [response][google.longrunning.Operation.response] field when it completes.
Request message for [AutoMl.ExportModel][google.cloud.automl.v1.AutoMl.ExportModel]. Models need to be enabled for exporting, otherwise an error code will be returned.
Required. The resource name of the model to export.
Required. The desired output location and configuration.
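A sketch of exporting a model; the `"tflite"` format is an assumption and depends on which export formats the particular model supports:

```python
request = automl.ExportModelRequest(
    name=f"{parent}/models/IOD0000000000000000000",
    output_config=automl.ModelExportOutputConfig(
        gcs_destination=automl.GcsDestination(
            output_uri_prefix="gs://my-bucket/model-export/"
        ),
        model_format="tflite",  # assumed to be supported by this model
    ),
)
client.export_model(request=request).result(timeout=1800)
```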
Gets a model evaluation.
Request message for [AutoMl.GetModelEvaluation][google.cloud.automl.v1.AutoMl.GetModelEvaluation].
Resource name for the model evaluation.
Lists model evaluations.
Request message for [AutoMl.ListModelEvaluations][google.cloud.automl.v1.AutoMl.ListModelEvaluations].
Resource name of the model to list the model evaluations for. If the `{model_id}` segment is set to "-", model evaluations are listed from across all models of the parent location.
An expression for filtering the results of the request.
* `annotation_spec_id` - for =, !=, or existence. See the last example below.
Some examples of using the filter are:
* `annotation_spec_id!=4` --> The model evaluation was done for an annotation spec with an ID different than 4.
* `NOT annotation_spec_id:*` --> The model evaluation was done for the aggregate of all annotation specs.
Requested page size.
A token identifying a page of results for the server to return. Typically obtained via [ListModelEvaluationsResponse.next_page_token][google.cloud.automl.v1.ListModelEvaluationsResponse.next_page_token] of the previous [AutoMl.ListModelEvaluations][google.cloud.automl.v1.AutoMl.ListModelEvaluations] call.
Response message for [AutoMl.ListModelEvaluations][google.cloud.automl.v1.AutoMl.ListModelEvaluations].
List of model evaluations in the requested page.
A token to retrieve the next page of results. Pass to the [ListModelEvaluationsRequest.page_token][google.cloud.automl.v1.ListModelEvaluationsRequest.page_token] field of a new [AutoMl.ListModelEvaluations][google.cloud.automl.v1.AutoMl.ListModelEvaluations] request to obtain that page.
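A sketch of iterating over a model's evaluations under the same assumptions; an empty filter returns the overall evaluation (with an empty `annotation_spec_id`) plus one evaluation per annotation spec:

```python
model_name = f"{parent}/models/TCN0000000000000000000"  # hypothetical model
for evaluation in client.list_model_evaluations(parent=model_name, filter=""):
    label = evaluation.annotation_spec_id or "<overall>"
    print(label, evaluation.classification_evaluation_metrics.au_prc)
```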
AutoML Prediction API. On any input that is documented to expect a string parameter in snake_case or kebab-case, either of those cases is accepted.
Perform an online prediction. The prediction result will be directly returned in the response. Available for the following ML problems, and their expected request payloads:
* Image Classification - Image in .JPEG, .GIF or .PNG format, `image_bytes` up to 30MB.
* Image Object Detection - Image in .JPEG, .GIF or .PNG format, `image_bytes` up to 30MB.
* Text Classification - TextSnippet, content up to 60,000 characters, UTF-8 encoded.
* Text Extraction - TextSnippet, content up to 30,000 characters, UTF-8 NFC encoded.
* Translation - TextSnippet, content up to 25,000 characters, UTF-8 encoded.
* Text Sentiment - TextSnippet, content up to 500 characters, UTF-8 encoded.
Request message for [PredictionService.Predict][google.cloud.automl.v1.PredictionService.Predict].
Name of the model requested to serve the prediction.
Required. Payload to perform a prediction on. The payload must match the problem type that the model was trained to solve.
Additional domain-specific parameters; any string must be up to 25,000 characters long.
* For Image Classification: `score_threshold` - (float) A value from 0.0 to 1.0. When the model makes predictions for an image, it will only produce results that have at least this confidence score. The default is 0.5.
* For Image Object Detection: `score_threshold` - (float) When the model detects objects on the image, it will only produce bounding boxes which have at least this confidence score. Value in the 0 to 1 range; default is 0.5. `max_bounding_box_count` - (int64) No more than this number of bounding boxes will be returned in the response. Default is 100; the requested value may be limited by the server.
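A sketch of an online image classification prediction; the image path, model ID, and threshold are placeholders. Note that `params` values are passed as strings:

```python
prediction_client = automl.PredictionServiceClient()

with open("flower.jpg", "rb") as f:
    payload = automl.ExamplePayload(image=automl.Image(image_bytes=f.read()))

response = prediction_client.predict(
    name=f"{parent}/models/ICN0000000000000000000",
    payload=payload,
    params={"score_threshold": "0.6"},  # string-valued, per the docs above
)
for annotation in response.payload:
    print(annotation.display_name, annotation.classification.score)
```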
Response message for [PredictionService.Predict][google.cloud.automl.v1.PredictionService.Predict].
Prediction result. Translation and Text Sentiment will return precisely one payload.
The preprocessed example that AutoML actually makes the prediction on. Empty if AutoML does not preprocess the input example.
* For Text Extraction: If the input is a .pdf file, the OCR'ed text will be provided in [document_text][google.cloud.automl.v1.Document.document_text].
* For Text Classification: If the input is a .pdf file, the OCR'ed truncated text will be provided in [document_text][google.cloud.automl.v1.Document.document_text].
* For Text Sentiment: If the input is a .pdf file, the OCR'ed truncated text will be provided in [document_text][google.cloud.automl.v1.Document.document_text].
Additional domain-specific prediction response metadata.
* For Image Object Detection: `max_bounding_box_count` - (int64) At most that many bounding boxes per image could have been returned.
* For Text Sentiment: `sentiment_score` - (float, deprecated) A value between -1 and 1, where -1 maps to the least positive sentiment and 1 maps to the most positive one; the higher the score, the more positive the sentiment in the document. These values are relative to the training data, so e.g. if all training data was positive then -1 is also positive (though the least positive). The `sentiment_score` shouldn't be confused with "score" or "magnitude" from the previous Natural Language Sentiment Analysis API.
Perform a batch prediction. Unlike the online [Predict][google.cloud.automl.v1.PredictionService.Predict], the batch prediction result won't be immediately available in the response. Instead, a long-running operation object is returned. The user can poll the operation result via the [GetOperation][google.longrunning.Operations.GetOperation] method. Once the operation is done, [BatchPredictResult][google.cloud.automl.v1.BatchPredictResult] is returned in the [response][google.longrunning.Operation.response] field. Available for the following ML problems:
* Image Classification
* Image Object Detection
* Text Extraction
Request message for [PredictionService.BatchPredict][google.cloud.automl.v1.PredictionService.BatchPredict].
Name of the model requested to serve the batch prediction.
Required. The input configuration for batch prediction.
Required. The configuration specifying where output predictions should be written.
Additional domain-specific parameters for the predictions; any string must be up to 25,000 characters long.
* For Text Classification: `score_threshold` - (float) A value from 0.0 to 1.0. When the model makes predictions for a text snippet, it will only produce results that have at least this confidence score. The default is 0.5.
* For Image Classification: `score_threshold` - (float) A value from 0.0 to 1.0. When the model makes predictions for an image, it will only produce results that have at least this confidence score. The default is 0.5.
* For Image Object Detection: `score_threshold` - (float) When the model detects objects on the image, it will only produce bounding boxes which have at least this confidence score. Value in the 0 to 1 range; default is 0.5. `max_bounding_box_count` - (int64) No more than this number of bounding boxes will be produced per image. Default is 100; the requested value may be limited by the server.
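A sketch of a batch prediction over GCS inputs, reusing the hypothetical `prediction_client` from the online-prediction sketch; results land under `output_uri_prefix` as described in [BatchPredictOutputConfig][google.cloud.automl.v1.BatchPredictOutputConfig]:

```python
request = automl.BatchPredictRequest(
    name=f"{parent}/models/TEN0000000000000000000",
    input_config=automl.BatchPredictInputConfig(
        gcs_source=automl.GcsSource(input_uris=["gs://my-bucket/inputs.jsonl"])
    ),
    output_config=automl.BatchPredictOutputConfig(
        gcs_destination=automl.GcsDestination(
            output_uri_prefix="gs://my-bucket/results/"
        )
    ),
    params={},
)
batch_result = prediction_client.batch_predict(request=request).result(timeout=7200)
```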
Contains annotation information that is relevant to AutoML.
Used in:
Output only. Additional information about the annotation specific to the AutoML domain.
Annotation details for translation.
Annotation details for content or image classification.
Annotation details for image object detection.
Annotation details for text extraction.
Annotation details for text sentiment.
Output only. The resource ID of the annotation spec that this annotation pertains to. The annotation spec comes from either an ancestor dataset, or the dataset that was used to train the model in use.
Output only. The value of [display_name][google.cloud.automl.v1.AnnotationSpec.display_name] when the model was trained. Because this field returns a value at model training time, different models trained from the same dataset may return different values, since the model owner could update the `display_name` between any two model trainings.
Input configuration for the BatchPredict action. The format of the input depends on the ML problem of the model used for prediction. As the input source, [gcs_source][google.cloud.automl.v1.InputConfig.gcs_source] is expected, unless specified otherwise. The formats are represented in EBNF with commas being literal and with non-terminal symbols defined near the end of this comment. The formats are:

**AutoML Natural Language**

*Classification:* One or more CSV files where each line is a single column: GCS_FILE_PATH. `GCS_FILE_PATH` is the Google Cloud Storage location of a text file. Supported file extensions: .TXT, .PDF. Text files can be no larger than 10MB in size. Sample rows:
gs://folder/text1.txt
gs://folder/text2.pdf

*Sentiment Analysis:* One or more CSV files where each line is a single column: GCS_FILE_PATH. `GCS_FILE_PATH` is the Google Cloud Storage location of a text file. Supported file extensions: .TXT, .PDF. Text files can be no larger than 128kB in size. Sample rows:
gs://folder/text1.txt
gs://folder/text2.pdf

*Entity Extraction:* One or more JSONL (JSON Lines) files that either provide inline text or documents. You can only use one format, either inline text or documents, for a single call to [AutoMl.BatchPredict]. Each inline JSONL file contains, per line, a proto that wraps a temporary user-assigned TextSnippet ID (string up to 2000 characters long) called "id", a TextSnippet proto (in JSON representation), and zero or more TextFeature protos. Any given text snippet content must have 30,000 characters or less, and also be UTF-8 NFC encoded (ASCII already is). The IDs provided should be unique. Each document JSONL file contains, per line, a proto that wraps a Document proto with `input_config` set. Only PDF documents are currently supported, and each PDF document cannot exceed 2MB in size. Each JSONL file must not exceed 100MB in size, and no more than 20 JSONL files may be passed. Sample inline JSONL file (shown with artificial line breaks; actual line breaks are denoted by "\n"):
{ "id": "my_first_id", "text_snippet": { "content": "dog car cat"}, "text_features": [ { "text_segment": {"start_offset": 4, "end_offset": 6}, "structural_type": PARAGRAPH, "bounding_poly": { "normalized_vertices": [ {"x": 0.1, "y": 0.1}, {"x": 0.1, "y": 0.3}, {"x": 0.3, "y": 0.3}, {"x": 0.3, "y": 0.1}, ] }, } ], }\n
{ "id": "2", "text_snippet": { "content": "Extended sample content", "mime_type": "text/plain" } }
Sample document JSONL file (shown with artificial line breaks; actual line breaks are denoted by "\n"):
{ "document": { "input_config": { "gcs_source": { "input_uris": [ "gs://folder/document1.pdf" ] } } } }\n
{ "document": { "input_config": { "gcs_source": { "input_uris": [ "gs://folder/document2.pdf" ] } } } }

**Input field definitions:**
`GCS_FILE_PATH` : The path to a file on Google Cloud Storage. For example, "gs://folder/video.avi".

**Errors:** If any of the provided CSV files can't be parsed, or if more than a certain percentage of CSV rows cannot be processed, then the operation fails and prediction does not happen. Regardless of overall success or failure, the per-row failures, up to a certain count cap, will be listed in Operation.metadata.partial_failures.
Used in:
The source of the input.
Required. The Google Cloud Storage location for the input content.
Details of BatchPredict operation.
Used in:
Output only. The input config that was given upon starting this batch predict operation.
Output only. Information further describing this batch predict's output.
Further describes this batch predict's output. Supplements [BatchPredictOutputConfig][google.cloud.automl.v1.BatchPredictOutputConfig].
Used in:
The output location into which prediction output is written.
The full path of the Google Cloud Storage directory created, into which the prediction output is written.
Output configuration for the BatchPredict action. As the destination, [gcs_destination][google.cloud.automl.v1.BatchPredictOutputConfig.gcs_destination] must be set unless specified otherwise for a domain. If `gcs_destination` is set, then in the given directory a new directory is created. Its name will be "prediction-<model-display-name>-<timestamp-of-prediction-call>", where the timestamp is in YYYY-MM-DDThh:mm:ss.sssZ ISO-8601 format. Its contents depend on the ML problem the predictions are made for.
* For Text Classification: In the created directory, files `text_classification_1.jsonl`, `text_classification_2.jsonl`, ..., `text_classification_N.jsonl` will be created, where N may be 1 and depends on the total number of inputs and annotations found. Each .JSONL file will contain, per line, a JSON representation of a proto that wraps the input text (or PDF) file in the text snippet (or document) proto, and a list of zero or more AnnotationPayload protos (called annotations), which have classification detail populated. A single text (or PDF) file will be listed only once with all its annotations, and its annotations will never be split across files. If prediction for any text (or PDF) file failed (partially or completely), then additional `errors_1.jsonl`, `errors_2.jsonl`, ..., `errors_N.jsonl` files will be created (N depends on the total number of failed predictions). These files will have a JSON representation of a proto that wraps the input text (or PDF) file, followed by exactly one [`google.rpc.Status`](https://github.com/googleapis/googleapis/blob/master/google/rpc/status.proto) containing only `code` and `message`.
* For Text Sentiment: In the created directory, files `text_sentiment_1.jsonl`, `text_sentiment_2.jsonl`, ..., `text_sentiment_N.jsonl` will be created, where N may be 1 and depends on the total number of inputs and annotations found. Each .JSONL file will contain, per line, a JSON representation of a proto that wraps the input text (or PDF) file in the text snippet (or document) proto, and a list of zero or more AnnotationPayload protos (called annotations), which have text_sentiment detail populated. A single text (or PDF) file will be listed only once with all its annotations, and its annotations will never be split across files. If prediction for any text (or PDF) file failed (partially or completely), then additional `errors_1.jsonl`, `errors_2.jsonl`, ..., `errors_N.jsonl` files will be created (N depends on the total number of failed predictions). These files will have a JSON representation of a proto that wraps the input text (or PDF) file, followed by exactly one [`google.rpc.Status`](https://github.com/googleapis/googleapis/blob/master/google/rpc/status.proto) containing only `code` and `message`.
* For Text Extraction: In the created directory, files `text_extraction_1.jsonl`, `text_extraction_2.jsonl`, ..., `text_extraction_N.jsonl` will be created, where N may be 1 and depends on the total number of inputs and annotations found. The contents of these .JSONL file(s) depend on whether the input used inline text or documents. If the input was inline, then each .JSONL file will contain, per line, a JSON representation of a proto that wraps the request-given text snippet's "id" (if specified), followed by the input text snippet, and a list of zero or more AnnotationPayload protos (called annotations), which have text_extraction detail populated. A single text snippet will be listed only once with all its annotations, and its annotations will never be split across files. If the input used documents, then each .JSONL file will contain, per line, a JSON representation of a proto that wraps the request-given document proto, followed by its OCR'ed representation in the form of a text snippet, finally followed by a list of zero or more AnnotationPayload protos (called annotations), which have text_extraction detail populated and refer, via their indices, to the OCR'ed text snippet. A single document (and its text snippet) will be listed only once with all its annotations, and its annotations will never be split across files. If prediction for any text snippet failed (partially or completely), then additional `errors_1.jsonl`, `errors_2.jsonl`, ..., `errors_N.jsonl` files will be created (N depends on the total number of failed predictions). These files will have a JSON representation of a proto that wraps either the "id" : "<id_value>" (in the inline case) or the document proto (in the document case), followed by exactly one [`google.rpc.Status`](https://github.com/googleapis/googleapis/blob/master/google/rpc/status.proto) containing only `code` and `message`.
Used in:
The destination of the output.
Required. The Google Cloud Storage location of the directory where the output is to be written to.
Result of the Batch Predict. This message is returned in [response][google.longrunning.Operation.response] of the operation returned by the [PredictionService.BatchPredict][google.cloud.automl.v1.PredictionService.BatchPredict].
Additional domain-specific prediction response metadata. * For Image Object Detection: `max_bounding_box_count` - (int64) At most that many bounding boxes per image could have been returned.
Bounding box matching model metrics for a single intersection-over-union threshold and multiple label match confidence thresholds.
Used in:
Output only. The intersection-over-union threshold value used to compute this metrics entry.
Output only. The mean average precision, most often close to au_prc.
Output only. Metrics for each label-match confidence_threshold from 0.05,0.10,...,0.95,0.96,0.97,0.98,0.99. The precision-recall curve is derived from them.
Metrics for a single confidence threshold.
Used in:
Output only. The confidence threshold value used to compute the metrics.
Output only. Recall under the given confidence threshold.
Output only. Precision under the given confidence threshold.
Output only. The harmonic mean of recall and precision.
A bounding polygon of a detected object on a plane. On output both vertices and normalized_vertices are provided. The polygon is formed by connecting vertices in the order they are listed.
Used in:
Output only. The bounding polygon's normalized vertices.
Contains annotation details specific to classification.
Used in:
Output only. A confidence estimate between 0.0 and 1.0. A higher value means greater confidence that the annotation is positive. If a user approves an annotation as negative or positive, the score value remains unchanged. If a user creates an annotation, the score is 0 for negative or 1 for positive.
Model evaluation metrics for classification problems.
Used in:
Output only. The Area Under Precision-Recall Curve metric. Micro-averaged for the overall evaluation.
Output only. The Area Under Receiver Operating Characteristic curve metric. Micro-averaged for the overall evaluation.
Output only. The Log Loss metric.
Output only. Metrics for each confidence_threshold in 0.00,0.05,0.10,...,0.95,0.96,0.97,0.98,0.99 and position_threshold = INT32_MAX_VALUE. ROC and precision-recall curves, and other aggregated metrics are derived from them. The confidence metrics entries may also be supplied for additional values of position_threshold, but from these no aggregated metrics are computed.
Output only. Confusion matrix of the evaluation. Only set for MULTICLASS classification problems where number of labels is no more than 10. Only set for model level evaluation, not for evaluation per label.
Output only. The annotation spec ids used for this evaluation.
Metrics for a single confidence threshold.
Used in:
Output only. Metrics are computed with an assumption that the model never returns predictions with score lower than this value.
Output only. Metrics are computed with an assumption that the model always returns at most this many predictions (ordered by their score, descendingly), but they all still need to meet the confidence_threshold.
Output only. Recall (True Positive Rate) for the given confidence threshold.
Output only. Precision for the given confidence threshold.
Output only. False Positive Rate for the given confidence threshold.
Output only. The harmonic mean of recall and precision.
Output only. The Recall (True Positive Rate) when only considering the label that has the highest prediction score and not below the confidence threshold for each example.
Output only. The precision when only considering the label that has the highest prediction score and not below the confidence threshold for each example.
Output only. The False Positive Rate when only considering the label that has the highest prediction score and not below the confidence threshold for each example.
Output only. The harmonic mean of [recall_at1][google.cloud.automl.v1.ClassificationEvaluationMetrics.ConfidenceMetricsEntry.recall_at1] and [precision_at1][google.cloud.automl.v1.ClassificationEvaluationMetrics.ConfidenceMetricsEntry.precision_at1].
Output only. The number of model created labels that match a ground truth label.
Output only. The number of model created labels that do not match a ground truth label.
Output only. The number of ground truth labels that are not matched by a model created label.
Output only. The number of labels that the model did not create and that, had they been created, would not have matched a ground truth label.
Confusion matrix of the model running the classification.
Used in:
Output only. IDs of the annotation specs used in the confusion matrix.
Output only. Display name of the annotation specs used in the confusion matrix, as they were at the moment of the evaluation.
Output only. Rows in the confusion matrix. The number of rows is equal to the size of `annotation_spec_id`. `row[i].example_count[j]` is the number of examples that have ground truth of the `annotation_spec_id[i]` and are predicted as `annotation_spec_id[j]` by the model being evaluated.
Output only. A row in the confusion matrix.
Used in:
Output only. Value of the specific cell in the confusion matrix. The number of values each row has (i.e. the length of the row) is equal to the length of the `annotation_spec_id` field or, if that one is not populated, length of the [display_name][google.cloud.automl.v1.ClassificationEvaluationMetrics.ConfusionMatrix.display_name] field.
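As an illustration of the row/column convention above (row i = ground truth `annotation_spec_id[i]`, column j = predicted `annotation_spec_id[j]`), here is a small hypothetical helper deriving per-label precision and recall from such a matrix:

```python
def per_label_metrics(matrix):
    """matrix is a ClassificationEvaluationMetrics.ConfusionMatrix."""
    n = len(matrix.row)
    names = list(matrix.display_name) or list(matrix.annotation_spec_id)
    for j in range(n):
        true_positives = matrix.row[j].example_count[j]       # diagonal cell
        predicted_j = sum(matrix.row[i].example_count[j] for i in range(n))
        actual_j = sum(matrix.row[j].example_count)            # row total
        precision = true_positives / predicted_j if predicted_j else 0.0
        recall = true_positives / actual_j if actual_j else 0.0
        print(f"{names[j]}: precision={precision:.3f} recall={recall:.3f}")
```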
Type of the classification problem.
Used in:
An unset value of this enum.
At most one label is allowed per example.
Multiple labels are allowed for one example.
Details of CreateDataset operation.
Used in:
(message has no fields)
Details of CreateModel operation.
Used in:
(message has no fields)
A workspace for solving a single, particular machine learning (ML) problem. A workspace contains examples that may be annotated.
Used as response type in: AutoMl.GetDataset, AutoMl.UpdateDataset
Used as field type in:
Required. The dataset metadata that is specific to the problem type.
Metadata for a dataset used for translation.
Metadata for a dataset used for image classification.
Metadata for a dataset used for text classification.
Metadata for a dataset used for image object detection.
Metadata for a dataset used for text extraction.
Metadata for a dataset used for text sentiment.
Output only. The resource name of the dataset. Form: `projects/{project_id}/locations/{location_id}/datasets/{dataset_id}`
Required. The name of the dataset to show in the interface. The name can be up to 32 characters long and can consist only of ASCII Latin letters A-Z and a-z, underscores (_), and ASCII digits 0-9.
User-provided description of the dataset. The description can be up to 25000 characters long.
Output only. The number of examples in the dataset.
Output only. Timestamp when this dataset was created.
Used to perform consistent read-modify-write updates. If not set, a blind "overwrite" update happens.
Optional. The labels with user-defined metadata to organize your dataset. Label keys and values can be no longer than 64 characters (Unicode codepoints), can only contain lowercase letters, numeric characters, underscores and dashes. International characters are allowed. Label values are optional. Label keys must start with a letter. See https://goo.gl/xmQnxf for more information on and examples of labels.
Details of operations that perform deletes of any entities.
Used in:
(message has no fields)
Details of DeployModel operation.
Used in:
(message has no fields)
A structured text document, e.g. a PDF.
Used in:
An input config specifying the content of the document.
The plain text version of this document.
Describes the layout of the document. Sorted by [page_number][].
The dimensions of the page in the document.
Number of pages in the document.
Describes the layout information of a [text_segment][google.cloud.automl.v1.Document.Layout.text_segment] in the document.
Used in:
Text Segment that represents a segment in [document_text][google.cloud.automl.v1.Document.document_text].
Page number of the [text_segment][google.cloud.automl.v1.Document.Layout.text_segment] in the original document, starts from 1.
The position of the [text_segment][google.cloud.automl.v1.Document.Layout.text_segment] in the page. Contains exactly 4 [normalized_vertices][google.cloud.automl.v1.BoundingPoly.normalized_vertices], which are connected by edges in the order provided and represent a rectangle parallel to the frame. The [NormalizedVertex-s][google.cloud.automl.v1.NormalizedVertex] are relative to the page. Coordinates are based on top-left as point (0,0).
The type of the [text_segment][google.cloud.automl.v1.Document.Layout.text_segment] in the document.
The type of TextSegment in the context of the original document.
Used in:
Should not be used.
The text segment is a token, e.g. a word.
The text segment is a paragraph.
The text segment is a form field.
The text segment is the name part of a form field. It will be treated as a child of another FORM_FIELD TextSegment if its span is a subspan of another TextSegment with type FORM_FIELD.
The text segment is the text content part of a form field. It will be treated as a child of another FORM_FIELD TextSegment if its span is a subspan of another TextSegment with type FORM_FIELD.
The text segment is a whole table, including headers, and all rows.
The text segment is a table's headers. It will be treated as a child of another TABLE TextSegment if its span is a subspan of another TextSegment with type TABLE.
The text segment is a row in a table. It will be treated as a child of another TABLE TextSegment if its span is a subspan of another TextSegment with type TABLE.
The text segment is a cell in a table. It will be treated as a child of another TABLE_ROW TextSegment if its span is a subspan of another TextSegment with type TABLE_ROW.
Message that describes the dimensions of a document.
Used in:
Unit of the dimension.
Width value of the document, works together with the unit.
Height value of the document, works together with the unit.
Unit of the document dimension.
Used in:
Should not be used.
Document dimension is measured in inches.
Document dimension is measured in centimeters.
Document dimension is measured in points. 72 points = 1 inch.
Input configuration of a [Document][google.cloud.automl.v1.Document].
Used in:
The Google Cloud Storage location of the document file. Only a single path should be given. Max supported size: 512MB. Supported extensions: .PDF.
Example data used for training or prediction.
Used in:
Required. Input only. The example data.
Example image.
Example text.
Example document.
Details of ExportData operation.
Used in:
Output only. Information further describing this export data's output.
Further describes this export data's output. Supplements [OutputConfig][google.cloud.automl.v1.OutputConfig].
Used in:
The output location to which the exported data is written.
The full path of the Google Cloud Storage directory created, into which the exported data is written.
Details of ExportModel operation.
Used in:
Output only. Information further describing the output of this model export.
Further describes the output of model export. Supplements [ModelExportOutputConfig][google.cloud.automl.v1.ModelExportOutputConfig].
Used in:
The full path of the Google Cloud Storage directory created, into which the model will be exported.
The Google Cloud Storage location where the output is to be written to.
Used in:
Required. Google Cloud Storage URI of the output directory, up to 2000 characters long. Accepted forms: * Prefix path: gs://bucket/directory The requesting user must have write permission to the bucket. The directory is created if it doesn't exist.
The Google Cloud Storage location for the input content.
Used in:
Required. Google Cloud Storage URIs of input files, up to 2000 characters long. Accepted forms: * Full object path, e.g. gs://bucket/directory/object.csv
A representation of an image. Only images up to 30MB in size are supported.
Used in:
Input only. The data representing the image. For Predict calls, [image_bytes][google.cloud.automl.v1.Image.image_bytes] must be set, as other options are not currently supported by the prediction API. You can read the contents of an uploaded image by using the [content_uri][google.cloud.automl.v1.Image.content_uri] field.
Image content represented as a stream of bytes. Note: As with all `bytes` fields, protocol buffers use a pure binary representation, whereas JSON representations use base64.
An input config specifying the content of the image.
Output only. HTTP URI to the thumbnail image.
Dataset metadata that is specific to image classification.
Used in:
Required. Type of the classification problem.
Model deployment metadata specific to Image Classification.
Used in:
Input only. The number of nodes to deploy the model on. A node is an abstraction of a machine resource, which can handle online prediction QPS as given in the model's [node_qps][google.cloud.automl.v1.ImageClassificationModelMetadata.node_qps]. Must be between 1 and 100, inclusive on both ends.
Model metadata for image classification.
Used in:
Optional. The ID of the `base` model. If it is specified, the new model will be created based on the `base` model. Otherwise, the new model will be created from scratch. The `base` model must be in the same `project` and `location` as the new model to create, and have the same `model_type`.
The train budget of creating this model, expressed in milli node hours, i.e. a value of 1,000 in this field means 1 node hour. The actual `train_cost` will be equal to or less than this value. If further model training ceases to provide any improvements, it will stop without using the full budget and the stop_reason will be `MODEL_CONVERGED`. Note: node_hour = actual_hour * number_of_nodes_involved. For model type `cloud` (default), the train budget must be between 8,000 and 800,000 milli node hours, inclusive. The default value is 192,000, which represents one day in wall time. For model types `mobile-low-latency-1`, `mobile-versatile-1`, `mobile-high-accuracy-1`, `mobile-core-ml-low-latency-1`, `mobile-core-ml-versatile-1`, `mobile-core-ml-high-accuracy-1`, the train budget must be between 1,000 and 100,000 milli node hours, inclusive. The default value is 24,000, which represents one day in wall time.
Output only. The actual train cost of creating this model, expressed in milli node hours, i.e. 1,000 value in this field means 1 node hour. Guaranteed to not exceed the train budget.
Output only. The reason that this create model operation stopped, e.g. `BUDGET_REACHED`, `MODEL_CONVERGED`.
Optional. Type of the model. The available values are:
* `cloud` - Model to be used via prediction calls to AutoML API. This is the default value.
* `mobile-low-latency-1` - A model that, in addition to providing prediction via AutoML API, can also be exported (see [AutoMl.ExportModel][google.cloud.automl.v1.AutoMl.ExportModel]) and used on a mobile or edge device with TensorFlow afterwards. Expected to have low latency, but may have lower prediction quality than other models.
* `mobile-versatile-1` - A model that, in addition to providing prediction via AutoML API, can also be exported (see [AutoMl.ExportModel][google.cloud.automl.v1.AutoMl.ExportModel]) and used on a mobile or edge device with TensorFlow afterwards.
* `mobile-high-accuracy-1` - A model that, in addition to providing prediction via AutoML API, can also be exported (see [AutoMl.ExportModel][google.cloud.automl.v1.AutoMl.ExportModel]) and used on a mobile or edge device with TensorFlow afterwards. Expected to have a higher latency, but should also have a higher prediction quality than other models.
* `mobile-core-ml-low-latency-1` - A model that, in addition to providing prediction via AutoML API, can also be exported (see [AutoMl.ExportModel][google.cloud.automl.v1.AutoMl.ExportModel]) and used on a mobile device with Core ML afterwards. Expected to have low latency, but may have lower prediction quality than other models.
* `mobile-core-ml-versatile-1` - A model that, in addition to providing prediction via AutoML API, can also be exported (see [AutoMl.ExportModel][google.cloud.automl.v1.AutoMl.ExportModel]) and used on a mobile device with Core ML afterwards.
* `mobile-core-ml-high-accuracy-1` - A model that, in addition to providing prediction via AutoML API, can also be exported (see [AutoMl.ExportModel][google.cloud.automl.v1.AutoMl.ExportModel]) and used on a mobile device with Core ML afterwards. Expected to have a higher latency, but should also have a higher prediction quality than other models.
Output only. An approximate number of online prediction QPS that can be supported by this model per each node on which it is deployed.
Output only. The number of nodes this model is deployed on. A node is an abstraction of a machine resource, which can handle online prediction QPS as given in the node_qps field.
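A sketch of how the budget and model type fields fit together; `8_000` milli node hours requests the 8-node-hour minimum for the default `cloud` type:

```python
# Values are milli node hours: 1,000 milli node hours = 1 node hour.
metadata = automl.ImageClassificationModelMetadata(
    train_budget_milli_node_hours=8_000,  # 8 node hours, the cloud minimum
    model_type="cloud",
)
```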
Input configuration of an [Image][google.cloud.automl.v1.Image].
Used in:
The Google Cloud Storage location of the image file. Only a single path should be given.
Annotation details for image object detection.
Used in:
Output only. The rectangle representing the object location.
Output only. The confidence that this annotation is positive for the parent example, value in [0, 1], higher means higher positivity confidence.
Dataset metadata specific to image object detection.
Used in:
(message has no fields)
Model evaluation metrics for image object detection problems. Evaluates prediction quality of labeled bounding boxes.
Used in:
Output only. The total number of bounding boxes (i.e. summed over all images) the ground truth used to create this evaluation had.
Output only. The bounding boxes match metrics for each Intersection-over-union threshold 0.05,0.10,...,0.95,0.96,0.97,0.98,0.99 and each label confidence threshold 0.05,0.10,...,0.95,0.96,0.97,0.98,0.99 pair.
Output only. The single metric for bounding boxes evaluation: the mean_average_precision averaged over all bounding_box_metrics_entries.
Model deployment metadata specific to Image Object Detection.
Used in:
Input only. The number of nodes to deploy the model on. A node is an abstraction of a machine resource, which can handle online prediction QPS as given in the model's [qps_per_node][google.cloud.automl.v1.ImageObjectDetectionModelMetadata.qps_per_node]. Must be between 1 and 100, inclusive on both ends.
Model metadata specific to image object detection.
Used in:
Optional. Type of the model. The available values are:
* `cloud-high-accuracy-1` - (default) A model to be used via prediction calls to AutoML API. Expected to have a higher latency, but should also have a higher prediction quality than other models.
* `cloud-low-latency-1` - A model to be used via prediction calls to AutoML API. Expected to have low latency, but may have lower prediction quality than other models.
Output only. The number of nodes this model is deployed on. A node is an abstraction of a machine resource, which can handle online prediction QPS as given in the qps_per_node field.
Output only. An approximate number of online prediction QPS that can be supported by this model per each node on which it is deployed.
Output only. The reason that this create model operation stopped, e.g. `BUDGET_REACHED`, `MODEL_CONVERGED`.
The train budget of creating this model, expressed in milli node hours, i.e. a value of 1,000 in this field means 1 node hour. The actual `train_cost` will be equal to or less than this value. If further model training ceases to provide any improvements, it will stop without using the full budget and the stop_reason will be `MODEL_CONVERGED`. Note: node_hour = actual_hour * number_of_nodes_involved. For model types `cloud-high-accuracy-1` (default) and `cloud-low-latency-1`, the train budget must be between 20,000 and 900,000 milli node hours, inclusive. The default value is 216,000, which represents one day in wall time. For model types `mobile-low-latency-1`, `mobile-versatile-1`, `mobile-high-accuracy-1`, `mobile-core-ml-low-latency-1`, `mobile-core-ml-versatile-1`, `mobile-core-ml-high-accuracy-1`, the train budget must be between 1,000 and 100,000 milli node hours, inclusive. The default value is 24,000, which represents one day in wall time.
Output only. The actual train cost of creating this model, expressed in milli node hours, i.e. 1,000 value in this field means 1 node hour. Guaranteed to not exceed the train budget.
Details of ImportData operation.
Used in:
(message has no fields)
Input configuration for the [AutoMl.ImportData][google.cloud.automl.v1.AutoMl.ImportData] action. The format of the input depends on the `dataset_metadata` of the Dataset into which the import is happening. As the input source, [gcs_source][google.cloud.automl.v1.InputConfig.gcs_source] is expected, unless specified otherwise. Additionally, any input .CSV file by itself must be 100MB or smaller, unless specified otherwise. If an "example" file (that is, image, video etc.) with identical content (even if it had a different `GCS_FILE_PATH`) is mentioned multiple times, then its label, bounding boxes etc. are appended. The same file should always be provided with the same `ML_USE` and `GCS_FILE_PATH`; if it is not, then these values are nondeterministically selected from the given ones. The formats are represented in EBNF with commas being literal and with non-terminal symbols defined near the end of this comment. The formats are:

**AutoML Vision**

*Classification:* See [Preparing your training data](https://cloud.google.com/vision/automl/docs/prepare) for more information. CSV file(s) with each line in format: ML_USE,GCS_FILE_PATH,LABEL,LABEL,...
* `ML_USE` - Identifies the data set that the current row (file) applies to. This value can be one of the following:
  * `TRAIN` - Rows in this file are used to train the model.
  * `TEST` - Rows in this file are used to test the model during training.
  * `UNASSIGNED` - Rows in this file are not categorized. They are automatically divided into train and test data: 80% for training and 20% for testing.
* `GCS_FILE_PATH` - The Google Cloud Storage location of an image of up to 30MB in size. Supported extensions: .JPEG, .GIF, .PNG, .WEBP, .BMP, .TIFF, .ICO.
* `LABEL` - A label that identifies the object in the image. For the `MULTICLASS` classification type, at most one `LABEL` is allowed per image. If an image has not yet been labeled, then it should be mentioned just once with no `LABEL`.
Some sample rows:
TRAIN,gs://folder/image1.jpg,daisy
TEST,gs://folder/image2.jpg,dandelion,tulip,rose
UNASSIGNED,gs://folder/image3.jpg,daisy
UNASSIGNED,gs://folder/image4.jpg

*Object Detection:* See [Preparing your training data](https://cloud.google.com/vision/automl/object-detection/docs/prepare) for more information. A CSV file(s) with each line in format: ML_USE,GCS_FILE_PATH,[LABEL],(BOUNDING_BOX | ,,,,,,,)
* `ML_USE` - Identifies the data set that the current row (file) applies to, with the same `TRAIN`/`TEST`/`UNASSIGNED` values as described for Classification above.
* `GCS_FILE_PATH` - The Google Cloud Storage location of an image of up to 30MB in size. Supported extensions: .JPEG, .GIF, .PNG. Each image is assumed to be exhaustively labeled.
* `LABEL` - A label that identifies the object in the image specified by the `BOUNDING_BOX`.
* `BOUNDING_BOX` - The vertices of an object in the example image. The minimum allowed `BOUNDING_BOX` edge length is 0.01, and no more than 500 `BOUNDING_BOX` instances per image are allowed (one `BOUNDING_BOX` per line). If an image has none of the looked-for objects, then it should be mentioned just once with no LABEL and the ",,,,,,," in place of the `BOUNDING_BOX`.
**Four sample rows:**
TRAIN,gs://folder/image1.png,car,0.1,0.1,,,0.3,0.3,,
TRAIN,gs://folder/image1.png,bike,.7,.6,,,.8,.9,,
UNASSIGNED,gs://folder/im2.png,car,0.1,0.1,0.2,0.1,0.2,0.3,0.1,0.3
TEST,gs://folder/im3.png,,,,,,,,,

**AutoML Natural Language**

*Entity Extraction:* See [Preparing your training data](/natural-language/automl/entity-analysis/docs/prepare) for more information. One or more CSV file(s) with each line in the following format: ML_USE,GCS_FILE_PATH
* `ML_USE` - Identifies the data set that the current row (file) applies to. This value can be one of the following:
  * `TRAIN` - Rows in this file are used to train the model.
  * `TEST` - Rows in this file are used to test the model during training.
  * `UNASSIGNED` - Rows in this file are not categorized. They are automatically divided into train and test data: 80% for training and 20% for testing.
* `GCS_FILE_PATH` - Identifies a JSON Lines (.JSONL) file stored in Google Cloud Storage that contains in-line text or documents for model training.
After the training data set has been determined from the `TRAIN` and `UNASSIGNED` CSV files, the training data is divided into train and validation data sets: 70% for training and 30% for validation. For example:
TRAIN,gs://folder/file1.jsonl
VALIDATE,gs://folder/file2.jsonl
TEST,gs://folder/file3.jsonl

**In-line JSONL files** In-line .JSONL files contain, per line, a JSON document that wraps a [`text_snippet`][google.cloud.automl.v1.TextSnippet] field followed by one or more [`annotations`][google.cloud.automl.v1.AnnotationPayload] fields, which have `display_name` and `text_extraction` fields to describe the entity from the text snippet. Multiple JSON documents can be separated using line breaks (\n). The supplied text must be annotated exhaustively. For example, if you include the text "horse", but do not label it as "animal", then "horse" is assumed to not be an "animal". Any given text snippet content must have 30,000 characters or less, and also be UTF-8 NFC encoded. ASCII is accepted as it is UTF-8 NFC encoded. For example:
{ "text_snippet": { "content": "dog car cat" }, "annotations": [ { "display_name": "animal", "text_extraction": { "text_segment": {"start_offset": 0, "end_offset": 2} } }, { "display_name": "vehicle", "text_extraction": { "text_segment": {"start_offset": 4, "end_offset": 6} } }, { "display_name": "animal", "text_extraction": { "text_segment": {"start_offset": 8, "end_offset": 10} } } ] }\n
{ "text_snippet": { "content": "This dog is good." }, "annotations": [ { "display_name": "animal", "text_extraction": { "text_segment": {"start_offset": 5, "end_offset": 7} } } ] }

**JSONL files that reference documents** .JSONL files contain, per line, a JSON document that wraps an `input_config` that contains the path to a source PDF document. Multiple JSON documents can be separated using line breaks (\n). For example:
{ "document": { "input_config": { "gcs_source": { "input_uris": [ "gs://folder/document1.pdf" ] } } } }\n
{ "document": { "input_config": { "gcs_source": { "input_uris": [ "gs://folder/document2.pdf" ] } } } }

**In-line JSONL files with PDF layout information**
**Note:** You can only annotate PDF files using the UI. The format described below applies to annotated PDF files exported using the UI or `exportData`.
In-line .JSONL files for PDF documents contain, per line, a JSON document that wraps a `document` field that provides the textual content of the PDF document and the layout information. For example: { "document": { "document_text": { "content": "dog car cat" } "layout": [ { "text_segment": { "start_offset": 0, "end_offset": 11, }, "page_number": 1, "bounding_poly": { "normalized_vertices": [ {"x": 0.1, "y": 0.1}, {"x": 0.1, "y": 0.3}, {"x": 0.3, "y": 0.3}, {"x": 0.3, "y": 0.1}, ], }, "text_segment_type": TOKEN, } ], "document_dimensions": { "width": 8.27, "height": 11.69, "unit": INCH, } "page_count": 3, }, "annotations": [ { "display_name": "animal", "text_extraction": { "text_segment": {"start_offset": 0, "end_offset": 3} } }, { "display_name": "vehicle", "text_extraction": { "text_segment": {"start_offset": 4, "end_offset": 7} } }, { "display_name": "animal", "text_extraction": { "text_segment": {"start_offset": 8, "end_offset": 11} } }, ], </section><section><h5>Classification</h5> See [Preparing your training data](https://cloud.google.com/natural-language/automl/docs/prepare) for more information. One or more CSV file(s) with each line in the following format: ML_USE,(TEXT_SNIPPET | GCS_FILE_PATH),LABEL,LABEL,... * `ML_USE` - Identifies the data set that the current row (file) applies to. This value can be one of the following: * `TRAIN` - Rows in this file are used to train the model. * `TEST` - Rows in this file are used to test the model during training. * `UNASSIGNED` - Rows in this file are not categorized. They are Automatically divided into train and test data. 80% for training and 20% for testing. * `TEXT_SNIPPET` and `GCS_FILE_PATH` are distinguished by a pattern. If the column content is a valid Google Cloud Storage file path, that is, prefixed by "gs://", it is treated as a `GCS_FILE_PATH`. Otherwise, if the content is enclosed in double quotes (""), it is treated as a `TEXT_SNIPPET`. For `GCS_FILE_PATH`, the path must lead to a file with supported extension and UTF-8 encoding, for example, "gs://folder/content.txt" AutoML imports the file content as a text snippet. For `TEXT_SNIPPET`, AutoML imports the column content excluding quotes. In both cases, size of the content must be 10MB or less in size. For zip files, the size of each file inside the zip must be 10MB or less in size. For the `MULTICLASS` classification type, at most one `LABEL` is allowed. The `ML_USE` and `LABEL` columns are optional. Supported file extensions: .TXT, .PDF, .ZIP A maximum of 100 unique labels are allowed per CSV row. Sample rows: TRAIN,"They have bad food and very rude",RudeService,BadFood gs://folder/content.txt,SlowService TEST,gs://folder/document.pdf VALIDATE,gs://folder/text_files.zip,BadFood </section><section><h5>Sentiment Analysis</h5> See [Preparing your training data](https://cloud.google.com/natural-language/automl/docs/prepare) for more information. CSV file(s) with each line in format: ML_USE,(TEXT_SNIPPET | GCS_FILE_PATH),SENTIMENT * `ML_USE` - Identifies the data set that the current row (file) applies to. This value can be one of the following: * `TRAIN` - Rows in this file are used to train the model. * `TEST` - Rows in this file are used to test the model during training. * `UNASSIGNED` - Rows in this file are not categorized. They are Automatically divided into train and test data. 80% for training and 20% for testing. * `TEXT_SNIPPET` and `GCS_FILE_PATH` are distinguished by a pattern. 
  If the column content is a valid Google Cloud Storage file path, that is, prefixed by "gs://", it is treated as a `GCS_FILE_PATH`. Otherwise, if the content is enclosed in double quotes (""), it is treated as a `TEXT_SNIPPET`. For `GCS_FILE_PATH`, the path must lead to a file with a supported extension and UTF-8 encoding, for example, "gs://folder/content.txt"; AutoML imports the file content as a text snippet. For `TEXT_SNIPPET`, AutoML imports the column content excluding quotes. In both cases, the content must be 128kB or less in size. For zip files, each file inside the zip must be 128kB or less in size. The `ML_USE` and `SENTIMENT` columns are optional. Supported file extensions: .TXT, .PDF, .ZIP.
* `SENTIMENT` - An integer between 0 and Dataset.text_sentiment_dataset_metadata.sentiment_max (inclusive). Describes the ordinal of the sentiment: a higher value means a more positive sentiment. All the values are completely relative, i.e. neither 0 needs to mean a negative or neutral sentiment nor sentiment_max a positive one; it is only required that 0 is the least positive sentiment in the data and sentiment_max the most positive one. The SENTIMENT shouldn't be confused with "score" or "magnitude" from the previous Natural Language Sentiment Analysis API. All SENTIMENT values between 0 and sentiment_max must be represented in the imported data. On prediction the same 0 to sentiment_max range is used. The difference between neighboring sentiment values need not be uniform, e.g. 1 and 2 may be similar whereas the difference between 2 and 3 may be large.
Sample rows:
TRAIN,"@freewrytin this is way too good for your product",2
gs://folder/content.txt,3
TEST,gs://folder/document.pdf
VALIDATE,gs://folder/text_files.zip,2

**Input field definitions:**
`ML_USE` : ("TRAIN" | "VALIDATE" | "TEST" | "UNASSIGNED") Describes how the given example (file) should be used for model training. "UNASSIGNED" can be used when the user has no preference.
`GCS_FILE_PATH` : The path to a file on Google Cloud Storage. For example, "gs://folder/image1.png".
`LABEL` : A display name of an object on an image, video etc., e.g. "dog". Must be up to 32 characters long and can consist only of ASCII Latin letters A-Z and a-z, underscores (_), and ASCII digits 0-9. For each label an AnnotationSpec is created whose display_name becomes the label; AnnotationSpecs are given back in predictions.
`BOUNDING_BOX` : (`VERTEX,VERTEX,VERTEX,VERTEX` | `VERTEX,,,VERTEX,,`) A rectangle parallel to the frame of the example (image, video). If 4 vertices are given, they are connected by edges in the order provided; if 2 are given, they are recognized as diagonally opposite vertices of the rectangle.
`VERTEX` : (`COORDINATE,COORDINATE`) The first coordinate is horizontal (x), the second is vertical (y).
`COORDINATE` : A float in the 0 to 1 range, relative to the total length of the image or video in the given dimension. For fractions the leading non-decimal 0 can be omitted (i.e. 0.3 = .3). Point 0,0 is in the top left.
`TEXT_SNIPPET` : The content of a text snippet, UTF-8 encoded, enclosed within double quotes ("").
`DOCUMENT` : A field that provides the textual content of a document together with its layout information.

**Errors:**
If any of the provided CSV files can't be parsed, or if more than a certain percentage of CSV rows cannot be processed, the operation fails and nothing is imported. Regardless of overall success or failure, the per-row failures, up to a certain count cap, are listed in Operation.metadata.partial_failures.
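To make the row format concrete, here is a minimal sketch (plain Python, no client library) that writes a sentiment-analysis import CSV following the rules above; the file name, snippets, and sentiment values are illustrative placeholders:

```python
# A minimal sketch of building an import CSV by hand, so that quoting matches
# the spec: inline snippets in double quotes, GCS paths bare, optional columns
# simply omitted. All values below are placeholders.
rows = [
    ("TRAIN", '"@freewrytin this is way too good for your product"', "2"),
    ("TEST", "gs://folder/document.pdf", None),  # ML_USE set, SENTIMENT omitted
    ("UNASSIGNED", '"the delivery was fine"', "1"),
]

with open("text_sentiment_import.csv", "w", encoding="utf-8") as f:
    for ml_use, content, sentiment in rows:
        cols = [ml_use, content] + ([sentiment] if sentiment is not None else [])
        f.write(",".join(cols) + "\n")
```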
Used in:
The source of the input.
The Google Cloud Storage location for the input content. For [AutoMl.ImportData][google.cloud.automl.v1.AutoMl.ImportData], `gcs_source` points to a CSV file with a structure described in [InputConfig][google.cloud.automl.v1.InputConfig].
Additional domain-specific parameters describing the semantics of the imported data; any string must be up to 25000 characters long.
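For illustration, such an `InputConfig` could be assembled as a plain dictionary like the sketch below; the bucket and file name are hypothetical:

```python
# Hypothetical bucket and file name; gcs_source must point at an import CSV
# with the structure described above.
input_config = {
    "gcs_source": {
        "input_uris": ["gs://my-bucket/text_sentiment_import.csv"],
    },
    # Optional domain-specific parameters; any string value is capped at
    # 25000 characters.
    "params": {},
}
```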
API proto representing a trained machine learning model.
Used as response type in: AutoMl.GetModel, AutoMl.UpdateModel
Used as field type in:
Required. The model metadata that is specific to the problem type. Must match the metadata type of the dataset used to train the model.
Metadata for translation models.
Metadata for image classification models.
Metadata for text classification models.
Metadata for image object detection models.
Metadata for text extraction models.
Metadata for text sentiment models.
Output only. Resource name of the model. Format: `projects/{project_id}/locations/{location_id}/models/{model_id}`
Required. The name of the model to show in the interface. The name can be up to 32 characters long and can consist only of ASCII Latin letters A-Z and a-z, underscores (_), and ASCII digits 0-9. It must start with a letter.
Required. The resource ID of the dataset used to create the model. The dataset must come from the same ancestor project and location.
Output only. Timestamp when the model training finished; from then on the model can be used for prediction.
Output only. Timestamp when this model was last updated.
Output only. Deployment state of the model. A model can only serve prediction requests after it gets deployed.
Used to perform consistent read-modify-write updates. If not set, a blind "overwrite" update happens.
Optional. The labels with user-defined metadata to organize your model. Label keys and values can be no longer than 64 characters (Unicode codepoints), can only contain lowercase letters, numeric characters, underscores and dashes. International characters are allowed. Label values are optional. Label keys must start with a letter. See https://goo.gl/xmQnxf for more information on and examples of labels.
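Pulling these field constraints together, a model payload for creation might look like the following sketch; the display name, dataset ID, and labels are placeholders:

```python
# Placeholder values. display_name must start with a letter, be at most 32
# characters long, and contain only A-Z, a-z, underscores, and 0-9.
model = {
    "display_name": "sentiment_model_v1",
    # Dataset from the same ancestor project and location.
    "dataset_id": "TST1234567890123456789",
    # Must match the problem type of the dataset used for training.
    "text_sentiment_model_metadata": {},
    # Optional user-defined organization labels.
    "labels": {"team": "nlp", "env": "dev"},
}
```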
Deployment state of the model.
Used in:
Should not be used; an unset enum has this value by default.
Model is deployed.
Model is not deployed.
Evaluation results of a model.
Used as response type in: AutoMl.GetModelEvaluation
Used as field type in:
Output only. Problem type specific evaluation metrics.
Model evaluation metrics for image, text classification.
Model evaluation metrics for translation.
Model evaluation metrics for image object detection.
Evaluation metrics for text sentiment models.
Evaluation metrics for text extraction models.
Output only. Resource name of the model evaluation. Format: `projects/{project_id}/locations/{location_id}/models/{model_id}/modelEvaluations/{model_evaluation_id}`
Output only. The ID of the annotation spec that the model evaluation applies to. The ID is empty for the overall model evaluation.
Output only. The value of [display_name][google.cloud.automl.v1.AnnotationSpec.display_name] at the moment when the model was trained. Because this field returns a value at model training time, the values may differ for different models trained from the same dataset, since display names could have been changed between the two models' trainings.
Output only. Timestamp when this model evaluation was created.
Output only. The number of examples used for model evaluation, i.e. for which ground truth from time of model creation is compared against the predicted annotations created by the model. For overall ModelEvaluation (i.e. with annotation_spec_id not set) this is the total number of all examples used for evaluation. Otherwise, this is the count of examples that according to the ground truth were annotated by the [annotation_spec_id][google.cloud.automl.v1.ModelEvaluation.annotation_spec_id].
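To make the overall-versus-per-label distinction concrete, the sketch below separates the two kinds of evaluations; the dict-shaped records and counts are hypothetical:

```python
# Hypothetical evaluation records mirroring the fields described above.
evaluations = [
    {"annotation_spec_id": "", "evaluated_example_count": 600},     # overall
    {"annotation_spec_id": "123", "evaluated_example_count": 220},  # one label
]

# The overall ModelEvaluation is the one whose annotation_spec_id is empty.
overall = next(e for e in evaluations if not e["annotation_spec_id"])
per_label = [e for e in evaluations if e["annotation_spec_id"]]
```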
Output configuration for ModelExport Action.
Used in:
The destination of the output.
Required. The Google Cloud Storage location where the model is to be written to. This location may only be set for the following model formats: "tflite", "edgetpu_tflite", "tf_saved_model", "tf_js", "core_ml". Under the directory given as the destination, a new one will be created with the name "model-export-<model-display-name>-<timestamp-of-export-call>", where the timestamp is in YYYY-MM-DDThh:mm:ss.sssZ ISO-8601 format. Inside it, the model and any of its supporting files will be written.
The format in which the model must be exported. The available, and default, formats depend on the problem and model type (if a given problem and type combination doesn't have a format listed, its models are not exportable):
* For Image Classification mobile-low-latency-1, mobile-versatile-1, mobile-high-accuracy-1: "tflite" (default), "edgetpu_tflite", "tf_saved_model", "tf_js".
* For Image Classification mobile-core-ml-low-latency-1, mobile-core-ml-versatile-1, mobile-core-ml-high-accuracy-1: "core_ml" (default).
* For Image Object Detection mobile-low-latency-1, mobile-versatile-1, mobile-high-accuracy-1: "tflite", "tf_saved_model", "tf_js".
Formats description:
* tflite - Used for Android mobile devices.
* edgetpu_tflite - Used for [Edge TPU](https://cloud.google.com/edge-tpu/) devices.
* tf_saved_model - A TensorFlow model in SavedModel format.
* tf_js - A [TensorFlow.js](https://www.tensorflow.org/js) model that can be used in the browser and in Node.js using JavaScript.
* core_ml - Used for iOS mobile devices.
Additional model-type and format specific parameters describing the requirements for the to be exported model files, any string must be up to 25000 characters long.
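As a sketch, a model export configuration might be written as the following dictionary; the bucket path is a placeholder and the format must be one allowed for the model's type:

```python
# Placeholder destination. model_format must be one of the formats listed
# above for the given model type, e.g. "tflite" for mobile image models.
output_config = {
    "gcs_destination": {"output_uri_prefix": "gs://my-bucket/exports/"},
    "model_format": "tflite",
}
```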
A vertex represents a 2D point in the image. The normalized vertex coordinates are between 0 to 1 fractions relative to the original plane (image, video). E.g. if the plane (e.g. whole image) would have size 10 x 20 then a point with normalized coordinates (0.1, 0.3) would be at the position (1, 6) on that plane.
Used in:
Required. Horizontal coordinate.
Required. Vertical coordinate.
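The denormalization in the example above is a plain multiplication by the plane size, as in this small helper (hypothetical function name):

```python
def to_absolute(x: float, y: float, width: float, height: float):
    """Convert normalized vertex coordinates to absolute positions."""
    return (x * width, y * height)

# The example from above: on a 10 x 20 plane, (0.1, 0.3) maps to (1, 6).
assert to_absolute(0.1, 0.3, 10, 20) == (1.0, 6.0)
```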
Metadata used across all long running operations returned by AutoML API.
Output only. Details of the specific operation. Even if this field is empty, its presence allows different types of operations to be distinguished.
Details of a Delete operation.
Details of a DeployModel operation.
Details of an UndeployModel operation.
Details of CreateModel operation.
Details of CreateDataset operation.
Details of ImportData operation.
Details of BatchPredict operation.
Details of ExportData operation.
Details of ExportModel operation.
Output only. Progress of operation. Range: [0, 100]. Not used currently.
Output only. Partial failures encountered. E.g. single files that couldn't be read. This field should never exceed 20 entries. Status details field will contain standard GCP error details.
Output only. Time when the operation was created.
Output only. Time when the operation was updated for the last time.
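For instance, after an import operation finishes, its metadata might be inspected for partial failures as in this sketch; the metadata dict and its values are illustrative:

```python
# Illustrative operation metadata mirroring the fields above; partial_failures
# entries carry standard GCP error details (code, message).
metadata = {
    "progress_percent": 100,
    "partial_failures": [
        {"code": 3, "message": "Could not read gs://my-bucket/row_17.txt"},
    ],
    "create_time": "2021-01-01T00:00:00Z",
    "update_time": "2021-01-01T00:05:00Z",
}

# An operation can succeed overall while individual rows fail; surface those.
for failure in metadata.get("partial_failures", []):
    print(f"partial failure ({failure['code']}): {failure['message']}")
```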
Output configuration for ExportData. As the destination, [gcs_destination][google.cloud.automl.v1.OutputConfig.gcs_destination] must be set unless specified otherwise for a domain. If gcs_destination is set, then in the given directory a new directory is created. Its name will be "export_data-<dataset-display-name>-<timestamp-of-export-call>", where the timestamp is in YYYY-MM-DDThh:mm:ss.sssZ ISO-8601 format. Only ground truth annotations are exported (annotations that are not approved are not exported). The outputs correspond to how the data was imported, and may be used as input to import data. The output formats are represented as EBNF with literal commas and the same non-terminal symbol definitions as in import data's [InputConfig][google.cloud.automl.v1.InputConfig]:
* For Image Classification: CSV file(s) `image_classification_1.csv`, `image_classification_2.csv`, ..., `image_classification_N.csv` with each line in format: ML_USE,GCS_FILE_PATH,LABEL,LABEL,... where GCS_FILE_PATHs point at the original, source locations of the imported images. For the MULTICLASS classification type, there can be at most one LABEL per example.
* For Image Object Detection: CSV file(s) `image_object_detection_1.csv`, `image_object_detection_2.csv`, ..., `image_object_detection_N.csv` with each line in format: ML_USE,GCS_FILE_PATH,[LABEL],(BOUNDING_BOX | ,,,,,,,) where GCS_FILE_PATHs point at the original, source locations of the imported images.
* For Text Classification: In the created directory CSV file(s) `text_classification_1.csv`, `text_classification_2.csv`, ..., `text_classification_N.csv` will be created, where N depends on the total number of examples exported. Each line in the CSV is of the format: ML_USE,GCS_FILE_PATH,LABEL,LABEL,... where GCS_FILE_PATHs point at the exported .txt files containing the text content of the imported example. For the MULTICLASS classification type, there will be at most one LABEL per example.
* For Text Sentiment: In the created directory CSV file(s) `text_sentiment_1.csv`, `text_sentiment_2.csv`, ..., `text_sentiment_N.csv` will be created, where N depends on the total number of examples exported. Each line in the CSV is of the format: ML_USE,GCS_FILE_PATH,SENTIMENT where GCS_FILE_PATHs point at the exported .txt files containing the text content of the imported example.
* For Text Extraction: CSV file `text_extraction.csv`, with each line in format: ML_USE,GCS_FILE_PATH where GCS_FILE_PATH leads to a .JSONL (i.e. JSON Lines) file which contains, per line, a proto that wraps a TextSnippet proto (in JSON representation) followed by AnnotationPayload protos (called annotations). If documents had initially been imported, the JSONL will point at the original, source locations of the imported documents.
* For Translation: CSV file `translation.csv`, with each line in format: ML_USE,GCS_FILE_PATH where GCS_FILE_PATH leads to a .TSV file which describes examples that have the given ML_USE, using the following row format per line: TEXT_SNIPPET (in source language) \t TEXT_SNIPPET (in target language)
Used in:
The destination of the output.
Required. The Google Cloud Storage location where the output is to be written to. For Image Object Detection and Text Extraction, a new directory will be created inside the given directory with the name export_data-<dataset-display-name>-<timestamp-of-export-call>, where the timestamp is in YYYY-MM-DDThh:mm:ss.sssZ ISO-8601 format. All export output will be written into that directory.
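As an illustration of the export layout, the sketch below reads back one of the exported text classification CSVs described above; the local file name assumes the export directory has already been downloaded:

```python
import csv

# After downloading an export directory such as
# export_data-<dataset-display-name>-<timestamp>/ (pattern described above),
# each text_classification_N.csv row is: ML_USE,GCS_FILE_PATH,LABEL,LABEL,...
with open("text_classification_1.csv", encoding="utf-8") as f:
    for row in csv.reader(f):
        ml_use, gcs_file_path, labels = row[0], row[1], row[2:]
        print(ml_use, gcs_file_path, labels)
```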
Dataset metadata for classification.
Used in:
Required. Type of the classification problem.
Model metadata that is specific to text classification.
Used in:
Output only. Classification type of the dataset used to train this model.
Annotation for identifying spans of text.
Used in:
Required. Text extraction annotations can either be a text segment or a text relation.
An entity annotation will set this, which is the part of the original text to which the annotation pertains.
Output only. A confidence estimate between 0.0 and 1.0. A higher value means greater confidence in correctness of the annotation.
Dataset metadata that is specific to text extraction.
Used in:
(message has no fields)
Model evaluation metrics for text extraction problems.
Used in:
Output only. The Area under precision recall curve metric.
Output only. Metrics that have confidence thresholds. Precision-recall curve can be derived from it.
Metrics for a single confidence threshold.
Used in:
Output only. The confidence threshold value used to compute the metrics. Only annotations with score of at least this threshold are considered to be ones the model would return.
Output only. Recall under the given confidence threshold.
Output only. Precision under the given confidence threshold.
Output only. The harmonic mean of recall and precision.
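The harmonic mean referenced above is the usual F1 computation; a one-function sketch:

```python
def f1_score(precision: float, recall: float) -> float:
    """Harmonic mean of precision and recall; 0.0 when both are zero."""
    denominator = precision + recall
    return 2 * precision * recall / denominator if denominator else 0.0

# e.g. precision 0.8 and recall 0.6 give an F1 of about 0.686.
print(f1_score(0.8, 0.6))
```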
Model metadata that is specific to text extraction.
Used in:
(message has no fields)
A contiguous part of a text (string), assuming it has a UTF-8 NFC encoding.
Used in:
Output only. The content of the TextSegment.
Required. Zero-based character index of the first character of the text segment (counting characters from the beginning of the text).
Required. Zero-based character index of the first character past the end of the text segment (counting characters from the beginning of the text). The character at the end_offset is NOT included in the text segment.
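These offsets behave like string slice indices: the start is inclusive and the end exclusive. Using the "dog car cat" content from the import example above:

```python
content = "dog car cat"

# start_offset is inclusive and end_offset exclusive, exactly like a slice.
assert content[0:3] == "dog"    # the "animal" annotation
assert content[4:7] == "car"    # the "vehicle" annotation
assert content[8:11] == "cat"   # the "animal" annotation
```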
Contains annotation details specific to text sentiment.
Used in:
Output only. The sentiment, with the same semantics as given to [AutoMl.ImportData][google.cloud.automl.v1.AutoMl.ImportData] when populating the dataset from which the model used for the prediction had been trained. The sentiment values are between 0 and Dataset.text_sentiment_dataset_metadata.sentiment_max (inclusive), with a higher value meaning a more positive sentiment. They are completely relative, i.e. 0 means the least positive sentiment and sentiment_max the most positive one among the sentiments present in the train data. Therefore, e.g. if the train data had only negative sentiment, then sentiment_max would still be negative (although the least negative). The sentiment shouldn't be confused with "score" or "magnitude" from the previous Natural Language Sentiment Analysis API.
Dataset metadata for text sentiment.
Used in:
Required. A sentiment is expressed as an integer ordinal, where higher value means a more positive sentiment. The range of sentiments that will be used is between 0 and sentiment_max (inclusive on both ends), and all the values in the range must be represented in the dataset before a model can be created. sentiment_max value must be between 1 and 10 (inclusive).
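A pre-import sanity check for this completeness requirement might look like the sketch below; sentiment_max and the observed values are placeholders:

```python
sentiment_max = 4           # must be between 1 and 10 (inclusive)
observed = {0, 1, 2, 4}     # placeholder: sentiment values seen in the data

# Every value in [0, sentiment_max] must be represented before model creation.
missing = set(range(sentiment_max + 1)) - observed
if missing:
    print(f"missing sentiment values in training data: {sorted(missing)}")
```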
Model evaluation metrics for text sentiment problems.
Used in:
Output only. Precision.
Output only. Recall.
Output only. The harmonic mean of recall and precision.
Output only. Mean absolute error. Only set for the overall model evaluation, not for evaluation of a single annotation spec.
Output only. Mean squared error. Only set for the overall model evaluation, not for evaluation of a single annotation spec.
Output only. Linear weighted kappa. Only set for the overall model evaluation, not for evaluation of a single annotation spec.
Output only. Quadratic weighted kappa. Only set for the overall model evaluation, not for evaluation of a single annotation spec.
Output only. Confusion matrix of the evaluation. Only set for the overall model evaluation, not for evaluation of a single annotation spec.
Model metadata that is specific to text sentiment.
Used in:
(message has no fields)
A representation of a text snippet.
Used in:
Required. The content of the text snippet as a string. Up to 250000 characters long.
Optional. The format of [content][google.cloud.automl.v1.TextSnippet.content]. Currently the only two allowed values are "text/html" and "text/plain". If left blank, the format is automatically determined from the type of the uploaded [content][google.cloud.automl.v1.TextSnippet.content].
Output only. HTTP URI where you can download the content.
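For illustration, a text snippet might be represented as the following dictionary; the content is a placeholder:

```python
# Placeholder content; at most 250000 characters. mime_type, if set, must be
# "text/plain" or "text/html"; if blank, the format is determined automatically.
text_snippet = {
    "content": "They have bad food and very rude staff.",
    "mime_type": "text/plain",
}
```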
Annotation details specific to translation.
Used in:
Output only. The translated content.
Dataset metadata that is specific to translation.
Used in:
Required. The BCP-47 language code of the source language.
Required. The BCP-47 language code of the target language.
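As a sketch, the dataset metadata for an English-to-Spanish translation dataset would carry two BCP-47 codes; the pair shown is only an example:

```python
# Example language pair; both fields take BCP-47 codes such as "en", "es",
# or "zh-CN".
translation_dataset_metadata = {
    "source_language_code": "en",
    "target_language_code": "es",
}
```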
Evaluation metrics for translation.
Used in:
Output only. BLEU score.
Output only. BLEU score for the base model.
Model metadata that is specific to translation.
Used in:
The resource name of the model to use as a baseline to train the custom model. If unset, we use the default base model provided by Google Translate. Format: `projects/{project_id}/locations/{location_id}/models/{model_id}`
Output only. Inferred from the dataset. The source language (the BCP-47 language code) that is used for training.
Output only. The target language (the BCP-47 language code) that is used for training.
Details of UndeployModel operation.
Used in:
(message has no fields)