package google.cloud.automl.v1beta1

service AutoMl

service.proto:56

AutoML Server API. The resource names are assigned by the server. The server never reuses names that it has created after the resources with those names are deleted. An ID of a resource is the last element of the item's resource name. For example, for `projects/{project_id}/locations/{location_id}/datasets/{dataset_id}`, the ID of the item is `{dataset_id}`. Currently the only supported `location_id` is "us-central1". On any input that is documented to expect a string parameter in snake_case or kebab-case, either of those cases is accepted.
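
As an illustration of this naming scheme, a minimal Python sketch (the project and dataset IDs are hypothetical; the commented-out path helper assumes the google-cloud-automl client library):

```python
# The resource name pattern documented above, composed by hand.
project_id = "my-project"     # hypothetical
location_id = "us-central1"   # currently the only supported location
dataset_id = "TBL1234567890"  # hypothetical, server-assigned

name = f"projects/{project_id}/locations/{location_id}/datasets/{dataset_id}"

# The ID of a resource is the last element of its resource name.
assert name.split("/")[-1] == dataset_id

# With the client library this would typically be (assumed path helper):
# from google.cloud import automl_v1beta1
# name = automl_v1beta1.AutoMlClient.dataset_path(project_id, location_id, dataset_id)
```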

service PredictionService

prediction_service.proto:39

AutoML Prediction API. On any input that is documented to expect a string parameter in snake_case or kebab-case, either of those cases is accepted.
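
A minimal sketch of an online Predict call, assuming the google-cloud-automl Python client (exact method signatures vary between client versions, and the model name is hypothetical):

```python
from google.cloud import automl_v1beta1

client = automl_v1beta1.PredictionServiceClient()

# Hypothetical model resource name, following the documented naming scheme.
model_name = "projects/my-project/locations/us-central1/models/TCN1234567890"

# The payload mirrors ExamplePayload; here, a TextSnippet.
payload = {"text_snippet": {"content": "A sample sentence.", "mime_type": "text/plain"}}

response = client.predict(name=model_name, payload=payload)
for annotation in response.payload:  # repeated AnnotationPayload
    print(annotation.display_name)
```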

message AnnotationPayload

annotation_payload.proto:36

Contains annotation information that is relevant to AutoML.

Used in: PredictResponse

message ArrayStats

data_stats.proto:131

The data statistics of a series of ARRAY values.

Used in: DataStats

message BatchPredictInputConfig

io.proto:622

Input configuration for BatchPredict Action. The format of the input depends on the ML problem of the model used for prediction. As input source the [gcs_source][google.cloud.automl.v1beta1.InputConfig.gcs_source] is expected, unless specified otherwise. The formats are represented in EBNF with commas being literal and with non-terminal symbols defined near the end of this comment. The formats are:

* For Image Classification: CSV file(s) with each line having just a single column: GCS_FILE_PATH, which leads to an image of up to 30MB in size. Supported extensions: .JPEG, .GIF, .PNG. This path is treated as the ID in the batch predict output. Three sample rows: gs://folder/image1.jpeg gs://folder/image2.gif gs://folder/image3.png

* For Image Object Detection: CSV file(s) with each line having just a single column: GCS_FILE_PATH, which leads to an image of up to 30MB in size. Supported extensions: .JPEG, .GIF, .PNG. This path is treated as the ID in the batch predict output. Three sample rows: gs://folder/image1.jpeg gs://folder/image2.gif gs://folder/image3.png

* For Video Classification: CSV file(s) with each line in format: GCS_FILE_PATH,TIME_SEGMENT_START,TIME_SEGMENT_END GCS_FILE_PATH leads to a video of up to 50GB in size and up to 3h in duration. Supported extensions: .MOV, .MPEG4, .MP4, .AVI. TIME_SEGMENT_START and TIME_SEGMENT_END must be within the length of the video, and the end has to be after the start. Three sample rows: gs://folder/video1.mp4,10,40 gs://folder/video1.mp4,20,60 gs://folder/vid2.mov,0,inf

* For Video Object Tracking: CSV file(s) with each line in format: GCS_FILE_PATH,TIME_SEGMENT_START,TIME_SEGMENT_END GCS_FILE_PATH leads to a video of up to 50GB in size and up to 3h in duration. Supported extensions: .MOV, .MPEG4, .MP4, .AVI. TIME_SEGMENT_START and TIME_SEGMENT_END must be within the length of the video, and the end has to be after the start. Three sample rows: gs://folder/video1.mp4,10,240 gs://folder/video1.mp4,300,360 gs://folder/vid2.mov,0,inf

* For Text Classification: CSV file(s) with each line having just a single column: GCS_FILE_PATH | TEXT_SNIPPET Any given text file can be up to 128kB in size. Any given text snippet content must have 60,000 characters or less. Three sample rows: gs://folder/text1.txt "Some text content to predict" gs://folder/text3.pdf Supported file extensions: .txt, .pdf

* For Text Sentiment: CSV file(s) with each line having just a single column: GCS_FILE_PATH | TEXT_SNIPPET Any given text file can be up to 128kB in size. Any given text snippet content must have 500 characters or less. Three sample rows: gs://folder/text1.txt "Some text content to predict" gs://folder/text3.pdf Supported file extensions: .txt, .pdf

* For Text Extraction: .JSONL (i.e. JSON Lines) file(s) which either provide the text in-line or as documents (for a single BatchPredict call only one of these formats may be used). The in-line .JSONL file(s) contain, per line, a proto that wraps a temporary user-assigned TextSnippet ID (string up to 2000 characters long) called "id", a TextSnippet proto (in JSON representation) and zero or more TextFeature protos. Any given text snippet content must have 30,000 characters or less, and also be UTF-8 NFC encoded (ASCII already is). The IDs provided should be unique. The document .JSONL file(s) contain, per line, a proto that wraps a Document proto with input_config set. Only PDF documents are supported now, and each document must be no larger than 2MB. Any given .JSONL file must be 100MB or smaller, and no more than 20 files may be given. Sample in-line JSON Lines file (presented here with artificial line breaks, but the only actual line break is denoted by \n): { "id": "my_first_id", "text_snippet": { "content": "dog car cat"}, "text_features": [ { "text_segment": {"start_offset": 4, "end_offset": 6}, "structural_type": PARAGRAPH, "bounding_poly": { "normalized_vertices": [ {"x": 0.1, "y": 0.1}, {"x": 0.1, "y": 0.3}, {"x": 0.3, "y": 0.3}, {"x": 0.3, "y": 0.1} ] } } ] }\n { "id": "2", "text_snippet": { "content": "An elaborate content", "mime_type": "text/plain" } } Sample document JSON Lines file (presented here with artificial line breaks, but the only actual line break is denoted by \n): { "document": { "input_config": { "gcs_source": { "input_uris": [ "gs://folder/document1.pdf" ] } } } }\n { "document": { "input_config": { "gcs_source": { "input_uris": [ "gs://folder/document2.pdf" ] } } } }

* For Tables: Either [gcs_source][google.cloud.automl.v1beta1.InputConfig.gcs_source] or [bigquery_source][google.cloud.automl.v1beta1.InputConfig.bigquery_source]. GCS case: CSV file(s), each by itself 10GB or smaller, with a total size of 100GB or smaller, where the first file must have a header containing column names. If the first row of a subsequent file is the same as the header, then it is also treated as a header. All other rows contain values for the corresponding columns. The column names must contain the model's [input_feature_column_specs'][google.cloud.automl.v1beta1.TablesModelMetadata.input_feature_column_specs] [display_name-s][google.cloud.automl.v1beta1.ColumnSpec.display_name] (order doesn't matter). The columns corresponding to the model's input feature column specs must contain values compatible with the column spec's data types. Prediction on all the rows, i.e. the CSV lines, will be attempted. For FORECASTING [prediction_type][google.cloud.automl.v1beta1.TablesModelMetadata.prediction_type]: all columns having [TIME_SERIES_AVAILABLE_PAST_ONLY][google.cloud.automl.v1beta1.ColumnSpec.ForecastingMetadata.ColumnType] type will be ignored. First three sample rows of a CSV file: "First Name","Last Name","Dob","Addresses" "John","Doe","1968-01-22","[{"status":"current","address":"123_First_Avenue","city":"Seattle","state":"WA","zip":"11111","numberOfYears":"1"},{"status":"previous","address":"456_Main_Street","city":"Portland","state":"OR","zip":"22222","numberOfYears":"5"}]" "Jane","Doe","1980-10-16","[{"status":"current","address":"789_Any_Avenue","city":"Albany","state":"NY","zip":"33333","numberOfYears":"2"},{"status":"previous","address":"321_Main_Street","city":"Hoboken","state":"NJ","zip":"44444","numberOfYears":"3"}]" BigQuery case: A URI of a BigQuery table. The user data size of the BigQuery table must be 100GB or smaller. The column names must contain the model's [input_feature_column_specs'][google.cloud.automl.v1beta1.TablesModelMetadata.input_feature_column_specs] [display_name-s][google.cloud.automl.v1beta1.ColumnSpec.display_name] (order doesn't matter). The columns corresponding to the model's input feature column specs must contain values compatible with the column spec's data types. Prediction on all the rows of the table will be attempted. For FORECASTING [prediction_type][google.cloud.automl.v1beta1.TablesModelMetadata.prediction_type]: all columns having [TIME_SERIES_AVAILABLE_PAST_ONLY][google.cloud.automl.v1beta1.ColumnSpec.ForecastingMetadata.ColumnType] type will be ignored.

Definitions: GCS_FILE_PATH = A path to a file on GCS, e.g. "gs://folder/video.avi". TEXT_SNIPPET = The content of a text snippet, UTF-8 encoded, enclosed within double quotes (""). TIME_SEGMENT_START = TIME_OFFSET Expresses the beginning, inclusive, of a time segment within an example that has a time dimension (e.g. video). TIME_SEGMENT_END = TIME_OFFSET Expresses the end, exclusive, of a time segment within an example that has a time dimension (e.g. video). TIME_OFFSET = A number of seconds as measured from the start of an example (e.g. video). Fractions are allowed, up to microsecond precision. "inf" is allowed, and it means the end of the example.

Errors: If any of the provided CSV files can't be parsed, or if more than a certain percent of CSV rows cannot be processed, then the operation fails and prediction does not happen. Regardless of overall success or failure, the per-row failures, up to a certain count cap, will be listed in Operation.metadata.partial_failures.

Used in: BatchPredictOperationMetadata, BatchPredictRequest
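
A minimal sketch that writes a BatchPredict input CSV in the Video Classification format described above, reusing the documented sample rows (the GCS paths are illustrative):

```python
import csv

# GCS_FILE_PATH,TIME_SEGMENT_START,TIME_SEGMENT_END (no header row).
rows = [
    ("gs://folder/video1.mp4", "10", "40"),
    ("gs://folder/video1.mp4", "20", "60"),
    ("gs://folder/vid2.mov", "0", "inf"),  # "inf" = end of the video
]

with open("batch_predict_input.csv", "w", newline="") as f:
    csv.writer(f).writerows(rows)
```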

message BatchPredictOperationMetadata

operations.proto:123

Details of BatchPredict operation.

Used in: OperationMetadata

message BatchPredictOperationMetadata.BatchPredictOutputInfo

operations.proto:128

Further describes this batch predict's output. Supplements [BatchPredictOutputConfig][google.cloud.automl.v1beta1.BatchPredictOutputConfig].

Used in: BatchPredictOperationMetadata

message BatchPredictOutputConfig

io.proto:971

Output configuration for BatchPredict Action. As destination the [gcs_destination][google.cloud.automl.v1beta1.BatchPredictOutputConfig.gcs_destination] must be set, unless specified otherwise for a domain. If gcs_destination is set, then in the given directory a new directory is created. Its name will be "prediction-<model-display-name>-<timestamp-of-prediction-call>", where the timestamp is in YYYY-MM-DDThh:mm:ss.sssZ ISO-8601 format. Its contents depend on the ML problem the predictions are made for.

* For Image Classification: In the created directory, files `image_classification_1.jsonl`, `image_classification_2.jsonl`,...,`image_classification_N.jsonl` will be created, where N may be 1 and depends on the total number of successfully predicted images and annotations. A single image will be listed only once with all its annotations, and its annotations will never be split across files. Each .JSONL file will contain, per line, a JSON representation of a proto that wraps the image's "ID" : "<id_value>" followed by a list of zero or more AnnotationPayload protos (called annotations), which have classification detail populated. If prediction for any image failed (partially or completely), then additional `errors_1.jsonl`, `errors_2.jsonl`,..., `errors_N.jsonl` files will be created (N depends on the total number of failed predictions). These files will have a JSON representation of a proto that wraps the same "ID" : "<id_value>", but here followed by exactly one [`google.rpc.Status`](https://github.com/googleapis/googleapis/blob/master/google/rpc/status.proto) containing only `code` and `message` fields.

* For Image Object Detection: In the created directory, files `image_object_detection_1.jsonl`, `image_object_detection_2.jsonl`,...,`image_object_detection_N.jsonl` will be created, where N may be 1 and depends on the total number of successfully predicted images and annotations. Each .JSONL file will contain, per line, a JSON representation of a proto that wraps the image's "ID" : "<id_value>" followed by a list of zero or more AnnotationPayload protos (called annotations), which have image_object_detection detail populated. A single image will be listed only once with all its annotations, and its annotations will never be split across files. If prediction for any image failed (partially or completely), then additional `errors_1.jsonl`, `errors_2.jsonl`,..., `errors_N.jsonl` files will be created (N depends on the total number of failed predictions). These files will have a JSON representation of a proto that wraps the same "ID" : "<id_value>", but here followed by exactly one [`google.rpc.Status`](https://github.com/googleapis/googleapis/blob/master/google/rpc/status.proto) containing only `code` and `message` fields.

* For Video Classification: In the created directory, a video_classification.csv file and a .JSON file for each video classification requested in the input (i.e. each line in the given CSV(s)) will be created. The format of video_classification.csv is: GCS_FILE_PATH,TIME_SEGMENT_START,TIME_SEGMENT_END,JSON_FILE_NAME,STATUS where: GCS_FILE_PATH,TIME_SEGMENT_START,TIME_SEGMENT_END = matches 1 to 1 the prediction input lines (i.e. video_classification.csv has precisely the same number of lines as the prediction input had). JSON_FILE_NAME = Name of the .JSON file in the output directory which contains prediction responses for the video time segment. STATUS = "OK" if prediction completed successfully, or an error code with message otherwise. If STATUS is not "OK", then the .JSON file for that line may not exist or be empty. Each .JSON file, assuming STATUS is "OK", will contain a list of AnnotationPayload protos in JSON format, which are the predictions for the video time segment the file is assigned to in video_classification.csv. All AnnotationPayload protos will have the video_classification field set, and will be sorted by the video_classification.type field (note that the returned types are governed by the `classification_types` parameter in [PredictionService.BatchPredictRequest.params][]).

* For Video Object Tracking: In the created directory, a video_object_tracking.csv file will be created, along with files video_object_tracking_1.json, video_object_tracking_2.json,..., video_object_tracking_N.json, where N is the number of requests in the input (i.e. the number of lines in the given CSV(s)). The format of video_object_tracking.csv is: GCS_FILE_PATH,TIME_SEGMENT_START,TIME_SEGMENT_END,JSON_FILE_NAME,STATUS where: GCS_FILE_PATH,TIME_SEGMENT_START,TIME_SEGMENT_END = matches 1 to 1 the prediction input lines (i.e. video_object_tracking.csv has precisely the same number of lines as the prediction input had). JSON_FILE_NAME = Name of the .JSON file in the output directory which contains prediction responses for the video time segment. STATUS = "OK" if prediction completed successfully, or an error code with message otherwise. If STATUS is not "OK", then the .JSON file for that line may not exist or be empty. Each .JSON file, assuming STATUS is "OK", will contain a list of AnnotationPayload protos in JSON format, which are the predictions for each frame of the video time segment the file is assigned to in video_object_tracking.csv. All AnnotationPayload protos will have the video_object_tracking field set.

* For Text Classification: In the created directory, files `text_classification_1.jsonl`, `text_classification_2.jsonl`,...,`text_classification_N.jsonl` will be created, where N may be 1 and depends on the total number of inputs and annotations found. Each .JSONL file will contain, per line, a JSON representation of a proto that wraps the input text snippet or input text file and a list of zero or more AnnotationPayload protos (called annotations), which have classification detail populated. A single text snippet or file will be listed only once with all its annotations, and its annotations will never be split across files. If prediction for any text snippet or file failed (partially or completely), then additional `errors_1.jsonl`, `errors_2.jsonl`,..., `errors_N.jsonl` files will be created (N depends on the total number of failed predictions). These files will have a JSON representation of a proto that wraps the input text snippet or input text file followed by exactly one [`google.rpc.Status`](https://github.com/googleapis/googleapis/blob/master/google/rpc/status.proto) containing only `code` and `message`.

* For Text Sentiment: In the created directory, files `text_sentiment_1.jsonl`, `text_sentiment_2.jsonl`,...,`text_sentiment_N.jsonl` will be created, where N may be 1 and depends on the total number of inputs and annotations found. Each .JSONL file will contain, per line, a JSON representation of a proto that wraps the input text snippet or input text file and a list of zero or more AnnotationPayload protos (called annotations), which have text_sentiment detail populated. A single text snippet or file will be listed only once with all its annotations, and its annotations will never be split across files. If prediction for any text snippet or file failed (partially or completely), then additional `errors_1.jsonl`, `errors_2.jsonl`,..., `errors_N.jsonl` files will be created (N depends on the total number of failed predictions). These files will have a JSON representation of a proto that wraps the input text snippet or input text file followed by exactly one [`google.rpc.Status`](https://github.com/googleapis/googleapis/blob/master/google/rpc/status.proto) containing only `code` and `message`.

* For Text Extraction: In the created directory, files `text_extraction_1.jsonl`, `text_extraction_2.jsonl`,...,`text_extraction_N.jsonl` will be created, where N may be 1 and depends on the total number of inputs and annotations found. The contents of these .JSONL file(s) depend on whether the input used inline text or documents. If the input was inline, then each .JSONL file will contain, per line, a JSON representation of a proto that wraps the text snippet's "id" given in the request (if specified), followed by the input text snippet, and a list of zero or more AnnotationPayload protos (called annotations), which have text_extraction detail populated. A single text snippet will be listed only once with all its annotations, and its annotations will never be split across files. If the input used documents, then each .JSONL file will contain, per line, a JSON representation of a proto that wraps the document proto given in the request, followed by its OCR-ed representation in the form of a text snippet, finally followed by a list of zero or more AnnotationPayload protos (called annotations), which have text_extraction detail populated and refer, via their indices, to the OCR-ed text snippet. A single document (and its text snippet) will be listed only once with all its annotations, and its annotations will never be split across files. If prediction for any text snippet failed (partially or completely), then additional `errors_1.jsonl`, `errors_2.jsonl`,..., `errors_N.jsonl` files will be created (N depends on the total number of failed predictions). These files will have a JSON representation of a proto that wraps either the "id" : "<id_value>" (in case of inline) or the document proto (in case of document), but here followed by exactly one [`google.rpc.Status`](https://github.com/googleapis/googleapis/blob/master/google/rpc/status.proto) containing only `code` and `message`.

* For Tables: Output depends on whether [gcs_destination][google.cloud.automl.v1beta1.BatchPredictOutputConfig.gcs_destination] or [bigquery_destination][google.cloud.automl.v1beta1.BatchPredictOutputConfig.bigquery_destination] is set (either is allowed). GCS case: In the created directory, files `tables_1.csv`, `tables_2.csv`,..., `tables_N.csv` will be created, where N may be 1 and depends on the total number of successfully predicted rows. For all CLASSIFICATION [prediction_type-s][google.cloud.automl.v1beta1.TablesModelMetadata.prediction_type]: Each .csv file will contain a header listing all columns' [display_name-s][google.cloud.automl.v1beta1.ColumnSpec.display_name] given on input, followed by M target column names in the format of "<[target_column_specs][google.cloud.automl.v1beta1.TablesModelMetadata.target_column_spec] [display_name][google.cloud.automl.v1beta1.ColumnSpec.display_name]>_<target value>_score", where M is the number of distinct target values, i.e. the number of distinct values in the target column of the table used to train the model. Subsequent lines will contain the respective values of successfully predicted rows, with the last, i.e. the target, columns having the corresponding prediction [scores][google.cloud.automl.v1beta1.TablesAnnotation.score]. For REGRESSION and FORECASTING [prediction_type-s][google.cloud.automl.v1beta1.TablesModelMetadata.prediction_type]: Each .csv file will contain a header listing all columns' [display_name-s][google.cloud.automl.v1beta1.ColumnSpec.display_name] given on input, followed by the predicted target column with name in the format of "predicted_<[target_column_specs][google.cloud.automl.v1beta1.TablesModelMetadata.target_column_spec] [display_name][google.cloud.automl.v1beta1.ColumnSpec.display_name]>". Subsequent lines will contain the respective values of successfully predicted rows, with the last, i.e. the target, column having the predicted target value. If prediction for any rows failed, then additional `errors_1.csv`, `errors_2.csv`,..., `errors_N.csv` files will be created (N depends on the total number of failed rows). These files will have an analogous format to `tables_*.csv`, but always with a single target column having [`google.rpc.Status`](https://github.com/googleapis/googleapis/blob/master/google/rpc/status.proto) represented as a JSON string, and containing only `code` and `message`. BigQuery case: [bigquery_destination][google.cloud.automl.v1beta1.OutputConfig.bigquery_destination] pointing to a BigQuery project must be set. In the given project a new dataset will be created with name `prediction_<model-display-name>_<timestamp-of-prediction-call>`, where <model-display-name> will be made BigQuery-dataset-name compatible (e.g. most special characters will become underscores), and the timestamp will be in YYYY_MM_DDThh_mm_ss_sssZ "based on ISO-8601" format. In the dataset two tables will be created, `predictions` and `errors`. The `predictions` table's column names will be the input columns' [display_name-s][google.cloud.automl.v1beta1.ColumnSpec.display_name], followed by the target column with name in the format of "predicted_<[target_column_specs][google.cloud.automl.v1beta1.TablesModelMetadata.target_column_spec] [display_name][google.cloud.automl.v1beta1.ColumnSpec.display_name]>". The input feature columns will contain the respective values of successfully predicted rows, with the target column having an ARRAY of [AnnotationPayloads][google.cloud.automl.v1beta1.AnnotationPayload], represented as STRUCT-s, containing [TablesAnnotation][google.cloud.automl.v1beta1.TablesAnnotation]. The `errors` table contains rows for which the prediction has failed; it has analogous input columns, while the target column name is in the format of "errors_<[target_column_specs][google.cloud.automl.v1beta1.TablesModelMetadata.target_column_spec] [display_name][google.cloud.automl.v1beta1.ColumnSpec.display_name]>", and as a value has [`google.rpc.Status`](https://github.com/googleapis/googleapis/blob/master/google/rpc/status.proto) represented as a STRUCT, and containing only `code` and `message`.

Used in: BatchPredictRequest
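
A sketch wiring the input and output configs above into a BatchPredict call, assuming the google-cloud-automl Python client (the model name and bucket are hypothetical, and the flattened keyword arguments are version-dependent):

```python
from google.cloud import automl_v1beta1

client = automl_v1beta1.PredictionServiceClient()

model_name = "projects/my-project/locations/us-central1/models/VCN1234567890"  # hypothetical

input_config = {"gcs_source": {"input_uris": ["gs://my-bucket/batch_predict_input.csv"]}}
# A new "prediction-<model-display-name>-<timestamp>" directory is created under this prefix.
output_config = {"gcs_destination": {"output_uri_prefix": "gs://my-bucket/predictions/"}}

operation = client.batch_predict(
    name=model_name, input_config=input_config, output_config=output_config
)
operation.result()  # blocks until the long-running operation completes
```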

message BatchPredictResult

prediction_service.proto:244

Result of the Batch Predict. This message is returned in [response][google.longrunning.Operation.response] of the operation returned by the [PredictionService.BatchPredict][google.cloud.automl.v1beta1.PredictionService.BatchPredict].

message BigQueryDestination

io.proto:1112

The BigQuery location for the output content.

Used in: BatchPredictOutputConfig, ExportEvaluatedExamplesOutputConfig, OutputConfig

message BigQuerySource

io.proto:1093

The BigQuery location for the input content.

Used in: BatchPredictInputConfig, InputConfig

message BoundingBoxMetricsEntry

detection.proto:69

Bounding box matching model metrics for a single intersection-over-union threshold and multiple label match confidence thresholds.

Used in: ImageObjectDetectionEvaluationMetrics, VideoObjectTrackingEvaluationMetrics

message BoundingBoxMetricsEntry.ConfidenceMetricsEntry

detection.proto:71

Metrics for a single confidence threshold.

Used in: BoundingBoxMetricsEntry

message BoundingPoly

geometry.proto:44

A bounding polygon of a detected object on a plane. On output both vertices and normalized_vertices are provided. The polygon is formed by connecting vertices in the order they are listed.

Used in: Document.Layout, ImageObjectDetectionAnnotation, VideoObjectTrackingAnnotation

message CategoryStats

data_stats.proto:145

The data statistics of a series of CATEGORY values.

Used in: DataStats

message CategoryStats.SingleCategoryStats

data_stats.proto:147

The statistics of a single CATEGORY value.

Used in: CategoryStats

message ClassificationAnnotation

classification.proto:42

Contains annotation details specific to classification.

Used in: AnnotationPayload, VideoClassificationAnnotation

message ClassificationEvaluationMetrics

classification.proto:89

Model evaluation metrics for classification problems. Note: For Video Classification these metrics describe only the quality of the Video Classification predictions of "segment_classification" type.

Used in: ModelEvaluation

message ClassificationEvaluationMetrics.ConfidenceMetricsEntry

classification.proto:91

Metrics for a single confidence threshold.

Used in: ClassificationEvaluationMetrics

message ClassificationEvaluationMetrics.ConfusionMatrix

classification.proto:153

Confusion matrix of the model running the classification.

Used in: ClassificationEvaluationMetrics, TextSentimentEvaluationMetrics

message ClassificationEvaluationMetrics.ConfusionMatrix.Row

classification.proto:155

Output only. A row in the confusion matrix.

Used in: ConfusionMatrix

enum ClassificationType

classification.proto:30

Type of the classification problem.

Used in: ImageClassificationDatasetMetadata, TextClassificationDatasetMetadata, TextClassificationModelMetadata

message ColumnSpec

column_spec.proto:34

A representation of a column in a relational table. When listing them, column specs are returned in the same order in which they were given on import. Used by: * Tables

Used as response type in: AutoMl.GetColumnSpec, AutoMl.UpdateColumnSpec

Used as field type in: ListColumnSpecsResponse, TablesModelMetadata, UpdateColumnSpecRequest

message ColumnSpec.CorrelatedColumn

column_spec.proto:37

Identifies the table's column, and its correlation with the column this ColumnSpec describes.

Used in: ColumnSpec

message CorrelationStats

data_stats.proto:164

Correlation statistics between two series of DataType values. The series may have differing DataType-s, but within a single series the DataType must be the same.

Used in: ColumnSpec.CorrelatedColumn, TablesDatasetMetadata

message CreateModelOperationMetadata

operations.proto:95

Details of CreateModel operation.

Used in: OperationMetadata

(message has no fields)

message DataStats

data_stats.proto:29

The data statistics of a series of values that share the same DataType.

Used in: ArrayStats, ColumnSpec, StructStats

message DataType

data_types.proto:67

Indicates the type of data that can be stored in a structured data entity (e.g. a table).

Used in: ColumnSpec, StructType

message Dataset

dataset.proto:36

A workspace for solving a single, particular machine learning (ML) problem. A workspace contains examples that may be annotated.

Used as response type in: AutoMl.CreateDataset, AutoMl.GetDataset, AutoMl.UpdateDataset

Used as field type in: CreateDatasetRequest, ListDatasetsResponse, UpdateDatasetRequest
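
A sketch of creating a Dataset for text classification, assuming the google-cloud-automl Python client (the parent and display_name are hypothetical; the metadata field names come from the messages listed in this file):

```python
from google.cloud import automl_v1beta1

client = automl_v1beta1.AutoMlClient()
parent = "projects/my-project/locations/us-central1"  # hypothetical

dataset = {
    "display_name": "my_text_dataset",
    # TextClassificationDatasetMetadata with a ClassificationType, per this file.
    "text_classification_dataset_metadata": {"classification_type": "MULTICLASS"},
}

created = client.create_dataset(parent=parent, dataset=dataset)
print(created.name)  # server-assigned; the dataset ID is the last path element
```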

message DeleteOperationMetadata

operations.proto:86

Details of operations that perform deletes of any entities.

Used in: OperationMetadata

(message has no fields)

message DeployModelOperationMetadata

operations.proto:89

Details of DeployModel operation.

Used in: OperationMetadata

(message has no fields)

message Document

data_items.proto:102

A structured text document, e.g. a PDF.

Used in: ExamplePayload

message Document.Layout

data_items.proto:106

Describes the layout information of a [text_segment][google.cloud.automl.v1beta1.Document.Layout.text_segment] in the document.

Used in: Document

enum Document.Layout.TextSegmentType

data_items.proto:108

The type of TextSegment in the context of the original document.

Used in: Layout

message DocumentDimensions

data_items.proto:75

Message that describes the dimensions of a document.

Used in: Document

enum DocumentDimensions.DocumentDimensionUnit

data_items.proto:77

Unit of the document dimension.

Used in: DocumentDimensions

message DocumentInputConfig

io.proto:634

Input configuration of a [Document][google.cloud.automl.v1beta1.Document].

Used in: Document

message DoubleRange

ranges.proto:30

A range between two double numbers.

Used in: TablesAnnotation

message ExamplePayload

data_items.proto:216

Example data used for training or prediction.

Used in: PredictRequest, PredictResponse

message ExportDataOperationMetadata

operations.proto:101

Details of ExportData operation.

Used in: OperationMetadata

message ExportDataOperationMetadata.ExportDataOutputInfo

operations.proto:105

Further describes this export data's output. Supplements [OutputConfig][google.cloud.automl.v1beta1.OutputConfig].

Used in: ExportDataOperationMetadata

message ExportEvaluatedExamplesOperationMetadata

operations.proto:167

Details of ExportEvaluatedExamples operation.

Used in: OperationMetadata

message ExportEvaluatedExamplesOperationMetadata.ExportEvaluatedExamplesOutputInfo

operations.proto:172

Further describes the output of the evaluated examples export. Supplements [ExportEvaluatedExamplesOutputConfig][google.cloud.automl.v1beta1.ExportEvaluatedExamplesOutputConfig].

Used in: ExportEvaluatedExamplesOperationMetadata

message ExportEvaluatedExamplesOutputConfig

io.proto:1076

Output configuration for ExportEvaluatedExamples Action. Note that this call is available only for 30 days after the model was evaluated. The output depends on the domain, as follows (note that only examples from the TEST set are exported): * For Tables: [bigquery_destination][google.cloud.automl.v1beta1.OutputConfig.bigquery_destination] pointing to a BigQuery project must be set. In the given project a new dataset will be created with name `export_evaluated_examples_<model-display-name>_<timestamp-of-export-call>`, where <model-display-name> will be made BigQuery-dataset-name compatible (e.g. most special characters will become underscores), and the timestamp will be in YYYY_MM_DDThh_mm_ss_sssZ "based on ISO-8601" format. In the dataset an `evaluated_examples` table will be created. It will have all the same columns as the [primary_table][google.cloud.automl.v1beta1.TablesDatasetMetadata.primary_table_spec_id] of the [dataset][google.cloud.automl.v1beta1.Model.dataset_id] from which the model was created, as they were at the moment of the model's evaluation (this includes the target column with its ground truth), followed by a column called "predicted_<target_column>". That last column will contain the model's prediction result for each respective row, given as an ARRAY of [AnnotationPayloads][google.cloud.automl.v1beta1.AnnotationPayload], represented as STRUCT-s, containing [TablesAnnotation][google.cloud.automl.v1beta1.TablesAnnotation].

Used in: ExportEvaluatedExamplesRequest
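
A sketch of exporting evaluated examples to BigQuery as described above, assuming the google-cloud-automl Python client; the "bq://" destination format follows BigQueryDestination, and the model name is hypothetical:

```python
from google.cloud import automl_v1beta1

client = automl_v1beta1.AutoMlClient()
model_name = "projects/my-project/locations/us-central1/models/TBL1234567890"  # hypothetical

# BigQueryDestination; the new "export_evaluated_examples_..." dataset is
# created inside this project.
output_config = {"bigquery_destination": {"output_uri": "bq://my-project"}}

operation = client.export_evaluated_examples(name=model_name, output_config=output_config)
operation.result()  # waits for the export to finish
```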

message ExportModelOperationMetadata

operations.proto:150

Details of ExportModel operation.

Used in: OperationMetadata

message ExportModelOperationMetadata.ExportModelOutputInfo

operations.proto:155

Further describes the output of model export. Supplements [ModelExportOutputConfig][google.cloud.automl.v1beta1.ModelExportOutputConfig].

Used in: ExportModelOperationMetadata

message Float64Stats

data_stats.proto:62

The data statistics of a series of FLOAT64 values.

Used in: DataStats

message Float64Stats.HistogramBucket

data_stats.proto:64

A bucket of a histogram.

Used in: Float64Stats

message GcrDestination

io.proto:1120

The GCR location to which the image must be pushed.

Used in: ModelExportOutputConfig

message GcsDestination

io.proto:1101

The Google Cloud Storage location where the output is to be written.

Used in: BatchPredictOutputConfig, ModelExportOutputConfig, OutputConfig

message GcsSource

io.proto:1085

The Google Cloud Storage location for the input content.

Used in: BatchPredictInputConfig, DocumentInputConfig, InputConfig

message Image

data_items.proto:36

A representation of an image. Only images up to 30MB in size are supported.

Used in: ExamplePayload

message ImageClassificationDatasetMetadata

image.proto:33

Dataset metadata that is specific to image classification.

Used in: Dataset

message ImageClassificationModelDeploymentMetadata

image.proto:161

Model deployment metadata specific to Image Classification.

Used in: DeployModelRequest

message ImageClassificationModelMetadata

image.proto:42

Model metadata for image classification.

Used in: Model

message ImageObjectDetectionAnnotation

detection.proto:31

Annotation details for image object detection.

Used in: AnnotationPayload

message ImageObjectDetectionDatasetMetadata

image.proto:39

Dataset metadata specific to image object detection.

Used in: Dataset

(message has no fields)

message ImageObjectDetectionEvaluationMetrics

detection.proto:100

Model evaluation metrics for image object detection problems. Evaluates prediction quality of labeled bounding boxes.

Used in: ModelEvaluation

message ImageObjectDetectionModelDeploymentMetadata

image.proto:172

Model deployment metadata specific to Image Object Detection.

Used in: DeployModelRequest

message ImageObjectDetectionModelMetadata

image.proto:113

Model metadata specific to image object detection.

Used in: Model

message ImportDataOperationMetadata

operations.proto:98

Details of ImportData operation.

Used in: OperationMetadata

(message has no fields)

message InputConfig

io.proto:386

Input configuration for ImportData Action. The format of the input depends on the dataset_metadata of the Dataset into which the import is happening. As input source the [gcs_source][google.cloud.automl.v1beta1.InputConfig.gcs_source] is expected, unless specified otherwise. Additionally, any input .CSV file by itself must be 100MB or smaller, unless specified otherwise. If an "example" file (that is, image, video etc.) with identical content (even if it had a different GCS_FILE_PATH) is mentioned multiple times, then its label, bounding boxes etc. are appended. The same file should always be provided with the same ML_USE and GCS_FILE_PATH; if it is not, then these values are nondeterministically selected from the given ones. The formats are represented in EBNF with commas being literal and with non-terminal symbols defined near the end of this comment. The formats are:

* For Image Classification: CSV file(s) with each line in format: ML_USE,GCS_FILE_PATH,LABEL,LABEL,... GCS_FILE_PATH leads to an image of up to 30MB in size. Supported extensions: .JPEG, .GIF, .PNG, .WEBP, .BMP, .TIFF, .ICO. For the MULTICLASS classification type, at most one LABEL is allowed per image. If an image has not yet been labeled, then it should be mentioned just once with no LABEL. Some sample rows: TRAIN,gs://folder/image1.jpg,daisy TEST,gs://folder/image2.jpg,dandelion,tulip,rose UNASSIGNED,gs://folder/image3.jpg,daisy UNASSIGNED,gs://folder/image4.jpg

* For Image Object Detection: CSV file(s) with each line in format: ML_USE,GCS_FILE_PATH,(LABEL,BOUNDING_BOX | ,,,,,,,) GCS_FILE_PATH leads to an image of up to 30MB in size. Supported extensions: .JPEG, .GIF, .PNG. Each image is assumed to be exhaustively labeled. The minimum allowed BOUNDING_BOX edge length is 0.01, and no more than 500 BOUNDING_BOX-es per image are allowed (one BOUNDING_BOX is defined per line). If an image has not yet been labeled, then it should be mentioned just once with no LABEL and with ",,,,,,," in place of the BOUNDING_BOX. Images which are known to contain no bounding boxes should be labeled explicitly as "NEGATIVE_IMAGE", followed by ",,,,,,," in place of the BOUNDING_BOX. Sample rows: TRAIN,gs://folder/image1.png,car,0.1,0.1,,,0.3,0.3,, TRAIN,gs://folder/image1.png,bike,.7,.6,,,.8,.9,, UNASSIGNED,gs://folder/im2.png,car,0.1,0.1,0.2,0.1,0.2,0.3,0.1,0.3 TEST,gs://folder/im3.png,,,,,,,,, TRAIN,gs://folder/im4.png,NEGATIVE_IMAGE,,,,,,,,,

* For Video Classification: CSV file(s) with each line in format: ML_USE,GCS_FILE_PATH where the ML_USE value VALIDATE should not be used. The GCS_FILE_PATH should lead to another .csv file which describes examples that have the given ML_USE, using the following row format: GCS_FILE_PATH,(LABEL,TIME_SEGMENT_START,TIME_SEGMENT_END | ,,) Here GCS_FILE_PATH leads to a video of up to 50GB in size and up to 3h in duration. Supported extensions: .MOV, .MPEG4, .MP4, .AVI. TIME_SEGMENT_START and TIME_SEGMENT_END must be within the length of the video, and the end has to be after the start. Any segment of a video which has one or more labels on it is considered a hard negative for all other labels. Any segment with no labels on it is considered to be unknown. If a whole video is unknown, then it should be mentioned just once with ",," in place of LABEL,TIME_SEGMENT_START,TIME_SEGMENT_END. Sample top-level CSV file: TRAIN,gs://folder/train_videos.csv TEST,gs://folder/test_videos.csv UNASSIGNED,gs://folder/other_videos.csv Sample rows of a CSV file for a particular ML_USE: gs://folder/video1.avi,car,120,180.000021 gs://folder/video1.avi,bike,150,180.000021 gs://folder/vid2.avi,car,0,60.5 gs://folder/vid3.avi,,,

* For Video Object Tracking: CSV file(s) with each line in format: ML_USE,GCS_FILE_PATH where the ML_USE value VALIDATE should not be used. The GCS_FILE_PATH should lead to another .csv file which describes examples that have the given ML_USE, using one of the following row formats: GCS_FILE_PATH,LABEL,[INSTANCE_ID],TIMESTAMP,BOUNDING_BOX or GCS_FILE_PATH,,,,,,,,,, Here GCS_FILE_PATH leads to a video of up to 50GB in size and up to 3h in duration. Supported extensions: .MOV, .MPEG4, .MP4, .AVI. Providing INSTANCE_IDs can help to obtain a better model. When a specific labeled entity leaves the video frame and shows up again later, it is not required, albeit preferable, that the same INSTANCE_ID is given to it. TIMESTAMP must be within the length of the video; the BOUNDING_BOX is assumed to be drawn on the video frame closest to the TIMESTAMP. Any frame mentioned by a TIMESTAMP is expected to be exhaustively labeled, and no more than 500 BOUNDING_BOX-es per frame are allowed. If a whole video is unknown, then it should be mentioned just once with ",,,,,,,,,," in place of LABEL,[INSTANCE_ID],TIMESTAMP,BOUNDING_BOX. Sample top-level CSV file: TRAIN,gs://folder/train_videos.csv TEST,gs://folder/test_videos.csv UNASSIGNED,gs://folder/other_videos.csv Seven sample rows of a CSV file for a particular ML_USE: gs://folder/video1.avi,car,1,12.10,0.8,0.8,0.9,0.8,0.9,0.9,0.8,0.9 gs://folder/video1.avi,car,1,12.90,0.4,0.8,0.5,0.8,0.5,0.9,0.4,0.9 gs://folder/video1.avi,car,2,12.10,.4,.2,.5,.2,.5,.3,.4,.3 gs://folder/video1.avi,car,2,12.90,.8,.2,,,.9,.3,, gs://folder/video1.avi,bike,,12.50,.45,.45,,,.55,.55,, gs://folder/video2.avi,car,1,0,.1,.9,,,.9,.1,, gs://folder/video2.avi,,,,,,,,,,,

* For Text Extraction: CSV file(s) with each line in format: ML_USE,GCS_FILE_PATH GCS_FILE_PATH leads to a .JSONL (that is, JSON Lines) file which either imports text in-line or as documents. Any given .JSONL file must be 100MB or smaller. The in-line .JSONL file contains, per line, a proto that wraps a TextSnippet proto (in JSON representation) followed by one or more AnnotationPayload protos (called annotations), which have display_name and text_extraction detail populated. The given text is expected to be annotated exhaustively; for example, if you look for animals and the text contains "dolphin" that is not labeled, then "dolphin" is assumed to not be an animal. Any given text snippet content must be 10KB or smaller, and also be UTF-8 NFC encoded (ASCII already is). The document .JSONL file contains, per line, a proto that wraps a Document proto. The Document proto must have either document_text or input_config set. In the document_text case, the Document proto may also contain the spatial information of the document, including layout, document dimension and page number. In the input_config case, only PDF documents are supported now, and each document may be up to 2MB large. Currently, annotations on documents cannot be specified at import. Three sample CSV rows: TRAIN,gs://folder/file1.jsonl VALIDATE,gs://folder/file2.jsonl TEST,gs://folder/file3.jsonl Sample in-line JSON Lines file for entity extraction (presented here with artificial line breaks, but the only actual line break is denoted by \n): { "document": { "document_text": {"content": "dog cat"}, "layout": [ { "text_segment": { "start_offset": 0, "end_offset": 3 }, "page_number": 1, "bounding_poly": { "normalized_vertices": [ {"x": 0.1, "y": 0.1}, {"x": 0.1, "y": 0.3}, {"x": 0.3, "y": 0.3}, {"x": 0.3, "y": 0.1} ] }, "text_segment_type": TOKEN }, { "text_segment": { "start_offset": 4, "end_offset": 7 }, "page_number": 1, "bounding_poly": { "normalized_vertices": [ {"x": 0.4, "y": 0.1}, {"x": 0.4, "y": 0.3}, {"x": 0.8, "y": 0.3}, {"x": 0.8, "y": 0.1} ] }, "text_segment_type": TOKEN } ], "document_dimensions": { "width": 8.27, "height": 11.69, "unit": INCH }, "page_count": 1 }, "annotations": [ { "display_name": "animal", "text_extraction": {"text_segment": {"start_offset": 0, "end_offset": 3}} }, { "display_name": "animal", "text_extraction": {"text_segment": {"start_offset": 4, "end_offset": 7}} } ] }\n { "text_snippet": { "content": "This dog is good." }, "annotations": [ { "display_name": "animal", "text_extraction": { "text_segment": {"start_offset": 5, "end_offset": 8} } } ] } Sample document JSON Lines file (presented here with artificial line breaks, but the only actual line break is denoted by \n): { "document": { "input_config": { "gcs_source": { "input_uris": [ "gs://folder/document1.pdf" ] } } } }\n { "document": { "input_config": { "gcs_source": { "input_uris": [ "gs://folder/document2.pdf" ] } } } }

* For Text Classification: CSV file(s) with each line in format: ML_USE,(TEXT_SNIPPET | GCS_FILE_PATH),LABEL,LABEL,... TEXT_SNIPPET and GCS_FILE_PATH are distinguished by a pattern. If the column content is a valid GCS file path, i.e. prefixed by "gs://", it is treated as a GCS_FILE_PATH; otherwise, if the content is enclosed within double quotes (""), it is treated as a TEXT_SNIPPET. In the GCS_FILE_PATH case, the path must lead to a .txt file with UTF-8 encoding, for example, "gs://folder/content.txt", and the content in it is extracted as a text snippet. In the TEXT_SNIPPET case, the column content, excluding quotes, is treated as the text snippet to be imported. In both cases, the text snippet/file size must be within 128kB. At most 100 unique labels are allowed per CSV row. Sample rows: TRAIN,"They have bad food and very rude",RudeService,BadFood TRAIN,gs://folder/content.txt,SlowService TEST,"Typically always bad service there.",RudeService VALIDATE,"Stomach ache to go.",BadFood

* For Text Sentiment: CSV file(s) with each line in format: ML_USE,(TEXT_SNIPPET | GCS_FILE_PATH),SENTIMENT TEXT_SNIPPET and GCS_FILE_PATH are distinguished by a pattern. If the column content is a valid GCS file path, that is, prefixed by "gs://", it is treated as a GCS_FILE_PATH; otherwise it is treated as a TEXT_SNIPPET. In the GCS_FILE_PATH case, the path must lead to a .txt file with UTF-8 encoding, for example, "gs://folder/content.txt", and the content in it is extracted as a text snippet. In the TEXT_SNIPPET case, the column content itself is treated as the text snippet to be imported. In both cases, the text snippet must be up to 500 characters long. Sample rows: TRAIN,"@freewrytin this is way too good for your product",2 TRAIN,"I need this product so bad",3 TEST,"Thank you for this product.",4 VALIDATE,gs://folder/content.txt,2

* For Tables: Either [gcs_source][google.cloud.automl.v1beta1.InputConfig.gcs_source] or [bigquery_source][google.cloud.automl.v1beta1.InputConfig.bigquery_source] can be used. All inputs are concatenated into a single [primary_table][google.cloud.automl.v1beta1.TablesDatasetMetadata.primary_table_name]. For gcs_source: CSV file(s), where the first row of the first file is the header, containing unique column names. If the first row of a subsequent file is the same as the header, then it is also treated as a header. All other rows contain values for the corresponding columns. Each .CSV file by itself must be 10GB or smaller, and their total size must be 100GB or smaller. First three sample rows of a CSV file: "Id","First Name","Last Name","Dob","Addresses" "1","John","Doe","1968-01-22","[{"status":"current","address":"123_First_Avenue","city":"Seattle","state":"WA","zip":"11111","numberOfYears":"1"},{"status":"previous","address":"456_Main_Street","city":"Portland","state":"OR","zip":"22222","numberOfYears":"5"}]" "2","Jane","Doe","1980-10-16","[{"status":"current","address":"789_Any_Avenue","city":"Albany","state":"NY","zip":"33333","numberOfYears":"2"},{"status":"previous","address":"321_Main_Street","city":"Hoboken","state":"NJ","zip":"44444","numberOfYears":"3"}]" For bigquery_source: A URI of a BigQuery table. The user data size of the BigQuery table must be 100GB or smaller. An imported table must have between 2 and 1,000 columns, inclusive, and between 1,000 and 100,000,000 rows, inclusive. At most 5 import data operations may run in parallel.

Definitions: ML_USE = "TRAIN" | "VALIDATE" | "TEST" | "UNASSIGNED" Describes how the given example (file) should be used for model training. "UNASSIGNED" can be used when the user has no preference. GCS_FILE_PATH = A path to a file on GCS, e.g. "gs://folder/image1.png". LABEL = A display name of an object on an image, video etc., e.g. "dog". Must be up to 32 characters long and can consist only of ASCII Latin letters A-Z and a-z, underscores (_), and ASCII digits 0-9. For each label an AnnotationSpec is created whose display_name becomes the label; AnnotationSpecs are given back in predictions. INSTANCE_ID = A positive integer that identifies a specific instance of a labeled entity on an example. Used e.g. to track two cars in a video while being able to tell them apart. BOUNDING_BOX = VERTEX,VERTEX,VERTEX,VERTEX | VERTEX,,,VERTEX,, A rectangle parallel to the frame of the example (image, video). If 4 vertices are given, they are connected by edges in the order provided; if 2 are given, they are recognized as diagonally opposite vertices of the rectangle. VERTEX = COORDINATE,COORDINATE The first coordinate is horizontal (x), the second is vertical (y). COORDINATE = A float in the 0 to 1 range, relative to the total length of the image or video in the given dimension. For fractions the leading non-decimal 0 can be omitted (i.e. 0.3 = .3). Point 0,0 is in the top left. TIME_SEGMENT_START = TIME_OFFSET Expresses the beginning, inclusive, of a time segment within an example that has a time dimension (e.g. video). TIME_SEGMENT_END = TIME_OFFSET Expresses the end, exclusive, of a time segment within an example that has a time dimension (e.g. video). TIME_OFFSET = A number of seconds as measured from the start of an example (e.g. video). Fractions are allowed, up to microsecond precision. "inf" is allowed, and it means the end of the example. TEXT_SNIPPET = The content of a text snippet, UTF-8 encoded, enclosed within double quotes (""). SENTIMENT = An integer between 0 and Dataset.text_sentiment_dataset_metadata.sentiment_max (inclusive). Describes the ordinal of the sentiment; a higher value means a more positive sentiment. All the values are completely relative, i.e. neither 0 needs to mean a negative or neutral sentiment nor sentiment_max a positive one; it is just required that 0 is the least positive sentiment in the data, and sentiment_max is the most positive one. The SENTIMENT shouldn't be confused with "score" or "magnitude" from the previous Natural Language Sentiment Analysis API. All SENTIMENT values between 0 and sentiment_max must be represented in the imported data. On prediction the same 0 to sentiment_max range will be used. The difference between neighboring sentiment values need not be uniform, e.g. 1 and 2 may be similar whereas the difference between 2 and 3 may be huge.

Errors: If any of the provided CSV files can't be parsed, or if more than a certain percent of CSV rows cannot be processed, then the operation fails and nothing is imported. Regardless of overall success or failure, the per-row failures, up to a certain count cap, are listed in Operation.metadata.partial_failures.

Used in: Image, ImportDataRequest, TableSpec
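
A minimal sketch of the Image Classification import flow described above: write the ML_USE,GCS_FILE_PATH,LABEL rows, upload the CSV to GCS, then point ImportData at it (assuming the google-cloud-automl Python client; paths and names are hypothetical):

```python
import csv

from google.cloud import automl_v1beta1

# ML_USE,GCS_FILE_PATH,LABEL rows, as in the sample rows above.
rows = [
    ("TRAIN", "gs://folder/image1.jpg", "daisy"),
    ("TEST", "gs://folder/image2.jpg", "dandelion"),
    ("UNASSIGNED", "gs://folder/image3.jpg", "daisy"),
]
with open("import.csv", "w", newline="") as f:
    csv.writer(f).writerows(rows)

# After uploading import.csv to GCS:
client = automl_v1beta1.AutoMlClient()
dataset_name = "projects/my-project/locations/us-central1/datasets/ICN1234567890"
input_config = {"gcs_source": {"input_uris": ["gs://folder/import.csv"]}}
client.import_data(name=dataset_name, input_config=input_config).result()
```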

message Model

model.proto:35

API proto representing a trained machine learning model.

Used as response type in: AutoMl.GetModel

Used as field type in: CreateModelRequest, ListModelsResponse

enum Model.DeploymentState

model.proto:37

Deployment state of the model.

Used in: Model

message ModelEvaluation

model_evaluation.proto:37

Evaluation results of a model.

Used as response type in: AutoMl.GetModelEvaluation

Used as field type in: ListModelEvaluationsResponse

message ModelExportOutputConfig

io.proto:984

Output configuration for ModelExport Action.

Used in: ExportModelRequest

message NormalizedVertex

geometry.proto:33

A vertex represents a 2D point in the image. The normalized vertex coordinates are between 0 and 1, as fractions relative to the original plane (image, video). For example, if the plane (e.g. the whole image) has size 10 x 20, then a point with normalized coordinates (0.1, 0.3) is at position (1, 6) on that plane.

Used in: BoundingPoly
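
A worked example of this normalization in Python, matching the 10 x 20 plane described above:

```python
def denormalize(x, y, width, height):
    """Map normalized (0..1) coordinates onto a width x height plane."""
    return round(x * width), round(y * height)

# A 10 x 20 plane: the normalized vertex (0.1, 0.3) lands at (1, 6),
# matching the example above.
assert denormalize(0.1, 0.3, 10, 20) == (1, 6)
```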

message OperationMetadata

operations.proto:35

Metadata used across all long-running operations returned by the AutoML API.

message OutputConfig

io.proto:673

Output configuration for ExportData Action.

* For Translation: CSV file `translation.csv`, with each line in format: ML_USE,GCS_FILE_PATH GCS_FILE_PATH leads to a .TSV file which describes examples that have the given ML_USE, using the following row format per line: TEXT_SNIPPET (in source language) \t TEXT_SNIPPET (in target language)

* For Tables: Output depends on whether the dataset was imported from GCS or BigQuery. GCS case: [gcs_destination][google.cloud.automl.v1beta1.OutputConfig.gcs_destination] must be set. Exported are CSV file(s) `tables_1.csv`, `tables_2.csv`,...,`tables_N.csv`, each having as its header line the table's column names, while all other lines contain values for the header columns. BigQuery case: [bigquery_destination][google.cloud.automl.v1beta1.OutputConfig.bigquery_destination] pointing to a BigQuery project must be set. In the given project a new dataset will be created with name `export_data_<automl-dataset-display-name>_<timestamp-of-export-call>`, where <automl-dataset-display-name> will be made BigQuery-dataset-name compatible (e.g. most special characters will become underscores), and the timestamp will be in YYYY_MM_DDThh_mm_ss_sssZ "based on ISO-8601" format. In that dataset a new table called `primary_table` will be created and filled with precisely the same data as that obtained on import.

Used in: ExportDataRequest
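
A sketch of an ExportData call targeting GCS, assuming the google-cloud-automl Python client (the dataset name and bucket are hypothetical):

```python
from google.cloud import automl_v1beta1

client = automl_v1beta1.AutoMlClient()
dataset_name = "projects/my-project/locations/us-central1/datasets/TBL1234567890"  # hypothetical

# Exported files land under this GCS prefix.
output_config = {"gcs_destination": {"output_uri_prefix": "gs://my-bucket/export/"}}
client.export_data(name=dataset_name, output_config=output_config).result()
```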

message RegressionEvaluationMetrics

regression.proto:29

Metrics for regression problems.

Used in: ModelEvaluation

message Row

data_items.proto:195

A representation of a row in a relational table.

Used in: ExamplePayload

message StringStats

data_stats.proto:98

The data statistics of a series of STRING values.

Used in: DataStats

message StringStats.UnigramStats

data_stats.proto:100

The statistics of a unigram.

Used in: StringStats

message StructStats

data_stats.proto:138

The data statistics of a series of STRUCT values.

Used in: DataStats

message StructType

data_types.proto:101

`StructType` defines the DataType-s of a [STRUCT][google.cloud.automl.v1beta1.TypeCode.STRUCT] type.

Used in: DataType

message TableSpec

table_spec.proto:38

A specification of a relational table. The table's schema is represented via its child column specs. It is pre-populated as part of ImportData by the schema inference algorithm, whose version is a required parameter of the ImportData InputConfig. Note: While working with a table, at times the schema may be inconsistent with the data in the table (e.g. a string in a FLOAT64 column). The consistency validation is done upon creation of a model. Used by: * Tables

Used as response type in: AutoMl.GetTableSpec, AutoMl.UpdateTableSpec

Used as field type in: ListTableSpecsResponse, UpdateTableSpecRequest
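
A sketch that inspects the inferred schema by listing table and column specs, assuming the google-cloud-automl Python client (the dataset name is hypothetical):

```python
from google.cloud import automl_v1beta1

client = automl_v1beta1.AutoMlClient()
dataset_name = "projects/my-project/locations/us-central1/datasets/TBL1234567890"  # hypothetical

for table_spec in client.list_table_specs(parent=dataset_name):
    for column_spec in client.list_column_specs(parent=table_spec.name):
        # Each ColumnSpec carries a DataType with a TypeCode (e.g. FLOAT64).
        print(column_spec.display_name, column_spec.data_type.type_code)
```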

message TablesAnnotation

tables.proto:213

Contains annotation details specific to Tables.

Used in: AnnotationPayload

message TablesDatasetMetadata

tables.proto:37

Metadata for a dataset used for AutoML Tables.

Used in: Dataset

message TablesModelColumnInfo

tables.proto:256

Information specific to a given column and Tables Model, in the context of the Model and the predictions created by it.

Used in: TablesAnnotation, TablesModelMetadata

message TablesModelMetadata

tables.proto:105

Model metadata specific to AutoML Tables.

Used in: Model

message TextClassificationDatasetMetadata

text.proto:31

Dataset metadata for classification.

Used in: Dataset

message TextClassificationModelMetadata

text.proto:37

Model metadata that is specific to text classification.

Used in: Model

message TextExtractionAnnotation

text_extraction.proto:30

Annotation for identifying spans of text.

Used in: AnnotationPayload

message TextExtractionDatasetMetadata

text.proto:43

Dataset metadata that is specific to text extraction.

Used in: Dataset

(message has no fields)

message TextExtractionEvaluationMetrics

text_extraction.proto:45

Model evaluation metrics for text extraction problems.

Used in: ModelEvaluation

message TextExtractionEvaluationMetrics.ConfidenceMetricsEntry

text_extraction.proto:47

Metrics for a single confidence threshold.

Used in: TextExtractionEvaluationMetrics

message TextExtractionModelMetadata

text.proto:46

Model metadata that is specific to text extraction.

Used in: Model

(message has no fields)

message TextSegment

text_segment.proto:30

A contiguous part of a text (string), assuming it has a UTF-8 NFC encoding.

Used in: Document.Layout, TextExtractionAnnotation

message TextSentimentAnnotation

text_sentiment.proto:30

Contains annotation details specific to text sentiment.

Used in: AnnotationPayload

message TextSentimentDatasetMetadata

text.proto:49

Dataset metadata for text sentiment.

Used in: Dataset

message TextSentimentEvaluationMetrics

text_sentiment.proto:47

Model evaluation metrics for text sentiment problems.

Used in: ModelEvaluation

message TextSentimentModelMetadata

text.proto:59

Model metadata that is specific to text sentiment.

Used in: Model

(message has no fields)

message TextSnippet

data_items.proto:58

A representation of a text snippet.

Used in: Document, ExamplePayload, TranslationAnnotation

message TimeSegment

temporal.proto:30

A time period inside of an example that has a time dimension (e.g. video).

Used in: VideoClassificationAnnotation

message TimestampStats

data_stats.proto:114

The data statistics of a series of TIMESTAMP values.

Used in: DataStats

message TimestampStats.GranularStats

data_stats.proto:116

Stats split by a granularity defined in context.

Used in: TimestampStats

message TranslationAnnotation

translation.proto:66

Annotation details specific to translation.

Used in: AnnotationPayload

message TranslationDatasetMetadata

translation.proto:31

Dataset metadata that is specific to translation.

Used in: Dataset

message TranslationEvaluationMetrics

translation.proto:40

Evaluation metrics for the dataset.

Used in: ModelEvaluation

message TranslationModelMetadata

translation.proto:49

Model metadata that is specific to translation.

Used in: Model

enum TypeCode

data_types.proto:30

`TypeCode` is used as a part of [DataType][google.cloud.automl.v1beta1.DataType].

Used in: DataType

message UndeployModelOperationMetadata

operations.proto:92

Details of UndeployModel operation.

Used in: OperationMetadata

(message has no fields)

message VideoClassificationAnnotation

classification.proto:52

Contains annotation details specific to video classification.

Used in: AnnotationPayload

message VideoClassificationDatasetMetadata

video.proto:32

Dataset metadata specific to video classification. All Video Classification datasets are treated as multi label.

Used in: Dataset

(message has no fields)

message VideoClassificationModelMetadata

video.proto:42

Model metadata specific to video classification.

Used in: Model

(message has no fields)

message VideoObjectTrackingAnnotation

detection.proto:41

Annotation details for video object tracking.

Used in: AnnotationPayload

message VideoObjectTrackingDatasetMetadata

video.proto:37

Dataset metadata specific to video object tracking.

Used in: Dataset

(message has no fields)

message VideoObjectTrackingEvaluationMetrics

detection.proto:119

Model evaluation metrics for video object tracking problems. Evaluates prediction quality of both labeled bounding boxes and labeled tracks (i.e. series of bounding boxes sharing same label and instance ID).

Used in: ModelEvaluation

message VideoObjectTrackingModelMetadata

video.proto:47

Model metadata specific to video object tracking.

Used in: Model

(message has no fields)