Creates a dataset. On success, returns a Dataset resource.
Request message for CreateDataset.
Required. Dataset resource parent, format: projects/{project_id}
Required. The dataset to be created.
Gets dataset by resource name.
Request message for GetDataset.
Required. Dataset resource name, format: projects/{project_id}/datasets/{dataset_id}
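Resource names throughout this service are plain strings with a fixed shape. As a concrete illustration, a small helper (hypothetical, not part of any client library) can build and parse the dataset form shown above:

```python
def dataset_name(project_id: str, dataset_id: str) -> str:
    """Build a Dataset resource name from its components."""
    return f"projects/{project_id}/datasets/{dataset_id}"

def parse_dataset_name(name: str) -> dict:
    """Split a Dataset resource name back into its components."""
    parts = name.split("/")
    if len(parts) != 4 or parts[0] != "projects" or parts[2] != "datasets":
        raise ValueError(f"not a dataset resource name: {name!r}")
    return {"project_id": parts[1], "dataset_id": parts[3]}
```

The same pattern extends to the nested names used below (annotated datasets, data items, examples) by appending further `{collection}/{id}` segments.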
Lists datasets under a project. Pagination is supported.
Request message for ListDatasets.
Required. Dataset resource parent, format: projects/{project_id}
Optional. Filtering on datasets is not supported at this moment.
Optional. Requested page size. Server may return fewer results than requested. Default value is 100.
Optional. A token identifying a page of results for the server to return. Typically obtained from [ListDatasetsResponse.next_page_token][google.cloud.datalabeling.v1beta1.ListDatasetsResponse.next_page_token] of the previous [DataLabelingService.ListDatasets] call. Returns the first page if empty.
Results of listing datasets within a project.
The list of datasets to return.
A token to retrieve the next page of results.
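The page_size/page_token fields above follow the standard pagination pattern: feed each response's next_page_token into the next request until it comes back empty. A sketch against a hypothetical client (the method name and dict-shaped response are assumptions, not the actual library surface):

```python
def list_all_datasets(client, parent: str, page_size: int = 100):
    """Collect every dataset by following next_page_token until it is empty."""
    datasets, page_token = [], ""
    while True:
        resp = client.list_datasets(parent=parent, page_size=page_size,
                                    page_token=page_token)
        datasets.extend(resp["datasets"])
        page_token = resp.get("next_page_token", "")
        if not page_token:  # an empty token means this was the last page
            return datasets
```

The same loop works for every List/Search method in this service.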
Deletes a dataset by resource name.
Request message for DeleteDataset.
Required. Dataset resource name, format: projects/{project_id}/datasets/{dataset_id}
Imports data into a dataset based on the source locations defined in the request. It can be called multiple times for the same dataset, but each dataset can have only one long-running operation running on it at a time. For example, no labeling task (also a long-running operation) can be started while an import is still in progress, and vice versa.
Request message for ImportData API.
Required. Dataset resource name, format: projects/{project_id}/datasets/{dataset_id}
Required. Specify the input source of the data.
Email of the user who started the import task and should be notified by email. If empty, no notification will be sent.
Exports data and annotations from dataset.
Request message for ExportData API.
Required. Dataset resource name, format: projects/{project_id}/datasets/{dataset_id}
Required. Annotated dataset resource name. DataItems in the Dataset and their annotations in the specified annotated dataset will be exported. It is in the format projects/{project_id}/datasets/{dataset_id}/annotatedDatasets/{annotated_dataset_id}
Optional. Filter is not supported at this moment.
Required. Specify the output destination.
Email of the user who started the export task and should be notified by email. If empty, no notification will be sent.
Gets a data item in a dataset by resource name. This API can be called after data has been imported into the dataset.
Request message for GetDataItem.
Required. The name of the data item to get, format: projects/{project_id}/datasets/{dataset_id}/dataItems/{data_item_id}
Lists data items in a dataset. This API can be called after data has been imported into the dataset. Pagination is supported.
Request message for ListDataItems.
Required. Name of the dataset to list data items, format: projects/{project_id}/datasets/{dataset_id}
Optional. Filter is not supported at this moment.
Optional. Requested page size. Server may return fewer results than requested. Default value is 100.
Optional. A token identifying a page of results for the server to return. Typically obtained from [ListDataItemsResponse.next_page_token][google.cloud.datalabeling.v1beta1.ListDataItemsResponse.next_page_token] of the previous [DataLabelingService.ListDataItems] call. Returns the first page if empty.
Results of listing data items in a dataset.
The list of data items to return.
A token to retrieve the next page of results.
Gets an annotated dataset by resource name.
Request message for GetAnnotatedDataset.
Required. Name of the annotated dataset to get, format: projects/{project_id}/datasets/{dataset_id}/annotatedDatasets/{annotated_dataset_id}
Lists annotated datasets for a dataset. Pagination is supported.
Request message for ListAnnotatedDatasets.
Required. Name of the dataset to list annotated datasets, format: projects/{project_id}/datasets/{dataset_id}
Optional. Filter is not supported at this moment.
Optional. Requested page size. Server may return fewer results than requested. Default value is 100.
Optional. A token identifying a page of results for the server to return. Typically obtained from [ListAnnotatedDatasetsResponse.next_page_token][google.cloud.datalabeling.v1beta1.ListAnnotatedDatasetsResponse.next_page_token] of the previous [DataLabelingService.ListAnnotatedDatasets] call. Returns the first page if empty.
Results of listing annotated datasets for a dataset.
The list of annotated datasets to return.
A token to retrieve the next page of results.
Deletes an annotated dataset by resource name.
Request message for DeleteAnnotatedDataset.
Required. Name of the annotated dataset to delete, format: projects/{project_id}/datasets/{dataset_id}/annotatedDatasets/{annotated_dataset_id}
Starts a labeling task for images. The type of image labeling task is configured by the feature in the request.
Request message for starting an image labeling task.
Required. Config for labeling tasks. The type of request config must match the selected feature.
Configuration for image classification task. One of image_classification_config, bounding_poly_config, polyline_config and segmentation_config is required.
Configuration for bounding box and bounding poly task. One of image_classification_config, bounding_poly_config, polyline_config and segmentation_config is required.
Configuration for polyline task. One of image_classification_config, bounding_poly_config, polyline_config and segmentation_config is required.
Configuration for segmentation task. One of image_classification_config, bounding_poly_config, polyline_config and segmentation_config is required.
Required. Name of the dataset to request labeling task, format: projects/{project_id}/datasets/{dataset_id}
Required. Basic human annotation config.
Required. The type of image labeling task.
Starts a labeling task for video. The type of video labeling task is configured by the feature in the request.
Request message for LabelVideo.
Required. Config for labeling tasks. The type of request config must match the selected feature.
Configuration for video classification task. One of video_classification_config, object_detection_config, object_tracking_config and event_config is required.
Configuration for video object detection task. One of video_classification_config, object_detection_config, object_tracking_config and event_config is required.
Configuration for video object tracking task. One of video_classification_config, object_detection_config, object_tracking_config and event_config is required.
Configuration for video event task. One of video_classification_config, object_detection_config, object_tracking_config and event_config is required.
Required. Name of the dataset to request labeling task, format: projects/{project_id}/datasets/{dataset_id}
Required. Basic human annotation config.
Required. The type of video labeling task.
Starts a labeling task for text. The type of text labeling task is configured by the feature in the request.
Request message for LabelText.
Required. Config for labeling tasks. The type of request config must match the selected feature.
Configuration for text classification task. One of text_classification_config and text_entity_extraction_config is required.
Configuration for entity extraction task. One of text_classification_config and text_entity_extraction_config is required.
Required. Name of the dataset to request labeling task, format: projects/{project_id}/datasets/{dataset_id}
Required. Basic human annotation config.
Required. The type of text labeling task.
Gets an example by resource name, including both data and annotation.
Request message for GetExample.
Required. Name of example, format: projects/{project_id}/datasets/{dataset_id}/annotatedDatasets/{annotated_dataset_id}/examples/{example_id}
Optional. An expression for filtering Examples. Filtering by annotation_spec.display_name is supported. Format: "annotation_spec.display_name = {display_name}"
Lists examples in an annotated dataset. Pagination is supported.
Request message for ListExamples.
Required. Example resource parent.
Optional. An expression for filtering Examples. For annotated datasets that have an annotation spec set, filtering by annotation_spec.display_name is supported. Format: "annotation_spec.display_name = {display_name}"
Optional. Requested page size. Server may return fewer results than requested. Default value is 100.
Optional. A token identifying a page of results for the server to return. Typically obtained from [ListExamplesResponse.next_page_token][google.cloud.datalabeling.v1beta1.ListExamplesResponse.next_page_token] of the previous [DataLabelingService.ListExamples] call. Returns the first page if empty.
Results of listing Examples in an annotated dataset.
The list of examples to return.
A token to retrieve the next page of results.
Creates an annotation spec set by providing a set of labels.
Request message for CreateAnnotationSpecSet.
Required. AnnotationSpecSet resource parent, format: projects/{project_id}
Required. Annotation spec set to create. Annotation specs must be included. Only one annotation spec will be accepted for annotation specs with the same display_name.
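The acceptance rule above (one spec per display_name) can be mirrored client-side before sending the request. A minimal sketch; keeping the first duplicate is an assumption, since the docs don't say which one the server accepts:

```python
def dedupe_specs(specs):
    """Keep one spec per display_name (first occurrence wins)."""
    seen, out = set(), []
    for spec in specs:
        name = spec["display_name"]
        if name not in seen:
            seen.add(name)
            out.append(spec)
    return out
```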
Gets an annotation spec set by resource name.
Request message for GetAnnotationSpecSet.
Required. AnnotationSpecSet resource name, format: projects/{project_id}/annotationSpecSets/{annotation_spec_set_id}
Lists annotation spec sets for a project. Pagination is supported.
Request message for ListAnnotationSpecSets.
Required. Parent of AnnotationSpecSet resource, format: projects/{project_id}
Optional. Filter is not supported at this moment.
Optional. Requested page size. Server may return fewer results than requested. Default value is 100.
Optional. A token identifying a page of results for the server to return. Typically obtained from [ListAnnotationSpecSetsResponse.next_page_token][google.cloud.datalabeling.v1beta1.ListAnnotationSpecSetsResponse.next_page_token] of the previous [DataLabelingService.ListAnnotationSpecSets] call. Returns the first page if empty.
Results of listing annotation spec sets under a project.
The list of annotation spec sets.
A token to retrieve the next page of results.
Deletes an annotation spec set by resource name.
Request message for DeleteAnnotationSpecSet.
Required. AnnotationSpecSet resource name, format: `projects/{project_id}/annotationSpecSets/{annotation_spec_set_id}`.
Creates an instruction for how data should be labeled.
Request message for CreateInstruction.
Required. Instruction resource parent, format: projects/{project_id}
Required. Instruction of how to perform the labeling task.
Gets an instruction by resource name.
Request message for GetInstruction.
Required. Instruction resource name, format: projects/{project_id}/instructions/{instruction_id}
Lists instructions for a project. Pagination is supported.
Request message for ListInstructions.
Required. Instruction resource parent, format: projects/{project_id}
Optional. Filter is not supported at this moment.
Optional. Requested page size. Server may return fewer results than requested. Default value is 100.
Optional. A token identifying a page of results for the server to return. Typically obtained from [ListInstructionsResponse.next_page_token][google.cloud.datalabeling.v1beta1.ListInstructionsResponse.next_page_token] of the previous [DataLabelingService.ListInstructions] call. Returns the first page if empty.
Results of listing instructions under a project.
The list of Instructions to return.
A token to retrieve the next page of results.
Deletes an instruction object by resource name.
Request message for DeleteInstruction.
Required. Instruction resource name, format: projects/{project_id}/instructions/{instruction_id}
Gets an evaluation by resource name.
Request message for GetEvaluation.
Required. Name of the evaluation. Format: 'projects/{project_id}/datasets/{dataset_id}/evaluations/{evaluation_id}'
Searches evaluations within a project. Supported filters: evaluation_job, evaluation_time.
Request message for SearchEvaluations.
Required. Evaluation search parent. Format: projects/{project_id}
Optional. Supports filtering by model ID, job state, and start and end time. Format: "evaluation_job.evaluation_job_id = {evaluation_job_id} AND evaluation_job.evaluation_job_run_time_start = {timestamp} AND evaluation_job.evaluation_job_run_time_end = {timestamp} AND annotation_spec.display_name = {display_name}"
Optional. Requested page size. Server may return fewer results than requested. Default value is 100.
Optional. A token identifying a page of results for the server to return. Typically obtained from [SearchEvaluationsResponse.next_page_token][google.cloud.datalabeling.v1beta1.SearchEvaluationsResponse.next_page_token] of the previous [DataLabelingService.SearchEvaluations] call. Returns the first page if empty.
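Filter expressions like the one above are plain strings of `field = value` clauses joined with AND. A tiny builder (a hypothetical helper, not part of the API surface) makes that explicit:

```python
def build_filter(clauses: dict) -> str:
    """Join field = value clauses with AND, as in SearchEvaluations filters."""
    return " AND ".join(f"{field} = {value}" for field, value in clauses.items())
```

For example, `build_filter({"evaluation_job.evaluation_job_id": "job-1"})` yields a single-clause filter string.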
Results of searching evaluations.
The list of evaluations to return.
A token to retrieve the next page of results.
Searches example comparisons in an evaluation, returning examples of both ground truth and prediction(s). This is represented as a search with an evaluation ID.
Request message for SearchExampleComparisons.
Required. Name of the Evaluation resource to search example comparisons in. Format: projects/{project_id}/datasets/{dataset_id}/evaluations/{evaluation_id}
Optional. Requested page size. Server may return fewer results than requested. Default value is 100.
Optional. A token identifying a page of results for the server to return. Typically obtained from [SearchExampleComparisons.next_page_token][] of the previous [DataLabelingService.SearchExampleComparisons] call. Returns the first page if empty.
Results of searching example comparisons.
A token to retrieve the next page of results.
Creates an evaluation job.
Request message for CreateEvaluationJob.
Required. Evaluation job resource parent, format: projects/{project_id}.
Required. The evaluation job to create.
Updates an evaluation job.
Request message for UpdateEvaluationJob.
Required. Evaluation job that is going to be updated.
Optional. Mask for which field in evaluation_job should be updated.
Gets an evaluation job by resource name.
Request message for GetEvaluationJob.
Required. Name of the evaluation job. Format: 'projects/{project_id}/evaluationJobs/{evaluation_job_id}'
Pauses an evaluation job. Pausing an evaluation job that is already in the PAUSED state is a no-op.
Request message for PauseEvaluationJob.
Required. Name of the evaluation job that is going to be paused. Format: 'projects/{project_id}/evaluationJobs/{evaluation_job_id}'
Resumes a paused evaluation job. A deleted evaluation job can't be resumed. Resuming a running evaluation job is a no-op.
Request message for ResumeEvaluationJob.
Required. Name of the evaluation job that is going to be resumed. Format: 'projects/{project_id}/evaluationJobs/{evaluation_job_id}'
Stops and deletes an evaluation job.
Request message for DeleteEvaluationJob.
Required. Name of the evaluation job that is going to be deleted. Format: 'projects/{project_id}/evaluationJobs/{evaluation_job_id}'
Lists all evaluation jobs within a project with possible filters. Pagination is supported.
Request message for ListEvaluationJobs.
Required. Evaluation resource parent. Format: "projects/{project_id}"
Optional. Only filtering by model ID and job state is supported. Format: "evaluation_job.model_id = {model_id} AND evaluation_job.state = {EvaluationJob::State}"
Optional. Requested page size. Server may return fewer results than requested. Default value is 100.
Optional. A token identifying a page of results for the server to return. Typically obtained from [ListEvaluationJobs.next_page_token][] of the previous [DataLabelingService.ListEvaluationJobs] call. Returns the first page if empty.
Results for listing evaluation jobs.
The list of evaluation jobs to return.
A token to retrieve the next page of results.
AnnotatedDataset is a set holding annotations for data in a Dataset. Each labeling task generates an AnnotatedDataset under the Dataset the task is requested for.
Used as response type in: DataLabelingService.GetAnnotatedDataset
Used as field type in:
Output only. AnnotatedDataset resource name in the format: projects/{project_id}/datasets/{dataset_id}/annotatedDatasets/{annotated_dataset_id}
Output only. The display name of the AnnotatedDataset. It is specified in HumanAnnotationConfig when a user starts a labeling task. Maximum of 64 characters.
Output only. The description of the AnnotatedDataset. It is specified in HumanAnnotationConfig when a user starts a labeling task. Maximum of 10000 characters.
Output only. Source of the annotation.
Output only. Type of the annotation. It is specified when starting the labeling task.
Output only. Number of examples in the annotated dataset.
Output only. Number of examples that have annotation in the annotated dataset.
Output only. Per label statistics.
Output only. Time the AnnotatedDataset was created.
Output only. Additional information about AnnotatedDataset.
Output only. The names of any related resources that are blocking changes to the annotated dataset.
Metadata on AnnotatedDataset.
Used in:
Specific request configuration used when requesting the labeling task.
Configuration for image classification task.
Configuration for image bounding box and bounding poly task.
Configuration for image polyline task.
Configuration for image segmentation task.
Configuration for video classification task.
Configuration for video object detection task.
Configuration for video object tracking task.
Configuration for video event labeling task.
Configuration for text classification task.
Configuration for text entity extraction task.
HumanAnnotationConfig used when requesting the human labeling task for this AnnotatedDataset.
Annotation for Example. Each example may have one or more annotations. For example, in an image classification problem, each image might have one or more labels. The labels bound to an image are called an Annotation.
Used in:
Output only. Unique name of this annotation, format is: projects/{project_id}/datasets/{dataset_id}/annotatedDatasets/{annotated_dataset}/examples/{example_id}/annotations/{annotation_id}
Output only. The source of the annotation.
Output only. This is the actual annotation value; e.g. classification and bounding box values are stored here.
Output only. Annotation metadata, including information like votes for labels.
Output only. Sentiment for this annotation.
Additional information associated with the annotation.
Used in:
Metadata related to human labeling.
Used in:
This annotation describes the data negatively.
This label describes the data positively.
Specifies where the answer is from.
Used in:
Answer is provided by a human contributor.
Container of information related to one annotation spec.
Used in:
Required. The display name of the AnnotationSpec. Maximum of 64 characters.
Optional. User-provided description of the annotation specification. The description can be up to 10000 characters long.
AnnotationSpecSet is a collection of label definitions. For example, in image classification tasks, we define a set of labels; this set is called an AnnotationSpecSet. An AnnotationSpecSet is immutable upon creation.
Used as response type in: DataLabelingService.CreateAnnotationSpecSet, DataLabelingService.GetAnnotationSpecSet
Used as field type in:
Output only. AnnotationSpecSet resource name, format: projects/{project_id}/annotationSpecSets/{annotation_spec_set_id}
Required. The display name for the AnnotationSpecSet defined by the user. Maximum of 64 characters.
Optional. User-provided description of the annotation specification set. The description can be up to 10000 characters long.
Required. The actual spec set defined by the users.
Output only. The names of any related resources that are blocking changes to the annotation spec set.
Used in:
Classification annotations in an image.
Bounding box annotations in an image.
Oriented bounding box. The box does not have to be parallel to the horizontal axis.
Bounding poly annotations in an image.
Polyline annotations in an image.
Segmentation annotations in an image.
Classification annotations in video shots.
Video object tracking annotation.
Video object detection annotation.
Video event annotation.
Classification for text.
Entity extraction for text.
General classification.
Annotation value for an example.
Used in:
Annotation value for image classification case.
Annotation value for image bounding box, oriented bounding box and polygon cases.
Annotation value for image polyline cases. Polyline here is different from BoundingPoly: it is formed by line segments connected to each other, but it is not a closed form like a bounding poly. The line segments can cross each other.
Annotation value for image segmentation.
Annotation value for text classification case.
Annotation value for text entity extraction case.
Annotation value for video classification case.
Annotation value for video object detection and tracking case.
Annotation value for video event case.
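The open-vs-closed distinction above between a polyline and a bounding poly can be made concrete with a small helper (hypothetical, not part of the API) that closes a vertex ring:

```python
def close_polygon(vertices):
    """Return a closed ring: append the first vertex if the path is open.

    A polyline stays as-is if already closed; a bounding poly is always
    treated as a closed ring, whether or not the last vertex repeats the first.
    """
    if vertices and vertices[0] != vertices[-1]:
        return list(vertices) + [vertices[0]]
    return list(vertices)
```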
Records a failed attempt.
Used in:
The BigQuery location for the input content.
Used in:
Required. BigQuery URI to a table, up to 2000 characters long. Accepted form: BigQuery path, e.g. bq://projectId.bqDatasetId.bqTableId
Options regarding evaluation between bounding boxes.
Used in:
Minimum IoU required to consider two bounding boxes matched.
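To make the matching criterion concrete, here is a minimal intersection-over-union computation. The (xmin, ymin, xmax, ymax) axis-aligned box format is an assumption for illustration; the service's oriented bounding boxes would need a polygon intersection instead:

```python
def iou(a, b):
    """IoU of two axis-aligned boxes given as (xmin, ymin, xmax, ymax)."""
    ix = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))  # intersection width
    iy = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))  # intersection height
    inter = ix * iy
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0
```

Two boxes count as matched when `iou(a, b)` meets or exceeds the configured threshold.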
A bounding polygon in the image.
Used in:
The bounding polygon vertices.
Config for image bounding poly (and bounding box) human labeling task.
Used in:
Required. Annotation spec set resource name.
Optional. Instruction message shown on the contributors' UI.
Metadata for classification annotations.
Used in:
Whether the classification task is multi-label or not.
Used in:
Precision-recall curve.
Confusion matrix of the model running the classification. Not applicable when label filtering is specified in evaluation option.
Used in:
Used in:
The predicted annotation spec.
Number of items being predicted as this label.
A row in the confusion matrix.
Used in:
The original annotation spec of this row.
Info describing predicted label distribution.
Metadata of a CreateInstruction operation.
The name of the created Instruction. projects/{project_id}/instructions/{instruction_id}
Partial failures encountered, e.g. single files that couldn't be read. The status details field will contain standard GCP error details.
Timestamp when the create instruction request was created.
Instruction from a CSV file.
Used in:
CSV file for the instruction. Only a Cloud Storage (gs://) path is allowed.
DataItem is a piece of data, without annotation. For example, an image.
Used as response type in: DataLabelingService.GetDataItem
Used as field type in:
Output only.
The image payload, a container of the image bytes/uri.
The text payload, a container of text content.
The video payload, a container of the video uri.
Output only. Name of the data item, in format of: projects/{project_id}/datasets/{dataset_id}/dataItems/{data_item_id}
Used in:
Dataset is the resource to hold your data. You can request multiple labeling tasks for a dataset; each one will generate an AnnotatedDataset.
Used as response type in: DataLabelingService.CreateDataset, DataLabelingService.GetDataset
Used as field type in:
Output only. Dataset resource name, format: projects/{project_id}/datasets/{dataset_id}
Required. The display name of the dataset. Maximum of 64 characters.
Optional. User-provided description of the dataset. The description can be up to 10000 characters long.
Output only. Time the dataset was created.
Output only. This is populated with the original input configs with which ImportData was called. It is available only after clients import data into this dataset.
Output only. The names of any related resources that are blocking changes to the dataset.
Output only. The number of data items in the dataset.
Describes an evaluation between two annotated datasets. Created by an evaluation plan.
Used as response type in: DataLabelingService.GetEvaluation
Used as field type in:
Resource name of an evaluation. Format: 'projects/{project_id}/datasets/{dataset_id}/evaluations/{evaluation_id}'
Options used in evaluation plan for creating the evaluation.
Output only. Timestamp when the evaluation plan triggered this evaluation flow.
Output only. Timestamp when this model evaluation was created.
Output only. Metrics of the evaluation.
Type of the annotation to compute metrics for in the ground truth and annotation labeled dataset. Required for creation.
Output only. Count of items in the ground truth dataset included in this evaluation. Will be unset if the annotation type is not applicable.
Used in:
Vertical-specific options for general metrics.
Defines an evaluation job that is triggered periodically to generate evaluations.
Used as response type in: DataLabelingService.CreateEvaluationJob, DataLabelingService.GetEvaluationJob, DataLabelingService.UpdateEvaluationJob
Used as field type in:
Format: 'projects/{project_id}/evaluationJobs/{evaluation_job_id}'
Description of the job. The description can be up to 25000 characters long.
Describes the schedule on which the job will be executed. Minimum schedule unit is 1 day. The schedule can be either of the following types: * [Crontab](http://en.wikipedia.org/wiki/Cron#Overview) * English-like [schedule](https://cloud.google.com/scheduler/docs/configuring/cron-job-schedules)
The versioned model that is being evaluated here. Only one job is allowed for each model name. Format: 'projects/*/models/*/versions/*'
Detailed config for running this eval job.
Name of the AnnotationSpecSet.
Whether a human annotation should be requested when some data doesn't have ground truth.
Output only. Any attempts with errors occurring during evaluation job runs will be recorded here incrementally.
Timestamp when this evaluation job was created.
State of the job.
Used in:
Used in:
Required. Email of the user who will be receiving the alert.
If a single evaluation run's aggregate mean average precision is lower than this threshold, the alert will be triggered.
Used in:
Config specific to different supported human annotation use cases.
Input config for data; gcs_source in the config will be the root path for the data. Data should be organized chronologically under that path.
Config used to create evaluation.
Mappings between reserved keys for BigQuery import and customized tensor names. The key is the reserved key, the value is the tensor name in the BigQuery table. Different annotation types have different required key mappings. See the user manual for more details: https://docs.google.com/document/d/1bg1meMIBGY9I5QEoFoHSX6u9LsZQYBSmPt6E9SxqHZc/edit#heading=h.tfyjhxhvsqem
Max number of examples to collect in each period.
Percentage of examples to collect in each period. 0.1 means 10% of total examples will be collected, and 0.0 means no collection.
Alert config for the evaluation job. The alert will be triggered when its criteria are met.
Used in:
Common metrics covering the most general cases.
Config for video event human labeling task.
Used in:
Required. The list of annotation spec set resource names. Similar to video classification, we support selecting events from multiple AnnotationSpecSets at the same time.
An Example is a piece of data and its annotation. For example, an image with label "house".
Used as response type in: DataLabelingService.GetExample
Used as field type in:
Output only. The data part of Example.
The image payload, a container of the image bytes/uri.
The text payload, a container of the text content.
The video payload, a container of the video uri.
Output only. Name of the example, in the format: projects/{project_id}/datasets/{dataset_id}/annotatedDatasets/{annotated_dataset_id}/examples/{example_id}
Output only. Annotations for the piece of data in Example. One piece of data can have multiple annotations.
Metadata of an ExportData operation.
Output only. The name of the dataset to be exported. "projects/*/datasets/*/Datasets/*"
Output only. Partial failures encountered, e.g. single files that couldn't be read. The status details field will contain standard GCP error details.
Output only. Timestamp when the export dataset request was created.
Response used for the ExportDataset long-running operation.
Output only. The name of the dataset. "projects/*/datasets/*/Datasets/*"
Output only. Total number of examples requested to export.
Output only. Number of examples exported successfully.
Output only. Statistics of labels in the exported dataset.
Output only. output_config in the ExportData request.
Export destination of the data. Only a Cloud Storage (gs://) path is allowed in output_uri.
Used in:
Required. The output URI of the destination file.
Required. The format of the Cloud Storage destination. Only "text/csv" and "application/json" are supported.
Export folder destination of the data.
Used in:
Required. Cloud Storage directory to export data to.
Source of the Cloud Storage file to be imported.
Used in:
Required. The input URI of the source file. This must be a Cloud Storage path (`gs://...`).
Required. The format of the source file. Only "text/csv" is supported.
Configuration for how human labeling task should be done.
Used in:
Required except for the LabelAudio case. Instruction resource name.
Required. A human-readable name for the AnnotatedDataset defined by users. Maximum of 64 characters.
Optional. A human-readable description for the AnnotatedDataset. The description can be up to 10000 characters long.
Optional. A human-readable label used to logically group labeling tasks. This string must match the regular expression `[a-zA-Z\\d_-]{0,128}`.
Optional. The language of this question, as a [BCP-47](https://www.rfc-editor.org/rfc/bcp/bcp47.txt) code. Default value is en-US. This only needs to be set when the task is language related, for example French text classification or Chinese audio transcription.
Optional. Replication of questions. Each question will be sent to up to this number of contributors to label; aggregated answers will be returned. Default is 1. For image-related labeling, valid values are 1, 3, and 5.
Optional. Maximum duration for contributors to answer a question. Default is 1800 seconds.
Optional. If you want your own labeling contributors to manage and work on this labeling request, you can set those contributors here. They will be given access to the question types in CrowdCompute. Note that these emails must be registered in the CrowdCompute worker UI: https://crowd-compute.appspot.com/
Email of the user who started the labeling task and should be notified by email. If empty, no notification will be sent.
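The labeling-group pattern above can be checked locally before submitting a config. The double backslash in the proto comment is just escaping; the effective pattern is `[a-zA-Z\d_-]{0,128}`:

```python
import re

# Pattern taken from the HumanAnnotationConfig label-group field doc.
LABEL_GROUP_RE = re.compile(r"[a-zA-Z\d_-]{0,128}")

def valid_label_group(s: str) -> bool:
    """Check a label-group string against the documented regular expression."""
    return LABEL_GROUP_RE.fullmatch(s) is not None
```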
Image bounding poly annotation. It represents a polygon (including a bounding box) in the image.
Used in:
The region of the polygon. If it is a bounding box, it is guaranteed to have four points.
Label of object in this bounding polygon.
Image classification annotation definition.
Used in:
Label of image.
Config for image classification human labeling task.
Used in:
Required. Annotation spec set resource name.
Optional. If allow_multi_label is true, contributors are able to choose multiple labels for one image.
Optional. The type of how to aggregate answers.
Container of information about an image.
Used in:
Image format.
A byte string of a thumbnail image.
Image uri from the user bucket.
Signed uri of the image file in the service bucket.
A polyline for the image annotation.
Used in:
Label of this polyline.
Image segmentation annotation.
Used in:
The mapping between RGB color and annotation spec. The key is the RGB color represented in the format rgb(0, 0, 0). The value is the AnnotationSpec.
Image format.
A byte string of a full image's color map.
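The rgb(0, 0, 0) key format used in the segmentation color map above can be parsed into a tuple for lookup. A hypothetical helper:

```python
import re

def parse_rgb_key(key: str):
    """Parse a segmentation color key like 'rgb(255, 0, 0)' into (r, g, b)."""
    m = re.fullmatch(r"rgb\((\d+),\s*(\d+),\s*(\d+)\)", key.strip())
    if not m:
        raise ValueError(f"bad rgb key: {key!r}")
    return tuple(int(g) for g in m.groups())
```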
Metadata of an ImportData operation.
Output only. The name of the imported dataset. "projects/*/datasets/*"
Output only. Partial failures encountered, e.g. single files that couldn't be read. The status details field will contain standard GCP error details.
Output only. Timestamp when the import dataset request was created.
Response used for ImportData longrunning operation.
Output only. The name of the imported dataset.
Output only. Total number of examples requested to import.
Output only. Number of examples imported successfully.
The configuration of input data, including data type, location, etc.
Used in:
Optional. The metadata associated with each data type.
Required for text import, as language code must be specified.
Required. Where the data is from.
Source located in Cloud Storage.
Required. Data type must be specified when the user tries to import data.
Optional. If the input contains annotations, the user needs to specify the type and metadata of the annotations when creating it as an annotated dataset.
Optional. Metadata about annotations in the input. Each annotation type may have different metadata. Metadata for the classification problem.
Instruction of how to perform the labeling task for human operators. Currently two types of instruction are supported - CSV file and PDF. One of the two instruction types must be provided. CSV files are only supported for image classification tasks. Instructions for other tasks should be provided as PDF. For image classification, CSV and PDF can be provided at the same time.
Used as response type in: DataLabelingService.GetInstruction
Used as field type in:
Output only. Instruction resource name, format: projects/{project_id}/instructions/{instruction_id}
Required. The display name of the instruction. Maximum of 64 characters.
Optional. User-provided description of the instruction. The description can be up to 10000 characters long.
Output only. Creation time of instruction.
Output only. Last update time of instruction.
Required. The data type of this instruction.
One of CSV or PDF instruction is required. Instruction from a CSV file, such as for a classification task. The CSV file should have exactly two columns, in the following format: * The first column is the labeled data, such as an image reference or text. * The second column is comma-separated labels associated with the data.
One of CSV or PDF instruction is required. Instruction from a PDF document. The PDF should be in a Cloud Storage bucket.
Output only. The names of any related resources that are blocking changes to the instruction.
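The two-column CSV layout described above can be produced with the standard csv module (a sketch; the bucket paths and labels are illustrative):

```python
import csv
import io

# Build an instruction CSV in memory: the first column is the labeled data
# (e.g. an image reference), the second is comma-separated labels. The csv
# writer quotes the second field automatically because it contains commas.
buffer = io.StringIO()
writer = csv.writer(buffer)
writer.writerow(["gs://my-bucket/image1.jpg", "cat,animal"])
writer.writerow(["gs://my-bucket/image2.jpg", "dog"])

# Reading it back yields exactly two columns per row.
rows = list(csv.reader(io.StringIO(buffer.getvalue())))
assert all(len(row) == 2 for row in rows)
print(rows[0][1].split(","))  # ['cat', 'animal']
```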
Details of a LabelImageBoundingBox operation metadata.
Used in:
Basic human annotation config used in labeling request.
Details of LabelImageBoundingPoly operation metadata.
Used in:
Basic human annotation config used in labeling request.
Metadata of a LabelImageClassification operation.
Used in:
Basic human annotation config used in labeling request.
Details of a LabelImageOrientedBoundingBox operation metadata.
Used in:
Basic human annotation config.
Details of LabelImagePolyline operation metadata.
Used in:
Basic human annotation config used in labeling request.
Image labeling task feature.
Used in:
Label the whole image with one or more labels.
Label image with bounding boxes for labels.
Label oriented bounding box. The box does not have to be parallel to the horizontal line.
Label images with bounding poly. A bounding poly is a plane figure that is bounded by a finite chain of straight line segments closing in a loop.
Label images with polyline. A polyline is formed by connected line segments that do not form a closed loop.
Label images with segmentation. Segmentation differs from bounding poly in that it is a more fine-grained, pixel-level annotation.
Details of a LabelImageSegmentation operation metadata.
Used in:
Basic human annotation config.
Metadata of a labeling operation, such as LabelImage or LabelVideo.
Output only. Details of the specific label operation.
Details of label image classification operation.
Details of label image bounding box operation.
Details of label image bounding poly operation.
Details of label image oriented bounding box operation.
Details of label image polyline operation.
Details of label image segmentation operation.
Details of label video classification operation.
Details of label video object detection operation.
Details of label video object tracking operation.
Details of label video event operation.
Details of label text classification operation.
Details of label text entity extraction operation.
Output only. Progress of label operation. Range: [0, 100].
Output only. Partial failures encountered. E.g. single files that couldn't be read. Status details field will contain standard GCP error details.
Output only. Timestamp when labeling request was created.
Statistics about annotation specs.
Used in:
Map of each annotation spec's example count. Key is the annotation spec name and value is the number of examples for that annotation spec. If the annotated dataset does not have an annotation spec, the map will return a pair where the key is an empty string and the value is the total number of annotations.
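Such a per-spec example count map, including the empty-string fallback key for examples without an annotation spec, can be sketched in plain Python (hypothetical spec names, not the API):

```python
from collections import Counter

# Hypothetical examples: each maps to its annotation spec name, or None
# when the example has no annotation spec.
example_specs = ["cat", "dog", "cat", None, None]

# Examples without a spec fall under the empty-string key.
counts = Counter(spec if spec is not None else "" for spec in example_specs)
print(dict(counts))  # {'cat': 2, 'dog': 1, '': 2}
```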
Details of a LabelTextClassification operation metadata.
Used in:
Basic human annotation config used in labeling request.
Details of a LabelTextEntityExtraction operation metadata.
Used in:
Basic human annotation config used in labeling request.
Text labeling task feature.
Used in:
Label text content with one or more labels.
Label entities and their span in text.
Details of a LabelVideoClassification operation metadata.
Used in:
Basic human annotation config used in labeling request.
Details of a LabelVideoEvent operation metadata.
Used in:
Basic human annotation config used in labeling request.
Details of a LabelVideoObjectDetection operation metadata.
Used in:
Basic human annotation config used in labeling request.
Details of a LabelVideoObjectTracking operation metadata.
Used in:
Basic human annotation config used in labeling request.
Video labeling task feature.
Used in:
Label whole video or video segment with one or more labels.
Label objects with bounding boxes on image frames extracted from the video.
Label and track objects in video.
Label the range of video for the specified events.
Normalized bounding polygon.
Used in:
The bounding polygon normalized vertices.
Normalized polyline.
Used in:
The normalized polyline vertices.
A vertex represents a 2D point in the image. NOTE: the normalized vertex coordinates are relative to the original image and range from 0 to 1.
Used in:
X coordinate.
Y coordinate.
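The relationship between a pixel-space vertex and a normalized vertex is a simple scaling by the original image dimensions; a generic helper (not part of the API) makes it concrete:

```python
def normalize_vertex(x: float, y: float, width: int, height: int) -> tuple[float, float]:
    """Convert pixel coordinates to normalized [0, 1] coordinates
    relative to the original image dimensions."""
    return (x / width, y / height)

# A vertex at (320, 240) in a 640x480 image normalizes to (0.5, 0.5).
print(normalize_vertex(320, 240, 640, 480))  # (0.5, 0.5)
```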
Config for video object detection human labeling task. Object detection will be conducted on the images extracted from the video, and those objects will be labeled with bounding boxes. Users need to specify the number of images to be extracted per second as the extraction frame rate.
Used in:
Required. Annotation spec set resource name.
Required. Number of frames per second to be extracted from the video.
Used in:
Precision-recall curve.
Config for video object tracking human labeling task.
Used in:
Required. Annotation spec set resource name.
Video frame level annotation for object detection and tracking.
Used in:
The bounding box location of this object track for the frame.
The time offset of this frame relative to the beginning of the video.
General information useful for labels coming from contributors.
Used in:
Confidence score corresponding to a label. For example, if 3 contributors have answered the question and 2 of them agree on the final label, the confidence score will be 0.67 (2/3).
The total number of contributors that answer this question.
The total number of contributors that choose this label.
Comments from contributors.
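The confidence score described above is simply the fraction of contributors who agree on the final label; the 2-of-3 example works out as:

```python
def label_confidence(agreeing: int, total: int) -> float:
    """Fraction of contributors who agree on the final label."""
    return agreeing / total

# 3 contributors answered, 2 agree on the final label -> 0.67 (2/3).
print(round(label_confidence(2, 3), 2))  # 0.67
```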
The configuration of output data.
Used in:
Required. Location to output data to.
Output to a file in Cloud Storage. Should be used for labeling output other than image segmentation.
Output to a folder in Cloud Storage. Should be used for image segmentation labeling output.
Instruction from a PDF file.
Used in:
PDF file for the instruction. Only gcs path is allowed.
A line with multiple line segments.
Used in:
The polyline vertices.
Config for image polyline human labeling task.
Used in:
Required. Annotation spec set resource name.
Optional. Instruction message shown on the contributors' UI.
Used in:
The annotation spec this PR curve is computed against. Could be empty.
Area under precision recall curve.
Entries to draw the PR graph.
Mean average precision of this curve.
Used in:
Threshold used for this entry; for example, the IoU threshold for a bounding box problem, or the detection threshold for classification.
Recall value.
Precision value.
Harmonic mean of recall and precision.
Recall value for entries whose label has the highest score.
Precision value for entries whose label has the highest score.
The harmonic mean of [recall_at1][google.cloud.datalabeling.v1beta1.PrCurve.ConfidenceMetricsEntry.recall_at1] and [precision_at1][google.cloud.datalabeling.v1beta1.PrCurve.ConfidenceMetricsEntry.precision_at1].
Recall value for entries whose label is among the 5 highest scores.
Precision value for entries whose label is among the 5 highest scores.
The harmonic mean of [recall_at5][google.cloud.datalabeling.v1beta1.PrCurve.ConfidenceMetricsEntry.recall_at5] and [precision_at5][google.cloud.datalabeling.v1beta1.PrCurve.ConfidenceMetricsEntry.precision_at5].
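The f1 values above are harmonic means of the corresponding precision and recall; the generic formula, shown with illustrative numbers:

```python
def f1_score(precision: float, recall: float) -> float:
    """Harmonic mean of precision and recall; 0.0 when both are zero."""
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

print(round(f1_score(0.8, 0.6), 4))  # 0.6857
```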
Example comparisons containing annotation comparisons between ground truth and predictions.
Used in:
Config for image segmentation human labeling task.
Used in:
Required. Annotation spec set resource name. Format: projects/{project_id}/annotationSpecSets/{annotation_spec_set_id}
Instruction message shown on the labelers' UI.
Config for setting up sentiments.
Used in:
If set to true, contributors will have the option to select the sentiment of the label they selected, marking it as a negative or positive label. Default is false.
Start and end position in a sequence (e.g. text segment).
Used in:
Start position (inclusive).
End position (exclusive).
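The inclusive-start, exclusive-end convention above matches Python slicing, so an annotated span can be recovered directly from the text (positions here are illustrative):

```python
text = "Alan Turing was born in London."
start, end = 0, 11  # start inclusive, end exclusive -- illustrative positions

# The slice yields exactly the annotated entity.
entity = text[start:end]
print(entity)  # Alan Turing
```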
Used in:
Majority vote to aggregate answers.
Unanimous answers will be adopted.
Preserve all answers by crowd compute.
Text classification annotation.
Used in:
Label of the text.
Config for text classification human labeling task.
Used in:
Optional. If allow_multi_label is true, contributors are able to choose multiple labels for one text segment.
Required. Annotation spec set resource name.
Optional. Configs for sentiment selection.
Text entity extraction annotation.
Used in:
Label of the text entities.
Position of the entity.
Config for text entity extraction human labeling task.
Used in:
Required. Annotation spec set resource name.
Metadata for the text.
Used in:
The language of this text, as a [BCP-47](https://www.rfc-editor.org/rfc/bcp/bcp47.txt) language code. Default value is en-US.
Container of information about a piece of text.
Used in:
Text content.
A time period inside of an example that has a time dimension (e.g. video).
Used in:
Start of the time segment (inclusive), represented as the duration since the example start.
End of the time segment (exclusive), represented as the duration since the example start.
A vertex represents a 2D point in the image. NOTE: the vertex coordinates are in the same scale as the original image.
Used in:
X coordinate.
Y coordinate.
Video classification annotation.
Used in:
The time segment of the video to which the annotation applies.
Label of the segment specified by time_segment.
Config for video classification human labeling task. Currently two types of video classification are supported: 1. Assign labels on the entire video. 2. Split the video into multiple video clips based on camera shot, and assign labels on each video clip.
Used in:
Required. The list of annotation spec set configs. Since watching a video clip takes much longer than viewing an image, we support labeling with multiple AnnotationSpecSets at the same time. Labels in each AnnotationSpecSet will be shown in a group to contributors. Contributors can select one or more (depending on whether multi-label is allowed) from each group.
Optional. Option to apply shot detection on the video.
Annotation spec set with the setting of allowing multi labels or not.
Used in:
Required. Annotation spec set resource name.
Optional. If allow_multi_label is true, contributors are able to choose multiple labels from one annotation spec set.
Video event annotation.
Used in:
Label of the event in this annotation.
The time segment of the video to which the annotation applies.
Video object tracking annotation.
Used in:
Label of the object tracked in this annotation.
The time segment of the video to which object tracking applies.
The list of frames where this object track appears.
Container of information of a video.
Used in:
Video format.
Video uri from the user bucket.
The list of video thumbnails.
FPS of the video.
Signed uri of the video file in the service bucket.
Container of information of a video thumbnail.
Used in:
A byte string of the video frame.
Time offset relative to the beginning of the video, corresponding to the video frame the thumbnail was extracted from.