Service that implements Google Cloud Video Intelligence API.
Performs asynchronous video annotation. Progress and results can be retrieved through the `google.longrunning.Operations` interface. `Operation.metadata` contains `AnnotateVideoProgress` (progress). `Operation.response` contains `AnnotateVideoResponse` (results).
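For example, a minimal call might look like the following sketch, assuming the Python client library `google-cloud-videointelligence` (the bucket URI is a placeholder):

```python
from google.cloud import videointelligence

client = videointelligence.VideoIntelligenceServiceClient()

# Start asynchronous annotation. The returned long-running operation carries
# AnnotateVideoProgress in its metadata and AnnotateVideoResponse in its response.
operation = client.annotate_video(
    request={
        "input_uri": "gs://my-bucket/my-video.mp4",  # placeholder URI
        "features": [videointelligence.Feature.LABEL_DETECTION],
    }
)

# Block until annotation finishes (or raise after the timeout).
response = operation.result(timeout=300)
```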
Video annotation request.
Input video location. Currently, only [Google Cloud Storage](https://cloud.google.com/storage/) URIs are supported, which must be specified in the following format: `gs://bucket-id/object-id` (other URI formats return [google.rpc.Code.INVALID_ARGUMENT][google.rpc.Code.INVALID_ARGUMENT]). For more information, see [Request URIs](/storage/docs/reference-uris). A video URI may include wildcards in `object-id`, and thus identify multiple videos. Supported wildcards: '*' to match 0 or more characters; '?' to match 1 character. If unset, the input video should be embedded in the request as `input_content`. If set, `input_content` should be unset.
The video data bytes. If unset, the input video(s) should be specified via `input_uri`. If set, `input_uri` should be unset.
Requested video annotation features.
Additional video context and/or feature-specific parameters.
Optional location where the output (in JSON format) should be stored. Currently, only [Google Cloud Storage](https://cloud.google.com/storage/) URIs are supported, which must be specified in the following format: `gs://bucket-id/object-id` (other URI formats return [google.rpc.Code.INVALID_ARGUMENT][google.rpc.Code.INVALID_ARGUMENT]). For more information, see [Request URIs](/storage/docs/reference-uris).
Optional cloud region where annotation should take place. Supported cloud regions: `us-east1`, `us-west1`, `europe-west1`, `asia-east1`. If no region is specified, a region will be determined based on video file location.
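The request fields above might be populated as in this sketch (assuming the Python client; file name, bucket, and region are placeholders, and `input_content`/`input_uri` are mutually exclusive):

```python
from google.cloud import videointelligence

client = videointelligence.VideoIntelligenceServiceClient()

# Embed the video bytes directly instead of referencing Cloud Storage.
with open("local-video.mp4", "rb") as f:  # placeholder local file
    video_bytes = f.read()

operation = client.annotate_video(
    request={
        "input_content": video_bytes,  # leave input_uri unset when using this
        "features": [videointelligence.Feature.SHOT_CHANGE_DETECTION],
        "output_uri": "gs://my-bucket/results.json",  # optional JSON output
        "location_id": "us-east1",  # optional annotation region
    }
)
```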
Video annotation progress. Included in the `metadata` field of the `Operation` returned by the `GetOperation` call of the `google::longrunning::Operations` service.
Progress metadata for all videos specified in `AnnotateVideoRequest`.
Video annotation response. Included in the `response` field of the `Operation` returned by the `GetOperation` call of the `google::longrunning::Operations` service.
Annotation results for all videos specified in `AnnotateVideoRequest`.
Detected entity from video analysis.
Used in:
Opaque entity ID. Some IDs may be available in [Google Knowledge Graph Search API](https://developers.google.com/knowledge-graph/).
Textual description, e.g. `Fixed-gear bicycle`.
Language code for `description` in BCP-47 format.
Explicit content annotation (based on per-frame visual signals only). If no explicit content has been detected in a frame, no annotations are present for that frame.
Used in:
All video frames where explicit content was detected.
Config for EXPLICIT_CONTENT_DETECTION.
Used in:
Model to use for explicit content detection. Supported values: "builtin/stable" (the default if unset) and "builtin/latest".
Video frame level annotation results for explicit content.
Used in:
Time-offset, relative to the beginning of the video, corresponding to the video frame for this location.
Likelihood of the pornography content.
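Once a request with `EXPLICIT_CONTENT_DETECTION` completes, the per-frame likelihoods might be read as in this sketch (assuming the Python client; `operation` is the long-running operation returned by `annotate_video`, and the `LIKELY` threshold is illustrative):

```python
from google.cloud import videointelligence

result = operation.result(timeout=300)  # operation from annotate_video
explicit = result.annotation_results[0].explicit_annotation

for frame in explicit.frames:
    seconds = frame.time_offset.total_seconds()  # offset from the video start
    if frame.pornography_likelihood >= videointelligence.Likelihood.LIKELY:
        print(f"Explicit content likely at {seconds:.2f}s")
```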
Video annotation feature.
Used in:
Unspecified.
Label detection. Detect objects, such as dog or flower.
Shot change detection.
Explicit content detection.
OCR text detection and tracking.
Object detection and tracking.
Label annotation.
Used in:
Detected entity.
Common categories for the detected entity. For example, when the label is `Terrier`, the category is likely `dog`; in some cases there may be more than one category, e.g. `Terrier` could also be a `pet`.
All video segments where a label was detected.
All video frames where a label was detected.
Config for LABEL_DETECTION.
Used in:
What labels should be detected with LABEL_DETECTION, in addition to video-level labels or segment-level labels. If unspecified, defaults to `SHOT_MODE`.
Whether the video has been shot from a stationary (i.e. non-moving) camera. When set to true, this might improve detection accuracy for moving objects. Should be used with `SHOT_AND_FRAME_MODE` enabled.
Model to use for label detection. Supported values: "builtin/stable" (the default if unset) and "builtin/latest".
Label detection mode.
Used in:
Unspecified.
Detect shot-level labels.
Detect frame-level labels.
Detect both shot-level and frame-level labels.
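These settings might be wired into a request through a `VideoContext`, as in this sketch (assuming the Python client; the values shown are illustrative):

```python
from google.cloud import videointelligence

client = videointelligence.VideoIntelligenceServiceClient()

config = videointelligence.LabelDetectionConfig(
    label_detection_mode=videointelligence.LabelDetectionMode.SHOT_AND_FRAME_MODE,
    stationary_camera=True,  # hint that the camera does not move
    model="builtin/stable",  # the default if unset
)

operation = client.annotate_video(
    request={
        "input_uri": "gs://my-bucket/my-video.mp4",  # placeholder URI
        "features": [videointelligence.Feature.LABEL_DETECTION],
        "video_context": videointelligence.VideoContext(label_detection_config=config),
    }
)
```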
Video frame level annotation results for label detection.
Used in:
Time-offset, relative to the beginning of the video, corresponding to the video frame for this location.
Confidence that the label is accurate. Range: [0, 1].
Video segment level annotation results for label detection.
Used in:
Video segment where a label was detected.
Confidence that the label is accurate. Range: [0, 1].
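Reading label segments and their confidences back might look like this sketch (`operation` is the long-running operation returned by `annotate_video`):

```python
results = operation.result(timeout=300).annotation_results[0]

for label in results.shot_label_annotations:
    print(f"Label: {label.entity.description}")
    for segment in label.segments:
        start = segment.segment.start_time_offset.total_seconds()
        end = segment.segment.end_time_offset.total_seconds()
        print(f"  {start:.1f}s-{end:.1f}s (confidence {segment.confidence:.2f})")
```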
Bucketized representation of likelihood.
Used in:
Unspecified likelihood.
Very unlikely.
Unlikely.
Possible.
Likely.
Very likely.
Normalized bounding box. The normalized vertex coordinates are relative to the original image. Range: [0, 1].
Used in:
Left X coordinate.
Top Y coordinate.
Right X coordinate.
Bottom Y coordinate.
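Since all four coordinates are normalized, mapping a box back to pixels only requires the frame dimensions, as in this minimal sketch (the 1280x720 size is a placeholder):

```python
def to_pixels(box, width=1280, height=720):
    """Convert a NormalizedBoundingBox to pixel coordinates."""
    return (
        int(box.left * width),     # left X
        int(box.top * height),     # top Y
        int(box.right * width),    # right X
        int(box.bottom * height),  # bottom Y
    )
```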
Normalized bounding polygon for text (that might not be aligned with axis). Contains a list of the corner points in clockwise order starting from the top-left corner. For example, for a rectangular bounding box, when the text is horizontal it might look like:

    0----1
    |    |
    3----2

When it's rotated 180 degrees clockwise around the top-left corner it becomes:

    2----3
    |    |
    1----0

and the vertex order will still be (0, 1, 2, 3). Note that values can be less than 0 or greater than 1 due to trigonometric calculations for the location of the box.
Used in:
Normalized vertices of the bounding polygon.
A vertex represents a 2D point in the image. NOTE: the normalized vertex coordinates are relative to the original image and range from 0 to 1.
Used in:
X coordinate.
Y coordinate.
Annotations corresponding to one tracked object.
Used in:
Entity to specify the object category that this track is labeled as.
Labeling confidence of the object category for this track.
Information corresponding to all frames where this object track appears.
Each object track corresponds to one video segment where it appears.
Video frame level annotations for object detection and tracking. This field stores per frame location, time offset, and confidence.
Used in:
The normalized bounding box location of this object track for the frame.
The timestamp of the frame in microseconds.
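Walking one object track might look like this sketch (assuming the Python client, where time offsets surface as `datetime.timedelta`; `operation` is the long-running operation returned by `annotate_video`):

```python
results = operation.result(timeout=300).annotation_results[0]

for obj in results.object_annotations:
    print(f"{obj.entity.description} (confidence {obj.confidence:.2f})")
    for frame in obj.frames:
        box = frame.normalized_bounding_box
        t = frame.time_offset.total_seconds()
        print(f"  t={t:.2f}s box=({box.left:.2f}, {box.top:.2f}, "
              f"{box.right:.2f}, {box.bottom:.2f})")
```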
Config for SHOT_CHANGE_DETECTION.
Used in:
Model to use for shot change detection. Supported values: "builtin/stable" (the default if unset) and "builtin/latest".
Annotations related to one detected OCR text snippet. This will contain the corresponding text, confidence value, and frame level information for each detection.
Used in:
The detected text.
All video segments where OCR detected text appears.
Config for TEXT_DETECTION.
Used in:
A language hint can be specified if the language to be detected is known a priori; it can increase the accuracy of the detection. Hints must be language codes in BCP-47 format. Automatic language detection is performed if no hint is provided.
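A hint for, say, US English might be passed as in this sketch (assuming the Python client):

```python
from google.cloud import videointelligence

context = videointelligence.VideoContext(
    text_detection_config=videointelligence.TextDetectionConfig(
        language_hints=["en-US"],  # BCP-47 code; omit for automatic detection
    )
)
```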
Video frame level annotation results for text annotation (OCR). Contains information regarding timestamp and bounding box locations for the frames containing detected OCR text snippets.
Used in:
Bounding polygon of the detected text for this frame.
Timestamp of this frame.
Video segment level annotation results for text detection.
Used in:
Video segment where a text snippet was detected.
Confidence for the track of detected text. It is calculated as the highest confidence over all frames where OCR detected text appears.
Information related to the frames where OCR detected text appears.
Annotation progress for a single video.
Used in:
Video file location in [Google Cloud Storage](https://cloud.google.com/storage/).
Approximate percentage processed thus far. Guaranteed to be 100 when fully processed.
Time when the request was received.
Time of the most recent update.
Annotation results for a single video.
Used in:
Video file location in [Google Cloud Storage](https://cloud.google.com/storage/).
Label annotations on video level or user-specified segment level. There is exactly one element for each unique label.
Label annotations on shot level. There is exactly one element for each unique label.
Label annotations on frame level. There is exactly one element for each unique label.
Shot annotations. Each shot is represented as a video segment.
Explicit content annotation.
OCR text detection and tracking. Annotations for the list of detected text snippets. Each snippet has a list of frame information associated with it.
Annotations for the list of objects detected and tracked in the video.
If set, indicates an error. Note that for a single `AnnotateVideoRequest` some videos may succeed and some may fail.
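Because individual videos within one request can fail independently, it is worth checking `error` per result, as in this sketch (`operation` is the long-running operation returned by `annotate_video`; field names follow the messages above):

```python
for video_result in operation.result(timeout=300).annotation_results:
    if video_result.error.code != 0:  # google.rpc.Status; code 0 means OK
        print(f"{video_result.input_uri} failed: {video_result.error.message}")
        continue
    for shot in video_result.shot_annotations:
        print(f"Shot {shot.start_time_offset.total_seconds():.1f}s"
              f"-{shot.end_time_offset.total_seconds():.1f}s")
```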
Video context and/or feature-specific parameters.
Used in:
Video segments to annotate. The segments may overlap and are not required to be contiguous or span the whole video. If unspecified, each video is treated as a single segment.
Config for LABEL_DETECTION.
Config for SHOT_CHANGE_DETECTION.
Config for EXPLICIT_CONTENT_DETECTION.
Config for TEXT_DETECTION.
Video segment.
Used in:
Time-offset, relative to the beginning of the video, corresponding to the start of the segment (inclusive).
Time-offset, relative to the beginning of the video, corresponding to the end of the segment (inclusive).
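Restricting annotation to part of a video might look like this sketch (assuming the Python client, which accepts `datetime.timedelta` for the Duration offsets):

```python
import datetime
from google.cloud import videointelligence

segment = videointelligence.VideoSegment(
    start_time_offset=datetime.timedelta(seconds=10),  # inclusive start
    end_time_offset=datetime.timedelta(seconds=30),    # inclusive end
)
context = videointelligence.VideoContext(segments=[segment])
```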