Service that implements the Google Cloud Video Intelligence API.
Performs asynchronous video annotation. Progress and results can be retrieved through the `google.longrunning.Operations` interface. `Operation.metadata` contains `AnnotateVideoProgress` (progress). `Operation.response` contains `AnnotateVideoResponse` (results).
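A minimal sketch of how such a service might be declared in proto3 (the service name `VideoIntelligenceService` is an assumption, not taken from this section; the request and wrapper messages are described below):

```proto
// Hypothetical service declaration.
service VideoIntelligenceService {
  // Returns a google.longrunning.Operation whose `metadata` carries
  // AnnotateVideoProgress and whose `response` carries AnnotateVideoResponse.
  rpc AnnotateVideo(AnnotateVideoRequest) returns (google.longrunning.Operation);
}
```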
Video annotation request.
Input video location. Currently, only [Google Cloud Storage](https://cloud.google.com/storage/) URIs are supported, which must be specified in the following format: `gs://bucket-id/object-id` (other URI formats return [google.rpc.Code.INVALID_ARGUMENT][google.rpc.Code.INVALID_ARGUMENT]). For more information, see [Request URIs](/storage/docs/reference-uris). A video URI may include wildcards in `object-id`, and thus identify multiple videos. Supported wildcards: '*' to match 0 or more characters; '?' to match 1 character. If unset, the input video should be embedded in the request as `input_content`. If set, `input_content` should be unset.
The video data bytes. If unset, the input video(s) should be specified via `input_uri`. If set, `input_uri` should be unset.
Requested video annotation features.
Additional video context and/or feature-specific parameters.
Optional location where the output (in JSON format) should be stored. Currently, only [Google Cloud Storage](https://cloud.google.com/storage/) URIs are supported, which must be specified in the following format: `gs://bucket-id/object-id` (other URI formats return [google.rpc.Code.INVALID_ARGUMENT][google.rpc.Code.INVALID_ARGUMENT]). For more information, see [Request URIs](/storage/docs/reference-uris).
Optional cloud region where annotation should take place. Supported cloud regions: `us-east1`, `us-west1`, `europe-west1`, `asia-east1`. If no region is specified, a region will be determined based on video file location.
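Collecting the fields above, `AnnotateVideoRequest` might look roughly like this in proto3 (field numbers and exact types are illustrative assumptions):

```proto
message AnnotateVideoRequest {
  // Exactly one of input_uri and input_content should be set.
  string input_uri = 1;            // e.g. "gs://bucket-id/object-id"; may contain * and ?
  bytes input_content = 2;         // embedded video bytes, used when input_uri is unset
  repeated Feature features = 3;   // requested annotation features
  VideoContext video_context = 4;  // optional feature-specific parameters
  string output_uri = 5;           // optional GCS destination for JSON results
  string location_id = 6;          // optional region, e.g. "us-east1"
}
```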
Video annotation progress. Included in the `metadata` field of the `Operation` returned by the `GetOperation` call of the `google::longrunning::Operations` service.
Progress metadata for all videos specified in `AnnotateVideoRequest`.
Video annotation response. Included in the `response` field of the `Operation` returned by the `GetOperation` call of the `google::longrunning::Operations` service.
Annotation results for all videos specified in `AnnotateVideoRequest`.
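Because one request can match several videos via wildcards, both wrappers hold one entry per video. A sketch with assumed field names:

```proto
message AnnotateVideoProgress {
  // One entry per video in the request; carried in Operation.metadata.
  repeated VideoAnnotationProgress annotation_progress = 1;
}

message AnnotateVideoResponse {
  // One entry per video in the request; carried in Operation.response.
  repeated VideoAnnotationResults annotation_results = 1;
}
```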
Detected entity from video analysis.
Opaque entity ID. Some IDs may be available in [Google Knowledge Graph Search API](https://developers.google.com/knowledge-graph/).
Textual description, e.g. `Fixed-gear bicycle`.
Language code for `description` in BCP-47 format.
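The three fields above suggest a message along these lines (names and numbers assumed):

```proto
message Entity {
  string entity_id = 1;     // opaque; may be resolvable via the Knowledge Graph Search API
  string description = 2;   // e.g. "Fixed-gear bicycle"
  string language_code = 3; // BCP-47, e.g. "en-US"
}
```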
Explicit content annotation (based on per-frame visual signals only). If no explicit content has been detected in a frame, no annotations are present for that frame.
All video frames where explicit content was detected.
Config for EXPLICIT_CONTENT_DETECTION.
Model to use for explicit content detection. Supported values: "builtin/stable" (the default if unset) and "builtin/latest".
Video frame level annotation results for explicit content.
Time-offset, relative to the beginning of the video, corresponding to the video frame for this location.
Likelihood of pornographic content.
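Time offsets in this API map naturally to `google.protobuf.Duration`. A sketch of the three explicit-content messages just described (field names and numbers are assumptions):

```proto
import "google/protobuf/duration.proto";

message ExplicitContentAnnotation {
  repeated ExplicitContentFrame frames = 1; // only frames with detections
}

message ExplicitContentDetectionConfig {
  string model = 1; // "builtin/stable" (default if unset) or "builtin/latest"
}

message ExplicitContentFrame {
  google.protobuf.Duration time_offset = 1; // relative to the start of the video
  Likelihood pornography_likelihood = 2;    // see the Likelihood enum below
}
```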
Face annotation.
Thumbnail of a representative face view (in JPEG format).
All video segments where a face was detected.
All video frames where a face was detected.
Config for FACE_DETECTION.
Model to use for face detection. Supported values: "builtin/stable" (the default if unset) and "builtin/latest".
Whether bounding boxes should be included in the face annotation output.
Video frame level annotation results for face detection.
Normalized bounding boxes in a frame. There can be more than one box if the same face is detected in multiple locations within the current frame.
Time-offset, relative to the beginning of the video, corresponding to the video frame for this location.
Video segment level annotation results for face detection.
Video segment where a face was detected.
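The four face-related messages fit together as in this sketch (again, names and numbers are illustrative):

```proto
message FaceAnnotation {
  bytes thumbnail = 1;               // JPEG of a representative face view
  repeated FaceSegment segments = 2; // segments where the face was detected
  repeated FaceFrame frames = 3;     // frames where the face was detected
}

message FaceDetectionConfig {
  string model = 1;                // "builtin/stable" (default) or "builtin/latest"
  bool include_bounding_boxes = 2; // whether to emit per-frame bounding boxes
}

message FaceFrame {
  // Multiple boxes when the same face appears in several places in one frame.
  repeated NormalizedBoundingBox normalized_bounding_boxes = 1;
  google.protobuf.Duration time_offset = 2;
}

message FaceSegment {
  VideoSegment segment = 1;
}
```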
Video annotation feature.
Unspecified.
Label detection. Detect objects, such as dog or flower.
Shot change detection.
Explicit content detection.
Human face detection and tracking.
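Read as a proto3 enum, the feature list would look like this (numeric values assumed):

```proto
enum Feature {
  FEATURE_UNSPECIFIED = 0;        // proto3 enums require a zero default
  LABEL_DETECTION = 1;            // objects, e.g. dog or flower
  SHOT_CHANGE_DETECTION = 2;
  EXPLICIT_CONTENT_DETECTION = 3;
  FACE_DETECTION = 4;             // detection and tracking
}
```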
Label annotation.
Detected entity.
Common categories for the detected entity. For example, when the label is `Terrier`, the category is likely `dog`. In some cases there might be more than one category, e.g. `Terrier` could also be a `pet`.
All video segments where a label was detected.
All video frames where a label was detected.
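A possible shape for the label annotation message (field names assumed):

```proto
message LabelAnnotation {
  Entity entity = 1;                     // the detected entity
  repeated Entity category_entities = 2; // e.g. "dog" and "pet" for "Terrier"
  repeated LabelSegment segments = 3;
  repeated LabelFrame frames = 4;
}
```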
Config for LABEL_DETECTION.
What labels should be detected with LABEL_DETECTION, in addition to video-level labels or segment-level labels. If unspecified, defaults to `SHOT_MODE`.
Whether the video has been shot from a stationary (i.e. non-moving) camera. When set to true, this might improve detection accuracy for moving objects. Should be used with `SHOT_AND_FRAME_MODE` enabled.
Model to use for label detection. Supported values: "builtin/stable" (the default if unset) and "builtin/latest".
Label detection mode.
Unspecified.
Detect shot-level labels.
Detect frame-level labels.
Detect both shot-level and frame-level labels.
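Combining the config with its mode enum gives a sketch like the following (names and numbers assumed):

```proto
message LabelDetectionConfig {
  LabelDetectionMode label_detection_mode = 1; // SHOT_MODE if unspecified
  bool stationary_camera = 2;                  // use with SHOT_AND_FRAME_MODE
  string model = 3;                            // "builtin/stable" or "builtin/latest"
}

enum LabelDetectionMode {
  LABEL_DETECTION_MODE_UNSPECIFIED = 0;
  SHOT_MODE = 1;
  FRAME_MODE = 2;
  SHOT_AND_FRAME_MODE = 3;
}
```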
Video frame level annotation results for label detection.
Time-offset, relative to the beginning of the video, corresponding to the video frame for this location.
Confidence that the label is accurate. Range: [0, 1].
Video segment level annotation results for label detection.
Video segment where a label was detected.
Confidence that the label is accurate. Range: [0, 1].
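Both frame-level and segment-level label results pair a location with a confidence; a sketch:

```proto
message LabelFrame {
  google.protobuf.Duration time_offset = 1; // from the start of the video
  float confidence = 2;                     // in [0, 1]
}

message LabelSegment {
  VideoSegment segment = 1;
  float confidence = 2; // in [0, 1]
}
```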
Bucketized representation of likelihood.
Unspecified likelihood.
Very unlikely.
Unlikely.
Possible.
Likely.
Very likely.
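As an enum (numeric values assumed):

```proto
enum Likelihood {
  LIKELIHOOD_UNSPECIFIED = 0;
  VERY_UNLIKELY = 1;
  UNLIKELY = 2;
  POSSIBLE = 3;
  LIKELY = 4;
  VERY_LIKELY = 5;
}
```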
Normalized bounding box. The normalized vertex coordinates are relative to the original image. Range: [0, 1].
Left X coordinate.
Top Y coordinate.
Right X coordinate.
Bottom Y coordinate.
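A sketch of the bounding box message (float coordinates in [0, 1]; field numbers assumed):

```proto
message NormalizedBoundingBox {
  float left = 1;   // x of the left edge, relative to image width
  float top = 2;    // y of the top edge, relative to image height
  float right = 3;  // x of the right edge
  float bottom = 4; // y of the bottom edge
}
```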
Config for SHOT_CHANGE_DETECTION.
Model to use for shot change detection. Supported values: "builtin/stable" (the default if unset) and "builtin/latest".
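Like the other detection configs, this presumably reduces to a single model field:

```proto
message ShotChangeDetectionConfig {
  string model = 1; // "builtin/stable" (default if unset) or "builtin/latest"
}
```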
Annotation progress for a single video.
Video file location in [Google Cloud Storage](https://cloud.google.com/storage/).
Approximate percentage processed thus far. Guaranteed to be 100 when fully processed.
Time when the request was received.
Time of the most recent update.
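Wall-clock times map to `google.protobuf.Timestamp`; a sketch with assumed names:

```proto
import "google/protobuf/timestamp.proto";

message VideoAnnotationProgress {
  string input_uri = 1;                      // GCS location of the video
  int32 progress_percent = 2;                // approximate; 100 when done
  google.protobuf.Timestamp start_time = 3;  // when the request was received
  google.protobuf.Timestamp update_time = 4; // most recent update
}
```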
Annotation results for a single video.
Video file location in [Google Cloud Storage](https://cloud.google.com/storage/).
Label annotations on video level or user-specified segment level. There is exactly one element for each unique label.
Label annotations on shot level. There is exactly one element for each unique label.
Label annotations on frame level. There is exactly one element for each unique label.
Face annotations. There is exactly one element for each unique face.
Shot annotations. Each shot is represented as a video segment.
Explicit content annotation.
If set, indicates an error. Note that for a single `AnnotateVideoRequest` some videos may succeed and some may fail.
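Gathering the per-video results above into one message might look like this (field names and numbers are assumptions; the error field uses the standard `google.rpc.Status`):

```proto
import "google/rpc/status.proto";

message VideoAnnotationResults {
  string input_uri = 1;
  repeated LabelAnnotation segment_label_annotations = 2; // video/segment level
  repeated LabelAnnotation shot_label_annotations = 3;
  repeated LabelAnnotation frame_label_annotations = 4;
  repeated FaceAnnotation face_annotations = 5;           // one per unique face
  repeated VideoSegment shot_annotations = 6;             // one per shot
  ExplicitContentAnnotation explicit_annotation = 7;
  google.rpc.Status error = 8;                            // set if this video failed
}
```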
Video context and/or feature-specific parameters.
Video segments to annotate. The segments may overlap and are not required to be contiguous or span the whole video. If unspecified, each video is treated as a single segment.
Config for LABEL_DETECTION.
Config for SHOT_CHANGE_DETECTION.
Config for EXPLICIT_CONTENT_DETECTION.
Config for FACE_DETECTION.
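A sketch tying the context to the per-feature configs (names assumed):

```proto
message VideoContext {
  repeated VideoSegment segments = 1; // whole video if unspecified
  LabelDetectionConfig label_detection_config = 2;
  ShotChangeDetectionConfig shot_change_detection_config = 3;
  ExplicitContentDetectionConfig explicit_content_detection_config = 4;
  FaceDetectionConfig face_detection_config = 5;
}
```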
Video segment.
Time-offset, relative to the beginning of the video, corresponding to the start of the segment (inclusive).
Time-offset, relative to the beginning of the video, corresponding to the end of the segment (inclusive).
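With both endpoints inclusive, the segment is just a pair of offsets:

```proto
message VideoSegment {
  google.protobuf.Duration start_time_offset = 1; // inclusive
  google.protobuf.Duration end_time_offset = 2;   // inclusive
}
```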