Service that implements the streaming Google Cloud Video Intelligence API.
Performs video annotation with bidirectional streaming: emitting results while sending video/audio bytes. This method is only available via the gRPC API (not REST).
The top-level message sent by the client for the `StreamingAnnotateVideo` method. Multiple `StreamingAnnotateVideoRequest` messages are sent. The first message must only contain a `StreamingVideoConfig` message. All subsequent messages must only contain `input_content` data.
*Required* The streaming request, which is either a streaming config or video content.
Provides information to the annotator, specifying how to process the request. The first `StreamingAnnotateVideoRequest` message must only contain a `video_config` message.
The video data to be annotated. Chunks of video data are sequentially sent in `StreamingAnnotateVideoRequest` messages. Except for the initial `StreamingAnnotateVideoRequest` message, which contains only `video_config`, all subsequent `StreamingAnnotateVideoRequest` messages must only contain the `input_content` field.
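As a sketch of this ordering, the following Python generator (assuming the `google-cloud-videointelligence` client library's `v1p3beta1` surface; the file path, feature choice, and chunk size are arbitrary) yields the config message first and raw video bytes afterwards:

```python
import io

from google.cloud import videointelligence_v1p3beta1 as videointelligence

client = videointelligence.StreamingVideoIntelligenceServiceClient()

config = videointelligence.StreamingVideoConfig(
    feature=videointelligence.StreamingFeature.STREAMING_LABEL_DETECTION
)

def request_stream(path, chunk_size=1024 * 1024):
    # First message: the streaming config, with no video bytes.
    yield videointelligence.StreamingAnnotateVideoRequest(video_config=config)
    # Every subsequent message: input_content only.
    with io.open(path, "rb") as video:
        while chunk := video.read(chunk_size):
            yield videointelligence.StreamingAnnotateVideoRequest(input_content=chunk)

for response in client.streaming_annotate_video(requests=request_stream("video.avi")):
    if response.error.message:
        print("Error:", response.error.message)
        break
```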
`StreamingAnnotateVideoResponse` is the only message returned to the client by `StreamingAnnotateVideo`. A series of zero or more `StreamingAnnotateVideoResponse` messages is streamed back to the client.
If set, returns a `google.rpc.Status` message that specifies the error for the operation.
Streaming annotation results.
GCS URI that stores annotation results of one streaming session. It is a directory that can hold multiple files in JSON format. Example URI format: `gs://bucket_id/object_id/cloud_project_name-session_id`
Detected entity from video analysis.
Used in:
Opaque entity ID. Some IDs may be available in [Google Knowledge Graph Search API](https://developers.google.com/knowledge-graph/).
Textual description, e.g. `Fixed-gear bicycle`.
Language code for `description` in BCP-47 format.
Explicit content annotation (based on per-frame visual signals only). If no explicit content has been detected in a frame, no annotations are present for that frame.
Used in:
All video frames where explicit content was detected.
Video frame level annotation results for explicit content.
Used in:
Time-offset, relative to the beginning of the video, corresponding to the video frame for this location.
Likelihood that the frame contains pornographic content.
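For illustration, a small helper (a sketch assuming the Python client's generated types; `response` is one `StreamingAnnotateVideoResponse`) might surface the frames whose likelihood crosses a threshold:

```python
from google.cloud import videointelligence_v1p3beta1 as videointelligence

FLAGGED = {
    videointelligence.Likelihood.LIKELY,
    videointelligence.Likelihood.VERY_LIKELY,
}

def flagged_frames(response):
    # Each ExplicitContentFrame pairs a time offset with a bucketized likelihood.
    for frame in response.annotation_results.explicit_annotation.frames:
        if frame.pornography_likelihood in FLAGGED:
            yield frame.time_offset
```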
Label annotation.
Used in:
Detected entity.
Common categories for the detected entity. For example, when the label is `Terrier`, the category is likely `dog`. In some cases there may be more than one category, e.g. `Terrier` could also be a `pet`.
All video segments where a label was detected.
All video frames where a label was detected.
Video frame level annotation results for label detection.
Used in:
Time-offset, relative to the beginning of the video, corresponding to the video frame for this location.
Confidence that the label is accurate. Range: [0, 1].
Video segment level annotation results for label detection.
Used in:
Video segment where a label was detected.
Confidence that the label is accurate. Range: [0, 1].
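Tying the label messages together, a sketch of reading them from one response (field names as documented above; assuming the Python client's generated types):

```python
def print_label_annotations(response):
    for annotation in response.annotation_results.label_annotations:
        # The detected entity and its broader categories.
        print(annotation.entity.description)
        for category in annotation.category_entities:
            print("  category:", category.description)
        # Segment- and frame-level results, each with a [0, 1] confidence.
        for segment in annotation.segments:
            print(f"  segment confidence: {segment.confidence:.2f}")
        for frame in annotation.frames:
            print(f"  frame at {frame.time_offset}: {frame.confidence:.2f}")
```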
Bucketized representation of likelihood.
Used in:
Unspecified likelihood.
Very unlikely.
Unlikely.
Possible.
Likely.
Very likely.
Normalized bounding box. The normalized vertex coordinates are relative to the original image. Range: [0, 1].
Used in:
Left X coordinate.
Top Y coordinate.
Right X coordinate.
Bottom Y coordinate.
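Since the vertices are normalized, mapping a box back onto a frame is just a multiplication by the frame's pixel dimensions. A minimal sketch (`box` is assumed to carry the four fields above; `width`/`height` are the source frame's dimensions):

```python
def to_pixels(box, width: int, height: int):
    # Each normalized coordinate is a fraction of the frame size in [0, 1].
    return (
        int(box.left * width),
        int(box.top * height),
        int(box.right * width),
        int(box.bottom * height),
    )
```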
Annotations corresponding to one tracked object.
Used in:
Entity to specify the object category that this track is labeled as.
Labeling confidence of the object category for this track.
Information corresponding to all frames where this object track appears. In non-streaming batch mode, `frames` may contain one or more ObjectTrackingFrame messages; in streaming mode, it contains exactly one ObjectTrackingFrame message.
Different representation of tracking info in non-streaming batch and streaming modes.
Non-streaming batch mode ONLY. Each object track corresponds to one video segment where it appears.
Streaming mode ONLY. In streaming mode, we do not know the end time of a tracked object before it is completed, so no VideoSegment info is returned. Instead, we provide a unique integer `track_id` so that customers can correlate the results of an ongoing ObjectTrackingAnnotation with the same `track_id` over time.
Video frame level annotations for object detection and tracking. This field stores per-frame location, time offset, and confidence.
Used in:
The normalized bounding box location of this object track for the frame.
The timestamp of the frame in microseconds.
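Because a streaming ObjectTrackingAnnotation carries only one frame at a time, clients typically stitch tracks back together by `track_id`. A sketch (assuming the Python client; `responses` is the iterator returned by `streaming_annotate_video`):

```python
import collections

def collect_tracks(responses):
    # Accumulate per-frame results for each tracked object, keyed by track_id.
    tracks = collections.defaultdict(list)
    for response in responses:
        for annotation in response.annotation_results.object_annotations:
            tracks[annotation.track_id].extend(annotation.frames)
    return tracks
```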
Config for STREAMING_AUTOML_CLASSIFICATION.
Used in:
Resource name of AutoML model. Format: `projects/{project_id}/locations/{location_id}/models/{model_id}`
Config for STREAMING_AUTOML_OBJECT_TRACKING.
Used in:
Resource name of AutoML model. Format: `projects/{project_id}/locations/{location_id}/models/{model_id}`
Config for EXPLICIT_CONTENT_DETECTION in streaming mode.
No customized config support.
Used in:
(message has no fields)
Streaming video annotation feature.
Used in:
Unspecified.
Label detection. Detect objects, such as dog or flower.
Shot change detection.
Explicit content detection.
Object detection and tracking.
Video classification based on AutoML model.
Object detection and tracking based on AutoML model.
Config for LABEL_DETECTION in streaming mode.
Used in:
Whether the video has been captured from a stationary (i.e. non-moving) camera. When set to true, this might improve detection accuracy for moving objects. Default: false.
Config for STREAMING_OBJECT_TRACKING.
No customized config support.
Used in:
(message has no fields)
Config for SHOT_CHANGE_DETECTION in streaming mode.
No customized config support.
Used in:
(message has no fields)
Config for streaming storage option.
Used in:
Enable streaming storage. Default: false.
GCS URI to store all annotation results for one client. The client should specify this field as the top-level storage directory. Annotation results of different sessions will be put into different sub-directories denoted by project_name and session_id. All sub-directories are auto-generated by the service and are made accessible to the client in the response proto. URIs must be specified in the following format: `gs://bucket-id/object-id`, where `bucket-id` is a valid GCS bucket created by the client (with bucket permissions configured properly) and `object-id` can be an arbitrary string that makes sense to the client. Other URI formats will return an error and cause a GCS write failure.
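As a sketch, enabling storage in the Python client might look like the following (the bucket path is a placeholder; field names follow this message):

```python
from google.cloud import videointelligence_v1p3beta1 as videointelligence

storage_config = videointelligence.StreamingStorageConfig(
    enable_storage_annotation_result=True,
    # Top-level directory; per-session sub-directories are auto-generated.
    annotation_result_storage_directory="gs://my-bucket/streaming-results",
)
```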
Streaming annotation results corresponding to a portion of the video that is currently being processed.
Used in:
Shot annotation results. Each shot is represented as a video segment.
Label annotation results.
Explicit content detection results.
Object tracking results.
Provides information to the annotator that specifies how to process the request.
Used in:
Requested annotation feature.
Config for requested annotation feature.
Config for SHOT_CHANGE_DETECTION.
Config for LABEL_DETECTION.
Config for STREAMING_EXPLICIT_CONTENT_DETECTION.
Config for STREAMING_OBJECT_TRACKING.
Config for STREAMING_AUTOML_CLASSIFICATION.
Config for STREAMING_AUTOML_OBJECT_TRACKING.
Streaming storage option. Storage is disabled by default.
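Putting the pieces together, one plausible config (a sketch assuming the Python client's generated types) selects a feature and the matching per-feature config from the oneof:

```python
from google.cloud import videointelligence_v1p3beta1 as videointelligence

config = videointelligence.StreamingVideoConfig(
    feature=videointelligence.StreamingFeature.STREAMING_LABEL_DETECTION,
    label_detection_config=videointelligence.StreamingLabelDetectionConfig(
        stationary_camera=True,  # hint: footage comes from a fixed camera
    ),
)
```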
Video segment.
Used in:
Time-offset, relative to the beginning of the video, corresponding to the start of the segment (inclusive).
Time-offset, relative to the beginning of the video, corresponding to the end of the segment (inclusive).
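On the wire these offsets are `google.protobuf.Duration` values; some client libraries surface them as native durations (e.g. `datetime.timedelta` in Python's proto-plus types), but with raw protobuf messages the conversion to seconds is:

```python
from google.protobuf import duration_pb2

def offset_seconds(offset: duration_pb2.Duration) -> float:
    # A Duration stores whole seconds plus a nanosecond remainder.
    return offset.seconds + offset.nanos / 1e9
```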