Message describing AI-enabled Devices Input Config.
Used in:
(message has no fields)
Represents a hardware accelerator type.
Used in:
Unspecified accelerator type, which means no accelerator.
Nvidia Tesla K80 GPU.
Nvidia Tesla P100 GPU.
Nvidia Tesla V100 GPU.
Nvidia Tesla P4 GPU.
Nvidia Tesla T4 GPU.
Nvidia Tesla A100 GPU.
TPU v2.
TPU v3.
Message describing the Analysis object.
Used in:
The name of resource.
Output only. The create timestamp.
Output only. The update timestamp.
Labels as key value pairs.
The definition of the analysis.
Map from the input parameter in the definition to the real stream. E.g., suppose you have a stream source operator named "input-0" and you want to receive from the real stream "stream-0". You can add the following mapping: [input-0: stream-0].
Map from the output parameter in the definition to the real stream. E.g., suppose you have a stream sink operator named "output-0" and you want to send to the real stream "stream-0". You can add the following mapping: [output-0: stream-0].
Boolean flag to indicate whether you would like to disable the ability to automatically start a Process when a new event happens in the input Stream. If you would like to start Processes manually, this field needs to be set to true.
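To make the mappings concrete, here is a minimal Analysis sketch in protobuf text format; the field names (analysis_definition, input_streams_mapping, output_streams_mapping, disable_event_watch) and the operator and output argument names are assumptions, since this reference omits them:

    # Read from "stream-0", write to "stream-1", start Processes manually.
    analysis_definition {
      analyzers { analyzer: "input-0" operator: "StreamSource" }
      analyzers {
        analyzer: "output-0"
        operator: "StreamSink"
        inputs { input: "input-0:output" }  # hypothetical output argument name
      }
    }
    input_streams_mapping { key: "input-0" value: "stream-0" }
    output_streams_mapping { key: "output-0" value: "stream-1" }
    disable_event_watch: true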
The CloudEvent raised when an Analysis is created.
The data associated with the event.
Defines a full analysis. This is a description of the overall live analytics pipeline. You may think of this as an edge list representation of a multigraph. This may be directly authored by a human in protobuf textformat, or it may be generated by a programming API (perhaps Python or JavaScript depending on context).
Used in:
Analyzer definitions.
The CloudEvent raised when an Analysis is deleted.
The data associated with the event.
The data within all Analysis events.
Used in:
Optional. The Analysis event payload. Unset for deletion events.
The CloudEvent raised when an Analysis is updated.
The data associated with the event.
Defines an Analyzer. An analyzer processes data from its input streams using the logic defined in the Operator that it represents. Of course, it produces data for the output streams declared in the Operator.
Used in:
The name of this analyzer. Tentatively [a-z][a-z0-9]*(_[a-z0-9]+)*.
The name of the operator that this analyzer runs. Must match the name of a supported operator.
Input streams.
The attribute values that this analyzer applies to the operator. Supply a mapping between the attribute names and the actual value you wish to apply. If an attribute name is omitted, then it will take a preconfigured default value.
Debug options.
Options available for debugging purposes only.
Used in:
Environment variables.
The inputs to this analyzer. We accept input name references of the following form:
<analyzer-name>:<output-argument-name>
Example: Suppose you have an operator named "SomeOp" that has 2 output arguments, the first named "foo" and the second named "bar", and an operator named "MyOp" that accepts 2 inputs. Also suppose there is an analyzer named "some-analyzer" running "SomeOp" and another analyzer named "my-analyzer" running "MyOp". To indicate that "my-analyzer" is to consume "some-analyzer"'s "foo" output as its first input and "some-analyzer"'s "bar" output as its second input, set this field to the following:
input = ["some-analyzer:foo", "some-analyzer:bar"]
Used in:
The name of the stream input (as discussed above).
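A textproto sketch of the "my-analyzer" example above; the field names (analyzer, operator, inputs, input, attrs) follow the descriptions in this section but are assumptions:

    analyzers {
      analyzer: "my-analyzer"
      operator: "MyOp"
      inputs { input: "some-analyzer:foo" }  # first input
      inputs { input: "some-analyzer:bar" }  # second input
      # Omitted attributes fall back to their preconfigured defaults.
      attrs {
        key: "some_attribute"  # hypothetical attribute name
        value { i: 42 }        # int variant of the attribute value
      }
    }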
Message describing Application object
Used in:
name of resource
Output only. Create timestamp.
Output only. Update timestamp.
Labels as key value pairs
Required. A user friendly display name for the solution.
A description for this application.
Application graph configuration.
Output only. Application graph runtime info. Only exists when application state equals to DEPLOYED.
Output only. State of the application.
Billing mode of the application.
Message storing the runtime information of the application.
Used in:
Timestamp when the engine was deployed.
Globally created resources like warehouse dataschemas.
Monitoring-related configuration for this application.
Message about output resources from application.
Used in:
The full resource name of the outputted resources.
The name of the graph node that produces the output resource name. For example:
output_resource: /projects/123/locations/us-central1/corpora/my-corpus/dataSchemas/my-schema
producer_node: occupancy-count
The key of the output resource. It has to be unique within the same producer node. One producer node can output several output resources; the key can be used to match the corresponding output resources.
Monitoring-related configuration for an application.
Used in:
Whether this application has monitoring enabled.
Billing mode of the Application
Used in:
The default value.
Pay as you go billing mode.
Monthly billing mode.
State of the Application
Used in:
The default value. This value is used if the state is omitted.
State CREATED.
State DEPLOYING.
State DEPLOYED.
State UNDEPLOYING.
State DELETED.
State ERROR.
State CREATING.
State UPDATING.
State DELETING.
State FIXING.
Message storing the graph of the application.
Used in:
A list of nodes in the application graph.
The CloudEvent raised when an Application is created.
The data associated with the event.
The CloudEvent raised when an Application is deleted.
The data associated with the event.
The data within all Application events.
Used in:
Optional. The Application event payload. Unset for deletion events.
The CloudEvent raised when an Application is updated.
The data associated with the event.
Represents an actual value of an operator attribute.
Used in:
Attribute value.
int.
float.
bool.
string.
The metric specification that defines the target resource utilization (CPU utilization, accelerator's duty cycle, and so on) for calculating the desired replica count.
Used in:
Required. The resource metric name. Supported metrics: * For Online Prediction: * `aiplatform.googleapis.com/prediction/online/accelerator/duty_cycle` * `aiplatform.googleapis.com/prediction/online/cpu/utilization`
The target resource utilization in percentage (1% - 100%) for the given metric; once the real usage deviates from the target by a certain percentage, the machine replica count changes. The default value is 60 (representing 60%) if not provided.
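As a sketch, overriding the CPU utilization target to 80% (the same example used for DedicatedResources later in this section) could look like this in text format; the field names metric_name and target are assumptions:

    autoscaling_metric_specs {
      metric_name: "aiplatform.googleapis.com/prediction/online/cpu/utilization"
      target: 80  # replicas change once real usage deviates from 80%
    }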
Message of configurations for BigQuery processor.
Used in:
BigQuery table resource for Vision AI Platform to ingest annotations to.
Data schema. By default, Vision AI Application will try to write annotations to the target BigQuery table using the following schema:
ingestion_time: TIMESTAMP, the ingestion time of the original data.
application: STRING, name of the application which produces the annotation.
instance: STRING, ID of the instance which produces the annotation.
node: STRING, name of the application graph node which produces the annotation.
annotation: STRING or JSON, the actual annotation protobuf converted to a JSON string, with bytes fields as base64-encoded strings. It can be written to either a STRING or a JSON type column.
To forward annotation data to an existing BigQuery table, the customer needs to make sure the schema is compatible. The map maps an application node name to its corresponding Cloud Function endpoint, which transforms the annotations directly into a google.cloud.bigquery.storage.v1.AppendRowsRequest (only avro_rows or proto_rows should be set). If configured, annotations produced by the corresponding application node are sent to the Cloud Function first, before being forwarded to BigQuery. If the default table schema doesn't fit, the customer can use a Cloud Function to transform the annotation output from the Vision AI Application into an arbitrary BigQuery table schema.
* The Cloud Function will receive an AppPlatformCloudFunctionRequest where the annotations field is the JSON format of the Vision AI annotation.
* The Cloud Function should return an AppPlatformCloudFunctionResponse with the AppendRowsRequest stored in the annotations field.
* To drop the annotation, simply clear the annotations field in the returned AppPlatformCloudFunctionResponse.
If true, App Platform will create the BigQuery dataset and table with the default schema if the specified table doesn't exist. This doesn't work if any Cloud Function customized schema is specified, since the system doesn't know your desired schema. The JSON column type will be used in the default table created by App Platform.
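A minimal BigQuery processor config sketch, assuming the fields are named table, cloud_function_mapping, and create_default_table_if_not_exists; the project, table, and endpoint values are hypothetical:

    table: "projects/my-project/datasets/vision_ai/tables/annotations"
    cloud_function_mapping {
      key: "occupancy-count"  # application node name
      value: "https://us-central1-my-project.cloudfunctions.net/to-append-rows"
    }
    create_default_table_if_not_exists: true  # only works without a customized schema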
Message describing the Cluster object.
Used in:
Output only. Name of the resource.
Output only. The create timestamp.
Output only. The update timestamp.
Labels as key value pairs
Annotations to allow clients to store small amounts of arbitrary data.
Output only. The DNS name of the data plane service.
Output only. The current state of the cluster.
Output only. The private service connection service target name.
The current state of the cluster.
Used in:
Not set.
The PROVISIONING state indicates the cluster is being created.
The RUNNING state indicates the cluster has been created and is fully usable.
The STOPPING state indicates the cluster is being deleted.
The ERROR state indicates the cluster is unusable. It will be automatically deleted.
The CloudEvent raised when a Cluster is created.
The data associated with the event.
The CloudEvent raised when a Cluster is deleted.
The data associated with the event.
The data within all Cluster events.
Used in:
Optional. The Cluster event payload. Unset for deletion events.
The CloudEvent raised when a Cluster is updated.
The data associated with the event.
Describes the source info for a custom processor.
Used in:
The path where App Platform loads the artifacts for the custom processor.
The resource name of the original model hosted in the Vertex AI platform.
The original product which holds the custom processor's functionality.
Output only. Additional info related to the imported custom processor. Data is filled in by app platform during the processor creation.
Model schema files which specify the signature of the model. For VERTEX_CUSTOM models, the instances schema is required. If the instances schema is not specified during processor creation, VisionAI Platform will try to get it from Vertex; if it doesn't exist there, the creation will fail.
The schema is defined as an OpenAPI 3.0.2 [Schema Object](https://github.com/OAI/OpenAPI-Specification/blob/main/versions/3.0.2.md#schemaObject).
Used in:
Cloud Storage location to a YAML file that defines the format of a single instance used in prediction and explanation requests.
Cloud Storage location to a YAML file that defines the prediction and explanation parameters.
Cloud Storage location to a YAML file that defines the format of a single prediction or explanation.
Source type of the imported custom processor.
Used in:
Source type unspecified.
Custom processors coming from Vertex AutoML product.
Custom processors coming from general custom models from Vertex.
Source for Product Recognizer.
All supported data types.
Used in:
The default value of DataType.
Video data type like H264.
Image data type.
Protobuf data type, usually used for general data blob.
A description of resources that are dedicated to a DeployedModel, and that need a higher degree of manual configuration.
Used in:
Required. Immutable. The specification of a single machine used by the prediction.
Required. Immutable. The minimum number of machine replicas this DeployedModel will always be deployed on. This value must be greater than or equal to 1. If traffic against the DeployedModel increases, it may dynamically be deployed onto more replicas, and as traffic decreases, some of these extra replicas may be freed.
Immutable. The maximum number of replicas this DeployedModel may be deployed on when the traffic against it increases. If the requested value is too large, the deployment will error, but if deployment succeeds then the ability to scale the model to that many replicas is guaranteed (barring service outages). If traffic against the DeployedModel increases beyond what its replicas at maximum may handle, a portion of the traffic will be dropped. If this value is not provided, [min_replica_count][google.cloud.visionai.v1.DedicatedResources.min_replica_count] will be used as the default value. The value of this field impacts the charge against Vertex CPU and GPU quotas. Specifically, you will be charged for (max_replica_count * number of cores in the selected machine type) and (max_replica_count * number of GPUs per replica in the selected machine type).
Immutable. The metric specifications that override the target value of a resource utilization metric (CPU utilization, accelerator's duty cycle, and so on; defaults to 60 if not set). At most one entry is allowed per metric. If [machine_spec.accelerator_count][google.cloud.visionai.v1.MachineSpec.accelerator_count] is above 0, the autoscaling will be based on both the CPU utilization and the accelerator's duty cycle metrics, scaling up when either metric exceeds its target value and scaling down when both metrics are under their target values. The default target value is 60 for both metrics. If [machine_spec.accelerator_count][google.cloud.visionai.v1.MachineSpec.accelerator_count] is 0, the autoscaling will be based on the CPU utilization metric only, with a default target value of 60 if not explicitly set. For example, in the case of Online Prediction, if you want to override the target CPU utilization to 80, you should set [autoscaling_metric_specs.metric_name][google.cloud.visionai.v1.AutoscalingMetricSpec.metric_name] to `aiplatform.googleapis.com/prediction/online/cpu/utilization` and [autoscaling_metric_specs.target][google.cloud.visionai.v1.AutoscalingMetricSpec.target] to `80`.
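Putting the pieces together, a DedicatedResources sketch with assumed field names (machine_spec, min_replica_count, max_replica_count, autoscaling_metric_specs) and an assumed enum spelling for the T4 accelerator:

    dedicated_resources {
      machine_spec {
        machine_type: "n1-standard-4"
        accelerator_type: NVIDIA_TESLA_T4  # enum value name assumed
        accelerator_count: 1
      }
      min_replica_count: 1
      max_replica_count: 4  # charged against Vertex quotas at this count
      autoscaling_metric_specs {
        metric_name: "aiplatform.googleapis.com/prediction/online/cpu/utilization"
        target: 80
      }
    }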
Message describing Draft object
Used in:
name of resource
Output only. Create timestamp.
Output only. Update timestamp.
Labels as key value pairs
Required. A user friendly display name for the solution.
A description for this application.
The draft application configs which haven't been updated to an application.
The CloudEvent raised when a Draft is created.
The data associated with the event.
The CloudEvent raised when a Draft is deleted.
The data associated with the event.
The data within all Draft events.
Used in:
Optional. The Draft event payload. Unset for deletion events.
The CloudEvent raised when a Draft is updated.
The data associated with the event.
Message describing the Event object.
Used in:
Name of the resource.
Output only. The create timestamp.
Output only. The update timestamp.
Labels as key value pairs.
Annotations to allow clients to store small amounts of arbitrary data.
The clock used for joining streams.
Grace period for cleaning up the event. This is the time the controller waits before deleting the event. If there is any active channel on the event during this period, its deletion after grace_period will be ignored.
Clock that will be used for joining streams.
Used in:
Clock is not specified.
Use the timestamp when the data is captured. Clients need to sync the clock.
Use the timestamp when the data is received.
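A small Event sketch; the field names (alignment_clock, grace_period) and the enum value CAPTURE are assumptions:

    alignment_clock: CAPTURE       # join streams on capture timestamps
    grace_period { seconds: 600 }  # wait 10 minutes before cleaning up the event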
The CloudEvent raised when an Event is created.
The data associated with the event.
The CloudEvent raised when an Event is deleted.
The data associated with the event.
The data within all Event events.
Used in:
Optional. The Event event payload. Unset for deletion events.
The CloudEvent raised when an Event is updated.
The data associated with the event.
The Google Cloud Storage location for the input content.
Used in:
Required. References to Google Cloud Storage paths.
Message of configurations for General Object Detection processor.
Used in:
(message has no fields)
Specification of a single machine.
Used in:
Immutable. The type of the machine. See the [list of machine types supported for prediction](https://cloud.google.com/vertex-ai/docs/predictions/configure-compute#machine-types) See the [list of machine types supported for custom training](https://cloud.google.com/vertex-ai/docs/training/configure-compute#machine-types). For [DeployedModel][] this field is optional, and the default value is `n1-standard-2`. For [BatchPredictionJob][] or as part of [WorkerPoolSpec][] this field is required.
Immutable. The type of accelerator(s) that may be attached to the machine as per [accelerator_count][google.cloud.visionai.v1.MachineSpec.accelerator_count].
The number of accelerators to attach to the machine.
Message describing MediaWarehouseConfig.
Used in:
Resource name of the Media Warehouse corpus. Format: projects/${project_id}/locations/${location_id}/corpora/${corpus_id}
Deprecated.
The duration for which all media assets, associated metadata, and search documents are retained.
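A sketch of this config, assuming the fields are named corpus and ttl; the corpus path is hypothetical:

    corpus: "projects/my-project/locations/us-central1/corpora/my-corpus"
    ttl { seconds: 2592000 }  # keep assets, metadata, and search documents for 30 days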
All the supported model types in Vision AI App Platform.
Used in:
Processor Type UNSPECIFIED.
Model Type Image Classification.
Model Type Object Detection.
Model Type Video Classification.
Model Type Object Tracking.
Model Type Action Recognition.
Model Type Occupancy Counting.
Model Type Person Blur.
Model Type Vertex Custom.
Message describing node object.
Used in:
By default, the output of the node will only be available to downstream nodes. To consume the direct output from the application node, the output must first be sent to Vision AI Streams. By setting output_all_output_channels_to_stream to true, App Platform will automatically send all the outputs of the current node to Vision AI Stream resources (one stream per output channel). The output stream resources will be created by App Platform automatically during deployment and deleted after application un-deployment. Note that this config applies to all the Application Instances. The output stream can be overridden at the instance level by configuring the `output_resources` section of the Instance resource: `producer_node` should be the current node, `output_resource_binding` should be the output channel name (or leave it blank if the processor has only 1 output channel), and `output_resource` should be the target output stream.
Required. A unique name for the node.
A user friendly display name for the node.
Node config.
Processor name, referring to the chosen processor resource.
Parent node. Input nodes should not have a parent node. For V1 Alpha1/Beta, only the media warehouse node can have multiple parents; other types of nodes will only have one parent.
Message describing one edge pointing into a node.
Used in:
The name of the parent node.
The connected output artifact of the parent node. It can be omitted if target processor only has 1 output artifact.
The connected input channel of the current node's processor. It can be omitted if target processor only has 1 input channel.
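For illustration, a node with one parent edge, sketched with assumed field names (nodes, name, display_name, processor, parents, parent_node) and a hypothetical processor resource:

    nodes {
      name: "occupancy-count"
      display_name: "Occupancy counting"
      processor: "builtin:occupancy-counting"  # hypothetical processor name
      parents {
        parent_node: "input-stream"
        # parent_output_channel and connected_input_channel omitted:
        # both sides have a single channel in this sketch.
      }
      output_all_output_channels_to_stream: true
    }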
Normalized Polygon.
Used in:
The bounding polygon normalized vertices. Top left corner of the image will be [0, 0].
Normalized Polyline, which represents a curve consisting of connected straight-line segments.
Used in:
A sequence of vertices connected by straight lines.
A vertex represents a 2D point in the image. NOTE: the normalized vertex coordinates are relative to the original image and range from 0 to 1.
Used in:
X coordinate.
Y coordinate.
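For example, a horizontal crossing line through the middle of the frame, assuming the repeated vertex field is named normalized_vertices:

    normalized_vertices { x: 0.0 y: 0.5 }
    normalized_vertices { x: 1.0 y: 0.5 }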
Message describing OccupancyCountConfig.
Used in:
Whether to count the appearances of people; output counts will have 'people' as the key.
Whether to count the appearances of vehicles; output counts will have 'vehicle' as the key.
Whether to track each individual object's loitering time inside the scene or a specific zone.
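A sketch with assumed field names (enable_people_counting, enable_vehicle_counting, enable_dwelling_time_tracking):

    enable_people_counting: true         # emit counts keyed by 'people'
    enable_vehicle_counting: false
    enable_dwelling_time_tracking: true  # per-object loitering time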
Message describing FaceBlurConfig.
Used in:
Person blur type.
Whether to blur only faces rather than the whole object in the processor.
Type of Person Blur
Used in:
PersonBlur Type UNSPECIFIED.
FaceBlur Type full occlusion.
FaceBlur Type blur filter.
Message describing PersonVehicleDetectionConfig.
Used in:
At least one of the enable_people_counting and enable_vehicle_counting fields must be set to true. Whether to count the appearances of people; output counts will have 'people' as the key.
Whether to count the appearances of vehicles; output counts will have 'vehicle' as the key.
Message describing PersonalProtectiveEquipmentDetectionConfig.
Used in:
Whether to enable face coverage detection.
Whether to enable head coverage detection.
Whether to enable hands coverage detection.
Message describing the Process object.
Used in:
The name of resource.
Output only. The create timestamp.
Output only. The update timestamp.
Required. Reference to an existing Analysis resource.
Optional. Attribute overrides of the Analyzers. Format for each single override item: "{analyzer_name}:{attribute_key}={value}"
Optional. Status of the Process.
Optional. Run mode of the Process.
Optional. Event ID of the input/output streams. This is useful when you have a StreamSource/StreamSink operator in the Analysis, and you want to manually specify the Event to read from/write to.
Optional. Batch ID of the Process.
Optional. The number of retries for a process in submission mode that the system should attempt before declaring failure. By default, no retry will be performed.
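A Process sketch in submission mode; the field names (analysis, attribute_overrides, run_mode, event_id, retry_count), the enum value SUBMISSION, and all resource names are assumptions:

    analysis: "projects/my-project/locations/us-central1/clusters/c1/analyses/a1"
    attribute_overrides: "my-analyzer:confidence_threshold=0.7"  # {analyzer_name}:{attribute_key}={value}
    run_mode: SUBMISSION
    event_id: "event-1"
    retry_count: 3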
The CloudEvent raised when a Process is created.
The data associated with the event.
The CloudEvent raised when a Process is deleted.
The data associated with the event.
The data within all Process events.
Used in:
Optional. The Process event payload. Unset for deletion events.
The CloudEvent raised when a Process is updated.
The data associated with the event.
Message describing Processor object. Next ID: 19
Used in:
name of resource.
Output only. Create timestamp.
Output only. Update timestamp.
Labels as key value pairs.
Required. A user friendly display name for the processor.
Illustrative sentences for describing the functionality of the processor.
Output only. Processor Type.
Model Type.
Source info for customer created processor.
Output only. State of the Processor.
Output only. The input / output specifications of a processor. Each type of processor has fixed input / output specs which cannot be altered by the customer.
Output only. The corresponding configuration can be used in the Application to customize the behavior of the processor.
Indicates if the processor supports post processing.
Used in:
Unspecified Processor state.
Processor is being created (not ready for use).
Processor is active and ready for use.
Processor is being deleted (not ready for use).
Processor has been deleted or its creation failed.
Type
Used in:
Processor Type UNSPECIFIED.
Processor Type PRETRAINED. Pretrained processors are developed by Vision AI App Platform with state-of-the-art vision data processing functionality, like occupancy counting or person blur. Pretrained processors are usually publicly available.
Processor Type CUSTOM. Custom processors are specialized processors which are either uploaded by customers or imported from other GCP platforms (for example, Vertex AI). Custom processors are only visible to their creator.
Processor Type CONNECTOR. Connector processors are special processors which perform I/O for the application; they do not process the data themselves but either deliver data to other processors or receive data from other processors.
Next ID: 29
Used in:
Configs of stream input processor.
Config of AI-enabled input devices.
Configs of media warehouse processor.
Configs of person blur processor.
Configs of occupancy count processor.
Configs of Person Vehicle Detection processor.
Configs of Vertex AutoML vision processor.
Configs of Vertex AutoML video processor.
Configs of Vertex Custom processor.
Configs of General Object Detection processor.
Configs of BigQuery processor.
Configs of Personal Protective Equipment Detection processor.
The CloudEvent raised when a Processor is created.
The data associated with the event.
The CloudEvent raised when a Processor is deleted.
The data associated with the event.
The data within all Processor events.
Used in:
Optional. The Processor event payload. Unset for deletion events.
Message describing the input / output specifications of a processor.
Used in:
For processors with input_channel_specs, the processor must be explicitly connected to another processor.
The output artifact specifications for the current processor.
The input resource that needs to be fed from the application instance.
The output resource that the processor will generate per instance. Other than the explicitly listed output bindings here, all the processors' GraphOutputChannels can be bound to stream resources. The binding name is then the same as the GraphOutputChannel's name.
Message for input channel specification.
Used in:
The name of the current input channel.
The data types of the current input channel. When this field has more than 1 value, it means this input channel can be connected to either of these different data types.
If specified, only those detailed data types can be connected to the processor. For example, jpeg stream for MEDIA, or PredictionResult proto for PROTO type. If unspecified, then any proto is accepted.
Whether the current input channel is required by the processor. For example, for a processor with required video input and optional audio input, if the video input is missing, the application will be rejected, while the audio input can be missing as long as the video input exists.
How many input edges can be connected to this input channel. 0 means unlimited.
Message for output channel specification.
Used in:
The name of the current output channel.
The data type of the current output channel.
Message for instance resource channel specification. External resources are virtual nodes which are not expressed in the application graph. Each processor expresses its out-of-graph spec, so the customer is able to override the external sources or destinations.
Used in:
The configuration proto that includes the Googleapis resources, e.g. type.googleapis.com/google.cloud.vision.v1.StreamWithAnnotation.
The direct type URL of the Googleapis resource, e.g. type.googleapis.com/google.cloud.vision.v1.Asset.
Name of the input binding, unique within the processor.
Used in:
Name of the output binding, unique within the processor.
The resource type uri of the acceptable output resource.
Whether the output resource needs to be explicitly set in the instance. If it is false, the processor will automatically generate it if required.
The CloudEvent raised when a Processor is updated.
The data associated with the event.
RunMode represents the mode to launch the Process on.
Used in:
Mode is unspecified.
Live mode. Meaning the Process is launched to handle live video source, and possible packet drops are expected.
Submission mode. Meaning the Process is launched to handle bounded video files, with no packet drop. Completion status is tracked.
Message describing the status of the Process.
Used in:
The state of the Process.
The reason for entering this state.
State represents the running status of the Process.
Used in:
State is unspecified.
INITIALIZING means the Process is scheduled but not yet ready to handle real traffic.
RUNNING means the Process is up and running and handling traffic.
COMPLETED means the Process has completed the processing, especially for non-streaming use case.
FAILED means the Process failed to complete the processing.
PENDING means the Process is created but yet to be scheduled.
Message describing the Series object.
Used in:
Name of the resource.
Output only. The create timestamp.
Output only. The update timestamp.
Labels as key value pairs.
Annotations to allow clients to store small amounts of arbitrary data.
Required. Stream that is associated with this series.
Required. Event that is associated with this series.
The CloudEvent raised when a Series is created.
The data associated with the event.
The CloudEvent raised when a Series is deleted.
The data associated with the event.
The data within all Series events.
Used in:
Optional. The Series event payload. Unset for deletion events.
The CloudEvent raised when a Series is updated.
The data associated with the event.
Message describing the Stream object. The Stream and the Event resources are many to many; i.e., each Stream resource can associate to many Event resources and each Event resource can associate to many Stream resources.
Used in:
Name of the resource.
Output only. The create timestamp.
Output only. The update timestamp.
Labels as key value pairs.
Annotations to allow clients to store small amounts of arbitrary data.
The display name for the stream resource.
Whether to enable the HLS playback service on this stream.
The name of the media warehouse asset for long term storage of stream data. Format: projects/${p_id}/locations/${l_id}/corpora/${c_id}/assets/${a_id} Leave it empty if media warehouse storage is not needed for the stream.
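A Stream sketch with assumed field names (display_name, enable_hls, media_warehouse_asset) and hypothetical resource paths:

    display_name: "lobby-camera"
    enable_hls: false
    media_warehouse_asset: "projects/my-project/locations/us-central1/corpora/my-corpus/assets/lobby"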
Message about annotations on a Vision AI stream resource.
Used in:
Annotation for type ACTIVE_ZONE.
Annotation for type CROSSING_LINE
ID of the annotation. It must be unique within the context where it is used, for example, among all the annotations on one input stream of a Vision AI application.
User-friendly name for the annotation.
The Vision AI stream resource name.
The actual type of Annotation.
Enum describing all possible types of a stream annotation.
Used in:
Type UNSPECIFIED.
An active_zone annotation defines a polygon on top of the content from an image/video based stream; subsequent processing will only focus on the content inside the active zone.
A crossing_line annotation defines a polyline on top of the content from an image/video based Vision AI stream; events happening across the line will be captured. For example, the counts of people who cross the line in the Occupancy Analytics processor.
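An ACTIVE_ZONE sketch that restricts processing to the center of the frame; the field names (id, display_name, source_stream, type, active_zone) and the enum value spelling are assumptions:

    id: "zone-1"
    display_name: "Center zone"
    source_stream: "projects/my-project/locations/us-central1/clusters/c1/streams/lobby-camera"
    type: STREAM_ANNOTATION_TYPE_ACTIVE_ZONE  # enum value name assumed
    active_zone {
      normalized_vertices { x: 0.25 y: 0.25 }
      normalized_vertices { x: 0.75 y: 0.25 }
      normalized_vertices { x: 0.75 y: 0.75 }
      normalized_vertices { x: 0.25 y: 0.75 }
    }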
The CloudEvent raised when a Stream is created.
The data associated with the event.
The CloudEvent raised when a Stream is deleted.
The data associated with the event.
The data within all Stream events.
Used in:
Optional. The Stream event payload. Unset for deletion events.
The CloudEvent raised when a Stream is updated.
The data associated with the event.
Message describing a Vision AI stream with application specific annotations. All the StreamAnnotation objects inside this message MUST have unique IDs.
Used in:
Vision AI Stream resource name.
Annotations that will be applied to the whole application.
Annotations that will be applied to the specific node of the application. If the same type of annotation is applied to both the application and a node, the node annotation will be added in addition to the global application one. For example, if there is one active zone annotation for the whole application and one active zone annotation for the Occupancy Analytics processor, then the Occupancy Analytics processor will have two active zones defined.
Message describing annotations specific to application node.
Used in:
The node name of the application graph.
The node specific stream annotations.
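To illustrate the layering rule above, a sketch with one global annotation and one node-specific annotation (annotation bodies elided; the field names stream, application_annotations, node_annotations, node, and annotations are assumptions):

    stream: "projects/my-project/locations/us-central1/clusters/c1/streams/lobby-camera"
    application_annotations { id: "zone-app" }  # applies to every node
    node_annotations {
      node: "occupancy-count"
      annotations { id: "zone-node" }  # this node gets both "zone-app" and "zone-node"
    }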
Message describing VertexAutoMLVideoConfig.
Used in:
Only entities with a higher score than the threshold will be returned. Value 0.0 means to return all the detected entities.
Labels specified in this field won't be returned.
At most this many predictions will be returned per output frame. Value 0 means to return all the detected entities.
Only bounding boxes whose size is larger than this limit will be returned. Object Tracking only. Value 0.0 means to return all the detected entities.
Message of configurations of Vertex AutoML Vision Processors.
Used in:
Only entities with higher score than the threshold will be returned. Value 0.0 means to return all the detected entities.
At most this many predictions will be returned per output frame. Value 0 means to return all the detected entities.
Message describing VertexCustomConfig.
Used in:
The max prediction frames per second. This attribute sets how fast the operator sends prediction requests to the Vertex AI endpoint. The default value is 0, which means there is no max prediction FPS limit; the operator sends prediction requests at the input FPS.
A description of resources that are dedicated to the DeployedModel, and that need a higher degree of manual configuration.
If not empty, the prediction result will be sent to the specified Cloud Function for post processing.
* The Cloud Function will receive an AppPlatformCloudFunctionRequest where the annotations field is the JSON format of the proto PredictResponse.
* The Cloud Function should return an AppPlatformCloudFunctionResponse with the PredictResponse stored in the annotations field.
* To drop the prediction output, simply clear the payload field in the returned AppPlatformCloudFunctionResponse.
If true, the prediction request received by the custom model will also contain metadata with the following schema:
'appPlatformMetadata': {
    'ingestionTime': DOUBLE; (UNIX timestamp)
    'application': STRING;
    'instanceId': STRING;
    'node': STRING;
    'processor': STRING;
}
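A VertexCustomConfig sketch with assumed field names (max_prediction_fps, dedicated_resources, post_processing_cloud_function, attach_application_metadata) and a hypothetical Cloud Function endpoint:

    max_prediction_fps: 5
    dedicated_resources {
      machine_spec { machine_type: "n1-standard-4" }
      min_replica_count: 1
      max_replica_count: 2
    }
    post_processing_cloud_function: "https://us-central1-my-project.cloudfunctions.net/post-process"
    attach_application_metadata: true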
Message describing Video Stream Input Config. This message should only be used as a placeholder for the builtin:stream-input processor; the actual stream binding should be specified using the corresponding API.
Used in: