* Network backend Settings
Used in:
* input tensor settings, optional
* output tensor settings, optional
* inference framework
* [deprecated] TRT-IS inference framework. Use triton instead of trt_is
* Triton inference framework
* Output tensor memory type. Default: MEMORY_TYPE_DEFAULT, which is Triton's preferred memory type.
* disable warmup
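As an illustration, a backend block in protobuf text format might look like the following sketch; it assumes the field names used by the DeepStream nvinferserver sample configs, and the model/tensor names are placeholders:

    backend {
      # optional input/output tensor settings; names are placeholders
      inputs [ { name: "input_1" } ]
      outputs [ { name: "output_1" } ]
      # Triton in-process inference framework (trt_is is deprecated)
      triton {
        model_name: "my_model"   # placeholder model name
        version: -1              # -1 means latest version
      }
      # default lets Triton pick its preferred memory type
      output_mem_type: MEMORY_TYPE_DEFAULT
    }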
* Deepstream Classification settings
Used in:
* classification threshold
* custom function for classification parsing
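A minimal classification postprocess sketch, assuming the sample-config field names; the custom parse function name is a placeholder exported by a custom lib:

    postprocess {
      classification {
        threshold: 0.51   # classification threshold
        # optional custom parsing function (placeholder name)
        custom_parse_classifier_func: "NvDsInferParseCustomClassifier"
      }
    }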
* Custom lib for preload
Used in:
* Path pointing to the custom library
* Deepstream Detection settings
Used in:
* Number of classes detected by a detector network.
* Per-class detection parameters. The key-value mapping is <class_id:class_parameter>
* Name of the custom bounding box function in the custom library.
* clustering methods for bboxes; choose only one
* non-maximum-suppression, reserved, not supported yet
* DbScan clustering parameters
* grouping rectangles
* simple threshold filter
* DBScan object clustering
Used in:
* Bounding box detection threshold.
* reserved field: float post_threshold = 2;
* Epsilon to control merging of overlapping boxes
* Minimum boxes in a cluster to be considered an object
* Minimum score in a cluster for it to be considered as an object
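For example, a detection block using DBSCAN clustering could be sketched as below (values are illustrative):

    postprocess {
      detection {
        num_detected_classes: 4
        dbscan {
          pre_threshold: 0.2   # bounding box detection threshold
          eps: 0.7             # controls merging of overlapping boxes
          min_boxes: 3         # minimum boxes in a cluster
          min_score: 0.7       # minimum cluster score
        }
      }
    }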
* cluster method based on grouping rectangles
Used in:
* detections with a score below this threshold are rejected
* how many bboxes can be clustered together
* Epsilon to control merging of overlapping boxes
* non-maximum-suppression cluster method
Used in:
* detections with a score below this threshold are rejected
* IOU threshold
* top-k detection results to keep after NMS; 0 keeps all
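A sketch of the NMS cluster method with illustrative values:

    postprocess {
      detection {
        num_detected_classes: 4
        nms {
          confidence_threshold: 0.3   # reject detections scoring below this
          iou_threshold: 0.5          # IOU threshold
          topk: 20                    # keep top-k results after NMS; 0 keeps all
        }
      }
    }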
* specific parameters controlled per class
Used in:
* pre-threshold used to filter out detections with confidence below this value
* simple cluster method for confidence filter
Used in:
* detections with a score below this threshold are rejected
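A sketch of the simple confidence-filter cluster method:

    postprocess {
      detection {
        num_detected_classes: 4
        simple_cluster {
          threshold: 0.2   # reject detections scoring below this
        }
      }
    }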
* extra controls
Used in:
* enable to copy input tensor data for application output parsing; disabled by default
* defines how many buffers are allocated for output tensors in the pool. Optional; default is 2, and the value can be in the range [2, 10+]
* custom function to create a specific IInferCustomProcessor, e.g. custom_process_funcion: CreateCustomProcessor
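A sketch of an extra block; custom_process_funcion takes the name of a factory function (placeholder here) exported by the custom lib:

    extra {
      copy_input_to_host_buffers: false   # enable only if the app parses input tensors
      output_buffer_pool_size: 4          # optional; default 2, range [2, 10+]
      custom_process_funcion: "CreateCustomProcessor"   # placeholder factory name
    }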
Used in:
* Inference configuration
Used in:
* unique id, greater than 0; required for multiple-model inference
* GPU id settings. Optional; only a single GPU is supported at this time. Default value: [0]
* max batch size. Required; can be reset by the plugin
* inference backend parameters. Required
* preprocessing for tensors, required
* postprocessing for all tensor data, required
* Custom libs for tensor output parsing or preload, optional
* advanced controls, optional
* extra controls
* LSTM controller
* LSTM parameters
* Clip object bounding boxes that lie outside the RoI specified by the nvdspreprocess plugin.
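Putting the fields together, a top-level infer_config skeleton might look like this sketch (paths and names are placeholders, following the DeepStream nvinferserver sample configs):

    infer_config {
      unique_id: 1        # > 0; required for multiple-model inference
      gpu_ids: [0]        # single GPU only at this time
      max_batch_size: 4   # can be reset by the plugin
      backend {
        triton {
          model_name: "my_detector"   # placeholder
          version: -1
          model_repo { root: "/opt/models" }   # placeholder path
        }
      }
      preprocess { network_format: IMAGE_FORMAT_RGB }
      postprocess {
        labelfile_path: "labels.txt"   # placeholder
        detection {
          num_detected_classes: 4
          simple_cluster { threshold: 0.2 }
        }
      }
      extra { output_buffer_pool_size: 2 }
    }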
* Network Input layer information
Used in:
* input tensor name, optional
* fixed inference shape, only required when backend has wildcard shape
* tensor data type, optional. default TENSOR_DT_NONE
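An input layer sketch; dims is only needed when the backend reports wildcard shapes, and the tensor name is a placeholder:

    backend {
      inputs [ {
        name: "input_1"             # optional tensor name (placeholder)
        dims: [3, 224, 224]         # fixed shape for wildcard backend shapes
        data_type: TENSOR_DT_FP32   # optional; default TENSOR_DT_NONE
      } ]
    }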
* Input tensor is preprocessed
Used in:
* first dim is not a batch size
* Network LSTM Parameters
Used in:
* init constant value for lstm input tensors, usually zero or one
Used in:
* const value
* LSTM loop information
Used in:
* input tensor name
* output tensor name
* initialize input tensor for first frame
* init const value, default is zero
* enable to keep LSTM output tensor data for application output parsing; disabled by default
Used in:
* Tensor memory type
Used in:
* Other network settings; the application needs to do the postprocessing
Used in:
* reserved field
* Network output layer information
Used in:
* output tensor name
* set max buffer bytes for output tensor
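An output layer sketch; the max-buffer-bytes field name is assumed from the comment above, and the tensor name is a placeholder:

    backend {
      outputs [ {
        name: "output_bbox"         # placeholder tensor name
        max_buffer_bytes: 1048576   # assumed field name; caps the output tensor buffer
      } ]
    }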
* Plugin Control settings for input / inference / output
* Low-level libnvds_infer_server inference configuration settings
* Control plugin input buffers, object filter before inference
* Control plugin output meta data after inference
* Bounding box filter
Used in:
* Bounding box minimum width
* Bounding box minimum height
* Bounding box maximum width
* Bounding box maximum height
* Color values for Red/Green/Blue/Alpha, all values are in range [0, 1]
Used in:
* Red color value
* Green color value
* Blue color value
* Alpha color value
* Detection of classes filter
Used in:
* Detection Bounding box filter
* Offset of the RoI from the top of the frame. Only objects within the RoI are output
* Offset of the RoI from the bottom of the frame. Only objects within the RoI are output
* Specify border color for detection bounding boxes
* Specify background color for detection bounding boxes
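A sketch of a detection class filter inside output control; bbox_filter matches the sample configs, while the RoI and color field names are assumed from the comments above:

    output_control {
      detect_control {
        default_filter {
          bbox_filter { min_width: 32, min_height: 32 }   # drop tiny boxes
          roi_top_offset: 0        # assumed field name
          roi_bottom_offset: 0     # assumed field name
          border_color { r: 1.0, g: 0.0, b: 0.0, a: 1.0 }   # assumed field name; red boxes
        }
      }
    }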
* Plugin input data control policy
Used in:
* Processing mode setting, optional
* Unique ID of the GIE on whose metadata (bounding boxes) this GIE is to operate. Used for secondary GIEs only.
* Class IDs of the parent GIE on which this GIE is to operate. Used for secondary GIEs only.
* Specifies the number of consecutive batches to be skipped for inference. Default is 0.
For primary inference
For secondary inference
* Enables inference on detected objects and asynchronous metadata attachment. Works only when the tracker-id is valid. Used only for classifiers with secondary GIEs.
* Input object filter policy
* input object control settings
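A sketch of an input_control block for a secondary GIE, using field names from the DeepStream sample configs:

    input_control {
      process_mode: PROCESS_MODE_CLIP_OBJECTS   # operate on detected objects
      operate_on_gie_id: 1                      # parent (primary) GIE's unique id
      operate_on_class_ids: [0]                 # restrict to these parent class ids
      interval: 0                               # skip no batches
      async_mode: true                          # classifiers with secondary GIE only
      object_control {
        bbox_filter { min_width: 64, min_height: 64 }   # filter objects before inference
      }
    }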
* Input objects control
Used in:
* Input bounding box of objects filter
* Plugin output data control policy
Used in:
* Enable attaching inference output tensor metadata
* Postprocessing control policy
* Detection results filter
* Classifier type of a particular nvinferserver component.
* Output detection results control
Used in:
* Default detection classes filter
* specifies detection filters per class instead of default filter
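A sketch of an output_control block; output_tensor_meta and default_filter follow the sample configs, while the per-class map layout is an assumption:

    output_control {
      output_tensor_meta: true   # attach raw output tensors as metadata
      detect_control {
        default_filter { bbox_filter { min_width: 32, min_height: 32 } }
        # assumed map layout: key is the class id
        specific_class_filters { key: 1 value { bbox_filter { min_width: 64, min_height: 64 } } }
      }
    }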
* Processing Mode
Used in:
* Processing Default Mode
* Processing Full Frame Mode
* Processing Object Clipping Mode
* Post-processing settings
Used in:
* label file path. It is relative to the config file path if the value is not an absolute path
* post-process can only have one of the following types
* deepstream detection parameters
* deepstream classification parameters
* deepstream segmentation parameters
* deepstream other postprocessing parameters
* [deprecated] TRT-IS classification parameters
* Triton classification parameters, replacing trtis_classification
* preprocessing settings
Used in:
* Network input format
* Network input tensor order
* network tensor name that the preprocessed data is set to
* Indicates whether the aspect ratio should be maintained when scaling to network resolution. Right/bottom areas will be filled with black.
* Compute hardware to use for scaling frames / objects.
* Interpolation filter to use while scaling. Refer to NvBufSurfTransform_Inter for supported filter values.
* Preprocessing methods
* usual scaling normalization for images
* Indicates whether symmetric padding should be used while scaling to network resolution. Bottom-right padding is used by default.
* Input data normalization settings
Used in:
* Normalization factor to scale the input pixels with.
* Per-channel offsets for mean subtraction. This is an alternative to the mean image file. The number of offsets in the array should be exactly equal to the number of input channels.
* Path to the mean image file (PPM format). Resolution of the file should be equal to the network input resolution.
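A preprocess sketch combining the settings above with normalization (values are illustrative; field names follow the sample configs):

    preprocess {
      network_format: IMAGE_FORMAT_RGB
      tensor_order: TENSOR_ORDER_LINEAR
      maintain_aspect_ratio: 0
      frame_scaling_hw: FRAME_SCALING_HW_DEFAULT
      frame_scaling_filter: 1   # see NvBufSurfTransform_Inter for values
      normalize {
        scale_factor: 0.0039215686         # 1/255
        channel_offsets: [0.0, 0.0, 0.0]   # per-channel mean subtraction
      }
    }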
* Deepstream segmentation settings
Used in:
* Segmentation threshold
* Number of classes detected by the segmentation network.
* Custom function for parsing segmentation output
Used in:
Used in:
* Triton classification settings
Used in:
* top-k classification results
* classification threshold
* [optional] specify which output tensor is used for triton classification.
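A sketch of Triton classification in the postprocess block (field names assumed from the comments above):

    postprocess {
      triton_classification {
        topk: 1          # keep the top-1 result
        threshold: 0.3   # classification threshold
      }
    }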
Used in:
* Enable sharing of input CUDA buffers with the local Triton server. If enabled, input CUDA buffers are shared with the Triton server to improve performance. This feature should be enabled only when the Triton server is on the same machine. Applicable on x86 dGPU platforms; not supported on Jetson devices. Disabled by default; CUDA buffers are copied to system memory while creating the inference request.
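For the gRPC mode, a sketch like the following would apply; the url and buffer-sharing flag assume a Triton server on the same machine:

    backend {
      triton {
        model_name: "my_model"   # placeholder
        version: -1
        grpc {
          url: "localhost:8001"              # Triton gRPC endpoint on the same machine
          enable_cuda_buffer_sharing: true   # x86 dGPU only; not supported on Jetson
        }
      }
    }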
* Triton models repo settings
Used in:
* root directory for all models. All models should set the same @a root value
* log verbose level; the larger the value, the more log output. (0): ERROR; (1): WARNING; (2): INFO; (3+): VERBOSE
* enable strict model config. true: config.pbtxt must exist. false: Triton tries to deduce the model's config file, which may cause failures with mismatched input/output dims.
* tensorflow gpu memory fraction, default 0.0
* tensorflow soft placement, allowed by default
* minimum compute capability, dGPU: default 6.0; Jetson: default 5.3.
* triton backends directory
* triton model control mode. Select from "none": load all models in the 'root' repo once at startup; or "explicit": load/unload models via 'TritonParams'. If the value is empty, "explicit" is used by default.
* Triton server reserved CUDA memory size for each device. If a device is not added, the Triton runtime's default memory size of 256MB is used. If \a cuda_memory_pool_byte_size is set to 0, the plugin will not reserve CUDA memory on that device.
* Triton server reserved pinned memory size during initialization
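A model_repo sketch pulling these settings together (path is a placeholder; field names follow the DeepStream sample configs):

    model_repo {
      root: "/opt/models/triton_model_repo"   # placeholder path; same root for all models
      log_level: 2                            # INFO
      strict_model_config: true               # require config.pbtxt
      tf_gpu_memory_fraction: 0.35            # tensorflow GPU memory fraction
      backend_configs { backend: "tensorflow", setting: "version", value: "2" }
      cuda_device_memory { device: 0, memory_pool_byte_size: 67108864 }   # 64MB on GPU 0
      pinned_memory_pool_byte_size: 67108864
    }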
* Triton backend config settings
Used in:
* backend name
* backend setting name
* backend setting value
* CUDA memory settings for a GPU device
Used in:
* GPU device id
* CUDA memory pool byte size
* Triton inference backend parameters
Used in:
* Triton model name
* model version, -1 is for latest version, required
* Triton classifications, optional
* Triton server model repo; all models must have the same @a model_repo