Statistics for an accuracy value over multiple runs of evaluation. Next ID: 5
Maximum value observed for any Run.
Minimum value observed for any Run.
Average value across all Runs.
Standard deviation across all Runs.
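The four statistics above can be sketched in plain Python (an illustration, not the actual evaluation code; whether the stage uses population or sample standard deviation is an assumption here):

```python
# Aggregate max/min/avg/std for an accuracy value across runs.
import statistics

def accuracy_statistics(run_values):
    """Returns (max, min, avg, std) across all runs."""
    avg = sum(run_values) / len(run_values)
    std = statistics.pstdev(run_values)  # assumed: population std deviation
    return max(run_values), min(run_values), avg, std

# Example: accuracy observed over four evaluation runs.
stats = accuracy_statistics([0.70, 0.72, 0.68, 0.70])
```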
Contains parameters that define how an EvaluationStage will be executed. This would typically be validated only once during initialization, so should not contain any variables that change with each run. Next ID: 3
Specification defining what this stage does, and any required parameters.
Metrics returned from EvaluationStage.LatestMetrics() need not have all fields set.
Total number of times the EvaluationStage is run.
Process-specific numbers such as latencies, accuracy, etc.
Metrics from evaluation of the image classification task. Next ID: 5
Not set if topk_accuracy_eval_params was not populated in ImageClassificationParams.
Parameters that define how the Image Classification task is evaluated end-to-end. Next ID: 3
Required. The TFLite model should have 1 input & 1 output tensor. Input shape: {1, image_height, image_width, 3} Output shape: {1, num_total_labels}
Optional. If not set, accuracy evaluation is not performed.
Parameters that define how images are preprocessed. Next ID: 5
Required.
Required.
Same as tflite::TfLiteType.
Fraction for central-cropping. A central-cropping fraction of 0.875 is considered best for Inception models, hence the default value. See: https://github.com/tensorflow/tpu/blob/master/models/experimental/inception/inception_preprocessing.py#L296 Set to 0 to disable cropping.
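The cropping step above can be sketched as follows: keep the middle `fraction` of the image along each dimension. This is illustrative only; the real preprocessing stage also resizes and converts types.

```python
# Compute the bounds of a central crop covering `fraction` of each dimension.
def central_crop_bounds(image_height, image_width, fraction=0.875):
    """Returns (top, left, crop_height, crop_width) for a central crop."""
    crop_h = int(image_height * fraction)
    crop_w = int(image_width * fraction)
    top = (image_height - crop_h) // 2
    left = (image_width - crop_w) // 2
    return top, left, crop_h, crop_w

# A 256x256 image cropped with the default 0.875 fraction.
bounds = central_crop_bounds(256, 256)
```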
Metrics computed from comparing TFLite execution in two settings: 1. User-defined TfliteInferenceParams (The 'test' setting) 2. Default TfliteInferenceParams (The 'reference' setting) Next ID: 4
Latency metrics from Single-thread CPU inference.
Latency from TfliteInferenceParams under test.
For reference & test output vectors {R, T}, the error is computed as: Mean([Abs(R[i] - T[i]) for i in num_elements]). output_errors[v]: statistics for the error value of the v-th output vector across all Runs.
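The error formula above, written out in plain Python: for each output vector, the mean absolute difference between reference (R) and test (T) elements. Names here are illustrative.

```python
# Mean absolute element-wise error between a reference and a test output vector.
def output_error(reference, test):
    assert len(reference) == len(test)
    return sum(abs(r - t) for r, t in zip(reference, test)) / len(reference)

err = output_error([0.1, 0.5, 0.9], [0.1, 0.4, 1.0])
```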
Latency numbers in microseconds, based on all EvaluationStage::Run() calls so far. Next ID: 7
Latency for the last Run.
Maximum latency observed for any Run.
Minimum latency observed for any Run.
Sum of all Run latencies.
Average latency across all Runs.
Standard deviation for latency across all Runs.
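Aggregating the per-run latency numbers above (in microseconds) can be sketched as below, assuming one value is recorded per Run() call; the field grouping is illustrative.

```python
# Aggregate last/max/min/sum/avg/std over per-run latencies in microseconds.
import statistics

def latency_statistics(latencies_us):
    return {
        "last": latencies_us[-1],
        "max": max(latencies_us),
        "min": min(latencies_us),
        "sum": sum(latencies_us),
        "avg": sum(latencies_us) / len(latencies_us),
        "std": statistics.pstdev(latencies_us),  # assumed: population std
    }

lat = latency_statistics([1200, 1150, 1300, 1150])
```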
Average Precision metrics from Object Detection task. Next ID: 3
One entry for each threshold in ObjectDetectionAveragePrecisionParams::iou_thresholds, averaged over all classes.
Average of Average Precision across all IoU thresholds.
Average Precision value for a particular IoU threshold. Next ID: 3
Parameters that define how Average Precision is computed for Object Detection task. Refer for details: http://cocodataset.org/#detection-eval Next ID: 4
Total object classes. The AP value returned for each IoU threshold is an average over all classes encountered in predicted/ground truth sets.
A predicted box matches a ground truth box if and only if the IoU between the two is larger than an IoU threshold. AP is computed for all relevant {IoU threshold, class} combinations and averaged to get mAP. If left empty, evaluation is done for all IoU thresholds in the range 0.5:0.05:0.95 (min:increment:max).
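The default 0.5:0.05:0.95 threshold grid mentioned above can be generated as follows (the values and step come from the comment; the generation code itself is illustrative):

```python
# Build the default IoU threshold list 0.50, 0.55, ..., 0.95.
def default_iou_thresholds(start=0.5, step=0.05, stop=0.95):
    n = int(round((stop - start) / step)) + 1
    return [round(start + i * step, 2) for i in range(n)]

thresholds = default_iou_thresholds()
```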
AP is computed as the average of maximum precision at (1 + num_recall_points) recall levels. E.g., if num_recall_points is 10, recall levels are 0., 0.1, 0.2, ..., 0.9, 1.0. Default: 100
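The AP definition above can be sketched as the average of the maximum precision achievable at or beyond each of the (1 + num_recall_points) recall levels. The input here is a list of (recall, precision) operating points; this is an illustration, not the evaluation stage's actual code.

```python
# Interpolated average precision over evenly spaced recall levels.
def average_precision(pr_points, num_recall_points=10):
    levels = [i / num_recall_points for i in range(num_recall_points + 1)]
    ap = 0.0
    for level in levels:
        # Max precision among points with recall >= this level.
        precisions = [p for r, p in pr_points if r >= level]
        ap += max(precisions) if precisions else 0.0
    return ap / len(levels)

ap = average_precision([(0.0, 1.0), (0.5, 0.8), (1.0, 0.5)])
```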
Proto containing ground-truth ObjectsSets for all images in a COCO validation set. Next ID: 2
Metrics from evaluation of the object detection task. Next ID: 5
Parameters that define how the Object Detection task is evaluated end-to-end. Next ID: 4
Required. The model's outputs should be the same as those of a TFLite-compatible SSD model. Refer: https://www.tensorflow.org/lite/models/object_detection/overview#output TODO(b/133772912): Generalize support for other types of object detection models.
Optional. Used to match ground-truth categories with model output. The SSD MobileNet V1 model trained on COCO assumes class 0 is the background class in the label file, with class labels running from 1 to number_of_classes+1. Therefore, the default value is set to 1.
Proto containing information about all the objects (predicted or ground-truth) contained in an image. Next ID: 3
Required for ground-truth data, to compare against inference results.
One instance of an object detected in an image. Next ID: 4
Required.
Required.
Value in (0, 1.0] denoting confidence in this prediction. Default value of 1.0 for ground-truth data.
Defines the bounding box for a detected object. Next ID: 5
All boundaries defined below are required. Each boundary value should be normalized with respect to the image dimensions. This helps evaluate detections independent of image size. For example, normalized_top = top_boundary / image_height.
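The normalization described above (e.g., normalized_top = top_boundary / image_height) can be sketched as follows; the dict keys mirror the field names, and the grouping is illustrative.

```python
# Normalize pixel-space box boundaries by the image dimensions.
def normalize_box(top, bottom, left, right, image_height, image_width):
    return {
        "normalized_top": top / image_height,
        "normalized_bottom": bottom / image_height,
        "normalized_left": left / image_width,
        "normalized_right": right / image_width,
    }

# A box in a 240x320 image, normalized to [0, 1] coordinates.
box = normalize_box(60, 180, 40, 120, image_height=240, image_width=320)
```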
Contains process-specific metrics, which may differ based on what an EvaluationStage does. Next ID: 8
Defines the functionality executed by an EvaluationStage. Next ID: 7
Metrics specific to TFLite inference. Next ID: 2
Number of times the interpreter is invoked.
Parameters that control TFLite inference. Next ID: 5
Required.
Number of threads available to the TFLite Interpreter.
Defines how many times the TFLite Interpreter is invoked for every input. This helps benchmark cases where extensive pre-processing might not be required for every input.
Metrics from top-K accuracy evaluation. Next ID: 2
A repeated field of size |k|, where the i-th element denotes the fraction of samples for which the correct label was present in the top (i + 1) model outputs. For example, topk_accuracies(1) contains the fraction of samples for which the model returned the correct label as its first- or second-highest output.
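The computation described above can be sketched as follows: for each sample, find the rank of the correct label among the model's top-k outputs, then accumulate per-rank fractions. Illustrative only, not the stage's actual code.

```python
# Compute top-(i+1) accuracies from per-sample top-k predictions.
def topk_accuracies(topk_predictions, true_labels, k):
    hits = [0] * k
    for preds, truth in zip(topk_predictions, true_labels):
        if truth in preds[:k]:
            rank = preds.index(truth)  # 0-based position of correct label
            for i in range(rank, k):   # counts toward top-(rank+1) and beyond
                hits[i] += 1
    n = len(true_labels)
    return [h / n for h in hits]

# Three samples, each with the model's top-2 predicted labels; truth is 3.
accs = topk_accuracies([[3, 1], [2, 3], [1, 2]], [3, 3, 3], k=2)
```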
Parameters that define how top-K accuracy is evaluated. Next ID: 2
Required.