PredictionService provides access to machine-learned models loaded by model_servers.
Predict -- provides access to a loaded TensorFlow model.
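As a usage sketch (not part of the service definition), the Predict RPC can be called from Python over gRPC. The address localhost:8500, the model name "half_plus_two", and the input alias "x" below are hypothetical placeholders:

    # Minimal sketch of calling PredictionService.Predict over gRPC.
    import grpc
    import tensorflow as tf
    from tensorflow_serving.apis import predict_pb2, prediction_service_pb2_grpc

    channel = grpc.insecure_channel("localhost:8500")
    stub = prediction_service_pb2_grpc.PredictionServiceStub(channel)

    request = predict_pb2.PredictRequest()
    request.model_spec.name = "half_plus_two"       # hypothetical servable name
    # "x" is an input alias declared in the model's prediction SignatureDef.
    request.inputs["x"].CopyFrom(
        tf.make_tensor_proto([1.0, 2.0, 5.0], dtype=tf.float32))
    response = stub.Predict(request, 10.0)          # 10-second deadline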
PredictRequest specifies which TensorFlow model to run, as well as how inputs are mapped to tensors and how outputs are filtered before being returned to the user.
Model Specification. If the version is not specified, the latest (numerical) version will be used.
Input tensors. Names of input tensors are alias names. The mapping from aliases to real input tensor names is stored in the SavedModel export as a prediction SignatureDef under the 'inputs' field.
Output filter. Names specified are alias names. The mapping from aliases to real output tensor names is stored in the SavedModel export as a prediction SignatureDef under the 'outputs' field. Only tensors specified here will be run/fetched and returned, with the exception that when none is specified, all tensors specified in the named signature will be run/fetched and returned.
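A sketch of filling in these fields, assuming the model's prediction SignatureDef declares an input alias "images" and output aliases "scores" and "classes" (all hypothetical names):

    import numpy as np
    import tensorflow as tf
    from tensorflow_serving.apis import predict_pb2

    batch = np.zeros((1, 28, 28), dtype=np.float32)   # placeholder input data

    request = predict_pb2.PredictRequest()
    request.model_spec.name = "my_model"               # hypothetical servable name
    # Inputs are keyed by alias; the server resolves aliases to real tensor
    # names via the SignatureDef's 'inputs' map.
    request.inputs["images"].CopyFrom(tf.make_tensor_proto(batch))
    # Fetch only the "scores" output; an empty output_filter would return
    # every output declared in the signature.
    request.output_filter.append("scores")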
Response for PredictRequest on successful run.
Effective Model Specification used to process PredictRequest.
Output tensors.
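A sketch of consuming the response, assuming an output alias "scores" (hypothetical). The outputs map is keyed by alias, and model_spec echoes the model specification that actually served the request:

    import tensorflow as tf
    from tensorflow_serving.apis import predict_pb2

    def read_prediction(response: predict_pb2.PredictResponse):
        # Effective model spec: which version actually handled the request.
        served_version = response.model_spec.version.value
        # Convert the returned TensorProto back into a NumPy array.
        scores = tf.make_ndarray(response.outputs["scores"])
        return served_version, scores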
Metadata for an inference request such as the model name and version.
Required servable name.
Optional choice of which version of the model to use. Recommended to be left unset in the common case; it should be specified only when there is a strong version consistency requirement. When left unspecified, the system will serve the best available version. This is typically the latest version, though during version transitions, notably when serving on a fleet of instances, it may be either the previous or the new version.
Use this specific version number.
Use the version associated with the given label.
A named signature to evaluate. If unspecified, the default signature will be used.
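A sketch of the three ways to fill in a ModelSpec, using hypothetical model and label names. version and version_label are alternatives (a oneof in the proto), so setting one clears the other:

    from tensorflow_serving.apis import model_pb2

    # Common case: name only; the server picks the best available
    # (typically the latest) version.
    default_spec = model_pb2.ModelSpec(name="my_model",
                                       signature_name="serving_default")

    # Strong version consistency: pin an exact version number.
    pinned_spec = model_pb2.ModelSpec(name="my_model")
    pinned_spec.version.value = 3

    # Indirect pinning: follow a server-configured version label such as "stable".
    labeled_spec = model_pb2.ModelSpec(name="my_model", version_label="stable")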