PredictionService provides access to machine-learned models loaded by model_servers.
Predict -- provides access to a loaded TensorFlow model.
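The text above describes a service with a single Predict RPC. A minimal sketch of what that service definition looks like, assuming the request and response message names used in the rest of this document:

```proto
// Sketch of the service described above; only the RPC named in the text
// is shown here.
service PredictionService {
  // Predict -- provides access to a loaded TensorFlow model.
  rpc Predict(PredictRequest) returns (PredictResponse);
}
```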
PredictRequest specifies which TensorFlow model to run, how inputs are mapped to tensors, and how outputs are filtered before they are returned to the user.
Model Specification.
Input tensors. The names of the input tensors are aliases. The mapping from aliases to real input tensor names is expected to be stored as a named generic signature under the key "inputs" in the model export. Each alias listed in the "inputs" signature must be provided exactly once in order to run the prediction.
Output filter. The names specified are aliases. The mapping from aliases to real output tensor names is expected to be stored as a named generic signature under the key "outputs" in the model export. Only the tensors specified here will be run/fetched and returned, with the exception that when none is specified, all tensors listed in the named signature will be run/fetched and returned.
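Putting the three pieces above together, a sketch of the request message this section describes (field numbers here are illustrative assumptions, not taken from the source):

```proto
// Sketch of the request described above.
message PredictRequest {
  // Model Specification: which servable to run.
  ModelSpec model_spec = 1;

  // Input tensors, keyed by the alias names from the "inputs" signature.
  map<string, TensorProto> inputs = 2;

  // Output filter: aliases from the "outputs" signature to run/fetch.
  // Empty means run/fetch everything in the named signature.
  repeated string output_filter = 3;
}
```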
Response for PredictRequest on successful run.
Output tensors.
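The response carries the fetched tensors keyed by their output aliases; a sketch under the same assumptions as above (field number is illustrative):

```proto
// Sketch of the response for a successful PredictRequest.
message PredictResponse {
  // Output tensors, keyed by output alias.
  map<string, TensorProto> outputs = 1;
}
```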
Metadata for an inference request such as the model name and version.
Required servable name.
Optional version. If unspecified, will use the latest (numerical) version. Typically not needed unless coordinating across multiple models that were co-trained and/or have inter-dependencies on the versions used at inference time.
A named signature to evaluate. If unspecified, the default signature will be used. Note that only MultiInference will initially support this.
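The three fields described above can be sketched as a message; the use of the Int64Value wrapper for the optional version is an assumption made so that "unspecified" is distinguishable from version 0, and the field numbers are illustrative:

```proto
import "google/protobuf/wrappers.proto";

// Sketch of the inference-request metadata described above.
message ModelSpec {
  // Required servable name.
  string name = 1;

  // Optional version; when unset, the latest (numerical) version is used.
  google.protobuf.Int64Value version = 2;

  // A named signature to evaluate; when unset, the default signature is used.
  string signature_name = 3;
}
```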