package google.cloud.ml.v1

service JobService

job_service.proto:35

Service to create and manage training and batch prediction jobs.

service ModelService

model_service.proto:61

Provides methods that create and manage machine learning models and their versions. A model in this context is a container for versions. The model can't provide predictions without first having a version created for it. Each version is a trained machine learning model, and each is assumed to be an iteration of the same machine learning problem as the other versions of the same model. Your project can define multiple models, each with multiple versions.

The basic life cycle of a model is:

* Create and train the machine learning model and save it to a Google Cloud Storage location.
* Use [projects.models.create](/ml/reference/rest/v1/projects.models/create) to make a new model in your project.
* Use [projects.models.versions.create](/ml/reference/rest/v1/projects.models.versions/create) to deploy your saved model.
* Use [projects.predict](/ml/reference/rest/v1/projects/predict) to request predictions of a version of your model, or use [projects.jobs.create](/ml/reference/rest/v1/projects.jobs/create) to start a batch prediction job.
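The lifecycle steps above map onto REST calls whose JSON bodies mirror the Model and Version messages. A minimal sketch of those request bodies in Python — field names follow the v1 lowerCamelCase REST mapping, and the bucket path is illustrative, not a real deployment:

```python
# Sketch: JSON bodies for projects.models.create and
# projects.models.versions.create. Field names assume the v1 REST
# mapping of the Model and Version messages; values are made up.

def make_model_body(name, description=""):
    """Body for projects.models.create -- the model is just a container."""
    return {"name": name, "description": description}

def make_version_body(name, deployment_uri, runtime_version="1.0"):
    """Body for projects.models.versions.create -- deploys a saved model."""
    return {
        "name": name,
        "deploymentUri": deployment_uri,  # Cloud Storage path of the saved model
        "runtimeVersion": runtime_version,
    }

model = make_model_body("census", "Income classifier")
version = make_version_body("v1", "gs://my-bucket/census/model/")
```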

service OnlinePredictionService

prediction_service.proto:34

The Prediction API, which serves predictions for models managed by ModelService.

service ProjectManagementService

project_service.proto:32

Allows retrieving project-related information.

message HyperparameterOutput

job_service.proto:367

Represents the result of a single hyperparameter tuning trial from a training job. The TrainingOutput object that is returned on successful completion of a training job with hyperparameter tuning includes a list of HyperparameterOutput objects, one for each successful trial.

Used in: TrainingOutput

message HyperparameterOutput.HyperparameterMetric

job_service.proto:369

An observed value of a metric.

Used in: HyperparameterOutput
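One way to picture the TrainingOutput returned for a tuning job is a list of per-trial records, each carrying a final HyperparameterMetric. A hedged sketch of picking the best trial — the field names assume the v1 REST mapping, and the trial data is invented for illustration:

```python
# Sketch: selecting the best HyperparameterOutput from a TrainingOutput.
# Each trial's finalMetric holds {trainingStep, objectiveValue}. The
# trial values below are illustrative only.

def best_trial(training_output, goal="MAXIMIZE"):
    """Return the trial whose finalMetric.objectiveValue best meets the goal."""
    pick = max if goal == "MAXIMIZE" else min
    return pick(training_output["trials"],
                key=lambda t: t["finalMetric"]["objectiveValue"])

output = {
    "completedTrialCount": "2",
    "trials": [
        {"trialId": "1", "hyperparameters": {"learning_rate": "0.1"},
         "finalMetric": {"trainingStep": "1000", "objectiveValue": 0.81}},
        {"trialId": "2", "hyperparameters": {"learning_rate": "0.01"},
         "finalMetric": {"trainingStep": "1000", "objectiveValue": 0.93}},
    ],
}
```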

message HyperparameterSpec

job_service.proto:238

Represents a set of hyperparameters to optimize.

Used in: TrainingInput

enum HyperparameterSpec.GoalType

job_service.proto:240

The available types of optimization goals.

Used in: HyperparameterSpec
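A HyperparameterSpec ties a GoalType to the set of parameters being tuned. A sketch of how the spec might look inside a TrainingInput body — field names assume the v1 REST mapping, and the trial counts and ranges are illustrative:

```python
# Sketch: a HyperparameterSpec as embedded in a TrainingInput body.
# goal is a GoalType value (MAXIMIZE or MINIMIZE); params lists
# ParameterSpec objects. All values here are illustrative.
hyperparameter_spec = {
    "goal": "MAXIMIZE",       # GoalType: drive the objective metric upward
    "maxTrials": 10,          # total tuning trials to run
    "maxParallelTrials": 2,   # trials allowed to run concurrently
    "params": [
        {"parameterName": "learning_rate", "type": "DOUBLE",
         "minValue": 0.0001, "maxValue": 0.1,
         "scaleType": "UNIT_LOG_SCALE"},
    ],
}

# The GoalType enum, as assumed from the v1 API.
GOAL_TYPES = {"GOAL_TYPE_UNSPECIFIED", "MAXIMIZE", "MINIMIZE"}
```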

message Job

job_service.proto:486

Represents a training or prediction job.

Used as response type in: JobService.CreateJob, JobService.GetJob

Used as field type in: CreateJobRequest, ListJobsResponse

enum Job.State

job_service.proto:488

Describes the job state.

Used in: Job
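Since a Job moves through the states above until it reaches a terminal one, a caller polling GetJob needs to know when to stop. A hedged sketch of a Job body and a terminal-state check — state names assume the v1 enum, and the job contents are illustrative:

```python
# Sketch: a Job body for projects.jobs.create plus a helper that decides
# whether a reported Job.State is terminal. State names assume the v1
# enum; the training package and region are illustrative.

TERMINAL_STATES = {"SUCCEEDED", "FAILED", "CANCELLED"}

def is_done(job):
    """True once the service will no longer change the job's state."""
    return job.get("state") in TERMINAL_STATES

job = {
    "jobId": "census_train_1",
    "trainingInput": {   # a Job holds either trainingInput or predictionInput
        "scaleTier": "BASIC",
        "packageUris": ["gs://my-bucket/trainer-0.1.tar.gz"],
        "pythonModule": "trainer.task",
        "region": "us-central1",
    },
    "state": "QUEUED",
}
```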

message ManualScaling

model_service.proto:258

Options for manually scaling a model.

Used in: Version

message Model

model_service.proto:178

Represents a machine learning solution. A model can have multiple versions, each of which is a deployed, trained model ready to receive prediction requests. The model itself is just a container.

Used as response type in: ModelService.CreateModel, ModelService.GetModel

Used as field type in: CreateModelRequest, ListModelsResponse

message OperationMetadata

operation_metadata.proto:34

Represents the metadata of the long-running operation.

enum OperationMetadata.OperationType

operation_metadata.proto:36

The operation type.

Used in: OperationMetadata

message ParameterSpec

job_service.proto:287

Represents a single hyperparameter to optimize.

Used in: HyperparameterSpec

enum ParameterSpec.ParameterType

job_service.proto:289

The type of the parameter.

Used in: ParameterSpec

enum ParameterSpec.ScaleType

job_service.proto:311

The type of scaling that should be applied to this parameter.

Used in: ParameterSpec
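The ParameterType values imply different ParameterSpec shapes: continuous and integer parameters are described by a range (optionally with a ScaleType), while categorical and discrete parameters enumerate their values. A sketch, with field names assumed from the v1 REST mapping and illustrative ranges:

```python
# Sketch: ParameterSpec shapes per ParameterType. DOUBLE/INTEGER use
# minValue/maxValue plus an optional ScaleType; CATEGORICAL and DISCRETE
# enumerate their candidate values. Names and ranges are illustrative.
params = [
    {"parameterName": "learning_rate", "type": "DOUBLE",
     "minValue": 1e-4, "maxValue": 1e-1, "scaleType": "UNIT_LOG_SCALE"},
    {"parameterName": "hidden_units", "type": "INTEGER",
     "minValue": 32, "maxValue": 512, "scaleType": "UNIT_LINEAR_SCALE"},
    {"parameterName": "optimizer", "type": "CATEGORICAL",
     "categoricalValues": ["adam", "sgd", "ftrl"]},
    {"parameterName": "batch_size", "type": "DISCRETE",
     "discreteValues": [64, 128, 256]},
]

def uses_range(spec):
    """Range-based parameters are defined by [minValue, maxValue]."""
    return spec["type"] in {"DOUBLE", "INTEGER"}
```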

message PredictionInput

job_service.proto:408

Represents input parameters for a prediction job.

Used in: Job

enum PredictionInput.DataFormat

job_service.proto:410

The format used to separate data instances in the source files.

Used in: PredictionInput
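A batch prediction job pairs a model (or version) with input files whose instance separation is declared via DataFormat. A sketch of a PredictionInput body — field names assume the v1 REST mapping, and the paths are illustrative:

```python
# Sketch: a PredictionInput body as submitted inside a Job via
# projects.jobs.create. dataFormat names how instances are separated in
# the source files (e.g. TEXT for newline-delimited instances). The
# model name and Cloud Storage paths are illustrative.
prediction_input = {
    "modelName": "projects/my-project/models/census",
    "dataFormat": "TEXT",
    "inputPaths": ["gs://my-bucket/instances/*.json"],
    "outputPath": "gs://my-bucket/predictions/",
    "region": "us-central1",
}
```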

message PredictionOutput

job_service.proto:471

Represents results of a prediction job.

Used in: Job

message TrainingInput

job_service.proto:68

Represents input parameters for a training job.

Used in: Job

enum TrainingInput.ScaleTier

job_service.proto:80

A scale tier is an abstract representation of the resources Cloud ML will allocate to a training job. When selecting a scale tier for your training job, you should consider the size of your training dataset and the complexity of your model. As the tiers increase, virtual machines are added to handle your job, and the individual machines in the cluster generally have more memory and greater processing power than they do at lower tiers. The number of training units charged per hour of processing increases as tiers get more advanced. Refer to the [pricing guide](/ml/pricing) for more details. Note that in addition to incurring costs, your use of training resources is constrained by the [quota policy](/ml/quota).

Used in: TrainingInput
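The tier choice shows up as a single field on TrainingInput, with the custom tier additionally naming its own machine types. A hedged sketch — tier names assume the v1 enum, and the machine types and counts are illustrative:

```python
# Sketch: scale tiers inside TrainingInput bodies. Predefined tiers trade
# cost against cluster size; the CUSTOM tier requires naming machine
# types explicitly. Tier names assume the v1 enum; machine types and
# counts are illustrative.

basic = {"scaleTier": "BASIC"}             # single worker, smallest footprint
distributed = {"scaleTier": "STANDARD_1"}  # predefined distributed cluster

custom = {
    "scaleTier": "CUSTOM",
    "masterType": "complex_model_m",       # required when the tier is CUSTOM
    "workerType": "standard_gpu",
    "workerCount": 4,
}

def needs_machine_types(training_input):
    """Only the CUSTOM tier requires explicit machine type fields."""
    return training_input["scaleTier"] == "CUSTOM"
```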

message TrainingOutput

job_service.proto:391

Represents results of a training job. Output only.

Used in: Job

message Version

model_service.proto:210

Represents a version of the model. Each version is a trained model deployed in the cloud, ready to handle prediction requests. A model can have multiple versions. You can get information about all of the versions of a given model by calling [projects.models.versions.list](/ml/reference/rest/v1/projects.models.versions/list).

Used as response type in: ModelService.GetVersion, ModelService.SetDefaultVersion

Used as field type in: CreateVersionRequest, ListVersionsResponse, Model, OperationMetadata
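A Version body ties together the deployed model artifact and, optionally, the ManualScaling message described above. A sketch of the create-version request body — field names assume the v1 REST mapping, and the path and node count are illustrative:

```python
# Sketch: a Version body for projects.models.versions.create, with the
# optional ManualScaling block pinning a fixed number of serving nodes.
# The deploymentUri and node count are illustrative.
version = {
    "name": "v2",
    "deploymentUri": "gs://my-bucket/census/model-v2/",
    "runtimeVersion": "1.0",
    "manualScaling": {"nodes": 3},  # keep 3 nodes up regardless of traffic
}
```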