Service to create and manage training and batch prediction jobs.
Creates a training or a batch prediction job.
Request message for the CreateJob method.
Required. The project name. Authorization: requires `Editor` role on the specified project.
Required. The job to create.
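For illustration, a minimal training Job passed in the body of a CreateJob call might look like the following sketch (bucket, module, and job names are placeholders; field names assume the REST JSON mapping): <pre> { "jobId": "my_training_job", "trainingInput": { "scaleTier": "BASIC", "packageUris": ["gs://[YOUR_BUCKET]/trainer-0.1.tar.gz"], "pythonModule": "trainer.task", "region": "us-central1" } } </pre>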
Lists the jobs in the project.
Request message for the ListJobs method.
Required. The name of the project for which to list jobs. Authorization: requires `Viewer` role on the specified project.
Optional. Specifies the subset of jobs to retrieve.
Optional. A page token to request the next page of results. You get the token from the `next_page_token` field of the response from the previous call.
Optional. The number of jobs to retrieve per "page" of results. If there are more remaining results than this number, the response message will contain a valid value in the `next_page_token` field. The default value is 20, and the maximum page size is 100.
Response message for the ListJobs method.
The list of jobs.
Optional. Pass this token as the `page_token` field of the request for a subsequent call.
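For illustration, a paged exchange might proceed as follows (URL and token values are placeholders, assuming the standard REST mapping): a first call to `GET /v1/projects/[YOUR_PROJECT]/jobs?pageSize=20` could return <pre> {"jobs": [...], "nextPageToken": "[TOKEN]"} </pre> and the next page would then be requested by passing `pageToken=[TOKEN]` on the follow-up call.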
Describes a job.
Request message for the GetJob method.
Required. The name of the job to get the description of. Authorization: requires `Viewer` role on the parent project.
Cancels a running job.
Request message for the CancelJob method.
Required. The name of the job to cancel. Authorization: requires `Editor` role on the parent project.
Provides methods that create and manage machine learning models and their versions. A model in this context is a container for versions. The model can't provide predictions without first having a version created for it. Each version is a trained machine learning model, and each is assumed to be an iteration of the same machine learning problem as the other versions of the same model. Your project can define multiple models, each with multiple versions. The basic life cycle of a model is: * Create and train the machine learning model and save it to a Google Cloud Storage location. * Use [projects.models.create](/ml/reference/rest/v1/projects.models/create) to make a new model in your project. * Use [projects.models.versions.create](/ml/reference/rest/v1/projects.models.versions/create) to deploy your saved model. * Use [projects.predict](/ml/reference/rest/v1/projects/predict) to request predictions of a version of your model, or use [projects.jobs.create](/ml/reference/rest/v1/projects.jobs/create) to start a batch prediction job.
Creates a model which will later contain one or more versions. You must add at least one version before you can request predictions from the model. Add versions by calling [projects.models.versions.create](/ml/reference/rest/v1/projects.models.versions/create).
Request message for the CreateModel method.
Required. The project name. Authorization: requires `Editor` role on the specified project.
Required. The model to create.
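For illustration, a minimal Model body for CreateModel might look like this sketch (the name and description are placeholders; field names assume the REST JSON mapping): <pre> { "name": "my_model", "description": "A model for my prediction problem.", "regions": ["us-central1"] } </pre>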
Lists the models in a project. Each project can contain multiple models, and each model can have multiple versions.
Request message for the ListModels method.
Required. The name of the project whose models are to be listed. Authorization: requires `Viewer` role on the specified project.
Optional. A page token to request the next page of results. You get the token from the `next_page_token` field of the response from the previous call.
Optional. The number of models to retrieve per "page" of results. If there are more remaining results than this number, the response message will contain a valid value in the `next_page_token` field. The default value is 20, and the maximum page size is 100.
Response message for the ListModels method.
The list of models.
Optional. Pass this token as the `page_token` field of the request for a subsequent call.
Gets information about a model, including its name, the description (if set), and the default version (if at least one version of the model has been deployed).
Request message for the GetModel method.
Required. The name of the model. Authorization: requires `Viewer` role on the parent project.
Deletes a model. You can only delete a model if there are no versions in it. You can delete versions by calling [projects.models.versions.delete](/ml/reference/rest/v1/projects.models.versions/delete).
Request message for the DeleteModel method.
Required. The name of the model. Authorization: requires `Editor` role on the parent project.
Creates a new version of a model from a trained TensorFlow model. If the version created in the cloud by this call is the first deployed version of the specified model, it will be made the default version of the model. When you add a version to a model that already has one or more versions, the default version does not automatically change. If you want a new version to be the default, you must call [projects.models.versions.setDefault](/ml/reference/rest/v1/projects.models.versions/setDefault).
Uploads the provided trained model version to Cloud Machine Learning.
Required. The name of the model. Authorization: requires `Editor` role on the parent project.
Required. The version details.
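For illustration, a minimal Version body for this call might look like the following sketch (the bucket path is a placeholder; field names assume the REST JSON mapping): <pre> { "name": "v1", "description": "First deployed version.", "deploymentUri": "gs://[YOUR_BUCKET]/model/" } </pre>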
Gets basic information about all the versions of a model. If you expect that a model has a lot of versions, or if you need to handle only a limited number of results at a time, you can request that the list be retrieved in batches (called pages).
Request message for the ListVersions method.
Required. The name of the model for which to list versions. Authorization: requires `Viewer` role on the parent project.
Optional. A page token to request the next page of results. You get the token from the `next_page_token` field of the response from the previous call.
Optional. The number of versions to retrieve per "page" of results. If there are more remaining results than this number, the response message will contain a valid value in the `next_page_token` field. The default value is 20, and the maximum page size is 100.
Response message for the ListVersions method.
The list of versions.
Optional. Pass this token as the `page_token` field of the request for a subsequent call.
Gets information about a model version. Models can have multiple versions. You can call [projects.models.versions.list](/ml/reference/rest/v1/projects.models.versions/list) to get the same information that this method returns for all of the versions of a model.
Request message for the GetVersion method.
Required. The name of the version. Authorization: requires `Viewer` role on the parent project.
Deletes a model version. Each model can have multiple versions deployed and in use at any given time. Use this method to remove a single version. Note: You cannot delete the version that is set as the default version of the model unless it is the only remaining version.
Request message for the DeleteVersion method.
Required. The name of the version. You can get the names of all the versions of a model by calling [projects.models.versions.list](/ml/reference/rest/v1/projects.models.versions/list). Authorization: requires `Editor` role on the parent project.
Designates a version to be the default for the model. The default version is used for prediction requests made against the model that don't specify a version. The first version to be created for a model is automatically set as the default. You must make any subsequent changes to the default version setting manually using this method.
Request message for the SetDefaultVersion method.
Required. The name of the version to make the default for the model. You can get the names of all the versions of a model by calling [projects.models.versions.list](/ml/reference/rest/v1/projects.models.versions/list). Authorization: requires `Editor` role on the parent project.
The Prediction API, which serves predictions for models managed by ModelService.
Performs prediction on the data in the request.
Request for predictions to be issued against a trained model. The body of the request is a single JSON object with a single top-level field: <dl> <dt>instances</dt> <dd>A JSON array containing values representing the instances to use for prediction.</dd> </dl> The structure of each element of the instances list is determined by your model's input definition. Instances can include named inputs or can contain only unlabeled values. Not all data includes named inputs. Some instances will be simple JSON values (boolean, number, or string). However, instances are often lists of simple values, or complex nested lists. Here are some examples of request bodies: CSV data with each row encoded as a string value: <pre> {"instances": ["1.0,true,\\"x\\"", "-2.0,false,\\"y\\""]} </pre> Plain text: <pre> {"instances": ["the quick brown fox", "la bruja le dio"]} </pre> Sentences encoded as lists of words (vectors of strings): <pre> { "instances": [ ["the","quick","brown"], ["la","bruja","le"], ... ] } </pre> Floating point scalar values: <pre> {"instances": [0.0, 1.1, 2.2]} </pre> Vectors of integers: <pre> { "instances": [ [0, 1, 2], [3, 4, 5], ... ] } </pre> Tensors (in this case, two-dimensional tensors): <pre> { "instances": [ [ [0, 1, 2], [3, 4, 5] ], ... ] } </pre> Images can be represented in different ways. In this encoding scheme the first two dimensions represent the rows and columns of the image, and the third contains lists (vectors) of the R, G, and B values for each pixel. <pre> { "instances": [ [ [ [138, 30, 66], [130, 20, 56], ... ], [ [126, 38, 61], [122, 24, 57], ... ], ... ], ... ] } </pre> JSON strings must be encoded as UTF-8. To send binary data, you must base64-encode the data and mark it as binary. To mark a JSON string as binary, replace it with a JSON object with a single attribute named `b64`: <pre>{"b64": "..."} </pre> For example: Two serialized tf.Examples (fake data, for illustrative purposes only): <pre> {"instances": [{"b64": "X5ad6u"}, {"b64": "IA9j4nx"}]} </pre> Two JPEG image byte strings (fake data, for illustrative purposes only): <pre> {"instances": [{"b64": "ASa8asdf"}, {"b64": "JLK7ljk3"}]} </pre> If your data includes named references, format each instance as a JSON object with the named references as the keys: JSON input data to be preprocessed: <pre> { "instances": [ { "a": 1.0, "b": true, "c": "x" }, { "a": -2.0, "b": false, "c": "y" } ] } </pre> Some models have an underlying TensorFlow graph that accepts multiple input tensors. In this case, you should use the names of JSON name/value pairs to identify the input tensors, as shown in the following examples: For a graph with input tensor aliases "tag" (string) and "image" (base64-encoded string): <pre> { "instances": [ { "tag": "beach", "image": {"b64": "ASa8asdf"} }, { "tag": "car", "image": {"b64": "JLK7ljk3"} } ] } </pre> For a graph with input tensor aliases "tag" (string) and "image" (3-dimensional array of 8-bit ints): <pre> { "instances": [ { "tag": "beach", "image": [ [ [138, 30, 66], [130, 20, 56], ... ], [ [126, 38, 61], [122, 24, 57], ... ], ... ] }, { "tag": "car", "image": [ [ [255, 0, 102], [255, 0, 97], ... ], [ [254, 1, 101], [254, 2, 93], ... ], ... ] }, ... ] } </pre> If the call is successful, the response body will contain one prediction entry per instance in the request body. If prediction fails for any instance, the response body will contain no predictions and will contain a single error entry instead.
Required. The resource name of a model or a version. Authorization: requires `Viewer` role on the parent project.
Required. The prediction request body.
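If the call succeeds, the response body is a JSON object with a single `predictions` list, one entry per instance; the structure of each entry depends on the outputs your model defines. An illustrative sketch for a two-instance request (the output names and values are placeholders, not a fixed schema): <pre> { "predictions": [ {"label": "beach", "scores": [0.1, 0.9]}, {"label": "car", "scores": [0.8, 0.2]} ] } </pre>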
Allows retrieving project-related information.
Gets the service account information associated with your project. You need this information in order to grant the service account permissions for the Google Cloud Storage location where you put your model training code for training the model with Google Cloud Machine Learning.
Requests service account information associated with a project.
Required. The project name. Authorization: requires `Viewer` role on the specified project.
Returns service account information associated with a project.
The service account Cloud ML uses to access resources in the project.
The project number for `service_account`.
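For illustration, a response might look like the following sketch (the account shown is a placeholder pattern, not a real identity; field names assume the REST JSON mapping): <pre> { "serviceAccount": "service-[PROJECT_NUMBER]@cloud-ml.google.com.iam.gserviceaccount.com", "serviceAccountProject": "[PROJECT_NUMBER]" } </pre>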
Represents the result of a single hyperparameter tuning trial from a training job. The TrainingOutput object that is returned on successful completion of a training job with hyperparameter tuning includes a list of HyperparameterOutput objects, one for each successful trial.
Used in:
The trial id for these results.
The hyperparameters given to this trial.
The final objective metric seen for this trial.
All recorded objective metrics for this trial.
An observed value of a metric.
Used in:
The global training step for this metric.
The objective value at this training step.
Represents a set of hyperparameters to optimize.
Used in:
Required. The type of goal to use for tuning. Available types are `MAXIMIZE` and `MINIMIZE`. Defaults to `MAXIMIZE`.
Required. The set of parameters to tune.
Optional. How many training trials should be attempted to optimize the specified hyperparameters. Defaults to one.
Optional. The number of training trials to run concurrently. You can reduce the time it takes to perform hyperparameter tuning by adding trials in parallel. However, each trial only benefits from the information gained in completed trials. That means that a trial does not get access to the results of trials running at the same time, which could reduce the quality of the overall optimization. Each trial will use the same scale tier and machine types. Defaults to one.
Optional. The TensorFlow summary tag name to use for optimizing trials. For current versions of TensorFlow, this tag name should exactly match what is shown in TensorBoard, including all scopes. For versions of TensorFlow prior to 0.12, this should be only the tag passed to tf.Summary. By default, "training/hptuning/metric" will be used.
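For illustration, a HyperparameterSpec in a training request might look like this sketch (the parameter name and bounds are placeholders; field names assume the REST JSON mapping): <pre> { "goal": "MAXIMIZE", "maxTrials": 10, "maxParallelTrials": 2, "params": [ { "parameterName": "learning_rate", "type": "DOUBLE", "minValue": 0.0001, "maxValue": 0.1, "scaleType": "UNIT_LOG_SCALE" } ] } </pre>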
The available types of optimization goals.
Used in:
Goal type will default to maximize.
Maximize the goal metric.
Minimize the goal metric.
Represents a training or prediction job.
Used as response type in: JobService.CreateJob, JobService.GetJob
Used as field type in:
Required. The user-specified id of the job.
Required. Parameters to create a job.
Input parameters to create a training job.
Input parameters to create a prediction job.
Output only. When the job was created.
Output only. When the job processing was started.
Output only. When the job processing was completed.
Output only. The detailed state of a job.
Output only. The details of a failure or a cancellation.
Output only. The current result of the job.
The current training job result.
The current prediction job result.
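For illustration, a Job returned by GetJob for a completed training job might look like this sketch (ids and timestamps are placeholders; the output-only fields are set by the service): <pre> { "jobId": "my_training_job", "trainingInput": { ... }, "createTime": "2017-01-01T00:00:00Z", "startTime": "2017-01-01T00:01:00Z", "endTime": "2017-01-01T01:00:00Z", "state": "SUCCEEDED", "trainingOutput": { ... } } </pre>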
Describes the job state.
Used in:
The job state is unspecified.
The job has just been created and processing has not yet begun.
The service is preparing to run the job.
The job is in progress.
The job completed successfully.
The job failed. `error_message` should contain the details of the failure.
The job is being cancelled. `error_message` should describe the reason for the cancellation.
The job has been cancelled. `error_message` should describe the reason for the cancellation.
Options for manually scaling a model.
Used in:
The number of nodes to allocate for this model. These nodes are always up, starting from the time the model is deployed, so the cost of operating this model will be proportional to nodes * number of hours since deployment.
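For illustration, a version that keeps two nodes serving at all times would include a setting like this sketch (field names assume the REST JSON mapping): <pre> {"manualScaling": {"nodes": 2}} </pre>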
Represents a machine learning solution. A model can have multiple versions, each of which is a deployed, trained model ready to receive prediction requests. The model itself is just a container.
Used as response type in: ModelService.CreateModel, ModelService.GetModel
Used as field type in:
Required. The name specified for the model when it was created. The model name must be unique within the project it is created in.
Optional. The description specified for the model when it was created.
Output only. The default version of the model. This version will be used to handle prediction requests that do not specify a version. You can change the default version by calling [projects.models.versions.setDefault](/ml/reference/rest/v1/projects.models.versions/setDefault).
Optional. The list of regions where the model is going to be deployed. Currently only one region per model is supported. Defaults to 'us-central1' if nothing is set.
Optional. If true, enables Stackdriver Logging for online prediction. Default is false.
Represents the metadata of the long-running operation.
The time the operation was submitted.
The time operation processing started.
The time operation processing completed.
Indicates whether a request to cancel this operation has been made.
The operation type.
Contains the name of the model associated with the operation.
Contains the version associated with the operation.
The operation type.
Used in:
Unspecified operation type.
An operation to create a new version.
An operation to delete an existing version.
An operation to delete an existing model.
Represents a single hyperparameter to optimize.
Used in:
Required. The parameter name must be unique amongst all ParameterConfigs in a HyperparameterSpec message. E.g., "learning_rate".
Required. The type of the parameter.
Required if type is `DOUBLE` or `INTEGER`. This field should be unset if type is `CATEGORICAL`. This value should be an integer if type is `INTEGER`.
Required if type is `DOUBLE` or `INTEGER`. This field should be unset if type is `CATEGORICAL`. This value should be an integer if type is `INTEGER`.
Required if type is `CATEGORICAL`. The list of possible categories.
Required if type is `DISCRETE`. A list of feasible points. The list should be in strictly increasing order. For instance, this parameter might have possible settings of 1.5, 2.5, and 4.0. This list should not contain more than 1,000 values.
Optional. How the parameter should be scaled to the hypercube. Leave unset for categorical parameters. Some kind of scaling is strongly recommended for real or integral parameters (e.g., `UNIT_LINEAR_SCALE`).
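For illustration, a discrete parameter might be specified as in this sketch (the name and values are placeholders; field names assume the REST JSON mapping): <pre> { "parameterName": "hidden_units", "type": "DISCRETE", "discreteValues": [64, 128, 256] } </pre>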
The type of the parameter.
Used in:
You must specify a valid type. Using this unspecified type will result in an error.
Type for real-valued parameters.
Type for integral parameters.
The parameter is categorical, with a value chosen from the categories field.
The parameter is real-valued, with a fixed set of feasible points. If `type==DISCRETE`, `feasible_points` must be provided, and {`min_value`, `max_value`} will be ignored.
The type of scaling that should be applied to this parameter.
Used in:
By default, no scaling is applied.
Scales the feasible space to (0, 1) linearly.
Scales the feasible space logarithmically to (0, 1). The entire feasible space must be strictly positive.
Scales the feasible space "reverse" logarithmically to (0, 1). The result is that values close to the top of the feasible space are spread out more than points near the bottom. The entire feasible space must be strictly positive.
Represents input parameters for a prediction job.
Used in:
Required. The model or the version to use for prediction.
Use this field if you want to use the default version for the specified model. The string must use the following format: `"projects/<var>[YOUR_PROJECT]</var>/models/<var>[YOUR_MODEL]</var>"`
Use this field if you want to specify a version of the model to use. The string is formatted the same way as `model_version`, with the addition of the version information: `"projects/<var>[YOUR_PROJECT]</var>/models/<var>[YOUR_MODEL]</var>/versions/<var>[YOUR_VERSION]</var>"`
Use this field if you want to specify a Google Cloud Storage path for the model to use.
Required. The format of the input data files.
Required. The Google Cloud Storage location of the input data files. May contain wildcards.
Required. The output Google Cloud Storage location.
Optional. The maximum number of workers to be used for parallel processing. Defaults to 10 if not specified.
Required. The Google Compute Engine region to run the prediction job in.
Optional. The Google Cloud ML runtime version to use for this batch prediction. If not set, Google Cloud ML will pick the runtime version used during the CreateVersion request for this model version, or choose the latest stable version when model version information is not available, such as when the model is specified by `uri`.
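For illustration, a PredictionInput for a batch job over newline-delimited text might look like this sketch (paths and names are placeholders; field names assume the REST JSON mapping): <pre> { "modelName": "projects/[YOUR_PROJECT]/models/[YOUR_MODEL]", "dataFormat": "TEXT", "inputPaths": ["gs://[YOUR_BUCKET]/instances/*"], "outputPath": "gs://[YOUR_BUCKET]/predictions", "region": "us-central1" } </pre>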
The format used to separate data instances in the source files.
Used in:
Unspecified format.
The source file is a text file with instances separated by the new-line character.
The source file is a TFRecord file.
The source file is a GZIP-compressed TFRecord file.
Represents results of a prediction job.
Used in:
The output Google Cloud Storage location provided at the job creation time.
The number of generated predictions.
The number of data instances which resulted in errors.
Node hours used by the batch prediction job.
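For illustration, a completed job's PredictionOutput might look like this sketch (values are placeholders; 64-bit counts appear as strings in the JSON mapping): <pre> { "outputPath": "gs://[YOUR_BUCKET]/predictions", "predictionCount": "10000", "errorCount": "0", "nodeHours": 2.5 } </pre>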
Represents input parameters for a training job.
Used in:
Required. Specifies the machine types, the number of replicas for workers and parameter servers.
Optional. Specifies the type of virtual machine to use for your training job's master worker. The following types are supported: <dl> <dt>standard</dt> <dd> A basic machine configuration suitable for training simple models with small to moderate datasets. </dd> <dt>large_model</dt> <dd> A machine with a lot of memory, specially suited for parameter servers when your model is large (having many hidden layers or layers with very large numbers of nodes). </dd> <dt>complex_model_s</dt> <dd> A machine suitable for the master and workers of the cluster when your model requires more computation than the standard machine can handle satisfactorily. </dd> <dt>complex_model_m</dt> <dd> A machine with roughly twice the number of cores and roughly double the memory of <code suppresswarning="true">complex_model_s</code>. </dd> <dt>complex_model_l</dt> <dd> A machine with roughly twice the number of cores and roughly double the memory of <code suppresswarning="true">complex_model_m</code>. </dd> <dt>standard_gpu</dt> <dd> A machine equivalent to <code suppresswarning="true">standard</code> that also includes a <a href="/ml/docs/how-tos/using-gpus"> GPU that you can use in your trainer</a>. </dd> <dt>complex_model_m_gpu</dt> <dd> A machine equivalent to <code suppresswarning="true">complex_model_m</code> that also includes four GPUs. </dd> </dl> You must set this value when `scaleTier` is set to `CUSTOM`.
Optional. Specifies the type of virtual machine to use for your training job's worker nodes. The supported values are the same as those described in the entry for `masterType`. This value must be present when `scaleTier` is set to `CUSTOM` and `workerCount` is greater than zero.
Optional. Specifies the type of virtual machine to use for your training job's parameter server. The supported values are the same as those described in the entry for `master_type`. This value must be present when `scaleTier` is set to `CUSTOM` and `parameter_server_count` is greater than zero.
Optional. The number of worker replicas to use for the training job. Each replica in the cluster will be of the type specified in `worker_type`. This value can only be used when `scale_tier` is set to `CUSTOM`. If you set this value, you must also set `worker_type`.
Optional. The number of parameter server replicas to use for the training job. Each replica in the cluster will be of the type specified in `parameter_server_type`. This value can only be used when `scale_tier` is set to `CUSTOM`. If you set this value, you must also set `parameter_server_type`.
Required. The Google Cloud Storage location of the packages with the training program and any additional dependencies.
Required. The Python module name to run after installing the packages.
Optional. Command line arguments to pass to the program.
Optional. The set of Hyperparameters to tune.
Required. The Google Compute Engine region to run the training job in.
Optional. A Google Cloud Storage path in which to store training outputs and other data needed for training. This path is passed to your TensorFlow program as the `job_dir` command-line argument. The benefit of specifying this field is that Cloud ML validates the path for use in training.
Optional. The Google Cloud ML runtime version to use for training. If not set, Google Cloud ML will choose the latest stable version.
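For illustration, a TrainingInput using the CUSTOM tier might look like this sketch (machine counts, paths, and names are placeholders; field names assume the REST JSON mapping, and the CUSTOM-tier rules are described under the scale tiers below): <pre> { "scaleTier": "CUSTOM", "masterType": "complex_model_m", "workerType": "standard_gpu", "workerCount": "4", "parameterServerType": "large_model", "parameterServerCount": "2", "packageUris": ["gs://[YOUR_BUCKET]/trainer-0.1.tar.gz"], "pythonModule": "trainer.task", "args": ["--train-steps", "10000"], "region": "us-central1", "jobDir": "gs://[YOUR_BUCKET]/job-output" } </pre>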
A scale tier is an abstract representation of the resources Cloud ML will allocate to a training job. When selecting a scale tier for your training job, you should consider the size of your training dataset and the complexity of your model. As the tiers increase, virtual machines are added to handle your job, and the individual machines in the cluster generally have more memory and greater processing power than they do at lower tiers. The number of training units charged per hour of processing increases as tiers get more advanced. Refer to the [pricing guide](/ml/pricing) for more details. Note that in addition to incurring costs, your use of training resources is constrained by the [quota policy](/ml/quota).
Used in:
A single worker instance. This tier is suitable for learning how to use Cloud ML, and for experimenting with new models using small datasets.
Many workers and a few parameter servers.
A large number of workers with many parameter servers.
A single worker instance [with a GPU](/ml/docs/how-tos/using-gpus).
The CUSTOM tier is not a set tier, but rather enables you to use your own cluster specification. When you use this tier, set values to configure your processing cluster according to these guidelines: * You _must_ set `TrainingInput.masterType` to specify the type of machine to use for your master node. This is the only required setting. * You _may_ set `TrainingInput.workerCount` to specify the number of workers to use. If you specify one or more workers, you _must_ also set `TrainingInput.workerType` to specify the type of machine to use for your worker nodes. * You _may_ set `TrainingInput.parameterServerCount` to specify the number of parameter servers to use. If you specify one or more parameter servers, you _must_ also set `TrainingInput.parameterServerType` to specify the type of machine to use for your parameter servers. Note that all of your workers must use the same machine type, which can be different from your parameter server type and master type. Your parameter servers must likewise use the same machine type, which can be different from your worker type and master type.
Represents results of a training job. Output only.
Used in:
The number of hyperparameter tuning trials that completed successfully. Only set for hyperparameter tuning jobs.
Results for individual Hyperparameter trials. Only set for hyperparameter tuning jobs.
The amount of ML units consumed by the job.
Whether this job is a hyperparameter tuning job.
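For illustration, a TrainingOutput for a tuning job might look like this sketch (values are placeholders; 64-bit values appear as strings in the JSON mapping): <pre> { "isHyperparameterTuningJob": true, "completedTrialCount": "2", "trials": [ { "trialId": "1", "hyperparameters": {"learning_rate": "0.01"}, "finalMetric": {"trainingStep": "1000", "objectiveValue": 0.84} }, { "trialId": "2", "hyperparameters": {"learning_rate": "0.001"}, "finalMetric": {"trainingStep": "1000", "objectiveValue": 0.79} } ] } </pre>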
Represents a version of the model. Each version is a trained model deployed in the cloud, ready to handle prediction requests. A model can have multiple versions. You can get information about all of the versions of a given model by calling [projects.models.versions.list](/ml/reference/rest/v1/projects.models.versions/list).
Used as response type in: ModelService.GetVersion, ModelService.SetDefaultVersion
Used as field type in:
Required. The name specified for the version when it was created. The version name must be unique within the model it is created in.
Optional. The description specified for the version when it was created.
Output only. If true, this version will be used to handle prediction requests that do not specify a version. You can change the default version by calling [projects.models.versions.setDefault](/ml/reference/rest/v1/projects.models.versions/setDefault).
Required. The Google Cloud Storage location of the trained model used to create the version. See the [overview of model deployment](/ml/docs/concepts/deployment-overview) for more information. When passing Version to [projects.models.versions.create](/ml/reference/rest/v1/projects.models.versions/create) the model service uses the specified location as the source of the model. Once deployed, the model version is hosted by the prediction service, so this location is useful only as a historical record.
Output only. The time the version was created.
Output only. The time the version was last used for prediction.
Optional. The Google Cloud ML runtime version to use for this deployment. If not set, Google Cloud ML will choose a version.
Optional. Manually select the number of nodes to use for serving the model. If unset (i.e., by default), the number of nodes used to serve the model automatically scales with traffic. However, care should be taken to ramp up traffic according to the model's ability to scale. If your model needs to handle bursts of traffic beyond its ability to scale, it is recommended you set this field appropriately.