package google.cloud.aiplatform.v1


service DatasetService

dataset_service.proto:44

The service that manages Vertex AI Dataset and its child resources.

service DeploymentResourcePoolService

deployment_resource_pool_service.proto:40

A service that manages the DeploymentResourcePool resource.

service EndpointService

endpoint_service.proto:38

A service for managing Vertex AI's Endpoints.

service EvaluationService

evaluation_service.proto:33

Vertex AI Online Evaluation Service.

service FeatureOnlineStoreAdminService

feature_online_store_admin_service.proto:41

The service that handles CRUD and List for resources for FeatureOnlineStore.

service FeatureOnlineStoreService

feature_online_store_service.proto:35

A service for fetching feature values from the online store.

service FeatureRegistryService

feature_registry_service.proto:41

The service that handles CRUD and List for resources for FeatureRegistry.

service FeaturestoreOnlineServingService

featurestore_online_service.proto:36

A service for serving online feature values.

service FeaturestoreService

featurestore_service.proto:44

The service that handles CRUD and List for resources for Featurestore.

service GenAiCacheService

gen_ai_cache_service.proto:36

Service for managing Vertex AI's CachedContent resource.

service GenAiTuningService

genai_tuning_service.proto:38

A service for creating and managing GenAI Tuning Jobs.

service IndexEndpointService

index_endpoint_service.proto:38

A service for managing Vertex AI's IndexEndpoints.

service IndexService

index_service.proto:38

A service for creating and managing Vertex AI's Index resources.

service JobService

job_service.proto:44

A service for creating and managing Vertex AI's jobs.

service LlmUtilityService

llm_utility_service.proto:36

Service for LLM related utility functions.

service MatchService

match_service.proto:35

MatchService is a Google managed service for efficient vector similarity search at scale.

service MetadataService

metadata_service.proto:44

Service for reading and writing metadata entries.

service MigrationService

migration_service.proto:38

A service that migrates resources from automl.googleapis.com, datalabeling.googleapis.com and ml.googleapis.com to Vertex AI.

service ModelGardenService

model_garden_service.proto:34

The interface of Model Garden Service.

service ModelService

model_service.proto:44

A service for managing Vertex AI's machine learning Models.

service NotebookService

notebook_service.proto:39

The interface for Vertex Notebook service (a.k.a. Colab on Workbench).

service PersistentResourceService

persistent_resource_service.proto:38

A service for managing Vertex AI's machine learning PersistentResource.

service PipelineService

pipeline_service.proto:41

A service for creating and managing Vertex AI's pipelines. This includes both `TrainingPipeline` resources (used for AutoML and custom training) and `PipelineJob` resources (used for Vertex AI Pipelines).

service PredictionService

prediction_service.proto:40

A service for online predictions and explanations.

service ReasoningEngineExecutionService

reasoning_engine_execution_service.proto:35

A service for executing queries on Reasoning Engine.

service ReasoningEngineService

reasoning_engine_service.proto:38

A service for managing Vertex AI's Reasoning Engines.

service ScheduleService

schedule_service.proto:39

A service for creating and managing Vertex AI's Schedule resources to periodically launch scheduled runs that make API calls.

service SpecialistPoolService

specialist_pool_service.proto:43

A service for creating and managing Customer SpecialistPools. When customers start Data Labeling jobs, they can reuse/create Specialist Pools to bring their own Specialists to label the data. Customers can add/remove Managers for the Specialist Pool on Cloud console, then Managers will get email notifications to manage Specialists and tasks on CrowdCompute console.

service TensorboardService

tensorboard_service.proto:42

TensorboardService

service VertexRagDataService

vertex_rag_data_service.proto:38

A service for managing user data for RAG.

service VertexRagService

vertex_rag_service.proto:36

A service for retrieving relevant contexts.

service VizierService

vizier_service.proto:42

Vertex AI Vizier API. Vertex AI Vizier is a service to solve blackbox optimization problems, such as tuning machine learning hyperparameters and searching over deep learning architectures.

enum AcceleratorType

accelerator_type.proto:29

Represents a hardware accelerator type.

Used in: MachineSpec
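
An accelerator type is selected as part of a `MachineSpec`. A minimal sketch in proto text format; the machine and accelerator values are illustrative, not recommendations:

```proto
# Illustrative MachineSpec (proto text format) attaching one GPU to a node.
machine_type: "n1-standard-8"
accelerator_type: NVIDIA_TESLA_T4
accelerator_count: 1
```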

message ActiveLearningConfig

data_labeling_job.proto:149

Parameters that configure the active learning pipeline. Active learning labels the data incrementally over several iterations; in each iteration it selects a batch of data based on the sampling strategy.

Used in: DataLabelingJob

message Annotation

annotation.proto:35

Used to assign specific AnnotationSpec to a particular area of a DataItem or the whole part of the DataItem.

Used in: DataItemView, ListAnnotationsResponse

message ApiAuth

api_auth.proto:35

The generic, reusable API auth config.

Used in: RagVectorDbConfig

message ApiAuth.ApiKeyConfig

api_auth.proto:37

The API secret.

Used in: ApiAuth, JiraSource.JiraQueries, SharePointSources.SharePointSource, SlackSource.SlackChannels

message Artifact

artifact.proto:33

Instance of a general artifact.

Used as response type in: MetadataService.CreateArtifact, MetadataService.GetArtifact, MetadataService.UpdateArtifact

Used as field type in: CreateArtifactRequest, LineageSubgraph, ListArtifactsResponse, PipelineTaskDetail.ArtifactList, UpdateArtifactRequest

enum Artifact.State

artifact.proto:40

Describes the state of the Artifact.

Used in: Artifact

message AssignNotebookRuntimeOperationMetadata

notebook_service.proto:431

Metadata information for [NotebookService.AssignNotebookRuntime][google.cloud.aiplatform.v1.NotebookService.AssignNotebookRuntime].

message Attribution

explanation.proto:103

Attribution that explains a particular prediction output.

Used in: Explanation, ModelExplanation

message AugmentPromptRequest.Model

vertex_rag_service.proto:187

Metadata of the backend deployed model.

Used in: AugmentPromptRequest

message AutomaticResources

machine_resources.proto:149

A description of resources that are, to a large degree, decided by Vertex AI and require only modest additional configuration. Each Model supporting these resources documents its specific guidelines.

Used in: DeployedIndex, DeployedModel, FeatureView.OptimizedConfig, PublisherModel.CallToAction.Deploy

message AutoscalingMetricSpec

machine_resources.proto:240

The metric specification that defines the target resource utilization (CPU utilization, accelerator's duty cycle, and so on) for calculating the desired replica count.

Used in: DedicatedResources
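
As a sketch (proto text format), a spec targeting roughly 60% CPU utilization might look like the following; the metric name is illustrative and should be verified against the proto before use:

```proto
# Illustrative AutoscalingMetricSpec: scale replicas to hold CPU near 60%.
metric_name: "aiplatform.googleapis.com/prediction_container/cpu/utilization"
target: 60
```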

message AvroSource

io.proto:32

The storage details for Avro input content.

Used in: ImportFeatureValuesRequest

message BatchCancelPipelineJobsOperationMetadata

pipeline_service.proto:212

Runtime operation information for [PipelineService.BatchCancelPipelineJobs][google.cloud.aiplatform.v1.PipelineService.BatchCancelPipelineJobs].

message BatchCancelPipelineJobsResponse

pipeline_service.proto:552

Response message for [PipelineService.BatchCancelPipelineJobs][google.cloud.aiplatform.v1.PipelineService.BatchCancelPipelineJobs].

message BatchCreateFeaturesOperationMetadata

featurestore_service.proto:1359

Details of operations that perform batch creation of Features.

message BatchCreateFeaturesRequest

featurestore_service.proto:986

Request message for [FeaturestoreService.BatchCreateFeatures][google.cloud.aiplatform.v1.FeaturestoreService.BatchCreateFeatures]. Request message for [FeatureRegistryService.BatchCreateFeatures][google.cloud.aiplatform.v1.FeatureRegistryService.BatchCreateFeatures].

Used as request type in: FeatureRegistryService.BatchCreateFeatures, FeaturestoreService.BatchCreateFeatures

message BatchCreateFeaturesResponse

featurestore_service.proto:1009

Response message for [FeaturestoreService.BatchCreateFeatures][google.cloud.aiplatform.v1.FeaturestoreService.BatchCreateFeatures].

message BatchDedicatedResources

machine_resources.proto:172

A description of resources that are used for performing batch operations, are dedicated to a Model, and need manual configuration.

Used in: BatchPredictionJob

message BatchDeletePipelineJobsResponse

pipeline_service.proto:507

Response message for [PipelineService.BatchDeletePipelineJobs][google.cloud.aiplatform.v1.PipelineService.BatchDeletePipelineJobs].

message BatchMigrateResourcesOperationMetadata

migration_service.proto:288

Runtime operation information for [MigrationService.BatchMigrateResources][google.cloud.aiplatform.v1.MigrationService.BatchMigrateResources].

message BatchMigrateResourcesOperationMetadata.PartialResult

migration_service.proto:291

Represents a partial result in a batch migration operation for one [MigrateResourceRequest][google.cloud.aiplatform.v1.MigrateResourceRequest].

Used in: BatchMigrateResourcesOperationMetadata

message BatchMigrateResourcesResponse

migration_service.proto:261

Response message for [MigrationService.BatchMigrateResources][google.cloud.aiplatform.v1.MigrationService.BatchMigrateResources].

message BatchPredictionJob

batch_prediction_job.proto:47

A job that uses a [Model][google.cloud.aiplatform.v1.BatchPredictionJob.model] to produce predictions on multiple [input instances][google.cloud.aiplatform.v1.BatchPredictionJob.input_config]. If predictions for a significant portion of the instances fail, the job may finish without attempting predictions for all remaining instances.

Used as response type in: JobService.CreateBatchPredictionJob, JobService.GetBatchPredictionJob

Used as field type in: CreateBatchPredictionJobRequest, ListBatchPredictionJobsResponse

message BatchPredictionJob.InputConfig

batch_prediction_job.proto:58

Configures the input to [BatchPredictionJob][google.cloud.aiplatform.v1.BatchPredictionJob]. See [Model.supported_input_storage_formats][google.cloud.aiplatform.v1.Model.supported_input_storage_formats] for Model's supported input formats, and how instances should be expressed via any of them.

Used in: BatchPredictionJob

message BatchPredictionJob.InstanceConfig

batch_prediction_job.proto:80

Configuration defining how to transform batch prediction input instances to the instances that the Model accepts.

Used in: BatchPredictionJob

message BatchPredictionJob.OutputConfig

batch_prediction_job.proto:181

Configures the output of [BatchPredictionJob][google.cloud.aiplatform.v1.BatchPredictionJob]. See [Model.supported_output_storage_formats][google.cloud.aiplatform.v1.Model.supported_output_storage_formats] for supported output formats, and how predictions are expressed via any of them.

Used in: BatchPredictionJob

message BatchPredictionJob.OutputInfo

batch_prediction_job.proto:240

Further describes this job's output. Supplements [output_config][google.cloud.aiplatform.v1.BatchPredictionJob.output_config].

Used in: BatchPredictionJob

message BatchReadFeatureValuesOperationMetadata

featurestore_service.proto:1335

Details of operations that batch-read Feature values.

message BatchReadFeatureValuesRequest.EntityTypeSpec

featurestore_service.proto:597

Selects Features of an EntityType to read values of and specifies read settings.

Used in: BatchReadFeatureValuesRequest

message BatchReadFeatureValuesRequest.PassThroughField

featurestore_service.proto:588

Describes pass-through fields in the read_instance source.

Used in: BatchReadFeatureValuesRequest

message BatchReadFeatureValuesResponse

featurestore_service.proto:786

Response message for [FeaturestoreService.BatchReadFeatureValues][google.cloud.aiplatform.v1.FeaturestoreService.BatchReadFeatureValues].

(message has no fields)

message BigQueryDestination

io.proto:70

The BigQuery location for the output content.

Used in: BatchPredictionJob.OutputConfig, FeatureValueDestination, ImportRagFilesConfig, InputDataConfig, ModelMonitoringObjectiveConfig.ExplanationConfig.ExplanationBaseline, PredictRequestResponseLoggingConfig

message BigQuerySource

io.proto:61

The BigQuery location for the input content.

Used in: BatchPredictionJob.InputConfig, BatchReadFeatureValuesRequest, FeatureGroup.BigQuery, ImportFeatureValuesRequest, ModelMonitoringObjectiveConfig.TrainingDataset

message BleuInput

evaluation_service.proto:292

Input for bleu metric.

Used in: EvaluateInstancesRequest

message BleuInstance

evaluation_service.proto:301

Spec for bleu instance.

Used in: BleuInput

message BleuMetricValue

evaluation_service.proto:324

Bleu metric value for an instance.

Used in: BleuResults

message BleuResults

evaluation_service.proto:317

Results for bleu metric.

Used in: EvaluateInstancesResponse

message BleuSpec

evaluation_service.proto:311

Spec for bleu score metric - calculates the precision of n-grams in the prediction as compared to the reference and returns a score ranging from 0 to 1.

Used in: BleuInput
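
A minimal `BleuInput` sketch in proto text format, pairing a spec with one instance; the field names assume the v1 schema and the sentences are illustrative:

```proto
# Illustrative BleuInput: one instance comparing a prediction against
# its reference sentence.
metric_spec {
}
instances {
  prediction: "the cat sat on the mat"
  reference: "a cat sat on the mat"
}
```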

message Blob

content.proto:143

Content blob. Sending [text][google.cloud.aiplatform.v1.Part.text] directly is preferred to sending raw bytes.

Used in: Part
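
A `Blob` arrives inside a `Part` via its inline-data field. A sketch in proto text format; the bytes placeholder is illustrative:

```proto
# Illustrative Part carrying inline image bytes as a Blob.
inline_data {
  mime_type: "image/png"
  data: "<raw PNG bytes>"
}
```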

message BlurBaselineConfig

explanation.proto:423

Config for blur baseline. When enabled, a linear path from the maximally blurred image to the input image is created. Using a blurred baseline instead of zero (black image) is motivated by the BlurIG approach explained here: https://arxiv.org/abs/2004.03383

Used in: IntegratedGradientsAttribution, XraiAttribution

message BoolArray

types.proto:28

A list of boolean values.

Used in: FeatureValue

message CachedContent

cached_content.proto:37

A resource used in LLM queries for users to explicitly specify what to cache and how to cache.

Used as response type in: GenAiCacheService.CreateCachedContent, GenAiCacheService.GetCachedContent, GenAiCacheService.UpdateCachedContent

Used as field type in: CreateCachedContentRequest, ListCachedContentsResponse, UpdateCachedContentRequest

message CachedContent.UsageMetadata

cached_content.proto:46

Metadata on the usage of the cached content.

Used in: CachedContent

message Candidate

content.proto:416

A response candidate generated from the model.

Used in: GenerateContentResponse

enum Candidate.FinishReason

content.proto:419

The reason why the model stopped generating tokens. If empty, the model has not stopped generating the tokens.

Used in: Candidate

message CheckTrialEarlyStoppingStateMetatdata

vizier_service.proto:523

This message will be placed in the metadata field of a google.longrunning.Operation associated with a CheckTrialEarlyStoppingState request.

message CheckTrialEarlyStoppingStateResponse

vizier_service.proto:515

Response message for [VizierService.CheckTrialEarlyStoppingState][google.cloud.aiplatform.v1.VizierService.CheckTrialEarlyStoppingState].

message Checkpoint

model.proto:983

Describes the machine learning model version checkpoint.

Used in: Model

message Citation

content.proto:394

Source attributions for content.

Used in: CitationMetadata

message CitationMetadata

content.proto:388

A collection of source attributions for a piece of content.

Used in: Candidate

message Claim

vertex_rag_service.proto:306

A claim extracted from the input text, along with the facts that support it.

Used in: CorroborateContentResponse

message ClientConnectionConfig

endpoint.proto:369

Configurations (e.g. inference timeout) that are applied on your endpoints.

Used in: Endpoint
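
A sketch of a timeout configuration in proto text format, assuming the config carries a standard `google.protobuf.Duration` field for the inference timeout:

```proto
# Illustrative ClientConnectionConfig: cap inference latency at 30 seconds.
inference_timeout {
  seconds: 30
}
```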

message CodeExecutionResult

tool.proto:181

Result of executing the [ExecutableCode]. Always follows a `part` containing the [ExecutableCode].

Used in: Part
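
A sketch of a successful execution result in proto text format; the output string is illustrative:

```proto
# Illustrative CodeExecutionResult following an ExecutableCode part.
outcome: OUTCOME_OK
output: "42\n"
```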

enum CodeExecutionResult.Outcome

tool.proto:183

Enumeration of possible outcomes of the code execution.

Used in: CodeExecutionResult

message CoherenceInput

evaluation_service.proto:374

Input for coherence metric.

Used in: EvaluateInstancesRequest

message CoherenceInstance

evaluation_service.proto:383

Spec for coherence instance.

Used in: CoherenceInput

message CoherenceResult

evaluation_service.proto:395

Spec for coherence result.

Used in: EvaluateInstancesResponse

message CoherenceSpec

evaluation_service.proto:389

Spec for coherence score metric.

Used in: CoherenceInput

message CometInput

evaluation_service.proto:1210

Input for Comet metric.

Used in: EvaluateInstancesRequest

message CometInstance

evaluation_service.proto:1243

Spec for Comet instance - the fields used for evaluation depend on the Comet version.

Used in: CometInput

message CometResult

evaluation_service.proto:1256

Spec for Comet result - calculates the Comet score for the given instance using the version specified in the spec.

Used in: EvaluateInstancesResponse

message CometSpec

evaluation_service.proto:1219

Spec for Comet metric.

Used in: CometInput

enum CometSpec.CometVersion

evaluation_service.proto:1221

Comet version options.

Used in: CometSpec

message CompletionStats

completion_stats.proto:31

Success and error statistics of processing multiple entities (for example, DataItems or structured data rows) in batch.

Used in: BatchPredictionJob

message ContainerRegistryDestination

io.proto:98

The Container Registry location for the container image.

Used in: ExportModelRequest.OutputConfig

message ContainerSpec

custom_job.proto:315

The spec of a Container.

Used in: WorkerPoolSpec

message Content

content.proto:82

The base structured datatype containing multi-part content of a message. A `Content` includes a `role` field designating the producer of the `Content` and a `parts` field containing multi-part data that contains the content of the message turn.

Used in: AugmentPromptRequest, AugmentPromptResponse, CachedContent, Candidate, ComputeTokensRequest, CorroborateContentRequest, CountTokensRequest, GenerateContentRequest, SupervisedTuningDataStats
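
A minimal `Content` sketch in proto text format showing the `role`/`parts` shape described above; the prompt text is illustrative:

```proto
# Illustrative Content: a single user turn with one text part.
role: "user"
parts {
  text: "Summarize the attached document in two sentences."
}
```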

message Context

context.proto:33

Instance of a general context.

Used as response type in: MetadataService.CreateContext, MetadataService.GetContext, MetadataService.UpdateContext

Used as field type in: CreateContextRequest, ListContextsResponse, PipelineJobDetail, UpdateContextRequest

message CopyModelOperationMetadata

model_service.proto:776

Details of [ModelService.CopyModel][google.cloud.aiplatform.v1.ModelService.CopyModel] operation.

message CopyModelResponse

model_service.proto:784

Response message of [ModelService.CopyModel][google.cloud.aiplatform.v1.ModelService.CopyModel] operation.

message CorpusStatus

vertex_rag_data.proto:155

RagCorpus status.

Used in: RagCorpus

enum CorpusStatus.State

vertex_rag_data.proto:157

RagCorpus life state.

Used in: CorpusStatus

message CorroborateContentRequest.Parameters

vertex_rag_service.proto:233

Parameters that can be overridden per request.

Used in: CorroborateContentRequest

message CreateDatasetOperationMetadata

dataset_service.proto:292

Runtime operation information for [DatasetService.CreateDataset][google.cloud.aiplatform.v1.DatasetService.CreateDataset].

message CreateDatasetVersionOperationMetadata

dataset_service.proto:510

Runtime operation information for [DatasetService.CreateDatasetVersion][google.cloud.aiplatform.v1.DatasetService.CreateDatasetVersion].

message CreateDeploymentResourcePoolOperationMetadata

deployment_resource_pool_service.proto:142

Runtime operation information for CreateDeploymentResourcePool method.

message CreateEndpointOperationMetadata

endpoint_service.proto:193

Runtime operation information for [EndpointService.CreateEndpoint][google.cloud.aiplatform.v1.EndpointService.CreateEndpoint].

message CreateEntityTypeOperationMetadata

featurestore_service.proto:1347

Details of operations that perform create EntityType.

message CreateFeatureGroupOperationMetadata

feature_registry_service.proto:329

Details of operations that perform create FeatureGroup.

message CreateFeatureOnlineStoreOperationMetadata

feature_online_store_admin_service.proto:512

Details of operations that perform create FeatureOnlineStore.

message CreateFeatureOperationMetadata

featurestore_service.proto:1353

Details of operations that perform create Feature.

message CreateFeatureRequest

featurestore_service.proto:956

Request message for [FeaturestoreService.CreateFeature][google.cloud.aiplatform.v1.FeaturestoreService.CreateFeature]. Request message for [FeatureRegistryService.CreateFeature][google.cloud.aiplatform.v1.FeatureRegistryService.CreateFeature].

Used as request type in: FeatureRegistryService.CreateFeature, FeaturestoreService.CreateFeature

Used as field type in: BatchCreateFeaturesRequest

message CreateFeatureViewOperationMetadata

feature_online_store_admin_service.proto:524

Details of operations that perform create FeatureView.

message CreateFeaturestoreOperationMetadata

featurestore_service.proto:1287

Details of operations that perform create Featurestore.

message CreateIndexEndpointOperationMetadata

index_endpoint_service.proto:159

Runtime operation information for [IndexEndpointService.CreateIndexEndpoint][google.cloud.aiplatform.v1.IndexEndpointService.CreateIndexEndpoint].

message CreateIndexOperationMetadata

index_service.proto:137

Runtime operation information for [IndexService.CreateIndex][google.cloud.aiplatform.v1.IndexService.CreateIndex].

message CreateMetadataStoreOperationMetadata

metadata_service.proto:418

Details of operations that perform [MetadataService.CreateMetadataStore][google.cloud.aiplatform.v1.MetadataService.CreateMetadataStore].

message CreateNotebookExecutionJobOperationMetadata

notebook_service.proto:670

Metadata information for [NotebookService.CreateNotebookExecutionJob][google.cloud.aiplatform.v1.NotebookService.CreateNotebookExecutionJob].

message CreateNotebookExecutionJobRequest

notebook_service.proto:650

Request message for [NotebookService.CreateNotebookExecutionJob]

Used as request type in: NotebookService.CreateNotebookExecutionJob

Used as field type in: Schedule

message CreateNotebookRuntimeTemplateOperationMetadata

notebook_service.proto:271

Metadata information for [NotebookService.CreateNotebookRuntimeTemplate][google.cloud.aiplatform.v1.NotebookService.CreateNotebookRuntimeTemplate].

message CreatePersistentResourceOperationMetadata

persistent_resource_service.proto:143

Details of operations that perform create PersistentResource.

message CreatePipelineJobRequest

pipeline_service.proto:340

Request message for [PipelineService.CreatePipelineJob][google.cloud.aiplatform.v1.PipelineService.CreatePipelineJob].

Used as request type in: PipelineService.CreatePipelineJob

Used as field type in: Schedule

message CreateRagCorpusOperationMetadata

vertex_rag_data_service.proto:386

Runtime operation information for [VertexRagDataService.CreateRagCorpus][google.cloud.aiplatform.v1.VertexRagDataService.CreateRagCorpus].

message CreateReasoningEngineOperationMetadata

reasoning_engine_service.proto:121

Details of [ReasoningEngineService.CreateReasoningEngine][google.cloud.aiplatform.v1.ReasoningEngineService.CreateReasoningEngine] operation.

message CreateRegistryFeatureOperationMetadata

feature_registry_service.proto:341

Details of operations that perform create Feature.

message CreateSpecialistPoolOperationMetadata

specialist_pool_service.proto:125

Runtime operation information for [SpecialistPoolService.CreateSpecialistPool][google.cloud.aiplatform.v1.SpecialistPoolService.CreateSpecialistPool].

message CreateTensorboardOperationMetadata

tensorboard_service.proto:1155

Details of operations that perform create Tensorboard.

message CreateTensorboardRunRequest

tensorboard_service.proto:695

Request message for [TensorboardService.CreateTensorboardRun][google.cloud.aiplatform.v1.TensorboardService.CreateTensorboardRun].

Used as request type in: TensorboardService.CreateTensorboardRun

Used as field type in: BatchCreateTensorboardRunsRequest

message CreateTensorboardTimeSeriesRequest

tensorboard_service.proto:870

Request message for [TensorboardService.CreateTensorboardTimeSeries][google.cloud.aiplatform.v1.TensorboardService.CreateTensorboardTimeSeries].

Used as request type in: TensorboardService.CreateTensorboardTimeSeries

Used as field type in: BatchCreateTensorboardTimeSeriesRequest

message CsvDestination

io.proto:86

The storage details for CSV output content.

Used in: FeatureValueDestination

message CsvSource

io.proto:38

The storage details for CSV input content.

Used in: BatchReadFeatureValuesRequest, EntityIdSelector, ImportFeatureValuesRequest

message CustomJob

custom_job.proto:42

Represents a job that runs custom workloads such as a Docker container or a Python package. A CustomJob can have multiple worker pools and each worker pool can have its own machine and input spec. A CustomJob will be cleaned up once the job enters terminal state (failed or succeeded).

Used as response type in: JobService.CreateCustomJob, JobService.GetCustomJob

Used as field type in: CreateCustomJobRequest, ListCustomJobsResponse

message CustomJobSpec

custom_job.proto:121

Represents the spec of a CustomJob.

Used in: CustomJob, HyperparameterTuningJob, NasJobSpec.MultiTrialAlgorithmSpec.SearchTrialSpec, NasJobSpec.MultiTrialAlgorithmSpec.TrainTrialSpec

message DataItem

data_item.proto:34

A piece of data in a Dataset. Could be an image, a video, a document or plain text.

Used in: DataItemView, ListDataItemsResponse

message DataItemView

dataset_service.proto:767

A container for a single DataItem and Annotations on it.

Used in: SearchDataItemsResponse

message DataLabelingJob

data_labeling_job.proto:38

DataLabelingJob is used to trigger a human labeling job on unlabeled data from a Dataset.

Used as response type in: JobService.CreateDataLabelingJob, JobService.GetDataLabelingJob

Used as field type in: CreateDataLabelingJobRequest, ListDataLabelingJobsResponse

message Dataset

dataset.proto:36

A collection of DataItems and Annotations on them.

Used as response type in: DatasetService.GetDataset, DatasetService.UpdateDataset

Used as field type in: CreateDatasetRequest, ListDatasetsResponse, UpdateDatasetRequest

message DatasetVersion

dataset_version.proto:33

Describes the dataset version.

Used as response type in: DatasetService.GetDatasetVersion, DatasetService.UpdateDatasetVersion

Used as field type in: CreateDatasetVersionRequest, ListDatasetVersionsResponse, UpdateDatasetVersionRequest

message DedicatedResources

machine_resources.proto:71

A description of resources that are dedicated to a DeployedModel, and that need a higher degree of manual configuration.

Used in: DeployedIndex, DeployedModel, DeploymentResourcePool, PublisherModel.CallToAction.Deploy
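
A sketch of a dedicated-resources configuration in proto text format; the machine shape and replica counts are illustrative:

```proto
# Illustrative DedicatedResources for a DeployedModel: fixed machine
# shape, autoscaling between 1 and 4 replicas.
machine_spec {
  machine_type: "n1-standard-4"
}
min_replica_count: 1
max_replica_count: 4
```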

message DeleteFeatureRequest

featurestore_service.proto:1273

Request message for [FeaturestoreService.DeleteFeature][google.cloud.aiplatform.v1.FeaturestoreService.DeleteFeature]. Request message for [FeatureRegistryService.DeleteFeature][google.cloud.aiplatform.v1.FeatureRegistryService.DeleteFeature].

Used as request type in: FeatureRegistryService.DeleteFeature, FeaturestoreService.DeleteFeature

message DeleteFeatureValuesOperationMetadata

featurestore_service.proto:1341

Details of operations that delete Feature values.

message DeleteFeatureValuesRequest.SelectEntity

featurestore_service.proto:1370

Message to select an entity. If an entity ID is selected, all the feature values corresponding to that entity ID will be deleted, including the entity ID itself.

Used in: DeleteFeatureValuesRequest

message DeleteFeatureValuesRequest.SelectTimeRangeAndFeature

featurestore_service.proto:1383

Message to select time range and feature. Values of the selected feature generated within an inclusive time range will be deleted. Using this option permanently deletes the feature values from the specified feature IDs within the specified time range. This might include data from the online storage. If you want to retain any deleted historical data in the online storage, you must re-ingest it.

Used in: DeleteFeatureValuesRequest

message DeleteFeatureValuesResponse

featurestore_service.proto:1423

Response message for [FeaturestoreService.DeleteFeatureValues][google.cloud.aiplatform.v1.FeaturestoreService.DeleteFeatureValues].

message DeleteFeatureValuesResponse.SelectEntity

featurestore_service.proto:1425

Response message if the request uses the SelectEntity option.

Used in: DeleteFeatureValuesResponse

message DeleteFeatureValuesResponse.SelectTimeRangeAndFeature

featurestore_service.proto:1437

Response message if the request uses the SelectTimeRangeAndFeature option.

Used in: DeleteFeatureValuesResponse

message DeleteMetadataStoreOperationMetadata

metadata_service.proto:497

Details of operations that perform [MetadataService.DeleteMetadataStore][google.cloud.aiplatform.v1.MetadataService.DeleteMetadataStore].

message DeleteOperationMetadata

operation.proto:52

Details of operations that perform deletes of any entities.

message DeployIndexOperationMetadata

index_endpoint_service.proto:291

Runtime operation information for [IndexEndpointService.DeployIndex][google.cloud.aiplatform.v1.IndexEndpointService.DeployIndex].

message DeployIndexResponse

index_endpoint_service.proto:284

Response message for [IndexEndpointService.DeployIndex][google.cloud.aiplatform.v1.IndexEndpointService.DeployIndex].

message DeployModelOperationMetadata

endpoint_service.proto:372

Runtime operation information for [EndpointService.DeployModel][google.cloud.aiplatform.v1.EndpointService.DeployModel].

message DeployModelResponse

endpoint_service.proto:365

Response message for [EndpointService.DeployModel][google.cloud.aiplatform.v1.EndpointService.DeployModel].

message DeployedIndex

index_endpoint.proto:140

A deployment of an Index. IndexEndpoints contain one or more DeployedIndexes.

Used in: DeployIndexRequest, DeployIndexResponse, IndexEndpoint, MutateDeployedIndexRequest, MutateDeployedIndexResponse

message DeployedIndexAuthConfig

index_endpoint.proto:272

Used to set up the auth on the DeployedIndex's private endpoint.

Used in: DeployedIndex

message DeployedIndexAuthConfig.AuthProvider

index_endpoint.proto:276

Configuration for an authentication provider, including support for [JSON Web Token (JWT)](https://tools.ietf.org/html/draft-ietf-oauth-json-web-token-32).

Used in: DeployedIndexAuthConfig

message DeployedIndexRef

deployed_index_ref.proto:31

Points to a DeployedIndex.

Used in: Index

message DeployedModel

endpoint.proto:182

A deployment of a Model. Endpoints contain one or more DeployedModels.

Used in: DeployModelRequest, DeployModelResponse, Endpoint, MutateDeployedModelRequest, MutateDeployedModelResponse, QueryDeployedModelsResponse

message DeployedModel.Status

endpoint.proto:184

Runtime status of the deployed model.

Used in: DeployedModel

message DeployedModelRef

deployed_model_ref.proto:31

Points to a DeployedModel.

Used in: Model, QueryDeployedModelsResponse

message DeploymentResourcePool

deployment_resource_pool.proto:35

A description of resources that can be shared by multiple DeployedModels, whose underlying specification consists of a DedicatedResources.

Used as response type in: DeploymentResourcePoolService.GetDeploymentResourcePool

Used as field type in: CreateDeploymentResourcePoolRequest, ListDeploymentResourcePoolsResponse, UpdateDeploymentResourcePoolRequest

message DestinationFeatureSetting

featurestore_service.proto:742

Used in: BatchReadFeatureValuesRequest.EntityTypeSpec, ExportFeatureValuesRequest

message DirectUploadSource

io.proto:141

The input content is encapsulated and uploaded in the request.

Used in: RagFile

(message has no fields)

message DiskSpec

machine_resources.proto:198

Represents the spec of disk options.

Used in: ResourcePool, WorkerPoolSpec

message DoubleArray

types.proto:34

A list of double values.

Used in: FeatureValue

message DynamicRetrievalConfig

tool.proto:290

Describes the options to customize dynamic retrieval.

Used in: GoogleSearchRetrieval

enum DynamicRetrievalConfig.Mode

tool.proto:292

The mode of the predictor to be used in dynamic retrieval.

Used in: DynamicRetrievalConfig

message EncryptionSpec

encryption_spec.proto:31

Represents a customer-managed encryption key spec that can be applied to a top-level resource.

Used in: BatchPredictionJob, CachedContent, CopyModelRequest, CustomJob, DataLabelingJob, Dataset, DeploymentResourcePool, Endpoint, FeatureOnlineStore, Featurestore, HyperparameterTuningJob, Index, IndexEndpoint, MetadataStore, Model, ModelDeploymentMonitoringJob, NasJob, NotebookExecutionJob, NotebookRuntime, NotebookRuntimeTemplate, PersistentResource, PipelineJob, Tensorboard, TrainingPipeline, TuningJob

message Endpoint

endpoint.proto:39

Models are deployed into an Endpoint, which is then called to obtain predictions and explanations.

Used as response type in: EndpointService.GetEndpoint, EndpointService.UpdateEndpoint

Used as field type in: CreateEndpointRequest, ListEndpointsResponse, UpdateEndpointLongRunningRequest, UpdateEndpointRequest

message EnterpriseWebSearch

tool.proto:287

Tool to search public web data, powered by Vertex AI Search and Sec4 compliance.

Used in: Tool

(message has no fields)

message EntityIdSelector

featurestore_service.proto:1468

Selector for entityId. Gets entity IDs from the given source.

Used in: DeleteFeatureValuesRequest.SelectEntity

message EntityType

entity_type.proto:35

An entity type is a kind of object in a system that needs to be modeled and about which information is stored. For example, driver is an entity type, and driver0 is an instance of the entity type driver.

Used as response type in: FeaturestoreService.GetEntityType, FeaturestoreService.UpdateEntityType

Used as field type in: CreateEntityTypeRequest, ListEntityTypesResponse, UpdateEntityTypeRequest

message EnvVar

env_var.proto:30

Represents an environment variable present in a Container or Python Module.

Used in: ContainerSpec, ModelContainerSpec, NotebookSoftwareConfig, PythonPackageSpec, ReasoningEngineSpec.DeploymentSpec

message ErrorAnalysisAnnotation

evaluated_annotation.proto:141

Model error analysis for each annotation.

Used in: EvaluatedAnnotation

message ErrorAnalysisAnnotation.AttributedItem

evaluated_annotation.proto:144

Attributed items for a given annotation, typically representing neighbors from the training sets constrained by the query type.

Used in: ErrorAnalysisAnnotation

enum ErrorAnalysisAnnotation.QueryType

evaluated_annotation.proto:154

The query type used for finding the attributed items.

Used in: ErrorAnalysisAnnotation

message EvaluatedAnnotation

evaluated_annotation.proto:35

True positive, false positive, or false negative. EvaluatedAnnotation is only available under ModelEvaluationSlice with slice of `annotationSpec` dimension.

Used in: BatchImportEvaluatedAnnotationsRequest

enum EvaluatedAnnotation.EvaluatedAnnotationType

evaluated_annotation.proto:37

Describes the type of the EvaluatedAnnotation.

Used in: EvaluatedAnnotation

message EvaluatedAnnotationExplanation

evaluated_annotation.proto:127

Explanation result of the prediction produced by the Model.

Used in: EvaluatedAnnotation

message Event

event.proto:33

An edge describing the relationship between an Artifact and an Execution in a lineage graph.

Used in: AddExecutionEventsRequest, LineageSubgraph

enum Event.Type

event.proto:35

Describes whether an Event's Artifact is the Execution's input or output.

Used in: Event

message ExactMatchInput

evaluation_service.proto:256

Input for exact match metric.

Used in: EvaluateInstancesRequest

message ExactMatchInstance

evaluation_service.proto:266

Spec for exact match instance.

Used in: ExactMatchInput

message ExactMatchMetricValue

evaluation_service.proto:286

Exact match metric value for an instance.

Used in: ExactMatchResults

message ExactMatchResults

evaluation_service.proto:279

Results for exact match metric.

Used in: EvaluateInstancesResponse

message ExactMatchSpec

evaluation_service.proto:276

Spec for exact match metric - returns 1 if prediction and reference match exactly, otherwise 0.

Used in: ExactMatchInput

(message has no fields)
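The comparison rule above is simple enough to sketch in a few lines; this is an illustrative reimplementation, not the service's code, and the aggregate's dict shape is an assumption for the sketch:

```python
def exact_match(prediction: str, reference: str) -> int:
    # Per the ExactMatchSpec description: 1 if prediction and reference
    # match exactly, otherwise 0.
    return 1 if prediction == reference else 0


def exact_match_results(pairs):
    # Aggregate per-instance values, loosely mirroring ExactMatchResults
    # (one metric value per instance, plus a mean for convenience).
    values = [exact_match(p, r) for p, r in pairs]
    return {"values": values, "mean": sum(values) / len(values)}
```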

message Examples

explanation.proto:433

Example-based explainability that returns the nearest neighbors from the provided dataset.

Used in: ExplanationParameters, UpdateExplanationDatasetRequest

message Examples.ExampleGcsSource

explanation.proto:435

The Cloud Storage input instances.

Used in: Examples

enum Examples.ExampleGcsSource.DataFormat

explanation.proto:437

The format of the input example instances.

Used in: ExampleGcsSource

message ExamplesOverride

explanation.proto:555

Overrides for example-based explanations.

Used in: ExplanationSpecOverride

enum ExamplesOverride.DataFormat

explanation.proto:557

Data format enum.

Used in: ExamplesOverride

message ExamplesRestrictionsNamespace

explanation.proto:585

Restrictions namespace for example-based explanations overrides.

Used in: ExamplesOverride

message ExecutableCode

tool.proto:161

Code generated by the model that is meant to be executed, and the result returned to the model. Generated when using the [FunctionDeclaration] tool and [FunctionCallingConfig] mode is set to [Mode.CODE].

Used in: Part

enum ExecutableCode.Language

tool.proto:163

Supported programming languages for the generated code.

Used in: ExecutableCode

message Execution

execution.proto:33

Instance of a general execution.

Used as response type in: MetadataService.CreateExecution, MetadataService.GetExecution, MetadataService.UpdateExecution

Used as field type in: CreateExecutionRequest, LineageSubgraph, ListExecutionsResponse, PipelineTaskDetail, UpdateExecutionRequest

enum Execution.State

execution.proto:40

Describes the state of the Execution.

Used in: Execution

message Explanation

explanation.proto:36

Explanation of a prediction (provided in [PredictResponse.predictions][google.cloud.aiplatform.v1.PredictResponse.predictions]) produced by the Model on a given [instance][google.cloud.aiplatform.v1.ExplainRequest.instances].

Used in: EvaluatedAnnotationExplanation, ExplainResponse

message ExplanationMetadata

explanation_metadata.proto:31

Metadata describing the Model's input and output for explanation.

Used in: ExplanationSpec

message ExplanationMetadata.InputMetadata

explanation_metadata.proto:38

Metadata of the input of a feature. Fields other than [InputMetadata.input_baselines][google.cloud.aiplatform.v1.ExplanationMetadata.InputMetadata.input_baselines] are applicable only for Models that are using Vertex AI-provided images for Tensorflow.

Used in: ExplanationMetadata

enum ExplanationMetadata.InputMetadata.Encoding

explanation_metadata.proto:189

Defines how a feature is encoded. Defaults to IDENTITY.

Used in: InputMetadata

message ExplanationMetadata.InputMetadata.FeatureValueDomain

explanation_metadata.proto:47

Domain details of the input feature value. Provides numeric information about the feature, such as its range (min, max). If the feature has been pre-processed, for example with z-scoring, it also provides information about how to recover the original feature. For example, if the input feature is an image that has been pre-processed to obtain 0-mean and stddev = 1 values, then original_mean and original_stddev refer to the mean and stddev of the original feature (e.g. the image tensor) from which the input feature (with mean = 0 and stddev = 1) was obtained.

Used in: InputMetadata

message ExplanationMetadata.InputMetadata.Visualization

explanation_metadata.proto:66

Visualization configurations for image explanation.

Used in: InputMetadata

enum ExplanationMetadata.InputMetadata.Visualization.ColorMap

explanation_metadata.proto:101

The color scheme used for highlighting areas.

Used in: Visualization

enum ExplanationMetadata.InputMetadata.Visualization.OverlayType

explanation_metadata.proto:127

How the original image is displayed in the visualization.

Used in: Visualization

enum ExplanationMetadata.InputMetadata.Visualization.Polarity

explanation_metadata.proto:84

Whether to highlight only pixels with positive contributions, only negative, or both. Defaults to POSITIVE.

Used in: Visualization

enum ExplanationMetadata.InputMetadata.Visualization.Type

explanation_metadata.proto:70

Type of the image visualization. Only applicable to [Integrated Gradients attribution][google.cloud.aiplatform.v1.ExplanationParameters.integrated_gradients_attribution].

Used in: Visualization

message ExplanationMetadata.OutputMetadata

explanation_metadata.proto:338

Metadata of the prediction output to be explained.

Used in: ExplanationMetadata

message ExplanationMetadataOverride

explanation.proto:530

The [ExplanationMetadata][google.cloud.aiplatform.v1.ExplanationMetadata] entries that can be overridden at [online explanation][google.cloud.aiplatform.v1.PredictionService.Explain] time.

Used in: ExplanationSpecOverride

message ExplanationMetadataOverride.InputMetadataOverride

explanation.proto:534

The [input metadata][google.cloud.aiplatform.v1.ExplanationMetadata.InputMetadata] entries to be overridden.

Used in: ExplanationMetadataOverride

message ExplanationParameters

explanation.proto:230

Parameters to configure explaining for Model's predictions.

Used in: ExplanationSpec, ExplanationSpecOverride

message ExplanationSpec

explanation.proto:221

Specification of Model explanation.

Used in: BatchPredictionJob, DeployedModel, Model, ModelEvaluation.ModelEvaluationExplanationSpec

message ExplanationSpecOverride

explanation.proto:514

The [ExplanationSpec][google.cloud.aiplatform.v1.ExplanationSpec] entries that can be overridden at [online explanation][google.cloud.aiplatform.v1.PredictionService.Explain] time.

Used in: ExplainRequest

message ExportDataConfig

dataset.proto:174

Describes what part of the Dataset is to be exported, the destination of the export and how to export.

Used in: ExportDataRequest

enum ExportDataConfig.ExportUse

dataset.proto:179

ExportUse indicates the usage of the exported files. It restricts the file destination, the format, which annotations are exported, whether unannotated data may be exported, and whether to clone files to a temporary Cloud Storage bucket.

Used in: ExportDataConfig

message ExportDataOperationMetadata

dataset_service.proto:480

Runtime operation information for [DatasetService.ExportData][google.cloud.aiplatform.v1.DatasetService.ExportData].

message ExportDataResponse

dataset_service.proto:465

Response message for [DatasetService.ExportData][google.cloud.aiplatform.v1.DatasetService.ExportData].

message ExportFeatureValuesOperationMetadata

featurestore_service.proto:1329

Details of operations that export Feature values.

message ExportFeatureValuesRequest.FullExport

featurestore_service.proto:698

Describes exporting all historical Feature values of all entities of the EntityType between [start_time, end_time].

Used in: ExportFeatureValuesRequest

message ExportFeatureValuesRequest.SnapshotExport

featurestore_service.proto:684

Describes exporting the latest Feature values of all entities of the EntityType between [start_time, snapshot_time].

Used in: ExportFeatureValuesRequest

message ExportFeatureValuesResponse

featurestore_service.proto:782

Response message for [FeaturestoreService.ExportFeatureValues][google.cloud.aiplatform.v1.FeaturestoreService.ExportFeatureValues].

(message has no fields)

message ExportFilterSplit

dataset.proto:292

Assigns input data to training, validation, and test sets based on the given filters; data pieces not matched by any filter are ignored. Currently supported only for Datasets containing DataItems. If any of the filters in this message is meant to match nothing, it can be set to '-' (the minus sign). Supported only for unstructured Datasets.

Used in: ExportDataConfig

message ExportFractionSplit

dataset.proto:274

Assigns the input data to training, validation, and test sets as per the given fractions. Any of `training_fraction`, `validation_fraction`, and `test_fraction` may optionally be provided; together they must sum to at most 1. If they sum to less than 1, the remainder is assigned to the sets as decided by Vertex AI. If none of the fractions are set, by default roughly 80% of the data is used for training, 10% for validation, and 10% for test.

Used in: ExportDataConfig

message ExportModelOperationMetadata

model_service.proto:694

Details of [ModelService.ExportModel][google.cloud.aiplatform.v1.ModelService.ExportModel] operation.

message ExportModelOperationMetadata.OutputInfo

model_service.proto:697

Further describes the output of the ExportModel. Supplements [ExportModelRequest.OutputConfig][google.cloud.aiplatform.v1.ExportModelRequest.OutputConfig].

Used in: ExportModelOperationMetadata

message ExportModelRequest.OutputConfig

model_service.proto:652

Output configuration for the Model export.

Used in: ExportModelRequest

message ExportModelResponse

model_service.proto:724

Response message of [ModelService.ExportModel][google.cloud.aiplatform.v1.ModelService.ExportModel] operation.

(message has no fields)

message Fact

vertex_rag_service.proto:275

The fact used in grounding.

Used in: AugmentPromptResponse, CorroborateContentRequest

message FasterDeploymentConfig

endpoint.proto:375

Configuration for faster model deployment.

Used in: DeployedModel

message Feature

feature.proto:34

Feature Metadata information. For example, color is a feature that describes an apple.

Used as response type in: FeatureRegistryService.GetFeature, FeaturestoreService.GetFeature, FeaturestoreService.UpdateFeature

Used as field type in: BatchCreateFeaturesResponse, CreateFeatureRequest, ListFeaturesResponse, SearchFeaturesResponse, UpdateFeatureRequest

message Feature.MonitoringStatsAnomaly

feature.proto:50

A list of historical [SnapshotAnalysis][google.cloud.aiplatform.v1.FeaturestoreMonitoringConfig.SnapshotAnalysis] or [ImportFeaturesAnalysis][google.cloud.aiplatform.v1.FeaturestoreMonitoringConfig.ImportFeaturesAnalysis] stats requested by user, sorted by [FeatureStatsAnomaly.start_time][google.cloud.aiplatform.v1.FeatureStatsAnomaly.start_time] descending.

Used in: Feature

enum Feature.MonitoringStatsAnomaly.Objective

feature.proto:55

If the objective in the request is both Import Feature Analysis and Snapshot Analysis, this objective could be one of them. Otherwise, this objective should be the same as the objective in the request.

Used in: MonitoringStatsAnomaly

enum Feature.ValueType

feature.proto:76

Only applicable for Vertex AI Legacy Feature Store. An enum representing the value type of a feature.

Used in: Feature

message FeatureGroup

feature_group.proto:33

Vertex AI Feature Group.

Used as response type in: FeatureRegistryService.GetFeatureGroup

Used as field type in: CreateFeatureGroupRequest, ListFeatureGroupsResponse, UpdateFeatureGroupRequest

message FeatureGroup.BigQuery

feature_group.proto:42

Input source type for BigQuery Tables and Views.

Used in: FeatureGroup

message FeatureGroup.BigQuery.TimeSeries

feature_group.proto:43

Used in: BigQuery

message FeatureNoiseSigma

explanation.proto:397

Noise sigma by feature. Noise sigma represents the standard deviation of the Gaussian kernel that will be used to add noise to interpolated inputs prior to computing gradients.

Used in: SmoothGradConfig

message FeatureNoiseSigma.NoiseSigmaForFeature

explanation.proto:399

Noise sigma for a single feature.

Used in: FeatureNoiseSigma
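The per-feature noise described above can be sketched as follows: a minimal illustration of perturbing inputs with Gaussian noise of a per-feature sigma before gradients are computed (SmoothGrad-style). The helper name, the dict-of-lists input shape, and the zero-sigma default for unlisted features are assumptions of this sketch:

```python
import random


def add_feature_noise(inputs, noise_sigma, n_samples=8, seed=0):
    """Draw n_samples noisy copies of `inputs`.

    inputs: dict of feature name -> list of floats
    noise_sigma: dict of feature name -> stddev (cf. NoiseSigmaForFeature);
                 features not listed get sigma 0 and pass through unchanged.
    """
    rng = random.Random(seed)
    samples = []
    for _ in range(n_samples):
        samples.append({
            name: [x + rng.gauss(0.0, noise_sigma.get(name, 0.0)) for x in values]
            for name, values in inputs.items()
        })
    return samples
```

A SmoothGrad-style method would then average attributions computed on each noisy sample.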

message FeatureOnlineStore

feature_online_store.proto:36

Vertex AI Feature Online Store provides a centralized repository for serving ML features and embedding indexes at low latency. The Feature Online Store is a top-level container.

Used as response type in: FeatureOnlineStoreAdminService.GetFeatureOnlineStore

Used as field type in: CreateFeatureOnlineStoreRequest, ListFeatureOnlineStoresResponse, UpdateFeatureOnlineStoreRequest

message FeatureOnlineStore.Bigtable

feature_online_store.proto:42

Used in: FeatureOnlineStore

message FeatureOnlineStore.Bigtable.AutoScaling

feature_online_store.proto:43

Used in: Bigtable

message FeatureOnlineStore.DedicatedServingEndpoint

feature_online_store.proto:71

The dedicated serving endpoint for this FeatureOnlineStore. Needs to be set only when the Optimized storage type is chosen. A public endpoint is provisioned by default.

Used in: FeatureOnlineStore

message FeatureOnlineStore.Optimized

feature_online_store.proto:66

Optimized storage type.

Used in: FeatureOnlineStore

(message has no fields)

enum FeatureOnlineStore.State

feature_online_store.proto:92

Possible states a FeatureOnlineStore can have.

Used in: FeatureOnlineStore

message FeatureSelector

feature_selector.proto:41

Selector for Features of an EntityType.

Used in: BatchReadFeatureValuesRequest.EntityTypeSpec, DeleteFeatureValuesRequest.SelectTimeRangeAndFeature, ExportFeatureValuesRequest, ReadFeatureValuesRequest, StreamingReadFeatureValuesRequest

message FeatureStatsAnomaly

feature_monitoring_stats.proto:38

Stats and anomalies generated at a specific timestamp for a specific Feature. The start_time and end_time define the time range of the dataset the current stats belong to; for example, prediction traffic is bucketed into prediction datasets by time window. If the dataset is not defined by a time window, start_time = end_time. The timestamp of the stats and anomalies always refers to end_time. Raw stats and anomalies are stored in stats_uri or anomaly_uri in the TensorFlow-defined protos. The data_stats field contains almost identical information to the raw stats, in a Vertex AI-defined proto, for the UI to display.

Used in: Feature.MonitoringStatsAnomaly, ModelMonitoringStatsAnomalies.FeatureHistoricStatsAnomalies

message FeatureValue

featurestore_online_service.proto:235

Value for a feature.

Used in: FeatureValueList, FetchFeatureValuesResponse.FeatureNameValuePairList.FeatureNameValuePair, ReadFeatureValuesResponse.EntityView.Data, StructFieldValue, WriteFeatureValuesPayload

message FeatureValue.Metadata

featurestore_online_service.proto:237

Metadata of feature value.

Used in: FeatureValue

message FeatureValueDestination

featurestore_service.proto:752

A destination location for Feature values and format.

Used in: BatchReadFeatureValuesRequest, ExportFeatureValuesRequest

message FeatureValueList

featurestore_online_service.proto:300

Container for list of values.

Used in: ReadFeatureValuesResponse.EntityView.Data

message FeatureView

feature_view.proto:34

A FeatureView is a representation of the values that the FeatureOnlineStore will serve, based on its syncConfig.

Used as response type in: FeatureOnlineStoreAdminService.GetFeatureView

Used as field type in: CreateFeatureViewRequest, ListFeatureViewsResponse, UpdateFeatureViewRequest

message FeatureView.BigQuerySource

feature_view.proto:40

Used in: FeatureView

message FeatureView.FeatureRegistrySource

feature_view.proto:146

A Feature Registry source for features that need to be synced to Online Store.

Used in: FeatureView

message FeatureView.FeatureRegistrySource.FeatureGroup

feature_view.proto:149

Features belonging to a single feature group that will be synced to Online Store.

Used in: FeatureRegistrySource

message FeatureView.IndexConfig

feature_view.proto:66

Configuration for vector indexing.

Used in: FeatureView

message FeatureView.IndexConfig.BruteForceConfig

feature_view.proto:68

Configuration options for using brute force search.

Used in: IndexConfig

(message has no fields)

enum FeatureView.IndexConfig.DistanceMeasureType

feature_view.proto:79

The distance measure used in nearest neighbor search.

Used in: IndexConfig
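A brute-force search (cf. BruteForceConfig) over embedding vectors can be sketched with two of the distance measures this enum names. The value strings SQUARED_L2_DISTANCE and DOT_PRODUCT_DISTANCE follow the DistanceMeasureType naming; everything else in the sketch is illustrative:

```python
def nearest_neighbors(query, vectors, top_k=2, measure="SQUARED_L2_DISTANCE"):
    """Exhaustively rank `vectors` (dict of id -> list of floats) by
    distance to `query` and return the top_k closest ids."""
    def distance(a, b):
        if measure == "DOT_PRODUCT_DISTANCE":
            # Larger dot product means closer, so negate it to sort ascending.
            return -sum(x * y for x, y in zip(a, b))
        # Default: squared Euclidean distance.
        return sum((x - y) ** 2 for x, y in zip(a, b))

    ranked = sorted(vectors.items(), key=lambda kv: distance(query, kv[1]))
    return [vid for vid, _ in ranked[:top_k]]
```

Brute force guarantees exact results at O(n) cost per query, which is why it is typically reserved for small datasets or for measuring the recall of an approximate index such as tree-AH.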

message FeatureView.IndexConfig.TreeAHConfig

feature_view.proto:71

Configuration options for the tree-AH algorithm.

Used in: IndexConfig

message FeatureView.OptimizedConfig

feature_view.proto:185

Configuration for FeatureViews created in Optimized FeatureOnlineStore.

Used in: FeatureView

enum FeatureView.ServiceAgentType

feature_view.proto:196

Service agent type used during data sync.

Used in: FeatureView

message FeatureView.SyncConfig

feature_view.proto:51

Configuration for Sync. Only one option is set.

Used in: FeatureView

message FeatureView.VertexRagSource

feature_view.proto:167

A Vertex Rag source for features that need to be synced to Online Store.

Used in: FeatureView

enum FeatureViewDataFormat

feature_online_store_service.proto:63

Format of the data in the Feature View.

Used in: FetchFeatureValuesRequest

message FeatureViewDataKey

feature_online_store_service.proto:75

Lookup key for a feature view.

Used in: FetchFeatureValuesRequest, FetchFeatureValuesResponse

message FeatureViewDataKey.CompositeKey

feature_online_store_service.proto:77

An ID composed of several parts (columns).

Used in: FeatureViewDataKey
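For illustration, a composite key is an ordered list of parts. The sketch below joins the parts with an assumed '#' delimiter to form a single printable key; the actual message carries the parts as a repeated field rather than a joined string:

```python
def composite_key(parts, delimiter="#"):
    """Join ID parts (e.g. column values) into one display key.

    Rejects empty part lists and parts containing the delimiter,
    since those would make the joined key ambiguous.
    """
    if not parts or any(delimiter in p for p in parts):
        raise ValueError("parts must be non-empty and delimiter-free")
    return delimiter.join(parts)
```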

message FeatureViewSync

feature_view_sync.proto:35

FeatureViewSync represents a sync operation that copies data from the data source to the FeatureView in the Online Store.

Used as response type in: FeatureOnlineStoreAdminService.GetFeatureViewSync

Used as field type in: ListFeatureViewSyncsResponse

message FeatureViewSync.SyncSummary

feature_view_sync.proto:43

Summary from the Sync job. For continuous syncs, the summary is updated periodically. For batch syncs, it gets updated on completion of the sync.

Used in: FeatureViewSync

message Featurestore

featurestore.proto:35

Vertex AI Feature Store provides a centralized repository for organizing, storing, and serving ML features. The Featurestore is a top-level container for your features and their values.

Used as response type in: FeaturestoreService.GetFeaturestore

Used as field type in: CreateFeaturestoreRequest, ListFeaturestoresResponse, UpdateFeaturestoreRequest

message Featurestore.OnlineServingConfig

featurestore.proto:43

OnlineServingConfig specifies the details for provisioning online serving resources.

Used in: Featurestore

message Featurestore.OnlineServingConfig.Scaling

featurestore.proto:47

Online serving scaling configuration. If min_node_count and max_node_count are set to the same value, the cluster will be configured with a fixed number of nodes (no auto-scaling).

Used in: OnlineServingConfig

enum Featurestore.State

featurestore.proto:79

Possible states a featurestore can have.

Used in: Featurestore

message FeaturestoreMonitoringConfig

featurestore_monitoring.proto:28

Configuration of how features in Featurestore are monitored.

Used in: EntityType

message FeaturestoreMonitoringConfig.ImportFeaturesAnalysis

featurestore_monitoring.proto:62

Configuration of the Featurestore's ImportFeature Analysis Based Monitoring. This type of analysis generates statistics for values of each Feature imported by every [ImportFeatureValues][google.cloud.aiplatform.v1.FeaturestoreService.ImportFeatureValues] operation.

Used in: FeaturestoreMonitoringConfig

enum FeaturestoreMonitoringConfig.ImportFeaturesAnalysis.Baseline

featurestore_monitoring.proto:91

Defines the baseline to do anomaly detection for feature values imported by each [ImportFeatureValues][google.cloud.aiplatform.v1.FeaturestoreService.ImportFeatureValues] operation.

Used in: ImportFeaturesAnalysis

enum FeaturestoreMonitoringConfig.ImportFeaturesAnalysis.State

featurestore_monitoring.proto:64

The state defines whether to enable ImportFeature analysis.

Used in: ImportFeaturesAnalysis

message FeaturestoreMonitoringConfig.SnapshotAnalysis

featurestore_monitoring.proto:33

Configuration of the Featurestore's Snapshot Analysis Based Monitoring. This type of analysis generates statistics for each Feature based on a snapshot of the latest feature value of each entity every monitoring_interval.

Used in: FeaturestoreMonitoringConfig

message FeaturestoreMonitoringConfig.ThresholdConfig

featurestore_monitoring.proto:119

The config for Featurestore Monitoring threshold.

Used in: FeaturestoreMonitoringConfig

message FetchFeatureValuesResponse

feature_online_store_service.proto:118

Response message for [FeatureOnlineStoreService.FetchFeatureValues][google.cloud.aiplatform.v1.FeatureOnlineStoreService.FetchFeatureValues]

Used as response type in: FeatureOnlineStoreService.FetchFeatureValues

Used as field type in: NearestNeighbors.Neighbor

message FetchFeatureValuesResponse.FeatureNameValuePairList

feature_online_store_service.proto:121

Response structure in the format of key (feature name) and (feature) value pairs.

Used in: FetchFeatureValuesResponse

message FetchFeatureValuesResponse.FeatureNameValuePairList.FeatureNameValuePair

feature_online_store_service.proto:123

Feature name & value pair.

Used in: FeatureNameValuePairList

message FileData

content.proto:152

URI based data.

Used in: Part

message FileStatus

vertex_rag_data.proto:124

RagFile status.

Used in: RagFile

enum FileStatus.State

vertex_rag_data.proto:126

RagFile state.

Used in: FileStatus

message FilterSplit

training_pipeline.proto:353

Assigns input data to training, validation, and test sets based on the given filters; data pieces not matched by any filter are ignored. Currently supported only for Datasets containing DataItems. If any of the filters in this message is meant to match nothing, it can be set to '-' (the minus sign). Supported only for unstructured Datasets.

Used in: InputDataConfig

message FindNeighborsRequest.Query

match_service.proto:64

A query to find a number of the nearest neighbors (most similar vectors) of a vector.

Used in: FindNeighborsRequest

message FindNeighborsRequest.Query.RRF

match_service.proto:66

Parameters for RRF algorithm that combines search results.

Used in: Query
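Reciprocal Rank Fusion (RRF) scores each candidate by summing 1 / (k + rank) over the ranked result lists being combined, so items that rank well in several lists rise to the top. A minimal sketch, with k = 60 as an assumed constant (the service's exact parameters are not documented here):

```python
def reciprocal_rank_fusion(rankings, k=60):
    """Fuse several ranked candidate lists with RRF.

    rankings: list of lists of ids, best first.
    Returns ids sorted by descending fused score sum(1 / (k + rank)).
    """
    scores = {}
    for ranked in rankings:
        for rank, item in enumerate(ranked, start=1):
            scores[item] = scores.get(item, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)
```

This is how hybrid search commonly merges a dense (embedding) ranking with a sparse (keyword) ranking without having to normalize their incompatible raw scores.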

message FindNeighborsResponse.NearestNeighbors

match_service.proto:155

Nearest neighbors for one query.

Used in: FindNeighborsResponse

message FindNeighborsResponse.Neighbor

match_service.proto:140

A neighbor of the query vector.

Used in: NearestNeighbors

message FluencyInput

evaluation_service.proto:407

Input for fluency metric.

Used in: EvaluateInstancesRequest

message FluencyInstance

evaluation_service.proto:416

Spec for fluency instance.

Used in: FluencyInput

message FluencyResult

evaluation_service.proto:428

Spec for fluency result.

Used in: EvaluateInstancesResponse

message FluencySpec

evaluation_service.proto:422

Spec for fluency score metric.

Used in: FluencyInput

message FractionSplit

training_pipeline.proto:334

Assigns the input data to training, validation, and test sets as per the given fractions. Any of `training_fraction`, `validation_fraction`, and `test_fraction` may optionally be provided; together they must sum to at most 1. If they sum to less than 1, the remainder is assigned to the sets as decided by Vertex AI. If none of the fractions are set, by default roughly 80% of the data is used for training, 10% for validation, and 10% for test.

Used in: InputDataConfig
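The assignment rules above can be sketched as a small resolver; the helper name and the returned dict shape are illustrative, not part of the API:

```python
def resolve_fractions(training=None, validation=None, test=None):
    """Resolve FractionSplit fractions per the documented rules (sketch).

    If none are set, default to roughly 80/10/10; otherwise any
    remainder below 1.0 is left for Vertex AI to assign.
    """
    if training is None and validation is None and test is None:
        return {"training": 0.8, "validation": 0.1, "test": 0.1, "unassigned": 0.0}
    t, v, s = training or 0.0, validation or 0.0, test or 0.0
    total = t + v + s
    if total > 1.0:
        raise ValueError("fractions must sum to at most 1")
    # Round away float noise so 0.7 + 0.2 leaves exactly 0.1 unassigned.
    return {"training": t, "validation": v, "test": s,
            "unassigned": round(1.0 - total, 10)}
```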

message FulfillmentInput

evaluation_service.proto:510

Input for fulfillment metric.

Used in: EvaluateInstancesRequest

message FulfillmentInstance

evaluation_service.proto:519

Spec for fulfillment instance.

Used in: FulfillmentInput

message FulfillmentResult

evaluation_service.proto:534

Spec for fulfillment result.

Used in: EvaluateInstancesResponse

message FulfillmentSpec

evaluation_service.proto:528

Spec for fulfillment metric.

Used in: FulfillmentInput

message FunctionCall

tool.proto:130

A predicted [FunctionCall] returned from the model that contains a string representing the [FunctionDeclaration.name] and a structured JSON object containing the parameters and their values.

Used in: Part

message FunctionCallingConfig

tool.proto:319

Function calling config.

Used in: ToolConfig

enum FunctionCallingConfig.Mode

tool.proto:321

Function calling mode.

Used in: FunctionCallingConfig

message FunctionDeclaration

tool.proto:94

Structured representation of a function declaration as defined by the [OpenAPI 3.0 specification](https://spec.openapis.org/oas/v3.0.3). Included in this declaration are the function name, description, parameters and response type. This FunctionDeclaration is a representation of a block of code that can be used as a `Tool` by the model and executed by the client.

Used in: Tool
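A FunctionDeclaration pairs a name and description with an OpenAPI 3.0 parameters schema. The sketch below builds a declaration-shaped plain dict; the helper and the `get_weather` example are hypothetical, and only the field names follow the message described above:

```python
def make_function_declaration(name, description, properties, required):
    """Assemble a FunctionDeclaration-shaped dict with an
    OpenAPI 3.0 object schema for its parameters (illustrative)."""
    return {
        "name": name,
        "description": description,
        "parameters": {
            "type": "object",
            "properties": properties,
            "required": required,
        },
    }


# Hypothetical tool: the model can emit a FunctionCall naming
# "get_weather" with arguments matching this schema, and the client
# replies with a FunctionResponse carrying the result.
get_weather = make_function_declaration(
    "get_weather",
    "Look up current weather for a city.",
    {"city": {"type": "string", "description": "City name"}},
    ["city"],
)
```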

message FunctionResponse

tool.proto:144

The result output of a [FunctionCall]. It contains a string representing the [FunctionDeclaration.name] and a structured JSON object with any output from the function, which is used as context for the model. It should contain the result of a [FunctionCall] made based on model prediction.

Used in: Part

message GcsDestination

io.proto:52

The Google Cloud Storage location where the output is to be written.

Used in: BatchPredictionJob.OutputConfig, CsvDestination, CustomJobSpec, ExportDataConfig, ExportModelRequest.OutputConfig, ImportRagFilesConfig, InputDataConfig, ModelDeploymentMonitoringJob, ModelMonitoringObjectiveConfig.ExplanationConfig.ExplanationBaseline, RebaseTunedModelRequest, TFRecordDestination

message GcsSource

io.proto:44

The Google Cloud Storage location for the input content.

Used in: AvroSource, BatchPredictionJob.InputConfig, CsvSource, Examples.ExampleGcsSource, ImportDataConfig, ImportRagFilesConfig, ModelMonitoringObjectiveConfig.TrainingDataset, RagFile

message GenerateContentRequest

prediction_service.proto:679

Request message for [PredictionService.GenerateContent].

Used as request type in: PredictionService.GenerateContent, PredictionService.StreamGenerateContent

message GenerateContentResponse

prediction_service.proto:747

Response message for [PredictionService.GenerateContent].

Used as response type in: PredictionService.GenerateContent, PredictionService.StreamGenerateContent

message GenerateContentResponse.PromptFeedback

prediction_service.proto:749

Content filter results for a prompt sent in the request.

Used in: GenerateContentResponse

enum GenerateContentResponse.PromptFeedback.BlockedReason

prediction_service.proto:751

Blocked reason enumeration.

Used in: PromptFeedback

message GenerateContentResponse.UsageMetadata

prediction_service.proto:781

Usage metadata about response(s).

Used in: GenerateContentResponse

message GenerationConfig

content.proto:172

Generation config.

Used in: CountTokensRequest, GenerateContentRequest

message GenerationConfig.RoutingConfig

content.proto:174

The configuration for routing the request to a specific model.

Used in: GenerationConfig

message GenerationConfig.RoutingConfig.AutoRoutingMode

content.proto:178

When automated routing is specified, routing is determined by the pretrained routing model and the customer-provided model routing preference.

Used in: RoutingConfig

enum GenerationConfig.RoutingConfig.AutoRoutingMode.ModelRoutingPreference

content.proto:180

The model routing preference.

Used in: AutoRoutingMode

message GenerationConfig.RoutingConfig.ManualRoutingMode

content.proto:199

When manual routing is set, the specified model will be used directly.

Used in: RoutingConfig

message GenerationConfig.ThinkingConfig

content.proto:216

Config for thinking features.

Used in: GenerationConfig

message GenericOperationMetadata

operation.proto:32

Generic Metadata shared by all operations.

Used in: AssignNotebookRuntimeOperationMetadata, BatchCancelPipelineJobsOperationMetadata, BatchCreateFeaturesOperationMetadata, BatchMigrateResourcesOperationMetadata, BatchReadFeatureValuesOperationMetadata, CheckTrialEarlyStoppingStateMetatdata, CopyModelOperationMetadata, CreateDatasetOperationMetadata, CreateDatasetVersionOperationMetadata, CreateDeploymentResourcePoolOperationMetadata, CreateEndpointOperationMetadata, CreateEntityTypeOperationMetadata, CreateFeatureGroupOperationMetadata, CreateFeatureOnlineStoreOperationMetadata, CreateFeatureOperationMetadata, CreateFeatureViewOperationMetadata, CreateFeaturestoreOperationMetadata, CreateIndexEndpointOperationMetadata, CreateIndexOperationMetadata, CreateMetadataStoreOperationMetadata, CreateNotebookExecutionJobOperationMetadata, CreateNotebookRuntimeTemplateOperationMetadata, CreatePersistentResourceOperationMetadata, CreateRagCorpusOperationMetadata, CreateReasoningEngineOperationMetadata, CreateRegistryFeatureOperationMetadata, CreateSpecialistPoolOperationMetadata, CreateTensorboardOperationMetadata, DeleteFeatureValuesOperationMetadata, DeleteMetadataStoreOperationMetadata, DeleteOperationMetadata, DeployIndexOperationMetadata, DeployModelOperationMetadata, ExportDataOperationMetadata, ExportFeatureValuesOperationMetadata, ExportModelOperationMetadata, ImportDataOperationMetadata, ImportFeatureValuesOperationMetadata, ImportRagFilesOperationMetadata, MutateDeployedIndexOperationMetadata, MutateDeployedModelOperationMetadata, PurgeArtifactsMetadata, PurgeContextsMetadata, PurgeExecutionsMetadata, RebaseTunedModelOperationMetadata, RebootPersistentResourceOperationMetadata, RestoreDatasetVersionOperationMetadata, StartNotebookRuntimeOperationMetadata, StopNotebookRuntimeOperationMetadata, SuggestTrialsMetadata, UndeployIndexOperationMetadata, UndeployModelOperationMetadata, UpdateDeploymentResourcePoolOperationMetadata, UpdateEndpointOperationMetadata, UpdateExplanationDatasetOperationMetadata, 
UpdateFeatureGroupOperationMetadata, UpdateFeatureOnlineStoreOperationMetadata, UpdateFeatureOperationMetadata, UpdateFeatureViewOperationMetadata, UpdateFeaturestoreOperationMetadata, UpdateIndexOperationMetadata, UpdateModelDeploymentMonitoringJobOperationMetadata, UpdatePersistentResourceOperationMetadata, UpdateRagCorpusOperationMetadata, UpdateReasoningEngineOperationMetadata, UpdateSpecialistPoolOperationMetadata, UpdateTensorboardOperationMetadata, UpgradeNotebookRuntimeOperationMetadata, UploadModelOperationMetadata

message GenieSource

model.proto:512

Contains information about the source of the models generated from Generative AI Studio.

Used in: Model.BaseModelSource

message GetFeatureRequest

featurestore_service.proto:1018

Request message for [FeaturestoreService.GetFeature][google.cloud.aiplatform.v1.FeaturestoreService.GetFeature] and [FeatureRegistryService.GetFeature][google.cloud.aiplatform.v1.FeatureRegistryService.GetFeature].

Used as request type in: FeatureRegistryService.GetFeature, FeaturestoreService.GetFeature

message GoogleDriveSource

io.proto:114

The Google Drive location for the input content.

Used in: ImportRagFilesConfig, RagFile

message GoogleDriveSource.ResourceId

io.proto:116

The type and ID of the Google Drive resource.

Used in: GoogleDriveSource

enum GoogleDriveSource.ResourceId.ResourceType

io.proto:118

The type of the Google Drive resource.

Used in: ResourceId

message GoogleSearchRetrieval

tool.proto:280

Tool to retrieve public web data for grounding, powered by Google.

Used in: Tool

message GroundednessInput

evaluation_service.proto:473

Input for groundedness metric.

Used in: EvaluateInstancesRequest

message GroundednessInstance

evaluation_service.proto:482

Spec for groundedness instance.

Used in: GroundednessInput

message GroundednessResult

evaluation_service.proto:498

Spec for groundedness result.

Used in: EvaluateInstancesResponse

message GroundednessSpec

evaluation_service.proto:492

Spec for groundedness metric.

Used in: GroundednessInput

message GroundingChunk

content.proto:543

Grounding chunk.

Used in: GroundingMetadata

message GroundingChunk.RetrievedContext

content.proto:554

Chunk from context retrieved by the retrieval tools.

Used in: GroundingChunk

message GroundingChunk.Web

content.proto:545

Chunk from the web.

Used in: GroundingChunk

message GroundingMetadata

content.proto:600

Metadata returned to client when grounding is enabled.

Used in: Candidate

message GroundingSupport

content.proto:583

Grounding support.

Used in: GroundingMetadata

enum HarmCategory

content.proto:35

Harm categories that will block the content.

Used in: SafetyRating, SafetySetting

message HyperparameterTuningJob

hyperparameter_tuning_job.proto:39

Represents a HyperparameterTuningJob. A HyperparameterTuningJob has a Study specification and multiple CustomJobs with identical CustomJob specification.

Used as response type in: JobService.CreateHyperparameterTuningJob, JobService.GetHyperparameterTuningJob

Used as field type in: CreateHyperparameterTuningJobRequest, ListHyperparameterTuningJobsResponse

message IdMatcher

feature_selector.proto:30

Matcher for Features of an EntityType by Feature ID.

Used in: FeatureSelector

message ImportDataConfig

dataset.proto:131

Describes the location from which data is imported into a Dataset, together with the labels that will be applied to the DataItems and the Annotations.

Used in: ImportDataRequest

message ImportDataOperationMetadata

dataset_service.proto:441

Runtime operation information for [DatasetService.ImportData][google.cloud.aiplatform.v1.DatasetService.ImportData].

message ImportDataResponse

dataset_service.proto:437

Response message for [DatasetService.ImportData][google.cloud.aiplatform.v1.DatasetService.ImportData].

(message has no fields)

message ImportFeatureValuesOperationMetadata

featurestore_service.proto:1299

Details of operations that import Feature values.

message ImportFeatureValuesRequest.FeatureSpec

featurestore_service.proto:494

Defines the Feature value(s) to import.

Used in: ImportFeatureValuesRequest

message ImportFeatureValuesResponse

featurestore_service.proto:565

Response message for [FeaturestoreService.ImportFeatureValues][google.cloud.aiplatform.v1.FeaturestoreService.ImportFeatureValues].

message ImportRagFilesConfig

vertex_rag_data.proto:386

Config for importing RagFiles.

Used in: ImportRagFilesOperationMetadata, ImportRagFilesRequest

message ImportRagFilesOperationMetadata

vertex_rag_data_service.proto:407

Runtime operation information for [VertexRagDataService.ImportRagFiles][google.cloud.aiplatform.v1.VertexRagDataService.ImportRagFiles].

message ImportRagFilesResponse

vertex_rag_data_service.proto:297

Response message for [VertexRagDataService.ImportRagFiles][google.cloud.aiplatform.v1.VertexRagDataService.ImportRagFiles].

message Index

index.proto:36

A representation of a collection of database items organized in a way that allows approximate nearest neighbor (a.k.a. ANN) search algorithms.

Used as response type in: IndexService.GetIndex

Used as field type in: CreateIndexRequest, ListIndexesResponse, UpdateIndexRequest

enum Index.IndexUpdateMethod

index.proto:43

The update method of an Index.

Used in: Index

message IndexDatapoint

index.proto:137

A datapoint of Index.

Used in: FindNeighborsRequest.Query, FindNeighborsResponse.Neighbor, ReadIndexDatapointsResponse, UpsertDatapointsRequest

message IndexDatapoint.CrowdingTag

index.proto:217

Crowding tag is a constraint on a neighbor list produced by nearest neighbor search requiring that no more than some value k' of the k neighbors returned have the same value of crowding_attribute.

Used in: IndexDatapoint
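
The crowding constraint above can be sketched in plain Python. This is an illustrative model of the semantics only (the function and tuple shape are assumptions, not the service API): given candidates pre-sorted by distance, keep at most k' neighbors per crowding attribute.

```python
from collections import Counter

def apply_crowding(neighbors, k, k_prime):
    """neighbors: (id, distance, crowding_attribute) tuples, pre-sorted by distance.
    Returns up to k neighbors, at most k_prime per crowding attribute."""
    counts = Counter()
    result = []
    for neighbor in neighbors:
        attr = neighbor[2]
        if counts[attr] < k_prime:
            result.append(neighbor)
            counts[attr] += 1
        if len(result) == k:
            break
    return result

candidates = [("a", 0.1, "red"), ("b", 0.2, "red"), ("c", 0.3, "red"), ("d", 0.4, "blue")]
print(apply_crowding(candidates, k=3, k_prime=2))
# → [('a', 0.1, 'red'), ('b', 0.2, 'red'), ('d', 0.4, 'blue')]
```

With k'=2, the third "red" candidate is skipped in favor of the nearest "blue" one, which improves result diversity at a small cost in raw distance.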

message IndexDatapoint.NumericRestriction

index.proto:164

Allows restricts to be based on numeric comparisons rather than categorical tokens.

Used in: IndexDatapoint

enum IndexDatapoint.NumericRestriction.Operator

index.proto:170

Which comparison operator to use. Should be specified for queries only; specifying this for a datapoint is an error. Datapoints for which Operator is true relative to the query's Value field will be allowlisted.

Used in: NumericRestriction
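
The allowlisting rule above can be read as `datapoint_value <op> query_value`. A minimal sketch, where the enum value names in the mapping are assumptions for illustration:

```python
import operator

# Illustrative mapping from Operator enum names to comparisons; a datapoint
# is allowlisted when `datapoint_value <op> query_value` holds.
OPS = {
    "LESS": operator.lt,
    "LESS_EQUAL": operator.le,
    "EQUAL": operator.eq,
    "GREATER_EQUAL": operator.ge,
    "GREATER": operator.gt,
}

def allowlisted(datapoint_value, op_name, query_value):
    return OPS[op_name](datapoint_value, query_value)

# A query with value 12 and operator LESS allowlists datapoints whose value is below 12.
assert allowlisted(10, "LESS", 12)
assert not allowlisted(42, "LESS", 12)
```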

message IndexDatapoint.Restriction

index.proto:151

Restriction of a datapoint, which describes its attributes (tokens) from each of several attribute categories (namespaces).

Used in: IndexDatapoint

message IndexDatapoint.SparseEmbedding

index.proto:140

Feature embedding vector for sparse index. An array of numbers whose values are located in the specified dimensions.

Used in: IndexDatapoint
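
The sparse layout (values located at the specified dimensions) can be illustrated by expanding it into a dense vector. The helper below is hypothetical, not part of the API:

```python
def sparse_to_dense(values, dimensions, size):
    """Expand a sparse embedding (values at the given dimensions) into a
    dense vector of length `size`. All other dimensions are zero."""
    dense = [0.0] * size
    for v, d in zip(values, dimensions):
        dense[d] = v
    return dense

print(sparse_to_dense([0.5, 1.5], [1, 3], 5))  # → [0.0, 0.5, 0.0, 1.5, 0.0]
```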

message IndexEndpoint

index_endpoint.proto:36

The endpoint into which Indexes are deployed. An IndexEndpoint can have multiple DeployedIndexes.

Used as response type in: IndexEndpointService.GetIndexEndpoint, IndexEndpointService.UpdateIndexEndpoint

Used as field type in: CreateIndexEndpointRequest, ListIndexEndpointsResponse, UpdateIndexEndpointRequest

message IndexPrivateEndpoints

index_endpoint.proto:299

IndexPrivateEndpoints is used to provide paths for users to send requests via private endpoints (e.g. private service access, private service connect). To send a request via private service access, use match_grpc_address. To send a request via private service connect, use service_attachment.

Used in: DeployedIndex

message IndexStats

index.proto:254

Stats of the Index.

Used in: Index

message InputDataConfig

training_pipeline.proto:168

Specifies Vertex AI owned input data to be used for training, and possibly evaluating, the Model.

Used in: TrainingPipeline

message Int64Array

types.proto:40

A list of int64 values.

Used in: FeatureValue

message IntegratedGradientsAttribution

explanation.proto:294

An attribution method that computes the Aumann-Shapley value taking advantage of the model's fully differentiable structure. Refer to this paper for more details: https://arxiv.org/abs/1703.01365

Used in: ExplanationParameters

message JiraSource

io.proto:177

The Jira source for the ImportRagFilesRequest.

Used in: ImportRagFilesConfig, RagFile

message JiraSource.JiraQueries

io.proto:179

JiraQueries contains the Jira queries and corresponding authentication.

Used in: JiraSource

enum JobState

job_state.proto:28

Describes the state of a job.

Used in: BatchPredictionJob, CustomJob, DataLabelingJob, HyperparameterTuningJob, ModelDeploymentMonitoringJob, NasJob, NotebookExecutionJob, TuningJob

message LargeModelReference

model.proto:490

Contains information about the Large Model.

Used in: PublisherModel.CallToAction.Deploy

message LineageSubgraph

lineage_subgraph.proto:33

A subgraph of the overall lineage graph. Event edges connect Artifact and Execution nodes.

Used as response type in: MetadataService.QueryArtifactLineageSubgraph, MetadataService.QueryContextLineageSubgraph, MetadataService.QueryExecutionInputsAndOutputs

message ListFeaturesRequest

featurestore_service.proto:1036

Request message for [FeaturestoreService.ListFeatures][google.cloud.aiplatform.v1.FeaturestoreService.ListFeatures] and [FeatureRegistryService.ListFeatures][google.cloud.aiplatform.v1.FeatureRegistryService.ListFeatures].

Used as request type in: FeatureRegistryService.ListFeatures, FeaturestoreService.ListFeatures

message ListFeaturesResponse

featurestore_service.proto:1117

Response message for [FeaturestoreService.ListFeatures][google.cloud.aiplatform.v1.FeaturestoreService.ListFeatures] and [FeatureRegistryService.ListFeatures][google.cloud.aiplatform.v1.FeatureRegistryService.ListFeatures].

Used as response type in: FeatureRegistryService.ListFeatures, FeaturestoreService.ListFeatures

message LogprobsResult

content.proto:498

Logprobs Result

Used in: Candidate

message LogprobsResult.Candidate

content.proto:500

Candidate for the logprobs token and score.

Used in: LogprobsResult, TopCandidates

message LogprobsResult.TopCandidates

content.proto:512

Candidates with top log probabilities at each decoding step.

Used in: LogprobsResult

message MachineSpec

machine_resources.proto:32

Specification of a single machine.

Used in: BatchDedicatedResources, DedicatedResources, NotebookExecutionJob.CustomEnvironmentSpec, NotebookRuntime, NotebookRuntimeTemplate, ResourcePool, WorkerPoolSpec

message ManualBatchTuningParameters

manual_batch_tuning_parameters.proto:30

Manual batch tuning parameters.

Used in: BatchPredictionJob

message Measurement

study.proto:668

A message representing a Measurement of a Trial. A Measurement contains the metrics obtained by executing a Trial using suggested hyperparameter values.

Used in: AddTrialMeasurementRequest, CompleteTrialRequest, NasTrial, Trial

message Measurement.Metric

study.proto:670

A message representing a metric in the measurement.

Used in: Measurement

message MetadataSchema

metadata_schema.proto:32

Instance of a general MetadataSchema.

Used as response type in: MetadataService.CreateMetadataSchema, MetadataService.GetMetadataSchema

Used as field type in: CreateMetadataSchemaRequest, ListMetadataSchemasResponse

enum MetadataSchema.MetadataSchemaType

metadata_schema.proto:39

Describes the type of the MetadataSchema.

Used in: MetadataSchema

message MetadataStore

metadata_store.proto:34

Instance of a metadata store. Contains a set of metadata that can be queried.

Used as response type in: MetadataService.GetMetadataStore

Used as field type in: CreateMetadataStoreRequest, ListMetadataStoresResponse

message MetadataStore.DataplexConfig

metadata_store.proto:47

Represents Dataplex integration settings.

Used in: MetadataStore

message MetadataStore.MetadataStoreState

metadata_store.proto:41

Represents state information for a MetadataStore.

Used in: MetadataStore

message MetricxInput

evaluation_service.proto:1262

Input for MetricX metric.

Used in: EvaluateInstancesRequest

message MetricxInstance

evaluation_service.proto:1301

Spec for MetricX instance - The fields used for evaluation are dependent on the MetricX version.

Used in: MetricxInput

message MetricxResult

evaluation_service.proto:1314

Spec for MetricX result - calculates the MetricX score for the given instance using the version specified in the spec.

Used in: EvaluateInstancesResponse

message MetricxSpec

evaluation_service.proto:1271

Spec for MetricX metric.

Used in: MetricxInput

enum MetricxSpec.MetricxVersion

evaluation_service.proto:1273

MetricX Version options.

Used in: MetricxSpec

message MigratableResource

migratable_resource.proto:53

Represents one resource that exists in automl.googleapis.com, datalabeling.googleapis.com or ml.googleapis.com.

Used in: MigrateResourceResponse, SearchMigratableResourcesResponse

message MigratableResource.AutomlDataset

migratable_resource.proto:87

Represents one Dataset in automl.googleapis.com.

Used in: MigratableResource

message MigratableResource.AutomlModel

migratable_resource.proto:74

Represents one Model in automl.googleapis.com.

Used in: MigratableResource

message MigratableResource.DataLabelingDataset

migratable_resource.proto:100

Represents one Dataset in datalabeling.googleapis.com.

Used in: MigratableResource

message MigratableResource.DataLabelingDataset.DataLabelingAnnotatedDataset

migratable_resource.proto:102

Represents one AnnotatedDataset in datalabeling.googleapis.com.

Used in: DataLabelingDataset

message MigratableResource.MlEngineModelVersion

migratable_resource.proto:55

Represents one model Version in ml.googleapis.com.

Used in: MigratableResource

message MigrateResourceRequest

migration_service.proto:141

Config of migrating one resource from automl.googleapis.com, datalabeling.googleapis.com and ml.googleapis.com to Vertex AI.

Used in: BatchMigrateResourcesOperationMetadata.PartialResult, BatchMigrateResourcesRequest

message MigrateResourceRequest.MigrateAutomlDatasetConfig

migration_service.proto:185

Config for migrating Dataset in automl.googleapis.com to Vertex AI's Dataset.

Used in: MigrateResourceRequest

message MigrateResourceRequest.MigrateAutomlModelConfig

migration_service.proto:169

Config for migrating Model in automl.googleapis.com to Vertex AI's Model.

Used in: MigrateResourceRequest

message MigrateResourceRequest.MigrateDataLabelingDatasetConfig

migration_service.proto:203

Config for migrating Dataset in datalabeling.googleapis.com to Vertex AI's Dataset.

Used in: MigrateResourceRequest

message MigrateResourceRequest.MigrateDataLabelingDatasetConfig.MigrateDataLabelingAnnotatedDatasetConfig

migration_service.proto:206

Config for migrating AnnotatedDataset in datalabeling.googleapis.com to Vertex AI's SavedQuery.

Used in: MigrateDataLabelingDatasetConfig

message MigrateResourceRequest.MigrateMlEngineModelVersionConfig

migration_service.proto:143

Config for migrating version in ml.googleapis.com to Vertex AI's Model.

Used in: MigrateResourceRequest

message MigrateResourceResponse

migration_service.proto:267

Describes a successfully migrated resource.

Used in: BatchMigrateResourcesResponse

enum Modality

content.proto:57

Content Part modality

Used in: ModalityTokenCount

message ModalityTokenCount

content.proto:646

Represents token counting info for a single modality.

Used in: CountTokensResponse, GenerateContentResponse.UsageMetadata

message Model

model.proto:38

A trained machine learning Model.

Used as response type in: ModelService.GetModel, ModelService.MergeVersionAliases, ModelService.UpdateModel

Used as field type in: ListModelVersionsResponse, ListModelsResponse, TrainingPipeline, UpdateModelRequest, UploadModelRequest

message Model.BaseModelSource

model.proto:138

User input field to specify the base model source. Currently it only supports specifying Model Garden models and Genie models.

Used in: Model

message Model.DataStats

model.proto:95

Stats of data used to train or evaluate the Model.

Used in: ExportDataResponse, Model

enum Model.DeploymentResourcesType

model.proto:149

Identifies a type of Model's prediction resources.

Used in: Model

message Model.ExportFormat

model.proto:46

Represents export format supported by the Model. All formats export to Google Cloud Storage.

Used in: Model

enum Model.ExportFormat.ExportableContent

model.proto:48

The Model content that can be exported.

Used in: ExportFormat

message Model.OriginalModelInfo

model.proto:124

Contains information about the original Model if this Model is a copy.

Used in: Model

message ModelContainerSpec

model.proto:570

Specification of a container for serving predictions. Some fields in this message correspond to fields in the [Kubernetes Container v1 core specification](https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.23/#container-v1-core).

Used in: Model, PublisherModel.CallToAction.Deploy, UnmanagedContainerModel

message ModelDeploymentMonitoringBigQueryTable

model_deployment_monitoring_job.proto:237

ModelDeploymentMonitoringBigQueryTable specifies the BigQuery table name as well as some information of the logs stored in this table.

Used in: ModelDeploymentMonitoringJob

enum ModelDeploymentMonitoringBigQueryTable.LogSource

model_deployment_monitoring_job.proto:239

Indicates where the log comes from.

Used in: ModelDeploymentMonitoringBigQueryTable

enum ModelDeploymentMonitoringBigQueryTable.LogType

model_deployment_monitoring_job.proto:251

Indicates what type of traffic the log belongs to.

Used in: ModelDeploymentMonitoringBigQueryTable

message ModelDeploymentMonitoringJob

model_deployment_monitoring_job.proto:64

Represents a job that runs periodically to monitor the deployed models in an endpoint. It will analyze the logged training & prediction data to detect any abnormal behaviors.

Used as response type in: JobService.CreateModelDeploymentMonitoringJob, JobService.GetModelDeploymentMonitoringJob

Used as field type in: CreateModelDeploymentMonitoringJobRequest, ListModelDeploymentMonitoringJobsResponse, UpdateModelDeploymentMonitoringJobRequest

message ModelDeploymentMonitoringJob.LatestMonitoringPipelineMetadata

model_deployment_monitoring_job.proto:71

All metadata of most recent monitoring pipelines.

Used in: ModelDeploymentMonitoringJob

enum ModelDeploymentMonitoringJob.MonitoringScheduleState

model_deployment_monitoring_job.proto:81

The state of the monitoring pipeline.

Used in: ModelDeploymentMonitoringJob

message ModelDeploymentMonitoringObjectiveConfig

model_deployment_monitoring_job.proto:281

ModelDeploymentMonitoringObjectiveConfig pairs a deployed_model_id with its ModelMonitoringObjectiveConfig.

Used in: ModelDeploymentMonitoringJob

enum ModelDeploymentMonitoringObjectiveType

model_deployment_monitoring_job.proto:40

The Model Monitoring Objective types.

Used in: ModelMonitoringStatsAnomalies, SearchModelDeploymentMonitoringStatsAnomaliesRequest.StatsAnomaliesObjective

message ModelDeploymentMonitoringScheduleConfig

model_deployment_monitoring_job.proto:290

The config for scheduling the monitoring job.

Used in: ModelDeploymentMonitoringJob

message ModelEvaluation

model_evaluation.proto:35

A collection of metrics calculated by comparing the Model's predictions on all of the test data against annotations from the test data.

Used as response type in: ModelService.GetModelEvaluation, ModelService.ImportModelEvaluation

Used as field type in: ImportModelEvaluationRequest, ListModelEvaluationsResponse

message ModelEvaluation.ModelEvaluationExplanationSpec

model_evaluation.proto:41

Used in: ModelEvaluation

message ModelEvaluationSlice

model_evaluation_slice.proto:36

A collection of metrics calculated by comparing the Model's predictions on a slice of the test data against ground truth annotations.

Used as response type in: ModelService.GetModelEvaluationSlice

Used as field type in: BatchImportModelEvaluationSlicesRequest, ListModelEvaluationSlicesResponse

message ModelEvaluationSlice.Slice

model_evaluation_slice.proto:43

Definition of a slice.

Used in: ModelEvaluationSlice

message ModelEvaluationSlice.Slice.SliceSpec

model_evaluation_slice.proto:45

Specification for how the data should be sliced.

Used in: Slice

message ModelEvaluationSlice.Slice.SliceSpec.Range

model_evaluation_slice.proto:121

A range of values for slice(s). `low` is inclusive, `high` is exclusive.

Used in: SliceConfig

message ModelEvaluationSlice.Slice.SliceSpec.SliceConfig

model_evaluation_slice.proto:101

Specification message containing the config for this SliceSpec. When `kind` is selected as `value` and/or `range`, only a single slice will be computed. When `all_values` is present, a separate slice will be computed for each possible label/value for the corresponding key in `config`.

Examples, with a feature `zip_code` with values 12345, 23334, 88888 and a feature `country` with values "US", "Canada", "Mexico" in the dataset:

Example 1: `{ "zip_code": { "value": { "float_value": 12345.0 } } }` — a single slice for any data with zip_code 12345 in the dataset.

Example 2: `{ "zip_code": { "range": { "low": 12345, "high": 20000 } } }` — a single slice containing data where the zip_code is between 12345 and 20000. For this example, data with a zip_code of 12345 will be in this slice.

Example 3: `{ "zip_code": { "range": { "low": 10000, "high": 20000 } }, "country": { "value": { "string_value": "US" } } }` — a single slice containing data where the zip_code is between 10000 and 20000 and the country is "US". For this example, data with a zip_code of 12345 and country "US" will be in this slice.

Example 4: `{ "country": { "all_values": { "value": true } } }` — three slices are computed, one for each unique country in the dataset.

Example 5: `{ "country": { "all_values": { "value": true } }, "zip_code": { "value": { "float_value": 12345.0 } } }` — three slices are computed, one for each unique country in the dataset where the zip_code is also 12345. For this example, data with zip_code 12345 and country "US" will be in one slice, zip_code 12345 and country "Canada" in another, and zip_code 12345 and country "Mexico" in a third, totaling 3 slices.

Used in: SliceSpec
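
The matching semantics for the `value` and `range` cases (with `low` inclusive and `high` exclusive, per Range) can be sketched as follows. The dict shape and helper are illustrative assumptions, not the service's evaluation logic:

```python
def row_matches(config, row):
    """True when every keyed feature in `config` satisfies its `value` or
    `range` spec. (`all_values`, which fans out into one slice per unique
    value, is omitted from this sketch.)"""
    for key, spec in config.items():
        v = row[key]
        if "value" in spec and v != spec["value"]:
            return False
        if "range" in spec:
            low, high = spec["range"]
            if not (low <= v < high):  # `low` inclusive, `high` exclusive
                return False
    return True

# Example 3 above: zip_code in [10000, 20000) and country == "US".
config = {"zip_code": {"range": (10000, 20000)}, "country": {"value": "US"}}
print(row_matches(config, {"zip_code": 12345, "country": "US"}))  # → True
print(row_matches(config, {"zip_code": 20000, "country": "US"}))  # → False
```

Note that 20000 is excluded because `high` is exclusive, matching the Range semantics documented above.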

message ModelEvaluationSlice.Slice.SliceSpec.Value

model_evaluation_slice.proto:130

Single value that supports strings and floats.

Used in: SliceConfig

message ModelExplanation

explanation.proto:75

Aggregated explanation metrics for a Model over a set of instances.

Used in: ModelEvaluation, ModelEvaluationSlice

message ModelGardenSource

model.proto:499

Contains information about the source of the models generated from Model Garden.

Used in: Model.BaseModelSource

message ModelMonitoringAlertConfig

model_monitoring.proto:174

The alert config for model monitoring.

Used in: ModelDeploymentMonitoringJob

message ModelMonitoringAlertConfig.EmailAlertConfig

model_monitoring.proto:176

The config for email alerts.

Used in: ModelMonitoringAlertConfig

message ModelMonitoringObjectiveConfig

model_monitoring.proto:36

The objective configuration for model monitoring, including the information needed to detect anomalies for one particular model.

Used in: ModelDeploymentMonitoringObjectiveConfig

message ModelMonitoringObjectiveConfig.ExplanationConfig

model_monitoring.proto:117

The config for integrating with Vertex Explainable AI. Only applicable if the Model has explanation_spec populated.

Used in: ModelMonitoringObjectiveConfig

message ModelMonitoringObjectiveConfig.ExplanationConfig.ExplanationBaseline

model_monitoring.proto:122

Output from [BatchPredictionJob][google.cloud.aiplatform.v1.BatchPredictionJob] for Model Monitoring baseline dataset, which can be used to generate baseline attribution scores.

Used in: ExplanationConfig

enum ModelMonitoringObjectiveConfig.ExplanationConfig.ExplanationBaseline.PredictionFormat

model_monitoring.proto:124

The storage format of the predictions generated by the BatchPrediction job.

Used in: ExplanationBaseline

message ModelMonitoringObjectiveConfig.PredictionDriftDetectionConfig

model_monitoring.proto:98

The config for Prediction data drift detection.

Used in: ModelMonitoringObjectiveConfig

message ModelMonitoringObjectiveConfig.TrainingDataset

model_monitoring.proto:38

Training Dataset information.

Used in: ModelMonitoringObjectiveConfig

message ModelMonitoringObjectiveConfig.TrainingPredictionSkewDetectionConfig

model_monitoring.proto:79

The config for Training & Prediction data skew detection. It specifies the training dataset sources and the skew detection parameters.

Used in: ModelMonitoringObjectiveConfig

message ModelMonitoringStatsAnomalies

model_deployment_monitoring_job.proto:309

Statistics and anomalies generated by Model Monitoring.

Used in: SearchModelDeploymentMonitoringStatsAnomaliesResponse

message ModelMonitoringStatsAnomalies.FeatureHistoricStatsAnomalies

model_deployment_monitoring_job.proto:311

Historical Stats (and Anomalies) for a specific Feature.

Used in: ModelMonitoringStatsAnomalies

message ModelSourceInfo

model.proto:823

Detailed description of the model's source information.

Used in: Model

enum ModelSourceInfo.ModelSourceType

model.proto:829

Source of the model. Different from `objective` field, this `ModelSourceType` enum indicates the source from which the model was accessed or obtained, whereas the `objective` indicates the overall aim or function of this model.

Used in: ModelSourceInfo

message ModelVersionCheckpoint

model_service.proto:514

A proto representation of a Spanner-stored ModelVersionCheckpoint. The meaning of the fields is equivalent to their in-Spanner counterparts.

Used in: ListModelVersionCheckpointsResponse

message MutateDeployedIndexOperationMetadata

index_endpoint_service.proto:358

Runtime operation information for [IndexEndpointService.MutateDeployedIndex][google.cloud.aiplatform.v1.IndexEndpointService.MutateDeployedIndex].

message MutateDeployedIndexResponse

index_endpoint_service.proto:351

Response message for [IndexEndpointService.MutateDeployedIndex][google.cloud.aiplatform.v1.IndexEndpointService.MutateDeployedIndex].

message MutateDeployedModelOperationMetadata

endpoint_service.proto:456

Runtime operation information for [EndpointService.MutateDeployedModel][google.cloud.aiplatform.v1.EndpointService.MutateDeployedModel].

message MutateDeployedModelResponse

endpoint_service.proto:449

Response message for [EndpointService.MutateDeployedModel][google.cloud.aiplatform.v1.EndpointService.MutateDeployedModel].

message NasJob

nas_job.proto:37

Represents a Neural Architecture Search (NAS) job.

Used as response type in: JobService.CreateNasJob, JobService.GetNasJob

Used as field type in: CreateNasJobRequest, ListNasJobsResponse

message NasJobOutput

nas_job.proto:248

Represents a uCAIP NasJob output.

Used in: NasJob

message NasJobOutput.MultiTrialJobOutput

nas_job.proto:250

The output of a multi-trial Neural Architecture Search (NAS) job.

Used in: NasJobOutput

message NasJobSpec

nas_job.proto:134

Represents the spec of a NasJob.

Used in: NasJob

message NasJobSpec.MultiTrialAlgorithmSpec

nas_job.proto:136

The spec of multi-trial Neural Architecture Search (NAS).

Used in: NasJobSpec

message NasJobSpec.MultiTrialAlgorithmSpec.MetricSpec

nas_job.proto:138

Represents a metric to optimize.

Used in: MultiTrialAlgorithmSpec

enum NasJobSpec.MultiTrialAlgorithmSpec.MetricSpec.GoalType

nas_job.proto:140

The available types of optimization goals.

Used in: MetricSpec

enum NasJobSpec.MultiTrialAlgorithmSpec.MultiTrialAlgorithm

nas_job.proto:200

The available types of multi-trial algorithms.

Used in: MultiTrialAlgorithmSpec

message NasJobSpec.MultiTrialAlgorithmSpec.SearchTrialSpec

nas_job.proto:159

Represents the spec for search trials.

Used in: MultiTrialAlgorithmSpec

message NasJobSpec.MultiTrialAlgorithmSpec.TrainTrialSpec

nas_job.proto:182

Represents the spec for train trials.

Used in: MultiTrialAlgorithmSpec

message NasTrial

nas_job.proto:270

Represents a uCAIP NasJob trial.

Used in: NasJobOutput.MultiTrialJobOutput, NasTrialDetail

enum NasTrial.State

nas_job.proto:272

Describes a NasTrial state.

Used in: NasTrial

message NasTrialDetail

nas_job.proto:110

Represents the details of a NasTrial along with its parameters. If there is a corresponding train NasTrial, the train NasTrial is also returned.

Used as response type in: JobService.GetNasTrialDetail

Used as field type in: ListNasTrialDetailsResponse

message NearestNeighborQuery

feature_online_store_service.proto:152

A query to find a number of similar entities.

Used in: SearchNearestEntitiesRequest

message NearestNeighborQuery.Embedding

feature_online_store_service.proto:154

The embedding vector.

Used in: NearestNeighborQuery

message NearestNeighborQuery.NumericFilter

feature_online_store_service.proto:186

Numeric filter is used to search a subset of the entities by using boolean rules on numeric columns. For example:

Database Point 0: `{name: "a" value_int: 42} {name: "b" value_float: 1.0}`
Database Point 1: `{name: "a" value_int: 10} {name: "b" value_float: 2.0}`
Database Point 2: `{name: "a" value_int: -1} {name: "b" value_float: 3.0}`

Query: `{name: "a" value_int: 12 operator: LESS}` — matches Points 1 and 2; `{name: "b" value_float: 2.0 operator: EQUAL}` — matches Point 1.

Used in: NearestNeighborQuery

enum NearestNeighborQuery.NumericFilter.Operator

feature_online_store_service.proto:189

Datapoints for which Operator is true relative to the query's Value field will be allowlisted.

Used in: NumericFilter
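
The documented database-point example can be checked with a small sketch. The operator names and the dict representation of points are illustrative, not the service's wire format:

```python
import operator

OPS = {"LESS": operator.lt, "EQUAL": operator.eq}  # subset; names illustrative

# The three database points from the NumericFilter example above.
points = [
    {"a": 42, "b": 1.0},  # Point 0
    {"a": 10, "b": 2.0},  # Point 1
    {"a": -1, "b": 3.0},  # Point 2
]

def matching_points(name, value, op_name):
    """Indices of points whose column `name` satisfies `point_value <op> value`."""
    return [i for i, p in enumerate(points) if OPS[op_name](p[name], value)]

print(matching_points("a", 12, "LESS"))    # → [1, 2]
print(matching_points("b", 2.0, "EQUAL"))  # → [1]
```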

message NearestNeighborQuery.Parameters

feature_online_store_service.proto:235

Parameters that can be overridden in each query to tune query latency and recall.

Used in: NearestNeighborQuery

message NearestNeighborQuery.StringFilter

feature_online_store_service.proto:167

String filter is used to search a subset of the entities by using boolean rules on string columns. For example: if a query specifies a string filter with `name = color, allow_tokens = {red, blue}, deny_tokens = {purple}`, then that query will match entities that are red or blue, but entities that are also purple will be excluded even if they are red/blue. Only string filter is supported for now; numeric filter will be supported in the near future.

Used in: NearestNeighborQuery
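
The allow/deny semantics described above — an entity matches on any allow token, but a single deny token excludes it — can be sketched as:

```python
def string_filter_matches(tokens, allow_tokens, deny_tokens):
    """True when the entity carries at least one allow token and no deny
    token. Illustrative helper, not part of the service API."""
    tokens = set(tokens)
    if tokens & set(deny_tokens):
        return False
    return bool(tokens & set(allow_tokens))

# allow_tokens = {red, blue}, deny_tokens = {purple}:
print(string_filter_matches({"red"}, {"red", "blue"}, {"purple"}))            # → True
print(string_filter_matches({"red", "purple"}, {"red", "blue"}, {"purple"}))  # → False
```

The second call shows the deny rule taking precedence: the entity is red, but it is also purple, so it is excluded.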

message NearestNeighborSearchOperationMetadata

index_service.proto:293

Runtime operation metadata with regard to Matching Engine Index.

Used in: CreateIndexOperationMetadata, UpdateIndexOperationMetadata

message NearestNeighborSearchOperationMetadata.ContentValidationStats

index_service.proto:371

Used in: NearestNeighborSearchOperationMetadata

message NearestNeighborSearchOperationMetadata.RecordError

index_service.proto:294

Used in: ContentValidationStats

enum NearestNeighborSearchOperationMetadata.RecordError.RecordErrorType

index_service.proto:295

Used in: RecordError

message NearestNeighbors

feature_online_store_service.proto:306

Nearest neighbors for one query.

Used in: SearchNearestEntitiesResponse

message NearestNeighbors.Neighbor

feature_online_store_service.proto:308

A neighbor of the query vector.

Used in: NearestNeighbors

message Neighbor

explanation.proto:212

Neighbors for example-based explanations.

Used in: Explanation

message NetworkSpec

network_spec.proto:34

Network spec.

Used in: NotebookExecutionJob.CustomEnvironmentSpec, NotebookRuntime, NotebookRuntimeTemplate

message NfsMount

machine_resources.proto:223

Represents a mount configuration for Network File System (NFS).

Used in: WorkerPoolSpec

message NotebookEucConfig

notebook_euc_config.proto:30

The euc configuration of NotebookRuntimeTemplate.

Used in: NotebookRuntime, NotebookRuntimeTemplate

message NotebookExecutionJob

notebook_execution_job.proto:38

NotebookExecutionJob represents an instance of a notebook execution.

Used as response type in: NotebookService.GetNotebookExecutionJob

Used as field type in: CreateNotebookExecutionJobRequest, ListNotebookExecutionJobsResponse

message NotebookExecutionJob.CustomEnvironmentSpec

notebook_execution_job.proto:76

Compute configuration to use for an execution job.

Used in: NotebookExecutionJob

message NotebookExecutionJob.DataformRepositorySource

notebook_execution_job.proto:47

The Dataform Repository containing the input notebook.

Used in: NotebookExecutionJob

message NotebookExecutionJob.DirectNotebookSource

notebook_execution_job.proto:70

The content of the input notebook in ipynb format.

Used in: NotebookExecutionJob

message NotebookExecutionJob.GcsNotebookSource

notebook_execution_job.proto:58

The Cloud Storage URI for the input notebook.

Used in: NotebookExecutionJob

message NotebookExecutionJob.WorkbenchRuntime

notebook_execution_job.proto:88

Configuration for a Workbench Instances-based environment.

Used in: NotebookExecutionJob

(message has no fields)

enum NotebookExecutionJobView

notebook_service.proto:237

Views for Get/List NotebookExecutionJob

Used in: GetNotebookExecutionJobRequest, ListNotebookExecutionJobsRequest

message NotebookIdleShutdownConfig

notebook_idle_shutdown_config.proto:32

The idle shutdown configuration of NotebookRuntimeTemplate, which contains idle_timeout as a required field.

Used in: NotebookRuntime, NotebookRuntimeTemplate
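
A hedged sketch of a NotebookIdleShutdownConfig payload as a JSON-style dict: idle_timeout is the required field noted above; idle_shutdown_disabled is assumed from the message family and not verified here.

```python
# Illustrative payload shape, not the generated proto class.
idle_shutdown_config = {
    "idle_timeout": "1800s",          # required: shut down after 30 idle minutes
    "idle_shutdown_disabled": False,  # assumed field: keep idle shutdown active
}
```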

message NotebookRuntime

notebook_runtime.proto:169

A runtime is a virtual machine allocated to a particular user for a particular Notebook file, on a temporary basis with a lifetime limited to 24 hours.

Used as response type in: NotebookService.GetNotebookRuntime

Used as field type in: AssignNotebookRuntimeRequest, ListNotebookRuntimesResponse

enum NotebookRuntime.HealthState

notebook_runtime.proto:176

The substate of the NotebookRuntime to display health information.

Used in: NotebookRuntime

enum NotebookRuntime.RuntimeState

notebook_runtime.proto:189

The substate of the NotebookRuntime, used to display the state of the runtime. The NotebookRuntime resource is in the ACTIVE state for these substates.

Used in: NotebookRuntime

message NotebookRuntimeTemplate

notebook_runtime.proto:54

A template that specifies runtime configurations such as machine type, runtime version, network configurations, etc. Multiple runtimes can be created from a runtime template.

Used as response type in: NotebookService.GetNotebookRuntimeTemplate, NotebookService.UpdateNotebookRuntimeTemplate

Used as field type in: CreateNotebookRuntimeTemplateRequest, ListNotebookRuntimeTemplatesResponse, UpdateNotebookRuntimeTemplateRequest

message NotebookRuntimeTemplateRef

notebook_runtime_template_ref.proto:31

Points to a NotebookRuntimeTemplate.

Used in: NotebookRuntime

enum NotebookRuntimeType

notebook_runtime.proto:39

Represents a notebook runtime type.

Used in: NotebookRuntime, NotebookRuntimeTemplate

message NotebookSoftwareConfig

notebook_software_config.proto:61

Notebook Software Config.

Used in: NotebookRuntime, NotebookRuntimeTemplate

message PSCAutomationConfig

service_networking.proto:36

PSC config that is used to automatically create a forwarding rule via ServiceConnectionMap.

Used in: DeployedIndex

enum PairwiseChoice

evaluation_service.proto:49

Pairwise prediction autorater preference.

Used in: PairwiseMetricResult, PairwiseQuestionAnsweringQualityResult, PairwiseSummarizationQualityResult

message PairwiseMetricInput

evaluation_service.proto:1025

Input for pairwise metric.

Used in: EvaluateInstancesRequest

message PairwiseMetricInstance

evaluation_service.proto:1035

Pairwise metric instance. Usually one instance corresponds to one row in an evaluation dataset.

Used in: PairwiseMetricInput

message PairwiseMetricResult

evaluation_service.proto:1053

Spec for pairwise metric result.

Used in: EvaluateInstancesResponse

message PairwiseMetricSpec

evaluation_service.proto:1046

Spec for pairwise metric.

Used in: PairwiseMetricInput

message PairwiseQuestionAnsweringQualityInput

evaluation_service.proto:791

Input for pairwise question answering quality metric.

Used in: EvaluateInstancesRequest

message PairwiseQuestionAnsweringQualityInstance

evaluation_service.proto:802

Spec for pairwise question answering quality instance.

Used in: PairwiseQuestionAnsweringQualityInput

message PairwiseQuestionAnsweringQualityResult

evaluation_service.proto:831

Spec for pairwise question answering quality result.

Used in: EvaluateInstancesResponse

message PairwiseQuestionAnsweringQualitySpec

evaluation_service.proto:821

Spec for pairwise question answering quality score metric.

Used in: PairwiseQuestionAnsweringQualityInput

message PairwiseSummarizationQualityInput

evaluation_service.proto:594

Input for pairwise summarization quality metric.

Used in: EvaluateInstancesRequest

message PairwiseSummarizationQualityInstance

evaluation_service.proto:605

Spec for pairwise summarization quality instance.

Used in: PairwiseSummarizationQualityInput

message PairwiseSummarizationQualityResult

evaluation_service.proto:634

Spec for pairwise summarization quality result.

Used in: EvaluateInstancesResponse

message PairwiseSummarizationQualitySpec

evaluation_service.proto:624

Spec for pairwise summarization quality score metric.

Used in: PairwiseSummarizationQualityInput

message Part

content.proto:101

A datatype containing media that is part of a multi-part `Content` message. A `Part` consists of data which has an associated datatype. A `Part` can only contain one of the accepted types in `Part.data`. A `Part` must have a fixed IANA MIME type identifying the type and subtype of the media if `inline_data` or `file_data` field is filled with raw bytes.

Used in: Content
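
The one-of and MIME-type constraints described above can be illustrated with a small check (plain Python; `validate_part` is a hypothetical helper, not the real API validation, and covers only a few of the accepted data fields):

```python
def validate_part(part):
    """Illustrative Part constraints: exactly one data field may be
    set, and raw-bytes fields (inline_data / file_data) must carry
    a MIME type."""
    data_fields = [k for k in ("text", "inline_data", "file_data") if k in part]
    if len(data_fields) != 1:
        return False
    field = data_fields[0]
    if field in ("inline_data", "file_data"):
        return "mime_type" in part[field]
    return True

validate_part({"text": "hello"})                      # True
validate_part({"inline_data": {"data": b"\x89PNG"}})  # False: MIME type missing
```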

message PersistentDiskSpec

machine_resources.proto:210

Represents the spec of [persistent disk](https://cloud.google.com/compute/docs/disks/persistent-disks) options.

Used in: NotebookExecutionJob.CustomEnvironmentSpec, NotebookRuntime, NotebookRuntimeTemplate

message PersistentResource

persistent_resource.proto:38

Represents long-lasting resources that are dedicated to users to run custom workloads. A PersistentResource can have multiple node pools, and each node pool can have its own machine spec.

Used as response type in: PersistentResourceService.GetPersistentResource

Used as field type in: CreatePersistentResourceRequest, ListPersistentResourcesResponse, UpdatePersistentResourceRequest

enum PersistentResource.State

persistent_resource.proto:45

Describes the PersistentResource state.

Used in: PersistentResource

enum PipelineFailurePolicy

pipeline_failure_policy.proto:33

Represents the failure policy of a pipeline. Currently, the default is that the pipeline continues to run until no more tasks can be executed, also known as PIPELINE_FAILURE_POLICY_FAIL_SLOW. However, if a pipeline is set to PIPELINE_FAILURE_POLICY_FAIL_FAST, it stops scheduling any new tasks once a task has failed; tasks that are already scheduled continue to completion.

Used in: PipelineJob.RuntimeConfig
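
The scheduling behavior of the two policies described above can be sketched as follows (plain Python, not the pipeline scheduler itself):

```python
FAIL_SLOW = "PIPELINE_FAILURE_POLICY_FAIL_SLOW"  # default
FAIL_FAST = "PIPELINE_FAILURE_POLICY_FAIL_FAST"

def may_schedule_new_task(policy, any_task_failed):
    """FAIL_FAST stops scheduling new tasks once any task has failed;
    FAIL_SLOW keeps scheduling until no more tasks can run. Already
    scheduled tasks run to completion under both policies."""
    return not (policy == FAIL_FAST and any_task_failed)

may_schedule_new_task(FAIL_SLOW, True)  # True
may_schedule_new_task(FAIL_FAST, True)  # False
```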

message PipelineJob

pipeline_job.proto:45

An instance of a machine learning PipelineJob.

Used as response type in: PipelineService.CreatePipelineJob, PipelineService.GetPipelineJob

Used as field type in: BatchCancelPipelineJobsResponse, BatchDeletePipelineJobsResponse, CreatePipelineJobRequest, ListPipelineJobsResponse

message PipelineJob.RuntimeConfig

pipeline_job.proto:52

The runtime config of a PipelineJob.

Used in: PipelineJob

message PipelineJob.RuntimeConfig.InputArtifact

pipeline_job.proto:54

The type of an input artifact.

Used in: RuntimeConfig

message PipelineJobDetail

pipeline_job.proto:237

The runtime detail of PipelineJob.

Used in: PipelineJob

enum PipelineState

pipeline_state.proto:28

Describes the state of a pipeline.

Used in: PipelineJob, TrainingPipeline

message PipelineTaskDetail

pipeline_job.proto:250

The runtime detail of a task execution.

Used in: PipelineJobDetail

message PipelineTaskDetail.ArtifactList

pipeline_job.proto:269

A list of artifact metadata.

Used in: PipelineTaskDetail

message PipelineTaskDetail.PipelineTaskStatus

pipeline_job.proto:252

A single record of the task status.

Used in: PipelineTaskDetail

enum PipelineTaskDetail.State

pipeline_job.proto:275

Specifies the state of a TaskExecution.

Used in: PipelineTaskDetail, PipelineTaskStatus

message PipelineTaskExecutorDetail

pipeline_job.proto:362

The runtime detail of a pipeline executor.

Used in: PipelineTaskDetail

message PipelineTaskExecutorDetail.ContainerDetail

pipeline_job.proto:365

The detail of a container execution. It contains the job names of the lifecycle of a container execution.

Used in: PipelineTaskExecutorDetail

message PipelineTaskExecutorDetail.CustomJobDetail

pipeline_job.proto:405

The detailed info for a custom job executor.

Used in: PipelineTaskExecutorDetail

message PipelineTemplateMetadata

pipeline_job.proto:225

Pipeline template metadata if [PipelineJob.template_uri][google.cloud.aiplatform.v1.PipelineJob.template_uri] is from a supported template registry. Currently, the only supported registry is Artifact Registry.

Used in: PipelineJob

message PointwiseMetricInput

evaluation_service.proto:988

Input for pointwise metric.

Used in: EvaluateInstancesRequest

message PointwiseMetricInstance

evaluation_service.proto:998

Pointwise metric instance. Usually one instance corresponds to one row in an evaluation dataset.

Used in: PointwiseMetricInput

message PointwiseMetricResult

evaluation_service.proto:1016

Spec for pointwise metric result.

Used in: EvaluateInstancesResponse

message PointwiseMetricSpec

evaluation_service.proto:1009

Spec for pointwise metric.

Used in: PointwiseMetricInput

message Port

model.proto:816

Represents a network port in a container.

Used in: ModelContainerSpec

message PostStartupScriptConfig

notebook_software_config.proto:31

Post startup script config.

Used in: NotebookSoftwareConfig

enum PostStartupScriptConfig.PostStartupScriptBehavior

notebook_software_config.proto:33

Represents a notebook runtime post startup script behavior.

Used in: PostStartupScriptConfig

message PredefinedSplit

training_pipeline.proto:386

Assigns input data to training, validation, and test sets based on the value of a provided key. Supported only for tabular Datasets.

Used in: InputDataConfig
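
A minimal sketch of the key-based routing described above (plain Python; `assign_split` and the accepted set-name values are assumptions for illustration):

```python
def assign_split(row, key):
    """PredefinedSplit semantics sketch: each row is routed to the set
    named by the value of the provided key column; rows with any other
    value are left unassigned."""
    value = row.get(key)
    return value if value in {"training", "validation", "test"} else None

assign_split({"ml_use": "test", "price": 9.5}, key="ml_use")  # 'test'
assign_split({"price": 9.5}, key="ml_use")                    # None
```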

message PredictRequestResponseLoggingConfig

endpoint.proto:351

Configuration for logging request-response to a BigQuery table.

Used in: Endpoint

message PredictSchemata

model.proto:521

Contains the schemata used in Model's predictions and explanations via [PredictionService.Predict][google.cloud.aiplatform.v1.PredictionService.Predict], [PredictionService.Explain][google.cloud.aiplatform.v1.PredictionService.Explain] and [BatchPredictionJob][google.cloud.aiplatform.v1.BatchPredictionJob].

Used in: Model, PublisherModel, UnmanagedContainerModel

message Presets

explanation.proto:475

Preset configuration for example-based explanations

Used in: Examples

enum Presets.Modality

explanation.proto:486

Preset option controlling parameters for different modalities

Used in: Presets

enum Presets.Query

explanation.proto:477

Preset option controlling parameters for query speed-precision trade-off

Used in: Presets

message PrivateEndpoints

endpoint.proto:335

The PrivateEndpoints proto is used to provide paths for users to send requests privately. To send a request via private service access, use predict_http_uri, explain_http_uri, or health_http_uri. To send a request via Private Service Connect, use service_attachment.

Used in: DeployedModel

message PrivateServiceConnectConfig

service_networking.proto:50

Represents configuration for private service connect.

Used in: Endpoint, FeatureOnlineStore.DedicatedServingEndpoint, IndexEndpoint

message Probe

model.proto:866

Probe describes a health check to be performed against a container to determine whether it is alive or ready to receive traffic.

Used in: ModelContainerSpec
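
Hedged sketches of Probe payloads, one per action type listed in the sub-messages below (exec and http_get shown); the timing field names are assumed for illustration and not verified against the proto:

```python
# Illustrative payload shapes, not the generated proto classes.
liveness_probe = {
    "exec": {"command": ["cat", "/tmp/healthy"]},  # ExecAction: run a command
    "period_seconds": 10,
    "timeout_seconds": 5,
}
readiness_probe = {
    "http_get": {"path": "/ready", "port": 8080},  # HttpGetAction: HTTP GET check
    "period_seconds": 10,
    "timeout_seconds": 5,
}
```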

message Probe.ExecAction

model.proto:868

ExecAction specifies a command to execute.

Used in: Probe

message Probe.GrpcAction

model.proto:900

GrpcAction checks the health of a container using a gRPC service.

Used in: Probe

message Probe.HttpGetAction

model.proto:879

HttpGetAction describes an action based on HTTP Get requests.

Used in: Probe

message Probe.HttpHeader

model.proto:925

HttpHeader describes a custom header to be used in HTTP probes

Used in: HttpGetAction

message Probe.TcpSocketAction

model.proto:914

TcpSocketAction probes the health of a container by opening a TCP socket connection.

Used in: Probe

message PscAutomatedEndpoints

service_networking.proto:67

PscAutomatedEndpoints defines the output of the forwarding rule automatically created by each PscAutomationConfig.

Used in: IndexPrivateEndpoints

message PublisherModel.CallToAction

publisher_model.proto:67

Actions that can be taken on this Publisher Model.

Used in: PublisherModel

message PublisherModel.CallToAction.Deploy

publisher_model.proto:117

Model metadata that is needed for UploadModel or DeployModel/CreateEndpoint requests.

Used in: CallToAction

message PublisherModel.CallToAction.Deploy.DeployMetadata

publisher_model.proto:120

Metadata information about the deployment for managing deployment config.

Used in: Deploy

message PublisherModel.CallToAction.DeployGke

publisher_model.proto:181

Configurations for PublisherModel GKE deployment

Used in: CallToAction

message PublisherModel.CallToAction.OpenFineTuningPipelines

publisher_model.proto:109

Open fine tuning pipelines.

Used in: CallToAction

message PublisherModel.CallToAction.OpenNotebooks

publisher_model.proto:102

Open notebooks.

Used in: CallToAction

message PublisherModel.CallToAction.RegionalResourceReferences

publisher_model.proto:70

The regional resource name or the URI. Key is the region, e.g., us-central1, europe-west2, global, etc.

Used in: CallToAction, OpenFineTuningPipelines, OpenNotebooks

message PublisherModel.CallToAction.ViewRestApi

publisher_model.proto:92

REST API docs.

Used in: CallToAction

message PublisherModel.Documentation

publisher_model.proto:57

A named piece of documentation.

Used in: CallToAction.ViewRestApi

enum PublisherModel.LaunchStage

publisher_model.proto:262

An enum representing the launch stage of a PublisherModel.

Used in: PublisherModel

enum PublisherModel.OpenSourceCategory

publisher_model.proto:238

An enum representing the open source category of a PublisherModel.

Used in: PublisherModel

message PublisherModel.ResourceReference

publisher_model.proto:40

Reference to a resource.

Used in: CallToAction.RegionalResourceReferences

enum PublisherModel.VersionState

publisher_model.proto:287

An enum representing the state of the PublicModelVersion.

Used in: PublisherModel

enum PublisherModelView

model_garden_service.proto:49

View enumeration of PublisherModel.

Used in: GetPublisherModelRequest

message PurgeArtifactsMetadata

metadata_service.proto:702

Details of operations that perform [MetadataService.PurgeArtifacts][google.cloud.aiplatform.v1.MetadataService.PurgeArtifacts].

message PurgeArtifactsResponse

metadata_service.proto:687

Response message for [MetadataService.PurgeArtifacts][google.cloud.aiplatform.v1.MetadataService.PurgeArtifacts].

message PurgeContextsMetadata

metadata_service.proto:913

Details of operations that perform [MetadataService.PurgeContexts][google.cloud.aiplatform.v1.MetadataService.PurgeContexts].

message PurgeContextsResponse

metadata_service.proto:898

Response message for [MetadataService.PurgeContexts][google.cloud.aiplatform.v1.MetadataService.PurgeContexts].

message PurgeExecutionsMetadata

metadata_service.proto:1222

Details of operations that perform [MetadataService.PurgeExecutions][google.cloud.aiplatform.v1.MetadataService.PurgeExecutions].

message PurgeExecutionsResponse

metadata_service.proto:1206

Response message for [MetadataService.PurgeExecutions][google.cloud.aiplatform.v1.MetadataService.PurgeExecutions].

message PythonPackageSpec

custom_job.proto:333

The spec of a Python packaged code.

Used in: WorkerPoolSpec

message QuestionAnsweringCorrectnessInput

evaluation_service.proto:940

Input for question answering correctness metric.

Used in: EvaluateInstancesRequest

message QuestionAnsweringCorrectnessInstance

evaluation_service.proto:951

Spec for question answering correctness instance.

Used in: QuestionAnsweringCorrectnessInput

message QuestionAnsweringCorrectnessResult

evaluation_service.proto:976

Spec for question answering correctness result.

Used in: EvaluateInstancesResponse

message QuestionAnsweringCorrectnessSpec

evaluation_service.proto:966

Spec for question answering correctness metric.

Used in: QuestionAnsweringCorrectnessInput

message QuestionAnsweringHelpfulnessInput

evaluation_service.proto:892

Input for question answering helpfulness metric.

Used in: EvaluateInstancesRequest

message QuestionAnsweringHelpfulnessInstance

evaluation_service.proto:903

Spec for question answering helpfulness instance.

Used in: QuestionAnsweringHelpfulnessInput

message QuestionAnsweringHelpfulnessResult

evaluation_service.proto:928

Spec for question answering helpfulness result.

Used in: EvaluateInstancesResponse

message QuestionAnsweringHelpfulnessSpec

evaluation_service.proto:918

Spec for question answering helpfulness metric.

Used in: QuestionAnsweringHelpfulnessInput

message QuestionAnsweringQualityInput

evaluation_service.proto:743

Input for question answering quality metric.

Used in: EvaluateInstancesRequest

message QuestionAnsweringQualityInstance

evaluation_service.proto:754

Spec for question answering quality instance.

Used in: QuestionAnsweringQualityInput

message QuestionAnsweringQualityResult

evaluation_service.proto:779

Spec for question answering quality result.

Used in: EvaluateInstancesResponse

message QuestionAnsweringQualitySpec

evaluation_service.proto:769

Spec for question answering quality score metric.

Used in: QuestionAnsweringQualityInput

message QuestionAnsweringRelevanceInput

evaluation_service.proto:844

Input for question answering relevance metric.

Used in: EvaluateInstancesRequest

message QuestionAnsweringRelevanceInstance

evaluation_service.proto:855

Spec for question answering relevance instance.

Used in: QuestionAnsweringRelevanceInput

message QuestionAnsweringRelevanceResult

evaluation_service.proto:880

Spec for question answering relevance result.

Used in: EvaluateInstancesResponse

message QuestionAnsweringRelevanceSpec

evaluation_service.proto:870

Spec for question answering relevance metric.

Used in: QuestionAnsweringRelevanceInput

message RagChunk

vertex_rag_data.proto:289

A RagChunk includes the content of a chunk of a RagFile, and associated metadata.

Used in: Fact, GroundingChunk.RetrievedContext, RagContexts.Context

message RagChunk.PageSpan

vertex_rag_data.proto:291

Represents where the chunk starts and ends in the document.

Used in: RagChunk

message RagContexts

vertex_rag_service.proto:145

Relevant contexts for one query.

Used in: RetrieveContextsResponse

message RagContexts.Context

vertex_rag_service.proto:147

A context of the query.

Used in: RagContexts

message RagCorpus

vertex_rag_data.proto:181

A RagCorpus is a RagFile container and a project can have multiple RagCorpora.

Used as response type in: VertexRagDataService.GetRagCorpus

Used as field type in: CreateRagCorpusRequest, ListRagCorporaResponse, UpdateRagCorpusRequest

message RagEmbeddingModelConfig

vertex_rag_data.proto:34

Config for the embedding model to use for RAG.

Used in: RagVectorDbConfig

message RagEmbeddingModelConfig.VertexPredictionEndpoint

vertex_rag_data.proto:36

Config representing a model hosted on Vertex Prediction Endpoint.

Used in: RagEmbeddingModelConfig

message RagFile

vertex_rag_data.proto:229

A RagFile contains user data for chunking, embedding and indexing.

Used as response type in: VertexRagDataService.GetRagFile

Used as field type in: ListRagFilesResponse, UploadRagFileRequest, UploadRagFileResponse

message RagFileChunkingConfig

vertex_rag_data.proto:307

Specifies the size and overlap of chunks for RagFiles.

Used in: RagFileTransformationConfig

message RagFileChunkingConfig.FixedLengthChunking

vertex_rag_data.proto:309

Specifies the fixed length chunking config.

Used in: RagFileChunkingConfig
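
The size/overlap idea behind fixed-length chunking can be sketched like this (an illustration of the concept, not Vertex AI's chunker; token counting is simplified to list slicing):

```python
def fixed_length_chunks(tokens, chunk_size, chunk_overlap):
    """Split a token list into windows of chunk_size tokens, each
    overlapping the previous one by chunk_overlap tokens."""
    if chunk_overlap >= chunk_size:
        raise ValueError("chunk_overlap must be smaller than chunk_size")
    step = chunk_size - chunk_overlap
    return [tokens[i:i + chunk_size]
            for i in range(0, max(len(tokens) - chunk_overlap, 1), step)]

fixed_length_chunks(list(range(10)), chunk_size=4, chunk_overlap=1)
# [[0, 1, 2, 3], [3, 4, 5, 6], [6, 7, 8, 9]]
```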

message RagFileParsingConfig

vertex_rag_data.proto:331

Specifies the parsing config for RagFiles.

Used in: ImportRagFilesConfig

message RagFileParsingConfig.LayoutParser

vertex_rag_data.proto:333

Document AI Layout Parser config.

Used in: RagFileParsingConfig

message RagFileParsingConfig.LlmParser

vertex_rag_data.proto:351

Specifies the advanced parsing for RagFiles.

Used in: RagFileParsingConfig

message RagFileTransformationConfig

vertex_rag_data.proto:325

Specifies the transformation config for RagFiles.

Used in: ImportRagFilesConfig, UploadRagFileConfig

message RagQuery

vertex_rag_service.proto:75

A query to retrieve relevant contexts.

Used in: RetrieveContextsRequest

message RagRetrievalConfig

tool.proto:360

Specifies the context retrieval config.

Used in: RagQuery, VertexRagStore

message RagRetrievalConfig.Filter

tool.proto:362

Config for filters.

Used in: RagRetrievalConfig

message RagRetrievalConfig.Ranking

tool.proto:382

Config for ranking and reranking.

Used in: RagRetrievalConfig

message RagRetrievalConfig.Ranking.LlmRanker

tool.proto:391

Config for LlmRanker.

Used in: Ranking

message RagRetrievalConfig.Ranking.RankService

tool.proto:384

Config for Rank Service.

Used in: Ranking

message RagVectorDbConfig

vertex_rag_data.proto:77

Config for the Vector DB to use for RAG.

Used in: RagCorpus

message RagVectorDbConfig.Pinecone

vertex_rag_data.proto:82

The config for Pinecone.

Used in: RagVectorDbConfig

message RagVectorDbConfig.RagManagedDb

vertex_rag_data.proto:79

The config for the default RAG-managed Vector DB.

Used in: RagVectorDbConfig

(message has no fields)

message RagVectorDbConfig.VertexVectorSearch

vertex_rag_data.proto:89

The config for Vertex Vector Search.

Used in: RagVectorDbConfig

message RayLogsSpec

persistent_resource.proto:303

Configuration for the Ray OSS Logs.

Used in: RaySpec

message RayMetricSpec

persistent_resource.proto:297

Configuration for the Ray metrics.

Used in: RaySpec

message RaySpec

persistent_resource.proto:226

Configuration information for the Ray cluster. For the experimental launch, Ray cluster creation and Persistent cluster creation have a 1:1 mapping: all the nodes within the Persistent cluster will be provisioned as Ray nodes.

Used in: ResourceRuntimeSpec

message ReadFeatureValuesResponse

featurestore_online_service.proto:145

Response message for [FeaturestoreOnlineServingService.ReadFeatureValues][google.cloud.aiplatform.v1.FeaturestoreOnlineServingService.ReadFeatureValues].

Used as response type in: FeaturestoreOnlineServingService.ReadFeatureValues, FeaturestoreOnlineServingService.StreamingReadFeatureValues

message ReadFeatureValuesResponse.EntityView

featurestore_online_service.proto:170

Entity view with Feature values.

Used in: ReadFeatureValuesResponse

message ReadFeatureValuesResponse.EntityView.Data

featurestore_online_service.proto:173

Container to hold value(s), successive in time, for one Feature from the request.

Used in: EntityView

message ReadFeatureValuesResponse.FeatureDescriptor

featurestore_online_service.proto:147

Metadata for requested Features.

Used in: Header

message ReadFeatureValuesResponse.Header

featurestore_online_service.proto:155

Response header with metadata for the requested [ReadFeatureValuesRequest.entity_type][google.cloud.aiplatform.v1.ReadFeatureValuesRequest.entity_type] and Features.

Used in: ReadFeatureValuesResponse

message ReadTensorboardUsageResponse.PerMonthUsageData

tensorboard_service.proto:513

Per month usage data

Used in: ReadTensorboardUsageResponse

message ReadTensorboardUsageResponse.PerUserUsageData

tensorboard_service.proto:504

Per user usage data.

Used in: PerMonthUsageData

message ReasoningEngine

reasoning_engine.proto:89

ReasoningEngine provides a customizable runtime for models to determine which actions to take and in which order.

Used as response type in: ReasoningEngineService.GetReasoningEngine

Used as field type in: CreateReasoningEngineRequest, ListReasoningEnginesResponse, UpdateReasoningEngineRequest

message ReasoningEngineSpec

reasoning_engine.proto:34

ReasoningEngine configurations

Used in: ReasoningEngine

message ReasoningEngineSpec.DeploymentSpec

reasoning_engine.proto:53

The specification of a Reasoning Engine deployment.

Used in: ReasoningEngineSpec

message ReasoningEngineSpec.PackageSpec

reasoning_engine.proto:36

User-provided package spec, such as a pickled object and package requirements.

Used in: ReasoningEngineSpec

message RebaseTunedModelOperationMetadata

genai_tuning_service.proto:215

Runtime operation information for [GenAiTuningService.RebaseTunedModel][google.cloud.aiplatform.v1.GenAiTuningService.RebaseTunedModel].

message RebootPersistentResourceOperationMetadata

persistent_resource_service.proto:161

Details of operations that reboot a PersistentResource.

message ReservationAffinity

reservation_affinity.proto:37

A ReservationAffinity can be used to configure a Vertex AI resource (e.g., a DeployedModel) to draw its Compute Engine resources from a Shared Reservation, or exclusively from on-demand capacity.

Used in: MachineSpec
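
A hedged sketch of a MachineSpec carrying a ReservationAffinity that pins the resource to a specific shared reservation; the affinity key and the reservation name below are illustrative placeholders, not verified values:

```python
# Illustrative payload shape, not the generated proto class.
machine_spec = {
    "machine_type": "n1-standard-4",
    "reservation_affinity": {
        "reservation_affinity_type": "SPECIFIC_RESERVATION",
        "key": "compute.googleapis.com/reservation-name",  # assumed key
        "values": ["my-shared-reservation"],               # placeholder name
    },
}
```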

enum ReservationAffinity.Type

reservation_affinity.proto:39

Identifies a type of reservation affinity.

Used in: ReservationAffinity

message ResourcePool

persistent_resource.proto:162

Represents the spec of a group of resources of the same type, for example machine type, disk, and accelerators, in a PersistentResource.

Used in: PersistentResource

message ResourcePool.AutoscalingSpec

persistent_resource.proto:164

The min/max number of replicas allowed if autoscaling is enabled.

Used in: ResourcePool

message ResourceRuntime

persistent_resource.proto:262

Persistent cluster runtime information, as output.

Used in: PersistentResource

message ResourceRuntimeSpec

persistent_resource.proto:212

Configuration for the runtime on a PersistentResource instance, including but not limited to: * Service accounts used to run the workloads. * Whether to make it a dedicated Ray Cluster.

Used in: PersistentResource

message ResourcesConsumed

machine_resources.proto:190

Statistics information about resource consumption.

Used in: BatchPredictionJob

message RestoreDatasetVersionOperationMetadata

dataset_service.proto:604

Runtime operation information for [DatasetService.RestoreDatasetVersion][google.cloud.aiplatform.v1.DatasetService.RestoreDatasetVersion].

message Retrieval

tool.proto:208

Defines a retrieval tool that model can call to access external knowledge.

Used in: Tool

message RetrievalConfig

tool.proto:351

Retrieval config.

Used in: ToolConfig

message RetrievalMetadata

content.proto:635

Metadata related to retrieval in the grounding flow.

Used in: GroundingMetadata

message RetrieveContextsRequest.VertexRagStore

vertex_rag_service.proto:92

The data source for Vertex RagStore.

Used in: RetrieveContextsRequest

message RetrieveContextsRequest.VertexRagStore.RagResource

vertex_rag_service.proto:94

The definition of the Rag resource.

Used in: VertexRagStore

message RougeInput

evaluation_service.proto:330

Input for rouge metric.

Used in: EvaluateInstancesRequest

message RougeInstance

evaluation_service.proto:339

Spec for rouge instance.

Used in: RougeInput

message RougeMetricValue

evaluation_service.proto:368

Rouge metric value for an instance.

Used in: RougeResults

message RougeResults

evaluation_service.proto:361

Results for rouge metric.

Used in: EvaluateInstancesResponse

message RougeSpec

evaluation_service.proto:349

Spec for the ROUGE score metric. Calculates the recall of n-grams in the prediction as compared to the reference, returning a score between 0 and 1.

Used in: RougeInput
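
The n-gram recall described above can be computed with a toy implementation (this illustrates the metric's definition, not the service's implementation, which also supports variants such as rougeLsum):

```python
from collections import Counter

def rouge_n_recall(prediction, reference, n=1):
    """Toy ROUGE-N recall: the fraction of the reference's n-grams
    that also appear in the prediction (counts clipped)."""
    def ngrams(text):
        toks = text.lower().split()
        return Counter(tuple(toks[i:i + n]) for i in range(len(toks) - n + 1))
    ref, pred = ngrams(reference), ngrams(prediction)
    total = sum(ref.values())
    if total == 0:
        return 0.0
    return sum((ref & pred).values()) / total

rouge_n_recall("the cat sat", "the cat sat on the mat")  # 0.5 (3 of 6 unigrams)
```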

message SafetyInput

evaluation_service.proto:440

Input for safety metric.

Used in: EvaluateInstancesRequest

message SafetyInstance

evaluation_service.proto:449

Spec for safety instance.

Used in: SafetyInput

message SafetyRating

content.proto:330

Safety rating corresponding to the generated content.

Used in: Candidate, GenerateContentResponse.PromptFeedback

enum SafetyRating.HarmProbability

content.proto:332

Harm probability levels in the content.

Used in: SafetyRating

enum SafetyRating.HarmSeverity

content.proto:350

Harm severity levels.

Used in: SafetyRating

message SafetyResult

evaluation_service.proto:461

Spec for safety result.

Used in: EvaluateInstancesResponse

message SafetySetting

content.proto:284

Safety settings.

Used in: GenerateContentRequest

enum SafetySetting.HarmBlockMethod

content.proto:307

Probability vs severity.

Used in: SafetySetting

enum SafetySetting.HarmBlockThreshold

content.proto:286

Probability based thresholds levels for blocking.

Used in: SafetySetting
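
A hedged sketch of per-category safety settings using probability-based thresholds; the category and threshold enum names follow GenerateContentRequest conventions but are not verified against the proto here:

```python
# Illustrative payload shapes, not the generated proto classes.
safety_settings = [
    {"category": "HARM_CATEGORY_HATE_SPEECH",
     "threshold": "BLOCK_MEDIUM_AND_ABOVE"},
    {"category": "HARM_CATEGORY_DANGEROUS_CONTENT",
     "threshold": "BLOCK_ONLY_HIGH"},
]
```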

message SafetySpec

evaluation_service.proto:455

Spec for safety metric.

Used in: SafetyInput

message SampleConfig

data_labeling_job.proto:172

Active learning data sampling config. For every active learning labeling iteration, it will select a batch of data based on the sampling strategy.

Used in: ActiveLearningConfig

enum SampleConfig.SampleStrategy

data_labeling_job.proto:175

Sample strategy decides which subset of DataItems should be selected for human labeling in every batch.

Used in: SampleConfig

message SampledShapleyAttribution

explanation.proto:283

An attribution method that approximates Shapley values for features that contribute to the label being predicted. A sampling strategy is used to approximate the value rather than considering all subsets of features.

Used in: ExplanationParameters
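
The sampling idea the message describes can be sketched as a Monte Carlo estimate over random feature orderings (this shows the concept, not Vertex AI's implementation; path_count mirrors the notion of sampled paths):

```python
import random

def sampled_shapley(f, baseline, instance, path_count=50, seed=0):
    """Average each feature's marginal contribution over random
    feature orderings from baseline to instance."""
    rng = random.Random(seed)
    n = len(instance)
    attributions = [0.0] * n
    for _ in range(path_count):
        order = list(range(n))
        rng.shuffle(order)
        current = list(baseline)
        prev = f(current)
        for i in order:
            current[i] = instance[i]  # flip feature i on, record the delta
            value = f(current)
            attributions[i] += value - prev
            prev = value
    return [a / path_count for a in attributions]

# For an additive model the sampled attributions are exact:
sampled_shapley(lambda x: 2 * x[0] + 3 * x[1], [0, 0], [1, 1])  # [2.0, 3.0]
```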

message SamplingStrategy

model_monitoring.proto:218

Sampling Strategy for logging, can be for both training and prediction dataset.

Used in: ModelDeploymentMonitoringJob, ModelMonitoringObjectiveConfig.TrainingDataset

message SamplingStrategy.RandomSampleConfig

model_monitoring.proto:220

Requests are randomly selected.

Used in: SamplingStrategy

message SavedQuery

saved_query.proto:34

A SavedQuery is a view of the dataset. It references a subset of annotations by problem type and filters.

Used in: Dataset, ListSavedQueriesResponse

message Scalar

tensorboard_data.proto:72

One point viewable on a scalar metric plot.

Used in: TimeSeriesDataPoint

message Schedule

schedule.proto:35

An instance of a Schedule periodically launches runs, making API calls based on a user-specified time specification and API request type.

Used as response type in: ScheduleService.CreateSchedule, ScheduleService.GetSchedule, ScheduleService.UpdateSchedule

Used as field type in: CreateScheduleRequest, ListSchedulesResponse, UpdateScheduleRequest

message Schedule.RunResponse

schedule.proto:42

Status of a scheduled run.

Used in: Schedule

enum Schedule.State

schedule.proto:51

Possible state of the schedule.

Used in: Schedule

message Scheduling

custom_job.proto:359

All parameters related to queuing and scheduling of custom jobs.

Used in: CustomJobSpec

enum Scheduling.Strategy

custom_job.proto:365

Optional. Determines which scheduling strategy to use. Currently two options are supported: STANDARD, which uses regular on-demand resources to schedule the job, and SPOT, which leverages spot resources along with regular resources to schedule the job.

Used in: Scheduling
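
A hedged sketch of a Scheduling payload selecting the SPOT strategy; the field names mirror the enum description above and are illustrative, not verified:

```python
# Illustrative payload shape, not the generated proto class.
scheduling = {
    "timeout": "7200s",   # give the job up to two hours
    "strategy": "SPOT",   # "STANDARD" = on-demand resources only
}
```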

message Schema

openapi.proto:59

Schema is used to define the format of input/output data. Represents a select subset of an [OpenAPI 3.0 schema object](https://spec.openapis.org/oas/v3.0.3#schema-object). More fields may be added in the future as needed.

Used in: FunctionDeclaration, GenerationConfig
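
A hedged sketch of a Schema for the parameters of a hypothetical get_weather function declaration; the field shapes follow the OpenAPI 3.0 schema-object subset the message describes, with Vertex-style uppercase type names assumed:

```python
# Illustrative payload shape, not the generated proto class.
get_weather_parameters = {
    "type": "OBJECT",
    "properties": {
        "location": {"type": "STRING", "description": "City name"},
        "unit": {"type": "STRING", "enum": ["CELSIUS", "FAHRENHEIT"]},
    },
    "required": ["location"],
}
```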

message SearchDataItemsRequest.OrderByAnnotation

dataset_service.proto:653

Expression that allows ranking results based on annotation's property.

Used in: SearchDataItemsRequest

message SearchEntryPoint

content.proto:624

Google search entry point.

Used in: GroundingMetadata

message SearchModelDeploymentMonitoringStatsAnomaliesRequest.StatsAnomaliesObjective

job_service.proto:1149

Stats requested for specific objective.

Used in: SearchModelDeploymentMonitoringStatsAnomaliesRequest

message SecretEnvVar

env_var.proto:59

Represents an environment variable where the value is a secret in Cloud Secret Manager.

Used in: ReasoningEngineSpec.DeploymentSpec

message SecretRef

env_var.proto:46

Reference to a secret stored in the Cloud Secret Manager that will provide the value for this environment variable.

Used in: SecretEnvVar

message Segment

content.proto:526

Segment of the content.

Used in: GroundingSupport

message ServiceAccountSpec

persistent_resource.proto:274

Configuration for the use of custom service account to run the workloads.

Used in: ResourceRuntimeSpec

message SharePointSources

io.proto:208

The SharePointSources to pass to ImportRagFiles.

Used in: ImportRagFilesConfig, RagFile

message SharePointSources.SharePointSource

io.proto:210

An individual SharePointSource.

Used in: SharePointSources

message ShieldedVmConfig

machine_resources.proto:259

A set of Shielded Instance options. See [Images using supported Shielded VM features](https://cloud.google.com/compute/docs/instances/modifying-shielded-vm).

Used in: NotebookRuntime, NotebookRuntimeTemplate

message SlackSource

io.proto:144

The Slack source for the ImportRagFilesRequest.

Used in: ImportRagFilesConfig, RagFile

message SlackSource.SlackChannels

io.proto:146

SlackChannels contains the Slack channels and corresponding access token.

Used in: SlackSource

message SlackSource.SlackChannels.SlackChannel

io.proto:148

SlackChannel contains the Slack channel ID and the time range to import.

Used in: SlackChannels

message SmoothGradConfig

explanation.proto:356

Config for SmoothGrad approximation of gradients. When enabled, the gradients are approximated by averaging the gradients from noisy samples in the vicinity of the inputs. Adding noise can help improve the computed gradients. Refer to this paper for more details: https://arxiv.org/pdf/1706.03825.pdf

Used in: IntegratedGradientsAttribution, XraiAttribution
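
A minimal sketch of the SmoothGrad averaging described above. `grad_fn` is a hypothetical stand-in for the model's gradient at a scalar input; `noise_sigma` and `noisy_sample_count` loosely mirror the config's knobs:

```python
import random

def smoothgrad(grad_fn, x, noise_sigma=0.1, noisy_sample_count=25, seed=0):
    """Approximate the gradient at x by averaging gradients taken at
    Gaussian-noised copies of x, as SmoothGradConfig describes."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(noisy_sample_count):
        total += grad_fn(x + rng.gauss(0.0, noise_sigma))
    return total / noisy_sample_count
```

For a smooth function such as `v**2` (gradient `2*v`), the averaged gradient at `x=1.0` stays close to the true value `2.0`.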

message SpecialistPool

specialist_pool.proto:36

SpecialistPool represents customers' own workforce to work on their data labeling jobs. It includes a group of specialist managers and workers. Managers are responsible for managing the workers in this pool as well as customers' data labeling jobs associated with this pool. Customers create a specialist pool and start data labeling jobs on Cloud; managers and workers handle the jobs using the CrowdCompute console.

Used as response type in: SpecialistPoolService.GetSpecialistPool

Used as field type in: CreateSpecialistPoolRequest, ListSpecialistPoolsResponse, UpdateSpecialistPoolRequest

message SpeculativeDecodingSpec

endpoint.proto:381

Configuration for Speculative Decoding.

Used in: DeployedModel

message SpeculativeDecodingSpec.DraftModelSpeculation

endpoint.proto:384

Draft model speculation works by using the smaller model to generate candidate tokens for speculative decoding.

Used in: SpeculativeDecodingSpec

message SpeculativeDecodingSpec.NgramSpeculation

endpoint.proto:397

N-Gram speculation works by trying to find matching tokens in the previous prompt sequence and using those as speculation for generating new tokens.

Used in: SpeculativeDecodingSpec
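
The lookup described above can be sketched as follows. `ngram_size` and `speculation_len` are hypothetical names that loosely mirror the message's fields; the real decoder would verify the drafted tokens against the full model:

```python
def ngram_speculate(tokens, ngram_size=3, speculation_len=4):
    """Find an earlier occurrence of the last `ngram_size` tokens and
    propose the tokens that followed it as draft tokens."""
    if len(tokens) < ngram_size:
        return []
    key = tokens[-ngram_size:]
    # Scan earlier positions for the same n-gram, most recent match first;
    # the start index excludes the trailing n-gram itself.
    for i in range(len(tokens) - ngram_size - 1, -1, -1):
        if tokens[i:i + ngram_size] == key:
            return tokens[i + ngram_size:i + ngram_size + speculation_len]
    return []
```

Given `["the", "cat", "sat", "on", "the", "cat"]` with `ngram_size=2`, the trailing bigram `["the", "cat"]` also appears at the start, so the tokens that followed it are proposed as the draft.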

message StartNotebookRuntimeOperationMetadata

notebook_service.proto:610

Metadata information for [NotebookService.StartNotebookRuntime][google.cloud.aiplatform.v1.NotebookService.StartNotebookRuntime].

message StartNotebookRuntimeResponse

notebook_service.proto:621

Response message for [NotebookService.StartNotebookRuntime][google.cloud.aiplatform.v1.NotebookService.StartNotebookRuntime].

(message has no fields)

message StopNotebookRuntimeOperationMetadata

notebook_service.proto:640

Metadata information for [NotebookService.StopNotebookRuntime][google.cloud.aiplatform.v1.NotebookService.StopNotebookRuntime].

message StopNotebookRuntimeResponse

notebook_service.proto:647

Response message for [NotebookService.StopNotebookRuntime][google.cloud.aiplatform.v1.NotebookService.StopNotebookRuntime].

(message has no fields)

message StratifiedSplit

training_pipeline.proto:436

Assigns input data to the training, validation, and test sets so that the distribution of values found in the categorical column (as specified by the `key` field) is mirrored within each split. The fraction values determine the relative sizes of the splits. For example, if the specified column has three values, with 50% of the rows having value "A", 25% value "B", and 25% value "C", and the split fractions are specified as 80/10/10, then the training set will constitute 80% of the input data, with about 50% of the training set rows having the value "A" for the specified column, about 25% having the value "B", and about 25% having the value "C". Only the top 500 occurring values are used; any values not in the top 500 are randomly assigned to a split. If fewer than three rows contain a specific value, those rows are randomly assigned. Supported only for tabular Datasets.

Used in: InputDataConfig
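
A rough sketch of the stratified assignment described above, assuming a simple per-category carve-up by the split fractions (the real service's top-500-values cap and random fallback for rare values are omitted):

```python
import random

def stratified_split(rows, key, fractions=(0.8, 0.1, 0.1), seed=0):
    """Shuffle rows within each category of `key`, then carve every
    category into train/validation/test by the given fractions so each
    split mirrors the overall category distribution."""
    rng = random.Random(seed)
    by_value = {}
    for row in rows:
        by_value.setdefault(row[key], []).append(row)
    train, val, test = [], [], []
    for group in by_value.values():
        rng.shuffle(group)
        n = len(group)
        a = int(n * fractions[0])
        b = a + int(n * fractions[1])
        train += group[:a]
        val += group[a:b]
        test += group[b:]
    return train, val, test
```

With 10 "A" rows and 10 "B" rows at 80/10/10, the training set gets 8 of each category, so its A/B mix matches the input's.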

message StreamingPredictRequest

prediction_service.proto:479

Request message for [PredictionService.StreamingPredict][google.cloud.aiplatform.v1.PredictionService.StreamingPredict]. The first message must contain the [endpoint][google.cloud.aiplatform.v1.StreamingPredictRequest.endpoint] field and optionally [input][]. The subsequent messages must contain [input][].

Used as request type in: PredictionService.ServerStreamingPredict, PredictionService.StreamingPredict

message StreamingPredictResponse

prediction_service.proto:499

Response message for [PredictionService.StreamingPredict][google.cloud.aiplatform.v1.PredictionService.StreamingPredict].

Used as response type in: PredictionService.ServerStreamingPredict, PredictionService.StreamingPredict

message StringArray

types.proto:46

A list of string values.

Used in: FeatureValue

message StructFieldValue

featurestore_online_service.proto:291

One field of a Struct (or object) type feature value.

Used in: StructValue

message StructValue

featurestore_online_service.proto:285

Struct (or object) type feature value.

Used in: FeatureValue

message Study

study.proto:35

A message representing a Study.

Used as response type in: VizierService.CreateStudy, VizierService.GetStudy, VizierService.LookupStudy

Used as field type in: CreateStudyRequest, ListStudiesResponse

enum Study.State

study.proto:42

Describes the Study state.

Used in: Study, SuggestTrialsResponse

message StudySpec

study.proto:228

Represents specification of a Study.

Used in: HyperparameterTuningJob, Study

enum StudySpec.Algorithm

study.proto:575

The available search algorithms for the Study.

Used in: StudySpec

message StudySpec.ConvexAutomatedStoppingSpec

study.proto:471

Configuration for ConvexAutomatedStoppingSpec. When there are enough completed trials (configured by min_measurement_count), for pending trials with enough measurements and steps, the policy first computes an overestimate of the objective value at max_num_steps according to the slope of the incomplete objective value curve. No prediction can be made if the curve is completely flat. If the overestimation is worse than the best objective value of the completed trials, this pending trial will be early-stopped, but a last measurement will be added to the pending trial with max_num_steps and predicted objective value from the autoregression model.

Used in: StudySpec
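
A deliberately simplified sketch of the overestimate-then-stop idea above. The real service fits an autoregressive model to the objective curve; this illustrative version just extrapolates linearly from the last measurement segment (an optimistic overestimate for a concave, improving curve):

```python
def should_stop_convex(steps, values, max_num_steps, best_completed,
                       maximize=True):
    """Extrapolate the pending Trial's objective to max_num_steps and
    stop if even that optimistic estimate cannot beat the best
    completed Trial."""
    if len(steps) < 2:
        return False  # not enough measurements to estimate a slope
    slope = (values[-1] - values[-2]) / (steps[-1] - steps[-2])
    if slope == 0 and values[-1] == values[0]:
        return False  # completely flat curve: no prediction can be made
    predicted = values[-1] + slope * (max_num_steps - steps[-1])
    if maximize:
        return predicted < best_completed
    return predicted > best_completed
```

A trial improving at 0.01 per step from 0.11 at step 2 projects to about 0.19 by step 10, so it would be stopped if the best completed trial already reached 0.9.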

message StudySpec.DecayCurveAutomatedStoppingSpec

study.proto:438

The decay curve automated stopping rule builds a Gaussian Process Regressor to predict the final objective value of a Trial based on the already completed Trials and the intermediate measurements of the current Trial. Early stopping is requested for the current Trial if there is very low probability to exceed the optimal value found so far.

Used in: StudySpec

enum StudySpec.MeasurementSelectionType

study.proto:620

This indicates which measurement to use if/when the service automatically selects the final measurement from previously reported intermediate measurements. Choose this based on two considerations: A) Do you expect your measurements to monotonically improve? If so, choose LAST_MEASUREMENT. On the other hand, if you're in a situation where your system can "over-train" and you expect the performance to get better for a while but then start declining, choose BEST_MEASUREMENT. B) Are your measurements significantly noisy and/or irreproducible? If so, BEST_MEASUREMENT will tend to be over-optimistic, and it may be better to choose LAST_MEASUREMENT. If both or neither of (A) and (B) apply, it doesn't matter which selection type is chosen.

Used in: StudySpec
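
The two selection policies contrasted above can be sketched as a one-step chooser over a trial's reported measurements (the function name and signature are illustrative, not the service's API):

```python
def select_final_measurement(measurements, selection="LAST_MEASUREMENT",
                             maximize=True):
    """LAST_MEASUREMENT takes the most recently reported value, which is
    robust to noisy, over-optimistic peaks; BEST_MEASUREMENT takes the
    best value seen, which suits curves that can 'over-train' and decline."""
    if selection == "LAST_MEASUREMENT":
        return measurements[-1]
    return max(measurements) if maximize else min(measurements)
```

For the sequence 0.5, 0.9, 0.7 the two policies disagree: LAST_MEASUREMENT reports 0.7 while BEST_MEASUREMENT reports the (possibly noisy) peak 0.9.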

message StudySpec.MedianAutomatedStoppingSpec

study.proto:452

The median automated stopping rule stops a pending Trial if the Trial's best objective_value is strictly below the median 'performance' of all completed Trials reported up to the Trial's last measurement. Currently, 'performance' refers to the running average of the objective values reported by the Trial in each measurement.

Used in: StudySpec
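
A minimal sketch of the median rule above, assuming a maximization objective (each completed trial is represented by the running average of its reported objective values):

```python
from statistics import median

def should_stop_median(pending_best, completed_running_averages):
    """Stop a pending Trial if its best objective so far is strictly
    below the median running-average objective of completed Trials."""
    if not completed_running_averages:
        return False
    return pending_best < median(completed_running_averages)
```

With completed averages of 0.5, 0.6, and 0.7 the median is 0.6, so a pending trial whose best value is 0.3 stops while one at 0.65 continues.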

message StudySpec.MetricSpec

study.proto:230

Represents a metric to optimize.

Used in: StudySpec

enum StudySpec.MetricSpec.GoalType

study.proto:247

The available types of optimization goals.

Used in: MetricSpec

message StudySpec.MetricSpec.SafetyMetricConfig

study.proto:232

Used in safe optimization to specify threshold levels and risk tolerance.

Used in: MetricSpec

enum StudySpec.ObservationNoise

study.proto:593

Describes the noise level of the repeated observations. "Noisy" means that the repeated observations with the same Trial parameters may lead to different metric evaluations.

Used in: StudySpec

message StudySpec.ParameterSpec

study.proto:271

Represents a single parameter to optimize.

Used in: StudySpec, ParameterSpec.ConditionalParameterSpec

message StudySpec.ParameterSpec.CategoricalValueSpec

study.proto:307

Value specification for a parameter in `CATEGORICAL` type.

Used in: ParameterSpec

message StudySpec.ParameterSpec.ConditionalParameterSpec

study.proto:339

Represents a parameter spec with condition from its parent parameter.

Used in: ParameterSpec

message StudySpec.ParameterSpec.ConditionalParameterSpec.CategoricalValueCondition

study.proto:357

Represents the spec to match categorical values from parent parameter.

Used in: ConditionalParameterSpec

message StudySpec.ParameterSpec.ConditionalParameterSpec.DiscreteValueCondition

study.proto:341

Represents the spec to match discrete values from parent parameter.

Used in: ConditionalParameterSpec

message StudySpec.ParameterSpec.ConditionalParameterSpec.IntValueCondition

study.proto:350

Represents the spec to match integer values from parent parameter.

Used in: ConditionalParameterSpec

message StudySpec.ParameterSpec.DiscreteValueSpec

study.proto:321

Value specification for a parameter in `DISCRETE` type.

Used in: ParameterSpec

message StudySpec.ParameterSpec.DoubleValueSpec

study.proto:273

Value specification for a parameter in `DOUBLE` type.

Used in: ParameterSpec

message StudySpec.ParameterSpec.IntegerValueSpec

study.proto:290

Value specification for a parameter in `INTEGER` type.

Used in: ParameterSpec

enum StudySpec.ParameterSpec.ScaleType

study.proto:385

The type of scaling that should be applied to this parameter.

Used in: ParameterSpec

message StudySpec.StudyStoppingConfig

study.proto:520

The configuration (stopping conditions) for automated stopping of a Study. Conditions include trial budgets, time budgets, and convergence detection.

Used in: StudySpec

message StudyTimeConstraint

study.proto:217

Time-based constraint for a Study.

Used in: StudySpec.StudyStoppingConfig

message SuggestTrialsMetadata

vizier_service.proto:366

Details of operations that perform Trials suggestion.

message SuggestTrialsResponse

vizier_service.proto:351

Response message for [VizierService.SuggestTrials][google.cloud.aiplatform.v1.VizierService.SuggestTrials].

message SummarizationHelpfulnessInput

evaluation_service.proto:647

Input for summarization helpfulness metric.

Used in: EvaluateInstancesRequest

message SummarizationHelpfulnessInstance

evaluation_service.proto:658

Spec for summarization helpfulness instance.

Used in: SummarizationHelpfulnessInput

message SummarizationHelpfulnessResult

evaluation_service.proto:683

Spec for summarization helpfulness result.

Used in: EvaluateInstancesResponse

message SummarizationHelpfulnessSpec

evaluation_service.proto:673

Spec for summarization helpfulness score metric.

Used in: SummarizationHelpfulnessInput

message SummarizationQualityInput

evaluation_service.proto:546

Input for summarization quality metric.

Used in: EvaluateInstancesRequest

message SummarizationQualityInstance

evaluation_service.proto:557

Spec for summarization quality instance.

Used in: SummarizationQualityInput

message SummarizationQualityResult

evaluation_service.proto:582

Spec for summarization quality result.

Used in: EvaluateInstancesResponse

message SummarizationQualitySpec

evaluation_service.proto:572

Spec for summarization quality score metric.

Used in: SummarizationQualityInput

message SummarizationVerbosityInput

evaluation_service.proto:695

Input for summarization verbosity metric.

Used in: EvaluateInstancesRequest

message SummarizationVerbosityInstance

evaluation_service.proto:706

Spec for summarization verbosity instance.

Used in: SummarizationVerbosityInput

message SummarizationVerbosityResult

evaluation_service.proto:731

Spec for summarization verbosity result.

Used in: EvaluateInstancesResponse

message SummarizationVerbositySpec

evaluation_service.proto:721

Spec for summarization verbosity score metric.

Used in: SummarizationVerbosityInput

message SupervisedHyperParameters

tuning_job.proto:267

Hyperparameters for SFT.

Used in: SupervisedTuningSpec

enum SupervisedHyperParameters.AdapterSize

tuning_job.proto:269

Supported adapter sizes for tuning.

Used in: SupervisedHyperParameters

message SupervisedTuningDataStats

tuning_job.proto:212

Tuning data statistics for Supervised Tuning.

Used in: TuningDataStats

message SupervisedTuningDatasetDistribution

tuning_job.proto:168

Dataset distribution for Supervised Tuning.

Used in: SupervisedTuningDataStats

message SupervisedTuningDatasetDistribution.DatasetBucket

tuning_job.proto:171

Dataset bucket used to create a histogram for the distribution given a population of values.

Used in: SupervisedTuningDatasetDistribution
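
A sketch of how such histogram buckets might be computed over a population of values. The bucket field names (`count`, `left`, `right`) are assumptions about the message's shape, used here only for illustration:

```python
def bucketize(values, boundaries):
    """For each half-open interval [left, right) defined by consecutive
    boundaries, count the values falling inside it."""
    buckets = []
    for left, right in zip(boundaries[:-1], boundaries[1:]):
        count = sum(1 for v in values if left <= v < right)
        buckets.append({"count": count, "left": left, "right": right})
    return buckets
```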

message SupervisedTuningSpec

tuning_job.proto:298

Tuning spec for Supervised Tuning of first-party models.

Used in: TuningJob

message TFRecordDestination

io.proto:92

The storage details for TFRecord output content.

Used in: FeatureValueDestination

message Tensor

types.proto:52

A tensor value type.

Used in: DirectPredictRequest, DirectPredictResponse, StreamDirectPredictRequest, StreamDirectPredictResponse, StreamingPredictRequest, StreamingPredictResponse

enum Tensor.DataType

types.proto:54

Data type of the tensor.

Used in: Tensor

message Tensorboard

tensorboard.proto:35

Tensorboard is a physical database that stores users' training metrics. A default Tensorboard is provided in each region of a Google Cloud project. If needed, users can also create extra Tensorboards in their projects.

Used as response type in: TensorboardService.GetTensorboard

Used as field type in: CreateTensorboardRequest, ListTensorboardsResponse, UpdateTensorboardRequest

message TensorboardBlob

tensorboard_data.proto:96

One blob (e.g., image, graph) viewable on a blob metric plot.

Used in: ReadTensorboardBlobDataResponse, TensorboardBlobSequence

message TensorboardBlobSequence

tensorboard_data.proto:90

One point viewable on a blob metric plot; mostly just a wrapper message to work around the fact that repeated fields can't be used directly within `oneof` fields.

Used in: TimeSeriesDataPoint

message TensorboardExperiment

tensorboard_experiment.proto:33

A TensorboardExperiment is a group of TensorboardRuns that are typically the results of a training job run, in a Tensorboard.

Used as response type in: TensorboardService.CreateTensorboardExperiment, TensorboardService.GetTensorboardExperiment, TensorboardService.UpdateTensorboardExperiment

Used as field type in: CreateTensorboardExperimentRequest, ListTensorboardExperimentsResponse, UpdateTensorboardExperimentRequest

message TensorboardRun

tensorboard_run.proto:33

TensorboardRun maps to a specific execution of a training job with a given set of hyperparameter values, model definition, dataset, etc.

Used as response type in: TensorboardService.CreateTensorboardRun, TensorboardService.GetTensorboardRun, TensorboardService.UpdateTensorboardRun

Used as field type in: BatchCreateTensorboardRunsResponse, CreateTensorboardRunRequest, ListTensorboardRunsResponse, UpdateTensorboardRunRequest

message TensorboardTensor

tensorboard_data.proto:78

One point viewable on a tensor metric plot.

Used in: TimeSeriesDataPoint

message TensorboardTimeSeries

tensorboard_time_series.proto:32

TensorboardTimeSeries maps to a time series produced in training runs.

Used as response type in: TensorboardService.CreateTensorboardTimeSeries, TensorboardService.GetTensorboardTimeSeries, TensorboardService.UpdateTensorboardTimeSeries

Used as field type in: BatchCreateTensorboardTimeSeriesResponse, CreateTensorboardTimeSeriesRequest, ListTensorboardTimeSeriesResponse, UpdateTensorboardTimeSeriesRequest

message TensorboardTimeSeries.Metadata

tensorboard_time_series.proto:39

Describes metadata for a TensorboardTimeSeries.

Used in: TensorboardTimeSeries

enum TensorboardTimeSeries.ValueType

tensorboard_time_series.proto:56

An enum representing the value type of a TensorboardTimeSeries.

Used in: TensorboardTimeSeries, TimeSeriesData

message ThresholdConfig

model_monitoring.proto:202

The config for feature monitoring threshold.

Used in: ModelMonitoringObjectiveConfig.PredictionDriftDetectionConfig, ModelMonitoringObjectiveConfig.TrainingPredictionSkewDetectionConfig, ModelMonitoringStatsAnomalies.FeatureHistoricStatsAnomalies

message TimeSeriesData

tensorboard_data.proto:32

All the data stored in a TensorboardTimeSeries.

Used in: BatchReadTensorboardTimeSeriesDataResponse, ReadTensorboardTimeSeriesDataResponse, WriteTensorboardRunDataRequest

message TimeSeriesDataPoint

tensorboard_data.proto:51

A TensorboardTimeSeries data point.

Used in: ExportTensorboardTimeSeriesDataResponse, TimeSeriesData

message TimestampSplit

training_pipeline.proto:401

Assigns input data to training, validation, and test sets based on a provided timestamp. The youngest data pieces are assigned to the training set, the next to the validation set, and the oldest to the test set. Supported only for tabular Datasets.

Used in: InputDataConfig
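
A sketch of the assignment described above, following the message's stated ordering (oldest rows to test, youngest to training); the fraction parameters are hypothetical and given in train/validation/test order:

```python
def timestamp_split(rows, time_key, fractions=(0.8, 0.1, 0.1)):
    """Sort rows ascending by the timestamp column, then give the oldest
    slice to test, the next slice to validation, and the youngest
    remainder to training."""
    ordered = sorted(rows, key=lambda r: r[time_key])  # oldest first
    n = len(ordered)
    n_test = int(n * fractions[2])
    n_val = int(n * fractions[1])
    test = ordered[:n_test]
    validation = ordered[n_test:n_test + n_val]
    training = ordered[n_test + n_val:]
    return training, validation, test
```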

message TokensInfo

llm_utility_service.proto:111

Tokens info with a list of tokens and the corresponding list of token ids.

Used in: ComputeTokensResponse

message Tool

tool.proto:40

Tool details that the model may use to generate a response. A `Tool` is a piece of code that enables the system to interact with external systems to perform an action, or set of actions, outside of the knowledge and scope of the model. A Tool object should contain exactly one type of Tool (e.g., FunctionDeclaration, Retrieval, or GoogleSearchRetrieval).

Used in: CachedContent, CountTokensRequest, GenerateContentRequest

message Tool.CodeExecution

tool.proto:50

Tool that executes code generated by the model and automatically returns the result to the model. See also [ExecutableCode] and [CodeExecutionResult], which are the input and output to this tool.

Used in: Tool

(message has no fields)

message Tool.GoogleSearch

tool.proto:43

GoogleSearch tool type. Tool to support Google Search in the model. Powered by Google.

Used in: Tool

(message has no fields)

message ToolCallValidInput

evaluation_service.proto:1063

Input for tool call valid metric.

Used in: EvaluateInstancesRequest

message ToolCallValidInstance

evaluation_service.proto:1076

Spec for tool call valid instance.

Used in: ToolCallValidInput

message ToolCallValidMetricValue

evaluation_service.proto:1092

Tool call valid metric value for an instance.

Used in: ToolCallValidResults

message ToolCallValidResults

evaluation_service.proto:1085

Results for tool call valid metric.

Used in: EvaluateInstancesResponse

message ToolCallValidSpec

evaluation_service.proto:1073

Spec for tool call valid metric.

Used in: ToolCallValidInput

(message has no fields)

message ToolConfig

tool.proto:309

Tool config. This config is shared for all tools provided in the request.

Used in: CachedContent, GenerateContentRequest

message ToolNameMatchInput

evaluation_service.proto:1098

Input for tool name match metric.

Used in: EvaluateInstancesRequest

message ToolNameMatchInstance

evaluation_service.proto:1111

Spec for tool name match instance.

Used in: ToolNameMatchInput

message ToolNameMatchMetricValue

evaluation_service.proto:1127

Tool name match metric value for an instance.

Used in: ToolNameMatchResults

message ToolNameMatchResults

evaluation_service.proto:1120

Results for tool name match metric.

Used in: EvaluateInstancesResponse

message ToolNameMatchSpec

evaluation_service.proto:1108

Spec for tool name match metric.

Used in: ToolNameMatchInput

(message has no fields)

message ToolParameterKVMatchInput

evaluation_service.proto:1170

Input for tool parameter key value match metric.

Used in: EvaluateInstancesRequest

message ToolParameterKVMatchInstance

evaluation_service.proto:1187

Spec for tool parameter key value match instance.

Used in: ToolParameterKVMatchInput

message ToolParameterKVMatchMetricValue

evaluation_service.proto:1204

Tool parameter key value match metric value for an instance.

Used in: ToolParameterKVMatchResults

message ToolParameterKVMatchResults

evaluation_service.proto:1196

Results for tool parameter key value match metric.

Used in: EvaluateInstancesResponse

message ToolParameterKVMatchSpec

evaluation_service.proto:1181

Spec for tool parameter key value match metric.

Used in: ToolParameterKVMatchInput

message ToolParameterKeyMatchInput

evaluation_service.proto:1133

Input for tool parameter key match metric.

Used in: EvaluateInstancesRequest

message ToolParameterKeyMatchInstance

evaluation_service.proto:1147

Spec for tool parameter key match instance.

Used in: ToolParameterKeyMatchInput

message ToolParameterKeyMatchMetricValue

evaluation_service.proto:1164

Tool parameter key match metric value for an instance.

Used in: ToolParameterKeyMatchResults

message ToolParameterKeyMatchResults

evaluation_service.proto:1156

Results for tool parameter key match metric.

Used in: EvaluateInstancesResponse

message ToolParameterKeyMatchSpec

evaluation_service.proto:1144

Spec for tool parameter key match metric.

Used in: ToolParameterKeyMatchInput

(message has no fields)

message TrainingConfig

data_labeling_job.proto:206

CMLE training config. For every active learning labeling iteration, the system will train a machine learning model on CMLE. The trained model will be used by the data sampling algorithm to select DataItems.

Used in: ActiveLearningConfig

message TrainingPipeline

training_pipeline.proto:42

The TrainingPipeline orchestrates tasks associated with training a Model. It always executes the training task, and optionally may also export data from Vertex AI's Dataset which becomes the training input, [upload][google.cloud.aiplatform.v1.ModelService.UploadModel] the Model to Vertex AI, and evaluate the Model.

Used as response type in: PipelineService.CreateTrainingPipeline, PipelineService.GetTrainingPipeline

Used as field type in: CreateTrainingPipelineRequest, ListTrainingPipelinesResponse

message Trial

study.proto:82

A message representing a Trial. A Trial contains a unique set of Parameters that has been or will be evaluated, along with the objective metrics obtained by running the Trial.

Used as response type in: VizierService.AddTrialMeasurement, VizierService.CompleteTrial, VizierService.CreateTrial, VizierService.GetTrial, VizierService.StopTrial

Used as field type in: CreateTrialRequest, HyperparameterTuningJob, ListOptimalTrialsResponse, ListTrialsResponse, SuggestTrialsResponse

message Trial.Parameter

study.proto:89

A message representing a parameter to be tuned.

Used in: Trial, TrialContext

enum Trial.State

study.proto:104

Describes a Trial state.

Used in: Trial

message TrialContext

study.proto:199

Used in: SuggestTrialsRequest

message TunedModel

tuning_job.proto:147

The Model Registry Model and Online Prediction Endpoint associated with this [TuningJob][google.cloud.aiplatform.v1.TuningJob].

Used in: TuningJob

message TunedModelRef

tuning_job.proto:313

TunedModel Reference for legacy model migration.

Used in: RebaseTunedModelRequest

message TuningDataStats

tuning_job.proto:259

The tuning data statistic values for [TuningJob][google.cloud.aiplatform.v1.TuningJob].

Used in: TuningJob

message TuningJob

tuning_job.proto:36

Represents a TuningJob that runs with Google-owned models.

Used as response type in: GenAiTuningService.CreateTuningJob, GenAiTuningService.GetTuningJob

Used as field type in: CreateTuningJobRequest, ListTuningJobsResponse, RebaseTunedModelRequest

enum Type

openapi.proto:32

Type contains the list of OpenAPI data types as defined by https://swagger.io/docs/specification/data-models/data-types/

Used in: Schema

message UndeployIndexOperationMetadata

index_endpoint_service.proto:323

Runtime operation information for [IndexEndpointService.UndeployIndex][google.cloud.aiplatform.v1.IndexEndpointService.UndeployIndex].

message UndeployIndexResponse

index_endpoint_service.proto:319

Response message for [IndexEndpointService.UndeployIndex][google.cloud.aiplatform.v1.IndexEndpointService.UndeployIndex].

(message has no fields)

message UndeployModelOperationMetadata

endpoint_service.proto:409

Runtime operation information for [EndpointService.UndeployModel][google.cloud.aiplatform.v1.EndpointService.UndeployModel].

message UndeployModelResponse

endpoint_service.proto:405

Response message for [EndpointService.UndeployModel][google.cloud.aiplatform.v1.EndpointService.UndeployModel].

(message has no fields)

message UnmanagedContainerModel

unmanaged_container_model.proto:32

Contains model information necessary to perform batch prediction without requiring a full model import.

Used in: BatchPredictionJob

message UpdateDeploymentResourcePoolOperationMetadata

deployment_resource_pool_service.proto:211

Runtime operation information for UpdateDeploymentResourcePool method.

message UpdateEndpointOperationMetadata

endpoint_service.proto:308

Runtime operation information for [EndpointService.UpdateEndpointLongRunning][google.cloud.aiplatform.v1.EndpointService.UpdateEndpointLongRunning].

message UpdateExplanationDatasetOperationMetadata

model_service.proto:586

Runtime operation information for [ModelService.UpdateExplanationDataset][google.cloud.aiplatform.v1.ModelService.UpdateExplanationDataset].

message UpdateExplanationDatasetResponse

model_service.proto:719

Response message of [ModelService.UpdateExplanationDataset][google.cloud.aiplatform.v1.ModelService.UpdateExplanationDataset] operation.

(message has no fields)

message UpdateFeatureGroupOperationMetadata

feature_registry_service.proto:335

Details of operations that update a FeatureGroup.

message UpdateFeatureOnlineStoreOperationMetadata

feature_online_store_admin_service.proto:518

Details of operations that update a FeatureOnlineStore.

message UpdateFeatureOperationMetadata

feature_registry_service.proto:347

Details of operations that update a Feature.

message UpdateFeatureRequest

featurestore_service.proto:1244

Request message for [FeaturestoreService.UpdateFeature][google.cloud.aiplatform.v1.FeaturestoreService.UpdateFeature]. Request message for [FeatureRegistryService.UpdateFeature][google.cloud.aiplatform.v1.FeatureRegistryService.UpdateFeature].

Used as request type in: FeatureRegistryService.UpdateFeature, FeaturestoreService.UpdateFeature

message UpdateFeatureViewOperationMetadata

feature_online_store_admin_service.proto:530

Details of operations that update a FeatureView.

message UpdateFeaturestoreOperationMetadata

featurestore_service.proto:1293

Details of operations that update a Featurestore.

message UpdateIndexOperationMetadata

index_service.proto:217

Runtime operation information for [IndexService.UpdateIndex][google.cloud.aiplatform.v1.IndexService.UpdateIndex].

message UpdateModelDeploymentMonitoringJobOperationMetadata

job_service.proto:1369

Runtime operation information for [JobService.UpdateModelDeploymentMonitoringJob][google.cloud.aiplatform.v1.JobService.UpdateModelDeploymentMonitoringJob].

message UpdatePersistentResourceOperationMetadata

persistent_resource_service.proto:152

Details of operations that update a PersistentResource.

message UpdateRagCorpusOperationMetadata

vertex_rag_data_service.proto:400

Runtime operation information for [VertexRagDataService.UpdateRagCorpus][google.cloud.aiplatform.v1.VertexRagDataService.UpdateRagCorpus].

message UpdateReasoningEngineOperationMetadata

reasoning_engine_service.proto:154

Details of [ReasoningEngineService.UpdateReasoningEngine][google.cloud.aiplatform.v1.ReasoningEngineService.UpdateReasoningEngine] operation.

message UpdateSpecialistPoolOperationMetadata

specialist_pool_service.proto:212

Runtime operation metadata for [SpecialistPoolService.UpdateSpecialistPool][google.cloud.aiplatform.v1.SpecialistPoolService.UpdateSpecialistPool].

message UpdateTensorboardOperationMetadata

tensorboard_service.proto:1161

Details of operations that update a Tensorboard.

message UpgradeNotebookRuntimeOperationMetadata

notebook_service.proto:580

Metadata information for [NotebookService.UpgradeNotebookRuntime][google.cloud.aiplatform.v1.NotebookService.UpgradeNotebookRuntime].

message UpgradeNotebookRuntimeResponse

notebook_service.proto:591

Response message for [NotebookService.UpgradeNotebookRuntime][google.cloud.aiplatform.v1.NotebookService.UpgradeNotebookRuntime].

(message has no fields)

message UploadModelOperationMetadata

model_service.proto:305

Details of [ModelService.UploadModel][google.cloud.aiplatform.v1.ModelService.UploadModel] operation.

message UploadModelResponse

model_service.proto:313

Response message of [ModelService.UploadModel][google.cloud.aiplatform.v1.ModelService.UploadModel] operation.

message UploadRagFileConfig

vertex_rag_data.proto:380

Config for uploading RagFile.

Used in: UploadRagFileRequest

message UserActionReference

user_action_reference.proto:29

References an API call. It contains more information about the long-running operation and Jobs that are triggered by the API call.

Used in: Annotation

message Value

value.proto:28

Value is the value of the field.

Used in: PipelineJob.RuntimeConfig

message VertexAISearch

tool.proto:267

Retrieve from a Vertex AI Search datastore or engine for grounding. `datastore` and `engine` are mutually exclusive. See https://cloud.google.com/products/agent-builder.

Used in: Retrieval

message VertexAiSearchConfig

vertex_rag_data.proto:146

Config for the Vertex AI Search.

Used in: RagCorpus

message VertexRagStore

tool.proto:225

Retrieve from Vertex RAG Store for grounding.

Used in: AugmentPromptRequest, Retrieval

message VertexRagStore.RagResource

tool.proto:227

The definition of the Rag resource.

Used in: VertexRagStore

message VideoMetadata

content.proto:161

Metadata describes the input video content.

Used in: Part

message WorkerPoolSpec

custom_job.proto:288

Represents the spec of a worker pool in a job.

Used in: CustomJobSpec

message WriteFeatureValuesPayload

featurestore_online_service.proto:103

Contains Feature values to be written for a specific entity.

Used in: WriteFeatureValuesRequest

message WriteTensorboardRunDataRequest

tensorboard_service.proto:1081

Request message for [TensorboardService.WriteTensorboardRunData][google.cloud.aiplatform.v1.TensorboardService.WriteTensorboardRunData].

Used as request type in: TensorboardService.WriteTensorboardRunData

Used as field type in: WriteTensorboardExperimentDataRequest

message XraiAttribution

explanation.proto:325

An explanation method that redistributes Integrated Gradients attributions to segmented regions, taking advantage of the model's fully differentiable structure. Refer to this paper for more details: https://arxiv.org/abs/1906.02825 Supported only by image Models.

Used in: ExplanationParameters