package google.ai.generativelanguage.v1alpha

service CacheService

cache_service.proto:36

API for managing a cache of content (CachedContent resources) that can be used in GenerativeService requests. This lets generate-content requests benefit from preprocessing work done earlier, possibly lowering their computational cost. It is intended to be used with large contexts.

service DiscussService

discuss_service.proto:35

An API for using Generative Language Models (GLMs), also known as large language models (LLMs), in dialog applications. It provides models that are trained for multi-turn dialog.

service FileService

file_service.proto:32

An API for uploading and managing files.

service GenerativeService

generative_service.proto:35

API for using Large Models that generate multimodal content and have additional capabilities beyond text generation.

service ModelService

model_service.proto:35

Provides methods for getting metadata information about Generative Models.

service PermissionService

permission_service.proto:33

Provides methods for managing permissions to PaLM API resources.

service PredictionService

prediction_service.proto:31

A service for online predictions and explanations.

service RetrieverService

retriever_service.proto:33

An API for semantic search over a corpus of user uploaded content.

service TextService

text_service.proto:35

API for using Generative Language Models (GLMs) trained to generate text. Also known as Large Language Models (LLMs), these models generate text given an input prompt from the user.

message AttributionSourceId

generative_service.proto:635

Identifier for the source contributing to this attribution.

Used in: GroundingAttribution

message AttributionSourceId.GroundingPassageId

generative_service.proto:637

Identifier for a part within a `GroundingPassage`.

Used in: AttributionSourceId

message AttributionSourceId.SemanticRetrieverChunk

generative_service.proto:649

Identifier for a `Chunk` retrieved via Semantic Retriever specified in the `GenerateAnswerRequest` using `SemanticRetrieverConfig`.

Used in: AttributionSourceId

message BidiGenerateContentClientContent

generative_service.proto:1090

Incremental update of the current conversation delivered from the client. All of the content here is unconditionally appended to the conversation history and used as part of the prompt to the model to generate content. A message here will interrupt any current model generation.

Used in: BidiGenerateContentClientMessage

message BidiGenerateContentRealtimeInput

generative_service.proto:1126

User input that is sent in real time. This is different from [BidiGenerateContentClientContent][google.ai.generativelanguage.v1alpha.BidiGenerateContentClientContent] in a few ways:

- Can be sent continuously without interruption to model generation.
- If there is a need to mix data interleaved across the [BidiGenerateContentClientContent][google.ai.generativelanguage.v1alpha.BidiGenerateContentClientContent] and the [BidiGenerateContentRealtimeInput][google.ai.generativelanguage.v1alpha.BidiGenerateContentRealtimeInput], the server attempts to optimize for best response, but there are no guarantees.
- End of turn is not explicitly specified, but is rather derived from user activity (for example, end of speech).
- Even before the end of turn, the data is processed incrementally to optimize for a fast start of the response from the model.
- Is always direct user input that is sent in real time. Can be sent continuously without interruptions. The model automatically detects the beginning and the end of user speech and starts or terminates streaming the response accordingly. Data is processed incrementally as it arrives, minimizing latency.

Used in: BidiGenerateContentClientMessage
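
A realtime-input client message can be sketched as a JSON payload. This is a minimal sketch, assuming the `realtime_input` wrapper inside `BidiGenerateContentClientMessage`, a `media_chunks` field holding repeated `Blob` values, and raw PCM audio as the media type; verify the exact field names and supported MIME types against the .proto before relying on them.

```python
import base64
import json

# Hypothetical raw audio captured from a microphone (assumption: the model
# accepts 16-bit PCM chunks; check the supported MIME types for your model).
audio_chunk = b"\x00\x00" * 160

# Sketch of a BidiGenerateContentClientMessage carrying realtime input.
# Assumes the proto3 JSON mapping, where bytes fields are base64 strings.
client_message = {
    "realtime_input": {                      # BidiGenerateContentRealtimeInput
        "media_chunks": [                    # assumed repeated Blob field
            {
                "mime_type": "audio/pcm",    # assumption: raw PCM chunk
                "data": base64.b64encode(audio_chunk).decode("ascii"),
            }
        ]
    }
}

print(json.dumps(client_message, indent=2))
```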

message BidiGenerateContentServerContent

generative_service.proto:1176

Incremental server update generated by the model in response to client messages. Content is generated as quickly as possible, and not in real time. Clients may choose to buffer and play it out in real time.

Used in: BidiGenerateContentServerMessage

message BidiGenerateContentSetup

generative_service.proto:1049

Message to be sent in the first, and only the first, `BidiGenerateContentClientMessage`. Contains configuration that will apply for the duration of the streaming RPC. Clients should wait for a `BidiGenerateContentSetupComplete` message before sending any additional messages.

Used in: BidiGenerateContentClientMessage
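
A sketch of what the first client message on the stream might carry. The `setup`, `model`, `generation_config`, and `system_instruction` names follow the messages documented here, but treat the exact JSON shape as an assumption; the model name is a placeholder.

```python
import json

# Sketch of the first (and only the first) BidiGenerateContentClientMessage:
# it wraps a BidiGenerateContentSetup that configures the whole stream.
setup_message = {
    "setup": {
        "model": "models/your-live-model",        # placeholder model name
        "generation_config": {
            "temperature": 0.7,
            "response_modalities": ["AUDIO"],     # assumption: Modality enum names
        },
        "system_instruction": {
            "parts": [{"text": "You are a concise voice assistant."}]
        },
    }
}

# After sending this, the client should wait for BidiGenerateContentSetupComplete
# before sending BidiGenerateContentClientContent or realtime input.
print(json.dumps(setup_message, indent=2))
```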

message BidiGenerateContentSetupComplete

generative_service.proto:1169

Sent in response to a `BidiGenerateContentSetup` message from the client.

Used in: BidiGenerateContentServerMessage

(message has no fields)

message BidiGenerateContentToolCall

generative_service.proto:1199

Request for the client to execute the `function_calls` and return the responses with the matching `id`s.

Used in: BidiGenerateContentServerMessage

message BidiGenerateContentToolCallCancellation

generative_service.proto:1210

Notification for the client that a previously issued `ToolCallMessage` with the specified `id`s should not have been executed and should be cancelled. If there were side effects from those tool calls, clients may attempt to undo them. This message occurs only in cases where the client interrupts server turns.

Used in: BidiGenerateContentServerMessage

message BidiGenerateContentToolResponse

generative_service.proto:1139

Client generated response to a `ToolCall` received from the server. Individual `FunctionResponse` objects are matched to the respective `FunctionCall` objects by the `id` field. Note that in the unary and server-streaming GenerateContent APIs function calling happens by exchanging the `Content` parts, while in the bidi GenerateContent APIs function calling happens over this dedicated set of messages.

Used in: BidiGenerateContentClientMessage
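
Responses to a `BidiGenerateContentToolCall` travel back in their own client message rather than as `Content` parts. A minimal sketch, assuming the `tool_response` and `function_responses` field names and that `id` echoes the id of the corresponding `FunctionCall`; the function name and payload are hypothetical.

```python
import json

# Sketch: answer a BidiGenerateContentToolCall that asked for get_weather.
# The `id` must match the id of the FunctionCall issued by the server.
tool_response_message = {
    "tool_response": {                       # BidiGenerateContentToolResponse
        "function_responses": [
            {
                "id": "call-123",            # echoed from the server's FunctionCall
                "name": "get_weather",       # hypothetical function name
                "response": {"temperature_c": 21, "conditions": "sunny"},
            }
        ]
    }
}

print(json.dumps(tool_response_message, indent=2))
```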

message Blob

content.proto:109

Raw media bytes. Text should not be sent as raw bytes; use the 'text' field.

Used in: BidiGenerateContentRealtimeInput, Part

message CachedContent

cached_content.proto:34

Content that has been preprocessed and can be used in subsequent requests to GenerativeService. Cached content can only be used with the model it was created for.

Used as response type in: CacheService.CreateCachedContent, CacheService.GetCachedContent, CacheService.UpdateCachedContent

Used as field type in: CreateCachedContentRequest, ListCachedContentsResponse, UpdateCachedContentRequest
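
A sketch of the body a `CacheService.CreateCachedContent` call might carry. The `model`, `display_name`, `system_instruction`, `contents`, and `ttl` fields follow the cached-content documentation, but treat the exact shape as an assumption; the model name and document text are placeholders.

```python
import json

# Sketch of a CachedContent resource to create. Cached content can only be
# used later with the same model it was created for.
cached_content = {
    "model": "models/your-model",                 # placeholder; must match later requests
    "display_name": "contract-review-context",
    "system_instruction": {
        "parts": [{"text": "Answer questions about the attached contract."}]
    },
    "contents": [
        {
            "role": "user",
            "parts": [{"text": "<large contract text goes here>"}],
        }
    ],
    "ttl": "3600s",  # assumption: Duration encoded as a seconds string in JSON
}

print(json.dumps(cached_content, indent=2))
```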

message CachedContent.UsageMetadata

cached_content.proto:43

Metadata on the usage of the cached content.

Used in: CachedContent

message Candidate

generative_service.proto:512

A response candidate generated from the model.

Used in: GenerateAnswerResponse, GenerateContentResponse

enum Candidate.FinishReason

generative_service.proto:514

Defines the reason why the model stopped generating tokens.

Used in: Candidate

message Chunk

retriever.proto:198

A `Chunk` is a subpart of a `Document` that is treated as an independent unit for the purposes of vector representation and storage. A `Corpus` can have a maximum of 1 million `Chunk`s.

Used as response type in: RetrieverService.CreateChunk, RetrieverService.GetChunk, RetrieverService.UpdateChunk

Used as field type in: BatchCreateChunksResponse, BatchUpdateChunksResponse, CreateChunkRequest, ListChunksResponse, RelevantChunk, UpdateChunkRequest

enum Chunk.State

retriever.proto:207

States for the lifecycle of a `Chunk`.

Used in: Chunk

message ChunkData

retriever.proto:254

Extracted data that represents the `Chunk` content.

Used in: Chunk

message CitationMetadata

citation.proto:27

A collection of source attributions for a piece of content.

Used in: Candidate, Message, TextCompletion

message CitationSource

citation.proto:33

A citation to a source for a portion of a specific response.

Used in: CitationMetadata

message CodeExecution

content.proto:253

Tool that executes code generated by the model, and automatically returns the result to the model. See also `ExecutableCode` and `CodeExecutionResult` which are only generated when using this tool.

Used in: Tool

(message has no fields)

message CodeExecutionResult

content.proto:159

Result of executing the `ExecutableCode`. Only generated when using the `CodeExecution`, and always follows a `part` containing the `ExecutableCode`.

Used in: Part

enum CodeExecutionResult.Outcome

content.proto:161

Enumeration of possible outcomes of the code execution.

Used in: CodeExecutionResult

message Condition

retriever.proto:143

Filter condition applicable to a single key.

Used in: MetadataFilter

enum Condition.Operator

retriever.proto:145

Defines the valid operators that can be applied to a key-value pair.

Used in: Condition

message Content

content.proto:57

The base structured datatype containing multi-part content of a message. A `Content` includes a `role` field designating the producer of the `Content` and a `parts` field containing multi-part data that contains the content of the message turn.

Used in: BidiGenerateContentClientContent, BidiGenerateContentServerContent, BidiGenerateContentSetup, CachedContent, Candidate, CountTokensRequest, EmbedContentRequest, GenerateAnswerRequest, GenerateContentRequest, GroundingAttribution, GroundingPassage, SemanticRetrieverConfig
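
The role/parts split is easiest to see in a small conversation history. A minimal sketch; the `user`/`model` role names and text-only parts follow the Content and Part messages documented here, but treat the exact values as assumptions.

```python
import json

# Sketch of a short conversation expressed as a list of Content messages.
# Each Content has a role ("user" or "model") and one or more Parts.
contents = [
    {"role": "user", "parts": [{"text": "What is a protocol buffer?"}]},
    {"role": "model", "parts": [{"text": "A compact, schema-driven serialization format."}]},
    {"role": "user", "parts": [{"text": "How does it compare to JSON?"}]},
]

print(json.dumps(contents, indent=2))
```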

message ContentEmbedding

generative_service.proto:957

A list of floats representing an embedding.

Used in: BatchEmbedContentsResponse, EmbedContentResponse

message ContentFilter

safety.proto:75

Content filtering metadata associated with processing a single request. ContentFilter contains a reason and an optional supporting string. The reason may be unspecified.

Used in: GenerateMessageResponse, GenerateTextResponse

enum ContentFilter.BlockedReason

safety.proto:77

A list of reasons why content may have been blocked.

Used in: ContentFilter

message Corpus

retriever.proto:30

A `Corpus` is a collection of `Document`s. A project can create up to 5 corpora.

Used as response type in: RetrieverService.CreateCorpus, RetrieverService.GetCorpus, RetrieverService.UpdateCorpus

Used as field type in: CreateCorpusRequest, ListCorporaResponse, UpdateCorpusRequest

message CreateChunkRequest

retriever_service.proto:514

Request to create a `Chunk`.

Used as request type in: RetrieverService.CreateChunk

Used as field type in: BatchCreateChunksRequest

message CreateTunedModelMetadata

model_service.proto:236

Metadata about the state and progress of creating a tuned model, returned from the long-running operation.

message CustomMetadata

retriever.proto:111

User provided metadata stored as key-value pairs.

Used in: Chunk, Document

message Dataset

tuned_model.proto:219

Dataset for training or validation.

Used in: TuningTask

message DeleteChunkRequest

retriever_service.proto:600

Request to delete a `Chunk`.

Used as request type in: RetrieverService.DeleteChunk

Used as field type in: BatchDeleteChunksRequest

message Document

retriever.proto:66

A `Document` is a collection of `Chunk`s. A `Corpus` can have a maximum of 10,000 `Document`s.

Used as response type in: RetrieverService.CreateDocument, RetrieverService.GetDocument, RetrieverService.UpdateDocument

Used as field type in: CreateDocumentRequest, ListDocumentsResponse, UpdateDocumentRequest

message DynamicRetrievalConfig

content.proto:230

Describes the options to customize dynamic retrieval.

Used in: GoogleSearchRetrieval

enum DynamicRetrievalConfig.Mode

content.proto:232

The mode of the predictor to be used in dynamic retrieval.

Used in: DynamicRetrievalConfig

message EmbedContentRequest

generative_service.proto:919

Request containing the `Content` for the model to embed.

Used as request type in: GenerativeService.EmbedContent

Used as field type in: BatchEmbedContentsRequest
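
A sketch of an `EmbedContent` request body. The `model`, `content`, `task_type`, and `title` fields and the `RETRIEVAL_DOCUMENT` task name follow the embedding documentation; treat them as assumptions, and the model name as a placeholder.

```python
import json

# Sketch of an EmbedContentRequest: embed one document for later retrieval.
embed_request = {
    "model": "models/your-embedding-model",     # placeholder embedding model
    "content": {"parts": [{"text": "Protocol buffers are a serialization format."}]},
    "task_type": "RETRIEVAL_DOCUMENT",          # assumed TaskType enum name
    "title": "Intro to protobuf",               # typically only used for retrieval documents
}

print(json.dumps(embed_request, indent=2))
```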

message EmbedTextRequest

text_service.proto:217

Request to get a text embedding from the model.

Used as request type in: TextService.EmbedText

Used as field type in: BatchEmbedTextRequest

message Embedding

text_service.proto:267

A list of floats representing the embedding.

Used in: BatchEmbedTextResponse, EmbedTextResponse

message Example

discuss_service.proto:208

An input/output example used to instruct the Model. It demonstrates how the model should respond or format its response.

Used in: MessagePrompt

message ExecutableCode

content.proto:138

Code generated by the model that is meant to be executed, and the result returned to the model. Only generated when using the `CodeExecution` tool, in which the code will be automatically executed, and a corresponding `CodeExecutionResult` will also be generated.

Used in: Part

enum ExecutableCode.Language

content.proto:140

Supported programming languages for the generated code.

Used in: ExecutableCode

message File

file.proto:32

A file uploaded to the API.

Used as response type in: FileService.GetFile

Used as field type in: CreateFileRequest, CreateFileResponse, ListFilesResponse

enum File.State

file.proto:41

States for the lifecycle of a File.

Used in: File

message FileData

content.proto:124

URI based data.

Used in: Part

message FunctionCall

content.proto:329

A predicted `FunctionCall` returned from the model that contains a string representing the `FunctionDeclaration.name` with the arguments and their values.

Used in: BidiGenerateContentToolCall, Part

message FunctionCallingConfig

content.proto:264

Configuration for specifying function calling behavior.

Used in: ToolConfig

enum FunctionCallingConfig.Mode

content.proto:267

Defines the execution behavior for function calling by defining the execution mode.

Used in: FunctionCallingConfig

message FunctionDeclaration

content.proto:305

Structured representation of a function declaration as defined by the [OpenAPI 3.0.3 specification](https://spec.openapis.org/oas/v3.0.3). Included in this declaration are the function name and parameters. This FunctionDeclaration is a representation of a block of code that can be used as a `Tool` by the model and executed by the client.

Used in: Tool
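
A declaration is essentially a name plus an OpenAPI-style `Schema` for the parameters. A minimal sketch of one declaration wrapped in a `Tool`; the `get_weather` function and its parameters are hypothetical, and the uppercase type names assume the Type enum's JSON representation.

```python
import json

# Sketch of a Tool carrying one FunctionDeclaration. The parameters field is
# a Schema object (a subset of OpenAPI 3.0.3), with types given as enum names.
tool = {
    "function_declarations": [
        {
            "name": "get_weather",                      # hypothetical function
            "description": "Look up current weather for a city.",
            "parameters": {
                "type": "OBJECT",
                "properties": {
                    "city": {"type": "STRING", "description": "City name"},
                    "unit": {"type": "STRING", "enum": ["CELSIUS", "FAHRENHEIT"]},
                },
                "required": ["city"],
            },
        }
    ]
}

print(json.dumps(tool, indent=2))
```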

message FunctionResponse

content.proto:349

The result output from a `FunctionCall`. It contains a string representing the `FunctionDeclaration.name` and a structured JSON object containing any output from the function, and it is used as context for the model. This should contain the result of a `FunctionCall` made based on model prediction.

Used in: BidiGenerateContentToolResponse, Part

enum GenerateAnswerRequest.AnswerStyle

generative_service.proto:774

Style for grounded answers.

Used in: GenerateAnswerRequest

message GenerateAnswerResponse.InputFeedback

generative_service.proto:856

Feedback related to the input data used to answer the question, as opposed to the model-generated response to the question.

Used in: GenerateAnswerResponse

enum GenerateAnswerResponse.InputFeedback.BlockReason

generative_service.proto:858

Specifies the reason why the input was blocked.

Used in: InputFeedback

message GenerateContentRequest

generative_service.proto:153

Request to generate a completion from the model.

Used as request type in: GenerativeService.GenerateContent, GenerativeService.StreamGenerateContent

Used as field type in: CountTokensRequest
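
Putting the pieces together, a complete request body might look like the following sketch. Field names follow the messages documented here; the model name is a placeholder and the exact JSON shape is an assumption, not a verified contract.

```python
import json

# Sketch of a GenerateContentRequest combining contents, generation config,
# and safety settings. Candidates come back in GenerateContentResponse.candidates.
request = {
    "model": "models/your-model",               # placeholder
    "contents": [
        {"role": "user", "parts": [{"text": "Summarize protocol buffers in one line."}]}
    ],
    "generation_config": {
        "temperature": 0.2,
        "max_output_tokens": 64,
    },
    "safety_settings": [
        {"category": "HARM_CATEGORY_HARASSMENT", "threshold": "BLOCK_ONLY_HIGH"}
    ],
}

print(json.dumps(request, indent=2))
```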

message GenerateContentResponse

generative_service.proto:444

Response from the model supporting multiple candidate responses. Safety ratings and content filtering are reported both for the prompt in `GenerateContentResponse.prompt_feedback` and for each candidate in `finish_reason` and in `safety_ratings`. The API:

- Returns either all requested candidates or none of them.
- Returns no candidates at all only if there was something wrong with the prompt (check `prompt_feedback`).
- Reports feedback on each candidate in `finish_reason` and `safety_ratings`.

Used as response type in: GenerativeService.GenerateContent, GenerativeService.StreamGenerateContent

message GenerateContentResponse.PromptFeedback

generative_service.proto:447

A set of feedback metadata for the prompt specified in `GenerateContentRequest.content`.

Used in: GenerateContentResponse

enum GenerateContentResponse.PromptFeedback.BlockReason

generative_service.proto:449

Specifies the reason why the prompt was blocked.

Used in: PromptFeedback

message GenerateContentResponse.UsageMetadata

generative_service.proto:481

Metadata on the generation request's token usage.

Used in: GenerateContentResponse

message GenerationConfig

generative_service.proto:254

Configuration options for model generation and outputs. Not all parameters are configurable for every model.

Used in: BidiGenerateContentSetup, GenerateContentRequest
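
A sketch of a standalone GenerationConfig. Not every model accepts every field, so treat this as a menu of commonly documented options rather than a guaranteed set; the values are illustrative.

```python
import json

# Sketch of a GenerationConfig with commonly documented knobs.
generation_config = {
    "candidate_count": 1,
    "temperature": 0.4,
    "top_p": 0.95,
    "top_k": 40,
    "max_output_tokens": 256,
    "stop_sequences": ["\n\n"],
    "response_mime_type": "application/json",   # assumption: request JSON-formatted output
}

print(json.dumps(generation_config, indent=2))
```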

enum GenerationConfig.Modality

generative_service.proto:256

Supported modalities of the response.

Used in: GenerationConfig

message GoogleSearchRetrieval

content.proto:224

Tool to retrieve public web data for grounding, powered by Google.

Used in: Tool

message GroundingAttribution

generative_service.proto:670

Attribution for a source that contributed to an answer.

Used in: Candidate

message GroundingChunk

generative_service.proto:720

Grounding chunk.

Used in: GroundingMetadata

message GroundingChunk.Web

generative_service.proto:722

Chunk from the web.

Used in: GroundingChunk

message GroundingMetadata

generative_service.proto:690

Metadata returned to client when grounding is enabled.

Used in: BidiGenerateContentServerContent, Candidate

message GroundingPassage

content.proto:407

Passage included inline with a grounding configuration.

Used in: GroundingPassages

message GroundingPassages

content.proto:417

A repeated list of passages.

Used in: GenerateAnswerRequest

message GroundingSupport

generative_service.proto:755

Grounding support.

Used in: GroundingMetadata

enum HarmCategory

safety.proto:30

The category of a rating. These categories cover various kinds of harms that developers may wish to adjust.

Used in: SafetyRating, SafetySetting

message Hyperparameters

tuned_model.proto:186

Hyperparameters controlling the tuning process. Read more at https://ai.google.dev/docs/model_tuning_guidance

Used in: TuningTask

message LogprobsResult

generative_service.proto:607

Logprobs Result

Used in: Candidate

message LogprobsResult.Candidate

generative_service.proto:609

Candidate for the logprobs token and score.

Used in: LogprobsResult, TopCandidates

message LogprobsResult.TopCandidates

generative_service.proto:621

Candidates with top log probabilities at each decoding step.

Used in: LogprobsResult

message Message

discuss_service.proto:137

The base unit of structured text. A `Message` includes an `author` and the `content` of the `Message`. The `author` is used to tag messages when they are fed to the model as text.

Used in: Example, GenerateMessageResponse, MessagePrompt

message MessagePrompt

discuss_service.proto:166

All of the structured input text passed to the model as a prompt. A `MessagePrompt` contains a structured set of fields that provide context for the conversation, examples of user input/model output message pairs that prime the model to respond in different ways, and the conversation history or list of messages representing the alternating turns of the conversation between the user and the model.

Used in: CountMessageTokensRequest, GenerateMessageRequest

message MetadataFilter

retriever.proto:133

User provided filter to limit retrieval based on `Chunk` or `Document` level metadata values.

Example (genre = drama OR genre = action):

key = "document.custom_metadata.genre"
conditions = [{string_value = "drama", operation = EQUAL}, {string_value = "action", operation = EQUAL}]

Used in: QueryCorpusRequest, QueryDocumentRequest, SemanticRetrieverConfig
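
The inline example above (genre = drama OR genre = action) can be written out as a structured filter. A sketch assuming the `key`, `conditions`, `string_value`, and `operation` field names documented for `MetadataFilter` and `Condition`.

```python
import json

# Sketch of the "genre = drama OR genre = action" filter from the description.
# The conditions inside one MetadataFilter are alternatives (OR semantics).
metadata_filter = {
    "key": "document.custom_metadata.genre",
    "conditions": [
        {"string_value": "drama", "operation": "EQUAL"},
        {"string_value": "action", "operation": "EQUAL"},
    ],
}

print(json.dumps(metadata_filter, indent=2))
```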

message Model

model.proto:28

Information about a Generative Language Model.

Used as response type in: ModelService.GetModel

Used as field type in: ListModelsResponse

message Part

content.proto:76

A datatype containing media that is part of a multi-part `Content` message. A `Part` consists of data which has an associated datatype. A `Part` can only contain one of the accepted types in `Part.data`. A `Part` must have a fixed IANA MIME type identifying the type and subtype of the media if the `inline_data` field is filled with raw bytes.

Used in: Content

message Permission

permission.proto:41

A Permission resource grants a user, group, or the rest of the world access to a PaLM API resource (e.g. a tuned model or corpus). A role is a collection of permitted operations that allows users to perform specific actions on PaLM API resources. To make them available to users, groups, or service accounts, you assign roles. When you assign a role, you grant the permissions that the role contains. There are three concentric roles; each role is a superset of the previous role's permitted operations:

- reader can use the resource (e.g. tuned model, corpus) for inference
- writer has reader's permissions and additionally can edit and share
- owner has writer's permissions and additionally can delete

Used as response type in: PermissionService.CreatePermission, PermissionService.GetPermission, PermissionService.UpdatePermission

Used as field type in: CreatePermissionRequest, ListPermissionsResponse, UpdatePermissionRequest
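
A sketch of a Permission resource granting reader access to a single user. The grantee type and role names follow the enums listed below; the email address is a placeholder, and the exact JSON shape is an assumption.

```python
import json

# Sketch of a Permission: give one user read (inference) access to a resource
# such as a tuned model or corpus.
permission = {
    "grantee_type": "USER",                    # USER, GROUP, or EVERYONE (assumed enum names)
    "email_address": "reviewer@example.com",   # placeholder grantee
    "role": "READER",                          # READER < WRITER < OWNER (concentric roles)
}

print(json.dumps(permission, indent=2))
```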

enum Permission.GranteeType

permission.proto:51

Defines types of the grantee of this permission.

Used in: Permission

enum Permission.Role

permission.proto:67

Defines the role granted by this permission.

Used in: Permission

message PrebuiltVoiceConfig

generative_service.proto:232

The configuration for the prebuilt speaker to use.

Used in: VoiceConfig

message RelevantChunk

retriever_service.proto:346

The information for a chunk relevant to a query.

Used in: QueryCorpusResponse, QueryDocumentResponse

message RetrievalMetadata

generative_service.proto:679

Metadata related to retrieval in the grounding flow.

Used in: GroundingMetadata

message SafetyFeedback

safety.proto:102

Safety feedback for an entire request. This field is populated if content in the input and/or response is blocked due to safety settings. SafetyFeedback may not exist for every HarmCategory. Each SafetyFeedback will return the safety settings used by the request as well as the lowest HarmProbability that should be allowed in order to return a result.

Used in: GenerateTextResponse

message SafetyRating

safety.proto:117

Safety rating for a piece of content. The safety rating contains the category of harm and the harm probability level in that category for a piece of content. Content is classified for safety across a number of harm categories and the probability of the harm classification is included here.

Used in: Candidate, GenerateAnswerResponse.InputFeedback, GenerateContentResponse.PromptFeedback, SafetyFeedback, TextCompletion

enum SafetyRating.HarmProbability

safety.proto:122

The probability that a piece of content is harmful. The classification system gives the probability of the content being unsafe. This does not indicate the severity of harm for a piece of content.

Used in: SafetyRating

message SafetySetting

safety.proto:153

Safety setting, affecting the safety-blocking behavior. Passing a safety setting for a category changes the allowed probability that content is blocked.

Used in: GenerateAnswerRequest, GenerateContentRequest, GenerateTextRequest, SafetyFeedback

enum SafetySetting.HarmBlockThreshold

safety.proto:155

Block at and beyond a specified harm probability.

Used in: SafetySetting
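
A sketch of a per-request safety configuration. The category and threshold names are the commonly documented enum values and should be treated as assumptions for this v1alpha surface.

```python
import json

# Sketch: relax blocking for one category and tighten it for another.
# Each SafetySetting pairs a HarmCategory with a HarmBlockThreshold.
safety_settings = [
    {"category": "HARM_CATEGORY_HARASSMENT", "threshold": "BLOCK_ONLY_HIGH"},
    {"category": "HARM_CATEGORY_DANGEROUS_CONTENT", "threshold": "BLOCK_LOW_AND_ABOVE"},
]

print(json.dumps(safety_settings, indent=2))
```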

message Schema

content.proto:367

The `Schema` object allows the definition of input and output data types. These types can be objects, but also primitives and arrays. Represents a select subset of an [OpenAPI 3.0 schema object](https://spec.openapis.org/oas/v3.0.3#schema).

Used in: FunctionDeclaration, GenerationConfig

message SearchEntryPoint

generative_service.proto:709

Google search entry point.

Used in: GroundingMetadata

message Segment

generative_service.proto:738

Segment of the content.

Used in: GroundingSupport

message SemanticRetrieverConfig

generative_service.proto:412

Configuration for retrieving grounding content from a `Corpus` or `Document` created using the Semantic Retriever API.

Used in: GenerateAnswerRequest

message SpeechConfig

generative_service.proto:247

The speech generation config.

Used in: GenerationConfig

message StringList

retriever.proto:105

User provided string values assigned to a single metadata key.

Used in: CustomMetadata

enum TaskType

generative_service.proto:126

Type of task for which the embedding will be used.

Used in: EmbedContentRequest

message TextCompletion

text_service.proto:198

Output text returned from a model.

Used in: GenerateTextResponse

message TextPrompt

text_service.proto:192

Text given to the model as a prompt. The model will use this TextPrompt to generate a text completion.

Used in: CountTextTokensRequest, GenerateTextRequest

message Tool

content.proto:190

Tool details that the model may use to generate a response. A `Tool` is a piece of code that enables the system to interact with external systems to perform an action, or set of actions, outside the knowledge and scope of the model.

Used in: BidiGenerateContentSetup, CachedContent, GenerateContentRequest

message Tool.GoogleSearch

content.proto:193

GoogleSearch tool type. A tool to support Google Search in the model. Powered by Google.

Used in: Tool

(message has no fields)

message ToolConfig

content.proto:257

The Tool configuration containing parameters for specifying `Tool` use in the request.

Used in: CachedContent, GenerateContentRequest

message TunedModel

tuned_model.proto:29

A fine-tuned model created using ModelService.CreateTunedModel.

Used as response type in: ModelService.GetTunedModel, ModelService.UpdateTunedModel

Used as field type in: CreateTunedModelRequest, ListTunedModelsResponse, UpdateTunedModelRequest

enum TunedModel.State

tuned_model.proto:38

The state of the tuned model.

Used in: TunedModel

message TunedModelSource

tuned_model.proto:137

Tuned model as a source for training a new model.

Used in: TunedModel

message TuningContent

tuned_model.proto:259

The structured datatype containing multi-part content of an example message. This is a subset of the Content proto used during model inference with limited type support. A `Content` includes a `role` field designating the producer of the `Content` and a `parts` field containing multi-part data that contains the content of the message turn.

Used in: TuningMultiturnExample

message TuningExample

tuned_model.proto:283

A single example for tuning.

Used in: TuningExamples

message TuningExamples

tuned_model.proto:228

A set of tuning examples. Can be training or validation data.

Used in: Dataset

message TuningMultiturnExample

tuned_model.proto:272

A tuning example with multiturn input.

Used in: TuningExamples

message TuningPart

tuned_model.proto:245

A datatype containing data that is part of a multi-part `TuningContent` message. This is a subset of the Part used for model inference, with limited type support. A `Part` consists of data which has an associated datatype. A `Part` can only contain one of the accepted types in `Part.data`.

Used in: TuningContent

message TuningSnapshot

tuned_model.proto:295

Record for a single tuning step.

Used in: CreateTunedModelMetadata, TuningTask

message TuningTask

tuned_model.proto:159

Tuning tasks that create tuned models.

Used in: TunedModel

enum Type

content.proto:29

Type contains the list of OpenAPI data types as defined by https://spec.openapis.org/oas/v3.0.3#data-types

Used in: Schema

message UpdateChunkRequest

retriever_service.proto:565

Request to update a `Chunk`.

Used as request type in: RetrieverService.UpdateChunk

Used as field type in: BatchUpdateChunksRequest

message VideoMetadata

file.proto:110

Metadata for a video `File`.

Used in: File

message VoiceConfig

generative_service.proto:238

The configuration for the voice to use.

Used in: SpeechConfig