package google.ai.generativelanguage.v1beta3

service DiscussService

discuss_service.proto:35

An API for using Generative Language Models (GLMs), also known as large language models (LLMs), in dialog applications. It provides models that are trained for multi-turn dialog.
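
For orientation, a minimal sketch of calling DiscussService.GenerateMessage through the generated Python client (assuming the google.ai.generativelanguage_v1beta3 package); the model name and prompt content are illustrative assumptions, not part of this listing.

```python
# Hypothetical sketch: DiscussService.GenerateMessage via the generated client.
from google.ai import generativelanguage_v1beta3 as glm

client = glm.DiscussServiceClient()  # uses Application Default Credentials

response = client.generate_message(
    request=glm.GenerateMessageRequest(
        model="models/chat-bison-001",  # assumed model name
        prompt=glm.MessagePrompt(
            messages=[glm.Message(author="user", content="Hello, how are you?")]
        ),
        temperature=0.5,
        candidate_count=1,
    )
)

for candidate in response.candidates:  # repeated Message
    print(candidate.author, candidate.content)
```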

service ModelService

model_service.proto:35

Provides methods for getting metadata information about Generative Models.
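
As a sketch (same assumed Python client as above), listing models and fetching one model's metadata might look like this; the printed fields follow the Model message described below.

```python
# Hypothetical sketch: ModelService.ListModels / GetModel via the generated client.
from google.ai import generativelanguage_v1beta3 as glm

client = glm.ModelServiceClient()

# ListModels is paginated; the client returns an iterable pager of Model.
for model in client.list_models(request=glm.ListModelsRequest(page_size=50)):
    print(model.name)

# Fetch metadata for a single model (the model name is an illustrative assumption).
model = client.get_model(request=glm.GetModelRequest(name="models/text-bison-001"))
print(model.display_name, model.input_token_limit, model.output_token_limit)
```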

service PermissionService

permission_service.proto:33

Provides methods for managing permissions to PaLM API resources.

service TextService

text_service.proto:35

API for using Generative Language Models (GLMs) trained to generate text. Also known as large language models (LLMs), these models generate text given an input prompt from the user.
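
A minimal sketch of TextService.GenerateText with the same assumed Python client; the model name and prompt text are illustrative assumptions.

```python
# Hypothetical sketch: TextService.GenerateText via the generated client.
from google.ai import generativelanguage_v1beta3 as glm

client = glm.TextServiceClient()

response = client.generate_text(
    request=glm.GenerateTextRequest(
        model="models/text-bison-001",  # assumed model name
        prompt=glm.TextPrompt(text="Write a haiku about protocol buffers."),
        temperature=0.7,
        candidate_count=1,
        max_output_tokens=256,
    )
)

for completion in response.candidates:  # repeated TextCompletion
    print(completion.output)
```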

message CitationMetadata

citation.proto:27

A collection of source attributions for a piece of content.

Used in: Message, TextCompletion

message CitationSource

citation.proto:33

A citation to a source for a portion of a specific response.

Used in: CitationMetadata

message ContentFilter

safety.proto:58

Content filtering metadata associated with processing a single request. ContentFilter contains a reason and an optional supporting string. The reason may be unspecified.

Used in: GenerateMessageResponse, GenerateTextResponse

enum ContentFilter.BlockedReason

safety.proto:60

A list of reasons why content may have been blocked.

Used in: ContentFilter

message CreateTunedModelMetadata

model_service.proto:215

Metadata about the state and progress of creating a tuned model, returned from the long-running operation.

message Dataset

tuned_model.proto:195

Dataset for training or validation.

Used in: TuningTask

message Embedding

text_service.proto:257

A list of floats representing the embedding.

Used in: BatchEmbedTextResponse, EmbedTextResponse
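
A sketch of obtaining an Embedding via TextService.EmbedText with the assumed Python client; the model name and the `value` field name are assumptions based on this listing.

```python
# Hypothetical sketch: TextService.EmbedText returning an Embedding.
from google.ai import generativelanguage_v1beta3 as glm

client = glm.TextServiceClient()

response = client.embed_text(
    request=glm.EmbedTextRequest(
        model="models/embedding-gecko-001",  # assumed embedding model name
        text="Hello world",
    )
)

vector = list(response.embedding.value)  # assumed: repeated float on Embedding
print(len(vector), vector[:5])
```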

message Example

discuss_service.proto:208

An input/output example used to instruct the Model. It demonstrates how the model should respond or format its response.

Used in: MessagePrompt

enum HarmCategory

safety.proto:30

The category of a rating. These categories cover various kinds of harms that developers may wish to adjust.

Used in: SafetyRating, SafetySetting

message Hyperparameters

tuned_model.proto:178

Hyperparameters controlling the tuning process.

Used in: TuningTask

message Message

discuss_service.proto:137

The base unit of structured text. A `Message` includes an `author` and the `content` of the `Message`. The `author` is used to tag messages when they are fed to the model as text.

Used in: Example, GenerateMessageResponse, MessagePrompt

message MessagePrompt

discuss_service.proto:166

All of the structured input text passed to the model as a prompt. A `MessagePrompt` contains a structured set of fields that provide context for the conversation, examples of user input/model output message pairs that prime the model to respond in different ways, and the conversation history or list of messages representing the alternating turns of the conversation between the user and the model.

Used in: CountMessageTokensRequest, GenerateMessageRequest
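
A sketch of how the pieces described above (free-form context, Example input/output pairs, and the Message history) assemble into a MessagePrompt; field names and values are assumptions based on the descriptions in this listing.

```python
# Hypothetical sketch: assembling a MessagePrompt from context, Examples, and
# a Message history. Field values are illustrative.
from google.ai import generativelanguage_v1beta3 as glm

prompt = glm.MessagePrompt(
    # Free-form context that grounds the conversation.
    context="You are a helpful assistant that answers questions about protobuf.",
    # Example input/output pairs that prime the model's style of response.
    examples=[
        glm.Example(
            input=glm.Message(content="What is a message?"),
            output=glm.Message(content="A message is a structured record of typed fields."),
        )
    ],
    # Alternating user/model turns; the last message is the one to respond to.
    messages=[glm.Message(author="user", content="What is an enum?")],
)
```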

message Model

model.proto:28

Information about a Generative Language Model.

Used as response type in: ModelService.GetModel

Used as field type in: ListModelsResponse

message Permission

permission.proto:40

A Permission resource grants a user, group, or the rest of the world access to a PaLM API resource (e.g. a tuned model or file). A role is a collection of permitted operations that allows users to perform specific actions on PaLM API resources. To make them available to users, groups, or service accounts, you assign roles. When you assign a role, you grant the permissions that the role contains. There are three concentric roles, each a superset of the previous role's permitted operations:

- reader can use the resource (e.g. tuned model) for inference
- writer has reader's permissions and additionally can edit and share
- owner has writer's permissions and additionally can delete

Used as response type in: PermissionService.CreatePermission, PermissionService.GetPermission, PermissionService.UpdatePermission

Used as field type in: CreatePermissionRequest, ListPermissionsResponse, UpdatePermissionRequest
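
A sketch of granting a user the reader (inference-only) role on a tuned model via PermissionService.CreatePermission; resource names, field names, and enum values are assumptions based on this listing.

```python
# Hypothetical sketch: grant a user read (inference) access to a tuned model.
from google.ai import generativelanguage_v1beta3 as glm

client = glm.PermissionServiceClient()

permission = client.create_permission(
    request=glm.CreatePermissionRequest(
        parent="tunedModels/my-tuned-model",  # assumed resource name
        permission=glm.Permission(
            grantee_type=glm.Permission.GranteeType.USER,
            email_address="colleague@example.com",
            role=glm.Permission.Role.READER,
        ),
    )
)
print(permission.name)
```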

enum Permission.GranteeType

permission.proto:49

Defines types of the grantee of this permission.

Used in: Permission

enum Permission.Role

permission.proto:65

Defines the role granted by this permission.

Used in: Permission

message SafetyFeedback

safety.proto:85

Safety feedback for an entire request. This field is populated if content in the input and/or response is blocked due to safety settings. SafetyFeedback may not exist for every HarmCategory. Each SafetyFeedback will return the safety settings used by the request as well as the lowest HarmProbability that should be allowed in order to return a result.

Used in: GenerateTextResponse

message SafetyRating

safety.proto:100

Safety rating for a piece of content. The safety rating contains the category of harm and the harm probability level in that category for a piece of content. Content is classified for safety across a number of harm categories and the probability of the harm classification is included here.

Used in: SafetyFeedback, TextCompletion

enum SafetyRating.HarmProbability

safety.proto:105

The probability that a piece of content is harmful. The classification system gives the probability of the content being unsafe. This does not indicate the severity of harm for a piece of content.

Used in: SafetyRating

message SafetySetting

safety.proto:133

Safety setting, affecting the safety-blocking behavior. Passing a safety setting for a category changes the allowed probability that content is blocked.

Used in: GenerateTextRequest, SafetyFeedback

enum SafetySetting.HarmBlockThreshold

safety.proto:135

Block at and beyond a specified harm probability.

Used in: SafetySetting
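
A sketch of attaching per-category SafetySettings to a GenerateTextRequest; the HarmCategory and HarmBlockThreshold value names are assumptions based on this listing.

```python
# Hypothetical sketch: per-category SafetySettings on a GenerateTextRequest.
from google.ai import generativelanguage_v1beta3 as glm

request = glm.GenerateTextRequest(
    model="models/text-bison-001",  # assumed model name
    prompt=glm.TextPrompt(text="Tell me about lab safety."),
    safety_settings=[
        glm.SafetySetting(
            category=glm.HarmCategory.HARM_CATEGORY_DANGEROUS,
            # Block content rated MEDIUM or HIGH probability of harm.
            threshold=glm.SafetySetting.HarmBlockThreshold.BLOCK_MEDIUM_AND_ABOVE,
        ),
        glm.SafetySetting(
            category=glm.HarmCategory.HARM_CATEGORY_TOXICITY,
            threshold=glm.SafetySetting.HarmBlockThreshold.BLOCK_ONLY_HIGH,
        ),
    ],
)
```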

message TextCompletion

text_service.proto:193

Output text returned from a model.

Used in: GenerateTextResponse
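
A sketch of reading a TextCompletion candidate, including its per-candidate SafetyRatings and CitationMetadata; the field names are assumptions based on this listing.

```python
# Hypothetical sketch: inspecting a TextCompletion from a GenerateTextResponse.
from google.ai import generativelanguage_v1beta3 as glm

def show_candidate(completion: glm.TextCompletion) -> None:
    print(completion.output)
    for rating in completion.safety_ratings:  # assumed: repeated SafetyRating
        print(f"  {rating.category.name}: {rating.probability.name}")
    for source in completion.citation_metadata.citation_sources:
        print(f"  cited: {source.uri} [{source.start_index}:{source.end_index}]")
```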

message TextPrompt

text_service.proto:187

Text given to the model as a prompt. The model will use this TextPrompt to generate a text completion.

Used in: CountTextTokensRequest, GenerateTextRequest

message TunedModel

tuned_model.proto:29

A fine-tuned model created using ModelService.CreateTunedModel.

Used as response type in: ModelService.GetTunedModel, ModelService.UpdateTunedModel

Used as field type in: CreateTunedModelRequest, ListTunedModelsResponse, UpdateTunedModelRequest

enum TunedModel.State

tuned_model.proto:38

The state of the tuned model.

Used in: TunedModel

message TunedModelSource

tuned_model.proto:130

Tuned model as a source for training a new model.

Used in: TunedModel

message TuningExample

tuned_model.proto:211

A single example for tuning.

Used in: TuningExamples

message TuningExamples

tuned_model.proto:204

A set of tuning examples. Can be training or validation data.

Used in: Dataset

message TuningSnapshot

tuned_model.proto:223

Record for a single tuning step.

Used in: CreateTunedModelMetadata, TuningTask

message TuningTask

tuned_model.proto:152

Tuning tasks that create tuned models.

Used in: TunedModel
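
Tying the tuning messages together, a sketch of ModelService.CreateTunedModel: a TuningTask wraps the training Dataset (TuningExamples of TuningExample) and Hyperparameters, and the call returns a long-running operation whose metadata is CreateTunedModelMetadata. Field names, the base model name, and the example data are assumptions based on this listing.

```python
# Hypothetical sketch: creating a TunedModel with a small training Dataset.
from google.ai import generativelanguage_v1beta3 as glm

client = glm.ModelServiceClient()

tuned_model = glm.TunedModel(
    display_name="my-sentiment-model",
    base_model="models/text-bison-001",  # assumed base model name
    tuning_task=glm.TuningTask(
        training_data=glm.Dataset(
            examples=glm.TuningExamples(
                examples=[
                    glm.TuningExample(text_input="I loved it!", output="positive"),
                    glm.TuningExample(text_input="Never again.", output="negative"),
                ]
            )
        ),
        hyperparameters=glm.Hyperparameters(
            epoch_count=5, batch_size=4, learning_rate=0.001
        ),
    ),
)

operation = client.create_tuned_model(
    request=glm.CreateTunedModelRequest(tuned_model=tuned_model)
)
result = operation.result()  # blocks until the long-running operation completes
print(result.name, result.state)
```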