Agents are best described as Natural Language Understanding (NLU) modules that transform user requests into actionable data. You can include agents in your app, product, or service to determine user intent and respond to the user in a natural way. After you create an agent, you can add [Intents][google.cloud.dialogflow.v2beta1.Intents], [Contexts][google.cloud.dialogflow.v2beta1.Contexts], [Entity Types][google.cloud.dialogflow.v2beta1.EntityTypes], [Webhooks][google.cloud.dialogflow.v2beta1.WebhookRequest], and so on to manage the flow of a conversation and match user input to predefined intents and actions. You can create an agent using both Dialogflow Standard Edition and Dialogflow Enterprise Edition. For details, see [Dialogflow Editions](https://cloud.google.com/dialogflow/docs/editions). You can save your agent for backup or versioning by exporting the agent by using the [ExportAgent][google.cloud.dialogflow.v2beta1.Agents.ExportAgent] method. You can import a saved agent by using the [ImportAgent][google.cloud.dialogflow.v2beta1.Agents.ImportAgent] method. Dialogflow provides several [prebuilt agents](https://cloud.google.com/dialogflow/docs/agents-prebuilt) for common conversation scenarios such as determining a date and time, converting currency, and so on. For more information about agents, see the [Dialogflow documentation](https://cloud.google.com/dialogflow/docs/agents-overview).
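The methods above all operate on a project-scoped parent resource. As a small illustrative sketch (the helper name is ours, not part of the API; only the path format comes from the reference above), the backup/versioning cycle reduces to formatting that parent name and passing it to `ExportAgent`/`ImportAgent`:

```python
# Hypothetical helper: formats the parent resource name that ExportAgent,
# ImportAgent, TrainAgent, and similar agent methods operate on.

def agent_parent(project_id: str) -> str:
    """Format the agent's parent resource name: projects/<Project ID>."""
    return f"projects/{project_id}"

# A backup/versioning cycle then amounts to (pseudocode, not actual calls):
#   ExportAgent(parent=agent_parent("my-project"))   -> ZIP, inline or in GCS
#   ImportAgent(parent=agent_parent("my-project"), agent_content=zip_bytes)
print(agent_parent("my-project"))  # projects/my-project
```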
Retrieves the specified agent.
The request message for [Agents.GetAgent][google.cloud.dialogflow.v2beta1.Agents.GetAgent].
Required. The project that the agent to fetch is associated with. Format: `projects/<Project ID>`.
Creates/updates the specified agent.
The request message for [Agents.SetAgent][google.cloud.dialogflow.v2beta1.Agents.SetAgent].
Required. The agent to update.
Optional. The mask to control which fields get updated.
Deletes the specified agent.
The request message for [Agents.DeleteAgent][google.cloud.dialogflow.v2beta1.Agents.DeleteAgent].
Required. The project that the agent to delete is associated with. Format: `projects/<Project ID>`.
Returns the list of agents. Since there is at most one conversational agent per project, this method is useful primarily for listing all agents across projects the caller has access to. One can achieve that with a wildcard project collection id "-". Refer to [List Sub-Collections](https://cloud.google.com/apis/design/design_patterns#list_sub-collections).
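The wildcard listing described above follows the standard page-token pattern: request pages until `next_page_token` comes back empty. A minimal sketch, with `fetch_page` standing in for the actual RPC (the stubbed responses are made up for illustration):

```python
def search_all_agents(fetch_page, parent="projects/-"):
    """Collect agents across all pages; 'projects/-' is the wildcard
    project collection id described above."""
    agents, token = [], ""
    while True:
        page = fetch_page(parent=parent, page_token=token)
        agents.extend(page["agents"])
        token = page.get("next_page_token", "")
        if not token:  # empty token means no more results
            return agents

# Stubbed two-page response, purely for illustration:
def fetch_page(parent, page_token):
    pages = {
        "":   {"agents": ["agent-a", "agent-b"], "next_page_token": "t1"},
        "t1": {"agents": ["agent-c"], "next_page_token": ""},
    }
    return pages[page_token]
```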
The request message for [Agents.SearchAgents][google.cloud.dialogflow.v2beta1.Agents.SearchAgents].
Required. The project to list agents from. Format: `projects/<Project ID or '-'>`.
Optional. The maximum number of items to return in a single page. By default 100 and at most 1000.
Optional. The next_page_token value returned from a previous list request.
The response message for [Agents.SearchAgents][google.cloud.dialogflow.v2beta1.Agents.SearchAgents].
The list of agents. There will be a maximum number of items returned based on the page_size field in the request.
Token to retrieve the next page of results, or empty if there are no more results in the list.
Trains the specified agent. Operation <response: [google.protobuf.Empty][google.protobuf.Empty]>
The request message for [Agents.TrainAgent][google.cloud.dialogflow.v2beta1.Agents.TrainAgent].
Required. The project that the agent to train is associated with. Format: `projects/<Project ID>`.
Exports the specified agent to a ZIP file. Operation <response: [ExportAgentResponse][google.cloud.dialogflow.v2beta1.ExportAgentResponse]>
The request message for [Agents.ExportAgent][google.cloud.dialogflow.v2beta1.Agents.ExportAgent].
Required. The project that the agent to export is associated with. Format: `projects/<Project ID>`.
Optional. The [Google Cloud Storage](https://cloud.google.com/storage/docs/) URI to export the agent to. The format of this URI must be `gs://<bucket-name>/<object-name>`. If left unspecified, the serialized agent is returned inline.
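A quick sketch of building that destination URI (the helper is hypothetical; the `gs://<bucket-name>/<object-name>` format is from the field description above):

```python
def export_destination(bucket: str, obj: str) -> str:
    """Build the agent_uri for ExportAgent: gs://<bucket-name>/<object-name>.
    Leaving agent_uri unset instead returns the serialized agent inline."""
    uri = f"gs://{bucket}/{obj}"
    assert uri.startswith("gs://")  # the API requires this scheme
    return uri
```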
Imports the specified agent from a ZIP file. Uploads new intents and entity types without deleting the existing ones. Intents and entity types with the same name are replaced with the new versions from ImportAgentRequest. Operation <response: [google.protobuf.Empty][google.protobuf.Empty]>
The request message for [Agents.ImportAgent][google.cloud.dialogflow.v2beta1.Agents.ImportAgent].
Required. The project that the agent to import is associated with. Format: `projects/<Project ID>`.
Required. The agent to import.
The URI to a Google Cloud Storage file containing the agent to import. Note: The URI must start with "gs://".
Zip compressed raw byte content for agent.
Restores the specified agent from a ZIP file. Replaces the current agent version with a new one. All the intents and entity types in the older version are deleted. Operation <response: [google.protobuf.Empty][google.protobuf.Empty]>
The request message for [Agents.RestoreAgent][google.cloud.dialogflow.v2beta1.Agents.RestoreAgent].
Required. The project that the agent to restore is associated with. Format: `projects/<Project ID>`.
Required. The agent to restore.
The URI to a Google Cloud Storage file containing the agent to restore. Note: The URI must start with "gs://".
Zip compressed raw byte content for agent.
Gets agent validation result. Agent validation is performed during training time and is updated automatically when training is completed.
The request message for [Agents.GetValidationResult][google.cloud.dialogflow.v2beta1.Agents.GetValidationResult].
Required. The project that the agent is associated with. Format: `projects/<Project ID>`.
Optional. The language for which you want a validation result. If not specified, the agent's default language is used. [Many languages](https://cloud.google.com/dialogflow/docs/reference/language) are supported. Note: languages must be enabled in the agent before they can be used.
Represents the output of agent validation.
Contains all validation errors.
A context represents additional information included with user input or with an intent returned by the Dialogflow API. Contexts are helpful for differentiating user input which may be vague or have a different meaning depending on additional details from your application such as user setting and preferences, previous user input, where the user is in your application, geographic location, and so on. You can include contexts as input parameters of a [DetectIntent][google.cloud.dialogflow.v2beta1.Sessions.DetectIntent] (or [StreamingDetectIntent][google.cloud.dialogflow.v2beta1.Sessions.StreamingDetectIntent]) request, or as output contexts included in the returned intent. Contexts expire when an intent is matched, after the number of `DetectIntent` requests specified by the `lifespan_count` parameter, or after 20 minutes if no intents are matched for a `DetectIntent` request. For more information about contexts, see the [Dialogflow documentation](https://cloud.google.com/dialogflow/docs/contexts-overview).
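The `lifespan_count` expiration rule above can be modeled roughly as follows (a simplified sketch of our own, not the service's implementation; it ignores the 20-minute timeout and intent-match expiration):

```python
def surviving_contexts(contexts):
    """Simplified model: after one DetectIntent request, each context's
    lifespan_count is decremented, and contexts that reach zero expire."""
    survivors = []
    for ctx in contexts:
        remaining = ctx["lifespan_count"] - 1
        if remaining > 0:
            survivors.append({**ctx, "lifespan_count": remaining})
    return survivors
```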
Returns the list of all contexts in the specified session.
The request message for [Contexts.ListContexts][google.cloud.dialogflow.v2beta1.Contexts.ListContexts].
Required. The session to list all contexts from. Format: `projects/<Project ID>/agent/sessions/<Session ID>` or `projects/<Project ID>/agent/environments/<Environment ID>/users/<User ID>/sessions/<Session ID>`. If `Environment ID` is not specified, we assume default 'draft' environment. If `User ID` is not specified, we assume default '-' user.
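The two session formats and their defaults can be captured in a small helper (hypothetical function of ours; the path shapes and the 'draft'/'-' defaults come from the description above):

```python
def session_path(project, session, environment=None, user=None):
    """Build a session resource name. With no environment or user, use the
    short form; otherwise fall back to 'draft' environment and '-' user."""
    if environment is None and user is None:
        return f"projects/{project}/agent/sessions/{session}"
    env = environment or "draft"
    usr = user or "-"
    return (f"projects/{project}/agent/environments/{env}"
            f"/users/{usr}/sessions/{session}")
```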
Optional. The maximum number of items to return in a single page. By default 100 and at most 1000.
Optional. The next_page_token value returned from a previous list request.
The response message for [Contexts.ListContexts][google.cloud.dialogflow.v2beta1.Contexts.ListContexts].
The list of contexts. There will be a maximum number of items returned based on the page_size field in the request.
Token to retrieve the next page of results, or empty if there are no more results in the list.
Retrieves the specified context.
The request message for [Contexts.GetContext][google.cloud.dialogflow.v2beta1.Contexts.GetContext].
Required. The name of the context. Format: `projects/<Project ID>/agent/sessions/<Session ID>/contexts/<Context ID>` or `projects/<Project ID>/agent/environments/<Environment ID>/users/<User ID>/sessions/<Session ID>/contexts/<Context ID>`. If `Environment ID` is not specified, we assume default 'draft' environment. If `User ID` is not specified, we assume default '-' user.
Creates a context. If the specified context already exists, overrides the context.
The request message for [Contexts.CreateContext][google.cloud.dialogflow.v2beta1.Contexts.CreateContext].
Required. The session to create a context for. Format: `projects/<Project ID>/agent/sessions/<Session ID>` or `projects/<Project ID>/agent/environments/<Environment ID>/users/<User ID>/sessions/<Session ID>`. If `Environment ID` is not specified, we assume default 'draft' environment. If `User ID` is not specified, we assume default '-' user.
Required. The context to create.
Updates the specified context.
The request message for [Contexts.UpdateContext][google.cloud.dialogflow.v2beta1.Contexts.UpdateContext].
Required. The context to update.
Optional. The mask to control which fields get updated.
Deletes the specified context.
The request message for [Contexts.DeleteContext][google.cloud.dialogflow.v2beta1.Contexts.DeleteContext].
Required. The name of the context to delete. Format: `projects/<Project ID>/agent/sessions/<Session ID>/contexts/<Context ID>` or `projects/<Project ID>/agent/environments/<Environment ID>/users/<User ID>/sessions/<Session ID>/contexts/<Context ID>`. If `Environment ID` is not specified, we assume default 'draft' environment. If `User ID` is not specified, we assume default '-' user.
Deletes all active contexts in the specified session.
The request message for [Contexts.DeleteAllContexts][google.cloud.dialogflow.v2beta1.Contexts.DeleteAllContexts].
Required. The name of the session to delete all contexts from. Format: `projects/<Project ID>/agent/sessions/<Session ID>` or `projects/<Project ID>/agent/environments/<Environment ID>/users/<User ID>/sessions/<Session ID>`. If `Environment ID` is not specified, we assume default 'draft' environment. If `User ID` is not specified, we assume default '-' user.
Manages documents of a knowledge base.
Returns the list of all documents of the knowledge base. Note: The `projects.agent.knowledgeBases.documents` resource is deprecated; only use `projects.knowledgeBases.documents`.
Request message for [Documents.ListDocuments][google.cloud.dialogflow.v2beta1.Documents.ListDocuments].
Required. The knowledge base to list all documents for. Format: `projects/<Project ID>/knowledgeBases/<Knowledge Base ID>`.
Optional. The maximum number of items to return in a single page. By default 10 and at most 100.
Optional. The next_page_token value returned from a previous list request.
Response message for [Documents.ListDocuments][google.cloud.dialogflow.v2beta1.Documents.ListDocuments].
The list of documents.
Token to retrieve the next page of results, or empty if there are no more results in the list.
Retrieves the specified document. Note: The `projects.agent.knowledgeBases.documents` resource is deprecated; only use `projects.knowledgeBases.documents`.
Request message for [Documents.GetDocument][google.cloud.dialogflow.v2beta1.Documents.GetDocument].
Required. The name of the document to retrieve. Format: `projects/<Project ID>/knowledgeBases/<Knowledge Base ID>/documents/<Document ID>`.
Creates a new document. Note: The `projects.agent.knowledgeBases.documents` resource is deprecated; only use `projects.knowledgeBases.documents`. Operation <response: [Document][google.cloud.dialogflow.v2beta1.Document], metadata: [KnowledgeOperationMetadata][google.cloud.dialogflow.v2beta1.KnowledgeOperationMetadata]>
Request message for [Documents.CreateDocument][google.cloud.dialogflow.v2beta1.Documents.CreateDocument].
Required. The knowledge base to create a document for. Format: `projects/<Project ID>/knowledgeBases/<Knowledge Base ID>`.
Required. The document to create.
Deletes the specified document. Note: The `projects.agent.knowledgeBases.documents` resource is deprecated; only use `projects.knowledgeBases.documents`. Operation <response: [google.protobuf.Empty][google.protobuf.Empty], metadata: [KnowledgeOperationMetadata][google.cloud.dialogflow.v2beta1.KnowledgeOperationMetadata]>
Request message for [Documents.DeleteDocument][google.cloud.dialogflow.v2beta1.Documents.DeleteDocument].
The name of the document to delete. Format: `projects/<Project ID>/knowledgeBases/<Knowledge Base ID>/documents/<Document ID>`.
Updates the specified document. Note: The `projects.agent.knowledgeBases.documents` resource is deprecated; only use `projects.knowledgeBases.documents`. Operation <response: [Document][google.cloud.dialogflow.v2beta1.Document], metadata: [KnowledgeOperationMetadata][google.cloud.dialogflow.v2beta1.KnowledgeOperationMetadata]>
Request message for [Documents.UpdateDocument][google.cloud.dialogflow.v2beta1.Documents.UpdateDocument].
Required. The document to update.
Optional. Not specified means `update all`. Currently, only `display_name` can be updated; an InvalidArgument error is returned for attempts to update other fields.
Reloads the specified document from its specified source, content_uri or content. The previously loaded content of the document will be deleted. Note: Even when the content of the document has not changed, there still may be side effects because of internal implementation changes. Note: The `projects.agent.knowledgeBases.documents` resource is deprecated; only use `projects.knowledgeBases.documents`. Operation <response: [Document][google.cloud.dialogflow.v2beta1.Document], metadata: [KnowledgeOperationMetadata][google.cloud.dialogflow.v2beta1.KnowledgeOperationMetadata]>
Request message for [Documents.ReloadDocument][google.cloud.dialogflow.v2beta1.Documents.ReloadDocument].
The name of the document to reload. Format: `projects/<Project ID>/knowledgeBases/<Knowledge Base ID>/documents/<Document ID>`.
Optional. The source for document reloading. If provided, the service loads the contents from the source and updates the document in the knowledge base.
The path of gcs source file for reloading document content.
Entities are extracted from user input and represent parameters that are meaningful to your application. For example, a date range, a proper name such as a geographic location or landmark, and so on. Entities represent actionable data for your application. When you define an entity, you can also include synonyms that all map to that entity. For example, "soft drink", "soda", "pop", and so on. There are three types of entities: * **System** - entities that are defined by the Dialogflow API for common data types such as date, time, currency, and so on. A system entity is represented by the `EntityType` type. * **Developer** - entities that are defined by you that represent actionable data that is meaningful to your application. For example, you could define a `pizza.sauce` entity for red or white pizza sauce, a `pizza.cheese` entity for the different types of cheese on a pizza, a `pizza.topping` entity for different toppings, and so on. A developer entity is represented by the `EntityType` type. * **User** - entities that are built for an individual user such as favorites, preferences, playlists, and so on. A user entity is represented by the [SessionEntityType][google.cloud.dialogflow.v2beta1.SessionEntityType] type. For more information about entity types, see the [Dialogflow documentation](https://cloud.google.com/dialogflow/docs/entities-overview).
Returns the list of all entity types in the specified agent.
The request message for [EntityTypes.ListEntityTypes][google.cloud.dialogflow.v2beta1.EntityTypes.ListEntityTypes].
Required. The agent to list all entity types from. Format: `projects/<Project ID>/agent`.
Optional. The language to list entity synonyms for. If not specified, the agent's default language is used. [Many languages](https://cloud.google.com/dialogflow/docs/reference/language) are supported. Note: languages must be enabled in the agent before they can be used.
Optional. The maximum number of items to return in a single page. By default 100 and at most 1000.
Optional. The next_page_token value returned from a previous list request.
The response message for [EntityTypes.ListEntityTypes][google.cloud.dialogflow.v2beta1.EntityTypes.ListEntityTypes].
The list of agent entity types. There will be a maximum number of items returned based on the page_size field in the request.
Token to retrieve the next page of results, or empty if there are no more results in the list.
Retrieves the specified entity type.
The request message for [EntityTypes.GetEntityType][google.cloud.dialogflow.v2beta1.EntityTypes.GetEntityType].
Required. The name of the entity type. Format: `projects/<Project ID>/agent/entityTypes/<EntityType ID>`.
Optional. The language to retrieve entity synonyms for. If not specified, the agent's default language is used. [Many languages](https://cloud.google.com/dialogflow/docs/reference/language) are supported. Note: languages must be enabled in the agent before they can be used.
Creates an entity type in the specified agent.
The request message for [EntityTypes.CreateEntityType][google.cloud.dialogflow.v2beta1.EntityTypes.CreateEntityType].
Required. The agent to create an entity type for. Format: `projects/<Project ID>/agent`.
Required. The entity type to create.
Optional. The language of entity synonyms defined in `entity_type`. If not specified, the agent's default language is used. [Many languages](https://cloud.google.com/dialogflow/docs/reference/language) are supported. Note: languages must be enabled in the agent before they can be used.
Updates the specified entity type.
The request message for [EntityTypes.UpdateEntityType][google.cloud.dialogflow.v2beta1.EntityTypes.UpdateEntityType].
Required. The entity type to update.
Optional. The language of entity synonyms defined in `entity_type`. If not specified, the agent's default language is used. [Many languages](https://cloud.google.com/dialogflow/docs/reference/language) are supported. Note: languages must be enabled in the agent before they can be used.
Optional. The mask to control which fields get updated.
Deletes the specified entity type.
The request message for [EntityTypes.DeleteEntityType][google.cloud.dialogflow.v2beta1.EntityTypes.DeleteEntityType].
Required. The name of the entity type to delete. Format: `projects/<Project ID>/agent/entityTypes/<EntityType ID>`.
Updates/Creates multiple entity types in the specified agent. Operation <response: [BatchUpdateEntityTypesResponse][google.cloud.dialogflow.v2beta1.BatchUpdateEntityTypesResponse]>
The request message for [EntityTypes.BatchUpdateEntityTypes][google.cloud.dialogflow.v2beta1.EntityTypes.BatchUpdateEntityTypes].
Required. The name of the agent to update or create entity types in. Format: `projects/<Project ID>/agent`.
Required. The source of the entity type batch. For each entity type in the batch: * If `name` is specified, we update an existing entity type. * If `name` is not specified, we create a new entity type.
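The update-or-create rule above can be sketched as a simple partition by `name` (an illustrative helper of ours, operating on plain dicts rather than proto messages):

```python
def split_batch(entity_types):
    """Per the batch semantics above: items with a `name` update existing
    entity types; items without one create new entity types."""
    updates = [e for e in entity_types if e.get("name")]
    creates = [e for e in entity_types if not e.get("name")]
    return updates, creates
```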
The URI to a Google Cloud Storage file containing entity types to update or create. The file format can either be a serialized proto (of EntityBatch type) or a JSON object. Note: The URI must start with "gs://".
The collection of entity types to update or create.
Optional. The language of entity synonyms defined in `entity_types`. If not specified, the agent's default language is used. [Many languages](https://cloud.google.com/dialogflow/docs/reference/language) are supported. Note: languages must be enabled in the agent before they can be used.
Optional. The mask to control which fields get updated.
Deletes entity types in the specified agent. Operation <response: [google.protobuf.Empty][google.protobuf.Empty]>
The request message for [EntityTypes.BatchDeleteEntityTypes][google.cloud.dialogflow.v2beta1.EntityTypes.BatchDeleteEntityTypes].
Required. The name of the agent to delete all entity types from. Format: `projects/<Project ID>/agent`.
Required. The names of the entity types to delete. All names must point to the same agent as `parent`.
Creates multiple new entities in the specified entity type. Operation <response: [google.protobuf.Empty][google.protobuf.Empty]>
The request message for [EntityTypes.BatchCreateEntities][google.cloud.dialogflow.v2beta1.EntityTypes.BatchCreateEntities].
Required. The name of the entity type to create entities in. Format: `projects/<Project ID>/agent/entityTypes/<Entity Type ID>`.
Required. The entities to create.
Optional. The language of entity synonyms defined in `entities`. If not specified, the agent's default language is used. [Many languages](https://cloud.google.com/dialogflow/docs/reference/language) are supported. Note: languages must be enabled in the agent before they can be used.
Updates or creates multiple entities in the specified entity type. This method does not affect entities in the entity type that aren't explicitly specified in the request. Operation <response: [google.protobuf.Empty][google.protobuf.Empty]>
The request message for [EntityTypes.BatchUpdateEntities][google.cloud.dialogflow.v2beta1.EntityTypes.BatchUpdateEntities].
Required. The name of the entity type to update or create entities in. Format: `projects/<Project ID>/agent/entityTypes/<Entity Type ID>`.
Required. The entities to update or create.
Optional. The language of entity synonyms defined in `entities`. If not specified, the agent's default language is used. [Many languages](https://cloud.google.com/dialogflow/docs/reference/language) are supported. Note: languages must be enabled in the agent before they can be used.
Optional. The mask to control which fields get updated.
Deletes entities in the specified entity type. Operation <response: [google.protobuf.Empty][google.protobuf.Empty]>
The request message for [EntityTypes.BatchDeleteEntities][google.cloud.dialogflow.v2beta1.EntityTypes.BatchDeleteEntities].
Required. The name of the entity type to delete entries for. Format: `projects/<Project ID>/agent/entityTypes/<Entity Type ID>`.
Required. The canonical `values` of the entities to delete. Note that these are not fully-qualified names, i.e. they don't start with `projects/<Project ID>`.
Optional. The language of entity synonyms defined in `entities`. If not specified, the agent's default language is used. [Many languages](https://cloud.google.com/dialogflow/docs/reference/language) are supported. Note: languages must be enabled in the agent before they can be used.
An intent represents a mapping between input from a user and an action to be taken by your application. When you pass user input to the [DetectIntent][google.cloud.dialogflow.v2beta1.Sessions.DetectIntent] (or [StreamingDetectIntent][google.cloud.dialogflow.v2beta1.Sessions.StreamingDetectIntent]) method, the Dialogflow API analyzes the input and searches for a matching intent. If no match is found, the Dialogflow API returns a fallback intent (`is_fallback` = true). You can provide additional information for the Dialogflow API to use to match user input to an intent by adding the following to your intent. * **Contexts** - provide additional context for intent analysis. For example, if an intent is related to an object in your application that plays music, you can provide a context to determine when to match the intent if the user input is "turn it off". You can include a context that matches the intent when there is previous user input of "play music", and not when there is previous user input of "turn on the light". * **Events** - allow for matching an intent by using an event name instead of user input. Your application can provide an event name and related parameters to the Dialogflow API to match an intent. For example, when your application starts, you can send a welcome event with a user name parameter to the Dialogflow API to match an intent with a personalized welcome message for the user. * **Training phrases** - provide examples of user input to train the Dialogflow API agent to better match intents. For more information about intents, see the [Dialogflow documentation](https://cloud.google.com/dialogflow/docs/intents-overview).
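A minimal illustrative intent payload showing the three matching aids described above (field names follow the `Intent` message; all values are made up for this example):

```python
# Hypothetical intent definition: an input context gates matching to a
# music-playing state, an event allows programmatic triggering, and
# training phrases give the agent example user inputs.
intent = {
    "display_name": "turn.off.music",
    "input_context_names": [
        "projects/my-project/agent/sessions/s1/contexts/playing-music"
    ],
    "events": ["MUSIC_STOP"],
    "training_phrases": [
        {"parts": [{"text": "turn it off"}]},
        {"parts": [{"text": "stop the music"}]},
    ],
    "is_fallback": False,
}
```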
Returns the list of all intents in the specified agent.
The request message for [Intents.ListIntents][google.cloud.dialogflow.v2beta1.Intents.ListIntents].
Required. The agent to list all intents from. Format: `projects/<Project ID>/agent`.
Optional. The language to list training phrases, parameters and rich messages for. If not specified, the agent's default language is used. [Many languages](https://cloud.google.com/dialogflow/docs/reference/language) are supported. Note: languages must be enabled in the agent before they can be used.
Optional. The resource view to apply to the returned intent.
Optional. The maximum number of items to return in a single page. By default 100 and at most 1000.
Optional. The next_page_token value returned from a previous list request.
The response message for [Intents.ListIntents][google.cloud.dialogflow.v2beta1.Intents.ListIntents].
The list of agent intents. There will be a maximum number of items returned based on the page_size field in the request.
Token to retrieve the next page of results, or empty if there are no more results in the list.
Retrieves the specified intent.
The request message for [Intents.GetIntent][google.cloud.dialogflow.v2beta1.Intents.GetIntent].
Required. The name of the intent. Format: `projects/<Project ID>/agent/intents/<Intent ID>`.
Optional. The language to retrieve training phrases, parameters and rich messages for. If not specified, the agent's default language is used. [Many languages](https://cloud.google.com/dialogflow/docs/reference/language) are supported. Note: languages must be enabled in the agent before they can be used.
Optional. The resource view to apply to the returned intent.
Creates an intent in the specified agent.
The request message for [Intents.CreateIntent][google.cloud.dialogflow.v2beta1.Intents.CreateIntent].
Required. The agent to create an intent for. Format: `projects/<Project ID>/agent`.
Required. The intent to create.
Optional. The language of training phrases, parameters and rich messages defined in `intent`. If not specified, the agent's default language is used. [Many languages](https://cloud.google.com/dialogflow/docs/reference/language) are supported. Note: languages must be enabled in the agent before they can be used.
Optional. The resource view to apply to the returned intent.
Updates the specified intent.
The request message for [Intents.UpdateIntent][google.cloud.dialogflow.v2beta1.Intents.UpdateIntent].
Required. The intent to update.
Optional. The language of training phrases, parameters and rich messages defined in `intent`. If not specified, the agent's default language is used. [Many languages](https://cloud.google.com/dialogflow/docs/reference/language) are supported. Note: languages must be enabled in the agent before they can be used.
Optional. The mask to control which fields get updated.
Optional. The resource view to apply to the returned intent.
Deletes the specified intent and its direct or indirect followup intents.
The request message for [Intents.DeleteIntent][google.cloud.dialogflow.v2beta1.Intents.DeleteIntent].
Required. The name of the intent to delete. If this intent has direct or indirect followup intents, we also delete them. Format: `projects/<Project ID>/agent/intents/<Intent ID>`.
Updates/Creates multiple intents in the specified agent. Operation <response: [BatchUpdateIntentsResponse][google.cloud.dialogflow.v2beta1.BatchUpdateIntentsResponse]>
The request message for [Intents.BatchUpdateIntents][google.cloud.dialogflow.v2beta1.Intents.BatchUpdateIntents].
Required. The name of the agent to update or create intents in. Format: `projects/<Project ID>/agent`.
Required. The source of the intent batch. For each intent in the batch: * If `name` is specified, we update an existing intent. * If `name` is not specified, we create a new intent.
The URI to a Google Cloud Storage file containing intents to update or create. The file format can either be a serialized proto (of IntentBatch type) or JSON object. Note: The URI must start with "gs://".
The collection of intents to update or create.
Optional. The language of training phrases, parameters and rich messages defined in `intents`. If not specified, the agent's default language is used. [Many languages](https://cloud.google.com/dialogflow/docs/reference/language) are supported. Note: languages must be enabled in the agent before they can be used.
Optional. The mask to control which fields get updated.
Optional. The resource view to apply to the returned intent.
Deletes intents in the specified agent. Operation <response: [google.protobuf.Empty][google.protobuf.Empty]>
The request message for [Intents.BatchDeleteIntents][google.cloud.dialogflow.v2beta1.Intents.BatchDeleteIntents].
Required. The name of the agent to delete intents from. Format: `projects/<Project ID>/agent`.
Required. The collection of intents to delete. Only intent `name` must be filled in.
Manages knowledge bases. Allows users to set up and maintain knowledge bases with their knowledge data.
Returns the list of all knowledge bases of the specified agent. Note: The `projects.agent.knowledgeBases` resource is deprecated; only use `projects.knowledgeBases`.
Request message for [KnowledgeBases.ListKnowledgeBases][google.cloud.dialogflow.v2beta1.KnowledgeBases.ListKnowledgeBases].
Required. The project to list knowledge bases from. Format: `projects/<Project ID>`.
Optional. The maximum number of items to return in a single page. By default 10 and at most 100.
Optional. The next_page_token value returned from a previous list request.
Response message for [KnowledgeBases.ListKnowledgeBases][google.cloud.dialogflow.v2beta1.KnowledgeBases.ListKnowledgeBases].
The list of knowledge bases.
Token to retrieve the next page of results, or empty if there are no more results in the list.
Retrieves the specified knowledge base. Note: The `projects.agent.knowledgeBases` resource is deprecated; only use `projects.knowledgeBases`.
Request message for [KnowledgeBases.GetKnowledgeBase][google.cloud.dialogflow.v2beta1.KnowledgeBases.GetKnowledgeBase].
Required. The name of the knowledge base to retrieve. Format: `projects/<Project ID>/knowledgeBases/<Knowledge Base ID>`.
Creates a knowledge base. Note: The `projects.agent.knowledgeBases` resource is deprecated; only use `projects.knowledgeBases`.
Request message for [KnowledgeBases.CreateKnowledgeBase][google.cloud.dialogflow.v2beta1.KnowledgeBases.CreateKnowledgeBase].
Required. The project to create a knowledge base for. Format: `projects/<Project ID>`.
Required. The knowledge base to create.
Deletes the specified knowledge base. Note: The `projects.agent.knowledgeBases` resource is deprecated; only use `projects.knowledgeBases`.
Request message for [KnowledgeBases.DeleteKnowledgeBase][google.cloud.dialogflow.v2beta1.KnowledgeBases.DeleteKnowledgeBase].
Required. The name of the knowledge base to delete. Format: `projects/<Project ID>/knowledgeBases/<Knowledge Base ID>`.
Optional. Force deletes the knowledge base. When set to true, any documents in the knowledge base are also deleted.
Updates the specified knowledge base. Note: The `projects.agent.knowledgeBases` resource is deprecated; only use `projects.knowledgeBases`.
Request message for [KnowledgeBases.UpdateKnowledgeBase][google.cloud.dialogflow.v2beta1.KnowledgeBases.UpdateKnowledgeBase].
Required. The knowledge base to update.
Optional. Not specified means `update all`. Currently, only `display_name` can be updated; an InvalidArgument error is returned for attempts to update other fields.
Entities are extracted from user input and represent parameters that are meaningful to your application. For example, a date range, a proper name such as a geographic location or landmark, and so on. Entities represent actionable data for your application. Session entity types are referred to as **User** entity types and are entities that are built for an individual user such as favorites, preferences, playlists, and so on. You can redefine a session entity type at the session level. Session entity methods do not work with Google Assistant integration. Contact Dialogflow support if you need to use session entities with Google Assistant integration. For more information about entity types, see the [Dialogflow documentation](https://cloud.google.com/dialogflow/docs/entities-overview).
Returns the list of all session entity types in the specified session. This method doesn't work with Google Assistant integration. Contact Dialogflow support if you need to use session entities with Google Assistant integration.
The request message for [SessionEntityTypes.ListSessionEntityTypes][google.cloud.dialogflow.v2beta1.SessionEntityTypes.ListSessionEntityTypes].
Required. The session to list all session entity types from. Format: `projects/<Project ID>/agent/sessions/<Session ID>` or `projects/<Project ID>/agent/environments/<Environment ID>/users/<User ID>/sessions/<Session ID>`. If `Environment ID` is not specified, we assume default 'draft' environment. If `User ID` is not specified, we assume default '-' user.
Optional. The maximum number of items to return in a single page. By default 100 and at most 1000.
Optional. The next_page_token value returned from a previous list request.
The response message for [SessionEntityTypes.ListSessionEntityTypes][google.cloud.dialogflow.v2beta1.SessionEntityTypes.ListSessionEntityTypes].
The list of session entity types. There will be a maximum number of items returned based on the page_size field in the request.
Token to retrieve the next page of results, or empty if there are no more results in the list.
Retrieves the specified session entity type. This method doesn't work with Google Assistant integration. Contact Dialogflow support if you need to use session entities with Google Assistant integration.
The request message for [SessionEntityTypes.GetSessionEntityType][google.cloud.dialogflow.v2beta1.SessionEntityTypes.GetSessionEntityType].
Required. The name of the session entity type. Format: `projects/<Project ID>/agent/sessions/<Session ID>/entityTypes/<Entity Type Display Name>` or `projects/<Project ID>/agent/environments/<Environment ID>/users/<User ID>/sessions/<Session ID>/entityTypes/<Entity Type Display Name>`. If `Environment ID` is not specified, we assume default 'draft' environment. If `User ID` is not specified, we assume default '-' user.
Creates a session entity type. If the specified session entity type already exists, overrides the session entity type. This method doesn't work with Google Assistant integration. Contact Dialogflow support if you need to use session entities with Google Assistant integration.
The request message for [SessionEntityTypes.CreateSessionEntityType][google.cloud.dialogflow.v2beta1.SessionEntityTypes.CreateSessionEntityType].
Required. The session to create a session entity type for. Format: `projects/<Project ID>/agent/sessions/<Session ID>` or `projects/<Project ID>/agent/environments/<Environment ID>/users/<User ID>/sessions/<Session ID>`. If `Environment ID` is not specified, we assume default 'draft' environment. If `User ID` is not specified, we assume default '-' user.
Required. The session entity type to create.
Updates the specified session entity type. This method doesn't work with Google Assistant integration. Contact Dialogflow support if you need to use session entities with Google Assistant integration.
The request message for [SessionEntityTypes.UpdateSessionEntityType][google.cloud.dialogflow.v2beta1.SessionEntityTypes.UpdateSessionEntityType].
Required. The entity type to update. Format: `projects/<Project ID>/agent/sessions/<Session ID>/entityTypes/<Entity Type Display Name>` or `projects/<Project ID>/agent/environments/<Environment ID>/users/<User ID>/sessions/<Session ID>/entityTypes/<Entity Type Display Name>`. If `Environment ID` is not specified, we assume default 'draft' environment. If `User ID` is not specified, we assume default '-' user.
Optional. The mask to control which fields get updated.
Deletes the specified session entity type. This method doesn't work with Google Assistant integration. Contact Dialogflow support if you need to use session entities with Google Assistant integration.
The request message for [SessionEntityTypes.DeleteSessionEntityType][google.cloud.dialogflow.v2beta1.SessionEntityTypes.DeleteSessionEntityType].
Required. The name of the entity type to delete. Format: `projects/<Project ID>/agent/sessions/<Session ID>/entityTypes/<Entity Type Display Name>` or `projects/<Project ID>/agent/environments/<Environment ID>/users/<User ID>/sessions/<Session ID>/entityTypes/<Entity Type Display Name>`. If `Environment ID` is not specified, we assume default 'draft' environment. If `User ID` is not specified, we assume default '-' user.
A session represents an interaction with a user. You retrieve user input and pass it to the [DetectIntent][google.cloud.dialogflow.v2beta1.Sessions.DetectIntent] (or [StreamingDetectIntent][google.cloud.dialogflow.v2beta1.Sessions.StreamingDetectIntent]) method to determine user intent and respond.
Processes a natural language query and returns structured, actionable data as a result. This method is not idempotent, because it may cause contexts and session entity types to be updated, which in turn might affect results of future queries.
The request to detect user's intent.
Required. The name of the session this query is sent to. Format: `projects/<Project ID>/agent/sessions/<Session ID>`, or `projects/<Project ID>/agent/environments/<Environment ID>/users/<User ID>/sessions/<Session ID>`. If `Environment ID` is not specified, we assume the default 'draft' environment. If `User ID` is not specified, we assume the default '-' user. It's up to the API caller to choose an appropriate `Session ID` and `User ID`. They can be random numbers or some type of user and session identifiers (preferably hashed). The length of the `Session ID` and `User ID` must not exceed 36 characters.
Optional. The parameters of this query.
Required. The input specification. It can be set to: 1. an audio config which instructs the speech recognizer how to process the speech audio, 2. a conversational query in the form of text, or 3. an event that specifies which intent to trigger.
Optional. Instructs the speech synthesizer how to generate the output audio. If this field is not set and agent-level speech synthesizer is not configured, no output audio is generated.
Optional. The natural language speech audio to be processed. This field should be populated if and only if `query_input` is set to an input audio config. A single request can contain up to 1 minute of speech audio data.
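As a sketch, the session name format and a minimal `DetectIntentRequest` body (in its JSON shape) can be assembled like this. The `session_name` helper and the literal values are illustrative, not part of the API surface:

```python
# Sketch: build a session resource name, applying the documented defaults
# ('draft' environment, '-' user) when only one of the two is supplied.
def session_name(project_id, session_id, environment_id=None, user_id=None):
    if environment_id is None and user_id is None:
        return f"projects/{project_id}/agent/sessions/{session_id}"
    environment_id = environment_id or "draft"
    user_id = user_id or "-"
    return (f"projects/{project_id}/agent/environments/{environment_id}"
            f"/users/{user_id}/sessions/{session_id}")

# Minimal request body mirroring the JSON form of DetectIntentRequest;
# field names should be checked against the official reference.
request = {
    "session": session_name("my-project", "abc123"),
    "queryInput": {
        "text": {"text": "book a table for two", "languageCode": "en"}
    },
}
```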
The message returned from the DetectIntent method.
The unique identifier of the response. It can be used to locate a response in the training example set or for reporting issues.
The selected results of the conversational query or event processing. See `alternative_query_results` for additional potential results.
If Knowledge Connectors are enabled, there could be more than one result returned for a given query or event, and this field will contain all results except for the top one, which is captured in query_result. The alternative results are ordered by decreasing `QueryResult.intent_detection_confidence`. If Knowledge Connectors are disabled, this field will be empty until multiple responses for regular intents are supported, at which point those additional results will be surfaced here.
Specifies the status of the webhook request.
The audio data bytes encoded as specified in the request. Note: The output audio is generated based on the values of default platform text responses found in the `query_result.fulfillment_messages` field. If multiple default text responses exist, they will be concatenated when generating audio. If no default platform text responses exist, the generated audio content will be empty.
The config used by the speech synthesizer to generate the output audio.
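The output-audio behavior above can be modeled as a small sketch: only default platform text responses feed the synthesizer, and they are concatenated (joining with a space here is an assumption; the exact separator is not specified):

```python
# Sketch: collect default-platform text responses from
# query_result.fulfillment_messages for synthesis. If none exist,
# the resulting text (and hence the generated audio) is empty.
def text_for_synthesis(fulfillment_messages):
    texts = [m["text"] for m in fulfillment_messages
             if m.get("platform", "PLATFORM_UNSPECIFIED") == "PLATFORM_UNSPECIFIED"
             and "text" in m]
    return " ".join(texts)  # separator is an assumption for illustration

msgs = [{"platform": "PLATFORM_UNSPECIFIED", "text": "Hi!"},
        {"platform": "ACTIONS_ON_GOOGLE", "text": "Howdy"},
        {"platform": "PLATFORM_UNSPECIFIED", "text": "How can I help?"}]
```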
Processes a natural language query in audio format in a streaming fashion and returns structured, actionable data as a result. This method is only available via the gRPC API (not REST).
The top-level message sent by the client to the [StreamingDetectIntent][] method. Multiple request messages should be sent in order: 1. The first message must contain [StreamingDetectIntentRequest.session][google.cloud.dialogflow.v2beta1.StreamingDetectIntentRequest.session] and [StreamingDetectIntentRequest.query_input], plus optionally [StreamingDetectIntentRequest.query_params]. If the client wants to receive an audio response, it should also contain [StreamingDetectIntentRequest.output_audio_config][google.cloud.dialogflow.v2beta1.StreamingDetectIntentRequest.output_audio_config]. The message must not contain [StreamingDetectIntentRequest.input_audio][google.cloud.dialogflow.v2beta1.StreamingDetectIntentRequest.input_audio]. 2. If [StreamingDetectIntentRequest.query_input][google.cloud.dialogflow.v2beta1.StreamingDetectIntentRequest.query_input] was set to [StreamingDetectIntentRequest.query_input.audio_config][], all subsequent messages must contain [StreamingDetectIntentRequest.input_audio] to continue with Speech recognition. If you decide instead to detect an intent from text input after you have already started Speech recognition, please send a message with [StreamingDetectIntentRequest.query_input.text][]. However, note that: * Dialogflow will bill you for the audio duration so far. * Dialogflow discards all Speech recognition results in favor of the input text. * Dialogflow will use the language code from the first message. After you have sent all input, you must half-close or abort the request stream.
Required. The name of the session the query is sent to. Format of the session name: `projects/<Project ID>/agent/sessions/<Session ID>`, or `projects/<Project ID>/agent/environments/<Environment ID>/users/<User ID>/sessions/<Session ID>`. If `Environment ID` is not specified, we assume the default 'draft' environment. If `User ID` is not specified, we assume the default '-' user. It's up to the API caller to choose an appropriate `Session ID` and `User ID`. They can be random numbers or some type of user and session identifiers (preferably hashed). The length of the `Session ID` and `User ID` must not exceed 36 characters.
Optional. The parameters of this query.
Required. The input specification. It can be set to: 1. an audio config which instructs the speech recognizer how to process the speech audio, 2. a conversational query in the form of text, or 3. an event that specifies which intent to trigger.
DEPRECATED. Please use [InputAudioConfig.single_utterance][google.cloud.dialogflow.v2beta1.InputAudioConfig.single_utterance] instead. Optional. If `false` (default), recognition does not cease until the client closes the stream. If `true`, the recognizer will detect a single spoken utterance in input audio. Recognition ceases when it detects the audio's voice has stopped or paused. In this case, once a detected intent is received, the client should close the stream and start a new request with a new stream as needed. This setting is ignored when `query_input` is a piece of text or an event.
Optional. Instructs the speech synthesizer how to generate the output audio. If this field is not set and agent-level speech synthesizer is not configured, no output audio is generated.
Optional. The input audio content to be recognized. Must be sent if `query_input` was set to a streaming input audio config. The complete audio over all streaming messages must not exceed 1 minute.
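The documented message ordering for `StreamingDetectIntent` (config first, then audio-only messages) can be sketched as a request generator. The dicts mirror the JSON field names; the session string and chunking are illustrative:

```python
# Sketch of the streaming request sequence: the first message carries
# session + query_input (an audio config) and must NOT carry input_audio;
# every subsequent message carries only an input_audio chunk.
def streaming_requests(session, audio_chunks, language_code="en"):
    yield {
        "session": session,
        "queryInput": {
            "audioConfig": {
                "audioEncoding": "AUDIO_ENCODING_LINEAR_16",
                "sampleRateHertz": 16000,
                "languageCode": language_code,
            }
        },
    }
    for chunk in audio_chunks:
        yield {"inputAudio": chunk}

msgs = list(streaming_requests("projects/p/agent/sessions/s",
                               [b"\x00\x01", b"\x02"]))
```

After yielding the last chunk, the caller would half-close the stream, per the method description.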
The top-level message returned from the `StreamingDetectIntent` method. Multiple response messages can be returned in order: 1. If the input was set to streaming audio, the first one or more messages contain `recognition_result`. Each `recognition_result` represents a more complete transcript of what the user said. The last `recognition_result` has `is_final` set to `true`. 2. The next message contains `response_id`, `query_result`, `alternative_query_results` and optionally `webhook_status` if a WebHook was called. 3. If `output_audio_config` was specified in the request or agent-level speech synthesizer is configured, all subsequent messages contain `output_audio` and `output_audio_config`.
The unique identifier of the response. It can be used to locate a response in the training example set or for reporting issues.
The result of speech recognition.
The selected results of the conversational query or event processing. See `alternative_query_results` for additional potential results.
If Knowledge Connectors are enabled, there could be more than one result returned for a given query or event, and this field will contain all results except for the top one, which is captured in query_result. The alternative results are ordered by decreasing `QueryResult.intent_detection_confidence`. If Knowledge Connectors are disabled, this field will be empty until multiple responses for regular intents are supported, at which point those additional results will be surfaced here.
Specifies the status of the webhook request.
The audio data bytes encoded as specified in the request. Note: The output audio is generated based on the values of default platform text responses found in the `query_result.fulfillment_messages` field. If multiple default text responses exist, they will be concatenated when generating audio. If no default platform text responses exist, the generated audio content will be empty.
The config used by the speech synthesizer to generate the output audio.
Represents a conversational agent.
Used as response type in: Agents.GetAgent, Agents.SetAgent
Used as field type in:
Required. The project of this agent. Format: `projects/<Project ID>`.
Required. The name of this agent.
Required. The default language of the agent as a language tag. See [Language Support](https://cloud.google.com/dialogflow/docs/reference/language) for a list of the currently supported language codes. This field cannot be set by the `Update` method.
Optional. The list of all languages supported by this agent (except for the `default_language_code`).
Required. The time zone of this agent from the [time zone database](https://www.iana.org/time-zones), e.g., America/New_York, Europe/Paris.
Optional. The description of this agent. The maximum length is 500 characters. If exceeded, the request is rejected.
Optional. The URI of the agent's avatar. Avatars are used throughout the Dialogflow console and in the self-hosted [Web Demo](https://cloud.google.com/dialogflow/docs/integrations/web-demo) integration.
Optional. Determines whether this agent should log conversation queries.
Optional. Determines how intents are detected from user queries.
Optional. To filter out false positive results and still get variety in matched natural language inputs for your agent, you can tune the machine learning classification threshold. If the returned score value is less than the threshold value, then a fallback intent will be triggered or, if there are no fallback intents defined, no intent will be triggered. The score values range from 0.0 (completely uncertain) to 1.0 (completely certain). If set to 0.0, the default of 0.3 is used.
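The threshold behavior described above amounts to a simple selection rule, sketched here with a hypothetical `select_intent` helper (the scoring pairs are illustrative):

```python
# Sketch: pick the top-scoring intent if it clears the classification
# threshold; otherwise fall back (or match nothing if no fallback exists).
def select_intent(scored_intents, threshold, fallback=None):
    """scored_intents: list of (intent_name, confidence) pairs, 0.0-1.0."""
    if threshold == 0.0:
        threshold = 0.3  # documented default when the threshold is 0.0
    best = max(scored_intents, key=lambda p: p[1], default=None)
    if best and best[1] >= threshold:
        return best[0]
    return fallback  # None when no fallback intent is defined
```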
Optional. API version displayed in Dialogflow console. If not specified, V2 API is assumed. Clients are free to query different service endpoints for different API versions. However, bot connectors and webhook calls will follow the specified API version.
Optional. The agent tier. If not specified, TIER_STANDARD is assumed.
API version for the agent.
Used in:
Not specified.
Legacy V1 API.
V2 API.
V2beta1 API.
Match mode determines how intents are detected from user queries.
Used in:
Not specified.
Best for agents with a small number of examples in intents and/or wide use of template syntax and composite entities.
Can be used for agents with a large number of examples in intents, especially the ones using @sys.any or very large developer entities.
Represents the agent tier.
Used in:
Not specified. This value should never be used.
Standard tier.
Enterprise tier (Essentials).
Enterprise tier (Plus).
Audio encoding of the audio content sent in the conversational query request. Refer to the [Cloud Speech API documentation](https://cloud.google.com/speech-to-text/docs/basics) for more details.
Used in:
Not specified.
Uncompressed 16-bit signed little-endian samples (Linear PCM).
[`FLAC`](https://xiph.org/flac/documentation.html) (Free Lossless Audio Codec) is the recommended encoding because it is lossless (therefore recognition is not compromised) and requires only about half the bandwidth of `LINEAR16`. `FLAC` stream encoding supports 16-bit and 24-bit samples, however, not all fields in `STREAMINFO` are supported.
8-bit samples that compand 14-bit audio samples using G.711 PCMU/mu-law.
Adaptive Multi-Rate Narrowband codec. `sample_rate_hertz` must be 8000.
Adaptive Multi-Rate Wideband codec. `sample_rate_hertz` must be 16000.
Opus encoded audio frames in Ogg container ([OggOpus](https://wiki.xiph.org/OggOpus)). `sample_rate_hertz` must be 16000.
Although the use of lossy encodings is not recommended, if a very low bitrate encoding is required, `OGG_OPUS` is highly preferred over Speex encoding. The [Speex](https://speex.org/) encoding supported by Dialogflow API has a header byte in each block, as in MIME type `audio/x-speex-with-header-byte`. It is a variant of the RTP Speex encoding defined in [RFC 5574](https://tools.ietf.org/html/rfc5574). The stream is a sequence of blocks, one block per RTP packet. Each block starts with a byte containing the length of the block, in bytes, followed by one or more frames of Speex data, padded to an integral number of bytes (octets) as specified in RFC 5574. In other words, each RTP header is replaced with a single byte containing the block length. Only Speex wideband is supported. `sample_rate_hertz` must be 16000.
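The block framing described for `SPEEX_WITH_HEADER_BYTE` (one length byte, then that many bytes of Speex data) can be sketched as a small parser; the sample bytes are fabricated for illustration:

```python
# Sketch: split an audio/x-speex-with-header-byte stream into blocks.
# Each block begins with one byte giving the block length in bytes,
# followed by that many bytes of Speex frame data.
def split_speex_blocks(stream: bytes):
    blocks, i = [], 0
    while i < len(stream):
        length = stream[i]
        block = stream[i + 1 : i + 1 + length]
        if len(block) != length:
            raise ValueError("truncated block")
        blocks.append(block)
        i += 1 + length
    return blocks

data = bytes([3, 1, 2, 3, 2, 9, 9])  # a 3-byte block, then a 2-byte block
```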
The response message for [EntityTypes.BatchUpdateEntityTypes][google.cloud.dialogflow.v2beta1.EntityTypes.BatchUpdateEntityTypes].
The collection of updated or created entity types.
The response message for [Intents.BatchUpdateIntents][google.cloud.dialogflow.v2beta1.Intents.BatchUpdateIntents].
The collection of updated or created intents.
Represents a context.
Used as response type in: Contexts.CreateContext, Contexts.GetContext, Contexts.UpdateContext
Used as field type in:
Required. The unique identifier of the context. Format: `projects/<Project ID>/agent/sessions/<Session ID>/contexts/<Context ID>`, or `projects/<Project ID>/agent/environments/<Environment ID>/users/<User ID>/sessions/<Session ID>/contexts/<Context ID>`. The `Context ID` is always converted to lowercase, may only contain characters in a-zA-Z0-9_-% and may be at most 250 bytes long. If `Environment ID` is not specified, we assume default 'draft' environment. If `User ID` is not specified, we assume default '-' user.
Optional. The number of conversational query requests after which the context expires. If set to `0` (the default), the context expires immediately. Contexts expire automatically after 20 minutes if there are no matching queries.
Optional. The collection of parameters associated with this context. Refer to [this doc](https://cloud.google.com/dialogflow/docs/intents-actions-parameters) for syntax.
A document resource. Note: The `projects.agent.knowledgeBases.documents` resource is deprecated; only use `projects.knowledgeBases.documents`.
Used as response type in: Documents.GetDocument
Used as field type in:
The document resource name. The name must be empty when creating a document. Format: `projects/<Project ID>/knowledgeBases/<Knowledge Base ID>/documents/<Document ID>`.
Required. The display name of the document. The name must be 1024 bytes or less; otherwise, the creation request fails.
Required. The MIME type of this document.
Required. The knowledge type of document content.
Required. The source of this document.
The URI where the file content is located. For documents stored in Google Cloud Storage, these URIs must have the form `gs://<bucket-name>/<object-name>`. NOTE: External URLs must correspond to public webpages, i.e., they must be indexed by Google Search. In particular, URLs for showing documents in Google Cloud Storage (i.e. the URL in your browser) are not supported. Instead use the `gs://` format URI described above.
The raw content of the document. This field is only permitted for EXTRACTIVE_QA and FAQ knowledge types. Note: This field is in the process of being deprecated; please use `raw_content` instead.
The raw content of the document. This field is only permitted for EXTRACTIVE_QA and FAQ knowledge types.
The knowledge type of document content.
Used in:
The type is unspecified or arbitrary.
The document content contains question and answer pairs as either HTML or CSV. Typical FAQ HTML formats are parsed accurately, but unusual formats may fail to be parsed. CSV must have questions in the first column and answers in the second, with no header. Because of this explicit format, they are always parsed accurately.
Documents for which unstructured text is extracted and used for question answering.
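The FAQ CSV shape described above (questions in the first column, answers in the second, no header row) can be sketched with the standard `csv` module; the sample content is illustrative:

```python
import csv
import io

# Sketch: parse FAQ CSV content into (question, answer) pairs.
# The format has no header row, so every row is a Q/A pair.
def parse_faq_csv(text):
    return [(row[0], row[1]) for row in csv.reader(io.StringIO(text))]

pairs = parse_faq_csv(
    "What is Dialogflow?,An NLU platform.\nIs there a free tier?,Yes."
)
```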
Represents an entity type. Entity types serve as a tool for extracting parameter values from natural language queries.
Used as response type in: EntityTypes.CreateEntityType, EntityTypes.GetEntityType, EntityTypes.UpdateEntityType
Used as field type in:
The unique identifier of the entity type. Required for [EntityTypes.UpdateEntityType][google.cloud.dialogflow.v2beta1.EntityTypes.UpdateEntityType] and [EntityTypes.BatchUpdateEntityTypes][google.cloud.dialogflow.v2beta1.EntityTypes.BatchUpdateEntityTypes] methods. Format: `projects/<Project ID>/agent/entityTypes/<Entity Type ID>`.
Required. The name of the entity type.
Required. Indicates the kind of entity type.
Optional. Indicates whether the entity type can be automatically expanded.
Optional. The collection of entity entries associated with the entity type.
Optional. Enables fuzzy entity extraction during classification.
Represents different entity type expansion modes. Automated expansion allows an agent to recognize values that have not been explicitly listed in the entity (for example, new kinds of shopping list items).
Used in:
Auto expansion disabled for the entity.
Allows an agent to recognize values that have not been explicitly listed in the entity.
An **entity entry** for an associated entity type.
Used in:
Required. The primary value associated with this entity entry. For example, if the entity type is *vegetable*, the value could be *scallions*. For `KIND_MAP` entity types: * A canonical value to be used in place of synonyms. For `KIND_LIST` entity types: * A string that can contain references to other entity types (with or without aliases).
Required. A collection of value synonyms. For example, if the entity type is *vegetable*, and `value` is *scallions*, a synonym could be *green onions*. For `KIND_LIST` entity types: * This collection must contain exactly one synonym equal to `value`.
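The entry rules above can be sketched as a small validation helper (the function name and the vegetable examples are illustrative, not part of the API):

```python
# Sketch: check an entity entry against the documented rules.
# KIND_MAP entries map a non-empty synonym set to a canonical value;
# KIND_LIST entries must contain exactly one synonym equal to the value.
def validate_entry(kind, value, synonyms):
    if kind == "KIND_LIST":
        return synonyms == [value]
    return bool(synonyms)  # KIND_MAP: any non-empty synonym collection

map_entry = {"value": "scallions", "synonyms": ["scallions", "green onions"]}
```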
Represents kinds of entities.
Used in:
Not specified. This value should never be used.
Map entity types allow mapping of a group of synonyms to a canonical value.
List entity types contain a set of entries that do not map to canonical values. However, list entity types can contain references to other entity types (with or without aliases).
Regexp entity types allow you to specify regular expressions in entry values.
This message is a wrapper around a collection of entity types.
Used in:
A collection of entity types.
Events allow for matching intents by event name instead of the natural language input. For instance, input `<event: { name: "welcome_event", parameters: { name: "Sam" } }>` can trigger a personalized welcome response. The parameter `name` may be used by the agent in the response: `"Hello #welcome_event.name! What can I do for you today?"`.
Used in:
Required. The unique identifier of the event.
Optional. The collection of parameters associated with the event.
Required. The language of this query. See [Language Support](https://cloud.google.com/dialogflow/docs/reference/language) for a list of the currently supported language codes. Note that queries in the same session do not necessarily need to specify the same language.
The response message for [Agents.ExportAgent][google.cloud.dialogflow.v2beta1.Agents.ExportAgent].
The exported agent.
The URI to a file containing the exported agent. This field is populated only if `agent_uri` is specified in `ExportAgentRequest`.
Zip compressed raw byte content for agent.
Google Cloud Storage location for single input.
Used in:
Required. The Google Cloud Storage URIs for the inputs. A URI is of the form `gs://bucket/object-prefix-or-name`. Whether a prefix or name is used depends on the use case.
Instructs the speech recognizer on how to process the audio content.
Used in:
Required. Audio encoding of the audio content to process.
Required. Sample rate (in Hertz) of the audio content sent in the query. Refer to [Cloud Speech API documentation](https://cloud.google.com/speech-to-text/docs/basics) for more details.
Required. The language of the supplied audio. Dialogflow does not do translations. See [Language Support](https://cloud.google.com/dialogflow/docs/reference/language) for a list of the currently supported language codes. Note that queries in the same session do not necessarily need to specify the same language.
Optional. If `true`, Dialogflow returns [SpeechWordInfo][google.cloud.dialogflow.v2beta1.SpeechWordInfo] in [StreamingRecognitionResult][google.cloud.dialogflow.v2beta1.StreamingRecognitionResult] with information about the recognized speech words, e.g. start and end time offsets. If false or unspecified, Speech doesn't return any word-level information.
Optional. A list of strings containing words and phrases that the speech recognizer should recognize with higher likelihood. See [the Cloud Speech documentation](https://cloud.google.com/speech-to-text/docs/basics#phrase-hints) for more details.
Optional. Context information to assist speech recognition. See [the Cloud Speech documentation](https://cloud.google.com/speech-to-text/docs/basics#phrase-hints) for more details.
Optional. Which Speech model to select for the given request. Select the model best suited to your domain to get best results. If a model is not explicitly specified, then we auto-select a model based on the parameters in the InputAudioConfig. If enhanced speech model is enabled for the agent and an enhanced version of the specified model for the language does not exist, then the speech is recognized using the standard version of the specified model. Refer to [Cloud Speech API documentation](https://cloud.google.com/speech-to-text/docs/basics#select-model) for more details.
Optional. Which variant of the [Speech model][google.cloud.dialogflow.v2beta1.InputAudioConfig.model] to use.
Optional. If `false` (default), recognition does not cease until the client closes the stream. If `true`, the recognizer will detect a single spoken utterance in input audio. Recognition ceases when it detects the audio's voice has stopped or paused. In this case, once a detected intent is received, the client should close the stream and start a new request with a new stream as needed. Note: This setting is relevant only for streaming methods. Note: When specified, InputAudioConfig.single_utterance takes precedence over StreamingDetectIntentRequest.single_utterance.
Represents an intent. Intents convert a number of user expressions or patterns into an action. An action is an extraction of a user command or sentence semantics.
Used as response type in: Intents.CreateIntent, Intents.GetIntent, Intents.UpdateIntent
Used as field type in:
The unique identifier of this intent. Required for [Intents.UpdateIntent][google.cloud.dialogflow.v2beta1.Intents.UpdateIntent] and [Intents.BatchUpdateIntents][google.cloud.dialogflow.v2beta1.Intents.BatchUpdateIntents] methods. Format: `projects/<Project ID>/agent/intents/<Intent ID>`.
Required. The name of this intent.
Optional. Indicates whether webhooks are enabled for the intent.
Optional. The priority of this intent. Higher numbers represent higher priorities. If this is zero or unspecified, we use the default priority 500000. Negative numbers mean that the intent is disabled.
Optional. Indicates whether this is a fallback intent.
Optional. Indicates whether Machine Learning is enabled for the intent. Note: If the `ml_enabled` setting is set to `false`, then this intent is not taken into account during inference in `ML ONLY` match mode. Also, auto-markup in the UI is turned off. DEPRECATED! Please use the `ml_disabled` field instead. NOTE: If both `ml_enabled` and `ml_disabled` are either not set or `false`, then the default value is determined as follows: - Before April 15th, 2018 the default was: ml_enabled = false / ml_disabled = true. - After April 15th, 2018 the default is: ml_enabled = true / ml_disabled = false.
Optional. Indicates whether Machine Learning is disabled for the intent. Note: If the `ml_disabled` setting is set to `true`, then this intent is not taken into account during inference in `ML ONLY` match mode. Also, auto-markup in the UI is turned off.
Optional. Indicates that this intent ends an interaction. Some integrations (e.g., Actions on Google or Dialogflow phone gateway) use this information to close interaction with an end user. Default is false.
Optional. The list of context names required for this intent to be triggered. Format: `projects/<Project ID>/agent/sessions/-/contexts/<Context ID>`.
Optional. The collection of event names that trigger the intent. If the collection of input contexts is not empty, all of the contexts must be present in the active user session for an event to trigger this intent.
Optional. The collection of examples that the agent is trained on.
Optional. The name of the action associated with the intent. Note: The action name must not contain whitespaces.
Optional. The collection of contexts that are activated when the intent is matched. Context messages in this collection should not set the parameters field. Setting the `lifespan_count` to 0 will reset the context when the intent is matched. Format: `projects/<Project ID>/agent/sessions/-/contexts/<Context ID>`.
Optional. Indicates whether to delete all contexts in the current session when this intent is matched.
Optional. The collection of parameters associated with the intent.
Optional. The collection of rich messages corresponding to the `Response` field in the Dialogflow console.
Optional. The list of platforms for which the first responses will be copied from the messages in PLATFORM_UNSPECIFIED (i.e. default platform).
Read-only. The unique identifier of the root intent in the chain of followup intents. It identifies the correct followup intents chain for this intent. We populate this field only in the output. Format: `projects/<Project ID>/agent/intents/<Intent ID>`.
Read-only after creation. The unique identifier of the parent intent in the chain of followup intents. You can set this field when creating an intent, for example with [CreateIntent][] or [BatchUpdateIntents][], in order to make this intent a followup intent. It identifies the parent followup intent. Format: `projects/<Project ID>/agent/intents/<Intent ID>`.
Read-only. Information about all followup intents that have this intent as a direct or indirect parent. We populate this field only in the output.
Represents a single followup intent in the chain.
Used in:
The unique identifier of the followup intent. Format: `projects/<Project ID>/agent/intents/<Intent ID>`.
The unique identifier of the followup intent's parent. Format: `projects/<Project ID>/agent/intents/<Intent ID>`.
Corresponds to the `Response` field in the Dialogflow console.
Used in:
Required. The rich response message.
Returns a text response.
Displays an image.
Displays quick replies.
Displays a card.
Returns a response containing a custom, platform-specific payload. See the Intent.Message.Platform type for a description of the structure that may be required for your platform.
Returns a voice or text-only response for Actions on Google.
Displays a basic card for Actions on Google.
Displays suggestion chips for Actions on Google.
Displays a link out suggestion chip for Actions on Google.
Displays a list card for Actions on Google.
Displays a carousel card for Actions on Google.
Plays audio from a file in Telephony Gateway.
Synthesizes speech in Telephony Gateway.
Transfers the call in Telephony Gateway.
Rich Business Messaging (RBM) text response. RBM allows businesses to send enriched and branded versions of SMS. See https://jibe.google.com/business-messaging.
Standalone Rich Business Messaging (RBM) rich card response.
Rich Business Messaging (RBM) carousel rich card response.
Browse carousel card for Actions on Google.
Table card for Actions on Google.
The media content card for Actions on Google.
Optional. The platform that this message is intended for.
The basic card message. Useful for displaying information.
Used in:
Optional. The title of the card.
Optional. The subtitle of the card.
Required, unless image is present. The body text of the card.
Optional. The image for the card.
Optional. The collection of card buttons.
The button object that appears at the bottom of a card.
Used in:
Required. The title of the button.
Required. Action to take when a user taps on the button.
Opens the given URI.
Used in:
Required. The HTTP or HTTPS scheme URI.
Browse Carousel Card for Actions on Google. https://developers.google.com/actions/assistant/responses#browsing_carousel
Used in:
Required. List of items in the Browse Carousel Card. Minimum of two items, maximum of ten.
Optional. Settings for displaying the image. Applies to every image in [items][google.cloud.dialogflow.v2beta1.Intent.Message.BrowseCarouselCard.items].
Browsing carousel tile
Used in:
Required. Action to present to the user.
Required. Title of the carousel item. Maximum of two lines of text.
Optional. Description of the carousel item. Maximum of four lines of text.
Optional. Hero image for the carousel item.
Optional. Text that appears at the bottom of the Browse Carousel Card. Maximum of one line of text.
Actions on Google action to open a given URL.
Used in:
Required. The URL to open.
Optional. Specifies the type of viewer that is used when opening the URL. Defaults to opening via web browser.
Type of the URI.
Used in:
Unspecified.
The URL is an AMP action.
URL that points directly to AMP content, or to a canonical URL which refers to AMP content via <link rel="amphtml">.
Image display options for Actions on Google. This should be used for when the image's aspect ratio does not match the image container's aspect ratio.
Used in:
Fill the gaps between the image and the image container with gray bars.
Fill the gaps between the image and the image container with gray bars.
Fill the gaps between the image and the image container with white bars.
Image is scaled such that the image width and height match or exceed the container dimensions. This may crop the top and bottom of the image if the scaled image height is greater than the container height, or crop the left and right of the image if the scaled image width is greater than the container width. This is similar to "Zoom Mode" on a widescreen TV when playing a 4:3 video.
Pad the gaps between image and image frame with a blurred copy of the same image.
The card response message.
Used in:
Optional. The title of the card.
Optional. The subtitle of the card.
Optional. The public URI to an image file for the card.
Optional. The collection of card buttons.
Optional. Contains information about a button.
Used in:
Optional. The text to show on the button.
Optional. The text to send back to the Dialogflow API or a URI to open.
The card for presenting a carousel of options to select from.
Used in:
Required. Carousel items.
An item in the carousel.
Used in:
Required. Additional info about the option item.
Required. Title of the carousel item.
Optional. The body text of the card.
Optional. The image to display.
Column properties for [TableCard][google.cloud.dialogflow.v2beta1.Intent.Message.TableCard].
Used in:
Required. Column heading.
Optional. Defines text alignment for all cells in this column.
Text alignments within a cell.
Used in:
Text is aligned to the leading edge of the column.
Text is aligned to the leading edge of the column.
Text is centered in the column.
Text is aligned to the trailing edge of the column.
The image response message.
Used in:
Optional. The public URI to an image file.
A text description of the image to be used for accessibility, e.g., screen readers. Required if image_uri is set for CarouselSelect.
The suggestion chip message that allows the user to jump out to the app or website associated with this agent.
Used in:
Required. The name of the app or site this chip is linking to.
Required. The URI of the app or site to open when the user taps the suggestion chip.
The card for presenting a list of options to select from.
Used in:
Optional. The overall title of the list.
Required. List items.
An item in the list.
Used in:
Required. Additional information about this option.
Required. The title of the list item.
Optional. The main text describing the item.
Optional. The image to display.
The media content card for Actions on Google.
Used in:
Optional. The type of the media content (e.g., "audio").
Required. List of media objects.
Response media object for media content card.
Used in:
Required. Name of media card.
Optional. Description of media card.
Image to show with the media card.
Optional. Image to display above media content.
Optional. Icon to display above media content.
Required. URL where the media is stored.
Format of response media type.
Used in:
Unspecified.
Response media type is audio.
Represents different platforms that a rich message can be intended for.
Used in:
Not specified.
Facebook.
Slack.
Telegram.
Kik.
Skype.
Line.
Viber.
Actions on Google. When using Actions on Google, you can choose one of the specific Intent.Message types that mention support for Actions on Google, or you can use the advanced Intent.Message.payload field. The payload field provides access to AoG features not available in the specific message types. If using the Intent.Message.payload field, it should have a structure similar to the JSON message shown here. For more information, see [Actions on Google Webhook Format](https://developers.google.com/actions/dialogflow/webhook) <pre>
{
  "expectUserResponse": true,
  "isSsml": false,
  "noInputPrompts": [],
  "richResponse": {
    "items": [
      {
        "simpleResponse": {
          "displayText": "hi",
          "textToSpeech": "hello"
        }
      }
    ],
    "suggestions": [
      { "title": "Say this" },
      { "title": "or this" }
    ]
  },
  "systemIntent": {
    "data": {
      "@type": "type.googleapis.com/google.actions.v2.OptionValueSpec",
      "listSelect": {
        "items": [
          {
            "optionInfo": {
              "key": "key1",
              "synonyms": ["key one"]
            },
            "title": "must not be empty, but unique"
          },
          {
            "optionInfo": {
              "key": "key2",
              "synonyms": ["key two"]
            },
            "title": "must not be empty, but unique"
          }
        ]
      }
    },
    "intent": "actions.intent.OPTION"
  }
}
</pre>
Telephony Gateway.
Google Hangouts.
The quick replies response message.
Used in:
Optional. The title of the collection of quick replies.
Optional. The collection of quick replies.
Rich Business Messaging (RBM) Card content
Used in:
Optional. Title of the card (at most 200 bytes). At least one of the title, description or media must be set.
Optional. Description of the card (at most 2000 bytes). At least one of the title, description or media must be set.
Optional. Media (image, GIF, or video) to include in the card. However, at least one of the title, description or media must be set.
Optional. List of suggestions to include in the card.
Rich Business Messaging (RBM) media displayed in cards. The following media types are currently supported: Image: image/jpeg, image/jpg, image/gif, image/png. Video: video/h263, video/m4v, video/mp4, video/mpeg, video/mpeg4, video/webm.
Used in:
Required. Publicly reachable URI of the file. The RBM platform determines the MIME type of the file from the content-type field in the HTTP headers when the platform fetches the file. The content-type field must be present and accurate in the HTTP response from the URL.
Optional. Publicly reachable URI of the thumbnail. If you don't provide a thumbnail URI, the RBM platform displays a blank placeholder thumbnail until the user's device downloads the file. Depending on the user's setting, the file may not download automatically and may require the user to tap a download button.
Required for cards with vertical orientation. The height of the media within a rich card with a vertical layout. (https://goo.gl/NeFCjz). For a standalone card with horizontal layout, height is not customizable, and this field is ignored.
Media height
Used in:
Not specified.
112 DP.
168 DP.
264 DP. Not available for rich card carousels when the card width is set to small.
Carousel Rich Business Messaging (RBM) rich card. Rich cards allow you to respond to users with more vivid content, e.g. with media and suggestions. For more details about RBM rich cards, please see: https://developers.google.com/rcs-business-messaging/rbm/guides/build/send-messages#rich-cards. If you want to show a single card with more control over the layout, please use [RbmStandaloneCard][google.cloud.dialogflow.v2beta1.Intent.Message.RbmStandaloneCard] instead.
Used in:
Required. The width of the cards in the carousel.
Required. The cards in the carousel. A carousel must have at least 2 cards and at most 10.
The width of the cards in the carousel.
Used in:
Not specified.
120 DP. Note that tall media cannot be used.
232 DP.
Standalone Rich Business Messaging (RBM) rich card. Rich cards allow you to respond to users with more vivid content, e.g. with media and suggestions. For more details about RBM rich cards, please see: https://developers.google.com/rcs-business-messaging/rbm/guides/build/send-messages#rich-cards. You can group multiple rich cards into one using [RbmCarouselCard][google.cloud.dialogflow.v2beta1.Intent.Message.RbmCarouselCard] but carousel cards will give you less control over the card layout.
Used in:
Required. Orientation of the card.
Required if orientation is horizontal. Image preview alignment for standalone cards with horizontal layout.
Required. Card content.
Orientation of the card.
Used in:
Not specified.
Horizontal layout.
Vertical layout.
Thumbnail preview alignment for standalone cards with horizontal layout.
Used in:
Not specified.
Thumbnail preview is left-aligned.
Thumbnail preview is right-aligned.
Rich Business Messaging (RBM) suggested client-side action that the user can choose from the card.
Used in:
Text to display alongside the action.
Opaque payload that Dialogflow receives in a user event when the user taps the suggested action. This data is also forwarded to the webhook to allow custom business logic.
Action that needs to be triggered.
Suggested client side action: Dial a phone number
Suggested client side action: Open a URI on device
Suggested client side action: Share user location
Opens the user's default dialer app with the specified phone number but does not dial automatically (https://goo.gl/ergbB2).
Used in:
Required. The phone number to fill in the default dialer app. This field should be in [E.164](https://en.wikipedia.org/wiki/E.164) format. An example of a correctly formatted phone number: +15556767888.
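As a quick illustration of the E.164 requirement, a simplified sanity check might look like this. The regex below is an assumption for demonstration only, not a full phone-number validator:

```python
import re

# E.164: a leading "+", a non-zero country-code digit, then up to 14 more
# digits (simplified; real validation needs per-country numbering rules).
E164_PATTERN = re.compile(r"^\+[1-9]\d{1,14}$")

def looks_like_e164(number):
    return bool(E164_PATTERN.match(number))
```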
Opens the user's default web browser app to the specified uri (https://goo.gl/6GLJD2). If the user has an app installed that is registered as the default handler for the URL, then this app will be opened instead, and its icon will be used in the suggested action UI.
Used in:
Required. The URI to open on the user's device.
Opens the device's location chooser so the user can pick a location to send back to the agent (https://goo.gl/GXotJW).
Used in:
(message has no fields)
Rich Business Messaging (RBM) suggested reply that the user can click instead of typing in their own response.
Used in:
Suggested reply text.
Opaque payload that Dialogflow receives in a user event when the user taps the suggested reply. This data is also forwarded to the webhook to allow custom business logic.
Rich Business Messaging (RBM) suggestion. Suggestions allow the user to easily select or click a predefined response or perform an action (like opening a web URI).
Used in:
Predefined suggested response or action for the user to choose.
Predefined replies for the user to select instead of typing.
Predefined client-side actions that the user can choose.
Rich Business Messaging (RBM) text response with suggestions.
Used in:
Required. Text sent and displayed to the user.
Optional. One or more suggestions to show to the user.
Additional info about the select item for when it is triggered in a dialog.
Used in:
Required. A unique key that will be sent back to the agent if this response is given.
Optional. A list of synonyms that can also be used to trigger this item in dialog.
The simple response message containing speech or text.
Used in:
One of text_to_speech or ssml must be provided. The plain text of the speech output. Mutually exclusive with ssml.
One of text_to_speech or ssml must be provided. Structured spoken response to the user in the SSML format. Mutually exclusive with text_to_speech.
Optional. The text to display.
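The "one of text_to_speech or ssml" rule above can be sketched as a small validation helper. This is a hypothetical check over a plain dict, a simplification of the real message type:

```python
# Minimal sketch of the SimpleResponse rule: exactly one of text_to_speech
# or ssml must be provided (they are mutually exclusive).
def validate_simple_response(msg):
    has_tts = bool(msg.get("text_to_speech"))
    has_ssml = bool(msg.get("ssml"))
    if has_tts == has_ssml:  # both set, or both missing
        raise ValueError(
            "Exactly one of text_to_speech or ssml must be provided")
    return True
```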
The collection of simple response candidates. This message in `QueryResult.fulfillment_messages` and `WebhookResponse.fulfillment_messages` should contain only one `SimpleResponse`.
Used in:
Required. The list of simple responses.
The suggestion chip message that the user can tap to quickly post a reply to the conversation.
Used in:
Required. The text shown in the suggestion chip.
The collection of suggestions.
Used in:
Required. The list of suggested replies.
Table card for Actions on Google.
Used in:
Required. Title of the card.
Optional. Subtitle to the title.
Optional. Image which should be displayed on the card.
Optional. Display properties for the columns in this table.
Optional. Rows in this table of data.
Optional. List of buttons for the card.
Cell of [TableCardRow][google.cloud.dialogflow.v2beta1.Intent.Message.TableCardRow].
Used in:
Required. Text in this cell.
Row of [TableCard][google.cloud.dialogflow.v2beta1.Intent.Message.TableCard].
Used in:
Optional. List of cells that make up this row.
Optional. Whether to add a visual divider after this row.
Plays audio from a file in Telephony Gateway.
Used in:
Required. URI to a Google Cloud Storage object containing the audio to play, e.g., "gs://bucket/object". The object must contain a single channel (mono) of linear PCM audio (2 bytes / sample) at 8kHz. This object must be readable by the `service-<Project Number>@gcp-sa-dialogflow.iam.gserviceaccount.com` service account where <Project Number> is the number of the Telephony Gateway project (usually the same as the Dialogflow agent project). If the Google Cloud Storage bucket is in the Telephony Gateway project, this permission is added by default when enabling the Dialogflow V2 API. For audio from other sources, consider using the `TelephonySynthesizeSpeech` message with SSML.
Synthesizes speech and plays back the synthesized audio to the caller in Telephony Gateway. Telephony Gateway takes the synthesizer settings from `DetectIntentResponse.output_audio_config` which can either be set at request-level or can come from the agent-level synthesizer config.
Used in:
Required. The source to be synthesized.
The raw text to be synthesized.
The SSML to be synthesized. For more information, see [SSML](https://developers.google.com/actions/reference/ssml).
Transfers the call in Telephony Gateway.
Used in:
Required. The phone number to transfer the call to in [E.164 format](https://en.wikipedia.org/wiki/E.164). We currently only allow transferring to US numbers (+1xxxyyyzzzz).
The text response message.
Used in:
Optional. The collection of the agent's responses.
Represents intent parameters.
Used in:
The unique identifier of this parameter.
Required. The name of the parameter.
Optional. The definition of the parameter value. It can be: - a constant string, - a parameter value defined as `$parameter_name`, - an original parameter value defined as `$parameter_name.original`, - a parameter value from some context defined as `#context_name.parameter_name`.
Optional. The default value to use when the `value` yields an empty result. Default values can be extracted from contexts by using the following syntax: `#context_name.parameter_name`.
Optional. The name of the entity type, prefixed with `@`, that describes values of the parameter. If the parameter is required, this must be provided.
Optional. Indicates whether the parameter is required. That is, whether the intent cannot be completed without collecting the parameter value.
Optional. The collection of prompts that the agent can present to the user in order to collect a value for the parameter.
Optional. Indicates whether the parameter represents a list of values.
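The parameter fields above might be combined as in this hypothetical sketch; the entity type, context name, and prompt text are invented, and a plain dict stands in for the real message type:

```python
# Hypothetical intent parameter: collects a pizza size, falling back to a
# value previously stored in the "order-pizza" context.
parameter = {
    "display_name": "size",
    "value": "$size",                      # filled from the matched entity
    "default_value": "#order-pizza.size",  # extracted from a context
    "entity_type_display_name": "@pizza-size",
    "mandatory": True,                     # intent can't complete without it
    "prompts": ["What size would you like?"],
    "is_list": False,
}
```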
Represents an example that the agent is trained on.
Used in:
Output only. The unique identifier of this training phrase.
Required. The type of the training phrase.
Required. The ordered list of training phrase parts. The parts are concatenated in order to form the training phrase. Note: The API does not automatically annotate training phrases like the Dialogflow Console does. Note: Do not forget to include whitespace at part boundaries, so the training phrase is well formatted when the parts are concatenated. If the training phrase does not need to be annotated with parameters, you just need a single part with only the [Part.text][google.cloud.dialogflow.v2beta1.Intent.TrainingPhrase.Part.text] field set. If you want to annotate the training phrase, you must create multiple parts, where the fields of each part are populated in one of two ways: - `Part.text` is set to a part of the phrase that has no parameters. - `Part.text` is set to a part of the phrase that you want to annotate, and the `entity_type`, `alias`, and `user_defined` fields are all set.
Optional. Indicates how many times this example was added to the intent. Each time a developer adds an existing sample by editing an intent or training, this counter is increased.
Represents a part of a training phrase.
Used in:
Required. The text for this part.
Optional. The entity type name prefixed with `@`. This field is required for annotated parts of the training phrase.
Optional. The parameter name for the value extracted from the annotated part of the example. This field is required for annotated parts of the training phrase.
Optional. Indicates whether the text was manually annotated. This field is set to true when the Dialogflow Console is used to manually annotate the part. When creating an annotated part with the API, you must set this to true.
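For example, an annotated training phrase might be assembled from parts like this. The sketch uses plain dicts; the alias is invented, while `@sys.geo-city` follows Dialogflow's system entity naming:

```python
# Hypothetical annotated training phrase: "book a flight to Paris".
# Note the trailing whitespace in the first, unannotated part, so the
# parts concatenate into a well-formatted phrase.
training_phrase = {
    "type": "EXAMPLE",
    "parts": [
        {"text": "book a flight to "},
        {
            "text": "Paris",
            "entity_type": "@sys.geo-city",
            "alias": "destination",
            "user_defined": True,  # required when annotating via the API
        },
    ],
}

phrase_text = "".join(part["text"] for part in training_phrase["parts"])
```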
Represents different types of training phrases.
Used in:
Not specified. This value should never be used.
Examples do not contain @-prefixed entity type names, but example parts can be annotated with entity types.
Templates are not annotated with entity types, but they can contain @-prefixed entity type names as substrings. Template mode has been deprecated. Example mode is the only supported way to create new training phrases. If you have existing training phrases that you've created in template mode, those will continue to work.
Represents the different states that webhooks can be in.
Used in:
Webhook is disabled in the agent and in the intent.
Webhook is enabled in the agent and in the intent.
Webhook is enabled in the agent and in the intent. Also, each slot filling prompt is forwarded to the webhook.
This message is a wrapper around a collection of intents.
Used in:
A collection of intents.
Represents the options for views of an intent. An intent can be a sizable object. Therefore, we provide a resource view that does not return training phrases in the response by default.
Used in:
Training phrases field is not populated in the response.
All fields are populated.
Represents the result of querying a Knowledge base.
Used in:
A list of answers from Knowledge Connector.
An answer from Knowledge Connector.
Used in:
Indicates which Knowledge Document this answer was extracted from. Format: `projects/<Project ID>/knowledgeBases/<Knowledge Base ID>/documents/<Document ID>`.
The corresponding FAQ question if the answer was extracted from a FAQ Document, empty otherwise.
The piece of text from the `source` knowledge base document that answers this conversational query.
The system's confidence level that this knowledge answer is a good match for this conversational query. NOTE: The confidence level for a given `<query, answer>` pair may change without notice, as it depends on models that are constantly being improved. However, it will change less frequently than the confidence score below, and should be preferred for referencing the quality of an answer.
The system's confidence score that this Knowledge answer is a good match for this conversational query. The range is from 0.0 (completely uncertain) to 1.0 (completely certain). Note: The confidence score is likely to vary somewhat (possibly even for identical requests), as the underlying model is under constant improvement. It may be deprecated in the future. We recommend using `match_confidence_level` which should be generally more stable.
Represents the system's confidence that this knowledge answer is a good match for this conversational query.
Used in:
Not specified.
Indicates that the confidence is low.
Indicates our confidence is medium.
Indicates our confidence is high.
Represents knowledge base resource. Note: The `projects.agent.knowledgeBases` resource is deprecated; only use `projects.knowledgeBases`.
Used as response type in: KnowledgeBases.CreateKnowledgeBase, KnowledgeBases.GetKnowledgeBase, KnowledgeBases.UpdateKnowledgeBase
Used as field type in:
The knowledge base resource name. The name must be empty when creating a knowledge base. Format: `projects/<Project ID>/knowledgeBases/<Knowledge Base ID>`.
Required. The display name of the knowledge base. The name must be 1024 bytes or less; otherwise, the creation request fails.
The language of the knowledge base. When the knowledge base is created or updated, this field is populated for all non-en-us languages. If not populated, the default language en-us applies.
Metadata in google::longrunning::Operation for Knowledge operations.
Required. The current state of this operation.
States of the operation.
Used in:
State unspecified.
The operation has been created.
The operation is currently running.
The operation is done, either cancelled or completed.
Represents the contents of the original request that was passed to the `[Streaming]DetectIntent` call.
Used in:
The source of this request, e.g., `google`, `facebook`, `slack`. It is set by Dialogflow-owned servers.
Optional. The version of the protocol used for this request. This field is AoG-specific.
Optional. This field is set to the value of the `QueryParameters.payload` field passed in the request. Some integrations that query a Dialogflow agent may provide additional information in the payload. In particular for the Telephony Gateway this field has the form: <pre>{ "telephony": { "caller_id": "+18558363987" } }</pre> Note: The caller ID field (`caller_id`) will be redacted for Standard Edition agents and populated with the caller ID in [E.164 format](https://en.wikipedia.org/wiki/E.164) for Enterprise Edition agents.
Instructs the speech synthesizer how to generate the output audio content.
Used in:
Required. Audio encoding of the synthesized audio content.
Optional. The synthesis sample rate (in hertz) for this audio. If not provided, then the synthesizer will use the default sample rate based on the audio encoding. If this is different from the voice's natural sample rate, then the synthesizer will honor this request by converting to the desired sample rate (which might result in worse audio quality).
Optional. Configuration of how speech should be synthesized.
Audio encoding of the output audio format in Text-To-Speech.
Used in:
Not specified.
Uncompressed 16-bit signed little-endian samples (Linear PCM). Audio content returned as LINEAR16 also contains a WAV header.
MP3 audio.
Opus encoded audio wrapped in an ogg container. The result will be a file which can be played natively on Android, and in browsers (at least Chrome and Firefox). The quality of the encoding is considerably higher than MP3 while using approximately the same bitrate.
Represents the query input. It can contain either: 1. An audio config which instructs the speech recognizer how to process the speech audio. 2. A conversational query in the form of text. 3. An event that specifies which intent to trigger.
Used in:
Required. The input specification.
Instructs the speech recognizer how to process the speech audio.
The natural language text to be processed.
The event to be processed.
Represents the parameters of the conversational query.
Used in:
Optional. The time zone of this conversational query from the [time zone database](https://www.iana.org/time-zones), e.g., America/New_York, Europe/Paris. If not provided, the time zone specified in agent settings is used.
Optional. The geo location of this conversational query.
Optional. The collection of contexts to be activated before this query is executed.
Optional. Specifies whether to delete all contexts in the current session before the new ones are activated.
Optional. Additional session entity types to replace or extend developer entity types with. The entity synonyms apply to all languages and persist for the session of this query.
Optional. This field can be used to pass custom data into the webhook associated with the agent. Arbitrary JSON objects are supported.
Optional. KnowledgeBases to get alternative results from. If not set, the KnowledgeBases enabled in the agent (through UI) will be used. Format: `projects/<Project ID>/knowledgeBases/<Knowledge Base ID>`.
Optional. Configures the type of sentiment analysis to perform. If not provided, sentiment analysis is not performed. Note: Sentiment Analysis is only currently available for Enterprise Edition agents.
Represents the result of conversational query or event processing.
Used in:
The original conversational query text: - If natural language text was provided as input, `query_text` contains a copy of the input. - If natural language speech audio was provided as input, `query_text` contains the speech recognition result. If speech recognizer produced multiple alternatives, a particular one is picked. - If automatic spell correction is enabled, `query_text` will contain the corrected user input.
The language that was triggered during intent detection. See [Language Support](https://cloud.google.com/dialogflow/docs/reference/language) for a list of the currently supported language codes.
The Speech recognition confidence between 0.0 and 1.0. A higher number indicates an estimated greater likelihood that the recognized words are correct. The default of 0.0 is a sentinel value indicating that confidence was not set. This field is not guaranteed to be accurate or set. In particular this field isn't set for StreamingDetectIntent since the streaming endpoint has separate confidence estimates per portion of the audio in StreamingRecognitionResult.
The action name from the matched intent.
The collection of extracted parameters.
This field is set to: - `false` if the matched intent has required parameters and not all of the required parameter values have been collected. - `true` if all required parameter values have been collected, or if the matched intent doesn't contain any required parameters.
The text to be pronounced to the user or shown on the screen. Note: This is a legacy field, `fulfillment_messages` should be preferred.
The collection of rich messages to present to the user.
If the query was fulfilled by a webhook call, this field is set to the value of the `source` field returned in the webhook response.
If the query was fulfilled by a webhook call, this field is set to the value of the `payload` field returned in the webhook response.
The collection of output contexts. If applicable, `output_contexts.parameters` contains entries with name `<parameter name>.original` containing the original parameter values before the query.
The intent that matched the conversational query. Some, but not all, fields are filled in this message, including but not limited to: `name`, `display_name`, `end_interaction` and `is_fallback`.
The intent detection confidence. Values range from 0.0 (completely uncertain) to 1.0 (completely certain). This value is for informational purpose only and is only used to help match the best intent within the classification threshold. This value may change for the same end-user expression at any time due to a model retraining or change in implementation. If there are multiple `knowledge_answers` messages, this value is set to the greatest `knowledgeAnswers.match_confidence` value in the list.
The free-form diagnostic info. For example, this field could contain webhook call latency. The string keys of the Struct's fields map can change without notice.
The sentiment analysis result, which depends on the `sentiment_analysis_request_config` specified in the request.
The result from Knowledge Connector (if any), ordered by decreasing `KnowledgeAnswers.match_confidence`.
The sentiment, such as positive/negative feeling or association, for a unit of analysis, such as the query text.
Used in:
Sentiment score between -1.0 (negative sentiment) and 1.0 (positive sentiment).
A non-negative number in the [0, +inf) range, which represents the absolute magnitude of sentiment, regardless of score (positive or negative).
Configures the types of sentiment analysis to perform.
Used in:
Optional. Instructs the service to perform sentiment analysis on `query_text`. If not provided, sentiment analysis is not performed on `query_text`.
The result of sentiment analysis as configured by `sentiment_analysis_request_config`.
Used in:
The sentiment analysis result for `query_text`.
Represents a session entity type. Extends or replaces a developer entity type at the user session level (we refer to the entity types defined at the agent level as "developer entity types"). Note: session entity types apply to all queries, regardless of the language.
Used as response type in: SessionEntityTypes.CreateSessionEntityType, SessionEntityTypes.GetSessionEntityType, SessionEntityTypes.UpdateSessionEntityType
Used as field type in:
Required. The unique identifier of this session entity type. Format: `projects/<Project ID>/agent/sessions/<Session ID>/entityTypes/<Entity Type Display Name>`, or `projects/<Project ID>/agent/environments/<Environment ID>/users/<User ID>/sessions/<Session ID>/entityTypes/<Entity Type Display Name>`. If `Environment ID` is not specified, we assume default 'draft' environment. If `User ID` is not specified, we assume default '-' user. `<Entity Type Display Name>` must be the display name of an existing entity type in the same agent that will be overridden or supplemented.
Required. Indicates whether the additional data should override or supplement the developer entity type definition.
Required. The collection of entities associated with this session entity type.
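Putting the fields above together, a hypothetical session entity type that supplements a developer entity type might look like this sketch (the project ID, session ID, entity type name, and entity values are all invented):

```python
# Hypothetical session entity type that supplements a developer entity type
# named "pizza-size" with one extra entity for this session only.
session_entity_type = {
    "name": (
        "projects/my-project/agent/sessions/session-1"
        "/entityTypes/pizza-size"
    ),
    "entity_override_mode": "ENTITY_OVERRIDE_MODE_SUPPLEMENT",
    "entities": [
        {"value": "extra-large", "synonyms": ["XL", "extra large"]},
    ],
}
```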
The types of modifications for a session entity type.
Used in:
Not specified. This value should never be used.
The collection of session entities overrides the collection of entities in the corresponding developer entity type.
The collection of session entities extends the collection of entities in the corresponding developer entity type. Note: Even in this override mode calls to `ListSessionEntityTypes`, `GetSessionEntityType`, `CreateSessionEntityType` and `UpdateSessionEntityType` only return the additional entities added in this session entity type. If you want to get the supplemented list, please call [EntityTypes.GetEntityType][google.cloud.dialogflow.v2beta1.EntityTypes.GetEntityType] on the developer entity type and merge.
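As a sketch of the override modes above, a `SessionEntityType` in its REST/JSON form might look like the following; the project, session, and entity values are placeholders:

```python
# Hypothetical session path; substitute your own project and session IDs.
session = "projects/my-project/agent/sessions/session-123"

# Supplements an assumed developer entity type whose display name is
# "fruit" with one extra entity for this session only.
session_entity_type = {
    "name": f"{session}/entityTypes/fruit",
    # SUPPLEMENT extends the developer entity type's entities;
    # OVERRIDE would replace them entirely for this session.
    "entityOverrideMode": "ENTITY_OVERRIDE_MODE_SUPPLEMENT",
    "entities": [
        {"value": "dragonfruit", "synonyms": ["dragonfruit", "pitaya"]},
    ],
}
```

Note that, as documented above, listing or getting this session entity type returns only the supplemental entities, not the merged collection.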
Hints for the speech recognizer to help with recognition in a specific conversation state.
Used in:
Optional. A list of strings containing words and phrases that the speech recognizer should recognize with higher likelihood. This list can be used to: * improve accuracy for words and phrases you expect the user to say, e.g. typical commands for your Dialogflow agent * add additional words to the speech recognizer vocabulary * ... See the [Cloud Speech documentation](https://cloud.google.com/speech-to-text/quotas) for usage limits.
Optional. Boost for this context compared to other contexts: * If the boost is positive, Dialogflow will increase the probability that the phrases in this context are recognized over similar sounding phrases. * If the boost is unspecified or non-positive, Dialogflow will not apply any boost. Dialogflow recommends that you use boosts in the range (0, 20] and that you find a value that fits your use case with binary search.
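A minimal sketch of building such a `SpeechContext` in its JSON form; the helper name and phrase values are illustrative, not part of the API:

```python
def make_speech_context(phrases, boost=None):
    """Build a SpeechContext dict in its JSON form (illustrative helper).

    Per the docs above, an unspecified or non-positive boost means no
    boost is applied, so such values are simply omitted here. Recommended
    boosts fall in the range (0, 20].
    """
    context = {"phrases": list(phrases)}
    if boost is not None and boost > 0:
        context["boost"] = float(boost)
    return context
```

For example, `make_speech_context(["aisle seat", "window seat"], boost=10)` biases recognition toward seating phrases, while passing `boost=0` leaves the context unboosted.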
Variant of the specified [Speech model][google.cloud.dialogflow.v2beta1.InputAudioConfig.model] to use. See the [Cloud Speech documentation](https://cloud.google.com/speech-to-text/docs/enhanced-models) for which models have different variants. For example, the "phone_call" model has both a standard and an enhanced variant. When you use an enhanced model, you will generally receive higher quality results than for a standard model.
Used in:
No model variant specified. In this case Dialogflow defaults to USE_BEST_AVAILABLE.
Use the best available variant of the [Speech model][InputAudioConfig.model] that the caller is eligible for. Please see the [Dialogflow docs](https://cloud.google.com/dialogflow/docs/data-logging) for how to make your project eligible for enhanced models.
Use standard model variant even if an enhanced model is available. See the [Cloud Speech documentation](https://cloud.google.com/speech-to-text/docs/enhanced-models) for details about enhanced models.
Use an enhanced model variant: * If an enhanced variant does not exist for the given [model][google.cloud.dialogflow.v2beta1.InputAudioConfig.model] and request language, Dialogflow falls back to the standard variant. The [Cloud Speech documentation](https://cloud.google.com/speech-to-text/docs/enhanced-models) describes which models have enhanced variants. * If the API caller isn't eligible for enhanced models, Dialogflow returns an error. Please see the [Dialogflow docs](https://cloud.google.com/dialogflow/docs/data-logging) for how to make your project eligible.
Information for a word recognized by the speech recognizer.
Used in:
The word this info is for.
Time offset relative to the beginning of the audio that corresponds to the start of the spoken word. This is an experimental feature and the accuracy of the time offset can vary.
Time offset relative to the beginning of the audio that corresponds to the end of the spoken word. This is an experimental feature and the accuracy of the time offset can vary.
The Speech confidence between 0.0 and 1.0 for this word. A higher number indicates an estimated greater likelihood that the recognized word is correct. The default of 0.0 is a sentinel value indicating that confidence was not set. This field is not guaranteed to be fully stable over time for the same audio input. Users should also not rely on it to always be provided.
Gender of the voice as described in [SSML voice element](https://www.w3.org/TR/speech-synthesis11/#edef_voice).
Used in:
An unspecified gender, which means that the client doesn't care which gender the selected voice will have.
A male voice.
A female voice.
A gender-neutral voice.
Contains a speech recognition result corresponding to a portion of the audio that is currently being processed, or an indication that this is the end of the single requested utterance. Example:
1. transcript: "tube"
2. transcript: "to be a"
3. transcript: "to be"
4. transcript: "to be or not to be", is_final: true
5. transcript: " that's"
6. transcript: " that is"
7. message_type: `END_OF_SINGLE_UTTERANCE`
8. transcript: " that is the question", is_final: true
Only two of the responses contain final results (#4 and #8, indicated by `is_final: true`). Concatenating these generates the full transcript: "to be or not to be that is the question". In each response we populate: * for `TRANSCRIPT`: `transcript` and possibly `is_final`. * for `END_OF_SINGLE_UTTERANCE`: only `message_type`.
Used in:
Type of the result message.
Transcript text representing the words that the user spoke. Populated if and only if `message_type` = `TRANSCRIPT`.
If `false`, the `StreamingRecognitionResult` represents an interim result that may change. If `true`, the recognizer will not return any further hypotheses about this piece of the audio. May only be populated for `message_type` = `TRANSCRIPT`.
The Speech confidence between 0.0 and 1.0 for the current portion of audio. A higher number indicates an estimated greater likelihood that the recognized words are correct. The default of 0.0 is a sentinel value indicating that confidence was not set. This field is typically only provided if `is_final` is true and you should not rely on it being accurate or even set.
An estimate of the likelihood that the speech recognizer will not change its guess about this interim recognition result: * If the value is unspecified or 0.0, Dialogflow didn't compute the stability. In particular, Dialogflow will only provide stability for `TRANSCRIPT` results with `is_final = false`. * Otherwise, the value is in (0.0, 1.0] where 0.0 means completely unstable and 1.0 means completely stable.
Word-specific information for the words recognized by Speech in [transcript][google.cloud.dialogflow.v2beta1.StreamingRecognitionResult.transcript]. Populated if and only if `message_type` = `TRANSCRIPT` and [InputAudioConfig.enable_word_info] is set.
Time offset of the end of this Speech recognition result relative to the beginning of the audio. Only populated for `message_type` = `TRANSCRIPT`.
Type of the response message.
Used in:
Not specified. Should never be used.
Message contains a (possibly partial) transcript.
Event indicates that the server has detected the end of the user's speech utterance and expects no additional speech. Therefore, the server will not process additional audio (although it may subsequently return additional results). The client should stop sending additional audio data, half-close the gRPC connection, and wait for any additional results until the server closes the gRPC connection. This message is only sent if `single_utterance` was set to `true`, and is not used otherwise.
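The transcript-assembly behavior described above can be sketched with plain dicts mirroring `StreamingRecognitionResult`; the field names assume the REST-style camelCase JSON representation:

```python
def assemble_transcript(results):
    """Concatenate the transcripts of final results, in order.

    `results` is an iterable of dicts mirroring StreamingRecognitionResult
    in JSON form; only `TRANSCRIPT` messages with `isFinal` set contribute
    to the full transcript.
    """
    return "".join(
        r["transcript"]
        for r in results
        if r.get("messageType", "TRANSCRIPT") == "TRANSCRIPT" and r.get("isFinal")
    )

# Interim results and the END_OF_SINGLE_UTTERANCE event are ignored.
stream = [
    {"transcript": "to be or not to be", "isFinal": True},
    {"transcript": " that's"},
    {"messageType": "END_OF_SINGLE_UTTERANCE"},
    {"transcript": " that is the question", "isFinal": True},
]
```

Calling `assemble_transcript(stream)` keeps only the two final results, matching the worked example above.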
Configuration of how speech should be synthesized.
Used in:
Optional. Speaking rate/speed, in the range [0.25, 4.0]. 1.0 is the normal native speed supported by the specific voice. 2.0 is twice as fast, and 0.5 is half as fast. If unset (0.0), defaults to the native 1.0 speed. Any other value outside [0.25, 4.0] returns an error.
Optional. Speaking pitch, in the range [-20.0, 20.0]. 20 means increase 20 semitones from the original pitch. -20 means decrease 20 semitones from the original pitch.
Optional. Volume gain (in dB) of the normal native volume supported by the specific voice, in the range [-96.0, 16.0]. If unset, or set to a value of 0.0 (dB), plays at the normal native signal amplitude. A value of -6.0 (dB) plays at approximately half the amplitude of the normal native signal; a value of +6.0 (dB) plays at approximately twice that amplitude. We strongly recommend not exceeding +10 (dB), as there is usually no effective increase in loudness beyond that.
Optional. An identifier which selects 'audio effects' profiles that are applied on (post-synthesized) text to speech. Effects are applied on top of each other in the order they are given.
Optional. The desired voice of the synthesized audio.
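A sketch of a `SynthesizeSpeechConfig` in its REST/JSON form, with illustrative values that stay inside the documented ranges:

```python
# Illustrative synthesis configuration; every value here is a placeholder
# chosen to fall within the documented bounds.
synthesize_speech_config = {
    "speakingRate": 1.1,    # [0.25, 4.0]; 1.0 is the native speed
    "pitch": -2.0,          # [-20.0, 20.0] semitones from the original pitch
    "volumeGainDb": 0.0,    # [-96.0, 16.0]; 0.0 plays at native amplitude
    "voice": {
        # A preference only: the service may substitute another gender
        # if no matching voice is available.
        "ssmlGender": "SSML_VOICE_GENDER_FEMALE",
    },
}
```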
Represents the natural language text to be processed.
Used in:
Required. The UTF-8 encoded natural language text to be processed. Text length must not exceed 256 characters.
Required. The language of this conversational query. See [Language Support](https://cloud.google.com/dialogflow/docs/reference/language) for a list of the currently supported language codes. Note that queries in the same session do not necessarily need to specify the same language.
Represents a single validation error.
Used in:
The severity of the error.
The names of the entries that the error is associated with. Format: - "projects/<Project ID>/agent", if the error is associated with the entire agent. - "projects/<Project ID>/agent/intents/<Intent ID>", if the error is associated with certain intents. - "projects/<Project ID>/agent/intents/<Intent Id>/trainingPhrases/<Training Phrase ID>", if the error is associated with certain intent training phrases. - "projects/<Project ID>/agent/intents/<Intent Id>/parameters/<Parameter ID>", if the error is associated with certain intent parameters. - "projects/<Project ID>/agent/entities/<Entity ID>", if the error is associated with certain entities.
The detailed error message.
Represents a level of severity.
Used in:
Not specified. This value should never be used.
The agent doesn't follow Dialogflow best practices.
The agent may not behave as expected.
The agent may experience partial failures.
The agent may completely fail.
Description of which voice to use for speech synthesis.
Used in:
Optional. The name of the voice. If not set, the service will choose a voice based on the other parameters such as language_code and gender.
Optional. The preferred gender of the voice. If not set, the service will choose a voice based on the other parameters such as language_code and name. Note that this is only a preference, not a requirement. If a voice of the appropriate gender is not available, the synthesizer should substitute a voice with a different gender rather than failing the request.
The request message for a webhook call.
The unique identifier of the detectIntent request session. Can be used to identify the end-user inside the webhook implementation. Format: `projects/<Project ID>/agent/sessions/<Session ID>`, or `projects/<Project ID>/agent/environments/<Environment ID>/users/<User ID>/sessions/<Session ID>`.
The unique identifier of the response. Contains the same value as `[Streaming]DetectIntentResponse.response_id`.
The result of the conversational query or event processing. Contains the same value as `[Streaming]DetectIntentResponse.query_result`.
Alternative query results from KnowledgeService.
Optional. The contents of the original request that was passed to `[Streaming]DetectIntent` call.
The response message for a webhook call.
Optional. The text to be shown on the screen. This value is passed directly to `QueryResult.fulfillment_text`.
Optional. The collection of rich messages to present to the user. This value is passed directly to `QueryResult.fulfillment_messages`.
Optional. This value is passed directly to `QueryResult.webhook_source`.
Optional. This value is passed directly to `QueryResult.webhook_payload`. See the related `fulfillment_messages[i].payload` field, which may be used as an alternative to this field. This field can be used for Actions on Google responses. It should have a structure similar to the JSON message shown here. For more information, see [Actions on Google Webhook Format](https://developers.google.com/actions/dialogflow/webhook) <pre>{
  "google": {
    "expectUserResponse": true,
    "richResponse": {
      "items": [
        {
          "simpleResponse": {
            "textToSpeech": "this is a simple response"
          }
        }
      ]
    }
  }
}</pre>
Optional. The collection of output contexts. This value is passed directly to `QueryResult.output_contexts`.
Optional. Makes the platform immediately invoke another `DetectIntent` call internally with the specified event as input. When this field is set, Dialogflow ignores the `fulfillment_text`, `fulfillment_messages`, and `payload` fields.
Optional. Indicates that this intent ends an interaction. Some integrations (e.g., Actions on Google or Dialogflow phone gateway) use this information to close interaction with an end user. Default is false.
Optional. Additional session entity types to replace or extend developer entity types with. The entity synonyms apply to all languages and persist for the session of this query. Setting session entity types inside a webhook overwrites the session entity types that have been set through `DetectIntentRequest.query_params.session_entity_types`.
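Putting the response fields together, a webhook implementation might return a body shaped like the following sketch; the helper name, context name, and parameter values are all illustrative placeholders:

```python
def make_webhook_response(text, session):
    """Build a WebhookResponse body in its JSON form (illustrative).

    `session` is the session path received in the WebhookRequest, e.g.
    "projects/<Project ID>/agent/sessions/<Session ID>".
    """
    return {
        # Shown on screen and passed to QueryResult.fulfillment_text.
        "fulfillmentText": text,
        # Rich messages; here just a plain text message.
        "fulfillmentMessages": [{"text": {"text": [text]}}],
        # Passed through to QueryResult.webhook_source.
        "source": "example-webhook",
        # Output contexts carry state into follow-up turns. The context
        # name and parameters below are hypothetical.
        "outputContexts": [
            {
                "name": f"{session}/contexts/order-followup",
                "lifespanCount": 2,
                "parameters": {"size": "large"},
            }
        ],
        # The conversation continues after this response.
        "endInteraction": False,
    }
```

Because `followup_event_input` is unset here, Dialogflow delivers the fulfillment text and messages as-is instead of triggering another `DetectIntent` call.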