Agents are best described as Natural Language Understanding (NLU) modules that transform user requests into actionable data. You can include agents in your app, product, or service to determine user intent and respond to the user in a natural way.

After you create an agent, you can add [Intents][google.cloud.dialogflow.v2.Intents], [Contexts][google.cloud.dialogflow.v2.Contexts], [Entity Types][google.cloud.dialogflow.v2.EntityTypes], [Webhooks][google.cloud.dialogflow.v2.WebhookRequest], and so on to manage the flow of a conversation and match user input to predefined intents and actions.

You can create an agent using both Dialogflow Standard Edition and Dialogflow Enterprise Edition. For details, see [Dialogflow Editions](https://cloud.google.com/dialogflow/docs/editions).

You can save your agent for backup or versioning by exporting it with the [ExportAgent][google.cloud.dialogflow.v2.Agents.ExportAgent] method, and import a saved agent with the [ImportAgent][google.cloud.dialogflow.v2.Agents.ImportAgent] method.

Dialogflow provides several [prebuilt agents](https://cloud.google.com/dialogflow/docs/agents-prebuilt) for common conversation scenarios such as determining a date and time, converting currency, and so on.

For more information about agents, see the [Dialogflow documentation](https://cloud.google.com/dialogflow/docs/agents-overview).
Retrieves the specified agent.
The request message for [Agents.GetAgent][google.cloud.dialogflow.v2.Agents.GetAgent].
Required. The project that the agent to fetch is associated with. Format: `projects/<Project ID>`.
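For illustration, a minimal sketch of fetching an agent with the `google-cloud-dialogflow` Python client; the client library choice and the `your-project-id` placeholder are assumptions, not part of this reference:

```python
from google.cloud import dialogflow

agents_client = dialogflow.AgentsClient()

# GetAgent takes the project as `parent`, in the form `projects/<Project ID>`.
parent = "projects/your-project-id"
agent = agents_client.get_agent(request={"parent": parent})

print(agent.display_name, agent.default_language_code)
```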
Creates/updates the specified agent.
The request message for [Agents.SetAgent][google.cloud.dialogflow.v2.Agents.SetAgent].
Required. The agent to update.
Optional. The mask to control which fields get updated.
Deletes the specified agent.
The request message for [Agents.DeleteAgent][google.cloud.dialogflow.v2.Agents.DeleteAgent].
Required. The project that the agent to delete is associated with. Format: `projects/<Project ID>`.
Returns the list of agents. Since there is at most one conversational agent per project, this method is useful primarily for listing all agents across projects the caller has access to. One can achieve that with a wildcard project collection id "-". Refer to [List Sub-Collections](https://cloud.google.com/apis/design/design_patterns#list_sub-collections).
The request message for [Agents.SearchAgents][google.cloud.dialogflow.v2.Agents.SearchAgents].
Required. The project to list agents from. Format: `projects/<Project ID or '-'>`.
Optional. The maximum number of items to return in a single page. By default 100 and at most 1000.
The next_page_token value returned from a previous list request.
The response message for [Agents.SearchAgents][google.cloud.dialogflow.v2.Agents.SearchAgents].
The list of agents. There will be a maximum number of items returned based on the page_size field in the request.
Token to retrieve the next page of results, or empty if there are no more results in the list.
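A sketch of the wildcard listing described above, again with the Python client; `projects/-` spans all projects the caller can access, and the client pages through results automatically:

```python
from google.cloud import dialogflow

agents_client = dialogflow.AgentsClient()

# The "-" wildcard project collection ID lists agents across projects.
for agent in agents_client.search_agents(request={"parent": "projects/-"}):
    print(agent.parent, agent.display_name)
```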
Trains the specified agent. Operation <response: [google.protobuf.Empty][google.protobuf.Empty]>
The request message for [Agents.TrainAgent][google.cloud.dialogflow.v2.Agents.TrainAgent].
Required. The project that the agent to train is associated with. Format: `projects/<Project ID>`.
Exports the specified agent to a ZIP file. Operation <response: [ExportAgentResponse][google.cloud.dialogflow.v2.ExportAgentResponse]>
The request message for [Agents.ExportAgent][google.cloud.dialogflow.v2.Agents.ExportAgent].
Required. The project that the agent to export is associated with. Format: `projects/<Project ID>`.
Required. The [Google Cloud Storage](https://cloud.google.com/storage/docs/) URI to export the agent to. The format of this URI must be `gs://<bucket-name>/<object-name>`. If left unspecified, the serialized agent is returned inline.
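A sketch of exporting an agent to Cloud Storage with the Python client; the bucket and object names are placeholders. ExportAgent is a long-running operation, so the example blocks on its result:

```python
from google.cloud import dialogflow

agents_client = dialogflow.AgentsClient()

operation = agents_client.export_agent(
    request={
        "parent": "projects/your-project-id",
        "agent_uri": "gs://your-bucket/agent-backup.zip",  # placeholder bucket
    }
)
# Wait for the LRO; the result is an ExportAgentResponse.
response = operation.result(timeout=300)
print(response.agent_uri)
```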
Imports the specified agent from a ZIP file. Uploads new intents and entity types without deleting the existing ones. Intents and entity types with the same name are replaced with the new versions from ImportAgentRequest. Operation <response: [google.protobuf.Empty][google.protobuf.Empty]>
The request message for [Agents.ImportAgent][google.cloud.dialogflow.v2.Agents.ImportAgent].
Required. The project that the agent to import is associated with. Format: `projects/<Project ID>`.
Required. The agent to import.
The URI to a Google Cloud Storage file containing the agent to import. Note: The URI must start with "gs://".
Zip compressed raw byte content for agent.
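Correspondingly, a sketch of importing an agent by sending the ZIP bytes inline through `agent_content`; the local file name is a placeholder:

```python
from google.cloud import dialogflow

agents_client = dialogflow.AgentsClient()

# Read a previously exported agent ZIP from disk.
with open("agent-backup.zip", "rb") as f:
    agent_content = f.read()

operation = agents_client.import_agent(
    request={
        "parent": "projects/your-project-id",
        "agent_content": agent_content,
    }
)
operation.result(timeout=300)  # completes with google.protobuf.Empty
```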
Restores the specified agent from a ZIP file. Replaces the current agent version with a new one. All the intents and entity types in the older version are deleted. Operation <response: [google.protobuf.Empty][google.protobuf.Empty]>
The request message for [Agents.RestoreAgent][google.cloud.dialogflow.v2.Agents.RestoreAgent].
Required. The project that the agent to restore is associated with. Format: `projects/<Project ID>`.
Required. The agent to restore.
The URI to a Google Cloud Storage file containing the agent to restore. Note: The URI must start with "gs://".
Zip compressed raw byte content for agent.
A context represents additional information included with user input or with an intent returned by the Dialogflow API. Contexts are helpful for differentiating user input which may be vague or have a different meaning depending on additional details from your application such as user settings and preferences, previous user input, where the user is in your application, geographic location, and so on.

You can include contexts as input parameters of a [DetectIntent][google.cloud.dialogflow.v2.Sessions.DetectIntent] (or [StreamingDetectIntent][google.cloud.dialogflow.v2.Sessions.StreamingDetectIntent]) request, or as output contexts included in the returned intent.

Contexts expire when an intent is matched, after the number of `DetectIntent` requests specified by the `lifespan_count` parameter, or after 20 minutes if no intents are matched for a `DetectIntent` request.

For more information about contexts, see the [Dialogflow documentation](https://cloud.google.com/dialogflow/docs/contexts-overview).
Returns the list of all contexts in the specified session.
The request message for [Contexts.ListContexts][google.cloud.dialogflow.v2.Contexts.ListContexts].
Required. The session to list all contexts from. Format: `projects/<Project ID>/agent/sessions/<Session ID>`.
Optional. The maximum number of items to return in a single page. By default 100 and at most 1000.
Optional. The next_page_token value returned from a previous list request.
The response message for [Contexts.ListContexts][google.cloud.dialogflow.v2.Contexts.ListContexts].
The list of contexts. There will be a maximum number of items returned based on the page_size field in the request.
Token to retrieve the next page of results, or empty if there are no more results in the list.
Retrieves the specified context.
The request message for [Contexts.GetContext][google.cloud.dialogflow.v2.Contexts.GetContext].
Required. The name of the context. Format: `projects/<Project ID>/agent/sessions/<Session ID>/contexts/<Context ID>`.
Creates a context. If the specified context already exists, overrides the context.
The request message for [Contexts.CreateContext][google.cloud.dialogflow.v2.Contexts.CreateContext].
Required. The session to create a context for. Format: `projects/<Project ID>/agent/sessions/<Session ID>`.
Required. The context to create.
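A minimal sketch of creating a context with the Python client; the project, session, and context IDs are placeholders. Note that the context `name` embeds the session it belongs to:

```python
from google.cloud import dialogflow

contexts_client = dialogflow.ContextsClient()

session = dialogflow.SessionsClient.session_path("your-project-id", "your-session-id")
context = dialogflow.Context(
    name=f"{session}/contexts/pizza-order",  # placeholder context ID
    lifespan_count=5,  # expires after five DetectIntent requests
)
response = contexts_client.create_context(
    request={"parent": session, "context": context}
)
print(response.name)
```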
Updates the specified context.
The request message for [Contexts.UpdateContext][google.cloud.dialogflow.v2.Contexts.UpdateContext].
Required. The context to update.
Optional. The mask to control which fields get updated.
Deletes the specified context.
The request message for [Contexts.DeleteContext][google.cloud.dialogflow.v2.Contexts.DeleteContext].
Required. The name of the context to delete. Format: `projects/<Project ID>/agent/sessions/<Session ID>/contexts/<Context ID>`.
Deletes all active contexts in the specified session.
The request message for [Contexts.DeleteAllContexts][google.cloud.dialogflow.v2.Contexts.DeleteAllContexts].
Required. The name of the session to delete all contexts from. Format: `projects/<Project ID>/agent/sessions/<Session ID>`.
Entities are extracted from user input and represent parameters that are meaningful to your application. For example, a date range, a proper name such as a geographic location or landmark, and so on. Entities represent actionable data for your application.

When you define an entity, you can also include synonyms that all map to that entity. For example, "soft drink", "soda", "pop", and so on.

There are three types of entities:

* **System** - entities that are defined by the Dialogflow API for common data types such as date, time, currency, and so on. A system entity is represented by the `EntityType` type.
* **Developer** - entities that are defined by you that represent actionable data that is meaningful to your application. For example, you could define a `pizza.sauce` entity for red or white pizza sauce, a `pizza.cheese` entity for the different types of cheese on a pizza, a `pizza.topping` entity for different toppings, and so on. A developer entity is represented by the `EntityType` type.
* **User** - entities that are built for an individual user such as favorites, preferences, playlists, and so on. A user entity is represented by the [SessionEntityType][google.cloud.dialogflow.v2.SessionEntityType] type.

For more information about entity types, see the [Dialogflow documentation](https://cloud.google.com/dialogflow/docs/entities-overview).
Returns the list of all entity types in the specified agent.
The request message for [EntityTypes.ListEntityTypes][google.cloud.dialogflow.v2.EntityTypes.ListEntityTypes].
Required. The agent to list all entity types from. Format: `projects/<Project ID>/agent`.
Optional. The language to list entity synonyms for. If not specified, the agent's default language is used. [Many languages](https://cloud.google.com/dialogflow/docs/reference/language) are supported. Note: languages must be enabled in the agent before they can be used.
Optional. The maximum number of items to return in a single page. By default 100 and at most 1000.
Optional. The next_page_token value returned from a previous list request.
The response message for [EntityTypes.ListEntityTypes][google.cloud.dialogflow.v2.EntityTypes.ListEntityTypes].
The list of agent entity types. There will be a maximum number of items returned based on the page_size field in the request.
Token to retrieve the next page of results, or empty if there are no more results in the list.
Retrieves the specified entity type.
The request message for [EntityTypes.GetEntityType][google.cloud.dialogflow.v2.EntityTypes.GetEntityType].
Required. The name of the entity type. Format: `projects/<Project ID>/agent/entityTypes/<EntityType ID>`.
Optional. The language to retrieve entity synonyms for. If not specified, the agent's default language is used. [Many languages](https://cloud.google.com/dialogflow/docs/reference/language) are supported. Note: languages must be enabled in the agent before they can be used.
Creates an entity type in the specified agent.
The request message for [EntityTypes.CreateEntityType][google.cloud.dialogflow.v2.EntityTypes.CreateEntityType].
Required. The agent to create an entity type for. Format: `projects/<Project ID>/agent`.
Required. The entity type to create.
Optional. The language of entity synonyms defined in `entity_type`. If not specified, the agent's default language is used. [Many languages](https://cloud.google.com/dialogflow/docs/reference/language) are supported. Note: languages must be enabled in the agent before they can be used.
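A sketch of creating a `KIND_MAP` entity type with one entry, using the Python client; the display name and entries are placeholders borrowed from the *vegetable* example used elsewhere in this reference:

```python
from google.cloud import dialogflow

entity_types_client = dialogflow.EntityTypesClient()
parent = dialogflow.AgentsClient.agent_path("your-project-id")

entity_type = dialogflow.EntityType(
    display_name="vegetable",
    kind=dialogflow.EntityType.Kind.KIND_MAP,
    entities=[
        # Canonical value plus synonyms that map to it.
        dialogflow.EntityType.Entity(
            value="scallions", synonyms=["scallions", "green onions"]
        )
    ],
)
response = entity_types_client.create_entity_type(
    request={"parent": parent, "entity_type": entity_type}
)
print(response.name)
```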
Updates the specified entity type.
The request message for [EntityTypes.UpdateEntityType][google.cloud.dialogflow.v2.EntityTypes.UpdateEntityType].
Required. The entity type to update.
Optional. The language of entity synonyms defined in `entity_type`. If not specified, the agent's default language is used. [Many languages](https://cloud.google.com/dialogflow/docs/reference/language) are supported. Note: languages must be enabled in the agent before they can be used.
Optional. The mask to control which fields get updated.
Deletes the specified entity type.
The request message for [EntityTypes.DeleteEntityType][google.cloud.dialogflow.v2.EntityTypes.DeleteEntityType].
Required. The name of the entity type to delete. Format: `projects/<Project ID>/agent/entityTypes/<EntityType ID>`.
Updates/Creates multiple entity types in the specified agent. Operation <response: [BatchUpdateEntityTypesResponse][google.cloud.dialogflow.v2.BatchUpdateEntityTypesResponse]>
The request message for [EntityTypes.BatchUpdateEntityTypes][google.cloud.dialogflow.v2.EntityTypes.BatchUpdateEntityTypes].
Required. The name of the agent to update or create entity types in. Format: `projects/<Project ID>/agent`.
The source of the entity type batch.

For each entity type in the batch:

* If `name` is specified, we update an existing entity type.
* If `name` is not specified, we create a new entity type.
The URI to a Google Cloud Storage file containing entity types to update or create. The file format can either be a serialized proto (of EntityBatch type) or a JSON object. Note: The URI must start with "gs://".
The collection of entity types to update or create.
Optional. The language of entity synonyms defined in `entity_types`. If not specified, the agent's default language is used. [Many languages](https://cloud.google.com/dialogflow/docs/reference/language) are supported. Note: languages must be enabled in the agent before they can be used.
Optional. The mask to control which fields get updated.
Deletes entity types in the specified agent. Operation <response: [google.protobuf.Empty][google.protobuf.Empty]>
The request message for [EntityTypes.BatchDeleteEntityTypes][google.cloud.dialogflow.v2.EntityTypes.BatchDeleteEntityTypes].
Required. The name of the agent to delete all entity types from. Format: `projects/<Project ID>/agent`.
Required. The names of the entity types to delete. All names must point to the same agent as `parent`.
Creates multiple new entities in the specified entity type. Operation <response: [google.protobuf.Empty][google.protobuf.Empty]>
The request message for [EntityTypes.BatchCreateEntities][google.cloud.dialogflow.v2.EntityTypes.BatchCreateEntities].
Required. The name of the entity type to create entities in. Format: `projects/<Project ID>/agent/entityTypes/<Entity Type ID>`.
Required. The entities to create.
Optional. The language of entity synonyms defined in `entities`. If not specified, the agent's default language is used. [Many languages](https://cloud.google.com/dialogflow/docs/reference/language) are supported. Note: languages must be enabled in the agent before they can be used.
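A sketch of the batch-create call with the Python client; the entity type ID and entity values are placeholders, and the call is a long-running operation:

```python
from google.cloud import dialogflow

entity_types_client = dialogflow.EntityTypesClient()
parent = dialogflow.EntityTypesClient.entity_type_path(
    "your-project-id", "your-entity-type-id"
)

entities = [
    dialogflow.EntityType.Entity(value="basil", synonyms=["basil"]),
    dialogflow.EntityType.Entity(value="oregano", synonyms=["oregano"]),
]
operation = entity_types_client.batch_create_entities(
    request={"parent": parent, "entities": entities}
)
operation.result(timeout=120)  # completes with google.protobuf.Empty
```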
Updates or creates multiple entities in the specified entity type. This method does not affect entities in the entity type that aren't explicitly specified in the request. Operation <response: [google.protobuf.Empty][google.protobuf.Empty]>
The request message for [EntityTypes.BatchUpdateEntities][google.cloud.dialogflow.v2.EntityTypes.BatchUpdateEntities].
Required. The name of the entity type to update or create entities in. Format: `projects/<Project ID>/agent/entityTypes/<Entity Type ID>`.
Required. The entities to update or create.
Optional. The language of entity synonyms defined in `entities`. If not specified, the agent's default language is used. [Many languages](https://cloud.google.com/dialogflow/docs/reference/language) are supported. Note: languages must be enabled in the agent before they can be used.
Optional. The mask to control which fields get updated.
Deletes entities in the specified entity type. Operation <response: [google.protobuf.Empty][google.protobuf.Empty]>
The request message for [EntityTypes.BatchDeleteEntities][google.cloud.dialogflow.v2.EntityTypes.BatchDeleteEntities].
Required. The name of the entity type to delete entries for. Format: `projects/<Project ID>/agent/entityTypes/<Entity Type ID>`.
Required. The canonical `values` of the entities to delete. Note that these are not fully-qualified names, i.e. they don't start with `projects/<Project ID>`.
Optional. The language of entity synonyms defined in `entities`. If not specified, the agent's default language is used. [Many languages](https://cloud.google.com/dialogflow/docs/reference/language) are supported. Note: languages must be enabled in the agent before they can be used.
An intent represents a mapping between input from a user and an action to be taken by your application. When you pass user input to the [DetectIntent][google.cloud.dialogflow.v2.Sessions.DetectIntent] (or [StreamingDetectIntent][google.cloud.dialogflow.v2.Sessions.StreamingDetectIntent]) method, the Dialogflow API analyzes the input and searches for a matching intent. If no match is found, the Dialogflow API returns a fallback intent (`is_fallback` = true).

You can provide additional information for the Dialogflow API to use to match user input to an intent by adding the following to your intent.

* **Contexts** - provide additional context for intent analysis. For example, if an intent is related to an object in your application that plays music, you can provide a context to determine when to match the intent if the user input is "turn it off". You can include a context that matches the intent when there is previous user input of "play music", and not when there is previous user input of "turn on the light".
* **Events** - allow for matching an intent by using an event name instead of user input. Your application can provide an event name and related parameters to the Dialogflow API to match an intent. For example, when your application starts, you can send a welcome event with a user name parameter to the Dialogflow API to match an intent with a personalized welcome message for the user.
* **Training phrases** - provide examples of user input to train the Dialogflow API agent to better match intents.

For more information about intents, see the [Dialogflow documentation](https://cloud.google.com/dialogflow/docs/intents-overview).
Returns the list of all intents in the specified agent.
The request message for [Intents.ListIntents][google.cloud.dialogflow.v2.Intents.ListIntents].
Required. The agent to list all intents from. Format: `projects/<Project ID>/agent`.
Optional. The language to list training phrases, parameters and rich messages for. If not specified, the agent's default language is used. [Many languages](https://cloud.google.com/dialogflow/docs/reference/language) are supported. Note: languages must be enabled in the agent before they can be used.
Optional. The resource view to apply to the returned intent.
Optional. The maximum number of items to return in a single page. By default 100 and at most 1000.
Optional. The next_page_token value returned from a previous list request.
The response message for [Intents.ListIntents][google.cloud.dialogflow.v2.Intents.ListIntents].
The list of agent intents. There will be a maximum number of items returned based on the page_size field in the request.
Token to retrieve the next page of results, or empty if there are no more results in the list.
Retrieves the specified intent.
The request message for [Intents.GetIntent][google.cloud.dialogflow.v2.Intents.GetIntent].
Required. The name of the intent. Format: `projects/<Project ID>/agent/intents/<Intent ID>`.
Optional. The language to retrieve training phrases, parameters and rich messages for. If not specified, the agent's default language is used. [Many languages](https://cloud.google.com/dialogflow/docs/reference/language) are supported. Note: languages must be enabled in the agent before they can be used.
Optional. The resource view to apply to the returned intent.
Creates an intent in the specified agent.
The request message for [Intents.CreateIntent][google.cloud.dialogflow.v2.Intents.CreateIntent].
Required. The agent to create an intent for. Format: `projects/<Project ID>/agent`.
Required. The intent to create.
Optional. The language of training phrases, parameters and rich messages defined in `intent`. If not specified, the agent's default language is used. [Many languages](https://cloud.google.com/dialogflow/docs/reference/language) are supported. Note: languages must be enabled in the agent before they can be used.
Optional. The resource view to apply to the returned intent.
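A sketch of creating a simple intent with one training phrase and one text response, loosely following the official Python samples; all display strings are placeholders:

```python
from google.cloud import dialogflow

intents_client = dialogflow.IntentsClient()
parent = dialogflow.AgentsClient.agent_path("your-project-id")

# One unannotated training phrase built from a single text part.
part = dialogflow.Intent.TrainingPhrase.Part(text="order a pizza")
training_phrase = dialogflow.Intent.TrainingPhrase(parts=[part])

# One plain text response.
text = dialogflow.Intent.Message.Text(text=["What toppings would you like?"])
message = dialogflow.Intent.Message(text=text)

intent = dialogflow.Intent(
    display_name="order.pizza",
    training_phrases=[training_phrase],
    messages=[message],
)
response = intents_client.create_intent(
    request={"parent": parent, "intent": intent}
)
print(response.name)
```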
Updates the specified intent.
The request message for [Intents.UpdateIntent][google.cloud.dialogflow.v2.Intents.UpdateIntent].
Required. The intent to update.
Optional. The language of training phrases, parameters and rich messages defined in `intent`. If not specified, the agent's default language is used. [Many languages](https://cloud.google.com/dialogflow/docs/reference/language) are supported. Note: languages must be enabled in the agent before they can be used.
Optional. The mask to control which fields get updated.
Optional. The resource view to apply to the returned intent.
Deletes the specified intent and its direct or indirect followup intents.
The request message for [Intents.DeleteIntent][google.cloud.dialogflow.v2.Intents.DeleteIntent].
Required. The name of the intent to delete. If this intent has direct or indirect followup intents, we also delete them. Format: `projects/<Project ID>/agent/intents/<Intent ID>`.
Updates/Creates multiple intents in the specified agent. Operation <response: [BatchUpdateIntentsResponse][google.cloud.dialogflow.v2.BatchUpdateIntentsResponse]>
The request message for [Intents.BatchUpdateIntents][google.cloud.dialogflow.v2.Intents.BatchUpdateIntents].
Required. The name of the agent to update or create intents in. Format: `projects/<Project ID>/agent`.
The source of the intent batch.
The URI to a Google Cloud Storage file containing intents to update or create. The file format can either be a serialized proto (of IntentBatch type) or JSON object. Note: The URI must start with "gs://".
The collection of intents to update or create.
Optional. The language of training phrases, parameters and rich messages defined in `intents`. If not specified, the agent's default language is used. [Many languages](https://cloud.google.com/dialogflow/docs/reference/language) are supported. Note: languages must be enabled in the agent before they can be used.
Optional. The mask to control which fields get updated.
Optional. The resource view to apply to the returned intent.
Deletes intents in the specified agent. Operation <response: [google.protobuf.Empty][google.protobuf.Empty]>
The request message for [Intents.BatchDeleteIntents][google.cloud.dialogflow.v2.Intents.BatchDeleteIntents].
Required. The name of the agent to delete intents from. Format: `projects/<Project ID>/agent`.
Required. The collection of intents to delete. Only intent `name` must be filled in.
Entities are extracted from user input and represent parameters that are meaningful to your application. For example, a date range, a proper name such as a geographic location or landmark, and so on. Entities represent actionable data for your application.

Session entity types are referred to as **User** entity types and are entities that are built for an individual user such as favorites, preferences, playlists, and so on. You can redefine a session entity type at the session level.

Session entity methods do not work with Google Assistant integration. Contact Dialogflow support if you need to use session entities with Google Assistant integration.

For more information about entity types, see the [Dialogflow documentation](https://cloud.google.com/dialogflow/docs/entities-overview).
Returns the list of all session entity types in the specified session. This method doesn't work with Google Assistant integration. Contact Dialogflow support if you need to use session entities with Google Assistant integration.
The request message for [SessionEntityTypes.ListSessionEntityTypes][google.cloud.dialogflow.v2.SessionEntityTypes.ListSessionEntityTypes].
Required. The session to list all session entity types from. Format: `projects/<Project ID>/agent/sessions/<Session ID>`.
Optional. The maximum number of items to return in a single page. By default 100 and at most 1000.
Optional. The next_page_token value returned from a previous list request.
The response message for [SessionEntityTypes.ListSessionEntityTypes][google.cloud.dialogflow.v2.SessionEntityTypes.ListSessionEntityTypes].
The list of session entity types. There will be a maximum number of items returned based on the page_size field in the request.
Token to retrieve the next page of results, or empty if there are no more results in the list.
Retrieves the specified session entity type. This method doesn't work with Google Assistant integration. Contact Dialogflow support if you need to use session entities with Google Assistant integration.
The request message for [SessionEntityTypes.GetSessionEntityType][google.cloud.dialogflow.v2.SessionEntityTypes.GetSessionEntityType].
Required. The name of the session entity type. Format: `projects/<Project ID>/agent/sessions/<Session ID>/entityTypes/<Entity Type Display Name>`.
Creates a session entity type. If the specified session entity type already exists, overrides the session entity type. This method doesn't work with Google Assistant integration. Contact Dialogflow support if you need to use session entities with Google Assistant integration.
The request message for [SessionEntityTypes.CreateSessionEntityType][google.cloud.dialogflow.v2.SessionEntityTypes.CreateSessionEntityType].
Required. The session to create a session entity type for. Format: `projects/<Project ID>/agent/sessions/<Session ID>`.
Required. The session entity type to create.
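A sketch of overriding an entity type for a single session with the Python client; IDs and values are placeholders. Note that the resource name embeds the entity type's *display name*, per the format above:

```python
from google.cloud import dialogflow

client = dialogflow.SessionEntityTypesClient()
session = dialogflow.SessionsClient.session_path("your-project-id", "your-session-id")

session_entity_type = dialogflow.SessionEntityType(
    name=f"{session}/entityTypes/vegetable",  # display name, not an ID
    entity_override_mode=(
        dialogflow.SessionEntityType.EntityOverrideMode.ENTITY_OVERRIDE_MODE_OVERRIDE
    ),
    entities=[
        dialogflow.EntityType.Entity(value="ramps", synonyms=["ramps", "wild leeks"])
    ],
)
response = client.create_session_entity_type(
    request={"parent": session, "session_entity_type": session_entity_type}
)
print(response.name)
```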
Updates the specified session entity type. This method doesn't work with Google Assistant integration. Contact Dialogflow support if you need to use session entities with Google Assistant integration.
The request message for [SessionEntityTypes.UpdateSessionEntityType][google.cloud.dialogflow.v2.SessionEntityTypes.UpdateSessionEntityType].
Required. The entity type to update. Format: `projects/<Project ID>/agent/sessions/<Session ID>/entityTypes/<Entity Type Display Name>`.
Optional. The mask to control which fields get updated.
Deletes the specified session entity type. This method doesn't work with Google Assistant integration. Contact Dialogflow support if you need to use session entities with Google Assistant integration.
The request message for [SessionEntityTypes.DeleteSessionEntityType][google.cloud.dialogflow.v2.SessionEntityTypes.DeleteSessionEntityType].
Required. The name of the entity type to delete. Format: `projects/<Project ID>/agent/sessions/<Session ID>/entityTypes/<Entity Type Display Name>`.
A session represents an interaction with a user. You retrieve user input and pass it to the [DetectIntent][google.cloud.dialogflow.v2.Sessions.DetectIntent] (or [StreamingDetectIntent][google.cloud.dialogflow.v2.Sessions.StreamingDetectIntent]) method to determine user intent and respond.
Processes a natural language query and returns structured, actionable data as a result. This method is not idempotent, because it may cause contexts and session entity types to be updated, which in turn might affect results of future queries.
The request to detect user's intent.
Required. The name of the session this query is sent to. Format: `projects/<Project ID>/agent/sessions/<Session ID>`. It's up to the API caller to choose an appropriate session ID. It can be a random number or some type of user identifier (preferably hashed). The length of the session ID must not exceed 36 bytes.
Optional. The parameters of this query.
Required. The input specification. It can be set to: 1. an audio config which instructs the speech recognizer how to process the speech audio, 2. a conversational query in the form of text, or 3. an event that specifies which intent to trigger.
Optional. Instructs the speech synthesizer how to generate the output audio. If this field is not set and agent-level speech synthesizer is not configured, no output audio is generated.
Optional. The natural language speech audio to be processed. This field should be populated iff `query_input` is set to an input audio config. A single request can contain up to 1 minute of speech audio data.
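A minimal sketch of a text DetectIntent call with the Python client; the project, session ID, and query text are placeholders:

```python
from google.cloud import dialogflow

session_client = dialogflow.SessionsClient()
session = session_client.session_path("your-project-id", "your-session-id")

# Wrap the text in a QueryInput, as described above.
text_input = dialogflow.TextInput(text="I want a large pizza", language_code="en-US")
query_input = dialogflow.QueryInput(text=text_input)

response = session_client.detect_intent(
    request={"session": session, "query_input": query_input}
)
result = response.query_result
print(result.intent.display_name, result.fulfillment_text)
```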
The message returned from the DetectIntent method.
The unique identifier of the response. It can be used to locate a response in the training example set or for reporting issues.
The selected results of the conversational query or event processing. See `alternative_query_results` for additional potential results.
Specifies the status of the webhook request.
The audio data bytes encoded as specified in the request. Note: The output audio is generated based on the values of default platform text responses found in the `query_result.fulfillment_messages` field. If multiple default text responses exist, they will be concatenated when generating audio. If no default platform text responses exist, the generated audio content will be empty.
The config used by the speech synthesizer to generate the output audio.
Processes a natural language query in audio format in a streaming fashion and returns structured, actionable data as a result. This method is only available via the gRPC API (not REST).
The top-level message sent by the client to the [StreamingDetectIntent][] method. Multiple request messages should be sent in order:

1. The first message must contain [StreamingDetectIntentRequest.session][google.cloud.dialogflow.v2.StreamingDetectIntentRequest.session] and [StreamingDetectIntentRequest.query_input][google.cloud.dialogflow.v2.StreamingDetectIntentRequest.query_input], plus optionally [StreamingDetectIntentRequest.query_params][google.cloud.dialogflow.v2.StreamingDetectIntentRequest.query_params]. If the client wants to receive an audio response, it should also contain [StreamingDetectIntentRequest.output_audio_config][google.cloud.dialogflow.v2.StreamingDetectIntentRequest.output_audio_config]. The message must not contain [StreamingDetectIntentRequest.input_audio][google.cloud.dialogflow.v2.StreamingDetectIntentRequest.input_audio].

2. If [StreamingDetectIntentRequest.query_input][google.cloud.dialogflow.v2.StreamingDetectIntentRequest.query_input] was set to [StreamingDetectIntentRequest.query_input.audio_config][], all subsequent messages must contain [StreamingDetectIntentRequest.input_audio][google.cloud.dialogflow.v2.StreamingDetectIntentRequest.input_audio] to continue with Speech recognition. If you decide to detect an intent from text input instead, after you have already started Speech recognition, send a message with [StreamingDetectIntentRequest.query_input.text][]. However, note that:

   * Dialogflow will bill you for the audio duration so far.
   * Dialogflow discards all Speech recognition results in favor of the input text.
   * Dialogflow will use the language code from the first message.

After you have sent all input, you must half-close or abort the request stream.
Required. The name of the session the query is sent to. Format of the session name: `projects/<Project ID>/agent/sessions/<Session ID>`. It's up to the API caller to choose an appropriate `Session ID`. It can be a random number or some type of user identifier (preferably hashed). The length of the session ID must not exceed 36 characters.
Optional. The parameters of this query.
Required. The input specification. It can be set to: 1. an audio config which instructs the speech recognizer how to process the speech audio, 2. a conversational query in the form of text, or 3. an event that specifies which intent to trigger.
Optional. Please use [InputAudioConfig.single_utterance][google.cloud.dialogflow.v2.InputAudioConfig.single_utterance] instead. If `false` (default), recognition does not cease until the client closes the stream. If `true`, the recognizer will detect a single spoken utterance in input audio. Recognition ceases when it detects the audio's voice has stopped or paused. In this case, once a detected intent is received, the client should close the stream and start a new request with a new stream as needed. This setting is ignored when `query_input` is a piece of text or an event.
Optional. Instructs the speech synthesizer how to generate the output audio. If this field is not set and agent-level speech synthesizer is not configured, no output audio is generated.
Optional. The input audio content to be recognized. Must be sent if `query_input` was set to a streaming input audio config. The complete audio over all streaming messages must not exceed 1 minute.
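The request ordering above can be sketched with a Python generator: the first message carries the session and query input only, and later messages carry raw audio chunks. The audio file name, encoding, and sample rate are placeholder assumptions:

```python
from google.cloud import dialogflow

session_client = dialogflow.SessionsClient()
session = session_client.session_path("your-project-id", "your-session-id")

audio_config = dialogflow.InputAudioConfig(
    audio_encoding=dialogflow.AudioEncoding.AUDIO_ENCODING_LINEAR_16,
    sample_rate_hertz=16000,
    language_code="en-US",
)

def request_generator():
    # First message: session and query_input, never input_audio.
    yield dialogflow.StreamingDetectIntentRequest(
        session=session,
        query_input=dialogflow.QueryInput(audio_config=audio_config),
    )
    # Subsequent messages: audio chunks only.
    with open("utterance.raw", "rb") as audio_file:
        while chunk := audio_file.read(4096):
            yield dialogflow.StreamingDetectIntentRequest(input_audio=chunk)

responses = session_client.streaming_detect_intent(requests=request_generator())
for response in responses:
    print("Transcript:", response.recognition_result.transcript)

# The last response carries the final query result.
print("Matched intent:", response.query_result.intent.display_name)
```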
The top-level message returned from the `StreamingDetectIntent` method. Multiple response messages can be returned in order:

1. If the input was set to streaming audio, the first one or more messages contain `recognition_result`. Each `recognition_result` represents a more complete transcript of what the user said. The last `recognition_result` has `is_final` set to `true`.

2. The next message contains `response_id`, `query_result` and optionally `webhook_status` if a WebHook was called.
The unique identifier of the response. It can be used to locate a response in the training example set or for reporting issues.
The result of speech recognition.
The result of the conversational query or event processing.
Specifies the status of the webhook request.
The audio data bytes encoded as specified in the request. Note: The output audio is generated based on the values of default platform text responses found in the `query_result.fulfillment_messages` field. If multiple default text responses exist, they will be concatenated when generating audio. If no default platform text responses exist, the generated audio content will be empty.
The config used by the speech synthesizer to generate the output audio.
Represents a conversational agent.
Used as response type in: Agents.GetAgent, Agents.SetAgent
Used as field type in:
Required. The project of this agent. Format: `projects/<Project ID>`.
Required. The name of this agent.
Required. The default language of the agent as a language tag. See [Language Support](https://cloud.google.com/dialogflow/docs/reference/language) for a list of the currently supported language codes. This field cannot be set by the `Update` method.
Optional. The list of all languages supported by this agent (except for the `default_language_code`).
Required. The time zone of this agent from the [time zone database](https://www.iana.org/time-zones), e.g., America/New_York, Europe/Paris.
Optional. The description of this agent. The maximum length is 500 characters. If exceeded, the request is rejected.
Optional. The URI of the agent's avatar. Avatars are used throughout the Dialogflow console and in the self-hosted [Web Demo](https://cloud.google.com/dialogflow/docs/integrations/web-demo) integration.
Optional. Determines whether this agent should log conversation queries.
Optional. Determines how intents are detected from user queries.
Optional. To filter out false positive results and still get variety in matched natural language inputs for your agent, you can tune the machine learning classification threshold. If the returned score value is less than the threshold value, then a fallback intent will be triggered or, if there are no fallback intents defined, no intent will be triggered. The score values range from 0.0 (completely uncertain) to 1.0 (completely certain). If set to 0.0, the default of 0.3 is used.
Optional. API version displayed in Dialogflow console. If not specified, V2 API is assumed. Clients are free to query different service endpoints for different API versions. However, bot connectors and webhook calls will follow the specified API version.
Optional. The agent tier. If not specified, TIER_STANDARD is assumed.
API version for the agent.
Used in:
Not specified.
Legacy V1 API.
V2 API.
V2beta1 API.
Match mode determines how intents are detected from user queries.
Used in:
Not specified.
Best for agents with a small number of examples in intents and/or wide use of template syntax and composite entities.
Can be used for agents with a large number of examples in intents, especially the ones using @sys.any or very large developer entities.
Represents the agent tier.
Used in:
Not specified. This value should never be used.
Standard tier.
Enterprise tier (Essentials).
Enterprise tier (Plus).
Audio encoding of the audio content sent in the conversational query request. Refer to the [Cloud Speech API documentation](https://cloud.google.com/speech-to-text/docs/basics) for more details.
Used in:
Not specified.
Uncompressed 16-bit signed little-endian samples (Linear PCM).
[`FLAC`](https://xiph.org/flac/documentation.html) (Free Lossless Audio Codec) is the recommended encoding because it is lossless (therefore recognition is not compromised) and requires only about half the bandwidth of `LINEAR16`. `FLAC` stream encoding supports 16-bit and 24-bit samples, however, not all fields in `STREAMINFO` are supported.
8-bit samples that compand 14-bit audio samples using G.711 PCMU/mu-law.
Adaptive Multi-Rate Narrowband codec. `sample_rate_hertz` must be 8000.
Adaptive Multi-Rate Wideband codec. `sample_rate_hertz` must be 16000.
Opus encoded audio frames in Ogg container ([OggOpus](https://wiki.xiph.org/OggOpus)). `sample_rate_hertz` must be 16000.
Although the use of lossy encodings is not recommended, if a very low bitrate encoding is required, `OGG_OPUS` is highly preferred over Speex encoding. The [Speex](https://speex.org/) encoding supported by Dialogflow API has a header byte in each block, as in MIME type `audio/x-speex-with-header-byte`. It is a variant of the RTP Speex encoding defined in [RFC 5574](https://tools.ietf.org/html/rfc5574). The stream is a sequence of blocks, one block per RTP packet. Each block starts with a byte containing the length of the block, in bytes, followed by one or more frames of Speex data, padded to an integral number of bytes (octets) as specified in RFC 5574. In other words, each RTP header is replaced with a single byte containing the block length. Only Speex wideband is supported. `sample_rate_hertz` must be 16000.
The response message for [EntityTypes.BatchUpdateEntityTypes][google.cloud.dialogflow.v2.EntityTypes.BatchUpdateEntityTypes].
The collection of updated or created entity types.
The response message for [Intents.BatchUpdateIntents][google.cloud.dialogflow.v2.Intents.BatchUpdateIntents].
The collection of updated or created intents.
Represents a context.
Used as response type in: Contexts.CreateContext, Contexts.GetContext, Contexts.UpdateContext
Used as field type in:
Required. The unique identifier of the context. Format: `projects/<Project ID>/agent/sessions/<Session ID>/contexts/<Context ID>`. The `Context ID` is always converted to lowercase, may only contain characters in [a-zA-Z0-9_-%] and may be at most 250 bytes long.
Optional. The number of conversational query requests after which the context expires. If set to `0` (the default) the context expires immediately. Contexts expire automatically after 20 minutes if there are no matching queries.
Optional. The collection of parameters associated with this context. Refer to [this doc](https://cloud.google.com/dialogflow/docs/intents-actions-parameters) for syntax.
Represents an entity type. Entity types serve as a tool for extracting parameter values from natural language queries.
Used as response type in: EntityTypes.CreateEntityType, EntityTypes.GetEntityType, EntityTypes.UpdateEntityType
Used as field type in:
The unique identifier of the entity type. Required for [EntityTypes.UpdateEntityType][google.cloud.dialogflow.v2.EntityTypes.UpdateEntityType] and [EntityTypes.BatchUpdateEntityTypes][google.cloud.dialogflow.v2.EntityTypes.BatchUpdateEntityTypes] methods. Format: `projects/<Project ID>/agent/entityTypes/<Entity Type ID>`.
Required. The name of the entity type.
Required. Indicates the kind of entity type.
Optional. Indicates whether the entity type can be automatically expanded.
Optional. The collection of entity entries associated with the entity type.
Optional. Enables fuzzy entity extraction during classification.
Represents different entity type expansion modes. Automated expansion allows an agent to recognize values that have not been explicitly listed in the entity (for example, new kinds of shopping list items).
Used in:
Auto expansion disabled for the entity.
Allows an agent to recognize values that have not been explicitly listed in the entity.
An **entity entry** for an associated entity type.
Used in:
Required. The primary value associated with this entity entry. For example, if the entity type is *vegetable*, the value could be *scallions*.

For `KIND_MAP` entity types:

* A canonical value to be used in place of synonyms.

For `KIND_LIST` entity types:

* A string that can contain references to other entity types (with or without aliases).
Required. A collection of value synonyms. For example, if the entity type is *vegetable*, and `value` is *scallions*, a synonym could be *green onions*.

For `KIND_LIST` entity types:

* This collection must contain exactly one synonym equal to `value`.
Represents kinds of entities.
Used in:
Not specified. This value should never be used.
Map entity types allow mapping of a group of synonyms to a canonical value.
List entity types contain a set of entries that do not map to canonical values. However, list entity types can contain references to other entity types (with or without aliases).
Regexp entity types allow specifying regular expressions in entry values.
This message is a wrapper around a collection of entity types.
Used in:
A collection of entity types.
Events allow for matching intents by event name instead of the natural language input. For instance, input `<event: { name: "welcome_event", parameters: { name: "Sam" } }>` can trigger a personalized welcome response. The parameter `name` may be used by the agent in the response: `"Hello #welcome_event.name! What can I do for you today?"`.
Used in:
Required. The unique identifier of the event.
Optional. The collection of parameters associated with the event.
Required. The language of this query. See [Language Support](https://cloud.google.com/dialogflow/docs/reference/language) for a list of the currently supported language codes. Note that queries in the same session do not necessarily need to specify the same language.
The response message for [Agents.ExportAgent][google.cloud.dialogflow.v2.Agents.ExportAgent].
The exported agent.
The URI to a file containing the exported agent. This field is populated only if `agent_uri` is specified in `ExportAgentRequest`.
Zip compressed raw byte content for agent.
Instructs the speech recognizer how to process the audio content.
Used in:
Required. Audio encoding of the audio content to process.
Required. Sample rate (in Hertz) of the audio content sent in the query. Refer to [Cloud Speech API documentation](https://cloud.google.com/speech-to-text/docs/basics) for more details.
Required. The language of the supplied audio. Dialogflow does not do translations. See [Language Support](https://cloud.google.com/dialogflow/docs/reference/language) for a list of the currently supported language codes. Note that queries in the same session do not necessarily need to specify the same language.
Optional. A list of strings containing words and phrases that the speech recognizer should recognize with higher likelihood. See [the Cloud Speech documentation](https://cloud.google.com/speech-to-text/docs/basics#phrase-hints) for more details.
Optional. Which variant of the [Speech model][google.cloud.dialogflow.v2.InputAudioConfig.model] to use.
Optional. If `false` (default), recognition does not cease until the client closes the stream. If `true`, the recognizer will detect a single spoken utterance in input audio. Recognition ceases when it detects the audio's voice has stopped or paused. In this case, once a detected intent is received, the client should close the stream and start a new request with a new stream as needed. Note: This setting is relevant only for streaming methods. Note: When specified, InputAudioConfig.single_utterance takes precedence over StreamingDetectIntentRequest.single_utterance.
Represents an intent. Intents convert a number of user expressions or patterns into an action. An action is an extraction of a user command or sentence semantics.
Used as response type in: Intents.CreateIntent, Intents.GetIntent, Intents.UpdateIntent
Used as field type in:
The unique identifier of this intent. Required for [Intents.UpdateIntent][google.cloud.dialogflow.v2.Intents.UpdateIntent] and [Intents.BatchUpdateIntents][google.cloud.dialogflow.v2.Intents.BatchUpdateIntents] methods. Format: `projects/<Project ID>/agent/intents/<Intent ID>`.
Required. The name of this intent.
Optional. Indicates whether webhooks are enabled for the intent.
Optional. The priority of this intent. Higher numbers represent higher priorities. If this is zero or unspecified, we use the default priority 500000. Negative numbers mean that the intent is disabled.
Optional. Indicates whether this is a fallback intent.
Optional. Indicates whether Machine Learning is disabled for the intent. Note: If `ml_disabled` setting is set to true, then this intent is not taken into account during inference in `ML ONLY` match mode. Also, auto-markup in the UI is turned off.
Optional. The list of context names required for this intent to be triggered. Format: `projects/<Project ID>/agent/sessions/-/contexts/<Context ID>`.
Optional. The collection of event names that trigger the intent. If the collection of input contexts is not empty, all of the contexts must be present in the active user session for an event to trigger this intent.
Optional. The collection of examples that the agent is trained on.
Optional. The name of the action associated with the intent. Note: The action name must not contain whitespaces.
Optional. The collection of contexts that are activated when the intent is matched. Context messages in this collection should not set the parameters field. Setting the `lifespan_count` to 0 will reset the context when the intent is matched. Format: `projects/<Project ID>/agent/sessions/-/contexts/<Context ID>`.
Optional. Indicates whether to delete all contexts in the current session when this intent is matched.
Optional. The collection of parameters associated with the intent.
Optional. The collection of rich messages corresponding to the `Response` field in the Dialogflow console.
Optional. The list of platforms for which the first responses will be copied from the messages in PLATFORM_UNSPECIFIED (i.e. default platform).
Read-only. The unique identifier of the root intent in the chain of followup intents. It identifies the correct followup intents chain for this intent. We populate this field only in the output. Format: `projects/<Project ID>/agent/intents/<Intent ID>`.
Read-only after creation. The unique identifier of the parent intent in the chain of followup intents. You can set this field when creating an intent, for example with [CreateIntent][] or [BatchUpdateIntents][], in order to make this intent a followup intent. It identifies the parent followup intent. Format: `projects/<Project ID>/agent/intents/<Intent ID>`.
Read-only. Information about all followup intents that have this intent as a direct or indirect parent. We populate this field only in the output.
Represents a single followup intent in the chain.
Used in:
The unique identifier of the followup intent. Format: `projects/<Project ID>/agent/intents/<Intent ID>`.
The unique identifier of the followup intent's parent. Format: `projects/<Project ID>/agent/intents/<Intent ID>`.
Corresponds to the `Response` field in the Dialogflow console.
Used in:
Required. The rich response message.
The text response.
The image response.
The quick replies response.
The card response.
Returns a response containing a custom, platform-specific payload. See the Intent.Message.Platform type for a description of the structure that may be required for your platform.
The voice and text-only responses for Actions on Google.
The basic card response for Actions on Google.
The suggestion chips for Actions on Google.
The link out suggestion chip for Actions on Google.
The list card response for Actions on Google.
The carousel card response for Actions on Google.
Optional. The platform that this message is intended for.
The basic card message. Useful for displaying information.
Used in:
Optional. The title of the card.
Optional. The subtitle of the card.
Required, unless image is present. The body text of the card.
Optional. The image for the card.
Optional. The collection of card buttons.
The button object that appears at the bottom of a card.
Used in:
Required. The title of the button.
Required. Action to take when a user taps on the button.
Opens the given URI.
Used in:
Required. The HTTP or HTTPS scheme URI.
The card response message.
Used in:
Optional. The title of the card.
Optional. The subtitle of the card.
Optional. The public URI to an image file for the card.
Optional. The collection of card buttons.
Contains information about a button.
Used in:
Optional. The text to show on the button.
Optional. The text to send back to the Dialogflow API or a URI to open.
The card for presenting a carousel of options to select from.
Used in:
Required. Carousel items.
An item in the carousel.
Used in:
Required. Additional info about the option item.
Required. Title of the carousel item.
Optional. The body text of the card.
Optional. The image to display.
The image response message.
Used in:
Optional. The public URI to an image file.
Optional. A text description of the image to be used for accessibility, e.g., screen readers.
The suggestion chip message that allows the user to jump out to the app or website associated with this agent.
Used in:
Required. The name of the app or site this chip is linking to.
Required. The URI of the app or site to open when the user taps the suggestion chip.
The card for presenting a list of options to select from.
Used in:
Optional. The overall title of the list.
Required. List items.
An item in the list.
Used in:
Required. Additional information about this option.
Required. The title of the list item.
Optional. The main text describing the item.
Optional. The image to display.
Represents different platforms that a rich message can be intended for.
Used in:
Not specified.
Facebook.
Slack.
Telegram.
Kik.
Skype.
Line.
Viber.
Actions on Google. When using Actions on Google, you can choose one of the specific Intent.Message types that mention support for Actions on Google, or you can use the advanced Intent.Message.payload field. The payload field provides access to AoG features not available in the specific message types. If using the Intent.Message.payload field, it should have a structure similar to the JSON message shown here. For more information, see [Actions on Google Webhook Format](https://developers.google.com/actions/dialogflow/webhook)

<pre>{
  "expectUserResponse": true,
  "isSsml": false,
  "noInputPrompts": [],
  "richResponse": {
    "items": [
      {
        "simpleResponse": {
          "displayText": "hi",
          "textToSpeech": "hello"
        }
      }
    ],
    "suggestions": [
      { "title": "Say this" },
      { "title": "or this" }
    ]
  },
  "systemIntent": {
    "data": {
      "@type": "type.googleapis.com/google.actions.v2.OptionValueSpec",
      "listSelect": {
        "items": [
          {
            "optionInfo": {
              "key": "key1",
              "synonyms": ["key one"]
            },
            "title": "must not be empty, but unique"
          },
          {
            "optionInfo": {
              "key": "key2",
              "synonyms": ["key two"]
            },
            "title": "must not be empty, but unique"
          }
        ]
      }
    },
    "intent": "actions.intent.OPTION"
  }
}</pre>
Google Hangouts.
The quick replies response message.
Used in:
Optional. The title of the collection of quick replies.
Optional. The collection of quick replies.
Additional info about the select item for when it is triggered in a dialog.
Used in:
Required. A unique key that will be sent back to the agent if this response is given.
Optional. A list of synonyms that can also be used to trigger this item in dialog.
The simple response message containing speech or text.
Used in:
One of text_to_speech or ssml must be provided. The plain text of the speech output. Mutually exclusive with ssml.
One of text_to_speech or ssml must be provided. Structured spoken response to the user in the SSML format. Mutually exclusive with text_to_speech.
Optional. The text to display.
The collection of simple response candidates. This message in `QueryResult.fulfillment_messages` and `WebhookResponse.fulfillment_messages` should contain only one `SimpleResponse`.
Used in:
Required. The list of simple responses.
The suggestion chip message that the user can tap to quickly post a reply to the conversation.
Used in:
Required. The text shown in the suggestion chip.
The collection of suggestions.
Used in:
Required. The list of suggested replies.
The text response message.
Used in:
Optional. The collection of the agent's responses.
Represents intent parameters.
Used in:
The unique identifier of this parameter.
Required. The name of the parameter.
Optional. The definition of the parameter value. It can be:

* a constant string,
* a parameter value defined as `$parameter_name`,
* an original parameter value defined as `$parameter_name.original`,
* a parameter value from some context defined as `#context_name.parameter_name`.
Optional. The default value to use when the `value` yields an empty result. Default values can be extracted from contexts by using the following syntax: `#context_name.parameter_name`.
Optional. The name of the entity type, prefixed with `@`, that describes values of the parameter. If the parameter is required, this must be provided.
Optional. Indicates whether the parameter is required. That is, whether the intent cannot be completed without collecting the parameter value.
Optional. The collection of prompts that the agent can present to the user in order to collect a value for the parameter.
Optional. Indicates whether the parameter represents a list of values.
Represents an example that the agent is trained on.
Used in:
Output only. The unique identifier of this training phrase.
Required. The type of the training phrase.
Required. The ordered list of training phrase parts. The parts are concatenated in order to form the training phrase.

Note: The API does not automatically annotate training phrases like the Dialogflow Console does.

Note: Do not forget to include whitespace at part boundaries, so the training phrase is well formatted when the parts are concatenated.

If the training phrase does not need to be annotated with parameters, you just need a single part with only the [Part.text][google.cloud.dialogflow.v2.Intent.TrainingPhrase.Part.text] field set.

If you want to annotate the training phrase, you must create multiple parts, where the fields of each part are populated in one of two ways:

* `Part.text` is set to a part of the phrase that has no parameters.
* `Part.text` is set to a part of the phrase that you want to annotate, and the `entity_type`, `alias`, and `user_defined` fields are all set.
Optional. Indicates how many times this example was added to the intent. Each time a developer adds an existing sample by editing an intent or training, this counter is increased.
Represents a part of a training phrase.
Used in:
Required. The text for this part.
Optional. The entity type name prefixed with `@`. This field is required for annotated parts of the training phrase.
Optional. The parameter name for the value extracted from the annotated part of the example. This field is required for annotated parts of the training phrase.
Optional. Indicates whether the text was manually annotated. This field is set to true when the Dialogflow Console is used to manually annotate the part. When creating an annotated part with the API, you must set this to true.
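To make the two-way split above concrete, a sketch of an annotated training phrase built from parts with the Python client; the `@size` entity type and `size` alias are hypothetical:

```python
from google.cloud import dialogflow

training_phrase = dialogflow.Intent.TrainingPhrase(
    parts=[
        # Unannotated part; note the trailing space so the concatenated
        # phrase stays well formatted.
        dialogflow.Intent.TrainingPhrase.Part(text="I want a "),
        # Annotated part: entity_type, alias, and user_defined are all set.
        dialogflow.Intent.TrainingPhrase.Part(
            text="large",
            entity_type="@size",  # hypothetical entity type
            alias="size",         # parameter name to extract into
            user_defined=True,
        ),
        dialogflow.Intent.TrainingPhrase.Part(text=" pizza"),
    ]
)
```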
Represents different types of training phrases.
Used in:
Not specified. This value should never be used.
Examples do not contain @-prefixed entity type names, but example parts can be annotated with entity types.
Templates are not annotated with entity types, but they can contain @-prefixed entity type names as substrings. Template mode has been deprecated. Example mode is the only supported way to create new training phrases. If you have existing training phrases that you've created in template mode, those will continue to work.
Represents the different states that webhooks can be in.
Used in:
Webhook is disabled in the agent and in the intent.
Webhook is enabled in the agent and in the intent.
Webhook is enabled in the agent and in the intent. Also, each slot filling prompt is forwarded to the webhook.
This message is a wrapper around a collection of intents.
Used in:
A collection of intents.
Represents the options for views of an intent. An intent can be a sizable object. Therefore, we provide a resource view that does not return training phrases in the response by default.
Used in:
Training phrases field is not populated in the response.
All fields are populated.
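For example, to request the full view when listing intents (a sketch with the Python client; the project ID is a placeholder):

```python
from google.cloud import dialogflow

client = dialogflow.IntentsClient()
parent = dialogflow.AgentsClient.agent_path("my-project")  # placeholder project

# INTENT_VIEW_FULL asks the API to include training phrases, which the
# default view omits because intents can be sizable.
for intent in client.list_intents(
    request={"parent": parent, "intent_view": dialogflow.IntentView.INTENT_VIEW_FULL}
):
    print(intent.display_name, len(intent.training_phrases))
```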
Represents the contents of the original request that was passed to the `[Streaming]DetectIntent` call.
Used in:
The source of this request, e.g., `google`, `facebook`, `slack`. It is set by Dialogflow-owned servers.
Optional. The version of the protocol used for this request. This field is AoG-specific.
Optional. This field is set to the value of the `QueryParameters.payload` field passed in the request. Some integrations that query a Dialogflow agent may provide additional information in the payload. In particular for the Telephony Gateway this field has the form: <pre>{ "telephony": { "caller_id": "+18558363987" } }</pre> Note: The caller ID field (`caller_id`) will be redacted for Standard Edition agents and populated with the caller ID in [E.164 format](https://en.wikipedia.org/wiki/E.164) for Enterprise Edition agents.
Instructs the speech synthesizer on how to generate the output audio content.
Used in:
Required. Audio encoding of the synthesized audio content.
Optional. The synthesis sample rate (in hertz) for this audio. If not provided, then the synthesizer will use the default sample rate based on the audio encoding. If this is different from the voice's natural sample rate, then the synthesizer will honor this request by converting to the desired sample rate (which might result in worse audio quality).
Optional. Configuration of how speech should be synthesized.
Audio encoding of the output audio format in Text-To-Speech.
Used in:
Not specified.
Uncompressed 16-bit signed little-endian samples (Linear PCM). Audio content returned as LINEAR16 also contains a WAV header.
MP3 audio.
Opus encoded audio wrapped in an ogg container. The result will be a file which can be played natively on Android, and in browsers (at least Chrome and Firefox). The quality of the encoding is considerably higher than MP3 while using approximately the same bitrate.
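A sketch of requesting synthesized output audio on a detect-intent call, assuming the Python client; project and session IDs are placeholders:

```python
from google.cloud import dialogflow

client = dialogflow.SessionsClient()
session = client.session_path("my-project", "my-session")  # placeholders

query_input = dialogflow.QueryInput(
    text=dialogflow.TextInput(text="What's the weather?", language_code="en-US")
)
# LINEAR16 output includes a WAV header, so the returned bytes can be
# written straight to a .wav file.
output_audio_config = dialogflow.OutputAudioConfig(
    audio_encoding=dialogflow.OutputAudioEncoding.OUTPUT_AUDIO_ENCODING_LINEAR_16,
)

response = client.detect_intent(
    request={
        "session": session,
        "query_input": query_input,
        "output_audio_config": output_audio_config,
    }
)
with open("reply.wav", "wb") as f:
    f.write(response.output_audio)
```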
Represents the query input. It can contain one of: 1. An audio config which instructs the speech recognizer how to process the speech audio. 2. A conversational query in the form of text. 3. An event that specifies which intent to trigger.
Used in:
Required. The input specification.
Instructs the speech recognizer how to process the speech audio.
The natural language text to be processed.
The event to be processed.
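Because the input specification is a oneof, exactly one of the three fields is set per query. A sketch of the text and event variants with the Python client; `WELCOME` is the conventional built-in welcome event:

```python
from google.cloud import dialogflow

# Variant 1: a text query.
text_query = dialogflow.QueryInput(
    text=dialogflow.TextInput(text="book a table for two", language_code="en-US")
)

# Variant 2: trigger an intent directly via an event.
event_query = dialogflow.QueryInput(
    event=dialogflow.EventInput(name="WELCOME", language_code="en-US")
)
```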
Represents the parameters of the conversational query.
Used in:
Optional. The time zone of this conversational query from the [time zone database](https://www.iana.org/time-zones), e.g., America/New_York, Europe/Paris. If not provided, the time zone specified in agent settings is used.
Optional. The geo location of this conversational query.
Optional. The collection of contexts to be activated before this query is executed.
Optional. Specifies whether to delete all contexts in the current session before the new ones are activated.
Optional. Additional session entity types to replace or extend developer entity types with. The entity synonyms apply to all languages and persist for the session of this query.
Optional. This field can be used to pass custom data into the webhook associated with the agent. Arbitrary JSON objects are supported.
Optional. Configures the type of sentiment analysis to perform. If not provided, sentiment analysis is not performed.
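A sketch combining several of these parameters on a single request; the values are illustrative:

```python
from google.cloud import dialogflow

query_params = dialogflow.QueryParameters(
    time_zone="America/New_York",  # overrides the agent-level time zone
    reset_contexts=True,           # drop all active contexts first
    sentiment_analysis_request_config=dialogflow.SentimentAnalysisRequestConfig(
        analyze_query_text_sentiment=True,  # opt in; off by default
    ),
)

# Passed alongside the query input on the detect-intent request:
# client.detect_intent(request={"session": session,
#                               "query_input": query_input,
#                               "query_params": query_params})
```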
Represents the result of conversational query or event processing.
Used in:
The original conversational query text: - If natural language text was provided as input, `query_text` contains a copy of the input. - If natural language speech audio was provided as input, `query_text` contains the speech recognition result. If the speech recognizer produced multiple alternatives, a particular one is picked. - If automatic spell correction is enabled, `query_text` will contain the corrected user input.
The language that was triggered during intent detection. See [Language Support](https://cloud.google.com/dialogflow/docs/reference/language) for a list of the currently supported language codes.
The speech recognition confidence between 0.0 and 1.0. A higher number indicates an estimated greater likelihood that the recognized words are correct. The default of 0.0 is a sentinel value indicating that confidence was not set. This field is not guaranteed to be accurate or set. In particular, this field isn't set for StreamingDetectIntent, since the streaming endpoint has separate confidence estimates per portion of the audio in StreamingRecognitionResult.
The action name from the matched intent.
The collection of extracted parameters.
This field is set to: - `false` if the matched intent has required parameters and not all of the required parameter values have been collected. - `true` if all required parameter values have been collected, or if the matched intent doesn't contain any required parameters.
The text to be pronounced to the user or shown on the screen. Note: This is a legacy field, `fulfillment_messages` should be preferred.
The collection of rich messages to present to the user.
If the query was fulfilled by a webhook call, this field is set to the value of the `source` field returned in the webhook response.
If the query was fulfilled by a webhook call, this field is set to the value of the `payload` field returned in the webhook response.
The collection of output contexts. If applicable, `output_contexts.parameters` contains entries with name `<parameter name>.original` containing the original parameter values before the query.
The intent that matched the conversational query. Some, but not all, fields are filled in this message, including but not limited to: `name`, `display_name`, `end_interaction`, and `is_fallback`.
The intent detection confidence. Values range from 0.0 (completely uncertain) to 1.0 (completely certain). This value is for informational purposes only and is only used to help match the best intent within the classification threshold. This value may change for the same end-user expression at any time due to a model retraining or change in implementation. If there are multiple `knowledge_answers` messages, this value is set to the greatest `knowledgeAnswers.match_confidence` value in the list.
The free-form diagnostic info. For example, this field could contain webhook call latency. The string keys of the Struct's fields map can change without notice.
The sentiment analysis result, which depends on the `sentiment_analysis_request_config` specified in the request.
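A sketch of inspecting the most commonly used result fields, assuming `response` is a `DetectIntentResponse` from the Python client:

```python
result = response.query_result

print("Query text:", result.query_text)
print("Matched intent:", result.intent.display_name)
print("Confidence:", result.intent_detection_confidence)
print("All required params present:", result.all_required_params_present)
print("Fulfillment text:", result.fulfillment_text)

# Parameters arrive as a protobuf Struct; keys are parameter display names.
for name, value in result.parameters.items():
    print(f"  {name} = {value}")
```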
The sentiment, such as positive/negative feeling or association, for a unit of analysis, such as the query text.
Used in:
Sentiment score between -1.0 (negative sentiment) and 1.0 (positive sentiment).
A non-negative number in the [0, +inf) range, which represents the absolute magnitude of sentiment, regardless of score (positive or negative).
Configures the types of sentiment analysis to perform.
Used in:
Optional. Instructs the service to perform sentiment analysis on `query_text`. If not provided, sentiment analysis is not performed on `query_text`.
The result of sentiment analysis as configured by `sentiment_analysis_request_config`.
Used in:
The sentiment analysis result for `query_text`.
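If sentiment analysis was requested via `sentiment_analysis_request_config`, the result can be read off the query result. A sketch, again assuming `response` is a `DetectIntentResponse`:

```python
sentiment = response.query_result.sentiment_analysis_result.query_text_sentiment

# score is in [-1.0, 1.0]; magnitude is in [0, +inf) regardless of sign.
print(f"score={sentiment.score:+.2f} magnitude={sentiment.magnitude:.2f}")
```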
Represents a session entity type. Extends or replaces a developer entity type at the user session level (we refer to the entity types defined at the agent level as "developer entity types"). Note: session entity types apply to all queries, regardless of the language.
Used as response type in: SessionEntityTypes.CreateSessionEntityType, SessionEntityTypes.GetSessionEntityType, SessionEntityTypes.UpdateSessionEntityType
Used as field type in:
Required. The unique identifier of this session entity type. Format: `projects/<Project ID>/agent/sessions/<Session ID>/entityTypes/<Entity Type Display Name>`. `<Entity Type Display Name>` must be the display name of an existing entity type in the same agent that will be overridden or supplemented.
Required. Indicates whether the additional data should override or supplement the developer entity type definition.
Required. The collection of entities associated with this session entity type.
The types of modifications for a session entity type.
Used in:
Not specified. This value should never be used.
The collection of session entities overrides the collection of entities in the corresponding developer entity type.
The collection of session entities extends the collection of entities in the corresponding developer entity type. Note: Even in this override mode, calls to `ListSessionEntityTypes`, `GetSessionEntityType`, `CreateSessionEntityType`, and `UpdateSessionEntityType` return only the additional entities added in this session entity type. To get the supplemented list, call [EntityTypes.GetEntityType][google.cloud.dialogflow.v2.EntityTypes.GetEntityType] on the developer entity type and merge.
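A sketch of supplementing a developer entity type for a single session with the Python client; the project, session, and `fruit` entity type are placeholders:

```python
from google.cloud import dialogflow

client = dialogflow.SessionEntityTypesClient()
session = "projects/my-project/agent/sessions/my-session"  # placeholders

session_entity_type = dialogflow.SessionEntityType(
    # The name ends with the display name of the developer entity type
    # being extended.
    name=f"{session}/entityTypes/fruit",
    entity_override_mode=(
        dialogflow.SessionEntityType.EntityOverrideMode.ENTITY_OVERRIDE_MODE_SUPPLEMENT
    ),
    entities=[
        dialogflow.EntityType.Entity(value="dragonfruit", synonyms=["pitaya"]),
    ],
)

client.create_session_entity_type(
    request={"parent": session, "session_entity_type": session_entity_type}
)
```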
Variant of the specified [Speech model][google.cloud.dialogflow.v2.InputAudioConfig.model] to use. See the [Cloud Speech documentation](https://cloud.google.com/speech-to-text/docs/enhanced-models) for which models have different variants. For example, the "phone_call" model has both a standard and an enhanced variant. When you use an enhanced model, you will generally receive higher quality results than for a standard model.
Used in:
No model variant specified. In this case Dialogflow defaults to USE_BEST_AVAILABLE.
Use the best available variant of the [Speech model][InputAudioConfig.model] that the caller is eligible for. Please see the [Dialogflow docs](https://cloud.google.com/dialogflow/docs/data-logging) for how to make your project eligible for enhanced models.
Use standard model variant even if an enhanced model is available. See the [Cloud Speech documentation](https://cloud.google.com/speech-to-text/docs/enhanced-models) for details about enhanced models.
Use an enhanced model variant: * If an enhanced variant does not exist for the given [model][google.cloud.dialogflow.v2.InputAudioConfig.model] and request language, Dialogflow falls back to the standard variant. The [Cloud Speech documentation](https://cloud.google.com/speech-to-text/docs/enhanced-models) describes which models have enhanced variants. * If the API caller isn't eligible for enhanced models, Dialogflow returns an error. Please see the [Dialogflow docs](https://cloud.google.com/dialogflow/docs/data-logging) for how to make your project eligible.
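A sketch of requesting an enhanced model variant via the input audio config; whether the call succeeds depends on the project's eligibility, as noted above:

```python
from google.cloud import dialogflow

audio_config = dialogflow.InputAudioConfig(
    audio_encoding=dialogflow.AudioEncoding.AUDIO_ENCODING_LINEAR_16,
    sample_rate_hertz=16000,
    language_code="en-US",
    model_variant=dialogflow.SpeechModelVariant.USE_ENHANCED,
)
query_input = dialogflow.QueryInput(audio_config=audio_config)
```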
Gender of the voice as described in [SSML voice element](https://www.w3.org/TR/speech-synthesis11/#edef_voice).
Used in:
An unspecified gender, which means that the client doesn't care which gender the selected voice will have.
A male voice.
A female voice.
A gender-neutral voice.
Contains a speech recognition result corresponding to a portion of the audio that is currently being processed, or an indication that this is the end of the single requested utterance. Example:
1. transcript: "tube"
2. transcript: "to be a"
3. transcript: "to be"
4. transcript: "to be or not to be" is_final: true
5. transcript: " that's"
6. transcript: " that is"
7. message_type: `END_OF_SINGLE_UTTERANCE`
8. transcript: " that is the question" is_final: true
Only two of the responses contain final results (#4 and #8, indicated by `is_final: true`). Concatenating these generates the full transcript: "to be or not to be that is the question". In each response we populate: * for `TRANSCRIPT`: `transcript` and possibly `is_final`. * for `END_OF_SINGLE_UTTERANCE`: only `message_type`.
Used in:
Type of the result message.
Transcript text representing the words that the user spoke. Populated if and only if `message_type` = `TRANSCRIPT`.
If `false`, the `StreamingRecognitionResult` represents an interim result that may change. If `true`, the recognizer will not return any further hypotheses about this piece of the audio. May only be populated for `message_type` = `TRANSCRIPT`.
The Speech confidence between 0.0 and 1.0 for the current portion of audio. A higher number indicates an estimated greater likelihood that the recognized words are correct. The default of 0.0 is a sentinel value indicating that confidence was not set. This field is typically only provided if `is_final` is true and you should not rely on it being accurate or even set.
Type of the response message.
Used in:
Not specified. Should never be used.
Message contains a (possibly partial) transcript.
Event indicates that the server has detected the end of the user's speech utterance and expects no additional inputs. Therefore, the server will not process additional audio (although it may subsequently return additional results). The client should stop sending additional audio data, half-close the gRPC connection, and wait for any additional results until the server closes the gRPC connection. This message is only sent if `single_utterance` was set to `true`, and is not used otherwise.
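A sketch of consuming these streaming results with the Python client: the first request carries the session and audio config, subsequent requests carry raw audio chunks, and the client stops sending audio once `END_OF_SINGLE_UTTERANCE` arrives. The audio file path and IDs are placeholders, and the early `break` stands in for a proper half-close of the stream:

```python
from google.cloud import dialogflow

client = dialogflow.SessionsClient()
session = client.session_path("my-project", "my-session")  # placeholders

def requests():
    audio_config = dialogflow.InputAudioConfig(
        audio_encoding=dialogflow.AudioEncoding.AUDIO_ENCODING_LINEAR_16,
        sample_rate_hertz=16000,
        language_code="en-US",
        single_utterance=True,  # enables END_OF_SINGLE_UTTERANCE events
    )
    # First request: configuration only, no audio.
    yield dialogflow.StreamingDetectIntentRequest(
        session=session,
        query_input=dialogflow.QueryInput(audio_config=audio_config),
    )
    # Subsequent requests: raw audio chunks.
    with open("utterance.raw", "rb") as f:
        while chunk := f.read(4096):
            yield dialogflow.StreamingDetectIntentRequest(input_audio=chunk)

end_of_utterance = dialogflow.StreamingRecognitionResult.MessageType.END_OF_SINGLE_UTTERANCE
for response in client.streaming_detect_intent(requests=requests()):
    result = response.recognition_result
    if result.message_type == end_of_utterance:
        break  # the server will process no further audio; stop sending
    if result.transcript:
        print(("final: " if result.is_final else "interim: ") + result.transcript)
```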
Configuration of how speech should be synthesized.
Used in:
Optional. Speaking rate/speed, in the range [0.25, 4.0]. 1.0 is the normal native speed supported by the specific voice. 2.0 is twice as fast, and 0.5 is half as fast. If unset (0.0), it defaults to the native 1.0 speed. Any other value < 0.25 or > 4.0 returns an error.
Optional. Speaking pitch, in the range [-20.0, 20.0]. 20 means increase 20 semitones from the original pitch. -20 means decrease 20 semitones from the original pitch.
Optional. Volume gain (in dB) of the normal native volume supported by the specific voice, in the range [-96.0, 16.0]. If unset, or set to 0.0 dB, the audio plays at normal native signal amplitude. A value of -6.0 dB plays at approximately half the amplitude of the normal native signal, and +6.0 dB at approximately twice the amplitude. We strongly recommend not exceeding +10 dB, as there is usually no effective increase in loudness beyond that.
Optional. An identifier that selects 'audio effects' profiles to apply to the synthesized audio. Effects are applied on top of each other in the order they are given.
Optional. The desired voice of the synthesized audio.
Represents the natural language text to be processed.
Used in:
Required. The UTF-8 encoded natural language text to be processed. Text length must not exceed 256 characters.
Required. The language of this conversational query. See [Language Support](https://cloud.google.com/dialogflow/docs/reference/language) for a list of the currently supported language codes. Note that queries in the same session do not necessarily need to specify the same language.
Description of which voice to use for speech synthesis.
Used in:
Optional. The name of the voice. If not set, the service will choose a voice based on the other parameters such as language_code and gender.
Optional. The preferred gender of the voice. If not set, the service will choose a voice based on the other parameters such as language_code and name. Note that this is only a preference, not requirement. If a voice of the appropriate gender is not available, the synthesizer should substitute a voice with a different gender rather than failing the request.
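Pulling the synthesis knobs together, a sketch of a speech config with an explicit voice preference; the values are illustrative and stay inside the documented ranges:

```python
from google.cloud import dialogflow

speech_config = dialogflow.SynthesizeSpeechConfig(
    speaking_rate=1.25,    # [0.25, 4.0]; 1.0 is native speed
    pitch=-2.0,            # [-20.0, 20.0] semitones
    volume_gain_db=6.0,    # [-96.0, 16.0]; ~twice native amplitude
    voice=dialogflow.VoiceSelectionParams(
        # Gender is a preference, not a requirement; the service may
        # substitute a different voice if no match exists.
        ssml_gender=dialogflow.SsmlVoiceGender.SSML_VOICE_GENDER_FEMALE,
    ),
)
output_audio_config = dialogflow.OutputAudioConfig(
    audio_encoding=dialogflow.OutputAudioEncoding.OUTPUT_AUDIO_ENCODING_MP3,
    synthesize_speech_config=speech_config,
)
```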
The request message for a webhook call.
The unique identifier of detectIntent request session. Can be used to identify end-user inside webhook implementation. Format: `projects/<Project ID>/agent/sessions/<Session ID>`, or `projects/<Project ID>/agent/environments/<Environment ID>/users/<User ID>/sessions/<Session ID>`.
The unique identifier of the response. Contains the same value as `[Streaming]DetectIntentResponse.response_id`.
The result of the conversational query or event processing. Contains the same value as `[Streaming]DetectIntentResponse.query_result`.
Optional. The contents of the original request that was passed to `[Streaming]DetectIntent` call.
The response message for a webhook call.
Optional. The text to be shown on the screen. This value is passed directly to `QueryResult.fulfillment_text`.
Optional. The collection of rich messages to present to the user. This value is passed directly to `QueryResult.fulfillment_messages`.
Optional. This value is passed directly to `QueryResult.webhook_source`.
Optional. This value is passed directly to `QueryResult.webhook_payload`. See the related `fulfillment_messages[i].payload` field, which may be used as an alternative to this field. This field can be used for Actions on Google responses. It should have a structure similar to the JSON message shown here. For more information, see [Actions on Google Webhook Format](https://developers.google.com/actions/dialogflow/webhook) <pre>{ "google": { "expectUserResponse": true, "richResponse": { "items": [ { "simpleResponse": { "textToSpeech": "this is a simple response" } } ] } } }</pre>
Optional. The collection of output contexts. This value is passed directly to `QueryResult.output_contexts`.
Optional. Makes the platform immediately invoke another `DetectIntent` call internally with the specified event as input. When this field is set, Dialogflow ignores the `fulfillment_text`, `fulfillment_messages`, and `payload` fields.
Optional. Additional session entity types to replace or extend developer entity types with. The entity synonyms apply to all languages and persist for the session of this query. Setting session entity types inside the webhook overwrites the session entity types that have been set through `DetectIntentRequest.query_params.session_entity_types`.
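To make the request/response shape concrete, a minimal webhook sketch using Flask (an assumption; any HTTP framework works). Field names follow the JSON mapping of the messages above; the `handled` context and source string are hypothetical:

```python
from flask import Flask, request, jsonify

app = Flask(__name__)

@app.route("/webhook", methods=["POST"])
def webhook():
    req = request.get_json()  # WebhookRequest as JSON
    intent_name = req["queryResult"]["intent"]["displayName"]
    session = req["session"]  # projects/<Project ID>/agent/sessions/<Session ID>

    # WebhookResponse as JSON. fulfillmentText is passed straight through
    # to QueryResult.fulfillment_text; outputContexts to output_contexts.
    return jsonify({
        "fulfillmentText": f"Webhook handled '{intent_name}'.",
        "source": "example-webhook",  # placeholder identifier
        "outputContexts": [{
            "name": f"{session}/contexts/handled",  # hypothetical context
            "lifespanCount": 5,
        }],
    })
```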