The Google BigQuery Data Transfer API allows BigQuery users to configure the transfer of their data from other Google Products into BigQuery. This service exposes methods that should be used by data source backends.
Updates a transfer run. If successful, resets the data_source.update_deadline_seconds timer.
A request to update a transfer run.
Run name must be set and correspond to an already existing run. Only state, error_status, and data_version fields will be updated. All other fields will be ignored.
Required list of fields to be updated in this request.
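For illustration, a minimal sketch of what an UpdateTransferRun request body could look like, written as a Python dict in the JSON mapping of the proto. The top-level field names (`transfer_run`, `update_mask`) and the literal values are assumptions; only the state/error_status/data_version semantics come from the description above.

```python
# Hypothetical UpdateTransferRunRequest payload (JSON/dict form); field names assumed.
update_transfer_run_request = {
    "transfer_run": {
        # Must name an already existing run.
        "name": "projects/my-project/locations/us/transferConfigs/my-config/runs/my-run",
        "state": "SUCCEEDED",                         # only state, error_status, and
        "error_status": {"code": 0, "message": ""},   # data_version are applied;
        "data_version": "2017-05-25",                 # all other fields are ignored
    },
    "update_mask": {"paths": ["state", "error_status", "data_version"]},
}
```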
Logs messages for a transfer run. If successful (at least one message), resets the data_source.update_deadline_seconds timer.
A request to add transfer status messages to the run.
Name of the resource in the form: "projects/{project_id}/locations/{location_id}/transferConfigs/{config_id}/runs/{run_id}"
Messages to append.
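A similarly hedged sketch of a LogTransferRunMessages request; the `transfer_messages` field name and the message field names (`message_time`, `severity`, `message_text`) are inferred from the TransferMessage description later in this document and should be treated as assumptions.

```python
# Hypothetical LogTransferRunMessagesRequest payload (JSON/dict form); field names assumed.
log_transfer_run_messages_request = {
    "name": "projects/my-project/locations/us/transferConfigs/my-config/runs/my-run",
    "transfer_messages": [
        {
            "message_time": "2017-05-25T01:23:45Z",
            "severity": "INFO",
            "message_text": "Fetched 1200 rows from the source system.",
        }
    ],
}
```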
Notifies the Data Transfer Service that data is ready for loading. The Data Transfer Service will start and monitor multiple BigQuery load jobs for a transfer run. Monitored jobs will be automatically retried and produce log messages when starting and finishing a job. Can be called multiple times for the same transfer run.
A request to start and monitor a BigQuery load job.
Name of the resource in the form: "projects/{project_id}/locations/{location_id}/transferConfigs/{config_id}/runs/{run_id}"
Import jobs which should be started and monitored.
User credentials which should be used to start/monitor BigQuery jobs. If not specified, then jobs are started using data source service account credentials. This may be an OAuth token or a JWT token.
The number of BigQuery jobs that can run in parallel.
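A hedged sketch of a StartBigQueryJobs request body, again as a dict in the JSON mapping. Everything except the resource-name format is an assumption pieced together from the ImportedDataInfo fields described later in this document (destination table, format, external table definitions).

```python
# Hypothetical StartBigQueryJobsRequest payload (JSON/dict form); field names assumed.
start_bigquery_jobs_request = {
    "name": "projects/my-project/locations/us/transferConfigs/my-config/runs/my-run",
    "imported_data": [  # "import jobs which should be started and monitored"
        {
            "destination_table_id": "events_20170525",
            "format": "JSON",
            "table_defs": [
                {
                    "table_id": "staging_events",
                    "source_uris": ["gs://my-staging-bucket/events/*.json"],
                    "format": "JSON",
                }
            ],
        }
    ],
    # Omit user_credentials to run jobs with the data source service account.
    "user_credentials": "<OAuth or JWT token>",
    "max_parallelism": 4,  # number of BigQuery jobs that may run in parallel
}
```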
Notifies the Data Transfer Service that the data source is done processing the run. No more status updates or requests to start/monitor jobs will be accepted. The run will be finalized by the Data Transfer Service when all monitored jobs are completed. Does not need to be called if the run is set to FAILED.
A request to finish a run.
Name of the resource in the form: "projects/{project_id}/locations/{location_id}/transferConfigs/{config_id}/runs/{run_id}"
Creates a data source definition. Calling this method will automatically use your credentials to create the following Google Cloud resources in YOUR Google Cloud project: 1. An OAuth client. 2. Pub/Sub topics and subscriptions in each of the supported_location_ids, e.g., projects/{project_id}/{topics|subscriptions}/bigquerydatatransfer.{data_source_id}.{location_id}.run. The field data_source.client_id should be left empty in the input request, as the API will create a new OAuth client on behalf of the caller. On the other hand, data_source.scopes usually needs to be set when there are OAuth scopes that need to be granted by end users. Note that this method needs a longer deadline due to the 60-second SLO on Pub/Sub admin operations; this also applies to updating and deleting a data source definition.
Represents the request of the CreateDataSourceDefinition method.
The BigQuery project id with which the data source definition is associated. Must be in the form: `projects/{project_id}/locations/{location_id}`
Data source definition.
Updates an existing data source definition. If changing supported_location_ids, triggers same effects as mentioned in "Create a data source definition."
Represents the request of the UpdateDataSourceDefinition method.
Data source definition.
Update field mask.
Deletes a data source definition. All of the transfer configs associated with the data source definition (if any) must first be deleted by the user in ALL regions before the data source definition can be deleted. This method is primarily meant for deleting data sources created during the testing stage. If the data source is referenced by transfer configs in the region specified in the request URL, the method fails immediately. If the data source is not used by any transfer configs in the current region (e.g., US) but is used in another region (e.g., EU), the method succeeds in region US but fails when the deletion operation is replicated to region EU. Eventually, the system replicates the data source definition back from EU to US to bring all regions to consistency, with the net effect that the data source appears to be 'undeleted' in the US region.
Represents the request of the DeleteDataSourceDefinition method. All transfer configs associated with the data source must be deleted first, before the data source can be deleted.
The field will contain name of the resource requested, for example: `projects/{project_id}/locations/{location_id}/dataSourceDefinitions/{data_source_id}`
Retrieves an existing data source definition.
Represents the request of the GetDataSourceDefinition method.
The field will contain name of the resource requested.
Lists supported data source definitions.
Represents the request of the ListDataSourceDefinitions method.
The BigQuery project id for which data sources should be returned. Must be in the form: `projects/{project_id}/locations/{location_id}`
Pagination token, which can be used to request a specific page of `ListDataSourceDefinitionsRequest` list results. For multiple-page results, `ListDataSourceDefinitionsResponse` outputs a `next_page` token, which can be used as the `page_token` value to request the next page of the list results.
Page size. The default page size is the maximum value of 1000 results.
Returns a list of supported data source definitions.
List of supported data source definitions.
Output only. The next-pagination token. For multiple-page list results, this token can be used as the `ListDataSourceDefinitionsRequest.page_token` to request the next page of the list results.
The Google BigQuery Data Transfer Service API enables BigQuery users to configure the transfer of their data from other Google Products into BigQuery. This service contains the methods that are exposed to end users; it backs the frontend.
Retrieves a supported data source and returns its settings, which can be used for UI rendering.
A request to get data source info.
Required. The field will contain name of the resource requested, for example: `projects/{project_id}/dataSources/{data_source_id}`
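As a usage sketch, assuming the google-cloud-bigquery-datatransfer Python client library (not part of this reference) and hypothetical project/data source ids:

```python
# Sketch: fetch a data source's settings for UI rendering.
from google.cloud import bigquery_datatransfer_v1

client = bigquery_datatransfer_v1.DataTransferServiceClient()
data_source = client.get_data_source(
    name="projects/my-project/dataSources/scheduled_query"  # hypothetical ids
)
print(data_source.display_name, data_source.default_schedule)
```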
Lists supported data sources and returns their settings, which can be used for UI rendering.
Request to list supported data sources and their data transfer settings.
Required. The BigQuery project id for which data sources should be returned. Must be in the form: `projects/{project_id}`
Pagination token, which can be used to request a specific page of `ListDataSourcesRequest` list results. For multiple-page results, `ListDataSourcesResponse` outputs a `next_page` token, which can be used as the `page_token` value to request the next page of list results.
Page size. The default page size is the maximum value of 1000 results.
Returns list of supported data sources and their metadata.
List of supported data sources and their transfer settings.
Output only. The next-pagination token. For multiple-page list results, this token can be used as the `ListDataSourcesRequest.page_token` to request the next page of list results.
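A hedged sketch of listing data sources with the same Python client; the returned pager follows the next_page token for you, so explicit page_token handling is usually unnecessary:

```python
# Sketch: list supported data sources and their settings.
from google.cloud import bigquery_datatransfer_v1

client = bigquery_datatransfer_v1.DataTransferServiceClient()
for data_source in client.list_data_sources(parent="projects/my-project"):
    print(data_source.data_source_id, data_source.display_name)
```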
Creates a new data transfer configuration.
A request to create a data transfer configuration. If new credentials are needed for this transfer configuration, an authorization code must be provided. If an authorization code is provided, the transfer configuration will be associated with the user id corresponding to the authorization code. Otherwise, the transfer configuration will be associated with the calling user.
Required. The BigQuery project id where the transfer configuration should be created. Must be in the form: `projects/{project_id}/locations/{location_id}`. If the specified location does not match the location of the destination BigQuery dataset, the request will fail.
Required. Data transfer configuration to create.
Optional OAuth2 authorization code to use with this transfer configuration. This is required if new credentials are needed, as indicated by `CheckValidCreds`. In order to obtain authorization_code, please make a request to https://www.gstatic.com/bigquerydatatransfer/oauthz/auth?client_id=<datatransferapiclientid>&scope=<data_source_scopes>&redirect_uri=<redirect_uri> * client_id should be the OAuth client_id of the BigQuery DTS API for the given data source, as returned by the ListDataSources method. * data_source_scopes are the scopes returned by the ListDataSources method. * redirect_uri is an optional parameter. If not specified, the authorization code is posted to the opener of the authorization flow window. Otherwise it will be sent to the redirect uri. A special value of urn:ietf:wg:oauth:2.0:oob means that the authorization code should be returned in the title bar of the browser, with the page text prompting the user to copy the code and paste it in the application.
Optional version info. If users want to find a very recent access token, that is, immediately after approving access, users have to set the version_info claim in the token request. To obtain the version_info, users must use the "none+gsession" response type, which returns a version_info back in the authorization response to be put in a JWT claim in the token request.
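Putting the fields above together, a hedged Python sketch of creating a transfer configuration; the dataset, query, and schedule are hypothetical, and no authorization_code is passed on the assumption that the calling user's credentials are already valid (see CheckValidCreds below):

```python
# Sketch: create a scheduled-query transfer configuration.
from google.cloud import bigquery_datatransfer_v1

client = bigquery_datatransfer_v1.DataTransferServiceClient()
transfer_config = bigquery_datatransfer_v1.TransferConfig(
    destination_dataset_id="my_dataset",
    display_name="Nightly load",
    data_source_id="scheduled_query",            # hypothetical data source
    params={"query": "SELECT CURRENT_DATE() AS d"},
    schedule="every 24 hours",
)
created = client.create_transfer_config(
    parent="projects/my-project/locations/us",
    transfer_config=transfer_config,
)
print("Created:", created.name)
```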
Updates a data transfer configuration. All fields must be set, even if they are not updated.
A request to update a transfer configuration. To update the user id of the transfer configuration, an authorization code needs to be provided.
Required. Data transfer configuration to update.
Optional OAuth2 authorization code to use with this transfer configuration. If it is provided, the transfer configuration will be associated with the authorizing user. In order to obtain authorization_code, please make a request to https://www.gstatic.com/bigquerydatatransfer/oauthz/auth?client_id=<datatransferapiclientid>&scope=<data_source_scopes>&redirect_uri=<redirect_uri> * client_id should be the OAuth client_id of the BigQuery DTS API for the given data source, as returned by the ListDataSources method. * data_source_scopes are the scopes returned by the ListDataSources method. * redirect_uri is an optional parameter. If not specified, the authorization code is posted to the opener of the authorization flow window. Otherwise it will be sent to the redirect uri. A special value of urn:ietf:wg:oauth:2.0:oob means that the authorization code should be returned in the title bar of the browser, with the page text prompting the user to copy the code and paste it in the application.
Required. List of fields to be updated in this request.
Optional version info. If users want to find a very recent access token, that is, immediately after approving access, users have to set the version_info claim in the token request. To obtain the version_info, users must use the "none+gsession" response type, which returns a version_info back in the authorization response to be put in a JWT claim in the token request.
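A hedged sketch of an update: fetch the existing config, change a field, and send it back with an update_mask naming the changed field (per the description above, the config should be fully populated):

```python
# Sketch: rename an existing transfer configuration.
from google.cloud import bigquery_datatransfer_v1
from google.protobuf import field_mask_pb2

client = bigquery_datatransfer_v1.DataTransferServiceClient()
config = client.get_transfer_config(
    name="projects/my-project/locations/us/transferConfigs/my-config"
)
config.display_name = "Nightly load (renamed)"
updated = client.update_transfer_config(
    transfer_config=config,
    update_mask=field_mask_pb2.FieldMask(paths=["display_name"]),
)
print("New display name:", updated.display_name)
```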
Deletes a data transfer configuration, including any associated transfer runs and logs.
A request to delete data transfer information. All associated transfer runs and log messages will be deleted as well.
Required. The field will contain name of the resource requested, for example: `projects/{project_id}/transferConfigs/{config_id}`
Returns information about a data transfer config.
A request to get data transfer information.
Required. The field will contain name of the resource requested, for example: `projects/{project_id}/transferConfigs/{config_id}`
Returns information about all data transfers in the project.
A request to list data transfers configured for a BigQuery project.
Required. The BigQuery project id for which transfer configurations should be returned: `projects/{project_id}`.
When specified, only configurations of requested data sources are returned.
Pagination token, which can be used to request a specific page of `ListTransfersRequest` list results. For multiple-page results, `ListTransfersResponse` outputs a `next_page` token, which can be used as the `page_token` value to request the next page of list results.
Page size. The default page size is the maximum value of 1000 results.
The returned list of pipelines in the project.
Output only. The stored pipeline transfer configurations.
Output only. The next-pagination token. For multiple-page list results, this token can be used as the `ListTransferConfigsRequest.page_token` to request the next page of list results.
Creates transfer runs for a time range [start_time, end_time]. For each date - or whatever granularity the data source supports - in the range, one transfer run is created. Note that runs are created per UTC time in the time range. DEPRECATED: use StartManualTransferRuns instead.
A request to schedule transfer runs for a time range.
Required. Transfer configuration name in the form: `projects/{project_id}/transferConfigs/{config_id}`.
Required. Start time of the range of transfer runs. For example, `"2017-05-25T00:00:00+00:00"`.
Required. End time of the range of transfer runs. For example, `"2017-05-30T00:00:00+00:00"`.
A response to schedule transfer runs for a time range.
The transfer runs that were scheduled.
Start manual transfer runs to be executed now with schedule_time equal to current time. The transfer runs can be created for a time range where the run_time is between start_time (inclusive) and end_time (exclusive), or for a specific run_time.
A request to start manual transfer runs.
Transfer configuration name in the form: `projects/{project_id}/transferConfigs/{config_id}`.
The requested time specification - this can be a time range or a specific run_time.
Time range for the transfer runs that should be started.
Specific run_time for a transfer run to be started. The requested_run_time must not be in the future.
A response to start manual transfer runs.
The transfer runs that were created.
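A hedged sketch of starting a single manual run for a specific, past run_time; the request-object form is used because the method takes either a time range or a specific requested_run_time:

```python
# Sketch: start one manual transfer run for a specific run_time.
import datetime
from google.cloud import bigquery_datatransfer_v1
from google.protobuf import timestamp_pb2

run_time = timestamp_pb2.Timestamp()
run_time.FromDatetime(datetime.datetime(2017, 5, 25, tzinfo=datetime.timezone.utc))

client = bigquery_datatransfer_v1.DataTransferServiceClient()
response = client.start_manual_transfer_runs(
    request={
        "parent": "projects/my-project/locations/us/transferConfigs/my-config",
        "requested_run_time": run_time,
    }
)
for run in response.runs:
    print(run.name, run.state)
```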
Returns information about the particular transfer run.
A request to get data transfer run information.
Required. The field will contain name of the resource requested, for example: `projects/{project_id}/transferConfigs/{config_id}/runs/{run_id}`
Deletes the specified transfer run.
A request to delete data transfer run information.
Required. The field will contain name of the resource requested, for example: `projects/{project_id}/transferConfigs/{config_id}/runs/{run_id}`
Returns information about running and completed jobs.
A request to list data transfer runs. UI can use this method to show/filter specific data transfer runs. The data source can use this method to request all scheduled transfer runs.
Required. Name of transfer configuration for which transfer runs should be retrieved. Format of transfer configuration resource name is: `projects/{project_id}/transferConfigs/{config_id}`.
When specified, only transfer runs with requested states are returned.
Pagination token, which can be used to request a specific page of `ListTransferRunsRequest` list results. For multiple-page results, `ListTransferRunsResponse` outputs a `next_page` token, which can be used as the `page_token` value to request the next page of list results.
Page size. The default page size is the maximum value of 1000 results.
Indicates how run attempts are to be pulled.
The returned list of pipelines in the project.
Output only. The stored pipeline transfer runs.
Output only. The next-pagination token. For multiple-page list results, this token can be used as the `ListTransferRunsRequest.page_token` to request the next page of list results.
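A hedged sketch of listing only failed runs for one configuration, using the states filter described above:

```python
# Sketch: list the failed runs of a transfer configuration.
from google.cloud import bigquery_datatransfer_v1

client = bigquery_datatransfer_v1.DataTransferServiceClient()
runs = client.list_transfer_runs(
    request={
        "parent": "projects/my-project/locations/us/transferConfigs/my-config",
        "states": [bigquery_datatransfer_v1.TransferState.FAILED],
    }
)
for run in runs:
    print(run.run_time, run.error_status.message)
```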
Returns user facing log messages for the data transfer run.
A request to get user facing log messages associated with a data transfer run.
Required. Transfer run name in the form: `projects/{project_id}/transferConfigs/{config_id}/runs/{run_id}`.
Pagination token, which can be used to request a specific page of `ListTransferLogsRequest` list results. For multiple-page results, `ListTransferLogsResponse` outputs a `next_page` token, which can be used as the `page_token` value to request the next page of list results.
Page size. The default page size is the maximum value of 1000 results.
Message types to return. If not populated, INFO, WARNING, and ERROR messages are returned.
The returned list of transfer run messages.
Output only. The stored pipeline transfer messages.
Output only. The next-pagination token. For multiple-page list results, this token can be used as the `ListTransferLogsRequest.page_token` to request the next page of list results.
Returns true if valid credentials exist for the given data source and requesting user. Some data sources don't support service accounts, so we need to talk to them on behalf of the end user. This API just checks whether we have an OAuth token for the particular user, which is a prerequisite before the user can create a transfer config.
A request to determine whether the user has valid credentials. This method is used to limit the number of OAuth popups in the user interface. The user id is inferred from the API call context. If the data source has the Google+ authorization type, this method returns false, as it cannot be determined whether the credentials are already valid merely based on the user id.
Required. The data source in the form: `projects/{project_id}/dataSources/{data_source_id}`
A response indicating whether the credentials exist and are valid.
If set to `true`, the credentials exist and are valid.
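A hedged sketch of using CheckValidCreds to decide whether an OAuth popup is needed before creating a transfer config:

```python
# Sketch: check for existing valid credentials before asking the user to authorize.
from google.cloud import bigquery_datatransfer_v1

client = bigquery_datatransfer_v1.DataTransferServiceClient()
creds = client.check_valid_creds(
    name="projects/my-project/dataSources/scheduled_query"  # hypothetical ids
)
if not creds.has_valid_creds:
    print("Obtain an authorization_code from the user before CreateTransferConfig.")
```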
Represents data source metadata. Metadata is sufficient to render UI and request proper OAuth tokens.
Used as response type in: DataTransferService.GetDataSource
Output only. Data source resource name.
Data source id.
User friendly data source name.
User friendly data source description string.
Data source client id which should be used to receive refresh token.
API auth scopes for which a refresh token needs to be obtained. These are scopes needed by a data source to prepare data and ingest it into BigQuery, e.g., https://www.googleapis.com/auth/bigquery
Deprecated. This field has no effect.
Deprecated. This field has no effect.
The number of seconds to wait for an update from the data source before the Data Transfer Service marks the transfer as FAILED.
Default data transfer schedule. Examples of valid schedules include: `1st,3rd monday of month 15:30`, `every wed,fri of jan,jun 13:15`, and `first sunday of quarter 00:00`.
Specifies whether the data source supports a user defined schedule, or operates on the default schedule. When set to `true`, user can override default schedule.
Data source parameters.
URL for the help document for this data source.
Indicates the type of authorization.
Specifies whether the data source supports automatic data refresh for the past few days, and how it's supported. For some data sources, data might not be complete until a few days later, so it's useful to refresh data automatically.
Default data refresh window in days. Only meaningful when `data_refresh_type` = `SLIDING_WINDOW`.
Disables backfilling and manual run scheduling for the data source.
The minimum interval for scheduler to schedule runs.
The type of authorization needed for this data source.
Type unspecified.
Use OAuth 2 authorization codes that can be exchanged for a refresh token on the backend.
Return an authorization code for a given Google+ page that can then be exchanged for a refresh token on the backend.
Represents how the data source supports data auto refresh.
The data source does not support data auto refresh. This is the default value.
The data source supports data auto refresh, and runs will be scheduled for the past few days. Does not allow custom values to be set for each transfer config.
The data source supports data auto refresh, and runs will be scheduled for the past few days. Allows custom values to be set for each transfer config.
Represents the data source definition.
Used as response type in: DataSourceService.CreateDataSourceDefinition, DataSourceService.GetDataSourceDefinition, DataSourceService.UpdateDataSourceDefinition
The resource name of the data source definition. Data source definition names have the form `projects/{project_id}/locations/{location}/dataSourceDefinitions/{data_source_id}`.
Data source metadata.
The Pub/Sub topic to be used for broadcasting a message when a transfer run is created. Both this topic and transfer_config_pubsub_topic can be set to a custom topic. By default, both topics are auto-generated if none of them is provided when creating the definition. However, if one topic is manually set, the other topic has to be manually set as well. The only difference is that transfer_run_pubsub_topic must be a non-empty Pub/Sub topic, but transfer_config_pubsub_topic can be set to empty. The comments about "{location}" for transfer_config_pubsub_topic apply here too.
Duration which should be added to schedule_time to calculate run_time when job is scheduled. Only applicable for automatically scheduled transfer runs. Used to start a run early on a data source that supports continuous data refresh to compensate for unknown timezone offsets. Use a negative number to start a run late for data sources not supporting continuous data refresh.
Support e-mail address of the OAuth client's Brand, which contains the consent screen data.
When a service account is specified, BigQuery will share the created dataset with the given service account. Also, this service account will be eligible to perform status updates and message logging for data transfer runs for the corresponding data_source_id.
Is the data source disabled? If true, data_source is not visible. The API will also stop returning any data transfer configs and/or runs associated with the data source. This setting has higher priority than whitelisted_project_ids.
The Pub/Sub topic to use for broadcasting a message for transfer config. If empty, a message will not be broadcast. Both this topic and transfer_run_pubsub_topic are auto-generated if neither is provided when creating the definition. It is recommended to provide transfer_config_pubsub_topic if a user-owned transfer_run_pubsub_topic is provided. Otherwise, it will be set to empty. If "{location}" is found in the value, it means that the data source wants to handle messages separately for datasets in different regions. We will replace {location} with the actual dataset location to form the actual topic name. For example, projects/connector/topics/scheduler-{location} could become projects/connector/topics/scheduler-us. If "{location}" is not found, the input value is used as the topic name.
Supported location_ids used for deciding in which locations Pub/Sub topics need to be created. If custom Pub/Sub topics are used and they contain '{location}', the location_ids will be used for validating the topics by replacing '{location}' with the individual location in the list. The valid values are the "location_id" field of the response of `GET https://bigquerydatatransfer.googleapis.com/v1/{name=projects/*}/locations`. In addition, if the data source needs to support all available regions, supported_location_ids can be set to "global" (a single string element). When "global" is specified: 1) the data source implementation is expected to stage the data in the proper region of the destination dataset; 2) the data source developer should be aware of the implications (e.g., network traffic latency, potential charges associated with cross-region traffic, etc.) of supporting the "global" region.
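To make the '{location}' substitution concrete, a small illustrative sketch (topic template taken from the example above; the location ids are hypothetical):

```python
# Sketch: how a custom Pub/Sub topic template is resolved per dataset location.
topic_template = "projects/connector/topics/scheduler-{location}"
supported_location_ids = ["us", "eu"]

for location_id in supported_location_ids:
    # "{location}" is replaced with the actual dataset location.
    print(topic_template.replace("{location}", location_id))
# projects/connector/topics/scheduler-us
# projects/connector/topics/scheduler-eu
```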
Represents a data source parameter with validation rules, so that parameters can be rendered in the UI. These parameters are given to us by supported data sources, and include all needed information for rendering and validation. Thus, whoever uses this API can decide to generate either a generic UI or custom data source specific forms.
Parameter identifier.
Parameter display name in the user interface.
Parameter description.
Parameter type.
Is parameter required.
Deprecated. This field has no effect.
Regular expression which can be used for parameter validation.
All possible values for the parameter.
For integer and double values specifies minimum allowed value.
For integer and double values specifies maximum allowed value.
Deprecated. This field has no effect.
Description of the requirements for this field, in case the user input does not fulfill the regex pattern or min/max values.
URL to a help document to further explain the naming requirements.
Cannot be changed after initial creation.
Deprecated. This field has no effect.
If true, it should not be used in new transfers, and it should not be visible to users.
Parameter type.
Type unspecified.
String parameter.
Integer parameter (64-bits). Will be serialized to json as string.
Double precision floating point parameter.
Boolean parameter.
Deprecated. This field has no effect.
Page ID for a Google+ Page.
Describes data which should be imported.
SQL query to run. When empty, API checks that there is only one table_def specified and loads this table. Only Standard SQL queries are accepted. Legacy SQL is not allowed.
Table where results should be written.
The description of a destination table. This can be several sentences or paragraphs describing the table contents in detail.
When used WITHOUT the "sql" parameter, describes the schema of the destination table. When used WITH the "sql" parameter, describes tables with data stored outside of BigQuery.
Inline code for User-defined function resources. Ignored when "sql" parameter is empty.
Specifies the action if the destination table already exists.
Encoding of input data in CSV/JSON format.
Default encoding (UTF8).
ISO_8859_1 encoding.
UTF8 encoding.
Defines schema of a field in the imported data.
Field name. Matches: [A-Za-z_][A-Za-z_0-9]{0,127}
Field type
Is field repeated.
Description for this field.
Present iff type == RECORD.
Field type.
Illegal value.
64K, UTF8.
64-bit signed.
64-bit IEEE floating point.
Aggregate type.
64K, Binary.
2-valued.
64-bit signed usec since UTC epoch.
Civil date - Year, Month, Day.
Civil time - Hour, Minute, Second, Microseconds.
Combination of civil date and civil time.
Numeric type with 38 decimal digits of precision and 9 decimal digits of scale.
Geography object (go/googlesql_geography).
Data format.
Unspecified format. In this case, we have to infer the format from the data source.
CSV format.
Newline-delimited JSON.
Avro format. See http://avro.apache.org .
RecordIO.
ColumnIO.
Capacitor.
Parquet format. See https://parquet.apache.org .
ORC format. See https://orc.apache.org .
Describes schema of the data to be ingested.
One field per column in the record.
External table definition. These tables can be referenced with 'name' in the query and can be read just like any other table.
BigQuery table_id (required). This will be used to reference this table in the query.
URIs for the data to be imported. All URIs must be from the same storage system.
Describes the format of the data in source_uri.
Specifies the maximum number of bad records that can be ignored. If bad records exceed this threshold, the query is aborted.
Character encoding of the input when applicable (CSV, JSON). Defaults to UTF8.
CSV specific options.
Optional schema for the data. When not specified for JSON and CSV formats we will try to detect it automatically.
Indicates whether extra values that are not represented in the table schema are allowed.
CSV specific options.
The delimiter. We currently restrict this to U+0001 to U+00FF and apply additional constraints during validation.
Whether CSV files are allowed to have quoted newlines. If quoted newlines are allowed, we can't split CSV files.
The quote character. We currently restrict this to U+0000 to U+00FF and apply additional constraints during validation. Set to '\0' to indicate no quote is used.
Number of leading rows to skip.
Accept rows that are missing trailing optional columns.
Represents which runs should be pulled.
All runs should be returned.
Only latest run per day should be returned.
Options customizing the data transfer schedule.
If true, automatic scheduling of data transfer runs for this configuration will be disabled. The runs can be started on ad-hoc basis using StartManualTransferRuns API. When automatic scheduling is disabled, the TransferConfig.schedule field will be ignored.
Specifies time to start scheduling transfer runs. The first run will be scheduled at or after the start time according to a recurrence pattern defined in the schedule string. The start time can be changed at any moment. The time when a data transfer can be triggered manually is not limited by this option.
Defines time to stop scheduling transfer runs. A transfer run cannot be scheduled at or after the end time. The end time can be changed at any moment. The time when a data transfer can be triggered manually is not limited by this option.
A specification for a time range. This will request transfer runs with run_time between start_time (inclusive) and end_time (exclusive).
Start time of the range of transfer runs. For example, `"2017-05-25T00:00:00+00:00"`. The start_time must be strictly less than the end_time. Creates transfer runs where run_time is in the range between start_time (inclusive) and end_time (exclusive).
End time of the range of transfer runs. For example, `"2017-05-30T00:00:00+00:00"`. The end_time must not be in the future. Creates transfer runs where run_time is in the range between start_time (inclusive) and end_time (exclusive).
Represents a data transfer configuration. A transfer configuration contains all metadata needed to perform a data transfer. For example, `destination_dataset_id` specifies where data should be stored. When a new transfer configuration is created, the specified `destination_dataset_id` is created when needed and shared with the appropriate data source service account.
Used as response type in: DataTransferService.CreateTransferConfig, DataTransferService.GetTransferConfig, DataTransferService.UpdateTransferConfig
The resource name of the transfer config. Transfer config names have the form `projects/{project_id}/locations/{region}/transferConfigs/{config_id}`. The name is automatically generated based on the config_id specified in CreateTransferConfigRequest, along with project_id and region. If config_id is not provided, a value (usually a UUID, though this is not guaranteed or required) will be generated for it.
The destination of the transfer config.
The BigQuery target dataset id.
User specified display name for the data transfer.
Data source id. Cannot be changed once data transfer is created.
Data transfer specific parameters.
Data transfer schedule. If the data source does not support a custom schedule, this should be empty. If it is empty, the default value for the data source will be used. The specified times are in UTC. Examples of valid format: `1st,3rd monday of month 15:30`, `every wed,fri of jan,jun 13:15`, and `first sunday of quarter 00:00`. See more explanation about the format here: https://cloud.google.com/appengine/docs/flexible/python/scheduling-jobs-with-cron-yaml#the_schedule_format NOTE: the granularity should be at least 8 hours, or less frequent.
Options customizing the data transfer schedule.
The number of days to look back to automatically refresh the data. For example, if `data_refresh_window_days = 10`, then every day BigQuery reingests data for [today-10, today-1], rather than ingesting data for just [today-1]. Only valid if the data source supports the feature. Set the value to 0 to use the default value.
Is this config disabled? When set to true, no runs are scheduled for a given transfer.
Output only. Data transfer modification time. Ignored by server on input.
Output only. Next time when data transfer will run.
Output only. State of the most recently updated transfer run.
Deprecated. Unique ID of the user on whose behalf transfer is done.
Output only. Region in which BigQuery dataset is located.
Represents a user facing message for a particular data transfer run.
Time when message was logged.
Message severity.
Message text.
Represents data transfer user facing message severity.
No severity specified.
Informational message.
Warning message.
Error message.
Represents a data transfer run.
Used as response type in: DataSourceService.UpdateTransferRun, DataTransferService.GetTransferRun
The resource name of the transfer run. Transfer run names have the form `projects/{project_id}/locations/{location}/transferConfigs/{config_id}/runs/{run_id}`. The name is ignored when creating a transfer run.
Minimum time after which a transfer run can be started.
For batch transfer runs, specifies the date and time of the data that should be ingested.
Status of the transfer run.
Output only. Time when transfer run was started. Parameter ignored by server for input requests.
Output only. Time when transfer run ended. Parameter ignored by server for input requests.
Output only. Last time the data transfer run state was updated.
Output only. Data transfer specific parameters.
Data transfer destination.
Output only. The BigQuery target dataset id.
Output only. Data source id.
Data transfer run state. Ignored for input requests.
Deprecated. Unique ID of the user on whose behalf transfer is done.
Output only. Describes the schedule of this transfer run if it was created as part of a regular schedule. For batch transfer runs that are scheduled manually, this is empty. NOTE: the system might choose to delay the schedule depending on the current load, so `schedule_time` doesn't always match this.
Represents data transfer run state.
State placeholder.
Data transfer is scheduled and is waiting to be picked up by data transfer backend.
Data transfer is in progress.
Data transfer completed successfully.
Data transfer failed.
Data transfer is cancelled.
DEPRECATED. Represents data transfer type.
Invalid or Unknown transfer type placeholder.
Batch data transfer.
Streaming data transfer. Streaming data source currently doesn't support multiple transfer configs per project.
Options for writing to the table. The WRITE_EMPTY option is intentionally excluded from the enum and is not supported by the data transfer service.
The default writeDisposition.
Overwrites the table data.
The data is appended to the table. Note that duplication might happen if this mode is used.