Service for exporting data from TensorBoard.dev.
Stream the experiment_id of all the experiments owned by the caller.
Request to stream the experiment_id of all the experiments owned by the caller from TensorBoard.dev.
Timestamp to get a consistent snapshot of the data in the database. This is useful when making multiple read RPCs and needing the data to be consistent across the read calls.
User ID defaults to the caller, but may be set to a different user for internal Takeout processes operating on behalf of a user.
Limits the number of experiment IDs returned. This is useful for checking whether a user might have any data (by setting limit=1) and for previewing the list of experiments. TODO(@karthikv2k): Support pagination.
Field mask for what experiment data to return via the `experiments` field on the response. If not specified, this should be interpreted the same as an empty message: i.e., only the experiment ID should be returned. Other fields of `Experiment` will be populated if their corresponding bits in the `ExperimentMask` are set. The server may choose to populate fields that are not explicitly requested.
Streams experiment metadata (ID, creation time, etc.) from TensorBoard.dev.
Deprecated in favor of `experiments`. If a response has `experiments` set, clients should ignore `experiment_ids` entirely. Otherwise, clients should treat `experiment_ids` as a list of `experiments` for which only the `experiment_id` field is set, with the understanding that the other fields were not populated regardless of the requested field mask. For example, the following responses should be treated the same:

  # Response 1
  experiment_ids: "123"
  experiment_ids: "456"

  # Response 2
  experiments { experiment_id: "123" }
  experiments { experiment_id: "456" }

  # Response 3
  experiment_ids: "789"
  experiments { experiment_id: "123" }
  experiments { experiment_id: "456" }

See documentation on `experiments` for batching semantics.
List of experiments owned by the user. The entire list of experiments owned by the user is streamed in batches and each batch contains a list of experiments. A consumer of this stream needs to concatenate all these lists to get the full response. The order of experiments in the stream is not defined. Every response will contain at least one experiment. These messages may be partially populated, in accordance with the field mask given in the request.
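A minimal consumer sketch of this batching contract, assuming the RPC yields an iterator of responses whose `experiments` field is a list (the objects here are stand-ins, not the real generated message classes):

```python
# Sketch: concatenating streamed experiment batches into one list.
# `stream` stands in for the iterator returned by the StreamExperiments
# RPC; each response carries at least one experiment, and the order
# across the stream is not defined, so no sorting is assumed here.

def collect_experiments(stream):
    """Concatenate the `experiments` field of every streamed response."""
    experiments = []
    for response in stream:
        experiments.extend(response.experiments)
    return experiments
```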
Stream scalars for all the runs and tags in an experiment.
Request to stream scalars from all the runs and tags in an experiment.
The permanent ID of the experiment whose data needs to be streamed.
Timestamp to get a consistent snapshot of the data in the database. This is useful when making multiple read RPCs and needing the data to be consistent across the read calls. Should be the same as the read timestamp used for the corresponding `StreamExperimentsRequest` for consistency.
Streams data from all the runs and tags in an experiment. Each stream result contains data for only a single tag from a single run. For example, if there are five runs and each run has two tags, the RPC will return a stream of at least ten `StreamExperimentDataResponse`s, each one containing either the scalars or the tensors for one tag. The values from a single tag may be split among multiple responses, so consumers need to aggregate information from the entire stream to get data for the entire experiment. Empty experiments will have zero stream results. Empty runs (runs without any tags) need not be supported by a hosted service.
Name of the tag whose data is contained in this response.
Name of the run that contains the tag `tag_name`.
The metadata of the tag `tag_name`.
Data to store for the tag `tag_name`.
Scalar data to store.
Tensor data to store.
Blob sequences to store.
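The per-(run, tag) splitting described above can be handled with a small aggregation pass. A sketch, where `run_name` and `tag_name` match the response fields above and a generic `points` attribute stands in for the scalar/tensor payload:

```python
from collections import defaultdict

# Sketch: merging StreamExperimentData responses. A single (run, tag)
# pair may span several responses, so values are accumulated per key
# until the stream is exhausted.

def aggregate(stream):
    data = defaultdict(list)
    for response in stream:
        data[(response.run_name, response.tag_name)].extend(response.points)
    return dict(data)
```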
Stream a blob as chunks for a given blob_id.
The bytes in this chunk.
The position in the blob where this chunk begins. This must equal the sum of the sizes of the chunks sent so far. Ignored if no data is provided.
Indicates that this is the last chunk of the stream.
CRC32C of the entire blob. This should be set when final_chunk=True, to protect against data corruption.
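The offset rule above (each chunk begins at the sum of the sizes of the chunks sent so far) can be sketched as follows. Note that zlib.crc32 is used only as a stand-in checksum; the service requires CRC32C (available, e.g., from the google-crc32c package):

```python
import zlib

# Sketch of the chunk-offset invariant for a blob stream: each chunk's
# offset equals the total size of all earlier chunks, the last chunk is
# flagged, and the whole-blob checksum rides on the final chunk.

def make_chunks(blob: bytes, chunk_size: int):
    chunks, offset = [], 0
    for start in range(0, len(blob), chunk_size):
        data = blob[start:start + chunk_size]
        final = start + chunk_size >= len(blob)
        chunk = {"data": data, "offset": offset, "final_chunk": final}
        if final:
            # Stand-in for CRC32C of the entire blob.
            chunk["blob_crc32c"] = zlib.crc32(blob)
        chunks.append(chunk)
        offset += len(data)
    return chunks
```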
Service for writing data to TensorBoard.dev.
Request for a new location to write TensorBoard readable events.
This is currently empty on purpose. No information is necessary to request a URL, except authorization, of course, which doesn't come within the proto.
User provided name of the experiment.
User provided description of the experiment, in markdown source format.
Carries all information necessary to: 1. Inform the user where to navigate to see their TensorBoard. 2. Subsequently load (Scalars, Tensors, etc.) to the specified location.
Service-wide unique identifier of an uploaded log dir, e.g., "1r9d0kQkh2laODSZcQXWP".
URL the user should navigate to in order to see their TensorBoard, e.g., "https://example.com/public/1r9d0kQkh2laODSZcQXWP".
Request to mutate metadata associated with an experiment.
Request to change the metadata of one experiment.
Description of the data to set. The experiment_id field must match an experiment_id in the database. The remaining fields should be set to the desired metadata to be written. Only those fields marked True in the experiment_mask will be written. The service may deny modification of some metadata used for internal bookkeeping, such as num_scalars, etc.
Field mask for what experiment data to set. The service may deny requests to set some metadata.
Response for setting experiment metadata.
This is empty on purpose.
(message has no fields)
Request that an experiment be deleted, along with all tags and scalars that it contains. This call may only be made by the original owner of the experiment.
Service-wide unique identifier of an uploaded log dir, e.g., "1r9d0kQkh2laODSZcQXWP".
This is empty on purpose.
(message has no fields)
Request that unreachable data be purged. Used only for testing; disabled in production.
Only used for testing; corresponding RPC is disabled in prod.
Maximum number of entities of a given kind to purge at once (e.g., maximum number of tags to purge). Required; must be positive.
Only used for testing; corresponding RPC is disabled in prod.
Stats about how many elements were purged. Compare to the batch limit specified in the request to estimate whether the backlog has any more items.
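A test-harness sketch of draining the backlog with repeated purge calls, using the compare-counts-to-the-batch-limit rule above. `purge_fn` stands in for the (test-only) purge RPC and returns a plain dict of counts:

```python
# Sketch: draining the purge backlog. The returned counts are upper
# bounds, so the loop stops once every count is strictly below the
# batch limit, suggesting nothing was left to purge.

def purge_all(purge_fn, batch_limit: int, max_rounds: int = 100):
    for _ in range(max_rounds):
        stats = purge_fn(batch_limit)
        if all(count < batch_limit for count in stats.values()):
            return stats
    raise RuntimeError("purge backlog not drained after max_rounds")
```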
Request additional scalar data be stored in TensorBoard.dev.
Carries all that is needed to add additional run data to the hosted service.
Which experiment to write to - corresponding to one hosted TensorBoard URL. The requester must have authorization to write to this location.
Data to append to the existing storage at the experiment_id.
Everything the caller needs to know about how the writing went. (Currently empty)
This is empty on purpose.
(message has no fields)
Request additional tensor data be stored in TensorBoard.dev.
Carries all that is needed to add tensor data to the hosted service.
Which experiment to write to - corresponding to one hosted TensorBoard URL. The requester must have authorization to write to this location.
Data to append to the existing storage at the experiment_id.
Everything the caller needs to know about how the writing went. (Currently empty)
This is empty on purpose.
(message has no fields)
Request to obtain a specific BlobSequence entry, creating it if needed, to be subsequently populated with blobs.
Obtain a unique ID for a blob sequence, given the composite key (experiment_id, run, tag, step). If such a blob sequence already exists, return its ID. If not, create it first, and return the new ID.
Service-wide unique identifier of an uploaded log dir, e.g., "1r9d0kQkh2laODSZcQXWP".
The name of the run to which the blob sequence belongs, for example "/some/path/mnist_experiments/run1/".
The name of the tag to which the blob sequence belongs, for example "loss".
Step index within the run.
Timestamp of the creation of this blob sequence.
The total number of elements expected in the sequence. This effectively declares a number of initially empty 'upload slots', to be filled with subsequent WriteBlob RPCs.
Note that metadata.plugin_data.content does not carry the payload.
A unique ID for the requested blob sequence.
Request the current status of blob data being stored in TensorBoard.dev, to support resumable uploads.
The ID of the BlobSequence of which this blob is a member.
The position of this Blob within the BlobSequence.
State of the object (still appending vs. complete).
Size of the object in bytes. In the case of a partial upload, this reflects only the data actually received so far.
CRC32C of the blob data stored so far, i.e., over the byte range [0, size).
Request additional blob data be stored in TensorBoard.dev.
A single chunk of the blob write stream. Note that the WriteBlobRequest does not mirror the nested structure of WriteScalarRequest, because we only ever send one blob at a time.
The ID of the BlobSequence of which this blob is a member.
The position of this Blob within the BlobSequence.
The bytes in this chunk.
The position in the blob where this chunk begins. This must equal the sum of the sizes of the chunks sent so far. Ignored if no data is provided.
CRC32C of current data buffer. Clients must include the crc32c for every data buffer, to protect against data corruption. Note that for multi-shot writes, specifying the crc32c for every data buffer provides stronger protection than just providing the final_crc32c at the end of the upload.
Indicates that this is the last chunk of the stream.
CRC32C of the entire blob. Required, to protect against data corruption. This should be set only when finalize_object=True.
Size in bytes of the entire blob. Required in the first request to allow quota allocation and management.
State of the object (still appending vs. complete).
Size of the object in bytes. This is the sum of the chunk sizes received from the stream so far. In the response to the final chunk, this size should equal the total size of the blob.
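The status fields above are what make uploads resumable: a client can compare the reported size and checksum against its local copy before deciding where to continue. A sketch, with zlib.crc32 standing in for CRC32C and a plain dict standing in for the status message:

```python
import zlib

# Sketch of resuming an interrupted blob upload. The server reports how
# many bytes it has and their checksum; the client verifies its local
# prefix matches before sending the remainder.

def resume_offset(blob: bytes, status: dict) -> int:
    size = status["size"]
    if status["state"] == "COMPLETE":
        return len(blob)  # nothing left to send
    if zlib.crc32(blob[:size]) != status["crc32c"]:
        return 0  # stored prefix doesn't match: restart from scratch
    return size  # continue appending at the reported size
```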
Request that the calling user and all their data be permanently deleted. Used for testing purposes.
Requests that the calling user and all their data be permanently deleted.
This is empty on purpose.
(message has no fields)
Everything the caller needs to know about how the deletion went.
This is empty on purpose.
(message has no fields)
Used in:
gRPC server URI: <https://github.com/grpc/grpc/blob/master/doc/naming.md>. A scheme should always be specified, even if that is `dns`. For example: "dns:///api.tensorboard.dev:443".
Used in:
A non-empty ID for the blob.
Used in:
Used in:
Optional. If absent, this represents a "hole" in the sequence: there is expected to be a blob here, but upload has not started.
Used in:
Object state is unknown. This value should never be used; it is present only as a proto3 best practice. See https://developers.google.com/protocol-buffers/docs/proto3#enum
Object is being written and not yet finalized.
Object is finalized.
Used in:
Human-readable message to display. When non-empty, will be displayed in all cases, even when the client may proceed.
Used in:
All is well. The client may proceed.
The client may proceed, but should heed the accompanying message. This may be the case if the user is on a version of TensorBoard that will soon be unsupported, or if the server is experiencing transient issues.
The client should cease further communication with the server and abort operation after printing the accompanying `details` message.
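A sketch of acting on the verdict, per the three cases above. The member names VERDICT_OK and VERDICT_WARN are assumptions following the VERDICT_ERROR naming that appears elsewhere in this document:

```python
# Sketch: deciding whether the client may proceed after the handshake.

def check_compatibility(verdict: str, details: str) -> bool:
    """Return True if the client may proceed, printing details if any."""
    if details:
        print(details)  # displayed in all cases when non-empty
    if verdict == "VERDICT_ERROR":
        return False  # abort: cease further communication with the server
    return True  # OK, or WARN where the client proceeds with a notice
```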
Resource message representing an Experiment.
Used in:
Permanent ID of this experiment, e.g., "AdYd1TgeTlaLWXx6I8JUbA". Output-only.
The time that the experiment was created. Output-only.
The time that the experiment was last modified: i.e., the most recent time that scalars were added to the experiment. Output-only.
The number of scalars in this experiment, across all time series. Output-only.
The number of distinct run names in this experiment. Output-only.
The number of distinct tag names in this experiment. A tag name that appears in multiple runs will be counted only once. Output-only.
User provided name of the experiment.
User provided description of the experiment, in markdown source format.
The number of bytes used for storage of tensors in this experiment, across all time series, including estimated overhead. Output-only.
The number of bytes used for storage of the contents of blobs in this experiment, across all time series, including estimated overhead. Output-only.
The owner of this experiment, as an opaque user_id string. This field is ignored on upload, with owner information coming from the authentication information passed alongside the proto.
Field mask for `Experiment` used in get and update RPCs. The `experiment_id` field is always implicitly considered to be set.
Used in:
Used in:
Template string for experiment URLs. All occurrences of the value of the `id_placeholder` field in this template string should be replaced with an experiment ID. For example, if `id_placeholder` is "{{EID}}", then `template` might be "https://tensorboard.dev/experiment/{{EID}}/". Should be absolute.
Placeholder string that should be replaced with an actual experiment ID. (See docs for `template` field.)
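Expanding the template is a plain string substitution, as the field docs describe:

```python
# Sketch: expanding the experiment-URL template by replacing every
# occurrence of `id_placeholder` with the experiment ID.

def experiment_url(template: str, id_placeholder: str, experiment_id: str) -> str:
    return template.replace(id_placeholder, experiment_id)
```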
Used in:
Plugins for which data should be uploaded. These are plugin names as stored in the `SummaryMetadata.plugin_data.plugin_name` proto field.
Used in:
Plugins for which the client wishes to upload data. These are plugin names as stored in the `SummaryMetadata.plugin_data.plugin_name` proto field.
Details about what actions were taken as a result of a purge request. These values are upper bounds; they may exceed the true values.
Used in:
Number of tags deleted as a result of this request.
Number of experiments marked as purged as a result of this request.
Number of users deleted as a result of this request.
One point viewable on a scalar metric plot.
Used in:
Step index within the run.
Timestamp of the creation of this point.
Value of the point at this step / timestamp.
Metadata for the ScalarPoints stored for one (Experiment, Run, Tag).
Maximum step recorded for the tag.
Timestamp corresponding to the max step.
Information about the plugin which created this scalar data. Note: the period is a required part of the type here due to the package name resolution logic.
Request sent by uploader clients at the start of an upload session. Used to determine whether the client is recent enough to communicate with the server, and to receive any metadata needed for the upload session.
Client-side TensorBoard version, per `tensorboard.version.VERSION`.
Information about the plugins for which the client wishes to upload data. If specified then the list of plugins will be confirmed by the server and echoed in the PluginControl.allowed_plugins field. Otherwise the server will return the default set of plugins it supports. If one of the plugins is not supported by the server then it will respond with compatibility verdict VERDICT_ERROR.
Primary bottom-line: is the server compatible with the client, can it serve its request, and is there anything that the end user should be aware of?
Identifier for a gRPC server providing the `TensorBoardExporterService` and `TensorBoardWriterService` services (under the `tensorboard.service` proto package).
How to generate URLs to experiment pages.
Information about the plugins for which data should be uploaded. If PluginSpecification.requested_plugins is specified, then that list of plugins will be confirmed by the server and echoed in the response. Otherwise the server will return the default set of plugins it supports. The client should only upload data for the plugins in the response, even if it is capable of uploading more data.
Limits on the upload process that the client should honor. This may include limits on data size, chunk size, records per request, request rate, bandwidth, etc. The server may enforce such limits, in which case the values reported here should be kept in sync. Providing these limits in advance allows the client to avoid triggering server errors (both through more efficient operation and by earlier detection of real error conditions), and to print better error messages. If this field is not set, the client should choose reasonable default values to guide its behavior.
Used in:
Step index within the run.
Timestamp of the creation of this point.
Value of the blob sequence at this step / timestamp.
Data for the scalars is stored in a columnar fashion, optimized for exporting the data into textual formats like JSON. The data for the ith scalar is { steps[i], wall_times[i], values[i] }. The data here is sorted by step values in ascending order.
Used in:
Step index within the run.
Timestamp of the creation of this point.
Value of the point at this step / timestamp.
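Converting the columnar layout back into per-point records is a straightforward zip; a sketch:

```python
# Sketch: reading columnar point data. The i-th point is
# {steps[i], wall_times[i], values[i]}, already sorted by step.

def to_rows(steps, wall_times, values):
    return [
        {"step": s, "wall_time": t, "value": v}
        for s, t, v in zip(steps, wall_times, values)
    ]
```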
Data for the tensors is stored in a columnar fashion, optimized for exporting the data into textual formats like JSON. The data for the ith tensor is { steps[i], wall_times[i], values[i] }. The data here is sorted by step values in ascending order.
Used in:
Step index within the run.
Timestamp of the creation of this point.
Value of the point at this step / timestamp.
One point viewable on a tensor metric plot.
Used in:
Step index within the run.
Timestamp of the creation of this point.
Value of the point at this step / timestamp.
Metadata for the TensorPoints stored for one (Experiment, Run, Tag).
Maximum step recorded for the tag.
Timestamp corresponding to the max step.
Information about the plugin which created this tensor data. Note: the period is a required part of the type here due to the package name resolution logic.
Used in:
The maximum allowed WriteScalar request size, in bytes. If this is 0 or unset, client should use a reasonable default value. If this is negative, no scalars should be uploaded.
The maximum allowed WriteTensor request size, in bytes. If this is 0 or unset, client should use a reasonable default value. If this is negative, no tensors should be uploaded.
The maximum allowed WriteBlob request size, in bytes. If this is 0 or unset, client should use a reasonable default value. If this is negative, no blobs should be uploaded.
The minimum interval between WriteScalar requests, in milliseconds. If this is 0 or unset, client should use a reasonable default value.
The minimum interval between WriteTensor requests, in milliseconds. If this is 0 or unset, client should use a reasonable default value.
The minimum interval between WriteBlob requests, in milliseconds. If this is 0 or unset, client should use a reasonable default value.
The maximum allowed size for blob uploads. If this is 0 or unset, client should use a reasonable default value. If this is negative, no blobs should be uploaded.
The maximum allowed size for tensor point uploads. If this is 0 or unset, client should use a reasonable default value. If this is negative, no tensor points should be uploaded.
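A helper sketch of the interpretation rules shared by the limit fields above (`None` models an unset proto field here, for illustration only):

```python
# Sketch: resolving an effective limit. Zero or unset means "use a
# reasonable client default"; a negative value disables that kind of
# upload entirely, modeled here as a limit of 0.

def effective_limit(reported, default):
    if reported is None or reported == 0:
        return default  # unset/zero: fall back to the client default
    if reported < 0:
        return 0        # negative: this upload kind is disabled
    return reported
```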
All the data to store for one Run. This data will be stored under the corresponding run in the hosted storage. WriteScalarRequest is merged into the data store for the keyed run. The tags and included scalars will be the union of the data sent across all WriteScalarRequests. Metadata by default uses a 'first write wins' approach.
Used in:
The name of this run. For example "/some/path/mnist_experiments/run1/"
Data to store for this Run/Tag combination.
All the data to store for one Tag of one Run. This data will be stored under the corresponding run/tag in the hosted storage. A tag corresponds to a single time series.
Used in:
The name of this tag. For example "loss"
Data to store for this Run/Tag combination.
The metadata of this tag.
All the data to store for one Run. This data will be stored under the corresponding run in the hosted storage. WriteTensorRequest is merged into the data store for the keyed run. The tags and included tensors will be the union of the data sent across all WriteTensorRequests. Metadata by default uses a 'first write wins' approach.
Used in:
The name of this run. For example "/some/path/mnist_experiments/run1/"
Data to store for this Run/Tag combination.
All the data to store for one Tag of one Run. This data will be stored under the corresponding run/tag in the hosted storage. A tag corresponds to a single time series.
Used in:
The name of this tag. For example "loss"
Data to store for this Run/Tag combination.
The metadata of this tag.