Proto commits in tensorflow/tensorboard

These are the commits in which Protocol Buffers files changed (only the last 100 relevant commits are shown):

Commit:6bc843e
Author:Adrian RC
Committer:Adrian RC

Update TF compat protos. (cherry picked from commit 95315831f19c99951dc3e973861cc2d787a40f0a)

The documentation is generated from this commit.

Commit:e31a6f8
Author:Adrian RC
Committer:Adrian RC

Update TF compat protos.

The documentation is generated from this commit.

Commit:501cda8
Author:Adrian RC
Committer:GitHub

Update TF compat protos (#6914) Routine update of TF compat protos, as described in [this message](https://github.com/tensorflow/tensorboard/blob/52530fa0ff253db305a2c83ddb0e5ecee8143467/tensorboard/compat/proto/proto_test.py#L171). This change syncs the protos to the latest nightly, which is the first one right after the branch cut for the new release 2.18.0.

Commit:1438f6f
Author:Adrian RC
Committer:Adrian RC

Update TF compat protos.

Commit:dcb1bb6
Author:Adrian RC
Committer:GitHub

Updates compat protos with latest snapshot from TF (#6744) Just updating some proto definitions for compatibility with no-TF mode.

Commit:ab4858f
Author:Adrian RC
Committer:Adrian RC

Updates compat TF protos by running our update.sh script.

Commit:5bf72e0
Author:Yating
Committer:GitHub

Hparams: Support `limit` for `DataProvider.list_hyperparameters()`. (#6569) Support `limit` for `DataProvider.list_hyperparameters()`. Note that this PR does not support limiting the hparams returned from `DataProvider.list_tensors()`. Googlers, see cl/559192614 and cl/563165193 for more context. #hparams
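
The `limit` behavior described here can be sketched as a simple truncation step after collection. The `Hyperparameter` dataclass and free-standing function below are hypothetical stand-ins, not TensorBoard's actual `DataProvider` interface:

```python
from dataclasses import dataclass
from typing import Iterable, List, Optional


@dataclass
class Hyperparameter:
    # Hypothetical stand-in for a data-provider hyperparameter record.
    name: str


def list_hyperparameters(
    all_hparams: Iterable[Hyperparameter],
    limit: Optional[int] = None,
) -> List[Hyperparameter]:
    """Return hyperparameters, truncated to `limit` entries when given.

    `limit=None` means "no limit", preserving the pre-change behavior.
    """
    results = list(all_hparams)
    if limit is None:
        return results
    return results[:limit]
```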

Commit:34bfbd9
Author:Yating
Committer:GitHub

Hparams: Add `include_in_result` field and implement support for `hparams_to_include` (#6559) This PR extracts the filtering logic for hparams only (metrics filtering will be done later) from bmd3k's [commit](https://github.com/bmd3k/tensorboard/commit/ab75cd1812097f5ca54085e68a34a8cfa346a60b). Googlers, see comments in cl/559852837 for more context. Test internally at cl/563163418. #hparams

Commit:2a91acc
Author:Brian Dubois
Committer:GitHub

Hparams: Support excluding metric information in HTTP requests. (#6556) There are some clients of the Hparams HTTP API that do not require the metric information. This includes the metric_infos usually returned in the /experiments request and the metric_values usually returned in the /session_groups request. Since these can be expensive to calculate, we want the option to not calculate and return them in the response. Add option `include_metrics` to both GetExperimentRequest and ListSessionGroupsRequest. If unspecified we treat `include_metrics` as True, for backward compatibility. Honor the `include_metrics` property in all three major cases: When experiment metadata is defined by Experiment tags, by Session tags, or by the DataProvider.
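
The backward-compatible default described above (an unset `include_metrics` is treated as `True`) can be sketched like this. The function names and the dict-shaped response are illustrative, not the actual hparams backend:

```python
from typing import List, Optional


def effective_include_metrics(include_metrics: Optional[bool]) -> bool:
    """Unset (None) is treated as True, for backward compatibility with
    clients that predate the `include_metrics` field."""
    return True if include_metrics is None else include_metrics


def build_experiment_response(
    metric_infos: List[str],
    include_metrics: Optional[bool] = None,
) -> dict:
    # Skip computing/returning metric_infos when the client opts out,
    # since they can be expensive to calculate.
    if effective_include_metrics(include_metrics):
        return {"metric_infos": metric_infos}
    return {}
```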

Commit:2a2439d
Author:Adrian RC
Committer:GitHub

Updates TF compat protos. (#6532)

Commit:6febb50
Author:Brian Dubois
Committer:GitHub

Hparams: Add differs field. (#6500) Add a `differs` field at both the DataProvider level (in Hyperparameter) and at the hparams API level (in HParamInfo). Also write logic to consume it in the hparams dashboard by sorting the hparams that differ to the top. There are, unfortunately, no tests in the repo for UI changes so I had to test the changes manually. I tested that an experiment with < 5 hparams renders all hparams and selects all hparams. I tested that an internal experiment (from an internal data provider implementation) that has > 5 hparams sorts hparams with `differs===true` to the top and that the first five are selected. I checked that attempts to set the order and to set filters succeed.
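
The sort-differing-to-the-top behavior can be sketched as follows. The `HParamInfo` dataclass is a hypothetical stand-in for the proto message; Python's stable sort preserves the original order within each group:

```python
from dataclasses import dataclass
from typing import List, Tuple


@dataclass
class HParamInfo:
    # Hypothetical stand-in for the HParamInfo message with its new field.
    name: str
    differs: bool = False


def sort_and_select(
    hparams: List[HParamInfo], max_selected: int = 5
) -> Tuple[List[HParamInfo], List[HParamInfo]]:
    """Sort hparams whose values differ across sessions to the top, then
    select the first `max_selected` for initial display."""
    # `not h.differs` is False (sorts first) for differing hparams.
    ordered = sorted(hparams, key=lambda h: not h.differs)
    return ordered, ordered[:max_selected]
```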

Commit:060fecb
Author:Brian Dubois
Committer:GitHub

Hparams: Change some handling/generation of hparams with discrete domains. (#6489) This addresses several issues and cleanup related to how we generate and handle hparams with discrete domains: 1. From the backend, always return the list of discrete domain values for a string hparam. Previously we were returning no discrete domain values when the number of values exceeded 10 but this was breaking assumptions in the Angular hparam logic. 2. In the polymer UI, generate a "regexp" filter for discrete string hparams with greater than 10 values. 3. Fix regexp filter display logic in the polymer UI. We were never successfully showing the regexp filter. The check `[[hparam.filter.regexp]]` was incorrect since it would evaluate to `false` when `regexp` is the empty string. Instead we must check that `hparam.filter.regex !== undefined`. 4. Simplify the logic for generating discrete domain filters in the polymer UI - we just need to do it in a single place now that we can assume that the backend will always return the list of discrete domain values (thanks to item (1) and to previous work done in go/tbpr/6393)
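
Items (2) and (3) can be sketched together in Python (the real logic lives in the Polymer UI; the function names and the threshold constant here are illustrative):

```python
from typing import List, Optional


def filter_kind(discrete_domain_values: List[str], threshold: int = 10) -> str:
    """Pick a filter type for a discrete string hparam: more than
    `threshold` values gets a regexp filter instead of a discrete one."""
    if len(discrete_domain_values) > threshold:
        return "regexp"
    return "discrete"


def show_regexp_filter(regexp: Optional[str]) -> bool:
    """Check presence, not truthiness: an empty-string regexp is still a
    regexp filter and must be shown (the bug fixed in item (3))."""
    return regexp is not None
```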

Commit:ff4b56a
Author:Riley Jones
Committer:Riley Jones

update tensorflow protos

Commit:78fa64f
Author:Riley Jones
Committer:Riley Jones

update tensorflow protos

Commit:8685981
Author:Brian Dubois
Committer:GitHub

Delete the NPMI plugin. (#6259) The NPMI plugin was an intern project but was never completed. It's time to delete it.

Commit:00d59e6
Author:Nick Felt
Committer:GitHub

chore: update compat protos to tensorflow/tensorflow@3e5b6f1 (#6191) This updates our copied protos to the TF 2.12 release branch cut commit, tensorflow/tensorflow@3e5b6f1899f8515b3f27c67b5bea7481e990c163.

Commit:1d6f252
Author:Adrian RC
Committer:Adrian RC

Updates compat proto copies by running our script to sync with TF protos.

Commit:5210ce3
Author:James
Committer:GitHub

update protos to match TF 2.11 and pin the CI version to TF 2.11 (#6029) * Motivation for features / changes This PR syncs the TensorBoard protos to match the TensorFlow 2.11 release candidate. This required a change in our update script, which is also in our master branch. This also pins the TF version in the CI to 2.11.

Commit:df448ad
Author:James
Committer:GitHub

Chore: Update protos to match latest TF protos (#6010) * Motivation for features / changes This PR updates our proto files to match the latest TensorFlow protos. For posterity, these protos were synced to TensorFlow at commit 20d085686bb99daefa930205e8ec69a99dae306d, which was committed on November 1, 2022. * Technical description of changes These updates were done by running the tensorboard/compat/proto/update.sh script. The update to that file was to fix a little bug in the changes made in #5997

Commit:3c0a013
Author:Nick Groszewski
Committer:GitHub

proto: Sync to TensorFlow 0c2241bede2 (2022-10-27) (#6001) Pulls in source metadata additions added in tensorflow/tensorflow@0c2241bede2.

Commit:cd337cd
Author:Yating
Committer:GitHub

Update protos to match tensorflow 2.10.0rc0 (#5848)

Commit:33fffd4
Author:James
Committer:GitHub

Remove LINT check from copied proto files (#5693) * Update script and run it * fix tests * make LINT regex more specific

Commit:f2d419d
Author:Stanley Bileschi
Committer:GitHub

Update protos to match TF 2.9.0rc1 (#5686)

Commit:ce4587c
Author:Brian Dubois
Committer:GitHub

chore: Update tensorboard compat protos using tensorflow v2.8.0-rc0. (#5484) (#5485) Cherry-picked from master. In preparation for TensorBoard 2.8.0 release we update our compat protos to match tensorflow v2.8.0-rc0.

Commit:61d11d9
Author:Brian Dubois
Committer:GitHub

Update tensorboard compat protos using tensorflow v2.8.0-rc0. (#5484) In preparation for TensorBoard 2.8.0 release we update our compat protos to match tensorflow v2.8.0-rc0.

Commit:0837ae8
Author:Stanley Bileschi
Committer:GitHub

chore: Update protos to tensorflow/tensorflow@cfe7e5cfd41 (#5374) * Revert "chore: Update protos to tensorflow/tensorflow@659aa25f4c2 (#5373)" This reverts commit 8d24181d6a6692ebe968231bc796e2e18dd4decd. Protos were moved further ahead than is anticipated for the TF 2.7 cut. Protos were moved to 659aa25f4c2 but we should only move to cfe7e5cfd412b0a64af43d3e226eb176905efb0b. * Update protos to tensorflow@cfe7e5cfd41

Commit:8d24181
Author:Stanley Bileschi
Committer:GitHub

chore: Update protos to tensorflow/tensorflow@659aa25f4c2 (#5373)

Commit:c9e2b4d
Author:ericdnielsen
Committer:GitHub

Add owner field to experiment proto. (#5353) * Add optional repository_id to Experiment * reorder fields * Extend the experiment proto to allow returning owner information. * Remove unrelated edit. * Updated field comments.

Commit:2d8a7e2
Author:Brian Dubois
Committer:Brian Dubois

Pin compat protos to tensorflow v2.6.0-rc2 (#5195) While performing TensorBoard 2.6.0 release I ran into failures in the following target: `//tensorboard/compat/proto:proto_test` The instructions are pretty clear. However, I had to figure out how to add a new file to pin. I did that by manually copying the file from tensorflow to tensorboard before running the update. Here are the steps I took to generate the protos. In my tensorflow fork ~/git/tensorflow: ``` $ git checkout v2.6.0-rc2 ``` In my tensorboard fork ~/git/tensorboard: ``` $ git checkout -b pin-tf-2.6-protos-attempt2 $ cp ~/git/tensorflow/tensorflow/core/framework/full_type.proto tensorboard/compat/proto $ ./tensorboard/compat/proto/update.sh ~/git/tensorflow $ git add . ``` To test I ran a local tensorboard and verified at a high level that core plugins seem to continue to function. I later realized from running the CI that there is now an additional step to update these protos for RustBoard: ``` $ bazel run //tensorboard/data/server:update_protos ```

Commit:a852fc2
Author:Brian Dubois
Committer:GitHub

Pin compat protos to tensorflow v2.6.0-rc2 (#5195) While performing TensorBoard 2.6.0 release I ran into failures in the following target: `//tensorboard/compat/proto:proto_test` The instructions are pretty clear. However, I had to figure out how to add a new file to pin. I did that by manually copying the file from tensorflow to tensorboard before running the update. Here are the steps I took to generate the protos. In my tensorflow fork ~/git/tensorflow: ``` $ git checkout v2.6.0-rc2 ``` In my tensorboard fork ~/git/tensorboard: ``` $ git checkout -b pin-tf-2.6-protos-attempt2 $ cp ~/git/tensorflow/tensorflow/core/framework/full_type.proto tensorboard/compat/proto $ ./tensorboard/compat/proto/update.sh ~/git/tensorflow $ git add . ``` To test I ran a local tensorboard and verified at a high level that core plugins seem to continue to function. I later realized from running the CI that there is now an additional step to update these protos for RustBoard: ``` $ bazel run //tensorboard/data/server:update_protos ```

Commit:4273e36
Author:Stephan Lee
Committer:Stephan Lee

infra: update protos to TF at 2.5.0-rc0 (#4853) This change syncs proto from TensorFlow for our release and correctness purposes.

Commit:83cc57b
Author:Stephan Lee
Committer:GitHub

infra: update protos to TF at 2.5.0-rc0 (#4853) This change syncs proto from TensorFlow for our release and correctness purposes.

Commit:9f7981e
Author:William Chargin
Committer:GitHub

data_provider: add `GetExperiment` RPC (#4750) Summary: This new RPC is intended to back both the `data_location` and the `experiment_metadata` data provider methods (which really should be combined into one). It takes an experiment ID and returns all available metadata. If needed, we could add a fieldmask, but we leave that out until we have a need for it. The Rust implementation of the data server will only populate the data location field, but we still include the rest to mirror the Python interface. This is the first use of `google.protobuf.Timestamp` or any of the well-known types, so some build tweaks are needed. Test Plan: The protos and Rust code all build, and the newly updated instructions in `DEVELOPMENT.md` still work. With `grpc_cli`, the new RPC can be seen to successfully return a non-OK `"not yet implemented"` status. wchargin-branch: data-getexperiment-rpc

Commit:76f8087
Author:William Chargin
Committer:GitHub

docs: require scheme for TBdev gRPC API endpoint (#4669) Summary: The [gRPC docs] indicate that the default scheme for a channel address is `dns:///`, but inside Google this is not always the case (Googlers, see <http://b/179805849>.) Thus, we now require that the API endpoint to the TensorBoard.dev gRPC server include an explicit scheme. The current prod servers don’t follow this, but they will soon following internal changes. [gRPC docs]: https://github.com/grpc/grpc/blob/master/doc/naming.md wchargin-branch: docs-grpc-scheme

Commit:6f16000
Author:William Chargin
Committer:GitHub

data_compat: mark converted image and audio data (#4618) Summary: In TensorFlow 1.x, image and audio data are represented with a separate time series for each batch item (`input_image/image/0`, etc.). In TensorFlow 2.x, a time series may include multiple batch items. This patch adds a field to image and audio metadata protos to indicate that they have been automatically converted (by `data_compat`) from TF 1.x to TF 2.x. This way, downstream code can tell that a TF 1.x time series may be part of a larger batch even though it only contains a single sample. Since this new annotation is a proto field that is `true` only for legacy data, summary metadata for TF 2.x summaries is unchanged. Test Plan: As end-to-end tests, verified that the images and audio dashboards both still work with TF 1.x data, and checked with `grpc_cli` that the data server returns summary metadata with non-empty `plugin_data.content`: ``` grpc_cli --channel_creds_type=local \ --protofiles tensorboard/data/proto/data_provider.proto \ call localhost:41905 TensorBoardDataProvider.ListBlobSequences \ 'plugin_filter { plugin_name: "audio" }' ``` wchargin-branch: mark-tf1x-image-audio

Commit:6d6ccc5
Author:William Chargin
Committer:GitHub

Update protos for TF 2021-01-25 head (#4600) Summary: Pulls in [TensorFlow commit b565087c5cd7][c], which updates the docs for the `DataClass` enum. Miscellaneous updates included as well. [c]: https://github.com/tensorflow/tensorflow/commit/b565087c5cd72fc222a0610011335a381d7f7ac7 Generated with: ``` $ tensorboard/compat/proto/update.sh ~/git/tensorflow $ bazel run //tensorboard/data/server:update_protos ``` Test Plan: Changes are all either backward-compatible or to experimental messages that we don’t use, so unit tests should suffice. wchargin-branch: protos-sync-b565087c5cd7

Commit:8491cc1
Author:William Chargin
Committer:GitHub

rust: remove demo protobuf (#4567) Summary: We now comfortably have a real proto and gRPC toolchain, so this special demo proto can be removed. Test Plan: Existing tests suffice (and ensure that this removes all usages). wchargin-branch: rust-rm-demopb

Commit:9da30cc
Author:William Chargin
Committer:GitHub

data: represent scalars as `f32` (#4395) Summary: In-memory and wire representations of scalars are now 32-bit rather than 64-bit floats. We no longer support reading `DT_DOUBLE` tensors, which is fine because TensorBoard has never written such tensors to event files. This is mostly a no-op on the Python side, except that Python `float`-to-`float` conversions that go through the data provider layer are now lossy. Per a comment of @nfelt on #4352. Test Plan: All the Rust changes follow from the types. Tests suffice, as usual. wchargin-branch: data-scalars-f32
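
The lossiness mentioned for Python `float`-to-`float` conversions can be demonstrated with a round-trip through IEEE-754 binary32 (a standalone sketch using only the stdlib, not the actual data-provider code path):

```python
import struct


def to_f32(x: float) -> float:
    """Round-trip a Python float (binary64) through binary32, as the f32
    wire representation of a scalar effectively does."""
    return struct.unpack("<f", struct.pack("<f", x))[0]
```

Values exactly representable in 32 bits (like 0.5) survive unchanged, while a value like 0.1 picks up a small rounding error on the way through.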

Commit:2456f4c
Author:William Chargin
Committer:GitHub

data: add `ListPlugins` RPC (#4353) Summary: The data provider API uses `list_plugins` to populate the list of active dashboards. I forgot to add a corresponding RPC in #4314, since I’d hacked around it in my prototypes, but we do want to be able to implement this cleanly. Test Plan: It builds: `bazel build //tensorboard/data/proto:protos_all_py_pb2_grpc`. wchargin-branch: data-list-plugins-rpc

Commit:08ad9ce
Author:William Chargin
Committer:GitHub

data: make `downsample` required, but permit zero (#4333) Summary: Suggested by @nfelt on #4314. This hits a flexibility/usability middle ground between “omitted messages are implied to be empty” (which makes it easy to accidentally set `num_points: 0` and wonder why the result is empty) and “you may not set `num_points: 0`” (which excludes legitimate use cases). Test Plan: None needed; this is just a doc update for now, as the service does not yet have any clients or servers. wchargin-branch: data-downsample-required
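
The middle-ground rule (the field is required, but zero is permitted) can be sketched as server-side validation, with `None` modeling an omitted `downsample` message (a hypothetical helper, not the actual service code):

```python
from typing import Optional


def validate_num_points(num_points: Optional[int]) -> int:
    """Reject an omitted `downsample` message outright, so callers cannot
    accidentally ask for zero points, while still permitting an explicit
    `num_points: 0` as a legitimate empty request."""
    if num_points is None:
        raise ValueError("downsample is required; set num_points explicitly")
    if num_points < 0:
        raise ValueError("num_points must be non-negative")
    return num_points
```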

Commit:4462241
Author:William Chargin
Committer:GitHub

data: add `DataProvider` gRPC service spec (#4314) Summary: This patch adds a gRPC interface that mirrors the `DataProvider` Python interface. The intent is to define a `GrpcDataProvider` Python type that is backed by a gRPC service. The particular service definition here is what I’ve used in drafts of data provider implementations [in Go][go] and [in Rust][rust], so I can attest that it’s at least mostly reasonable on both the client and the server. It’s mostly a one-to-one port of the Python interface and admits an easy Python client. [go]: https://github.com/wchargin/tensorboard-data-server [rust]: https://github.com/wchargin/rustboard-server Test Plan: It builds: `bazel build //tensorboard/data/proto:protos_all_py_pb2_grpc`. wchargin-branch: data-provider-proto

Commit:da59d4c
Author:William Chargin
Committer:GitHub

rust: add demo Tonic server (#4318) Summary: This patch defines a simple “demo” service with one RPC, which adds a sequence of numbers. It includes a Tonic server for this service to demonstrate the end-to-end setup. Test Plan: In one shell, run `bazel run -c opt //tensorboard/data/server`. In another shell, use `grpc_cli` to send an RPC to localhost port 6789: ``` grpc_cli --channel_creds_type=insecure \ --protofiles tensorboard/data/server/demo.proto \ call localhost:6789 Demo.Add "term: 2 term: 2" ``` This should print a response like `sum: 4`. On my machine, it completes in 5.2 ± 2.6 ms (methodology: run `hyperfine` on the above command). This seems reasonably fast given that it has to establish a connection, whereas a Python gRPC client will keep a connection open. It’s also well below the 40ms magic number for `TCP_NODELAY` issues. wchargin-branch: rust-demo-tonic-server

Commit:5ecced3
Author:E
Committer:GitHub

Update protos for TF v2.4.0-rc1 (#4304) While testing the pip candidate locally for 2.4, it fails on the proto_test. This change was created by running `git checkout v2.4.0-rc1` in a local TensorFlow repo and then running `./tensorboard/compat/proto/update.sh PATH_TO_TENSORFLOW_REPO`.

Commit:3010b2c
Author:William Chargin
Committer:GitHub

proto: sync to TensorFlow e1a7896aaf93 (2020-09-14) (#4163) Summary: This pulls in routine updates since v2.3.0-rc2, plus new `go_package` directives just added in: <https://github.com/tensorflow/tensorflow/commit/e1a7896aaf934632eceefaaedb3f4fb9b91cfd67> Test Plan: The `proto_test` actually still fails with today’s nightly, because some changes are fresh in TensorFlow head today. But the test diff, as rendered with #4162, shows only `go_package` additions, as expected. The test should pass on tomorrow’s `tf-nightly==2.4.0.dev20200915`. wchargin-branch: proto-sync-e1a7896aaf93

Commit:a64659b
Author:E
Committer:GitHub

Update protos for TF v2.3.0-rc2 (#3862) This updates TensorBoard protos to match those of TensorFlow v2.3.0-rc2 to make the compat/proto_test pass. This change was generated by expanding `update.sh`'s replacement pattern, running `git checkout v2.3.0-rc2` in a local TensorFlow repo, and then running `./tensorboard/compat/proto/update.sh PATH_TO_TENSORFLOW_REPO`.

Commit:b79c4fc
Author:TensorBoard Gardener

Integrate a64659b4a17001c6ac860153139be14d760ddb4a

Commit:1e7a722
Author:Alex Bäuerle
Committer:GitHub

nPMI Plugin Backend (experimental) (#3799) * Added nPMI Plugin Backend Files Added files for the nPMI plugin backend. This is not yet completely working. * Added Comment to Active Function Commented the isActive function of the plugin. * Fix Plugin Test File The test file did not work correctly due to an indentation issue. This has been fixed now. * Summary Tests Added tests for the summary writer. * Corrected Indentation for Main File Corrected the indentation for the main plugin backend file. * Added Missing License Added a missing license disclaimer to the summary test file. * More Indentation Correction Corrected more of the indentation issues in the python files. * Further Linter Issues Fixed further linter issues. * Linting Correction Further Linting Correction * Further Linting More linting changes. * Fixed BUILD File Fixed linter error for BUILD file. * Fixed Test Issue on TF 1.15 Fixed failing tests because of wrong imports on TF 1.15. * Rename NPMI to Npmi in Classes For consistency, this renaming has been done. * Missed Rename Missed renaming in one place. * More Explanation for the Plugin Added more explanation for the plugin with a readme and one more comment. * Removed Keys and added Comments Removed some keys that are not really needed for the plugin and added more comments to clarify routes. * Build Deps Cleaned Cleaned up a little in the build deps. * Removed Unused Imports Had several imports that were not used. Removed them now. * Added Dependency Added a missing dependency for the plugin. * Changed dependencies to actual Import Changed dependencies of the summary writer to the actual imports of this file. * Restructured Summary Writer Summary writer now with three different functions for the routes. * Fixed Linter Issues After this restructure, I introduced some linter issues, which are now fixed. * Added Safe JSON Encoder Added an encoder that safely handles numpy nan, inf, and -inf values. 
* Linting Fixes After adding the SafeEncoder, needed some linter fixes. * Final Linter Change One more Linter change for the safe encoder. * Added Proto for Metadata Added a proto for the metadata field. * Metadata is Own File Now, the metadata is its own file. This had to be done because the metadata content is a proto now and thus metadata is accessed at different places. * Begin data_provider Transition Beginning the transition from multiplexer to data_provider. * Extra Whitespace Bug Removed an extra whitespace where there should not be one. * Plugin Name from Metadata Now getting the plugin name from metadata to avoid duplication. * Fix BUILD File Some fixes in the BUILD file for the CL. * Added Version Field Added a version field to the metadata. * Switch from Multiplexer to Data Provider Switched from multiplexer to using data provider for serving data. * Linting Corrected linting issues. * More Linting More linting corrections. * Removed Safe Encoder for Parse Function Now parsing the values that we get from a tensor and converting nan to None. * Fixed Linting in Convert Function The convert function now is correctly formatted as well. * Minor Comment Changes Changed some comments for better explanation. * Removed Table Field Removed the table field from all the data. * Linting Fix Fixed a minor linter error. * Moved To Correct Directory The backend has been moved to the tensorboard/plugins directory. * Build Lint Fix * Fixed Wrong Import The default.py was still using the old path. Co-authored-by: Alex Bäuerle <abauerle@google.com>

Commit:45f714c
Author:TensorBoard Gardener

Integrate 1e7a722c49829351235308d8fdc31e28d4a04383

Commit:b49c6bf
Author:Brian Dubois
Committer:Shanqing Cai

Expand UploadLimits proto and use it in TensorBoardUploader (#3625) Motivation for features / changes There are a number of new parameters that impact the throughput of uploading data that we want the TensorBoard.dev frontend to return to the uploader in the handshake. Each of scalar, tensor, and blob now have a max_[scalars|tensor|blob]_request_size and min_[scalars|tensor|blob]_request_interval parameters. Technical description of changes Update the UploadLimits proto to include the new fields. Allow an instance of UploadLimits to be passed to TensorBoardUploader and use this throughout the code instead of the constants that have been used. None of the clients for TensorBoardUploader are yet updated to pass UploadLimits (there is one in this git repo and there is one internal to Google) so temporarily have TensorBoardUploader construct its own UploadLimits if None is passed, where values are all based on now-deprecated constants and arguments to the constructor. We hope to update clients in the next day or two so this is intended to be very temporary. Tests have been updated to use UploadLimits as arguments exclusively.

Commit:96c198b
Author:Brian Dubois
Committer:Shanqing Cai

Uploader uses entire UploadLimits from handshake. (#3631) Continue work to get upload-related parameters from the frontend handshake. The logic in uploader_subcommand reads UploadLimits from the handshake and passes it to the TensorBoardUploader. If any individual field in UploadLimits is missing, use a reasonable default. With this and some Google-internal code cleanup we can now delete the deprecated fields in TensorBoardUploader.

Commit:47d4b99
Author:Brian Dubois
Committer:Shanqing Cai

Allow server to specify maximum tensor upload size (#3575) Backends should be able to impose size limits for uploading individual tensor points, similar to how they can impose size limits for individual blobs. Fallback to a reasonable default value for the time being while we wait for backends to be updated to provide this information. If a Tensor point larger than the maximum size is encountered, strip it from write requests and log a warning.
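
The strip-and-warn behavior can be sketched like so, with `(tag, payload)` tuples standing in for serialized tensor points (an illustrative helper, not the actual uploader code):

```python
import logging
from typing import List, Tuple


def strip_oversized_tensor_points(
    points: List[Tuple[str, bytes]], max_bytes: int
) -> List[Tuple[str, bytes]]:
    """Drop any serialized tensor point larger than the server's limit,
    logging a warning instead of failing the whole write request."""
    kept = []
    for tag, payload in points:
        if len(payload) > max_bytes:
            logging.warning(
                "Dropping tensor point for %r: %d bytes exceeds limit of %d",
                tag, len(payload), max_bytes,
            )
            continue
        kept.append((tag, payload))
    return kept
```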

Commit:83bde95
Author:Shanqing Cai
Committer:Shanqing Cai

uploader: add rpc method GetExperiment to ExporterService (#3614) * Motivation for features / changes * In the experimental `ExperimentFromDev` class, support getting metadata including experiment name and description. * In `ExperimentFromDev.get_scalars()`, support a progress indicator. * Technical description of changes * Add non-streaming rpc method `GetExperiment()` to `ExporterService`. * Alternate designs / implementations considered * Add `Experiment` as a one-of response data type to `StreamExperimentDataResponse` * Con: In the current takeout paradigm, this leads to duplicate information. * Con: Wasteful when only the `Experiment` (metadata) is needed and the scalar, tensor, and blob sequences are not needed.

Commit:278df26
Author:William Chargin
Committer:Shanqing Cai

audio: deprecate labels and stop reading them (#3500) Summary: Most versions of the audio summary APIs only support writing audio data, but the TB 1.x `tensorboard.summary.audio` ops also support attaching a “label” (Markdown text) to each audio clip. This feature is not present in any version of the TensorFlow summary APIs, including `tf.contrib`, and is not present in the TB 2.x `tensorboard.summary.audio` API. It hasn’t been widely adopted, and doesn’t integrate nicely in its current form with upcoming efforts like migrating the audio dashboard to use the generic data APIs. This commit causes labels to no longer be read by TensorBoard. Labels are still written by the TB 1.x summary ops if specified, but now print a warning. Because we do not change the write paths, it is important that `audio.metadata` *not* set `DATA_CLASS_BLOB_SEQUENCE`, because otherwise audio summaries will not be touched by `dataclass_compat`. Tests for `dataclass_compat` cover this. We don’t think that this feature has wide adoption. If this change gets significant pushback, we can look into restoring labels with a different implementation, likely as a parallel tensor time series. Closes #3513. Test Plan: Unit tests updated. As a manual test, TensorBoard still works on both legacy audio data and audio data in the new form (that’s not actually written to disk yet), and simply does not display any labels. wchargin-branch: audio-no-labels

Commit:2daedd9
Author:TensorBoard Gardener

Integrate ab70ddfd325f5f99248eb3c5ce066c42e44b55f7

Commit:ab70ddf
Author:Shanqing Cai
Committer:GitHub

Revert "uploader: add rpc method GetExperiment to ExporterService (#3614)" (#3659) This reverts commit 2067b2569f72edca0979f81a7267b18e7a56e2e2. Reverts tensorflow/tensorboard#3614 to resolve internal proto service method name duplication. Will roll forward with fix later.

Commit:fae2da2
Author:TensorBoard Gardener

Integrate 2067b2569f72edca0979f81a7267b18e7a56e2e2

Commit:2067b25
Author:Shanqing Cai
Committer:GitHub

uploader: add rpc method GetExperiment to ExporterService (#3614) * Motivation for features / changes * In the experimental `ExperimentFromDev` class, support getting metadata including experiment name and description. * In `ExperimentFromDev.get_scalars()`, support a progress indicator. * Technical description of changes * Add non-streaming rpc method `GetExperiment()` to `ExporterService`. * Alternate designs / implementations considered * Add `Experiment` as a one-of response data type to `StreamExperimentDataResponse` * Con: In the current takeout paradigm, this leads to duplicate information. * Con: Wasteful when only the `Experiment` (metadata) is needed and the scalar, tensor, and blob sequences are not needed.

Commit:3543b23
Author:TensorBoard Gardener

Integrate 8ff3be67f64d66cd86841303b14e457bb4f28556

Commit:8ff3be6
Author:Brian Dubois
Committer:GitHub

Uploader uses entire UploadLimits from handshake. (#3631) Continue work to get upload-related parameters from the frontend handshake. The logic in uploader_subcommand reads UploadLimits from the handshake and passes it to the TensorBoardUploader. If any individual field in UploadLimits is missing, use a reasonable default. With this and some Google-internal code cleanup we can now delete the deprecated fields in TensorBoardUploader.

Commit:159ce63
Author:TensorBoard Gardener

Integrate 4188fcefa3d89f9ad133896418b82bd0ce99e33a

Commit:4188fce
Author:Brian Dubois
Committer:GitHub

Expand UploadLimits proto and use it in TensorBoardUploader (#3625) Motivation for features / changes There are a number of new parameters that impact the throughput of uploading data that we want the TensorBoard.dev frontend to return to the uploader in the handshake. Each of scalar, tensor, and blob now have a max_[scalars|tensor|blob]_request_size and min_[scalars|tensor|blob]_request_interval parameters. Technical description of changes Update the UploadLimits proto to include the new fields. Allow an instance of UploadLimits to be passed to TensorBoardUploader and use this throughout the code instead of the constants that have been used. None of the clients for TensorBoardUploader are yet updated to pass UploadLimits (there is one in this git repo and there is one internal to Google) so temporarily have TensorBoardUploader construct its own UploadLimits if None is passed, where values are all based on now-deprecated constants and arguments to the constructor. We hope to update clients in the next day or two so this is intended to be very temporary. Tests have been updated to use UploadLimits as arguments exclusively.

Commit:35e31e8
Author:TensorBoard Gardener

Integrate cb2738b238a0e286f962364e8269649fb80a4e8b

Commit:cb2738b
Author:Brian Dubois
Committer:GitHub

Allow server to specify maximum tensor upload size (#3575) Backends should be able to impose size limits for uploading individual tensor points, similar to how they can impose size limits for individual blobs. Fallback to a reasonable default value for the time being while we wait for backends to be updated to provide this information. If a Tensor point larger than the maximum size is encountered, strip it from write requests and log a warning.

Commit:e1c7cb8
Author:TensorBoard Gardener

Integrate 0fae2f3f870ccc93330a3110a05ac9ec97de06b2

Commit:0fae2f3
Author:William Chargin
Committer:GitHub

audio: deprecate labels and stop reading them (#3500) Summary: Most versions of the audio summary APIs only support writing audio data, but the TB 1.x `tensorboard.summary.audio` ops also support attaching a “label” (Markdown text) to each audio clip. This feature is not present in any version of the TensorFlow summary APIs, including `tf.contrib`, and is not present in the TB 2.x `tensorboard.summary.audio` API. It hasn’t been widely adopted, and doesn’t integrate nicely in its current form with upcoming efforts like migrating the audio dashboard to use the generic data APIs. This commit causes labels to no longer be read by TensorBoard. Labels are still written by the TB 1.x summary ops if specified, but now print a warning. Because we do not change the write paths, it is important that `audio.metadata` *not* set `DATA_CLASS_BLOB_SEQUENCE`, because otherwise audio summaries will not be touched by `dataclass_compat`. Tests for `dataclass_compat` cover this. We don’t think that this feature has wide adoption. If this change gets significant pushback, we can look into restoring labels with a different implementation, likely as a parallel tensor time series. Closes #3513. Test Plan: Unit tests updated. As a manual test, TensorBoard still works on both legacy audio data and audio data in the new form (that’s not actually written to disk yet), and simply does not display any labels. wchargin-branch: audio-no-labels

Commit:88c246a
Author:Stanley Bileschi
Committer:Stanley Bileschi

proto: sync to TensorFlow v2.2.0rc3

Commit:44192ab
Author:Shanqing Cai
Committer:Stanley Bileschi

Uploader: Add field mask for total_blob_bytes (#3467) Adds the field mask for the `total_blob_bytes` field added in #3448.

Commit:1a2f93d
Author:David Soergel
Committer:Stanley Bileschi

Provide upload limits in ServerInfo, and skip uploading blobs that are too large (#3453) Introduces a new `UploadLimits` submessage within `ServerInfoResponse`, to allow the server to advise the uploader client about limits that it should honor (e.g. things like data sizes, upload rates, and so on). The only limit actually supported is `max_blob_size`. The uploader uses this value to skip sending large blobs that the server would end up rejecting anyway.
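The `UploadLimits` shape described above can be sketched as follows; only `max_blob_size` is supported at this point, and the field numbers here are illustrative, not the actual definitions:

```proto
// Sketch of the UploadLimits submessage carried in ServerInfoResponse.
message UploadLimits {
  // Maximum allowed size, in bytes, for a single uploaded blob. The
  // uploader skips blobs larger than this rather than sending them.
  int64 max_blob_size = 1;
}

message ServerInfoResponse {
  // ... existing fields ...
  UploadLimits upload_limits = 4;  // field number illustrative
}
```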

Commit:3954eea
Author:Shanqing Cai
Committer:Stanley Bileschi

Uploader: Add `total_blob_bytes` field to `Experiment` proto (#3448) * Motivation for features / changes * Towards fixing b/152749189: displaying the number of blob bytes that an experiment contains in the `tensorboard dev list` command output * Technical description of changes * Add a `total_blob_bytes` field to the `Experiment` proto, which is the response message type of `StreamExperiments`
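The new field on the `Experiment` message (the response type of `StreamExperiments`) can be pictured as below; the field number and surrounding fields are illustrative:

```proto
// Sketch of the Experiment message with the new accounting field.
message Experiment {
  // ... existing fields (experiment_id, timestamps, counts, ...) ...

  // Total number of bytes of blob data stored in this experiment,
  // surfaced by `tensorboard dev list`.
  int64 total_blob_bytes = 5;  // field number illustrative
}
```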

Commit:9d21bd6
Author:Brian Dubois
Committer:Stanley Bileschi

Add --plugins option to uploader (#3402) Motivation for features / changes Allow uploader users to specify the plugins for which data should be uploaded. It has a two-fold purpose: (1) Allow users to specify "experimental" plugins that would otherwise not be uploaded. (2) Allow users to list a subset of launched plugins, if they do not want to upload all launched plugin data. Technical description of changes Add "--plugins" command line option for upload subcommand. The information is sent in the ServerInfoRequest (aka the handshake) and may impact the list of plugins returned in ServerInfoResponse.plugin_control.allowed_plugins. Sample usage: tensorboard dev upload --logdir --plugins scalars graphs histograms Outside of these specific changes: It's expected that supported servers will evaluate the list of plugins and decide whether it is valid. If valid, the server will respond with the entire list of plugins or perhaps a sublist. If the list is invalid then it will respond with a CompatibilityVerdict of VERDICT_ERROR and a useful detail message to print to console. If --plugins is not specified then the server is expected to respond with a default list of plugins to upload.
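The handshake shape described above can be sketched as follows. Only `plugin_control.allowed_plugins` is named in the commit; the other message and field names here are illustrative assumptions:

```proto
// Sketch of the --plugins handshake between uploader and server.
message ServerInfoRequest {
  // Plugins the user asked to upload via --plugins (empty = server default).
  repeated string requested_plugins = 1;  // name/number illustrative
}

message PluginControl {
  // Plugins whose data the server will accept for this upload session.
  repeated string allowed_plugins = 1;
}

message ServerInfoResponse {
  // ... compatibility verdict, API version, etc. ...
  PluginControl plugin_control = 3;  // field number illustrative
}
```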

Commit:4d3ff3e
Author:ericdnielsen
Committer:Stanley Bileschi

Include the size of the blob in the WriteBlobRequest (#3457) * Report the final size of a blob when it is uploaded, to allow for quota allocation and enforcement.
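One plausible shape for carrying the size alongside a chunked upload is sketched below; all field names and numbers here are purely illustrative, not the actual `WriteBlobRequest` definition:

```proto
// Illustrative sketch: a chunked blob write that also reports the
// blob's final size so the server can account for quota up front.
message WriteBlobRequest {
  string blob_id = 1;      // which blob this chunk belongs to
  bytes data = 2;          // chunk payload
  int64 offset = 3;        // position of this chunk within the blob
  int64 final_size = 4;    // total size of the blob, in bytes
}
```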

Commit:978ad14
Author:TensorBoard Gardener

Integrate 2ceffb4496a2ba1edd6fa22724b55af680435273

Commit:2ceffb4
Author:Shanqing Cai
Committer:GitHub

Uploader: Add field mask for total_blob_bytes (#3467) Adds the field mask for the `total_blob_bytes` field added in #3448.

Commit:fa6369d
Author:TensorBoard Gardener

Integrate 325c252455b1071e0a58ccc0899c664b00bda143

Commit:325c252
Author:David Soergel
Committer:GitHub

Provide upload limits in ServerInfo, and skip uploading blobs that are too large (#3453) Introduces a new `UploadLimits` submessage within `ServerInfoResponse`, to allow the server to advise the uploader client about limits that it should honor (e.g. things like data sizes, upload rates, and so on). The only limit actually supported is `max_blob_size`. The uploader uses this value to skip sending large blobs that the server would end up rejecting anyway.

Commit:489bfa2
Author:TensorBoard Gardener

Integrate c47b6aee3b4caefbdd7722f41823efe4965e5fd2

Commit:c47b6ae
Author:ericdnielsen
Committer:GitHub

Include the size of the blob in the WriteBlobRequest (#3457) * Report the final size of a blob when it is uploaded, to allow for quota allocation and enforcement.

Commit:fac7d36
Author:TensorBoard Gardener

Integrate 91fabf5729023efd15b1263a90208933975d986d

Commit:91fabf5
Author:Shanqing Cai
Committer:GitHub

Uploader: Add `total_blob_bytes` field to `Experiment` proto (#3448) * Motivation for features / changes * Towards fixing b/152749189: displaying the number of blob bytes that an experiment contains in the `tensorboard dev list` command output * Technical description of changes * Add a `total_blob_bytes` field to the `Experiment` proto, which is the response message type of `StreamExperiments`

Commit:3feab44
Author:TensorBoard Gardener

Integrate 3ba05e095dffc6aea675c322546ad36a6f1304ab

Commit:3ba05e0
Author:Brian Dubois
Committer:GitHub

Add --plugins option to uploader (#3402) Motivation for features / changes Allow uploader users to specify the plugins for which data should be uploaded. It has a two-fold purpose: (1) Allow users to specify "experimental" plugins that would otherwise not be uploaded. (2) Allow users to list a subset of launched plugins, if they do not want to upload all launched plugin data. Technical description of changes Add "--plugins" command line option for upload subcommand. The information is sent in the ServerInfoRequest (aka the handshake) and may impact the list of plugins returned in ServerInfoResponse.plugin_control.allowed_plugins. Sample usage: tensorboard dev upload --logdir --plugins scalars graphs histograms Outside of these specific changes: It's expected that supported servers will evaluate the list of plugins and decide whether it is valid. If valid, the server will respond with the entire list of plugins or perhaps a sublist. If the list is invalid then it will respond with a CompatibilityVerdict of VERDICT_ERROR and a useful detail message to print to console. If --plugins is not specified then the server is expected to respond with a default list of plugins to upload.

Commit:c6b6ce3
Author:TensorBoard Gardener

Integrate 2b2a976b03777a5c6ac6456370ec8c01b80c35e9

Commit:2b2a976
Author:William Chargin
Committer:GitHub

Revert "Add --plugins option to uploader (#3377)" (#3400) This reverts commit 343456b5768f290b9d8bc8483fec954c0074f4b7. Test Plan: Running `bazel run //tensorboard -- dev list` now works. Previously, it failed with: ``` File ".../tensorboard/uploader/uploader_main.py", line 750, in _get_server_info server_info = server_info_lib.fetch_server_info(origin, flags.plugins) AttributeError: 'Namespace' object has no attribute 'plugins' ``` wchargin-branch: revert-uploader-plugins-flag

Commit:31d4bb4
Author:TensorBoard Gardener

Integrate 66ddc0f548579268a164b5a34b811e5e61101b4b

Commit:66ddc0f
Author:Stanley Bileschi
Committer:GitHub

proto: sync to TensorFlow v2.2.0-rc0 (#3392) Summary: These are synced to TensorFlow tag v2.2.0-rc0, which resolves to 09cf2e6d20. Test Plan: Running `bazel test //tensorboard/compat/proto:proto_test` now passes in a virtualenv with tensorflow==2.2.0rc0 installed.

Commit:20400b7
Author:TensorBoard Gardener

Integrate 3176491cfe2cf07d8282380a5de3e8d05e0912a0

Commit:3176491
Author:Shanqing Cai
Committer:GitHub

Revert "Revert "Add message types and rpc method to support exporting BLOBs (#3373)" (#3384)" (#3390) This reverts commit 93759282d4dab7f5b6f3d6899cee719d1249edc6. * Motivation for features / changes * go/tbpr/3373 had proto message type name clashes. It was rolled back to unblock sync'ing of other PRs. * Now that the name clash should have been addressed through internal code changes, we are rolling the PR forward. * Technical description of changes * This PR is an exact rollback of the rollback (go/tbpr/3384) * Alternate designs / implementations considered * As discussed elsewhere, the message type `Blob` can consist of a `string url = 3` field. But I refrained from adding it in this PR, to keep it simple. The field can be added in separate PRs.

Commit:15afbe9
Author:TensorBoard Gardener

Integrate 93759282d4dab7f5b6f3d6899cee719d1249edc6

Commit:9375928
Author:Shanqing Cai
Committer:GitHub

Revert "Add message types and rpc method to support exporting BLOBs (#3373)" (#3384) This reverts commit 2f83ec5ebf40bac5870a20faba10dbeb9ffae167. * Motivation for features / changes * Sync failed due to a name clash. Reverting the PR that introduced the clashing `BlobSequence` message type; the internal duplication will be fixed first.

Commit:fdf3add
Author:TensorBoard Gardener

Integrate 343456b5768f290b9d8bc8483fec954c0074f4b7

Commit:343456b
Author:Brian Dubois
Committer:GitHub

Add --plugins option to uploader (#3377) Allow uploader users to specify the plugins for which data should be uploaded. It has a two-fold purpose: (1) Allow users to specify "experimental" plugins that would otherwise not be uploaded. (2) Allow users to list a subset of launched plugins, if they do not want to upload all launched plugin data. === Technical description of changes Add "--plugins" command line option for upload subcommand. The information is sent in the ServerInfoRequest (aka the handshake) and may impact the list of plugins returned in ServerInfoResponse.plugin_control.allowed_plugins. Sample usage: tensorboard dev upload --logdir --plugins scalars graphs histograms Outside of these specific changes: It's expected that supported servers will evaluate the list of plugins and decide whether it is valid. If valid, the server will respond with the entire list of plugins or perhaps a sublist. If the list is invalid then it will respond with a CompatibilityVerdict of VERDICT_ERROR and a useful detail message to print to console. If --plugins is not specified then the server is expected to respond with a default list of plugins to upload.

Commit:5c47d95
Author:TensorBoard Gardener

Integrate 2f83ec5ebf40bac5870a20faba10dbeb9ffae167

Commit:2f83ec5
Author:Shanqing Cai
Committer:GitHub

Add message types and rpc method to support exporting BLOBs (#3373) * Motivation for features / changes * Support tbdev uploader's exporting of BLOBs (e.g., GraphDefs) * Technical description of changes * Add the following message types to tensorboard/uploader/proto/blob.proto: * `Blob`, with fields blob_id and state (using the existing `BlobState` message type in the same file) * `BlobSequence`, which is composed of `Blob`s * In tensorboard/uploader/exporter_service.proto: * Add the following message type: `BlobSequencePoints`, this is in parallel to the existing `ScalarPoints` and `TensorPoints` message types in the same file. * Change the existing `points` (scalars), `tensors`, and to-be-added `blob_sequences` into a `oneof`. * Add the following rpc method to `TensorBoardExporterService`: `StreamBlobData`. Co-author: wchargin@
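The pieces named in the commit above fit together roughly as sketched below. Message names (`Blob`, `BlobSequence`, `BlobSequencePoints`, `StreamBlobData`) come from the commit; field numbers, field names other than `blob_id`/`state`, and the rpc's request/response type names are illustrative assumptions:

```proto
// Sketch of the blob-export additions in
// tensorboard/uploader/proto/blob.proto and exporter_service.proto.
message Blob {
  string blob_id = 1;
  BlobState state = 2;  // existing enum in blob.proto
}

message BlobSequence {
  repeated Blob blobs = 1;  // field name illustrative
}

// Parallel to the existing ScalarPoints and TensorPoints messages.
message BlobSequencePoints {
  repeated BlobSequence values = 1;  // field name illustrative
}

service TensorBoardExporterService {
  // ... existing methods ...
  // Streams the raw bytes of a single blob back to the exporter client.
  rpc StreamBlobData(StreamBlobDataRequest)
      returns (stream StreamBlobDataResponse);  // type names illustrative
}
```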

Commit:ca26d31
Author:TensorBoard Gardener

Integrate 3da17e60518b7635f5009bc03a4994d7dd4dbe45

Commit:3da17e6
Author:David Soergel
Committer:GitHub

Rename BlobState enum values (#3355) (#3370)

Commit:3c23d4b
Author:TensorBoard Gardener

Integrate b8b4122d513c92331811e67c3ef83d9c2fec5a41

Commit:b8b4122
Author:David Soergel
Committer:GitHub

Revert "Rename BlobState enum values (#3355)" (#3363) This reverts commit 89f5a6a72c599b3b44578d14f4c032c2a8f279ba.

Commit:43cadcc
Author:TensorBoard Gardener

Integrate 89f5a6a72c599b3b44578d14f4c032c2a8f279ba

Commit:89f5a6a
Author:David Soergel
Committer:GitHub

Rename BlobState enum values (#3355)

Commit:7814447
Author:TensorBoard Gardener

Integrate 83776f4633538ef88b488772b91b37d31bf23c26