https://www.tensorflow.org/api_docs/python/tf/train/AdadeltaOptimizer https://github.com/tensorflow/tensorflow/blob/c19e29306ce1777456b2dbb3a14f511edf7883a8/tensorflow/core/kernels/training_ops.cc#L68
Used in:
https://www.tensorflow.org/api_docs/python/tf/train/AdagradOptimizer https://github.com/tensorflow/tensorflow/blob/c19e29306ce1777456b2dbb3a14f511edf7883a8/tensorflow/core/kernels/training_ops.cc#L151
Used in:
The Adam optimizer does not implement hyper-parameter update; use the dynamic learning rate feature instead, setting the learning rate to: user learning_rate * sqrt(1 - beta2^t) / (1 - beta1^t). Here, t is the current timestep.
https://www.tensorflow.org/api_docs/python/tf/train/AdamOptimizer https://github.com/tensorflow/tensorflow/blob/ab51450c817674c8ff08a7ae4f8ac50cdc4bed8b/tensorflow/python/training/adam.py#L54
Note that the code by default implements the lazy version of Adam (https://www.tensorflow.org/api_docs/python/tf/contrib/opt/LazyAdamOptimizer) unless the use_non_lazy_adam parameter is set, in which case it implements the normal version of Adam that updates all parameters in the embedding table, even for entries that are not used in the current minibatch (https://www.tensorflow.org/api_docs/python/tf/contrib/opt/AdamOptimizer). If use_non_lazy_adam is enabled, gradient accumulation must also be enabled in order to get correct results; a warning will be printed otherwise (which may change to an error in the future).
If use_sum_inside_sqrt is set, the Adam variable update formula changes from m / (sqrt(v) + epsilon) to m / sqrt(v + epsilon**2); this option improves the performance of TPU training and is not expected to harm model quality.
Used in:
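A minimal sketch of the dynamic learning rate schedule described above; the function name and arguments are illustrative, not part of the TensorFlow API, and the resulting scalar is what would be fed through the dynamic learning rate feature:

```python
import math

def dynamic_adam_learning_rate(user_learning_rate, beta1, beta2, t):
    """user_learning_rate * sqrt(1 - beta2^t) / (1 - beta1^t), with t the current timestep."""
    return user_learning_rate * math.sqrt(1.0 - beta2 ** t) / (1.0 - beta1 ** t)

# The schedule approaches user_learning_rate as t grows.
for t in (1, 10, 1000):
    print(t, dynamic_adam_learning_rate(0.001, beta1=0.9, beta2=0.999, t=t))
```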
Algorithm in http://www.jmlr.org/papers/volume12/duchi11a/duchi11a.pdf.
Used in:
Whether to use the updated or the old value of the accumulator when computing the effective learning rate. When update_accumulator_first is set to True, the updated value of the accumulator is used.
The max_var_update value to use. Set value to 0 (default) to disable using max_var_update to clip the gradient.
The maximum value of the accumulator. Set max_accumulator to 0 (default) to disable using max_accumulator to clip the accumulator.
https://www.tensorflow.org/api_docs/python/tf/train/RMSPropOptimizer https://github.com/tensorflow/tensorflow/blob/c19e29306ce1777456b2dbb3a14f511edf7883a8/tensorflow/core/kernels/training_ops.cc#L372
Used in:
Used in:
-inf if not set
+inf if not set
Describes the result of a TPU compilation.
The error message, if any, returned during compilation.
HLO proto.
Dynamic learning rate specification in the TPUEmbeddingConfiguration. The actual learning rates are provided as a scalar input list to the SendTPUEmbeddingGradients Op indexed by their tag specified through the following proto.
Used in:
For tables where learning rates are dynamically computed and communicated to the TPU embedding program, a tag must be specified for the learning rate. The tag must be a non-negative integer. The total number of unique tags must be less than or equal to the number of tables in the TPU embedding configuration (a table specifies no tag if it uses a constant learning rate, and exactly one tag if it uses dynamic learning rates). All tags in the range [0, number_of_unique_tags) must be present in the TPU embedding configuration, i.e., a tag cannot be skipped if a numerically greater tag is used in the configuration.
If multiple tables specify the same tag, they *MUST* have the same dynamic learning rate; for example, their dynamic learning rate could be computed by the same TensorFlow sub-graph. The partitioning of the embedding layer is more optimal when the number of unique tags is as *LOW* as possible, i.e., when many tables share the same tag.
The learning_rate input of the SendTPUEmbeddingGradients op is used to communicate dynamic learning rates to the TPU embedding program. It is a list of scalars whose size equals the number of unique tags. The learning rate associated with a particular tag is specified by populating its corresponding index in the list of learning_rate scalars.
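A minimal sketch of the tag bookkeeping described above; the table names, tag assignment, and lr_for_tag helper are hypothetical, and the resulting list stands in for the learning_rate input of SendTPUEmbeddingGradients:

```python
# Hypothetical tag assignment: tables sharing a dynamic schedule share a tag.
table_tags = {"user_table": 0, "item_table": 0, "context_table": 1}

# Tags must form the contiguous range [0, number_of_unique_tags).
unique_tags = sorted(set(table_tags.values()))
assert unique_tags == list(range(len(unique_tags))), "tags must not skip values"

def lr_for_tag(tag, step):
    # Placeholder schedules; in a real model these would be computed by a
    # TensorFlow sub-graph shared by all tables carrying the same tag.
    return 0.01 * (0.9 ** step) if tag == 0 else 0.001

step = 42
learning_rates = [lr_for_tag(tag, step) for tag in unique_tags]
print(learning_rates)  # one scalar per unique tag, indexed by tag
```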
https://www.tensorflow.org/api_docs/python/tf/train/FtrlOptimizer https://github.com/tensorflow/tensorflow/blob/c19e29306ce1777456b2dbb3a14f511edf7883a8/tensorflow/core/kernels/training_ops.cc#L192
Used in:
Status of using gradient accumulation (doing two passes over the input gradients: one to accumulate them into a temporary array and another to apply them using the actual optimization algorithm). The extra message is to wrap the enum for scoping.
(message has no fields)
If UNSPECIFIED (default), gradient accumulation is ENABLED.
Used in:
Configuration proto for hot ID optimization. This is an experimental feature that is currently disabled (by default).
Used in:
Whether to enable or disable hot ID optimization. If UNSPECIFIED (default), hot ID optimization is DISABLED.
Used in:
Source of learning rate to use.
Used in:
Variant of algorithm in http://proceedings.mlr.press/v44/shamir15.pdf
Used in:
https://www.tensorflow.org/api_docs/python/tf/train/MomentumOptimizer https://github.com/tensorflow/tensorflow/blob/c19e29306ce1777456b2dbb3a14f511edf7883a8/tensorflow/core/kernels/training_ops.cc#L271
Used in:
The online Yogi optimizer does not implement hyper-parameter update; use the dynamic learning rate feature instead, setting the learning rate to: user learning_rate * sqrt(1 - beta2^t) / (1 - beta1^t). Here, t is the current timestep. The algorithm is described in https://papers.nips.cc/paper/8186-adaptive-methods-for-nonconvex-optimization.pdf, plus some extensions based on FTRL. Note that the code by default implements the lazy version of online Yogi.
Used in:
The L1 regularization parameter (used analogously to the one in FTRL).
The L2 regularization parameter (used analogously to the one in FTRL).
\beta_2 from Algorithm 2 in the paper.
Initial value of V variable in paper.
Initial value of linear variable in FTRL.
Activation to use to replace sign function in v_t update in Algorithm 2 of paper.
x -> copysign(1, x) (i.e., return 1 for an input of +0 rather than 0).
Used in:
(message has no fields)
x -> tanh(x * 10)
Used in:
(message has no fields)
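An illustrative sketch of the v_t update from Algorithm 2 of the referenced paper, showing where the two activation choices above (copysign and tanh(x * 10)) plug in; this is a NumPy stand-in, not the TPU embedding implementation:

```python
import numpy as np

def yogi_v_update(v, grad, beta2=0.999, activation="sign"):
    """v_t = v_{t-1} - (1 - beta2) * act(v_{t-1} - g_t^2) * g_t^2."""
    g2 = grad * grad
    if activation == "sign":
        act = np.copysign(1.0, v - g2)      # returns +1 for an argument of +0
    else:
        act = np.tanh(10.0 * (v - g2))      # smooth replacement for the sign function
    return v - (1.0 - beta2) * act * g2

v = np.full(4, 1e-6)
g = np.array([0.0, 0.01, 0.1, 1.0])
print(yogi_v_update(v, g, activation="sign"))
print(yogi_v_update(v, g, activation="tanh"))
```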
Used in:
Learning rate used for updating the embedding layer parameters.
Limits to which to clip the weight values after the backward pass; not present means no limits are applied.
Limits to which to clip the backward pass gradient before using it for updates; not present means no limits are applied.
Amount of weight decay to apply; see weight_decay_optimizers.py for details. Almost all optimizers are supported with this option (MDL Adagrad Light does not work, and SGD does not behave as expected if it is enabled). Although there is no check, users who want weight decay will probably also want to enable gradient accumulation so that the decay happens once per minibatch.
Status of using gradient accumulation (doing two passes over the input gradients: one to accumulate them into a temporary array and another to apply them using the actual optimization algorithm).
Configuration proto for hot ID replication. This is an experimental feature that is currently disabled (by default).
Optimization algorithm parameters; which field is selected determines which algorithm to use.
A mapping between the dynamic shape dimension of an input and the arg that represents the real shape.
Input arg index with dynamic shapes.
The dynamic shape dimension index.
The arg index that the dynamic dimension maps to; the corresponding arg holds the value of the real shape.
https://www.tensorflow.org/api_docs/python/tf/train/ProximalAdagradOptimizer https://github.com/tensorflow/tensorflow/blob/c19e29306ce1777456b2dbb3a14f511edf7883a8/tensorflow/core/kernels/training_ops.cc#L164
Used in:
https://www.tensorflow.org/api_docs/python/tf/train/RMSPropOptimizer https://github.com/tensorflow/tensorflow/blob/c19e29306ce1777456b2dbb3a14f511edf7883a8/tensorflow/core/kernels/training_ops.cc#L356
Used in:
Specification of an optimization algorithm's state variables (both the main value vector and any extra accumulators, etc.). This proto is only used internally by the TPU software and is not exposed directly to the TF model.
Parameter name for the state variable.
Usage type of this state variable.
A state variable that should be filled with a constant and normally hidden from users (used for intermediate gradients being accumulated, for example).
Used in:
A normal state variable that should be saved and restored in checkpoints and used as an input or output to non-debug TensorFlow ops.
Used in:
For padding embedding rows, this field specifies the initial value to be used. Separate initial values need to be specified for the embeddings and any extra accumulators. The initial values should be specified so as to maintain two invariants during model training: (1) The embedding vector multiplied by zero returns a vector containing all zeros. To maintain this invariant, the embedding values should never be NaNs or +-infinity. (2) Repeatedly applying the optimizer using a gradient vector of all zeros does not cause the embeddings or slot variables to become NaNs or +-infinity. The padding row is looked up when no embedding IDs are present for a feature. The semantics of embedding lookup dictate that the output must be zero under this scenario.
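A small self-check of the two invariants, using an Adagrad-style update as an illustrative stand-in for whichever optimizer the table actually uses; it shows why, for example, a zero accumulator initial value would violate invariant (2):

```python
import numpy as np

def padding_row_is_safe(embedding_init, accumulator_init, lr=0.1, steps=10, width=8):
    emb = np.full(width, embedding_init, dtype=np.float32)
    acc = np.full(width, accumulator_init, dtype=np.float32)
    # Invariant (1): embedding * 0 must be exactly zero, so no NaN/inf values.
    if not np.all(emb * 0.0 == 0.0):
        return False
    # Invariant (2): repeated all-zero gradients must not blow up the row.
    for _ in range(steps):
        grad = np.zeros_like(emb)
        acc = acc + grad * grad
        with np.errstate(invalid="ignore", divide="ignore"):
            emb = emb - lr * grad / np.sqrt(acc)   # acc == 0 gives 0/0 = NaN here
    return bool(np.all(np.isfinite(emb)) and np.all(np.isfinite(acc)))

print(padding_row_is_safe(embedding_init=0.0, accumulator_init=0.1))  # True
print(padding_row_is_safe(embedding_init=0.0, accumulator_init=0.0))  # False: NaN from 0/0
```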
https://www.tensorflow.org/api_docs/python/tf/train/GradientDescentOptimizer https://github.com/tensorflow/tensorflow/blob/c19e29306ce1777456b2dbb3a14f511edf7883a8/tensorflow/core/kernels/training_ops.cc#L423
Used in:
(message has no fields)
Number of samples in each batch of embedding layer activations sent to the TensorCore.
Number of TPU hosts used for inference/training.
Number of TensorCores used for inference/training.
This parameter determines whether the execution of the sparse core is pipelined with that of the TensorCore. It only affects results when mode=TRAINING; if mode=INFERENCE or BACKWARD_PASS_ONLY, it does not affect execution and its value is ignored.
false: The execution of the sparse core is not pipelined with that of the TensorCore. The forward pass of every step on the sparse core is executed only after the backward pass of the previous step is complete, and the backward pass on the sparse core is executed only after the embedding gradients have been computed on the TensorCore on every step. This ensures that the activations on every step observe the gradient updates from the previous step on both the sparse core and the TensorCore.
true: The execution of the sparse core is pipelined with that of the TensorCore. The forward pass of every step on the sparse core can be executed after the forward pass of the previous step is complete, without waiting for the backward pass. This improves the utilization of the sparse core, allowing it to process step N+1 while the embedding gradients for step N are computed on the TensorCore. The backward pass of every step on the sparse core is executed directly after the forward pass for the next step is complete. The drawback is that embedding activations for step N+1 do not observe the embedding gradient updates from step N. This could affect model quality if steps N and N+1 involve the same set of embedding IDs. However, since the embedding updates are sparse, this is generally not considered a problem.
Extended output layout information; if not provided, a compatibility mode will use defaults that match the old layout. Providing a value for this field is EXPERIMENTAL and most ways of filling it will probably break. Do not set it unless you know what you are doing.
Mode: whether the embedding layer program should be run for inference (forward pass only), training (both forward and backward pass), or just the backward pass.
Used in:
Sharding strategy of the embedding tables among the hosts.
If the sharding_strategy is "mod", each id is assigned to host "id % num_hosts". For instance, 13 ids are split across 5 hosts as: [[0, 5, 10], [1, 6, 11], [2, 7, 12], [3, 8], [4, 9]].
If the sharding_strategy is "div", ids are assigned to hosts in a contiguous manner. In this case, 13 ids are split across 5 hosts as: [[0, 1, 2], [3, 4, 5], [6, 7, 8], [9, 10], [11, 12]].
In both strategies, if the id space does not divide evenly among the hosts, each of the first "table_descriptor.vocabulary_size % num_hosts" hosts is assigned one more id. This partitioning strategy exactly follows that of the embedding_lookup TensorFlow function at tensorflow/python/ops/embedding_ops.py.
Used in:
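A short sketch that reproduces the "mod" and "div" assignments from the example above; it is plain Python for illustration, independent of the TPU embedding software:

```python
def shard_ids(vocabulary_size, num_hosts, strategy):
    """Assign row ids to hosts under the "mod" or "div" sharding strategy."""
    if strategy == "mod":
        return [[i for i in range(vocabulary_size) if i % num_hosts == h]
                for h in range(num_hosts)]
    # "div": contiguous ranges; the first vocabulary_size % num_hosts hosts
    # receive one extra id when the id space does not divide evenly.
    base, extra = divmod(vocabulary_size, num_hosts)
    shards, start = [], 0
    for h in range(num_hosts):
        size = base + (1 if h < extra else 0)
        shards.append(list(range(start, start + size)))
        start += size
    return shards

print(shard_ids(13, 5, "mod"))  # [[0, 5, 10], [1, 6, 11], [2, 7, 12], [3, 8], [4, 9]]
print(shard_ids(13, 5, "div"))  # [[0, 1, 2], [3, 4, 5], [6, 7, 8], [9, 10], [11, 12]]
```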
Description of the various embedding tables.
Used in:
Name of the table.
Size of the vocabulary (i.e., number of rows) in the table.
The embedding dimension (i.e., the width of the embedding table).
Number of features mapped to this table.
Details of the learning algorithm used to update the embedding parameters.
Used in:
Output locations for each feature of each table.
Shape and layout information for each tensor.
Format information for a single output tensor.
Used in:
Description of the output placement for one feature.
Used in:
Typically, only one copy of each feature is used, but multiple are allowed and the same data will be copied to all of them (with the gradients summed in the backward pass).
Location of one copy of the feature's data.
Used in:
Which output tensor this copy of the feature will go into. Must be between 0 and layout.output_size().
Offset in dimension 0 for this feature copy. Must be between 0 and layout.output(tensor_index).dim0_size_per_sample().
Offset in dimension 1 for this feature copy. Must be between 0 and layout.output(tensor_index).dim1_size() - table width; repeated or partially/fully overlapping values are allowed, and results that land in the same range will be summed (with the gradients replicated in the backward pass).
Description of the output placement for features of one table.
Used in:
Output locations for each feature loaded from this table.
Size and layout information for 2-D tensors.
Used in:
Multiplier for output dimension 0 size; used to match legacy format that stacks features within a sample in dimension 0.
The size (in dimension 1) of this output tensor.
Describes the geometry of a TPU mesh.
The dimensions of the TPU topology, in cores. Typically, this is a 3D topology [x, y, core], where the major dimensions correspond to TPU chips, and the minor dimension describes the number of cores on a multicore chip.
Number of TensorFlow tasks in the cluster.
Number of TPU devices per task.
A flattened rank 3 int32 array with shape [num_tasks, num_tpu_devices_per_task, len(mesh_shape)]. `tasks` is the number of tasks in the TPU cluster, `devices` is the number of TPU devices per task, and the minor dimension corresponds to a position in the TPU mesh topology. Each entry [task, device, axis] gives the `axis`-th coordinate in the topology of a task/device pair.
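A hypothetical example of how the flattened array can be unpacked; the mesh shape, task/device counts, and coordinate values below are made up for illustration:

```python
import numpy as np

mesh_shape = [2, 2, 1]                 # hypothetical [x, y, core] topology
num_tasks = 2
num_tpu_devices_per_task = 2
device_coordinates = [                 # flattened [task, device, axis] entries
    0, 0, 0,   # task 0, device 0 at (x=0, y=0, core=0)
    0, 1, 0,   # task 0, device 1 at (x=0, y=1, core=0)
    1, 0, 0,   # task 1, device 0 at (x=1, y=0, core=0)
    1, 1, 0,   # task 1, device 1 at (x=1, y=1, core=0)
]

coords = np.asarray(device_coordinates, dtype=np.int32).reshape(
    num_tasks, num_tpu_devices_per_task, len(mesh_shape))
print(coords[1, 0])  # mesh coordinates of task 1, device 0 -> [1 0 0]
```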