Used in:
Total number of bytes requested
Total number of bytes allocated if known
Name of the allocator used
Identifier of the allocated buffer if known
Set if this tensor only has one remaining reference
Address of the allocation.
An allocation/de-allocation operation performed by the allocator.
Used in:
The timestamp of the operation.
Number of bytes allocated, or de-allocated if negative.
Used in:
These are per-node allocator memory stats.
The bytes that are not deallocated.
The allocation and deallocation timeline.
These are snapshots of the overall allocator memory stats. The number of live bytes currently allocated by the allocator.
Used to specify and override the default API & behavior in the generated code for client languages, from what you would get from the OpDef alone. There will be a set of ApiDefs that are common to all client languages, and another set per client language. The per-client-language ApiDefs will inherit values from the common ApiDefs, which they can either replace or modify. We separate the API definition from the OpDef so we can evolve the API while remaining backwards compatible when interpreting old graphs. Overrides go in an "api_def.pbtxt" file with a text-format ApiDefs message. WARNING: Be *very* careful changing the API for any existing op -- you can change the semantics of existing code. These changes may need to wait until a major release of TensorFlow to avoid breaking our compatibility promises.
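As a rough illustration, such an api_def.pbtxt override can be parsed into the ApiDefs message from Python. The op name, endpoint, and summary below are purely hypothetical, and the generated-module path tensorflow.core.framework.api_def_pb2 is an assumption about how the protos are packaged:

    from google.protobuf import text_format
    from tensorflow.core.framework import api_def_pb2  # assumed generated module path

    # Hypothetical override: set the canonical endpoint name and a summary.
    pbtxt = """
    op {
      graph_op_name: "MyAddN"              # name of the op in the OpDef
      endpoint { name: "math.my_add_n" }   # canonical, non-deprecated endpoint
      summary: "Adds all input tensors element-wise."
    }
    """
    api_defs = api_def_pb2.ApiDefs()
    text_format.Parse(pbtxt, api_defs)
    print(api_defs.op[0].graph_op_name)    # -> MyAddN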
Used in:
Name of the op (in the OpDef) to specify the API for.
If this op is deprecated, set the deprecation message to the message that should be logged when this op is used. The message should indicate an alternative op to use, if any.
Major version when the op will be deleted. For example, set this value to 2 if the op's API should be removed in TensorFlow 2.0 and deprecated in versions before that.
List of original in_arg names to specify a new argument order. arg_order should either be empty (keep the current order) or have the same length as in_arg.
One-line human-readable description of what the Op does.
Additional, longer human-readable description of what the Op does.
Modify an existing/inherited description by adding text to the beginning or end.
Used in:
Change the name used to access this arg in the API from what is used in the GraphDef. Note that these names in `backticks` will also be replaced in the summary & description fields.
Note: this will replace any inherited arg doc. There is no current way of modifying arg descriptions (other than replacing them entirely) as can be done with op descriptions.
Description of the graph-construction-time configuration of this Op. That is to say, this describes the attr fields that will be specified in the NodeDef.
Used in:
Change the name used to access this attr in the API from what is used in the GraphDef. Note that these names in `backticks` will also be replaced in the summary & description fields.
Specify a new default value to use for this attr. This default will be used when creating new graphs, as opposed to the default in the OpDef, which will be used when interpreting old GraphDefs.
Note: this will replace any inherited attr doc. There is no current way of modifying attr descriptions (other than replacing them entirely), as can be done with op descriptions.
If you specify any endpoint, this will replace all of the inherited endpoints. The first endpoint should be the "canonical" endpoint, and should not be deprecated (unless all endpoints are deprecated).
Used in:
Name should be either like "CamelCaseName" or "Package.CamelCaseName". Client-language-specific ApiDefs may use a snake_case convention instead of CamelCase.
Set if this endpoint is deprecated. If set to true, a message suggesting to use a non-deprecated endpoint instead will be printed. If all endpoints are deprecated, set deprecation_message in ApiDef instead.
Major version when an endpoint will be deleted. For example, set this value to 2 if the endpoint should be removed in TensorFlow 2.0 and deprecated in versions before that.
Used in:
Normally this is "VISIBLE" unless you are inheriting a different value from another ApiDef.
Publicly visible in the API.
Do not include this op in the generated API. If visibility is set to 'SKIP', other fields are ignored for this op.
Hide this op by putting it into an internal namespace (or whatever is appropriate in the target language).
Protocol buffer representing the value for an attr used to configure an Op. Comment indicates the corresponding attr type. Only the field matching the attr type may be filled.
Used in:
"string"
"int"
"float"
"bool"
"type"
"shape"
"tensor"
any "list(...)"
"func" represents a function. func.name is a function's name or a primitive op's name. func.attr.first is the name of an attr defined for that function. func.attr.second is the value for that attr in the instantiation.
This is a placeholder only used in nodes defined inside a function. It indicates the attr value will be supplied when the function is instantiated. For example, let us suppose a node "N" in function "FN". "N" has an attr "A" with value placeholder = "foo". When FN is instantiated with attr "foo" set to "bar", the instantiated node N's attr A will have been given the value "bar".
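A minimal sketch of that placeholder mechanism, building the AttrValue directly in Python (the module path is an assumption; the attr and function names are the hypothetical ones from the example above):

    from tensorflow.core.framework import attr_value_pb2  # assumed generated module path

    # Inside the body of function "FN", node "N" defers its attr "A" to the
    # function-level attr "foo"; the concrete value arrives at instantiation time.
    deferred = attr_value_pb2.AttrValue(placeholder="foo")

    # When FN is instantiated with attr "foo" set to "bar", the runtime substitutes:
    resolved = attr_value_pb2.AttrValue(s=b"bar")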
Used in:
"list(string)"
"list(int)"
"list(float)"
"list(bool)"
"list(type)"
"list(shape)"
"list(tensor)"
"list(attr)"
Total cost of this graph, typically used for balancing decisions.
Used in:
Aggregated cost value.
Aggregated cost dimension (e.g. 'memory', 'compute', 'network').
Used in:
The name of the node. Names are globally unique.
The device of the node. Can be empty if the node is mapped to the default partition or partitioning hasn't been run yet.
The id of the node. Node ids are only unique inside a partition.
Temporary memory used by this node.
Persistent memory used by this node.
Estimate of the computational cost of this node, in microseconds.
Analytical estimate of the computational cost of this node, in microseconds.
Analytical estimate of the memory access cost of this node, in microseconds.
If true, the output is permanent: it can't be discarded, because this node is part of the "final output". Nodes may depend on final nodes.
Ids of the control inputs for this node.
Are the costs inaccurate?
Inputs of this node. They must be executed before this node can be executed. An input is a particular output of another node, specified by the node id and the output index.
Used in:
Outputs of this node.
Used in:
If >= 0, the output is an alias of an input. Note that an alias input may itself be an alias. The algorithm will therefore need to follow those pointers.
Used in:
Unknown data class, used (implicitly) for legacy data. Will not be processed by data ingestion pipelines.
Scalar time series. Each `Value` for the corresponding tag must have `tensor` set to a rank-0 tensor of floating-point dtype, which will be converted to float64.
Tensor time series. Each `Value` for the corresponding tag must have `tensor` set. The tensor value is arbitrary, but should be small to accommodate direct storage in database backends: an upper bound of a few kilobytes is a reasonable rule of thumb.
Blob sequence time series. Each `Value` for the corresponding tag must have `tensor` set to a rank-1 tensor of bytestring dtype.
Used in:
Not a legal value for DataType. Used to indicate a DataType field has not been set.
Data types that all computation devices are expected to be capable to support.
Single-precision complex
Quantized int8
Quantized uint8
Quantized int32
Float32 truncated to 16 bits. Only for cast ops.
Quantized int16
Quantized uint16
Double-precision complex
Arbitrary C++ data types
Do not use! These are only for parameters. Every enum above should have a corresponding value below (verified by types_test).
Fully specified name of the device within a cluster.
String representation of device_type.
Memory capacity of device in bytes.
Platform-specific data about device that may be useful for supporting efficient data transfers.
A device is assigned a globally unique number each time it is initialized. "incarnation" should never be 0.
String representation of the physical device that this device maps to.
Used in:
Optional bus locality of device. Default value of 0 means no specific locality. Specific localities are indexed from 1.
Optional NUMA locality of device.
Optional local interconnect links to other devices.
Used in:
Its key is the thread id.
A function can be instantiated when the runtime can bind every attr with a value. When a GraphDef has a call to a function, it must have binding for every attr defined in the signature. TODO(zhifengc): * device spec, etc.
Used in:
The definition of the function's name, arguments, return values, attrs etc.
Attributes specific to this function definition.
Unique IDs for each resource argument, used to track aliasing resources. If Argument A and Argument B alias each other, then resource_arg_unique_ids[A.index] == resource_arg_unique_ids[B.index]. If this field is empty, none of the arguments could alias; otherwise, every resource argument should have an entry in this field. When instantiated, the unique IDs will be attached to the _Arg nodes' "_resource_arg_unique_id" attribute.
By convention, "op" in node_def is resolved by consulting with a user-defined library first. If not resolved, "func" is assumed to be a builtin op.
A mapping from the output arg names from `signature` to the outputs from `node_def` that should be returned by the function.
A mapping from control output names from `signature` to node names in `node_def` which should be control outputs of this function.
Attributes for function arguments. These attributes are the same set of valid attributes as to _Arg nodes.
Used in:
A library is a set of named functions.
Used in:
GradientDef defines the gradient function of a function defined in a function library. A gradient function g (specified by gradient_func) for a function f (specified by function_name) must satisfy the following: f must be a numerical function taking N inputs and producing M outputs, and g must be a function taking N + M inputs and producing N outputs. I.e., if we have (y1, y2, ..., y_M) = f(x1, x2, ..., x_N), then g computes (dL/dx1, dL/dx2, ..., dL/dx_N) = g(x1, x2, ..., x_N, dL/dy1, dL/dy2, ..., dL/dy_M), where L is a scalar-valued function of (x1, x2, ..., x_N) (e.g., the loss function), and dL/dx_i is the partial derivative of L with respect to x_i.
Used in:
The function name.
The gradient function's name.
Represents the graph of operations
Used in:
Compatibility versions of the graph. See core/public/version.h for version history. The GraphDef version is distinct from the TensorFlow version, and each release of TensorFlow will support a range of GraphDef versions.
Deprecated single version field; use versions above instead. Since all GraphDef changes before "versions" was introduced were forward compatible, this field is entirely ignored.
EXPERIMENTAL. DO NOT USE OR DEPEND ON THIS YET. "library" provides user-defined functions.
Naming:
  * library.function.name are in a flat namespace. NOTE: We may need to change it to be hierarchical to support different orgs. E.g., { "/google/nn", { ... }}, { "/google/vision", { ... }} { "/org_foo/module_bar", { ... }} map<string, FunctionDefLib> named_lib;
  * If node[i].op is the name of one function in "library", node[i] is deemed a function call. Otherwise, node[i].op must be a primitive operation supported by the runtime.
Function call semantics:
  * The callee may start execution as soon as some of its inputs are ready. The caller may want to use the Tuple() mechanism to ensure all inputs are ready at the same time.
  * The consumer of return values may start executing as soon as the return values the consumer depends on are ready. The consumer may want to use the Tuple() mechanism to ensure the consumer does not start until all return values of the callee function are ready.
Used in:
Used in:
Used in:
Protocol buffer representing a handle to a tensorflow resource. Handles are not valid across executions, but can be serialized back and forth from within a single run.
Input Node parameters of transferred graph
Destination of graph transfer
Used in:
Used in:
Used in:
Used in:
Used in:
Serialization format for histogram module in core/lib/histogram/histogram.h
Used in:
Parallel arrays encoding the bucket boundaries and the bucket values. bucket(i) is the count for bucket i. The range for bucket i is:
  i == 0:  -DBL_MAX .. bucket_limit(0)
  i != 0:  bucket_limit(i-1) .. bucket_limit(i)
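For example, a value can be mapped to its bucket index with a simple search over bucket_limit. This sketch assumes the upper bound of each bucket is inclusive, consistent with the ranges described above; the limits and counts are made up:

    import bisect

    bucket_limit = [0.0, 1.0, 10.0]   # bucket upper bounds (parallel to `bucket`)
    bucket       = [4.0, 2.0, 7.0]    # counts: 4 values <= 0.0, 2 in (0.0, 1.0], 7 in (1.0, 10.0]

    def bucket_index(value, limits):
        # First bucket covers -DBL_MAX .. limits[0]; bucket i covers limits[i-1] .. limits[i].
        return bisect.bisect_left(limits, value)

    print(bucket_index(0.5, bucket_limit))   # -> 1, i.e. the (0.0, 1.0] bucket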
Used in:
Used in:
Must match the name of an Op.
Type of device this kernel runs on.
Names of the Op's input_/output_args that reside in host memory instead of device memory.
This allows experimental kernels to be registered for an op that won't be used unless the user specifies a "_kernel" attr with value matching this.
Prioritization of kernel amongst different devices. By default we assume priority is 0. The higher the priority the better. By default (i.e. if this is not set), we prefer GPU kernels over CPU.
Used in:
Name of an attr from the Op.
A list of values that this kernel supports for this attr. Like OpDef.AttrDef.allowed_values, except for kernels instead of Ops.
A collection of KernelDefs
Used in:
Process-unique step id.
Name of the operation making the allocation.
Number of bytes in the allocation.
Address of the allocation.
Id of the tensor buffer being allocated, used to match to a corresponding deallocation.
Name of the allocator used.
Process-unique step id.
Name of the operation making the deallocation.
Id of the tensor buffer being deallocated, used to match to a corresponding allocation.
Name of the allocator used.
True if the deallocation is queued and will be performed later, e.g. for GPU lazy freeing of buffers.
Process-unique step id.
Handle describing the feeds and fetches of the step.
Process-unique step id.
Name of the kernel making the allocation as set in GraphDef, e.g., "affine2/weights/Assign".
Allocated tensor details.
Id of the tensor buffer being deallocated, used to match to a corresponding allocation.
Name of the allocator used.
Process-unique step id.
Name of the kernel producing an output as set in GraphDef, e.g., "affine2/weights/Assign".
Index of the output being set.
Output tensor details.
For memory tracking.
Used in:
A list of attr names and their values. The whole list is attached with a string name. E.g., MatMul[T=float].
Used in:
Used in:
The name given to this operator. Used for naming inputs, logging, visualization, etc. Unique within a single GraphDef. Must match the regexp "[A-Za-z0-9.][A-Za-z0-9_>./]*".
The operation name. There may be custom parameters in attrs. Op names starting with an underscore are reserved for internal use.
Each input is "node:src_output" with "node" being a string name and "src_output" indicating which output tensor to use from "node". If "src_output" is 0 the ":0" suffix can be omitted. Regular inputs may optionally be followed by control inputs that have the format "^node".
A (possibly partial) specification for the device on which this node should be placed. The expected syntax for this string is as follows:
  DEVICE_SPEC ::= PARTIAL_SPEC
  PARTIAL_SPEC ::= ("/" CONSTRAINT) *
  CONSTRAINT ::= ("job:" JOB_NAME)
               | ("replica:" [1-9][0-9]*)
               | ("task:" [1-9][0-9]*)
               | ("device:" [A-Za-z]* ":" ([1-9][0-9]* | "*") )
Valid values for this string include:
  * "/job:worker/replica:0/task:1/device:GPU:3"  (full specification)
  * "/job:worker/device:GPU:3"                   (partial specification)
  * ""                                           (no specification)
If the constraints do not resolve to a single device (or if this field is empty or not present), the runtime will attempt to choose a device automatically.
Operation-specific graph-construction-time configuration. Note that this should include all attrs defined in the corresponding OpDef, including those with a value matching the default -- this allows the default to change and makes NodeDefs easier to interpret on their own. However, if an attr with a default is not specified in this list, the default will be used. The "names" (keys) must match the regexp "[a-z][a-z0-9_]+" (and one of the names from the corresponding OpDef's attr field). The values must have a type matching the corresponding OpDef attr's type field. TODO(josh11b): Add some examples here showing best practices.
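For illustration, here is a hypothetical NodeDef in text format showing the input, device, and attr conventions above. The node and attr values are illustrative (attrs as a MatMul-like op would carry them), and the generated-module path is an assumption:

    from google.protobuf import text_format
    from tensorflow.core.framework import node_def_pb2  # assumed generated module path

    node = text_format.Parse("""
      name: "affine2/MatMul"
      op: "MatMul"
      input: "affine2/weights/read"          # output 0 of that node (":0" omitted)
      input: "inputs:1"                      # output 1 of node "inputs"
      input: "^init"                         # control input
      device: "/job:worker/device:GPU:3"     # partial device specification
      attr { key: "T"           value { type: DT_FLOAT } }
      attr { key: "transpose_a" value { b: false } }
      attr { key: "transpose_b" value { b: false } }
    """, node_def_pb2.NodeDef())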
This stores debug information associated with the node.
Used in:
Opaque string inserted into error messages created by the runtime. This is intended to store the list of names of the nodes from the original graph from which this node was derived. For example, if this node, say C, was the result of a fusion of 2 nodes A and B, then 'original_node' would be {A, B}. This information can be used to map errors originating at the current node to some top level source code.
This is intended to store the list of names of the functions from the original graph from which this node was derived. For example, if this node, say C, was the result of a fusion of node A in function FA and node B in function FB, then `original_funcs` would be {FA, FB}. If the node is in the top level graph, the `original_func` is empty. This information, together with `original_node_names`, can be used to map errors originating at the current node to some top level source code.
Time/size stats recorded for a single execution of a graph node.
Used in:
TODO(tucker): Use some more compact form of node identity than the full string name. Either all processes should agree on a global id (cost_id?) for each node, or we should use a hash of the name.
Output sizes recorded for a single execution of a graph node.
Used in:
Defines an operation. A NodeDef in a GraphDef specifies an Op by using the "op" field which should match the name of an OpDef.
Used in:
Op names starting with an underscore are reserved for internal use. Names should be CamelCase and match the regexp "[A-Z][a-zA-Z0-9>_]*".
Description of the input(s).
Description of the output(s).
Named control outputs for this operation. Useful only for composite operations (i.e. functions) which want to name different control outputs.
Optional deprecation based on GraphDef versions.
One-line human-readable description of what the Op does.
Additional, longer human-readable description of what the Op does.
True if the operation is commutative ("op(a,b) == op(b,a)" for all inputs)
If is_aggregate is true, then this operation accepts N >= 2 inputs and produces 1 output all of the same type. Should be associative and commutative, and produce output with the same shape as the input. The optimizer may replace an aggregate op taking input from multiple devices with a tree of aggregate ops that aggregate locally within each device (and possibly within groups of nearby devices) before communicating. TODO(josh11b): Implement that optimization.
for things like add
Ops are marked as stateful if their behavior depends on some state beyond their input tensors (e.g. variable reading op) or if they have a side-effect (e.g. printing or asserting ops). Equivalently, stateless ops must always produce the same output for the same input and have no side-effects. By default Ops may be moved between devices. Stateful ops should either not be moved, or should only be moved if that state can also be moved (e.g. via some sort of save / restore). Stateful ops are guaranteed to never be optimized away by Common Subexpression Elimination (CSE).
for things like variables, queue
By default, all inputs to an Op must be initialized Tensors. Ops that may initialize tensors for the first time should set this field to true, to allow the Op to take an uninitialized Tensor as input.
for Assign, etc.
For describing inputs and outputs.
Used in:
Name for the input/output. Should match the regexp "[a-z][a-z0-9_]*".
Human readable description.
Describes the type of one or more tensors that are accepted/produced by this input/output arg. The only legal combinations are:
  * For a single tensor: either the "type" field is set or the "type_attr" field is set to the name of an attr with type "type".
  * For a sequence of tensors with the same type: the "number_attr" field will be set to the name of an attr with type "int", and either the "type" or "type_attr" field will be set as for single tensors.
  * For a sequence of tensors, the "type_list_attr" field will be set to the name of an attr with type "list(type)".
if specified, attr must have type "type"
if specified, attr must have type "int"
If specified, attr must have type "list(type)", and none of type, type_attr, and number_attr may be specified.
For inputs: if true, the inputs are required to be refs. By default, inputs can be either refs or non-refs. For outputs: if true, outputs are refs, otherwise they are not.
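A sketch of the second combination above (a homogeneous sequence of N tensors) as it would appear in an OpDef, using a hypothetical op; the generated-module path is an assumption:

    from google.protobuf import text_format
    from tensorflow.core.framework import op_def_pb2  # assumed generated module path

    op = text_format.Parse("""
      name: "MyAddN"
      input_arg  { name: "inputs" type_attr: "T" number_attr: "N" }   # N tensors, all of type T
      output_arg { name: "sum"    type_attr: "T" }
      attr { name: "N" type: "int"  has_minimum: true minimum: 1 }
      attr { name: "T" type: "type" }
    """, op_def_pb2.OpDef())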
Description of the graph-construction-time configuration of this Op. That is to say, this describes the attr fields that will be specified in the NodeDef.
Used in:
A descriptive name for the argument. May be used, e.g. by the Python client, as a keyword argument name, and so should match the regexp "[a-z][a-z0-9_]+".
One of the type names from attr_value.proto ("string", "list(string)", "int", etc.).
A reasonable default for this attribute if the user does not supply a value. If not specified, the user must supply a value.
Human-readable description.
For type == "int", this is a minimum value. For "list(___)" types, this is the minimum length.
The set of allowed values. Has type that is the "list" version of the "type" field above (uses the "list" field of AttrValue). If type == "type" or "list(type)" above, then the "type" field of "allowed_values.list" has the set of allowed DataTypes. If type == "string" or "list(string)", then the "s" field of "allowed_values.list" has the set of allowed strings.
Information about version-dependent deprecation of an op
Used in:
First GraphDef version at which the op is disallowed.
Explanation of why it was deprecated and what to use instead.
A collection of OpDefs
For serializing and restoring the state of ReaderBase, see reader_base.h for details.
Protocol buffer representing a handle to a tensorflow resource. Handles are not valid across executions, but can be serialized back and forth from within a single run.
Definition of remote graph
Remote fused graph input node name
Remote fused graph output node name
Executor's name
Optional: Parameters given to the executor
Optional: Default graph input tensor shape used to allocate memory before executing op
Optional: Default graph output tensor shape used to allocate memory before executing op TODO(satok): Remove output tensor shape once shape information is stored in NodeDef
Used in:
Protocol buffer representing a handle to a tensorflow resource. Handles are not valid across executions, but can be serialized back and forth from within a single run.
Used in:
Unique name for the device containing the resource.
Container in which this resource is placed.
Unique name of this resource.
Hash code for the type of the resource. Is only valid in the same device and in the same execution.
For debug-only, the name of the type pointed to by this handle, if available.
Data types and shapes for the underlying resource.
Protocol buffer representing a pair of (data type, tensor shape).
Used in:
Used in:
Name of the full variable of which this is a slice.
Shape of the full variable.
Offset of this variable into the full variable.
Shape of this variable.
For identifying the underlying type of a variant. For variants, the types listed here are a subset of the types in the variant type registry, corresponding to commonly used variants which must occasionally be special-cased.
Invalid/unknown specialized type.
"tensorflow::TensorList" in the variant type registry.
A Summary is a set of named values to be displayed by the visualizer. Summaries are produced regularly during training, as controlled by the "summary_interval_secs" attribute of the training operation. Summaries are also produced at the end of an evaluation.
Set of values for the summary.
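As a quick sketch, a Summary with a single named value can be built through the TF1-compat proto classes; the tag and value here are purely illustrative:

    import tensorflow as tf

    # One named value; TensorBoard groups values by their tag.
    summary = tf.compat.v1.Summary(
        value=[tf.compat.v1.Summary.Value(tag="loss", simple_value=0.25)])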
Used in:
Sample rate of the audio in Hz.
Number of channels of audio.
Length of the audio in frames (samples per channel).
Encoded audio data and its associated RFC 2045 content type (e.g. "audio/wav").
Used in:
Dimensions of the image.
Valid colorspace values are:
  1 - grayscale
  2 - grayscale + alpha
  3 - RGB
  4 - RGBA
  5 - DIGITAL_YUV
  6 - BGRA
Image data in encoded format. All image formats supported by image_codec::CoderUtil can be stored here.
Used in:
This field is deprecated and will not be set.
Tag name for the data. Used by TensorBoard plugins to organize data. Tags are often organized by scope (which contains slashes to convey hierarchy). For example: foo/bar/0
Contains metadata on the summary value such as which plugins may use it. Take note that many summary values may lack a metadata field. This is because the FileWriter only keeps a metadata object on the first summary value with a certain tag for each tag. TensorBoard then remembers which tags are associated with which plugins. This saves space.
Value associated with the tag.
Metadata associated with a series of Summary data
Hint on how plugins should process the data in this series. Supported values include "scalar", "histogram", "image", "audio"
A SummaryMetadata encapsulates information on which plugins are able to make use of a certain summary value.
Used in:
Data that associates a summary with a certain plugin.
Display name for viewing in TensorBoard.
Longform readable description of the summary sequence. Markdown supported.
Class of data stored in this time series. Required for compatibility with TensorBoard's generic data facilities (`DataProvider`, et al.). This value imposes constraints on the dtype and shape of the corresponding tensor values. See `DataClass` docs for details.
Used in:
The name of the plugin this data pertains to.
The content to store for the plugin. The best practice is for this to be a binary serialized protocol buffer.
Used in:
Data type of tensor elements
Shape of the tensor.
Information about the size and allocator used for the data
Protocol buffer representing a tensor.
Used in:
Shape of the tensor. TODO(touts): sort out the 0-rank issues.
Version number. In version 0, if the "repeated xxx" representations contain only one element, that element is repeated to fill the shape. This makes it easy to represent a constant Tensor with a single value.
Serialized raw tensor content from either Tensor::AsProtoTensorContent or memcpy in tensorflow::grpc::EncodeTensorToByteBuffer. This representation can be used for all tensor types. The purpose of this representation is to reduce serialization overhead during RPC call by avoiding serialization of many repeated small items.
DT_HALF, DT_BFLOAT16. Note that since protobuf has no int16 type, we'll have some pointless zero padding for each value here.
DT_FLOAT.
DT_DOUBLE.
DT_INT32, DT_INT16, DT_INT8, DT_UINT8.
DT_STRING
DT_COMPLEX64. scomplex_val(2*i) and scomplex_val(2*i+1) are real and imaginary parts of i-th single precision complex.
DT_INT64
DT_BOOL
DT_COMPLEX128. dcomplex_val(2*i) and dcomplex_val(2*i+1) are real and imaginary parts of i-th double precision complex.
DT_RESOURCE
DT_VARIANT
DT_UINT32
DT_UINT64
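As an example of the raw-content representation described above, tf.make_tensor_proto fills tensor_content with the bytes of a numpy array in this case, rather than the typed *_val fields; the specific array is arbitrary:

    import numpy as np
    import tensorflow as tf

    proto = tf.make_tensor_proto(np.arange(6, dtype=np.float32).reshape(2, 3))
    print(proto.dtype)                  # 1, i.e. DT_FLOAT
    print(proto.tensor_shape)           # dim { size: 2 } dim { size: 3 }
    print(len(proto.tensor_content))    # 24 bytes: 6 float32 values
    print(len(proto.float_val))         # 0 -- the typed field is unused here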
Dimensions of a tensor.
Used in:
Dimensions of the tensor, such as {"input", 30}, {"output", 40} for a 30 x 40 2D tensor. If an entry has size -1, this corresponds to a dimension of unknown size. The names are optional. The order of entries in "dim" matters: It indicates the layout of the values in the tensor in-memory representation. The first entry in "dim" is the outermost dimension used to layout the values, the last entry is the innermost dimension. This matches the in-memory layout of RowMajor Eigen tensors. If "dim.size()" > 0, "unknown_rank" must be false.
If true, the number of dimensions in the shape is unknown. If true, "dim.size()" must be 0.
One dimension of the tensor.
Used in:
Size of the tensor in that dimension. This value must be >= -1, but values of -1 are reserved for "unknown" shapes (values of -1 mean "unknown" dimension). Certain wrappers that work with TensorShapeProto may fail at runtime when deserializing a TensorShapeProto containing a dim value of -1.
Optional name of the tensor dimension.
Can only be interpreted if you know the corresponding TensorShape.
Extent of the slice in all tensor dimensions. Must have one entry for each of the dimension of the tensor that this slice belongs to. The order of sizes is the same as the order of dimensions in the TensorShape.
Extent of the slice in one dimension.
Either both attributes must be set, or neither. When neither is set, the slice covers all data in that dimension.
Used in:
Start index of the slice, starting at 0.
Length of the slice: if the length is missing or -1 we will interpret this as "everything in this dimension". We use "oneof" to preserve information about whether the length is present without changing the serialization format from the prior proto2 version of this proto.
Indicates how a distributed variable will be aggregated.
Used in:
`NONE`: This is the default, giving an error if you use a variable-update operation with multiple replicas.
`SUM`: Add the updates across replicas.
`MEAN`: Take the arithmetic mean ("average") of the updates across replicas.
`ONLY_FIRST_REPLICA`: This is for when every replica is performing the same update, but we only want to perform the update once. Used, e.g., for the global step counter.
Protocol buffer representing a Variable.
Name of the variable tensor.
Name of the tensor holding the variable's initial value.
Name of the initializer op.
Name of the snapshot tensor.
Support for saving variables as slices of a larger variable.
Whether to represent this as a ResourceVariable.
Whether this variable should be trained.
Indicates when a distributed variable will be synced.
Indicates how a distributed variable will be aggregated.
Indicates when a distributed variable will be synced.
Used in:
`AUTO`: Indicates that the synchronization will be determined by the current `DistributionStrategy` (e.g. with `MirroredStrategy` this would be `ON_WRITE`).
`NONE`: Indicates that there will only be one copy of the variable, so there is no need to sync.
`ON_WRITE`: Indicates that the variable will be updated across devices every time it is written.
`ON_READ`: Indicates that the variable will be aggregated across devices when it is read (e.g. when checkpointing or when evaluating an op that uses the variable).
Protocol buffer representing the serialization format of DT_VARIANT tensors.
Used in:
Name of the type of objects being serialized.
Portions of the object that are not Tensors.
Tensors contained within objects being serialized.
Version information for a piece of serialized data. There are different types of versions for each type of data (GraphDef, etc.), but they all have the same common shape described here. Each consumer has "consumer" and "min_producer" versions (specified elsewhere). A consumer is allowed to consume this data if:
  producer >= min_producer
  consumer >= min_consumer
  consumer not in bad_consumers
Used in:
The version of the code that produced this data.
Any consumer below this version is not allowed to consume this data.
Specific consumer versions which are disallowed (e.g. due to bugs).
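The rule above can be written out directly. This sketch assumes a VersionDef-like object carrying the producer, min_consumer, and bad_consumers fields, with the consumer's own version and min_producer supplied from elsewhere as described:

    def can_consume(versions, consumer_version, consumer_min_producer):
        """Returns True if a consumer at `consumer_version` (with the given
        `consumer_min_producer`) may consume data carrying `versions`."""
        return (versions.producer >= consumer_min_producer
                and consumer_version >= versions.min_consumer
                and consumer_version not in versions.bad_consumers)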