Used in:
Total number of bytes requested
Total number of bytes allocated if known
Name of the allocator used
Identifier of the allocated buffer if known
Set if this tensor only has one remaining reference
Address of the allocation.
Used in:
An asset file def for a single file or a set of sharded files with the same name.
Used in:
The tensor to bind the asset filename to.
The filename within an assets directory. Note: does not include the path prefix, i.e. directories. For an asset at /tmp/path/vocab.txt, the filename would be "vocab.txt".
Protocol buffer representing the value for an attr used to configure an Op. Comment indicates the corresponding attr type. Only the field matching the attr type may be filled.
Used in:
"string"
"int"
"float"
"bool"
"type"
"shape"
"tensor"
any "list(...)"
"func" represents a function. func.name is a function's name or a primitive op's name. func.attr.first is the name of an attr defined for that function. func.attr.second is the value for that attr in the instantiation.
This is a placeholder only used in nodes defined inside a function. It indicates the attr value will be supplied when the function is instantiated. For example, suppose there is a node "N" in function "FN". "N" has an attr "A" with value placeholder = "foo". When FN is instantiated with attr "foo" set to "bar", the instantiated node N's attr A will have been given the value "bar".
Used in:
"list(string)"
"list(int)"
"list(float)"
"list(bool)"
"list(type)"
"list(shape)"
"list(tensor)"
Matches DeviceAttributes
Used in:
Device name.
Device type, e.g. 'CPU' or 'GPU'.
Memory capacity in bytes.
The physical description of this device.
Used in:
Each unit test or benchmark in a test or benchmark run provides some set of information. Here we provide some reasonable keys one would expect to see, with optional key/value pairs for things we haven't considered. This BenchmarkEntry should be emitted by each unit test or benchmark reporter.
Used in:
The name of the specific benchmark or test (e.g. BM_AdjustContrast_gpu_B_W_H)
If a benchmark, how many iterations it was run for
Total cpu time used for all iterations (in seconds)
Total wall time used for all iterations (in seconds)
Throughput (in MB/s)
Generic map from result key to value.
Used in:
opt, dbg, etc
CC compiler flags, if known
Bazel compilation options, if known
Describes the metadata related to a checkpointed tensor.
The tensor dtype and shape.
The binary content of the tensor lies in: File "shard_id": bytes [offset, offset + size).
The CRC32C checksum of the tensor bytes.
Iff present, this entry represents a partitioned tensor. The previous fields are interpreted as follows: "dtype", "shape": describe the full tensor. "shard_id", "offset", "size", "crc32c": all IGNORED. This information can be looked up for each slice in its own BundleEntryProto, keyed by each "slice_name".
Special header that is associated with a bundle. TODO(zongheng,zhifengc): maybe in the future, we can add information about which binary produced this checkpoint, timestamp, etc. Sometimes, these can be valuable debugging information. And if needed, they can be used as defensive information ensuring that the reader (binary version) of the checkpoint and the writer (binary version) match within a certain range, etc.
Number of data files in the bundle.
Versioning of the tensor bundle format.
An enum indicating the endianness of the platform that produced this bundle. A bundle can only be read by a platform with matching endianness. Defaults to LITTLE, as most modern platforms are little-endian. Affects the binary tensor data bytes only, not the metadata in protobufs.
Used in:
Used in:
How fast are these cpus?
Additional cpu information. For example, Intel Ivybridge with HyperThreading (24 cores) dL1:32KB dL2:256KB dL3:30MB
What kind of cpu scaling is enabled on the host. Examples include "performance", "ondemand", "conservative", "mixed".
Cache sizes (in bytes), e.g. "L2": 262144 (for 256KB)
Used as request type in: grpc.WorkerService.CleanupAll
A list of container names. If 'container' is not empty, releases resources in the given containers in all devices. If 'container' is empty, releases resources in the default container in all devices.
Used as response type in: grpc.WorkerService.CleanupAll
(message has no fields)
Used as request type in: grpc.WorkerService.CleanupGraph
Used as response type in: grpc.WorkerService.CleanupGraph
(message has no fields)
Used as request type in: grpc.MasterService.CloseSession
REQUIRED: session_handle must be returned by a CreateSession call to the same master service.
Used as response type in: grpc.MasterService.CloseSession
(message has no fields)
Defines a TensorFlow cluster as a set of jobs.
Used in:
The jobs that comprise the cluster.
CollectionDef should cover most collections. To add a user-defined collection, do one of the following:
1. For simple data types, such as string, int, float:
   tf.add_to_collection("your_collection_name", your_simple_value)
   Strings will be stored as bytes_list.
2. For Protobuf types, there are three ways to add them:
   1) tf.add_to_collection("your_collection_name", your_proto.SerializeToString())
      collection_def { key: "user_defined_bytes_collection" value { bytes_list { value: "queue_name: \"test_queue\"\n" } } }
   or 2) tf.add_to_collection("your_collection_name", str(your_proto))
      collection_def { key: "user_defined_string_collection" value { bytes_list { value: "\n\ntest_queue" } } }
   or 3) any_buf = any_pb2.Any()
      tf.add_to_collection("your_collection_name", any_buf.Pack(your_proto))
      collection_def { key: "user_defined_any_collection" value { any_list { value { type_url: "type.googleapis.com/tensorflow.QueueRunnerDef" value: "\n\ntest_queue" } } } }
3. For Python objects, implement to_proto() and from_proto(), and register them in the following manner:
   ops.register_proto_function("your_collection_name", proto_type, to_proto=YourPythonObject.to_proto, from_proto=YourPythonObject.from_proto)
   These functions will be invoked to serialize and de-serialize the collection. For example,
   ops.register_proto_function(ops.GraphKeys.GLOBAL_VARIABLES, proto_type=variable_pb2.VariableDef, to_proto=Variable.to_proto, from_proto=Variable.from_proto)
Used in:
AnyList is used for collecting Any protos.
Used in:
BytesList is used for collecting strings and serialized protobufs. For example: collection_def { key: "trainable_variables" value { bytes_list { value: "\n\017conv1/weights:0\022\024conv1/weights/Assign\032\024conv1/weights/read:0" value: "\n\016conv1/biases:0\022\023conv1/biases/Assign\032\023conv1/biases/read:0" } } }
Used in:
FloatList is used for collecting float values.
Used in:
Int64List is used for collecting int, int64 and long values.
Used in:
NodeList is used for collecting nodes in a graph. For example: collection_def { key: "summaries" value { node_list { value: "input_producer/ScalarSummary:0" value: "shuffle_batch/ScalarSummary:0" value: "ImageSummary:0" } } }
Used in:
Used in:
Hash of intermediate change between hash/changelist and what was tested. Not used if the build is from a commit without modifications.
Protocol buffer representing a CondContext object.
Name of the context.
Name of the pred tensor.
Name of the pivot tensor.
Branch prediction. 0 or 1.
Values and external values in control flow context.
Session configuration parameters. The system picks appropriate values for fields that are not set.
Used in:
Map from device type name (e.g., "CPU" or "GPU") to maximum number of devices of that type to use. If a particular device type is not found in the map, the system picks an appropriate number.
The execution of an individual op (for some op types) can be parallelized on a pool of intra_op_parallelism_threads. 0 means the system picks an appropriate number.
Nodes that perform blocking operations are enqueued on a pool of inter_op_parallelism_threads available in each process. 0 means the system picks an appropriate number. Note that the first Session created in the process sets the number of threads for all future sessions unless use_per_session_threads is true or session_inter_op_thread_pool is configured.
If true, use a new set of threads for this session rather than the global pool of threads. Only supported by direct sessions. If false, use the global threads created by the first session, or the per-session thread pools configured by session_inter_op_thread_pool. This option is deprecated. The same effect can be achieved by setting session_inter_op_thread_pool to have one element, whose num_threads equals inter_op_parallelism_threads.
This option is experimental - it may be replaced with a different mechanism in the future. The intended use is for when some session invocations need to run in a background pool limited to a small number of threads. Configures session thread pools. If this is configured, then RunOptions for a Run call can select the thread pool to use. If a pool's num_threads is 0, then inter_op_parallelism_threads is used.
Assignment of Nodes to Devices is recomputed every placement_period steps until the system warms up (at which point the recomputation typically slows down automatically).
When any filters are present, sessions will ignore all devices that do not match the filters. Each filter can be partially specified, e.g. "/job:ps", "/job:worker/replica:3", etc.
Options that apply to all GPUs.
Whether soft placement is allowed. If allow_soft_placement is true, an op will be placed on CPU if: 1. there is no GPU implementation for the op, or 2. no GPU devices are known or registered, or 3. it needs to be co-located with reftype input(s) which are from CPU.
Whether device placements should be logged.
Options that apply to all graphs.
Global timeout for all blocking operations in this session. If non-zero, and not overridden on a per-operation basis, this value will be used as the deadline for all blocking operations.
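As an illustration of how these session options are typically set, here is a minimal sketch assuming the TF 1.x Python API (all values shown are arbitrary examples, not recommendations):

import tensorflow as tf

# Sketch: build a ConfigProto and pass it to a Session.
config = tf.ConfigProto(
    device_count={"CPU": 4, "GPU": 1},      # cap on devices of each type
    intra_op_parallelism_threads=0,         # 0 lets the system pick
    inter_op_parallelism_threads=0,         # 0 lets the system pick
    allow_soft_placement=True,              # fall back to CPU when a GPU kernel is unavailable
    log_device_placement=False)
config.operation_timeout_in_ms = 60000      # global deadline for blocking operations

sess = tf.Session(config=config)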
Used in:
Used in:
The name of the node. Names are globally unique.
The device of the node. Can be empty if the node is mapped to the default partition or partitioning hasn't been run yet.
The id of the node. Node ids are only unique inside a partition.
Temporary memory used by this node.
Estimate of the computational cost of this node.
If true, the output is permanent: it can't be discarded, because this node is part of the "final output". Nodes may depend on final nodes.
Ids of the control inputs for this node.
Inputs of this node. They must be executed before this node can be executed. An input is a particular output of another node, specified by the node id and the output index.
Used in:
Outputs of this node.
Used in:
If >= 0, the output is an alias of an input. Note that an alias input may itself be an alias. The algorithm will therefore need to follow those pointers.
Used as request type in: grpc.MasterService.CreateSession
The initial graph definition.
Configuration options.
Used as response type in: grpc.MasterService.CreateSession
The session handle to be used in subsequent calls for the created session. The client must arrange to call CloseSession with this returned session handle to close the session.
The initial version number for the graph, to be used in the next call to ExtendSession.
Used in:
Not a legal value for DataType. Used to indicate a DataType field has not been set.
Data types that all computation devices are expected to be capable of supporting.
Single-precision complex
Quantized int8
Quantized uint8
Quantized int32
Float32 truncated to 16 bits. Only for cast ops.
Quantized int16
Quantized uint16
Double-precision complex
Do not use! These are only for parameters. Every enum above should have a corresponding value below (verified by types_test).
EXPERIMENTAL. Option for watching a node.
Used in:
Name of the node to watch.
Output slot to watch. The semantics of output_slot == -1 is that the node is only watched for completion, but not for any output tensors. See NodeCompletionCallback in debug_gateway.h. TODO(cais): Implement this semantics.
Name(s) of the debugging op(s). One or more probes on a tensor, e.g. {"DebugIdentity", "DebugNanCount"}.
URL(s) for debug target(s). E.g., "file:///foo/tfdbg_dump", "grpc://localhost:11011". Each debug op listed in debug_ops will publish its output tensor (debug signal) to all URLs in debug_urls.
Used as request type in: grpc.WorkerService.DeregisterGraph
REQUIRED: graph_handle must be returned by a RegisterGraph call to the same WorkerService.
TODO(mrry): Optionally add summary stats for the graph.
Used as response type in: grpc.WorkerService.DeregisterGraph
(message has no fields)
Used in:
Fully specified name of the device within a cluster.
String representation of device_type.
Memory capacity of device in bytes.
Platform-specific data about device that may be useful for supporting efficient data transfers.
A device is assigned a global unique number each time it is initialized. "incarnation" should never be 0.
String representation of the physical device that this device maps to.
Used in:
Optional bus locality of device. Default value of 0 means no specific locality. Specific localities are indexed from 1.
Used in:
Used in:
Protocol buffer representing an event that happened during the execution of a Brain model.
Timestamp of the event.
Global step of the event.
An event file was started, with the specified version. This is used to identify the contents of the record IO files easily. Current version is "brain.Event:2". All versions start with "brain.Event:".
An encoded version of a GraphDef.
A summary was generated.
The user output a log message. Not all messages are logged, only ones generated via the Python tensorboard_logging module.
The state of the session which can be used for restarting after crashes.
The metadata returned by running a session.run() call.
An encoded version of a MetaGraphDef.
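Event protos are normally written by a summary writer and read back with an event-file iterator rather than constructed by hand. A minimal sketch, assuming the TF 1.x Python API and a hypothetical log directory:

import glob
import tensorflow as tf

# Writing: the FileWriter emits Event records (file_version, graph_def, summary, ...).
writer = tf.summary.FileWriter("/tmp/example_logdir", graph=tf.get_default_graph())
summary = tf.Summary(value=[tf.Summary.Value(tag="loss", simple_value=0.5)])
writer.add_summary(summary, global_step=1)
writer.close()

# Reading: iterate over the Event records in the written file.
event_file = glob.glob("/tmp/example_logdir/events.out.tfevents.*")[0]
for event in tf.train.summary_iterator(event_file):
    print(event.wall_time, event.step, event.WhichOneof("what"))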
Options specific to the execution of a single step.
Used in:
Used as request type in: grpc.MasterService.ExtendSession
REQUIRED: session_handle must be returned by a CreateSession call to the same master service.
REQUIRED: The nodes to be added to the session's graph. If any node has the same name as an existing node, the operation will fail with ILLEGAL_ARGUMENT.
REQUIRED: The version number of the graph to be extended. This will be tested against the current server-side version number, and the operation will fail with FAILED_PRECONDITION if they do not match.
TODO(mrry): Return something about the operation?
Used as response type in: grpc.MasterService.ExtendSession
The new version number for the extended graph, to be used in the next call to ExtendSession.
A function can be instantiated when the runtime can bind every attr with a value. When a GraphDef has a call to a function, it must have binding for every attr defined in the signature. TODO(zhifengc): * device spec, etc.
Used in:
The definition of the function's name, arguments, return values, attrs etc.
Attributes specific to this function definition.
The body of the function.
function.node.ret[*] are unique.
The body of the function. Unlike the NodeDefs in a GraphDef, attrs may have values of type `placeholder` and the `input` field uses the "output" format above.
A mapping from the output arg names from `signature` to the outputs from `node_def` that should be returned by the function.
A node is a multi-value assignment: (ret[0], ret[1], ...) = func(arg[0], arg[1], ...) By convention, "func" is resolved by consulting with a user-defined library first. If not resolved, "func" is assumed to be a builtin op.
Used in:
This node produces multiple outputs. They are named ret[0], ret[1], ..., etc. REQUIRES: function.node.ret[*] are unique across all nodes. REQUIRES: ret.size == func/op def's number of output args.
The op/function name.
Arguments passed to this func/op. arg[i] must be either one of function.signature.input_args[*].name or one of function.node[*].ret[*]. REQUIRES: arg.size == func/op def's number of input args.
Control dependencies. dep[i] must be one of function.node[*].ret[*] or one of function.signature.input_args[*].name.
Attrs. 'attr' maps names defined by 'func's attr defs to attr values. attr values may have placeholders which are substituted recursively by concrete values when this node is instantiated. These placeholders must name an attr listed in the FunctionDef's signature.
A library is a set of named functions.
Used in:
e.g. "Tesla K40c"
Final entry in output of "nvidia-smi -L"
e.g. "0000:04:00.0"
Used in:
A value between 0 and 1 that indicates what fraction of the available GPU memory to pre-allocate for each process. 1 means to pre-allocate all of the GPU memory, 0.5 means the process allocates ~50% of the available GPU memory.
The type of GPU allocation strategy to use. Allowed values: "": The empty string (default) uses a system-chosen default which may change over time. "BFC": A "Best-fit with coalescing" algorithm, simplified from a version of dlmalloc.
Delay deletion of up to this many bytes to reduce the number of interactions with gpu driver code. If 0, the system chooses a reasonable default (several MBs).
If true, the allocator does not pre-allocate the entire specified GPU memory region, instead starting small and growing as needed.
A comma-separated list of GPU ids that determines the 'visible' to 'virtual' mapping of GPU devices. For example, if TensorFlow can see 8 GPU devices in the process, and one wanted to map visible GPU devices 5 and 3 as "/gpu:0", and "/gpu:1", then one would specify this field as "5,3". This field is similar in spirit to the CUDA_VISIBLE_DEVICES environment variable, except it applies to the visible GPU devices in the process. NOTE: The GPU driver provides the process with the visible GPUs in an order which is not guaranteed to have any correlation to the *physical* GPU id in the machine. This field is used for remapping "visible" to "virtual", which means this operates only after the process starts. Users are required to use vendor specific mechanisms (e.g., CUDA_VISIBLE_DEVICES) to control the physical to visible device mapping prior to invoking TensorFlow.
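A hedged sketch of setting these GPU options from the TF 1.x Python API (the fraction, flag, and device list below are arbitrary examples):

import tensorflow as tf

gpu_options = tf.GPUOptions(
    per_process_gpu_memory_fraction=0.5,  # pre-allocate ~50% of each GPU's memory
    allow_growth=True,                    # start small and grow the region as needed
    visible_device_list="5,3")            # visible GPUs 5 and 3 become /gpu:0 and /gpu:1
sess = tf.Session(config=tf.ConfigProto(gpu_options=gpu_options))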
Used as request type in: grpc.WorkerService.GetStatus
(message has no fields)
Used as response type in: grpc.WorkerService.GetStatus
GradientDef defines the gradient function of a function defined in a function library. A gradient function g (specified by gradient_func) for a function f (specified by function_name) must satisfy the following: the function 'f' must be a numerical function that takes N inputs and produces M outputs. Its gradient function 'g' is a function that takes N + M inputs and produces N outputs. I.e., if we have (y1, y2, ..., y_M) = f(x1, x2, ..., x_N), then g is (dL/dx1, dL/dx2, ..., dL/dx_N) = g(x1, x2, ..., x_N, dL/dy1, dL/dy2, ..., dL/dy_M), where L is a scalar-valued function of (x1, x2, ..., x_N) (e.g., the loss function), and dL/dx_i is the partial derivative of L with respect to x_i.
Used in:
The function name.
The gradient function's name.
Represents the graph of operations
Used in:
Compatibility versions of the graph. See core/public/version.h for version history. The GraphDef version is distinct from the TensorFlow version, and each release of TensorFlow will support a range of GraphDef versions.
Deprecated single version field; use versions above instead. Since all GraphDef changes before "versions" was introduced were forward compatible, this field is entirely ignored.
EXPERIMENTAL. DO NOT USE OR DEPEND ON THIS YET. "library" provides user-defined functions. Naming: * library.function.name are in a flat namespace. NOTE: We may need to change it to be hierarchical to support different orgs. E.g., { "/google/nn", { ... }}, { "/google/vision", { ... }} { "/org_foo/module_bar", { ... }} map<string, FunctionDefLib> named_lib; * If node[i].op is the name of one function in "library", node[i] is treated as a function call. Otherwise, node[i].op must be a primitive operation supported by the runtime. Function call semantics: * The callee may start execution as soon as some of its inputs are ready. The caller may want to use the Tuple() mechanism to ensure all inputs are ready at the same time. * The consumer of return values may start executing as soon as the return values the consumer depends on are ready. The consumer may want to use the Tuple() mechanism to ensure it does not start until all return values of the callee function are ready.
Used in:
If true, use control flow to schedule the activation of Recv nodes. (Currently ignored.)
Options controlling how graph is optimized.
The number of steps to run before returning a cost model detailing the memory usage and performance of each node of the graph. 0 means no cost model.
The number of steps to skip before collecting statistics for the cost model.
Annotate each Node with Op output shape data, to the extent it can be statically inferred.
Only place the subgraphs that are run, rather than the entire graph. This is useful for interactive graph building, where one might produce graphs that cannot be placed during the debugging process. In particular, it allows the client to continue work in a session after adding a node to a graph whose placement constraints are unsatisfiable.
If true, transfer float values between processes as bfloat16.
If > 0, record a timeline every this many steps. EXPERIMENTAL: This currently has no effect in MasterSession.
Serialization format for histogram module in core/lib/histogram/histogram.h
Used in:
Parallel arrays encoding the bucket boundaries and the bucket values. bucket(i) is the count for the bucket i. The range for a bucket is: i == 0: -DBL_MAX .. bucket_limit(0) i != 0: bucket_limit(i-1) .. bucket_limit(i)
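To make the bucket convention concrete, here is a small sketch (plain Python, with a hypothetical helper; it assumes the upper bound of each bucket is inclusive, as in TensorFlow's histogram implementation):

import bisect

def bucket_index(value, bucket_limit):
    # bucket_limit is the HistogramProto.bucket_limit array (parallel to bucket).
    # Bucket 0 covers (-DBL_MAX, bucket_limit[0]]; bucket i covers
    # (bucket_limit[i-1], bucket_limit[i]].
    return bisect.bisect_left(bucket_limit, value)

limits = [0.0, 1.0, 10.0]
assert bucket_index(-5.0, limits) == 0   # falls in (-DBL_MAX, 0.0]
assert bucket_index(0.5, limits) == 1    # falls in (0.0, 1.0]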
Defines a single job in a TensorFlow cluster.
Used in:
The name of this job.
Mapping from task ID to "hostname:port" string. If the `name` field contains "worker", and the `tasks` map contains a mapping from 7 to "example.org:2222", then the device prefix "/job:worker/task:7" will be assigned to "example.org:2222". NOTE(mrry): Currently, only a dense task ID space starting at 0 is supported.
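In the Python API this mapping is usually expressed through tf.train.ClusterSpec, which serializes to a ClusterDef containing one JobDef per job; a sketch (the host:port strings are placeholders):

import tensorflow as tf

cluster_spec = tf.train.ClusterSpec({
    "ps": ["ps0.example.org:2222"],
    "worker": {0: "worker0.example.org:2222",
               7: "example.org:2222"},        # task 7 -> /job:worker/task:7
})
cluster_def = cluster_spec.as_cluster_def()   # a ClusterDef proto with two JobDefs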
Must match the name of an Op.
Type of device this kernel runs on.
Names of the Op's input_/output_args that reside in host memory instead of device memory.
This allows experimental kernels to be registered for an op that won't be used unless the user specifies a "_kernel" attr with value matching this.
Used in:
Name of an attr from the Op.
A list of values that this kernel supports for this attr. Like OpDef.AttrDef.allowed_values, except for kernels instead of Ops.
Used in:
Used as request type in: grpc.MasterService.ListDevices
(message has no fields)
Used as response type in: grpc.MasterService.ListDevices
Protocol buffer used for logging messages to the events file.
Used in:
Used in:
Out-of-band request to begin or end logging, or to retrieve logs for particular steps.
Used as request type in: grpc.WorkerService.Logging
If true, RPC logging will be activated.
If true, discard any saved logging data (for all steps).
When set, requests all saved log data pertaining to the step. Any log data retrieved is eliminated from the store and cannot be retrieved again.
Used as response type in: grpc.WorkerService.Logging
Used in:
Host name of machine that ran the benchmark.
Unique serial number of the machine.
Additional platform information.
CPU Information.
Other devices that are attached and relevant (e.g. GPUInfo).
Devices accessible to the test (e.g. as given by list_local_devices).
A directory of regions in a memmapped file.
A message that describes one region of memmapped file.
Used in:
Used in:
Total virtual memory in bytes
Immediately available memory in bytes
Process-unique step id.
Name of the operation making the allocation.
Number of bytes in the allocation.
Address of the allocation.
Id of the tensor buffer being allocated, used to match to a corresponding deallocation.
Name of the allocator used.
Process-unique step id.
Name of the operation making the deallocation.
Id of the tensor buffer being deallocated, used to match to a corresponding allocation.
Name of the allocator used.
True if the deallocation is queued and will be performed later, e.g. for GPU lazy freeing of buffers.
Process-unique step id.
Handle describing the feeds and fetches of the step.
Process-unique step id.
Name of the kernel making the allocation as set in GraphDef, e.g., "affine2/weights/Assign".
Allocated tensor details.
Id of the tensor buffer being deallocated, used to match to a corresponding allocation.
Name of the allocator used.
Process-unique step id.
Name of the kernel producing an output as set in GraphDef, e.g., "affine2/weights/Assign".
Index of the output being set.
Output tensor details.
NOTE: This protocol buffer is evolving, and will go through revisions in the coming months. Protocol buffer containing the following, which are necessary to restart training or run inference. It can be used to serialize/de-serialize memory objects necessary for running computation in a graph when crossing the process boundary. It can be used for long-term storage of graphs, cross-language execution of graphs, etc. MetaInfoDef GraphDef SaverDef CollectionDef TensorInfo SignatureDef
Used in:
GraphDef.
SaverDef.
collection_def: Map from collection name to collections. See CollectionDef section for details.
signature_def: Map from user supplied key for a signature to a single SignatureDef.
Asset file def to be used with the defined graph.
Meta information regarding the graph to be exported. To be used by users of this protocol buffer to encode information regarding their meta graph.
Used in:
Version string. Can be the name of the model and revision, steps this model has been trained to, etc.
A copy of the OpDefs used by the producer of this graph_def. Descriptions and Ops not used in graph_def are stripped out.
A serialized protobuf. Can be the time this meta graph is created, or modified, or name of the model.
User supplied tag(s) on the meta_graph and included graph_def. MetaGraphDefs should be tagged with their capabilities or use-cases. Examples: "train", "serve", "gpu", "tpu", etc. These tags enable loaders to access the MetaGraph(s) appropriate for a specific use-case or runtime environment.
A list of attr names and their values. The whole list is attached with a string name. E.g., MatMul[T=float].
Used in:
A pair of tensor name and tensor values.
Used in:
The name of the named tensor.
The value of the named tensor.
A pair of tensor name and tensor values.
Used in:
Name of the tensor.
The client can populate a TensorProto using a tensorflow::Tensor, or directly using the protobuf field accessors. The client specifies whether the returned tensor values should be filled into the typed tensor fields (float_val, int_val, etc.) or encoded in a compact form in tensor.tensor_content.
Used in:
The name given to this operator. Used for naming inputs, logging, visualization, etc. Unique within a single GraphDef. Must match the regexp "[A-Za-z0-9.][A-Za-z0-9_./]*".
The operation name. There may be custom parameters in attrs. Op names starting with an underscore are reserved for internal use.
Each input is "node:src_output" with "node" being a string name and "src_output" indicating which output tensor to use from "node". If "src_output" is 0 the ":0" suffix can be omitted. Regular inputs may optionally be followed by control inputs that have the format "^node".
A (possibly partial) specification for the device on which this node should be placed. The expected syntax for this string is as follows: DEVICE_SPEC ::= COLOCATED_NODE | PARTIAL_SPEC COLOCATED_NODE ::= "@" NODE_NAME // See NodeDef.name above. PARTIAL_SPEC ::= ("/" CONSTRAINT) * CONSTRAINT ::= ("job:" JOB_NAME) | ("replica:" [1-9][0-9]*) | ("task:" [1-9][0-9]*) | ( ("gpu" | "cpu") ":" ([1-9][0-9]* | "*") ) Valid values for this string include: * "@other/node" (colocate with "other/node") * "/job:worker/replica:0/task:1/gpu:3" (full specification) * "/job:worker/gpu:3" (partial specification) * "" (no specification) If the constraints do not resolve to a single device (or if this field is empty or not present), the runtime will attempt to choose a device automatically.
Operation-specific graph-construction-time configuration. Note that this should include all attrs defined in the corresponding OpDef, including those with a value matching the default -- this allows the default to change and makes NodeDefs easier to interpret on their own. However, if an attr with a default is not specified in this list, the default will be used. The "names" (keys) must match the regexp "[a-z][a-z0-9_]+" (and one of the names from the corresponding OpDef's attr field). The values must have a type matching the corresponding OpDef attr's type field. TODO(josh11b): Add some examples here showing best practices.
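To see these NodeDef fields on a concrete graph, one can walk a serialized GraphDef from Python (a sketch; the small graph below is a made-up example):

import tensorflow as tf

a = tf.constant([1.0, 2.0], name="a")
b = tf.multiply(a, 2.0, name="b")

graph_def = tf.get_default_graph().as_graph_def()
for node in graph_def.node:          # each entry is a NodeDef
    # node.attr maps attr names (e.g. "dtype", "value", "T") to AttrValue protos.
    print(node.name, node.op, list(node.input), node.device)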
Time/size stats recorded for a single execution of a graph node.
Used in:
TODO(tucker): Use some more compact form of node identity than the full string name. Either all processes should agree on a global id (cost_id?) for each node, or we should use a hash of the name.
Output sizes recorded for a single execution of a graph node.
Used in:
Defines an operation. A NodeDef in a GraphDef specifies an Op by using the "op" field which should match the name of a OpDef.
Used in:
Op names starting with an underscore are reserved for internal use. Names should be CamelCase and match the regexp "[A-Z][a-zA-Z0-9_]*".
Description of the input(s).
Description of the output(s).
Optional deprecation based on GraphDef versions.
One-line human-readable description of what the Op does.
Additional, longer human-readable description of what the Op does.
True if the operation is commutative ("op(a,b) == op(b,a)" for all inputs)
If is_aggregate is true, then this operation accepts N >= 2 inputs and produces 1 output all of the same type. Should be associative and commutative, and produce output with the same shape as the input. The optimizer may replace an aggregate op taking input from multiple devices with a tree of aggregate ops that aggregate locally within each device (and possibly within groups of nearby devices) before communicating. TODO(josh11b): Implement that optimization.
for things like add
By default Ops may be moved between devices. Stateful ops should either not be moved, or should only be moved if that state can also be moved (e.g. via some sort of save / restore). Stateful ops are guaranteed to never be optimized away by Common Subexpression Elimination (CSE).
for things like variables, queue
By default, all inputs to an Op must be initialized Tensors. Ops that may initialize tensors for the first time should set this field to true, to allow the Op to take an uninitialized Tensor as input.
for Assign, etc.
For describing inputs and outputs.
Used in:
Name for the input/output. Should match the regexp "[a-z][a-z0-9_]*".
Human readable description.
Describes the type of one or more tensors that are accepted/produced by this input/output arg. The only legal combinations are: * For a single tensor: either the "type" field is set or the "type_attr" field is set to the name of an attr with type "type". * For a sequence of tensors with the same type: the "number_attr" field will be set to the name of an attr with type "int", and either the "type" or "type_attr" field will be set as for single tensors. * For a sequence of tensors, the "type_list_attr" field will be set to the name of an attr with type "list(type)".
if specified, attr must have type "type"
if specified, attr must have type "int"
If specified, attr must have type "list(type)", and none of type, type_attr, and number_attr may be specified.
For inputs: if true, the inputs are required to be refs. By default, inputs can be either refs or non-refs. For outputs: if true, outputs are refs, otherwise they are not.
Description of the graph-construction-time configuration of this Op. That is to say, this describes the attr fields that will be specified in the NodeDef.
Used in:
A descriptive name for the argument. May be used, e.g. by the Python client, as a keyword argument name, and so should match the regexp "[a-z][a-z0-9_]+".
One of the type names from attr_value.proto ("string", "list(string)", "int", etc.).
A reasonable default for this attribute if the user does not supply a value. If not specified, the user must supply a value.
Human-readable description.
For type == "int", this is a minimum value. For "list(___)" types, this is the minimum length.
The set of allowed values. Has type that is the "list" version of the "type" field above (uses the "list" field of AttrValue). If type == "type" or "list(type)" above, then the "type" field of "allowed_values.list" has the set of allowed DataTypes. If type == "string" or "list(string)", then the "s" field of "allowed_values.list" has the set of allowed strings.
Information about version-dependent deprecation of an op
Used in:
First GraphDef version at which the op is disallowed.
Explanation of why it was deprecated and what to use instead.
A collection of OpDefs
Used in:
Options passed to the graph optimizer
Used in:
If true, optimize the graph using common subexpression elimination.
If true, perform constant folding optimization on the graph.
If true, perform function inlining on the graph.
Optimization level
Used in:
L1 is the default level. Optimization performed at L1: 1. Common subexpression elimination 2. Constant folding
No optimizations
Used as request type in: grpc.MasterService.PartialRunSetup
REQUIRED: session_handle must be returned by a CreateSession call to the same master service.
Tensors to be fed in future steps.
Fetches. A list of tensor names. The caller expects a tensor to be returned for each fetch[i] (see RunStepResponse.tensor), for corresponding partial RunStepRequests. The order of specified fetches does not change the execution order.
Target Nodes. A list of node names. The named nodes will be run in future steps, but their outputs will not be fetched.
Used as response type in: grpc.MasterService.PartialRunSetup
The unique handle corresponding to the ongoing partial run call setup by the invocation to PartialRunSetup. This handle may be passed to RunStepRequest to send and receive tensors for this partial run.
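On the client side this setup/handle pair backs the partial-run API; a sketch of how the handle flows through subsequent RunStep calls (TF 1.x Session API assumed; the tensors are placeholders):

import tensorflow as tf

x = tf.placeholder(tf.float32, shape=[])
y = x * 2.0
z = y + 1.0

with tf.Session() as sess:
    # PartialRunSetup: declare all future feeds and fetches up front.
    handle = sess.partial_run_setup(fetches=[y, z], feeds=[x])
    # Each partial_run maps to a RunStep carrying partial_run_handle.
    y_val = sess.partial_run(handle, y, feed_dict={x: 3.0})
    z_val = sess.partial_run(handle, z)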
Used in:
e.g. '64bit'
e.g. 'ELF'
e.g. 'i386'
e.g. '3.13.0-76-generic'
e.g. 'Linux'
e.g. '#120-Ubuntu SMP Mon Jan 18 15:59:10 UTC 2016'
Protocol buffer representing a QueueRunner.
Queue name.
A list of enqueue operations.
The operation to run to close the queue.
The operation to run to cancel the queue.
A list of exception types considered to signal a safely closed queue if raised during enqueue operations.
For serializing and restoring the state of ReaderBase, see reader_base.h for details.
Used as request type in: grpc.WorkerService.RecvTensor
The step in which the tensor will be produced. REQUIRED: This must eventually correspond to the `step_id` passed into a RunGraph call on the same WorkerService.
A key that identifies the tensor to be received.
If true, use an out-of-band DMA mechanism to transfer the received tensor.
Optional information on client-side device locality.
Optional information on server-side device locality.
Used as response type in: grpc.WorkerService.RecvTensor
The tensor as a proto.
If true, this tensor was the output of a dead node, and the content is invalid.
The time at which tensor was available and started to be returned.
Optional additional information about how to receive the tensor, in the event that `RecvTensorRequest.dma_ok` was true.
Used as request type in: grpc.WorkerService.RegisterGraph
Subgraphs are scoped within one session.
"graph_def" has the subgraph of nodes for this worker, with each node having its device_name filled in.
True iff the graph (before partitioning) contains control flow nodes. As of 01/11/2015, this is no longer set by clients.
Configuration options for the session in which this graph was created.
Used as response type in: grpc.WorkerService.RegisterGraph
If the registration succeeds, returns an opaque graph_handle to the master. The master calls RunGraph with graph_handle to compute different steps.
Used as request type in: grpc.MasterService.Reset
A list of container names, which may be empty. If 'container' is not empty, releases resources in the given containers in all devices. If 'container' is empty, releases resources in the default container in all devices.
Used as response type in: grpc.MasterService.Reset
(message has no fields)
Protocol buffer representing a handle to a tensorflow resource. Handles are not valid across executions, but can be serialized back and forth from within a single run.
Used in:
Unique name for the device containing the resource.
Container in which this resource is placed.
Unique name of this resource.
Hash code for the type of the resource. It is only valid in the same device and in the same execution.
For debug-only, the name of the type pointed to by this handle, if available.
Run-specific items such as arguments to the test / benchmark.
Used in:
Used as request type in: grpc.WorkerService.RunGraph
REQUIRED: graph_handle must be returned by a RegisterGraph call to the same WorkerService.
A unique ID to distinguish different runs of the same graph. The master generates a global unique `step_id` to distinguish different runs of the graph computation. Subgraphs communicate (e.g., send/recv ops) with each other using `step_id` to distinguish tensors generated by different runs.
Options for this step.
Runs the graph. Sends the tensors in "send" into the graph before the run and fetches the keys into `RunGraphResponse.recv` after the run.
True if the RunGraphRequest is a partial run request.
True if this is the last partial run request in a sequence of requests.
Used as response type in: grpc.WorkerService.RunGraph
A list of tensors corresponding to those requested by `RunGraphRequest.recv_key`.
If the request asked for execution stats or cost graph, these are returned here.
EXPERIMENTAL. Metadata output (i.e., non-Tensor) for a single Run() call.
Used in:
Statistics traced for this step. Populated if tracing is turned on via the "RunOptions" proto. EXPERIMENTAL: The format and set of events may change in future versions.
The cost graph for the computation defined by the run call.
Graphs of the partitions executed by executors.
EXPERIMENTAL. Options for a single Run() call.
Used in:
Time to wait for operation to complete in milliseconds.
The thread pool to use, if session_inter_op_thread_pool is configured.
Debugging options
Whether the partition graph(s) executed by the executor(s) should be output via RunMetadata.
TODO(pbar) Turn this into a TraceOptions proto which allows tracing to be controlled in a more orthogonal manner?
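A common use of these options is to request tracing for a single step and then read the result back from RunMetadata; a sketch assuming the TF 1.x Python API:

import tensorflow as tf

x = tf.random_normal([1000, 1000])
y = tf.matmul(x, x)

with tf.Session() as sess:
    run_options = tf.RunOptions(trace_level=tf.RunOptions.FULL_TRACE,
                                timeout_in_ms=60000,
                                output_partition_graphs=True)
    run_metadata = tf.RunMetadata()
    sess.run(y, options=run_options, run_metadata=run_metadata)
    # step_stats and partition_graphs are populated per the RunMetadata fields above.
    print(len(run_metadata.partition_graphs))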
Used in:
Used as request type in: grpc.MasterService.RunStep
REQUIRED: session_handle must be returned by a CreateSession call to the same master service.
Tensors to be fed in the step. Each feed is a named tensor.
Fetches. A list of tensor names. The caller expects a tensor to be returned for each fetch[i] (see RunStepResponse.tensor). The order of specified fetches does not change the execution order.
Target Nodes. A list of node names. The named nodes will be run, but their outputs will not be fetched.
Options for the run call.
Partial run handle (optional). If specified, this will be a partial run execution, run up to the specified fetches.
Used as response type in: grpc.MasterService.RunStep
NOTE: The order of the returned tensors may or may not match the fetch order specified in RunStepRequest.
Returned metadata if requested in the options.
Used in:
Name of the full variable of which this is a slice.
Shape of the full variable.
Offset of this variable into the full variable.
Shape of this variable.
SavedModel is the high level serialization format for TensorFlow Models. See [todo: doc links, similar to session_bundle] for more information.
The schema version of the SavedModel instance. Used for versioning when making future changes to the specification/implementation. Initial value at release will be 1.
One or more MetaGraphs.
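MetaGraphs are typically added to a SavedModel through the builder API; a minimal sketch (TF 1.x API assumed; the export directory and variable are placeholders):

import tensorflow as tf

export_dir = "/tmp/example_saved_model"
with tf.Session(graph=tf.Graph()) as sess:
    v = tf.get_variable("v", shape=[], initializer=tf.zeros_initializer())
    sess.run(tf.global_variables_initializer())

    builder = tf.saved_model.builder.SavedModelBuilder(export_dir)
    # Each call adds one MetaGraphDef to SavedModel.meta_graphs, tagged for later lookup.
    builder.add_meta_graph_and_variables(sess, tags=[tf.saved_model.tag_constants.SERVING])
    builder.save()   # writes saved_model.pb with saved_model_schema_version = 1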
Saved tensor slice: it stores the name of the tensors, the slice, and the raw data.
Used in:
Name of the tensor that this slice belongs to. This must be identical to the name used to encode the key for this record.
Extent of the slice. Must have one entry for each of the dimension of the tensor that this slice belongs to.
The raw data of the slice is stored as a TensorProto. Only raw data are stored (we don't fill in fields such as dtype or tensor_shape).
Metadata describing the set of slices of the same tensor saved in a checkpoint file.
Used in:
Name of the tensor.
Shape of the tensor
Type of the tensor
Explicit list of slices saved in the checkpoint file.
Metadata describing the set of tensor slices saved in a checkpoint file. It is always stored at the beginning of each checkpoint file.
Used in:
Each SavedSliceMeta describes the slices for one tensor.
Compatibility version of this checkpoint. See core/public/version.h for version history.
Each record in a v3 checkpoint file is a serialized SavedTensorSlices message.
This is only present at the first item of each checkpoint file and serves as a table of contents, listing all the tensor slices saved in this file.
This exists in all but the first item of each checkpoint file.
Protocol buffer representing the configuration of a Saver.
Used in:
The name of the tensor in which to specify the filename when saving or restoring a model checkpoint.
The operation to run when saving a model checkpoint.
The operation to run when restoring a model checkpoint.
Maximum number of checkpoints to keep. If 0, no checkpoints are deleted.
Shard the save files, one per device that has Variable nodes.
How often to keep an additional checkpoint. If not specified, only the last "max_to_keep" checkpoints are kept; if specified, in addition to keeping the last "max_to_keep" checkpoints, an additional checkpoint will be kept for every n hours of training.
A version number that identifies a different on-disk checkpoint format. Usually, each subclass of BaseSaverBuilder works with a particular version/format. However, it is possible that the same builder may be upgraded to support a newer checkpoint format in the future.
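These fields are produced by tf.train.Saver, whose constructor arguments map onto them fairly directly; a sketch (TF 1.x API assumed; the variable is a placeholder):

import tensorflow as tf

v = tf.get_variable("v", shape=[10])
saver = tf.train.Saver(max_to_keep=5,
                       keep_checkpoint_every_n_hours=2.0,
                       sharded=True)
saver_def = saver.as_saver_def()   # a SaverDef proto
print(saver_def.filename_tensor_name, saver_def.save_tensor_name,
      saver_def.restore_op_name, saver_def.version)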
Used in:
Internal legacy format.
Current format: tf.Saver() which works with tensorflow::table::Table.
Experimental format under development.
Defines the configuration of a single TensorFlow server.
The cluster of which this server is a member.
The name of the job of which this server is a member. NOTE(mrry): The `cluster` field must contain a `JobDef` with a `name` field that matches this name.
The task index of this server in its job. NOTE: The `cluster` field must contain a `JobDef` with a matching `name` and a mapping in its `tasks` field for this index.
The default configuration for sessions that run on this server.
The protocol to be used by this server. Acceptable values include: "grpc".
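A sketch of defining such a server from Python (TF 1.x API assumed; the host:port strings are placeholders):

import tensorflow as tf

cluster = tf.train.ClusterSpec({"ps": ["ps0.example.org:2222"],
                                "worker": ["worker0.example.org:2222"]})
# The arguments mirror the ServerDef fields: cluster, job_name, task_index,
# default session config, and protocol.
server = tf.train.Server(cluster,
                         job_name="worker",
                         task_index=0,
                         config=tf.ConfigProto(allow_soft_placement=True),
                         protocol="grpc")
server.join()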
Protocol buffer used for logging session state.
Used in:
This checkpoint_path contains both the path and filename.
Used in:
SignatureDef defines the signature of a computation supported by a TensorFlow graph. For example, a model with two loss computations, sharing a single input, might have the following signature_def map. Note that across the two SignatureDefs "loss_A" and "loss_B", the input key, output key, and method_name are identical, and will be used by system(s) that implement or rely upon this particular loss method. The output tensor names differ, demonstrating how different outputs can exist for the same method.
signature_def {
  key: "loss_A"
  value {
    inputs { key: "input" value { name: "input:0" dtype: DT_STRING tensor_shape: ... } }
    outputs { key: "loss_output" value { name: "loss_output_A:0" dtype: DT_FLOAT tensor_shape: ... } }
  }
  ...
  method_name: "some/package/compute_loss"
}
signature_def {
  key: "loss_B"
  value {
    inputs { key: "input" value { name: "input:0" dtype: DT_STRING tensor_shape: ... } }
    outputs { key: "loss_output" value { name: "loss_output_B:0" dtype: DT_FLOAT tensor_shape: ... } }
  }
  ...
  method_name: "some/package/compute_loss"
}
Used in:
Named input parameters.
Named output parameters.
Extensible method_name information enabling third-party users to mark a SignatureDef as supporting a particular method. This enables producers and consumers of SignatureDefs, e.g. a model definition library and a serving library to have a clear hand-off regarding the semantics of a computation. Note that multiple SignatureDefs in a single MetaGraphDef may have the same method_name. This is commonly used to support multi-headed computation, where a single graph computation may return multiple results.
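Signatures like the ones above are usually assembled with the signature_def utilities rather than written as textprotos; a sketch of building one entry (TF 1.x API assumed; the tensor names are placeholders):

import tensorflow as tf

inp = tf.placeholder(tf.string, name="input")
out = tf.identity(inp, name="loss_output_A")

signature = tf.saved_model.signature_def_utils.build_signature_def(
    inputs={"input": tf.saved_model.utils.build_tensor_info(inp)},
    outputs={"loss_output": tf.saved_model.utils.build_tensor_info(out)},
    method_name="some/package/compute_loss")
# `signature` can then be passed in a signature_def_map when exporting a MetaGraph.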
Used in:
A Summary is a set of named values to be displayed by the visualizer. Summaries are produced regularly during training, as controlled by the "summary_interval_secs" attribute of the training operation. Summaries are also produced at the end of an evaluation.
Used in:
Set of values for the summary.
Used in:
Sample rate of the audio in Hz.
Number of channels of audio.
Length of the audio in frames (samples per channel).
Encoded audio data and its associated RFC 2045 content type (e.g. "audio/wav").
Used in:
Dimensions of the image.
Valid colorspace values are: 1 - grayscale, 2 - grayscale + alpha, 3 - RGB, 4 - RGBA, 5 - DIGITAL_YUV, 6 - BGRA.
Image data in encoded format. All image formats supported by image_codec::CoderUtil can be stored here.
Used in:
Name of the node that output this summary; in general, the name of a TensorSummary node. If the node in question has multiple outputs, then a ":\d+" suffix will be appended, like "some_op:13". Might not be set for legacy summaries (i.e. those not using the tensor value field)
Tag name for the data. Will only be used by legacy summaries (i.e. those not using the tensor value field). For legacy summaries, will be used as the title of the graph in the visualizer. Tag is usually "op_name:value_name", where "op_name" itself can have structure to indicate grouping.
Value associated with the tag.
Metadata associated with a series of Summary data
Hint on how plugins should process the data in this series. Supported values include "scalar", "histogram", "image", "audio"
For logging the metadata output for a single session.run() call.
Used in:
Tag name associated with this metadata.
Byte-encoded version of the `RunMetadata` proto in order to allow lazy deserialization.
Used in:
Data type of tensor elements
Shape of the tensor.
Information about the size and allocator used for the data
Information about a Tensor necessary for feeding or retrieval.
Used in:
,Protocol buffer representing a tensor.
Used in:
Shape of the tensor. TODO(touts): sort out the 0-rank issues.
Version number. In version 0, if the "repeated xxx" representations contain only one element, that element is repeated to fill the shape. This makes it easy to represent a constant Tensor with a single value.
Serialized content from Tensor::AsProtoTensorContent(). This representation can be used for all tensor types.
DT_HALF. Note that since protobuf has no int16 type, we'll have some pointless zero padding for each value here.
DT_FLOAT.
DT_DOUBLE.
DT_INT32, DT_INT16, DT_INT8, DT_UINT8.
DT_STRING
DT_COMPLEX64. scomplex_val(2*i) and scomplex_val(2*i+1) are real and imaginary parts of i-th single precision complex.
DT_INT64
DT_BOOL
DT_COMPLEX128. dcomplex_val(2*i) and dcomplex_val(2*i+1) are real and imaginary parts of i-th double precision complex.
DT_RESOURCE
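From Python, TensorProtos in either representation are usually produced and consumed with helper functions rather than by setting the typed fields directly; a sketch:

import numpy as np
import tensorflow as tf

arr = np.arange(6, dtype=np.float32).reshape(2, 3)

# For ndarray inputs this typically uses the compact form:
# dtype + tensor_shape + raw bytes in tensor_content.
proto = tf.make_tensor_proto(arr)
print(proto.dtype, proto.tensor_shape, len(proto.tensor_content))

# And back to an ndarray.
round_tripped = tf.make_ndarray(proto)
assert np.array_equal(arr, round_tripped)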
Dimensions of a tensor.
Used in:
Dimensions of the tensor, such as {"input", 30}, {"output", 40} for a 30 x 40 2D tensor. If an entry has size -1, this corresponds to a dimension of unknown size. The names are optional. The order of entries in "dim" matters: It indicates the layout of the values in the tensor in-memory representation. The first entry in "dim" is the outermost dimension used to layout the values, the last entry is the innermost dimension. This matches the in-memory layout of RowMajor Eigen tensors. If "dim.size()" > 0, "unknown_rank" must be false.
If true, the number of dimensions in the shape is unknown. If true, "dim.size()" must be 0.
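A sketch of how these shape protos round-trip through the Python API (TF 1.x tf.TensorShape assumed):

import tensorflow as tf

shape = tf.TensorShape([None, 30, 40])   # unknown first dimension
proto = shape.as_proto()                 # a TensorShapeProto
for dim in proto.dim:
    print(dim.size, dim.name)            # size -1 encodes the unknown dimension
print(proto.unknown_rank)                # False: rank is known even if sizes are not

print(tf.TensorShape(proto))             # back to (?, 30, 40)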
One dimension of the tensor.
Used in:
Size of the tensor in that dimension. This value must be >= -1, but values of -1 are reserved for "unknown" shapes (values of -1 mean "unknown" dimension). Certain wrappers that work with TensorShapeProto may fail at runtime when deserializing a TensorShapeProto containing a dim value of -1.
Optional name of the tensor dimension.
Can only be interpreted if you know the corresponding TensorShape.
Used in:
Extent of the slice in all tensor dimensions. Must have one entry for each of the dimension of the tensor that this slice belongs to. The order of sizes is the same as the order of dimensions in the TensorShape.
Extent of the slice in one dimension.
Either both or neither of the attributes must be set. When neither attribute is set, it means: all data in that dimension.
Used in:
Start index of the slice, starting at 0.
Length of the slice: if the length is missing or -1 we will interpret this as "everything in this dimension". We use "oneof" to preserve information about whether the length is present without changing the serialization format from the prior proto2 version of this proto.
The output of one benchmark / test run. Each run contains a list of tests or benchmarks, stored as BenchmarkEntry messages. This message should be emitted by the reporter (which runs the test / BM in a subprocess, reads the emitted BenchmarkEntry messages, usually from a serialized JSON file, and finally collects them along with additional information about the test run).
The target of the run, e.g.: //tensorflow/core:kernels_adjust_contrast_op_benchmark_test
The list of tests or benchmarks in this run.
The configuration of the build (compiled opt? with cuda? any copts?)
The commit id (git hash or changelist)
The time the run started (in seconds of UTC time since Unix epoch)
The amount of time the total run took (wall time in seconds)
Machine-specific parameters (Platform and CPU info)
Run-specific parameters (arguments, etc)
Benchmark target identifier.
Used in:
The number of threads in the pool. 0 means the system picks a value based on where this option proto is used (see the declaration of the specific field for more info).
Used in:
Length of the trace to be taken, in seconds.
If true, capture step profile locally in each worker. Currently unimplemented.
If true, capture kernel events from each worker.
If true, capture extended profiling events from TensorFlow process.
If true, capture GPU profiling events locally on each machine. Currently unimplemented.
If true, collect sampled profile events. Currently unimplemented.
Out-of-band request to configure distributed tracing.
Used as request type in: grpc.WorkerService.Tracing
Used as response type in: grpc.WorkerService.Tracing
(message has no fields)
Protocol buffer representing the values in ControlFlowContext.
Used in:
Value names that have been seen in this context.
Value names referenced by but external to this context.
Protocol buffer representing a Variable.
Name of the variable tensor.
Name of the initializer op.
Name of the snapshot tensor.
Support for saving variables as slices of a larger variable.
Version information for a piece of serialized data. There are different types of versions for each type of data (GraphDef, etc.), but they all have the same common shape described here. Each consumer has "consumer" and "min_producer" versions (specified elsewhere). A consumer is allowed to consume this data if: producer >= min_producer, consumer >= min_consumer, and consumer is not in bad_consumers.
Used in:
The version of the code that produced this data.
Any consumer below this version is not allowed to consume this data.
Specific consumer versions which are disallowed (e.g. due to bugs).
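The compatibility rule above amounts to a three-part check; a small sketch (plain Python, hypothetical helper; the consumer supplies its own version and min_producer, as noted above):

def is_compatible(version_def, consumer_version, consumer_min_producer):
    # VersionDef carries producer, min_consumer, and bad_consumers.
    return (version_def.producer >= consumer_min_producer and
            consumer_version >= version_def.min_consumer and
            consumer_version not in version_def.bad_consumers)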
Protocol buffer representing a WhileContext object.
Name of the context.
The number of iterations allowed to run in parallel.
Whether backprop is enabled for this while loop.
Whether GPU-CPU memory swap is enabled for this loop.
Name of the pivot tensor.
Name of the pivot_for_pred tensor.
Name of the pivot_for_body tensor.
List of names for exit tensors.
Values and external values in control flow context.