checker name -> a list of reports from the checker.
Used in:
checker name -> a dict of key-value options.
Used in:
It specifies the Python callstack that creates an op.
Used in:
Used in:
deprecated by file_id.
deprecated by function_id.
deprecated by line_id.
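The deprecated string fields above are replaced by integer ids that are resolved through a separate id-to-string table (the id_to_string map described later in this file). A minimal sketch of that interning scheme, using hypothetical names rather than the real profiler internals:

```python
class StringTable:
    """Interns strings as compact integer ids, mirroring how CodeDef
    traces store file_id/function_id/line_id plus an id_to_string map."""

    def __init__(self):
        self.id_to_string = {}    # id -> string, like the proto's map
        self._string_to_id = {}   # reverse index for interning

    def intern(self, s):
        # Assign a new id on first sight; reuse the existing id otherwise.
        if s not in self._string_to_id:
            new_id = len(self.id_to_string) + 1
            self._string_to_id[s] = new_id
            self.id_to_string[new_id] = s
        return self._string_to_id[s]

table = StringTable()
file_id = table.intern("model.py")    # hypothetical trace file
func_id = table.intern("forward")     # hypothetical trace function
assert table.id_to_string[file_id] == "model.py"
assert table.intern("model.py") == file_id  # interning is stable
```

Storing ids instead of repeated strings keeps traces compact when the same file and function appear in many stack frames.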
Used in:
This is the timestamp when the memory information was tracked.
NOTE: Please don't depend on the following 4 fields yet. Due to TensorFlow internal tracing issues, the numbers can be quite wrong. TODO(xpan): Fix the TensorFlow internal tracing.
Total bytes requested by the op.
Total bytes requested by the op and released before op end.
Total bytes requested by the op and not released after op end.
Total bytes output by the op (not necessarily requested by the op).
The total number of bytes currently allocated by the allocator if >0.
The memory of each output of the operation.
Used in:
Can be larger than 1 if run multiple times in a loop.
The earliest/latest time including scheduling and execution.
device -> vector of {op_start_micros, op_exec_micros} pairs.
accelerator_execs: gpu:id/stream:all -> {op_start_micros, op_exec_micros}. For accelerators, the vector size can be larger than 1 when there are multiple kernel fires or the op runs in tf.while_loop.
cpu_execs: cpu/gpu:id -> {op_start_micros, op_exec_micros}. For cpu, the vector size can be larger than 1 if in tf.while_loop.
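These maps associate a device string with a vector of (op_start_micros, op_exec_micros) pairs; multiple entries appear when a kernel fires more than once or the op runs inside tf.while_loop. A hedged sketch of summing total execution time per device (plain dicts stand in for the proto; names are illustrative, not the real profiler API):

```python
def total_exec_micros(execs):
    """execs: device -> list of (op_start_micros, op_exec_micros) pairs.
    Returns device -> execution micros summed across all runs."""
    return {dev: sum(exec_us for _start_us, exec_us in pairs)
            for dev, pairs in execs.items()}

# Two kernel fires on the same stream, e.g. from tf.while_loop.
accelerator_execs = {"gpu:0/stream:all": [(1000, 40), (2000, 35)]}
cpu_execs = {"cpu:0": [(990, 60)]}

assert total_exec_micros(accelerator_execs) == {"gpu:0/stream:all": 75}
assert total_exec_micros(cpu_execs) == {"cpu:0": 60}
```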
Each entry maps to the memory information of one scheduling of the node. Normally, there will be multiple entries in tf.while_loop.
The allocation and deallocation times and sizes throughout execution.
The devices related to this execution.
Used in:
A node in TensorFlow graph. Used by scope/graph view.
Used in:
op name.
tensor value restored from checkpoint.
op execution time. A node can be defined once but run multiple times in tf.while_loop; the times sum up all the different runs.
Total bytes requested by the op.
Max bytes allocated and being used by the op at a point.
Total bytes requested by the op and not released before end.
Total bytes output by the op (not necessarily allocated by the op).
Number of parameters if available.
Number of float operations.
Device the op is assigned to. Since an op can fire multiple kernel calls, there can be multiple devices.
The following are the aggregated stats from all *accounted* children and the node itself. The actual children depend on the data structure used. In graph view, children are inputs recursively. In scope view, children are nodes under the name scope.
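The aggregation described above can be sketched as a recursive fold over accounted children: in graph view the children are inputs (recursively), in scope view they are nodes under the name scope. A minimal illustration with plain dicts, not the profiler's actual implementation:

```python
def aggregate(node):
    """node: dict with 'requested_bytes', 'exec_micros' and 'children'.
    Returns (total_bytes, total_micros) for the node plus all
    accounted descendants, as the total_* fields would report."""
    total_bytes = node["requested_bytes"]
    total_micros = node["exec_micros"]
    for child in node.get("children", []):
        child_bytes, child_micros = aggregate(child)
        total_bytes += child_bytes
        total_micros += child_micros
    return total_bytes, total_micros

# Hypothetical scope-view tree: one parent with one accounted child.
root = {"requested_bytes": 100, "exec_micros": 10,
        "children": [{"requested_bytes": 50, "exec_micros": 5,
                      "children": []}]}
assert aggregate(root) == (150, 15)
```

Which children are "accounted" is exactly what makes the totals view-dependent; the fold itself is the same in both views.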
shape information, if available. TODO(xpan): Why is this repeated?
Descendants of the graph. The actual descendants depend on the data structure used (scope, graph).
Used in:
A node that groups multiple GraphNodeProto. Depending on the 'view', the semantics of the MultiGraphNodeProto are different: code view: a node groups all TensorFlow graph nodes created by the same Python code. op view: a node groups all TensorFlow graph nodes that are of the type of the op (e.g. MatMul, Conv2D).
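In op view, for example, every MatMul node in the graph lands in one multi-node keyed by its op type. A rough sketch of that grouping, assuming plain dicts rather than the real protos:

```python
from collections import defaultdict

def group_by_op_type(graph_nodes):
    """graph_nodes: list of dicts with 'name' and 'op' keys.
    Returns op type -> list of node names, as op view would group them."""
    groups = defaultdict(list)
    for node in graph_nodes:
        groups[node["op"]].append(node["name"])
    return dict(groups)

# Hypothetical graph nodes.
nodes = [{"name": "dense/MatMul", "op": "MatMul"},
         {"name": "logits/MatMul", "op": "MatMul"},
         {"name": "conv1/Conv2D", "op": "Conv2D"}]

assert group_by_op_type(nodes)["MatMul"] == ["dense/MatMul", "logits/MatMul"]
```

Code view works the same way, except the grouping key is the Python stack trace that created the node instead of the op type.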
Name of the node.
code execution time.
Total requested bytes by the code.
Max bytes allocated and being used by the op at a point.
Total bytes requested by the op and not released before end.
Total bytes output by the op (not necessarily allocated by the op).
Number of parameters if available.
Number of float operations.
The following are the aggregated stats from descendants. The actual descendants depend on the data structure used.
TensorFlow graph nodes contained by the MultiGraphNodeProto.
Descendants of the node. The actual descendants depend on the data structure used.
Used in:
op name.
float_ops is filled by the tfprof Python API when called. It requires that the op have RegisterStatistics defined. Currently Conv2D, MatMul, etc. are implemented.
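For ops with RegisterStatistics, float_ops follows the usual analytic counts; for a MatMul of an (m, k) matrix with a (k, n) matrix the standard count is 2*m*k*n, since each of the m*n inner products needs k multiplies and k adds. A small sketch of that arithmetic (illustrative, not the registered statistics function itself):

```python
def matmul_flops(m, k, n):
    """Analytic float-op count for an (m, k) x (k, n) matmul:
    m*n inner products, each costing k multiplies and k adds."""
    return 2 * m * k * n

assert matmul_flops(2, 3, 4) == 48
```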
User can define extra op type information for an op. This allows the user to select a group of ops precisely using op_type as a key.
Used to support tfprof "code" view.
Used in:
Maps from the id of a CodeDef file/function/line to its string. In the future, this can also map ids of other fields to strings.
Refer to tfprof_options.h/cc for documentation. Only used to pass tfprof options from Python to C++.
Used in:
graph node name.
graph operation type.
A unique id for the node.
A map from source node id to its output index to current node.
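Each graph node thus records its inputs as a map from the source node's id to the output index it consumes. Given a set of such nodes, the edges of the graph can be recovered like this (a hypothetical dict-based stand-in for the proto):

```python
def edges(nodes_by_id):
    """nodes_by_id: id -> {'name': str, 'inputs': {src_id: output_index}}.
    Returns (src_name, output_index, dst_name) tuples, one per edge."""
    result = []
    for node in nodes_by_id.values():
        for src_id, out_idx in node["inputs"].items():
            result.append((nodes_by_id[src_id]["name"], out_idx, node["name"]))
    return result

# Hypothetical two-node graph: y consumes output 0 of x.
nodes = {1: {"name": "x", "inputs": {}},
         2: {"name": "y", "inputs": {1: 0}}}

assert edges(nodes) == [("x", 0, "y")]
```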
A proto representation of the profiler's profile. It allows profiles to be serialized, shipped around, and deserialized. Please don't depend on the internals of the profile proto.
Whether or not it has code traces.
Whether or not the TF device tracer fails to return accelerator information (which could lead to 0 accelerator execution time).
Traced steps.
Maps from the id of a CodeDef file/function/line to its string. In the future, this can also map ids of other fields to strings.
Used in:
Flattened tensor in row-major order. Only one of the following arrays is set.
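Row-major order stores the last dimension contiguously, so element (i, j) of an r-by-c tensor lands at flat index i*c + j. A quick sketch of the flattening and the index arithmetic:

```python
def flatten_row_major(tensor):
    """tensor: list of equal-length rows. Returns the row-major flat list."""
    return [value for row in tensor for value in row]

t = [[1.0, 2.0],
     [3.0, 4.0]]
flat = flatten_row_major(t)

# Element (1, 0) sits at index 1 * num_cols + 0 = 2.
assert flat[1 * 2 + 0] == 3.0
assert flat == [1.0, 2.0, 3.0, 4.0]
```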
Used in: