package tensorflow.profiler

message ActiveAllocation

memory_profile.proto:88

The active memory allocations at the peak memory usage.

Used in: PerAllocatorMemoryProfile

message AllReduceDbResult

steps_db.proto:154

Result database for all-reduce ops.

Used in: PerCoreStepInfo

message AllReduceInfo

steps_db.proto:136

Result proto for all-reduce ops.

Used in: AllReduceDbResult

message AllReduceOpInfo

pod_viewer.proto:19

Used in: PodStatsMap

message BatchDetail

inference_stats.proto:108

Detail of a batch. Next ID: 13

Used in: PerBatchSizeAggregatedResult, PerHostInferenceStats, PerModelInferenceStats, SampledPerModelInferenceStatsProto

message BatchingParameters

inference_stats.proto:232

Batching parameters collected from TFstreamz.

Used in: ModelIdDatabase

message BottleneckAnalysis

input_pipeline.proto:9

Generic hardware bottleneck.

message BufferAllocation

memory_viewer_preprocess.proto:45

Used in: PreprocessResult

message BufferSpan

memory_viewer_preprocess.proto:32

Describes the start / exclusive limit HLO program points for a given buffer lifetime, used for rendering a box on the plot.

Used in: PreprocessResult

message ChannelInfo

pod_viewer.proto:57

Information about a send and recv channel. Next ID: 14

Used in: PodStatsMap

message CombinedTfDataStats

tf_data_stats.proto:113

TfDataStats of all hosts.

message CoreDetails

op_stats.proto:115

Next ID: 8

Used in: OpStats

message DcnCollectiveInfoProto

dcn_collective_info.proto:6

This proto is based on MegaScaleInfoProto and should be consistent with it.

message DcnCollectiveInfoProto.Endpoint

dcn_collective_info.proto:35

Used in: EndpointGroup, OneToOneGroup

message DcnCollectiveInfoProto.EndpointGroup

dcn_collective_info.proto:40

Used in: DcnCollectiveInfoProto

message DcnCollectiveInfoProto.OneToOneGroup

dcn_collective_info.proto:44

Used in: DcnCollectiveInfoProto

enum DcnCollectiveInfoProto.TransferType

dcn_collective_info.proto:7

Used in: DcnCollectiveInfoProto

message DcnSlack

dcn_slack_analysis.proto:10

Used in: DcnSlackAnalysis

message DcnSlackAnalysis

dcn_slack_analysis.proto:90

message DcnSlackSummary

dcn_slack_analysis.proto:59

Used in: DcnSlackAnalysis

message Device

trace_events.proto:78

A 'device' is a physical entity in the system and is composed of several resources.

Used in: Trace

message DeviceCapabilities

hardware_types.proto:24

message DeviceMemoryTransfer

steps_db.proto:112

Information about memory transfer to/from device memory.

Used in: PerCoreStepInfo, StepInfoResult

message Diagnostics

diagnostics.proto:7

Used in: InputPipelineAnalysisResult, OpStats, OverviewPage, PodStatsDatabase, PodViewerDatabase, roofline_model.RooflineModelDatabase

message DmaActivity

trace_events_raw.proto:24

DmaActivity can be used to add DMA details to a trace event.

Used in: RawData

message GPUComputeCapability

hardware_types.proto:19

Used in: DeviceCapabilities

message GenericRecommendation

overview_page.proto:105

message GenericStepBreakdown

steps_db.proto:12

Breakdown of step-time on generic hardware. Note that these components are mutually exclusive, so adding them together equals the step time. If an execution time interval has multiple types of events happening, we need to pick one event type to attribute the time interval to.
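Because the components are mutually exclusive, summing them should reproduce the step time exactly. A minimal Python sketch of that consistency check, using hypothetical category names rather than the actual GenericStepBreakdown fields:

def check_step_breakdown(step_time_ps: int, breakdown_ps: dict) -> bool:
    """Returns True if the exclusive per-category durations cover the whole step."""
    # breakdown_ps maps an event category (hypothetical names) to the time
    # attributed to it, in picoseconds.
    return sum(breakdown_ps.values()) == step_time_ps

# A 1 ms step fully attributed across four exclusive categories.
assert check_step_breakdown(
    1_000_000_000,
    {"device_compute": 600_000_000,
     "input": 250_000_000,
     "host_compute": 100_000_000,
     "idle": 50_000_000})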

message GenericStepTimeBreakdown

input_pipeline.proto:127

enum HardwareType

hardware_types.proto:8

Types of hardware profiled.

Used in: RunEnvironment

message HeapObject

memory_viewer_preprocess.proto:13

Describes a heap object that is displayed in a plot in the memory visualization HTML.

Used in: PreprocessResult

message HostDependentJobInfoResult

op_stats.proto:51

Result proto for host-dependent job information.

Used in: RunEnvironment

message HostIndependentJobInfoResult

op_stats.proto:39

Result proto for host-independent job information.

Used in: RunEnvironment

message InferenceStats

inference_stats.proto:273

Proto consumed by inference analysis.

message InputOpDetails

input_pipeline.proto:97

Used in: InputPipelineAnalysisResult

message InputPipelineAnalysisRecommendation

input_pipeline.proto:117

Used in: InputPipelineAnalysisResult

message InputPipelineAnalysisResult

input_pipeline.proto:153

Used in: OverviewPage

message InputPipelineMetadata

tf_data_stats.proto:55

Metadata for input pipeline.

Used in: InputPipelineStats

enum InputPipelineMetadata.InputPipelineType

tf_data_stats.proto:60

The distribution strategy creates one "host" input pipeline which actually runs tf.data user code. Also, it creates a "device" input pipeline per device (e.g., TensorCore) which takes an element from the host input pipeline and transfers it to the device.

Used in: InputPipelineMetadata
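A minimal sketch of how an analysis pass might separate the two kinds of pipelines; the field names (input_pipelines, metadata.type) and enum values (HOST, DEVICE) are assumptions about the generated Python bindings, not a documented API:

def split_pipelines(tf_data_stats):
    """Splits InputPipelineStats into host pipelines and per-device pipelines."""
    host, device = [], []
    for pipeline in tf_data_stats.input_pipelines.values():  # assumed map field
        if pipeline.metadata.type == pipeline.metadata.HOST:
            host.append(pipeline)    # runs the tf.data user code
        elif pipeline.metadata.type == pipeline.metadata.DEVICE:
            device.append(pipeline)  # fed one element at a time by a host pipeline
    return host, device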

message InputPipelineStat

tf_data_stats.proto:45

Stat and metadata for input pipeline.

Used in: InputPipelineStats

message InputPipelineStats

tf_data_stats.proto:72

Collection of metadata and stats of input pipeline.

Used in: TfDataStats

message InputTimeBreakdown

input_pipeline.proto:82

Used in: InputPipelineAnalysisResult

message IteratorMetadata

tf_data_stats.proto:29

Metadata for iterator.

Used in: TfDataStats

message IteratorStat

tf_data_stats.proto:8

Stat for iterator.

Used in: InputPipelineStat

message KernelReport

kernel_stats.proto:6

Next ID: 15

Used in: KernelStatsDb

message KernelStatsDb

kernel_stats.proto:37

Used in: OpStats

message LayoutAnalysis

op_metrics.proto:82

Data layout of an op.

Used in: OpMetrics

message LayoutAnalysis.Dimension

op_metrics.proto:84

Physical data layout in each tensor dimension.

Used in: LayoutAnalysis

enum LayoutDimensionSemantics

op_metrics.proto:74

What the dimension represents, e.g. spatial, feature or batch.

Used in: LayoutAnalysis.Dimension

message LogicalBuffer

memory_viewer_preprocess.proto:37

Used in: BufferAllocation

message LogicalTopology

topology.proto:36

The logical topology of the job.

message LogicalTopology.HostNetworkAddress

topology.proto:50

The network address of a specific host.

Used in: LogicalHost

message LogicalTopology.LogicalDevice

topology.proto:38

Logical metadata about a specific device.

Used in: LogicalHost

message LogicalTopology.LogicalHost

topology.proto:56

Logical metadata about a specific host.

Used in: LogicalSlice

message LogicalTopology.LogicalSlice

topology.proto:68

Logical metadata about a specific slice.

Used in: LogicalTopology

enum MemBwType

op_metrics.proto:34

Types of memory bandwidth we track in the system.

message MemoryAccessBreakdown

op_metrics.proto:97

A container to serialize this repeated field in "symbolized xplane."

enum MemoryActivity

memory_profile.proto:7

The memory activity that causes change of memory state.

Used in: MemoryActivityMetadata

message MemoryActivityMetadata

memory_profile.proto:38

The metadata associated with each memory allocation/deallocation. It can also be interpreted as the metadata for the delta of memory state. Next ID: 10

Used in: MemoryProfileSnapshot, PerAllocatorMemoryProfile

message MemoryAggregationStats

memory_profile.proto:21

The aggregated memory stats including heap, stack, free memory and fragmentation at a specific time.

Used in: MemoryProfileSnapshot, MemoryProfileSummary

message MemoryProfile

memory_profile.proto:118

Data for memory usage analysis in one host.

message MemoryProfileSnapshot

memory_profile.proto:65

Profile snapshot of the TensorFlow memory at runtime, including MemoryAggregationStats (memory usage breakdown etc.), and MemoryActivityMetadata (allocation or deallocation, TF Op name etc.).

Used in: PerAllocatorMemoryProfile
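A minimal sketch of walking those two parts for each snapshot of a PerAllocatorMemoryProfile; the field names (memory_profile_snapshots, aggregation_stats, activity_metadata, heap_allocated_bytes, tf_op_name) are assumptions about the generated bindings, shown only to illustrate the structure:

def print_snapshots(per_allocator_profile):
    """Prints one line per memory event: what happened, by which op, heap size after."""
    for snapshot in per_allocator_profile.memory_profile_snapshots:
        stats = snapshot.aggregation_stats   # memory usage breakdown at this point
        meta = snapshot.activity_metadata    # the allocation/deallocation event
        print(f"activity={meta.memory_activity} op={meta.tf_op_name} "
              f"heap={stats.heap_allocated_bytes} bytes")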

message MemoryProfileSummary

memory_profile.proto:75

The summary of memory profile within the profiling window duration.

Used in: PerAllocatorMemoryProfile

enum MemorySpace

op_metrics.proto:62

TensorFlow generic memory space names. These space names are used in analysis code to get memory bandwidth per core.

message ModelIdDatabase

inference_stats.proto:252

Model ID database. An unknown model ID is recorded as "" and is not stored here, so if no model ID is found in the TF-session metadata, ModelIdDatabase will be empty.

Used in: InferenceStats

message OpInstance

dcn_slack_analysis.proto:5

Used in: DcnSlack

message OpMetrics

op_metrics.proto:103

Metrics for an operation (accumulated over all occurrences). Next ID: 27

Used in: OpMetricsDb

message OpMetrics.MemoryAccessed

op_metrics.proto:137

Breakdown of memory accessed by operation type and memory space.

Used in: MemoryAccessBreakdown, OpMetrics

enum OpMetrics.MemoryAccessed.OperationType

op_metrics.proto:138

Used in: MemoryAccessed

message OpMetricsDb

op_metrics.proto:179

A database for OpMetrics. Next ID: 16

Used in: OpMetrics, OpStats, PerCoreStepInfo

message OpStats

op_stats.proto:133

Operator statistics. Next ID: 14

message OverviewInferenceLatency

overview_page.proto:237

Overview result for the inference query latency stats.

Used in: OverviewPage

message OverviewLatencyBreakdown

overview_page.proto:226

Total latency and its breakdown (host/device/communication) for inference queries.

Used in: OverviewInferenceLatency

message OverviewPage

overview_page.proto:258

message OverviewPageAnalysis

overview_page.proto:31

Overview result for general analysis.

Used in: OverviewPage

message OverviewPageHostDependentJobInfo

overview_page.proto:179

Result proto for host-dependent job information.

Used in: OverviewPageRunEnvironment

message OverviewPageHostIndependentJobInfo

overview_page.proto:167

Result proto for host-independent job information.

Used in: OverviewPageRunEnvironment

message OverviewPageRecommendation

overview_page.proto:130

Overview result for the recommendation section.

Used in: OverviewPage

message OverviewPageRunEnvironment

overview_page.proto:193

The run environment of a profiling session.

Used in: OverviewPage

message OverviewPageTip

overview_page.proto:100

Overview result for a performance tip to users.

Used in: OverviewPageRecommendation

message OverviewTfOp

overview_page.proto:11

Overview result for a TensorFlow Op.

Used in: OverviewPageAnalysis

message PerAllocatorMemoryProfile

memory_profile.proto:99

Memory profile snapshots per memory allocator.

Used in: MemoryProfile

message PerBatchSizeAggregatedResult

inference_stats.proto:188

Aggregated result per batch size.

Used in: PerModelInferenceStats

message PerCoreStepInfo

steps_db.proto:159

Result proto for information in a step across all cores.

Used in: StepDatabaseResult

message PerGenericStepDetails

input_pipeline.proto:50

Per-step details on generic hardware.

message PerHostInferenceStats

inference_stats.proto:150

Per-host data for inference analysis.

Used in: InferenceStats

message PerModelInferenceStats

inference_stats.proto:197

Per-model data for inference analysis.

Used in: InferenceStats

message PerTpuStepDetails

tpu_input_pipeline.proto:9

Per-step details on TPU. Next ID: 26

message PerfEnv

op_stats.proto:15

Performance environment, e.g., the peak performance capabilities of the device.

Used in: OpStats

message PerformanceCounterResult

op_stats.proto:126

Metrics based on hardware performance counters.

Used in: OpStats

message PerformanceInfo

op_metrics.proto:11

Predicted computational cost of the instruction associated with the symbol. Estimated by traversing the HLO graph.

message PerformanceInfo.MemoryAccessed

op_metrics.proto:17

Breakdown of memory accessed by read/write and memory space.

Used in: PerformanceInfo

enum PerformanceInfo.MemoryAccessed.MemorySpace

op_metrics.proto:19

Used in: MemoryAccessed

message PodStatsDatabase

pod_stats.proto:13

A database of PodStats records.

message PodStatsMap

pod_viewer.proto:36

Result proto for information in a step across all cores.

Used in: PodStatsSequence

message PodStatsRecord

pod_stats.proto:25

There is one PodStatsRecord for each step traced on each compute node. Next ID: 20

Used in: PodStatsDatabase, PodStatsMap

message PodStatsSequence

pod_viewer.proto:51

A sequence of PodStatsMap for each step.

Used in: PodViewerDatabase

message PodViewerDatabase

pod_viewer.proto:113

A database of pod viewer records. Next ID: 12

message PodViewerSummary

pod_viewer.proto:85

Used in: PodViewerDatabase

message PodViewerTopology

pod_viewer.proto:92

Topology graph draws all the cores in the system in a 2-D rectangle or 3-D cube. It is hierarchically grouped by host, chip and core. Next ID: 9

Used in: PodViewerDatabase

message PowerComponentMetrics

power_metrics.proto:7

Used in: PowerMetrics

message PowerMetrics

power_metrics.proto:32

Used in: OverviewPageRunEnvironment, RunEnvironment

message PrecisionStats

op_metrics.proto:170

Statistics about the various precisions used in computation.

Used in: OpMetricsDb

message PreprocessResult

memory_viewer_preprocess.proto:54

Groups together all results from the preprocessing C++ step.

message RawData

trace_events_raw.proto:15

RawData contains raw data that can be used to attach further details to a TraceEvent. TraceEvents store this raw data in serialized form so it can be decoded on demand. This can improve performance as TraceEvents are often subject to filtering and only a small subset actually needs to be decoded. Next ID: 4
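A minimal sketch of the decode-on-demand pattern this enables: keep the raw data as serialized bytes on each TraceEvent and parse it only for the events that survive filtering. The trace_events_raw_pb2 module name and the raw_data field are assumptions about bindings generated from these protos:

def decode_surviving_events(events, name_filter, trace_events_raw_pb2):
    """Yields (event, decoded RawData) only for events whose name matches the filter."""
    for event in events:
        if name_filter not in event.name:
            continue                         # filtered out: never pay the decode cost
        raw = trace_events_raw_pb2.RawData()
        raw.ParseFromString(event.raw_data)  # decode only what is actually needed
        yield event, raw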

message ReplicaGroup

pod_viewer.proto:13

Describes the replica groups in a cross replica op (e.g., all-reduce and all-to-all).

Used in: AllReduceOpInfo

message RequestDetail

inference_stats.proto:38

Detail of a user facing request. Next ID: 22

Used in: PerBatchSizeAggregatedResult, PerHostInferenceStats, PerModelInferenceStats, SampledPerModelInferenceStatsProto

message Resource

trace_events.proto:94

A 'resource' is generally a specific computation component on a device. These can range from threads on CPUs to specific arithmetic units on hardware devices.

Used in: Device

message RunEnvironment

op_stats.proto:77

The run environment of a profiling session.

Used in: OpStats

message SampledInferenceStatsProto

inference_stats.proto:297

Used in: InferenceStats

message SampledPerModelInferenceStatsProto

inference_stats.proto:292

Used in: SampledInferenceStatsProto

message SourceInfo

source_info.proto:5

Used in: HeapObject, OpMetrics, hlo_stats.HloStatsRecord, op_profile.Node.XLAInstruction, roofline_model.RooflineModelRecord

message SparseCoreStepBreakdown

steps_db.proto:93

Breakdown of step-time on SparseCore.

message SparseCoreStepSummary

tpu_input_pipeline.proto:107

Similar to TpuStepTimeBreakdown, but for SparseCore step time info.

Used in: TpuStepTimeBreakdown

message StepBreakdownEvents

pod_stats.proto:7

Used in: PodStatsDatabase, PodViewerDatabase

message StepDatabaseResult

steps_db.proto:183

Result proto for a StepDatabase.

Used in: OpStats

message StepInfoResult

steps_db.proto:120

Result proto for StepInfo. Next ID: 7

Used in: PerCoreStepInfo

message StepSummary

input_pipeline.proto:42

Used for both step duration and Op duration.

Used in: GenericStepTimeBreakdown, InputPipelineAnalysisResult, SparseCoreStepSummary, TpuStepTimeBreakdown

message SystemTopology

op_stats.proto:66

System topology, which describes the number of chips in a pod and the connectivity style.

message Task

task.proto:10

'Task' contains information about a task that the profiler traced.

Used in: Trace

message TensorEventDetail

inference_stats.proto:6

Used in: BatchDetail, RequestDetail

enum TensorEventDetail.TensorEventOwner

inference_stats.proto:11

The owner of this TensorEventDetail.

Used in: TensorEventDetail

message TensorPatternDatabase

inference_stats.proto:265

Tensor pattern database for all the tensor patterns that occurred during the profiling window.

Used in: InferenceStats

message TensorTransferAggregatedResult

inference_stats.proto:166

Per-model aggregated result of tensor transfer.

Used in: PerModelInferenceStats

message TensorTransferAggregatedResult.TensorPatternResult

inference_stats.proto:167

Used in: TensorTransferAggregatedResult

message TensorTransferAggregatedResult.TensorPatternResult.PercentileTime

inference_stats.proto:174

Used in: TensorPatternResult

message TfDataBottleneckAnalysis

tf_data_stats.proto:95

Used in: CombinedTfDataStats

message TfDataStats

tf_data_stats.proto:88

Collection of stats of tf.data input pipelines within a host.

Used in: CombinedTfDataStats

message TfFunction

tf_function.proto:44

Statistics for a tf-function.

Used in: TfFunctionDb

enum TfFunctionCompiler

tf_function.proto:21

All possible compilers that can be used to compile a tf-function in graph mode.

Used in: TfFunction

message TfFunctionDb

tf_function.proto:58

Statistics for all tf-functions.

Used in: OpStats

enum TfFunctionExecutionMode

tf_function.proto:6

All possible execution modes of a tf-function.

message TfFunctionMetrics

tf_function.proto:36

Metrics associated with a particular execution mode of a tf-function.

Used in: TfFunction

message TfStatsDatabase

tf_stats.proto:8

A database of TfStatsTables.

message TfStatsRecord

tf_stats.proto:29

There is one TfStatsRecord for each TF operation profiled.

Used in: TfStatsTable

message TfStatsTable

tf_stats.proto:19

A table of TfStatsRecords plus the corresponding pprof keys.

Used in: TfStatsDatabase

message Topology

topology.proto:26

Topology of the system. Describes the number of chips and hosts and their connectivity.

Used in: RunEnvironment

message TopologyDimension

topology.proto:5

Used in: Topology

message TopologyLocation

topology.proto:11

Used in: PodViewerTopology, Topology

message TpuBottleneckAnalysis

tpu_input_pipeline.proto:121

message TpuStepBreakdown

steps_db.proto:24

Breakdown of step-time on TPU. Next ID: 20

message TpuStepTimeBreakdown

tpu_input_pipeline.proto:79

Next ID: 9

message TpuTraceData

trace_events_raw.proto:52

Used in: RawData

message Trace

trace_events.proto:55

A 'Trace' contains metadata for the individual traces of a system.

message TraceEvent

trace_events.proto:129

enum TraceEvent.EventType

trace_events.proto:130

Used in: TraceEvent

enum TraceEvent.FlowEntryType

trace_events.proto:185

Indicates the order of the event within a flow. Events with the same flow_id will appear in trace_viewer linked by arrows. For an arrow to be shown, at least the FLOW_START and FLOW_END must be present. There can be zero or more FLOW_MID events in the flow. Arrows are drawn from FLOW_START to FLOW_END and through each FLOW_MID event in timestamp order.

Used in: TraceEvent
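A minimal sketch of emitting a well-formed flow: every event carries the same flow_id, the first is marked FLOW_START, the last FLOW_END, and anything in between FLOW_MID. The trace_events_pb2 module name and the name, timestamp_ps, flow_id, and flow_entry_type field names are assumptions about the generated bindings:

def make_flow(trace_events_pb2, flow_id, names_and_timestamps_ps):
    """Builds a list of TraceEvents that trace_viewer can link with arrows.

    Expects at least two entries, so that both FLOW_START and FLOW_END exist.
    """
    events = []
    last = len(names_and_timestamps_ps) - 1
    for i, (name, ts) in enumerate(names_and_timestamps_ps):
        event = trace_events_pb2.TraceEvent(name=name, timestamp_ps=ts)
        event.flow_id = flow_id
        if i == 0:
            event.flow_entry_type = trace_events_pb2.TraceEvent.FLOW_START
        elif i == last:
            event.flow_entry_type = trace_events_pb2.TraceEvent.FLOW_END
        else:
            event.flow_entry_type = trace_events_pb2.TraceEvent.FLOW_MID
        events.append(event)
    return events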

message TraceEventArguments

trace_events_raw.proto:38

Generic trace event arguments.

Used in: RawData

message TraceEventArguments.Argument

trace_events_raw.proto:39

Used in: TraceEventArguments