package xla


service XlaService

xla_service.proto:49

/////////////////////// Global data requests

message BufferAllocationProto

hlo.proto:422

Serialization of BufferAllocation.

Used in: BufferAssignmentProto

message BufferAllocationProto.Assigned

hlo.proto:425

Assigned represents a single LogicalBuffer that is assigned to this BufferAllocation.

Used in: BufferAllocationProto

message BufferAssignmentProto

hlo.proto:487

Serialization of BufferAssignment.

Used in: HloProto

message BufferAssignmentProto.BufferAlias

hlo.proto:490

Alias represents a source LogicalBuffer, and the buffer location that aliases it.

Used in: BufferAssignmentProto

message ChannelHandle

xla_data.proto:332

Handle given to a user to represent a channel between two computations via a Send and Recv instruction pair. Channels are unbuffered, so Send instructions will be blocked until the data is transferred.
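As a sketch in protobuf text format (the field names `handle` and `type` and the enum value are assumptions based on xla_data.proto; the handle value is arbitrary):

```textproto
# Hypothetical ChannelHandle: a device-to-device channel.
handle: 1
type: DEVICE_TO_DEVICE
```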

Used in: CreateChannelHandleResponse

enum ChannelHandle.ChannelType

xla_data.proto:334

Used in: ChannelHandle, CreateChannelHandleRequest

message CholeskyOptions

xla_data.proto:589

Used in: HloInstructionProto

message ComputationStats

xla_data.proto:240

Statistics of a computation.

Used in: ComputationStatsResponse

message ConvolutionDimensionNumbers

xla_data.proto:492

Used in: HloInstructionProto

message DebugOptions

xla.proto:25

Debugging options for XLA. These options may change at any time - there are no guarantees about backward or forward compatibility for these fields.

Used in: ComputationGraphStatsRequest, ExecutionOptions, xrt.XLAComputationConfig

enum DebugOptions.StepMarkerLocation

xla.proto:187

Used in: tensorflow.tpu.TPUCompileMetadataProto, DebugOptions

message DeviceAssignmentProto

xla_data.proto:355

DeviceAssignmentProto is a serialized form of the DeviceAssignment class, which represents the device ids assigned to a set of replicated computations. See the xla::DeviceAssignment class comment for more details.

Used in: tensorflow.tpu.TPUCompileMetadataProto, tpu_driver.ExecuteRequest, ExecutionOptions

message DeviceAssignmentProto.ComputationDevice

xla_data.proto:361

Each logical computation runs on replica_count physical devices. ComputationDevice represents the device ids assigned to the replicas.

Used in: DeviceAssignmentProto
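A small sketch in text format (field names assumed from xla_data.proto): one computation replicated across two physical devices.

```textproto
# Hypothetical DeviceAssignmentProto: one computation, two replicas,
# assigned to physical devices 0 and 1.
replica_count: 2
computation_count: 1
computation_devices {
  replica_device_ids: 0
  replica_device_ids: 1
}
```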

message DeviceHandle

xla_data.proto:321

Handle given to a user that represents a replicated virtual device. Each replicated device represents N physical devices for execution where N is the number of replicas.

Used in: ExecutionOptions, GetDeviceHandlesResponse, ResetDeviceRequest, TransferFromOutfeedRequest, TransferToInfeedRequest, TransferToServerRequest

message DotDimensionNumbers

xla_data.proto:537

Used in: HloInstructionProto, gpu.GemmBackendConfig

message DynamicParameterBindingProto

hlo.proto:340

Used in: HloModuleProto

message DynamicParameterBindingProto.Binding

hlo.proto:364

A list of bindings which indicates that the `target_dim_num` in the subshape `target_param_index` of parameter `target_param_num` is a dynamic dimension and its real dynamic size is represented by `dynamic_param_index` in parameter `dynamic_param_num`. As an example, imagine we have a program:

ENTRY main {
  a = f32[] parameter(0)
  b = f32[10] parameter(1)
  ROOT root = (f32[], f32[10]) tuple(%a, %b)
}

Let's say 'b' (param index 1) is a dynamic shape whose input has an upper bound of 10 and whose real size is determined at runtime, and 'a' represents the real size of b's first dimension. In this case, the fields are set in the following way:

dynamic_param_num = 1
dynamic_param_index = {}
target_param_num = 0
target_param_index = {}
target_param_dim = 0

Used in: DynamicParameterBindingProto
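The worked example above corresponds to a Binding message like the following in text format (the empty shape indices are simply omitted, since empty repeated fields are not written in textproto):

```textproto
# Binding for the example above: parameter 1 ('b') is dynamic, and
# parameter 0 ('a') carries the real size of b's dimension 0.
dynamic_param_num: 1
target_param_num: 0
target_param_dim: 0
```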

message ExecuteGraphRequest

xla.proto:440

TODO(b/118493728): Remove this and ExecuteGraphParallelRequest and replace the uses with calls to Compile and Execute.

Used in: ExecuteGraphParallelRequest

message ExecuteResponse

xla.proto:452

Used as response type in: XlaService.Execute

Used as field type in: ExecuteParallelResponse

message ExecutionHandle

xla_data.proto:307

Handle given to a user that represents an execution that the user launched asynchronously on the device.

Used in: CompileResponse, ExecuteRequest, WaitForExecutionRequest

message ExecutionOptions

xla.proto:295

These settings control how XLA compiles and/or runs code. Not all settings will have an effect on every platform. When adding new fields, keep in mind that boolean fields default to false.

Used in: CompileRequest, ExecuteGraphRequest

message ExecutionProfile

xla_data.proto:274

Profile data from the execution of a computation.

Used in: ExecuteResponse, WaitForExecutionResponse

enum FftType

xla_data.proto:529

Used in: HloInstructionProto

enum Format

xla_data.proto:112

A format specifies the method used by a layout to store an array in memory.

Used in: LayoutProto

message FrontendAttributes

xla_data.proto:597

Generic map of attributes used to pass hints / configuration options from the Python frontend to the XLA backend.

Used in: HloInstructionProto

message GatherDimensionNumbers

xla_data.proto:451

Describes the dimension numbers for a gather operation. See https://www.tensorflow.org/performance/xla/operation_semantics#gather for more details.

Used in: HloInstructionProto

message GlobalDataHandle

xla_data.proto:314

Handle given to a user that represents a globally accessible allocation. Contrast this against a ComputationDataHandle, which is not globally accessible, since it only exists within a specific computation.

Used in: DeconstructTupleRequest, DeconstructTupleResponse, ExecuteGraphRequest, ExecuteRequest, ExecuteResponse, GetShapeRequest, LoadDataResponse, TransferToClientRequest, TransferToServerResponse, UnpackRequest, UnpackResponse, UnregisterRequest, WaitForExecutionResponse

message HeapSimulatorTrace

hlo.proto:445

A trace of a HeapSimulator run.

Used in: BufferAssignmentProto

message HeapSimulatorTrace.Event

hlo.proto:448

The trace includes a list of events, where each event describes one action performed by the heap simulator.

Used in: HeapSimulatorTrace

enum HeapSimulatorTrace.Event.Kind

hlo.proto:449

Used in: Event

message HloComputationProto

hlo.proto:268

Serialization of HloComputation.

Used in: HloModuleProto

message HloExecutionProfileData

hlo_execution_profile_data.proto:24

message HloInputOutputAliasProto

hlo.proto:300

Used in: HloModuleProto

message HloInputOutputAliasProto.AliasEntryProto

hlo.proto:326

The following proto describes an aliased pair: an input (described by a parameter number and a ShapeIndex of the parameter) and an output (described by a ShapeIndex of the root instruction). For example:

entry = {
  output_shape_index={1},
  parameter_number=0,
  parameter_shape_index={1, 2},
}

This entry indicates that the first parameter's {1, 2} element is aliased with the {1} element of the root instruction.

Used in: HloInputOutputAliasProto
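The example entry above can be written in text format as (field names taken from the example; repeated shape-index fields are written one element per line):

```textproto
# Alias entry: element {1, 2} of parameter 0 is aliased with
# element {1} of the root instruction's output.
output_shape_index: 1
parameter_number: 0
parameter_shape_index: 1
parameter_shape_index: 2
```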

enum HloInputOutputAliasProto.Kind

hlo.proto:301

Used in: AliasEntryProto

message HloInstructionProto

hlo.proto:40

Serialization of HloInstruction. Next ID: 72

Used in: HloComputationProto, gpu.ConvInstructionLog

message HloInstructionProto.SliceDimensions

hlo.proto:96

Describes the [begin, end) index range and stride for slices.

Used in: HloInstructionProto
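A sketch of one slice dimension in text format (the field names `start`, `limit`, and `stride` are assumptions based on hlo.proto):

```textproto
# Hypothetical SliceDimensions: take elements in [2, 8) with stride 2.
start: 2
limit: 8
stride: 2
```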

message HloModuleGroupProto

hlo.proto:481

An abstraction representing a set of HLO modules built to run concurrently across different devices.

message HloModuleProto

hlo.proto:376

Serialization of HloModule.

Used in: CompileRequest, ComputationGraphStatsRequest, ComputeConstantGraphRequest, ExecuteGraphRequest, HloModuleGroupProto, HloProto

message HloProfilePrinterData

hlo_profile_printer_data.proto:24

Describes how to pretty-print a profile counter array gathered for a specific HloModule.

Used in: HloExecutionProfileData

message HloProfilePrinterData.HloComputationInfo

hlo_profile_printer_data.proto:43

Pretty-printer information about an HloComputation.

Used in: HloProfilePrinterData

message HloProfilePrinterData.HloInstructionInfo

hlo_profile_printer_data.proto:26

Pretty-printer information about an HloInstruction.

Used in: HloComputationInfo

message HloProto

hlo.proto:502

Grouping message that contains all of the information above.

Used in: tensorflow.tpu.CompilationResultProto, tpu_driver.CompileRequest, HloSnapshot

message HloScheduleProto

hlo.proto:291

Serialization of an HLO schedule. An HLO schedule contains a total order of instructions for each non-fusion computation in the module.

Used in: HloModuleProto

message HloScheduleProto.InstructionSequence

hlo.proto:292

Used in: HloScheduleProto

message HloSnapshot

hlo.proto:513

Encapsulates HloProto together with the arguments, result, and execution_platform. This message is used for purposes such as analysis/replay/file-storage.

Used in: xrt.XLAComputation

message LayoutProto

xla_data.proto:143

A layout describes how the array is placed in (1D) memory space. This includes the minor-to-major ordering of dimensions within a shape. Clients must specify the layouts of input Literals to the computation. Layouts specified in interior operations which take Shapes (for example, Convert) are ignored. See the XLA documentation for more information on shapes and layouts.

Used in: ComputeConstantGraphRequest, ShapeProto
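For instance, a row-major layout for a rank-2 array could be sketched in text format as (the `minor_to_major` field name is an assumption based on xla_data.proto):

```textproto
# Hypothetical row-major LayoutProto for a rank-2 array:
# dimension 1 is minor-most (fastest-varying), dimension 0 is major-most.
minor_to_major: 1
minor_to_major: 0
```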

message LiteralProto

xla_data.proto:373

Literals are used when the server and client need to exchange materialized data / results. Literals are also used to describe constants used in computations. Transfers to/from the client are encoded in literal form, and the structure of the repeated fields is implied by the shape.

Used in: ComputeConstantResponse, HloInstructionProto, HloSnapshot, TransferFromOutfeedResponse, TransferToClientResponse, TransferToInfeedRequest, TransferToServerRequest, xrt.XLAAllocation
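A sketch of a materialized f32[3] literal in text format (the `shape` and `f32s` field names are assumptions based on xla_data.proto; per the comment above, the structure of the repeated value field is implied by the shape):

```textproto
# Hypothetical LiteralProto holding the f32[3] array [1.0, 2.0, 3.0].
shape {
  element_type: F32
  dimensions: 3
}
f32s: 1.0
f32s: 2.0
f32s: 3.0
```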

message LogicalBufferProto

hlo.proto:401

Serialization of LogicalBuffer.

Used in: BufferAssignmentProto

message LogicalBufferProto.Location

hlo.proto:404

Location represents an instruction and its shape index, which uniquely identifies a point where a buffer is needed.

Used in: BufferAssignmentProto.BufferAlias, LogicalBufferProto

message OpMetadata

xla_data.proto:252

Symbolization metadata for HLO Instructions. This metadata is used for debugging XLA code generation, as well as performance profiling of XLA-generated executables.

Used in: HloInstructionProto

message OpSharding

xla_data.proto:601

Used in: tensorflow.tpu.TPUCompileMetadataProto.Arg, tensorflow.tpu.TPUCompileMetadataProto.Retval, HloInstructionProto

enum OpSharding.Type

xla_data.proto:602

Used in: OpSharding

message PaddingConfig

xla_data.proto:93

Describes the padding configuration for Pad operation. The padding amount on both edges as well as between the elements are specified for each dimension.

Used in: HloInstructionProto

message PaddingConfig.PaddingConfigDimension

xla_data.proto:95

Describes the padding configuration for a dimension.

Used in: PaddingConfig
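A sketch of a 1-D padding configuration in text format (the per-dimension field names are assumptions based on xla_data.proto):

```textproto
# Hypothetical PaddingConfig: one low-edge element, two high-edge
# elements, and one interior element between adjacent elements.
dimensions {
  edge_padding_low: 1
  edge_padding_high: 2
  interior_padding: 1
}
```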

message ParameterReplication

xla_data.proto:663

Describes whether all data-parallelism replicas will receive the same parameter data at each buffer.

Used in: HloInstructionProto

message PrecisionConfig

xla_data.proto:648

Used to indicate the precision configuration. It has backend specific meaning.

Used in: HloInstructionProto

enum PrecisionConfig.Precision

xla_data.proto:649

Used in: PrecisionConfig

enum PrimitiveType

xla_data.proto:27

Primitive types are the individual values that can be held in rectangular multidimensional arrays. A description of the rectangular multidimensional array dimensions / primitive type is given by Shape, below.

Used in: ShapeProto

message ProgramShapeProto

xla_data.proto:233

Shape of the parameters and output of a computation (like a traditional function signature).

Used in: tpu_driver.CompiledProgramMetadata, HloComputationProto, HloModuleProto, xrt.XLAComputationConfig

enum RandomAlgorithm

xla_data.proto:562

Used in: HloInstructionProto

enum RandomDistribution

xla_data.proto:548

Used in: HloInstructionProto

message ReplicaGroup

xla_data.proto:634

Describes the replica groups in a cross replica op (e.g., all-reduce and all-to-all).

Used in: HloInstructionProto

message ScatterDimensionNumbers

xla_data.proto:482

Describes the dimension numbers for a scatter operation. All the fields are similar to the corresponding fields in GatherDimensionNumbers. Differences are noted below.

Used in: HloInstructionProto

message ShapeProto

xla_data.proto:195

A shape describes the number of dimensions in the array, the size of each dimension, and the primitive component type. Tuples are a special case in that they have rank zero and have tuple_shapes defined. See the XLA documentation for more information on shapes and layouts.

Used in: tpu_driver.AllocateRequest, CompileRequest, ExecutionOptions, GetShapeResponse, HloInstructionProto, LiteralProto, LoadDataRequest, LoadDataResponse, OpSharding, ProgramShapeProto, TransferFromOutfeedRequest, TransferToClientRequest, gpu.ConvInstructionLog
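A sketch of a shape for an f32[2,3] array in text format (field names assumed from xla_data.proto):

```textproto
# Hypothetical ShapeProto for f32[2,3] with a row-major layout.
element_type: F32
dimensions: 2
dimensions: 3
layout {
  minor_to_major: 1
  minor_to_major: 0
}
```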

message SourceTarget

xla_data.proto:641

Describes a source-target pair in the collective permute op.

Used in: HloInstructionProto

message TileProto

xla_data.proto:125

Describes a tile used in tiling-based layout. Refer to g3doc/third_party/tensorflow/compiler/xla/g3doc/layout_with_tiling.md for details about tiling-based layout.

Used in: LayoutProto

message TriangularSolveOptions

xla_data.proto:569

Used in: HloInstructionProto

enum TriangularSolveOptions.Transpose

xla_data.proto:580

Should we transpose or use the adjoint of 'a'?

Used in: TriangularSolveOptions

message WhileLoopBackendConfig

xla_data.proto:681

A backend-config for kWhile loops that stores the loop's trip count, if it is known. This is useful for backends that can implement a `for i in 0..N` loop more efficiently than a `while` loop. For example, on GPUs, we can implement a `for i in 0..N` loop by enqueueing the kernels for the loop body N times, whereas implementing a `while` loop requires a host-device sync on each iteration.

message WhileLoopBackendConfig.KnownTripCount

xla_data.proto:682

Used in: WhileLoopBackendConfig
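A sketch of a config for a loop known to run exactly 100 times (the `known_trip_count` and `n` field names are assumptions based on xla_data.proto):

```textproto
# Hypothetical WhileLoopBackendConfig: trip count known at compile time,
# letting the backend lower the kWhile as a `for i in 0..100` loop.
known_trip_count {
  n: 100
}
```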

message Window

xla_data.proto:443

Describes the windowing in an operation such as convolution. The window is moved across a base area and for each position of the window a computation is performed. The fields below describe the window and its movement across the base area.

Used in: HloInstructionProto

message WindowDimension

xla_data.proto:396

Used in: Window
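A sketch of a Window with a single 3-wide, stride-2 dimension in text format (the `dimensions` field and the WindowDimension field names are assumptions based on xla_data.proto):

```textproto
# Hypothetical Window: one dimension, window size 3, stride 2,
# no padding, no dilation.
dimensions {
  size: 3
  stride: 2
  padding_low: 0
  padding_high: 0
  window_dilation: 1
  base_dilation: 1
}
```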