package build.bazel.remote.execution.v2

service ActionCache

remote_execution.proto:157

The action cache API is used to query whether a given action has already been performed and, if so, retrieve its result. Unlike the [ContentAddressableStorage][build.bazel.remote.execution.v2.ContentAddressableStorage], which addresses blobs by their own content, the action cache addresses the [ActionResult][build.bazel.remote.execution.v2.ActionResult] by a digest of the encoded [Action][build.bazel.remote.execution.v2.Action] that produced it. The lifetime of entries in the action cache is implementation-specific, but the server SHOULD assume that more recently used entries are more likely to be used again. As with other services in the Remote Execution API, any call may return an error with a [RetryInfo][google.rpc.RetryInfo] error detail providing information about when the client should retry the request; clients SHOULD respect the information provided.
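
As a concrete illustration, the lookup key is the digest of the serialized `Action`. The following is a minimal sketch in Python, assuming SHA-256 as the digest function and gRPC stubs generated by protoc under the standard module paths; the `re_pb2`/`re_grpc` aliases and the `get_cached_result` helper are this sketch's own names, not part of the API.

```python
import hashlib
import grpc

# Assumed standard protoc output locations for remote_execution.proto.
from build.bazel.remote.execution.v2 import remote_execution_pb2 as re_pb2
from build.bazel.remote.execution.v2 import remote_execution_pb2_grpc as re_grpc

def get_cached_result(channel, action, instance_name=""):
    """Look up an ActionResult by the digest of the serialized Action; None on cache miss."""
    action_bytes = action.SerializeToString(deterministic=True)
    action_digest = re_pb2.Digest(hash=hashlib.sha256(action_bytes).hexdigest(),
                                  size_bytes=len(action_bytes))
    stub = re_grpc.ActionCacheStub(channel)
    try:
        return stub.GetActionResult(re_pb2.GetActionResultRequest(
            instance_name=instance_name, action_digest=action_digest))
    except grpc.RpcError as err:
        if err.code() == grpc.StatusCode.NOT_FOUND:
            return None  # no cached result for this Action
        raise
```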

service Capabilities

remote_execution.proto:440

The Capabilities service may be used by remote execution clients to query various server properties, in order to self-configure or return meaningful error messages. The query may include a particular `instance_name`, in which case the values returned will pertain to that instance.
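
For example, a client might fetch the capabilities of its instance before deciding which digest function or compressor to use. A hedged sketch follows; the endpoint, instance name, and field accesses should be checked against your own deployment and generated stubs.

```python
import grpc

from build.bazel.remote.execution.v2 import remote_execution_pb2 as re_pb2
from build.bazel.remote.execution.v2 import remote_execution_pb2_grpc as re_grpc

channel = grpc.insecure_channel("localhost:8980")  # placeholder endpoint
caps = re_grpc.CapabilitiesStub(channel).GetCapabilities(
    re_pb2.GetCapabilitiesRequest(instance_name="main"))  # "main" is a placeholder instance

print(caps.cache_capabilities.digest_functions)       # digest functions the cache accepts
print(caps.cache_capabilities.supported_compressor)   # compressors usable for blob transfer
print(caps.execution_capabilities.exec_enabled)       # whether remote execution is offered
```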

service ContentAddressableStorage

remote_execution.proto:341

The CAS (content-addressable storage) is used to store the inputs to and outputs from the execution service. Each piece of content is addressed by the digest of its binary data.

Most of the binary data stored in the CAS is opaque to the execution engine, and is only used as a communication medium. In order to build an [Action][build.bazel.remote.execution.v2.Action], however, the client will need to also upload the [Command][build.bazel.remote.execution.v2.Command] and input root [Directory][build.bazel.remote.execution.v2.Directory] for the Action. The Command and Directory messages must be marshalled to wire format and then uploaded under the hash as with any other piece of content. In practice, the input root directory is likely to refer to other Directories in its hierarchy, which must also each be uploaded on their own.

For small file uploads the client should group them together and call [BatchUpdateBlobs][build.bazel.remote.execution.v2.ContentAddressableStorage.BatchUpdateBlobs]. For large uploads, the client must use the [Write method][google.bytestream.ByteStream.Write] of the ByteStream API.

For uncompressed data, the `WriteRequest.resource_name` is of the following form: `{instance_name}/uploads/{uuid}/blobs/{digest_function/}{hash}/{size}{/optional_metadata}`

Where:

* `instance_name` is an identifier used to distinguish between the various instances on the server. Syntax and semantics of this field are defined by the server; clients must not make any assumptions about it (e.g., whether it spans multiple path segments or not). If it is the empty path, the leading slash is omitted, so that the `resource_name` becomes `uploads/{uuid}/blobs/{digest_function/}{hash}/{size}{/optional_metadata}`. To simplify parsing, a path segment cannot equal any of the following keywords: `blobs`, `uploads`, `actions`, `actionResults`, `operations`, `capabilities` or `compressed-blobs`.
* `uuid` is a version 4 UUID generated by the client, used to avoid collisions between concurrent uploads of the same data. Clients MAY reuse the same `uuid` for uploading different blobs.
* `digest_function` is a lowercase string form of a `DigestFunction.Value` enum, indicating which digest function was used to compute `hash`. If the digest function used is one of MD5, MURMUR3, SHA1, SHA256, SHA384, SHA512, or VSO, this component MUST be omitted. In that case the server SHOULD infer the digest function using the length of the `hash` and the digest functions announced in the server's capabilities.
* `hash` and `size` refer to the [Digest][build.bazel.remote.execution.v2.Digest] of the data being uploaded.
* `optional_metadata` is implementation-specific data, which clients MAY omit. Servers MAY ignore this metadata.

Data can alternatively be uploaded in compressed form, with the following `WriteRequest.resource_name` form: `{instance_name}/uploads/{uuid}/compressed-blobs/{compressor}/{digest_function/}{uncompressed_hash}/{uncompressed_size}{/optional_metadata}`

Where:

* `instance_name`, `uuid`, `digest_function` and `optional_metadata` are defined as above.
* `compressor` is a lowercase string form of a `Compressor.Value` enum other than `identity`, which is supported by the server and advertised in [CacheCapabilities.supported_compressor][build.bazel.remote.execution.v2.CacheCapabilities.supported_compressor].
* `uncompressed_hash` and `uncompressed_size` refer to the [Digest][build.bazel.remote.execution.v2.Digest] of the data being uploaded, once uncompressed.
Servers MUST verify that these match the uploaded data once uncompressed, and MUST return an `INVALID_ARGUMENT` error in the case of mismatch.

Note that when writing compressed blobs, the `WriteRequest.write_offset` in the initial request in a stream refers to the offset in the uncompressed form of the blob. In subsequent requests, `WriteRequest.write_offset` MUST be the sum of the first request's `WriteRequest.write_offset` and the total size of all the compressed data bundles in the previous requests. Note that this mixes an uncompressed offset with a compressed byte length, which is nonsensical, but it is done to fit the semantics of the existing ByteStream protocol.

Uploads of the same data MAY occur concurrently in any form, compressed or uncompressed. Clients SHOULD NOT use gRPC-level compression for ByteStream API `Write` calls of compressed blobs, since this would compress already-compressed data.

When attempting an upload, if another client has already completed the upload (which may occur in the middle of a single upload if another client uploads the same blob concurrently), the request will terminate immediately without error, and with a response whose `committed_size` is the value `-1` if this is a compressed upload, or with the full size of the uploaded file if this is an uncompressed upload (regardless of how much data was transmitted by the client). If the client completes the upload but the [Digest][build.bazel.remote.execution.v2.Digest] does not match, an `INVALID_ARGUMENT` error will be returned. In either case, the client should not attempt to retry the upload.

Small downloads can be grouped and requested in a batch via [BatchReadBlobs][build.bazel.remote.execution.v2.ContentAddressableStorage.BatchReadBlobs]. For large downloads, the client must use the [Read method][google.bytestream.ByteStream.Read] of the ByteStream API.

For uncompressed data, the `ReadRequest.resource_name` is of the following form: `{instance_name}/blobs/{digest_function/}{hash}/{size}`

Where `instance_name`, `digest_function`, `hash` and `size` are defined as for uploads.

Data can alternatively be downloaded in compressed form, with the following `ReadRequest.resource_name` form: `{instance_name}/compressed-blobs/{compressor}/{digest_function/}{uncompressed_hash}/{uncompressed_size}`

Where:

* `instance_name`, `compressor` and `digest_function` are defined as for uploads.
* `uncompressed_hash` and `uncompressed_size` refer to the [Digest][build.bazel.remote.execution.v2.Digest] of the data being downloaded, once uncompressed. Clients MUST verify that these match the downloaded data once uncompressed, and take appropriate steps in the case of failure such as retrying a limited number of times or surfacing an error to the user.

When downloading compressed blobs:

* `ReadRequest.read_offset` refers to the offset in the uncompressed form of the blob.
* Servers MUST return `INVALID_ARGUMENT` if `ReadRequest.read_limit` is non-zero.
* Servers MAY use any compression level they choose, including different levels for different blobs (e.g. choosing a level designed for maximum speed for data known to be incompressible).
* Clients SHOULD NOT use gRPC-level compression, since this would compress already-compressed data.

Servers MUST be able to provide data for all recently advertised blobs in each of the compression formats that the server supports, as well as in uncompressed form.
The lifetime of entries in the CAS is implementation-specific, but it SHOULD be long enough to allow for newly-added and recently looked-up entries to be used in subsequent calls (e.g. to [Execute][build.bazel.remote.execution.v2.Execution.Execute]). Servers MUST behave as though empty blobs are always available, even if they have not been uploaded. Clients MAY optimize away the uploading or downloading of empty blobs. As with other services in the Remote Execution API, any call may return an error with a [RetryInfo][google.rpc.RetryInfo] error detail providing information about when the client should retry the request; clients SHOULD respect the information provided.
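
To make the upload path concrete, the sketch below checks for missing blobs, batch-uploads a small one, and shows how an uncompressed ByteStream `Write` resource name would be assembled. It assumes SHA-256 digests and protoc-generated stubs; the helper names are illustrative only, not part of the API.

```python
import hashlib
import uuid

import grpc

from build.bazel.remote.execution.v2 import remote_execution_pb2 as re_pb2
from build.bazel.remote.execution.v2 import remote_execution_pb2_grpc as re_grpc

def upload_small_blob(channel, data, instance_name=""):
    """Upload a small blob via BatchUpdateBlobs, skipping it if the CAS already has it."""
    digest = re_pb2.Digest(hash=hashlib.sha256(data).hexdigest(), size_bytes=len(data))
    cas = re_grpc.ContentAddressableStorageStub(channel)
    missing = cas.FindMissingBlobs(re_pb2.FindMissingBlobsRequest(
        instance_name=instance_name, blob_digests=[digest])).missing_blob_digests
    if missing:
        resp = cas.BatchUpdateBlobs(re_pb2.BatchUpdateBlobsRequest(
            instance_name=instance_name,
            requests=[re_pb2.BatchUpdateBlobsRequest.Request(digest=digest, data=data)]))
        for r in resp.responses:
            if r.status.code != 0:  # google.rpc.Code.OK
                raise RuntimeError(f"upload of {r.digest.hash} failed: {r.status.message}")
    return digest

def bytestream_upload_resource_name(instance_name, digest):
    """Uncompressed upload resource name; with SHA-256 the digest_function segment is omitted."""
    prefix = f"{instance_name}/" if instance_name else ""
    return f"{prefix}uploads/{uuid.uuid4()}/blobs/{digest.hash}/{digest.size_bytes}"
```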

service Execution

remote_execution.proto:44

The Remote Execution API is used to execute an [Action][build.bazel.remote.execution.v2.Action] on the remote workers. As with other services in the Remote Execution API, any call may return an error with a [RetryInfo][google.rpc.RetryInfo] error detail providing information about when the client should retry the request; clients SHOULD respect the information provided.
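
A minimal client-side sketch of the flow, assuming protoc-generated stubs as in the other examples: `Execute` returns a stream of `google.longrunning.Operation` updates, and the final one carries an `ExecuteResponse` in its `response` field.

```python
import grpc

from build.bazel.remote.execution.v2 import remote_execution_pb2 as re_pb2
from build.bazel.remote.execution.v2 import remote_execution_pb2_grpc as re_grpc

def execute_and_wait(channel, action_digest, instance_name=""):
    """Submit an Action for execution and block until the final Operation arrives."""
    stub = re_grpc.ExecutionStub(channel)
    operations = stub.Execute(re_pb2.ExecuteRequest(
        instance_name=instance_name, action_digest=action_digest))
    for op in operations:  # the server streams Operation updates until the action is done
        if op.done:
            result = re_pb2.ExecuteResponse()
            op.response.Unpack(result)  # the response field holds an ExecuteResponse
            return result
    raise RuntimeError("operation stream ended before completion")
```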

message Action

remote_execution.proto:479

An `Action` captures all the information about an execution which is required to reproduce it. `Action`s are the core component of the [Execution][build.bazel.remote.execution.v2.Execution] service. A single `Action` represents a repeatable action that can be performed by the execution service. `Action`s can be succinctly identified by the digest of their wire format encoding and, once an `Action` has been executed, will be cached in the action cache. Future requests can then use the cached result rather than needing to run afresh. When a server completes execution of an [Action][build.bazel.remote.execution.v2.Action], it MAY choose to cache the [result][build.bazel.remote.execution.v2.ActionResult] in the [ActionCache][build.bazel.remote.execution.v2.ActionCache] unless `do_not_cache` is `true`. Clients SHOULD expect the server to do so. By default, future calls to [Execute][build.bazel.remote.execution.v2.Execution.Execute] the same `Action` will also serve their results from the cache. Clients must take care to understand the caching behaviour. Ideally, all `Action`s will be reproducible so that serving a result from cache is always desirable and correct.
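
The sketch below shows how an `Action` is typically assembled: the `Command` and input root `Directory` are serialized, digested, and referenced by digest (both must also be uploaded to the CAS), and the digest of the `Action` itself is what the action cache is keyed on. SHA-256 and protoc-generated modules are assumed; the command line and helper are illustrative.

```python
import hashlib

from build.bazel.remote.execution.v2 import remote_execution_pb2 as re_pb2

def digest_of(message):
    """Digest of a proto message in its deterministic binary encoding (SHA-256 assumed)."""
    data = message.SerializeToString(deterministic=True)
    return re_pb2.Digest(hash=hashlib.sha256(data).hexdigest(), size_bytes=len(data))

command = re_pb2.Command(
    arguments=["/bin/echo", "hello"],
    environment_variables=[re_pb2.Command.EnvironmentVariable(name="LANG", value="C")])
input_root = re_pb2.Directory()  # empty input root, just for illustration

# The Command and Directory are referenced by digest and must be uploaded to the CAS;
# the Action's own digest is the key used for Execute requests and action-cache lookups.
action = re_pb2.Action(command_digest=digest_of(command),
                       input_root_digest=digest_of(input_root),
                       do_not_cache=False)
action_digest = digest_of(action)
```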

message ActionCacheUpdateCapabilities

remote_execution.proto:1908

Describes the server/instance capabilities for updating the action cache.

Used in: CacheCapabilities

message ActionResult

remote_execution.proto:1028

An ActionResult represents the result of an [Action][build.bazel.remote.execution.v2.Action] being run. It is advised that at least one field (for example `ActionResult.execution_metadata.Worker`) have a non-default value, to ensure that the serialized value is non-empty, which can then be used as a basic data sanity check.

Used as response type in: ActionCache.GetActionResult, ActionCache.UpdateActionResult

Used as field type in: ExecuteResponse, UpdateActionResultRequest, com.github.trace_machina.nativelink.events.ResponseEvent

message BatchReadBlobsRequest

remote_execution.proto:1679

A request message for [ContentAddressableStorage.BatchReadBlobs][build.bazel.remote.execution.v2.ContentAddressableStorage.BatchReadBlobs].

Used as request type in: ContentAddressableStorage.BatchReadBlobs

Used as field type in: com.github.trace_machina.nativelink.events.RequestEvent

message BatchReadBlobsResponse.Response

remote_execution.proto:1709

A response corresponding to a single blob that the client tried to download.

Used in: BatchReadBlobsResponse

message BatchUpdateBlobsRequest.Request

remote_execution.proto:1625

A request corresponding to a single blob that the client wants to upload.

Used in: BatchUpdateBlobsRequest

message BatchUpdateBlobsResponse

remote_execution.proto:1663

A response message for [ContentAddressableStorage.BatchUpdateBlobs][build.bazel.remote.execution.v2.ContentAddressableStorage.BatchUpdateBlobs].

Used as response type in: ContentAddressableStorage.BatchUpdateBlobs

Used as field type in: com.github.trace_machina.nativelink.events.ResponseEvent

message BatchUpdateBlobsResponse.Response

remote_execution.proto:1665

A response corresponding to a single blob that the client tried to upload.

Used in: BatchUpdateBlobsResponse

message CacheCapabilities

remote_execution.proto:1971

Capabilities of the remote cache system.

Used in: ServerCapabilities

message Command

remote_execution.proto:554

A `Command` is the actual command executed by a worker running an [Action][build.bazel.remote.execution.v2.Action] and specifications of its environment. Except as otherwise required, the environment (such as which system libraries or binaries are available, and what filesystems are mounted where) is defined by and specific to the implementation of the remote execution API.

message Command.EnvironmentVariable

remote_execution.proto:557

An `EnvironmentVariable` is one variable to set in the running program's environment.

Used in: Command

message Compressor

remote_execution.proto:1948

Compression formats which may be supported.

(message has no fields)

enum Compressor.Value

remote_execution.proto:1949

Used in: BatchReadBlobsRequest, BatchReadBlobsResponse.Response, BatchUpdateBlobsRequest.Request, CacheCapabilities, com.github.trace_machina.nativelink.events.BatchReadBlobsResponseOverride.Response, com.github.trace_machina.nativelink.events.BatchUpdateBlobsRequestOverride.Request

message Digest

remote_execution.proto:955

A content digest. A digest for a given blob consists of the size of the blob and its hash. The hash algorithm to use is defined by the server. The size is considered to be an integral part of the digest and cannot be separated. That is, even if the `hash` field is correctly specified but `size_bytes` is not, the server MUST reject the request. The reason for including the size in the digest is as follows: in a great many cases, the server needs to know the size of the blob it is about to work with prior to starting an operation with it, such as flattening Merkle tree structures or streaming it to a worker. Technically, the server could implement a separate metadata store, but this results in a significantly more complicated implementation as opposed to having the client specify the size up-front (or storing the size along with the digest in every message where digests are embedded). This does mean that the API leaks some implementation details of (what we consider to be) a reasonable server implementation, but we consider this to be a worthwhile tradeoff.

When a `Digest` is used to refer to a proto message, it always refers to the message in binary encoded form. To ensure consistent hashing, clients and servers MUST ensure that they serialize messages according to the following rules, even if there are alternate valid encodings for the same message:

* Fields are serialized in tag order.
* There are no unknown fields.
* There are no duplicate fields.
* Fields are serialized according to the default semantics for their type.

Most protocol buffer implementations will always follow these rules when serializing, but care should be taken to avoid shortcuts. For instance, concatenating two messages to merge them may produce duplicate fields.
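
In code, following the rules above usually amounts to hashing the deterministic binary encoding and recording its exact length alongside the hash. A short sketch under the same assumptions as the other examples (SHA-256, protoc-generated `remote_execution_pb2`):

```python
import hashlib

from build.bazel.remote.execution.v2 import remote_execution_pb2 as re_pb2

def digest_of_blob(data: bytes) -> re_pb2.Digest:
    """Both the hash and the exact byte size are required; one without the other is rejected."""
    return re_pb2.Digest(hash=hashlib.sha256(data).hexdigest(), size_bytes=len(data))

def digest_of_message(message) -> re_pb2.Digest:
    """Messages are digested in binary form; deterministic=True avoids map-ordering surprises."""
    return digest_of_blob(message.SerializeToString(deterministic=True))
```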

Used in: asset.v1.FetchBlobResponse, asset.v1.FetchDirectoryResponse, asset.v1.PushBlobRequest, asset.v1.PushDirectoryRequest, Action, ActionResult, BatchReadBlobsRequest, BatchReadBlobsResponse.Response, BatchUpdateBlobsRequest.Request, BatchUpdateBlobsResponse.Response, DirectoryNode, ExecuteOperationMetadata, ExecuteRequest, FileNode, FindMissingBlobsRequest, FindMissingBlobsResponse, GetActionResultRequest, GetTreeRequest, LogFile, OutputDirectory, OutputFile, UpdateActionResultRequest, com.github.trace_machina.nativelink.events.BatchReadBlobsResponseOverride.Response, com.github.trace_machina.nativelink.events.BatchUpdateBlobsRequestOverride.Request, com.github.trace_machina.nativelink.remote_execution.HistoricalExecuteResponse

message DigestFunction

remote_execution.proto:1812

The digest function used for converting values into keys for CAS and Action Cache.

(message has no fields)

enum DigestFunction.Value

remote_execution.proto:1813

Used in: asset.v1.FetchBlobRequest, asset.v1.FetchBlobResponse, asset.v1.FetchDirectoryRequest, asset.v1.FetchDirectoryResponse, asset.v1.PushBlobRequest, asset.v1.PushDirectoryRequest, BatchReadBlobsRequest, BatchUpdateBlobsRequest, CacheCapabilities, ExecuteRequest, ExecutionCapabilities, FindMissingBlobsRequest, GetActionResultRequest, GetTreeRequest, UpdateActionResultRequest, com.github.trace_machina.nativelink.events.BatchUpdateBlobsRequestOverride

message Directory

remote_execution.proto:826

A `Directory` represents a directory node in a file tree, containing zero or more children [FileNodes][build.bazel.remote.execution.v2.FileNode], [DirectoryNodes][build.bazel.remote.execution.v2.DirectoryNode] and [SymlinkNodes][build.bazel.remote.execution.v2.SymlinkNode]. Each `Node` contains its name in the directory, either the digest of its content (either a file blob or a `Directory` proto) or a symlink target, as well as possibly some metadata about the file or directory.

In order to ensure that two equivalent directory trees hash to the same value, the following restrictions MUST be obeyed when constructing a `Directory`:

* Every child in the directory must have a path of exactly one segment. Multiple levels of directory hierarchy may not be collapsed.
* Each child in the directory must have a unique path segment (file name). Note that while the API itself is case-sensitive, the environment where the Action is executed may or may not be case-sensitive. That is, it is legal to call the API with a Directory that has both "Foo" and "foo" as children, but the Action may be rejected by the remote system upon execution.
* The files, directories and symlinks in the directory must each be sorted in lexicographical order by path. The path strings must be sorted by code point, equivalently, by UTF-8 bytes.
* The [NodeProperties][build.bazel.remote.execution.v2.NodeProperty] of files, directories, and symlinks must be sorted in lexicographical order by property name.

A `Directory` that obeys the restrictions is said to be in canonical form. As an example, the following could be used for a file named `bar` and a directory named `foo` with an executable file named `baz` (hashes shortened for readability):

```json
// (Directory proto)
{
  files: [
    {
      name: "bar",
      digest: {
        hash: "4a73bc9d03...",
        size: 65534
      },
      node_properties: [
        {
          "name": "MTime",
          "value": "2017-01-15T01:30:15.01Z"
        }
      ]
    }
  ],
  directories: [
    {
      name: "foo",
      digest: {
        hash: "4cf2eda940...",
        size: 43
      }
    }
  ]
}

// (Directory proto with hash "4cf2eda940..." and size 43)
{
  files: [
    {
      name: "baz",
      digest: {
        hash: "b2c941073e...",
        size: 1294,
      },
      is_executable: true
    }
  ]
}
```
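
A small helper can enforce canonical form before digesting. The sketch below simply sorts each child list by the UTF-8 bytes of the name, as required above; the helper name and module aliases are this sketch's own.

```python
from build.bazel.remote.execution.v2 import remote_execution_pb2 as re_pb2

def canonical_directory(files=(), directories=(), symlinks=()):
    """Build a Directory in canonical form: children sorted lexicographically by UTF-8 name."""
    by_name = lambda node: node.name.encode("utf-8")
    d = re_pb2.Directory()
    d.files.extend(sorted(files, key=by_name))
    d.directories.extend(sorted(directories, key=by_name))
    d.symlinks.extend(sorted(symlinks, key=by_name))
    return d
```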

Used in: GetTreeResponse, Tree

message DirectoryNode

remote_execution.proto:892

A `DirectoryNode` represents a child of a [Directory][build.bazel.remote.execution.v2.Directory] which is itself a `Directory` and its associated metadata.

Used in: Directory

message ExecuteOperationMetadata

remote_execution.proto:1488

Metadata about an ongoing [execution][build.bazel.remote.execution.v2.Execution.Execute], which will be contained in the [metadata field][google.longrunning.Operation.metadata] of the [Operation][google.longrunning.Operation].
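
For instance, a client watching the Operation stream can unpack this message from `Operation.metadata` to report progress. A sketch assuming the usual protoc-generated modules:

```python
from build.bazel.remote.execution.v2 import remote_execution_pb2 as re_pb2

def execution_stage(operation):
    """Extract the ExecutionStage.Value from a google.longrunning.Operation, if present."""
    meta = re_pb2.ExecuteOperationMetadata()
    if operation.metadata.Unpack(meta):  # Operation.metadata is a google.protobuf.Any
        return meta.stage
    return re_pb2.ExecutionStage.UNKNOWN
```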

message ExecuteRequest

remote_execution.proto:1353

A request message for [Execution.Execute][build.bazel.remote.execution.v2.Execution.Execute].

Used as request type in: Execution.Execute

Used as field type in: com.github.trace_machina.nativelink.events.RequestEvent, com.github.trace_machina.nativelink.remote_execution.StartExecute

message ExecuteResponse

remote_execution.proto:1419

The response message for [Execution.Execute][build.bazel.remote.execution.v2.Execution.Execute], which will be contained in the [response field][google.longrunning.Operation.response] of the [Operation][google.longrunning.Operation].

Used in: com.github.trace_machina.nativelink.remote_execution.ExecuteResult, com.github.trace_machina.nativelink.remote_execution.HistoricalExecuteResponse

message ExecutedActionMetadata

remote_execution.proto:965

ExecutedActionMetadata contains details about a completed execution.

Used in: ActionResult, ExecuteOperationMetadata

message ExecutionCapabilities

remote_execution.proto:2006

Capabilities of the remote execution system.

Used in: ServerCapabilities

message ExecutionPolicy

remote_execution.proto:1324

An `ExecutionPolicy` can be used to control the scheduling of the action.

Used in: ExecuteRequest

message ExecutionStage

remote_execution.proto:1464

The current stage of action execution. Even though these stages are numbered according to the order in which they generally occur, there is no requirement that the remote execution system report events in this order. For example, an operation MAY transition from the EXECUTING stage back to QUEUED in case the hardware on which the operation executes fails. If and only if the remote execution system reports that an operation has reached the COMPLETED stage, it MUST set the [done field][google.longrunning.Operation.done] of the [Operation][google.longrunning.Operation] and terminate the stream.

(message has no fields)

enum ExecutionStage.Value

remote_execution.proto:1465

Used in: ExecuteOperationMetadata

message FileNode

remote_execution.proto:872

A `FileNode` represents a single file and associated metadata.

Used in: Directory

message FindMissingBlobsRequest

remote_execution.proto:1592

A request message for [ContentAddressableStorage.FindMissingBlobs][build.bazel.remote.execution.v2.ContentAddressableStorage.FindMissingBlobs].

Used as request type in: ContentAddressableStorage.FindMissingBlobs

Used as field type in: com.github.trace_machina.nativelink.events.RequestEvent

message FindMissingBlobsResponse

remote_execution.proto:1616

A response message for [ContentAddressableStorage.FindMissingBlobs][build.bazel.remote.execution.v2.ContentAddressableStorage.FindMissingBlobs].

Used as response type in: ContentAddressableStorage.FindMissingBlobs

Used as field type in: com.github.trace_machina.nativelink.events.ResponseEvent

message GetActionResultRequest

remote_execution.proto:1521

A request message for [ActionCache.GetActionResult][build.bazel.remote.execution.v2.ActionCache.GetActionResult].

Used as request type in: ActionCache.GetActionResult

Used as field type in: com.github.trace_machina.nativelink.events.RequestEvent

message GetCapabilitiesRequest

remote_execution.proto:1782

A request message for [Capabilities.GetCapabilities][build.bazel.remote.execution.v2.Capabilities.GetCapabilities].

Used as request type in: Capabilities.GetCapabilities

Used as field type in: com.github.trace_machina.nativelink.events.RequestEvent

message GetTreeRequest

remote_execution.proto:1730

A request message for [ContentAddressableStorage.GetTree][build.bazel.remote.execution.v2.ContentAddressableStorage.GetTree].

Used as request type in: ContentAddressableStorage.GetTree

Used as field type in: com.github.trace_machina.nativelink.events.RequestEvent

message GetTreeResponse

remote_execution.proto:1769

A response message for [ContentAddressableStorage.GetTree][build.bazel.remote.execution.v2.ContentAddressableStorage.GetTree].

Used as response type in: ContentAddressableStorage.GetTree

Used as field type in: com.github.trace_machina.nativelink.events.StreamEvent

message LogFile

remote_execution.proto:1402

A `LogFile` is a log stored in the CAS.

Used in: ExecuteResponse

message NodeProperties

remote_execution.proto:859

Node properties for [FileNodes][build.bazel.remote.execution.v2.FileNode], [DirectoryNodes][build.bazel.remote.execution.v2.DirectoryNode], and [SymlinkNodes][build.bazel.remote.execution.v2.SymlinkNode]. The server is responsible for specifying the properties that it accepts.

Used in: Directory, FileNode, OutputFile, OutputSymlink, SymlinkNode

message NodeProperty

remote_execution.proto:846

A single property for [FileNodes][build.bazel.remote.execution.v2.FileNode], [DirectoryNodes][build.bazel.remote.execution.v2.DirectoryNode], and [SymlinkNodes][build.bazel.remote.execution.v2.SymlinkNode]. The server is responsible for specifying the property `name`s that it accepts. If permitted by the server, the same `name` may occur multiple times.

Used in: NodeProperties

message OutputDirectory

remote_execution.proto:1247

An `OutputDirectory` is the output in an `ActionResult` corresponding to a directory's full contents rather than a single file.

Used in: ActionResult

message OutputFile

remote_execution.proto:1201

An `OutputFile` is similar to a [FileNode][build.bazel.remote.execution.v2.FileNode], but it is used as an output in an `ActionResult`. It allows a full file path rather than only a name.

Used in: ActionResult

message OutputSymlink

remote_execution.proto:1304

An `OutputSymlink` is similar to a [Symlink][build.bazel.remote.execution.v2.SymlinkNode], but it is used as an output in an `ActionResult`. `OutputSymlink` is binary-compatible with `SymlinkNode`.

Used in: ActionResult

message Platform

remote_execution.proto:712

A `Platform` is a set of requirements, such as hardware, operating system, or compiler toolchain, for an [Action][build.bazel.remote.execution.v2.Action]'s execution environment. A `Platform` is represented as a series of key-value pairs representing the properties that are required of the platform.

Used in: Action, Command, com.github.trace_machina.nativelink.remote_execution.StartExecute

message Platform.Property

remote_execution.proto:735

A single property for the environment. The server is responsible for specifying the property `name`s that it accepts. If an unknown `name` is provided in the requirements for an [Action][build.bazel.remote.execution.v2.Action], the server SHOULD reject the execution request. If permitted by the server, the same `name` may occur multiple times. The server is also responsible for specifying the interpretation of property `value`s. For instance, a property describing how much RAM must be available may be interpreted as allowing a worker with 16GB to fulfill a request for 8GB, while a property describing the OS environment on which the action must be performed may require an exact match with the worker's OS. The server MAY use the `value` of one or more properties to determine how it sets up the execution environment, such as by making specific system files available to the worker. Both names and values are typically case-sensitive. Note that the platform is implicitly part of the action digest, so even tiny changes in the names or values (like changing case) may result in different action cache entries.
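
Because the platform feeds into the action digest (Platform is used in Action and Command, per the list below), clients typically set properties once and reuse them verbatim. In the sketch below the property names and values are purely illustrative; the server defines which names and values it accepts.

```python
from build.bazel.remote.execution.v2 import remote_execution_pb2 as re_pb2

# Hypothetical property names; consult the server's documentation for the real ones.
platform = re_pb2.Platform(properties=[
    re_pb2.Platform.Property(name="OSFamily", value="linux"),
    re_pb2.Platform.Property(name="container-image", value="docker://example.invalid/builder:1"),
])

# Any change to these strings (even case) changes the Action digest and thus the cache key.
action = re_pb2.Action(platform=platform)
```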

Used in: Platform, com.github.trace_machina.nativelink.remote_execution.ConnectWorkerRequest

message PriorityCapabilities

remote_execution.proto:1916

Allowed values for priority in [ResultsCachePolicy][build.bazel.remote.execution.v2.ResultsCachePolicy] and [ExecutionPolicy][build.bazel.remote.execution.v2.ExecutionPolicy]. Used for querying the valid priority ranges for both the cache and execution.

Used in: CacheCapabilities, ExecutionCapabilities

message PriorityCapabilities.PriorityRange

remote_execution.proto:1918

Supported range of priorities, including boundaries.

Used in: PriorityCapabilities

message RequestMetadata

remote_execution.proto:2059

Optional metadata to attach to any RPC request to tell the server about an external context of the request. The server may use this for logging or other purposes. To use it, the client attaches the header to the call using the canonical proto serialization:

* name: `build.bazel.remote.execution.v2.requestmetadata-bin`
* contents: the base64-encoded binary `RequestMetadata` message.

Note: the gRPC library serializes binary headers encoded in base64 by default (https://github.com/grpc/grpc/blob/master/doc/PROTOCOL-HTTP2.md#requests). Therefore, if the gRPC library is used to pass/retrieve this metadata, the user may ignore the base64 encoding and assume it is simply serialized as a binary message.
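
In practice this means serializing the message and attaching it under the `-bin` header key; the gRPC library handles the base64 step on the wire. A sketch with placeholder tool details:

```python
from build.bazel.remote.execution.v2 import remote_execution_pb2 as re_pb2

request_metadata = re_pb2.RequestMetadata(
    tool_details=re_pb2.ToolDetails(tool_name="example-client", tool_version="0.1"),
    tool_invocation_id="placeholder-invocation-id")

# "-bin" suffixed keys carry raw bytes; gRPC base64-encodes them on the wire itself.
call_metadata = (("build.bazel.remote.execution.v2.requestmetadata-bin",
                  request_metadata.SerializeToString()),)

# Attach to any RPC on this API, e.g.:
# stub.GetActionResult(request, metadata=call_metadata)
```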

Used in: com.github.trace_machina.nativelink.events.OriginEvent

message ResultsCachePolicy

remote_execution.proto:1339

A `ResultsCachePolicy` is used for fine-grained control over how action outputs are stored in the CAS and Action Cache.

Used in: ExecuteRequest, UpdateActionResultRequest

message ServerCapabilities

remote_execution.proto:1793

A response message for [Capabilities.GetCapabilities][build.bazel.remote.execution.v2.Capabilities.GetCapabilities].

Used as response type in: Capabilities.GetCapabilities

Used as field type in: com.github.trace_machina.nativelink.events.ResponseEvent

message SymlinkAbsolutePathStrategy

remote_execution.proto:1930

Describes how the server treats absolute symlink targets.

(message has no fields)

enum SymlinkAbsolutePathStrategy.Value

remote_execution.proto:1931

Used in: CacheCapabilities

message SymlinkNode

remote_execution.proto:904

A `SymlinkNode` represents a symbolic link.

Used in: Directory

message ToolDetails

remote_execution.proto:2038

Details for the tool used to call the API.

Used in: RequestMetadata

message Tree

remote_execution.proto:1231

A `Tree` contains all the [Directory][build.bazel.remote.execution.v2.Directory] protos in a single directory Merkle tree, compressed into one message.

message UpdateActionResultRequest

remote_execution.proto:1559

A request message for [ActionCache.UpdateActionResult][build.bazel.remote.execution.v2.ActionCache.UpdateActionResult].

Used as request type in: ActionCache.UpdateActionResult

Used as field type in: com.github.trace_machina.nativelink.events.RequestEvent

message WaitExecutionRequest

remote_execution.proto:1513

A request message for [WaitExecution][build.bazel.remote.execution.v2.Execution.WaitExecution].

Used as request type in: Execution.WaitExecution

Used as field type in: com.github.trace_machina.nativelink.events.RequestEvent