Describes a kind of non-linearity (threshold-like mathematical function).
Used in:
Rectified linear activation: f(x) = x < 0 ? 0 : x
Rectified linear activation, with an upper maximum of 6.0.
Rectified linear activation, with the upper maximum specified by BatchDescriptor::value_max().
Like ReluX, but passes all values in the range [-X, X].
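The activation modes above can be sketched as scalar reference functions. This is a minimal illustration in Python; the function names are illustrative and are not the proto's enum identifiers.

```python
# Reference sketches of the activation modes described above.

def relu(x):
    # Rectified linear activation: f(x) = x < 0 ? 0 : x
    return 0.0 if x < 0 else x

def relu6(x):
    # ReLU with an upper maximum of 6.0
    return min(relu(x), 6.0)

def relu_x(x, upper):
    # ReLU with a caller-supplied upper maximum
    # (the proto takes this from BatchDescriptor::value_max())
    return min(relu(x), upper)

def band_pass(x, upper):
    # Like ReluX, but passes all values in [-upper, upper]
    return max(-upper, min(x, upper))

print(relu(-3.0))             # 0.0
print(relu6(10.0))            # 6.0
print(relu_x(10.0, 4.0))      # 4.0
print(band_pass(-10.0, 4.0))  # -4.0
```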
Generic algorithm representation.
Used in:
The GPU may use 4x4 matrix FMA operations. See cuDNN's documentation for CUDNN_TENSOR_OP_MATH.
Convolution-specific parameters.
Used in:
The "accumulator" type. For example, use F32 as an accumulator for F16 convolutions. See cuDNN's cudnnConvolutionMode_t.
See cuDNN's group count.
TensorFlow node name, same as in NodeDef, for debugging purposes.
Used in:
Describes the mathematical definition used for the conv op. The popular behavior is actually called cross-correlation in mathematics, even though the operation is often referred to as convolution. See cuDNN's cudnnConvolutionMode_t.
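The distinction can be shown in one dimension: mathematical convolution flips the filter before sliding it over the input, while what deep-learning frameworks usually compute is cross-correlation, which does not flip. A minimal Python sketch:

```python
# Cross-correlation: slide the kernel over the signal without flipping it.
def cross_correlate_1d(signal, kernel):
    k = len(kernel)
    return [sum(signal[i + j] * kernel[j] for j in range(k))
            for i in range(len(signal) - k + 1)]

# True (mathematical) convolution is cross-correlation with a reversed kernel.
def convolve_1d(signal, kernel):
    return cross_correlate_1d(signal, kernel[::-1])

signal = [1, 2, 3, 4]
kernel = [1, 0, -1]
print(cross_correlate_1d(signal, kernel))  # [-2, -2]
print(convolve_1d(signal, kernel))         # [2, 2]
```

With a symmetric kernel the two results coincide, which is why the naming slip rarely matters in practice.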
Used in:
Describes how a convolution input or output layer's data is formatted.
Used in:
Naming convention:
Y <-> row, or height
X <-> column, or width
Batch <-> batch, or N
Depth <-> feature, or channel
TODO(timshen): turn them into cuDNN names, e.g. kNCHW.
cuDNN's NHWC layout
cuDNN's NCHW layout
cuDNN's NCHW_VECT_C layout
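The difference between the two dense layouts above comes down to which dimension is innermost when computing a flat memory offset. A small sketch, using the naming convention above (Batch = N, Depth = C, Y = H, X = W); the helper names are illustrative only. (NCHW_VECT_C additionally packs the channel dimension into small fixed-size vectors and is not covered here.)

```python
# Flat-offset computation for the NCHW and NHWC data layouts.

def nchw_offset(n, c, h, w, dims):
    # NCHW: W (X/column) is the fastest-varying dimension, then H, C, N.
    N, C, H, W = dims
    return ((n * C + c) * H + h) * W + w

def nhwc_offset(n, c, h, w, dims):
    # NHWC: C (Depth/channel) is the fastest-varying dimension.
    N, C, H, W = dims
    return ((n * H + h) * W + w) * C + c

dims = (2, 3, 4, 5)  # N, C, H, W
print(nchw_offset(0, 1, 2, 3, dims))  # 33
print(nhwc_offset(0, 1, 2, 3, dims))  # 40
```

The same logical element lands at different offsets in the two layouts, which is why layout mismatches silently scramble data rather than fail loudly.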
Specifies the data type used by an operation.
Used in:
Describes how a convolution filter is laid out in memory.
Used in:
Naming convention:
Y <-> row, or height
X <-> column, or width
Output <-> output feature, or N
Input <-> input feature, or C
TODO(timshen): turn them into cuDNN names, e.g. kNCHW.
cuDNN's NCHW layout
cuDNN's NHWC layout
cuDNN's NCHW_VECT_C layout
Generic tensor representation.
Used in: