Message that stores parameters used by ArgMaxLayer
If true produce pairs (argmax, maxval)
The BlobProtoVector is simply a way to pass multiple BlobProto instances around.
Message that stores parameters used by ConcatLayer
The Concat layer needs to specify the dimension along which the concatenation will happen; the other dimensions must be the same for all the bottom blobs. By default it will concatenate blobs along the channels dimension.
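As an illustrative prototxt sketch (layer and blob names are hypothetical), a concatenation layer of this era might be written as:

```protobuf
layers {
  name: "concat"
  type: CONCAT
  bottom: "branch_a"
  bottom: "branch_b"
  top: "merged"
  concat_param {
    concat_dim: 1  # concatenate along channels (the default)
  }
}
```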
Message that stores parameters used by ConvolutionLayer
The number of outputs for the layer
whether to have bias terms
The padding size
The kernel size
The group size for group conv
The stride
The filler for the weight
The filler for the bias
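The parameters above can be sketched together in a hypothetical prototxt layer definition (names and values are illustrative, loosely modeled on an AlexNet-style first convolution):

```protobuf
layers {
  name: "conv1"
  type: CONVOLUTION
  bottom: "data"
  top: "conv1"
  convolution_param {
    num_output: 96      # number of output feature maps
    bias_term: true     # include a bias term
    pad: 0
    kernel_size: 11
    group: 1            # group size for group convolution
    stride: 4
    weight_filler { type: "gaussian" std: 0.01 }
    bias_filler { type: "constant" value: 0 }
  }
}
```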
Message that stores parameters used by DataLayer
Specify the data source.
For data pre-processing, we can do simple scaling and subtracting the data mean, if provided. Note that the mean subtraction is always carried out before scaling.
Specify the batch size.
Specify if we would like to randomly crop an image.
Specify if we want to randomly mirror data.
The rand_skip variable tells the data layer to skip a few data points so that asynchronous SGD clients do not all start at the same point. The skip point is set to rand_skip * rand(0,1). Note that rand_skip should not be larger than the number of keys in the leveldb.
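A hedged prototxt sketch of a data layer using these fields (paths and values are illustrative):

```protobuf
layers {
  name: "data"
  type: DATA
  top: "data"
  top: "label"
  data_param {
    source: "train_leveldb"        # illustrative path to the leveldb
    batch_size: 64
    scale: 0.00390625              # 1/255, applied after mean subtraction
    mean_file: "mean.binaryproto"  # illustrative path
    crop_size: 227                 # random crop
    mirror: true                   # random mirroring
    rand_skip: 0
  }
}
```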
the actual image data, in bytes
Optionally, the datum could also hold float data.
Message that stores parameters used by DropoutLayer
dropout ratio
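A minimal sketch of a dropout layer in prototxt (names are illustrative; computing in place by reusing the bottom name as the top is a common Caffe idiom):

```protobuf
layers {
  name: "drop6"
  type: DROPOUT
  bottom: "fc6"
  top: "fc6"    # in-place computation
  dropout_param { dropout_ratio: 0.5 }
}
```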
Message that stores parameters used by DummyDataLayer. DummyDataLayer fills any number of arbitrarily shaped blobs with random (or constant) data generated by "Fillers" (see "message FillerParameter").
This layer produces N >= 1 top blobs. DummyDataParameter must specify 1 or N num, N channels, N height, and N width fields, and must specify 0, 1 or N data_fillers. If 0 data_fillers are specified, ConstantFiller with a value of 0 is used. If 1 data_filler is specified, it is applied to all top blobs. If N are specified, the ith is applied to the ith top blob.
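A hypothetical sketch of the N-top case in prototxt (N = 2 tops, so N values per dimension field; one data_filler applied to both):

```protobuf
layers {
  name: "dummy"
  type: DUMMY_DATA
  top: "a"
  top: "b"
  dummy_data_param {
    # N = 2 top blobs, so N values for each dimension field
    num: 32      num: 32
    channels: 3  channels: 1
    height: 28   height: 28
    width: 28    width: 28
    # a single filler is applied to all top blobs
    data_filler { type: "gaussian" std: 1.0 }
  }
}
```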
Message that stores parameters used by EltwiseLayer
element-wise operation
blob-wise coefficient for SUM operation
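A sketch of an element-wise layer using the SUM operation with blob-wise coefficients (layer and blob names are illustrative):

```protobuf
layers {
  name: "diff"
  type: ELTWISE
  bottom: "a"
  bottom: "b"
  top: "diff"
  eltwise_param {
    operation: SUM
    coeff: 1.0    # one coefficient per bottom blob
    coeff: -1.0   # so this computes a - b
  }
}
```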
The filler type.
the value in constant filler
the min value in uniform filler
the max value in uniform filler
the mean value in Gaussian filler
the std value in Gaussian filler
The expected number of non-zero input weights for a given output in Gaussian filler -- the default -1 means don't perform sparsification.
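Fillers are nested inside layer-specific parameter messages. A few hedged sketches of how the fields above map onto the filler types (values are illustrative):

```protobuf
# constant filler (e.g. for biases)
bias_filler { type: "constant" value: 0.1 }

# uniform filler drawing from [min, max]
weight_filler { type: "uniform" min: -0.05 max: 0.05 }

# Gaussian filler; sparse: -1 (the default) disables sparsification
weight_filler { type: "gaussian" mean: 0.0 std: 0.01 sparse: -1 }
```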
Message that stores parameters used by HDF5DataLayer
Specify the data source.
Specify the batch size.
Message that stores parameters used by HDF5OutputLayer
Specify the norm to use: L1 or L2.
Message that stores parameters used by ImageDataLayer
Specify the data source.
For data pre-processing, we can do simple scaling and subtracting the data mean, if provided. Note that the mean subtraction is always carried out before scaling.
Specify the batch size.
Specify if we would like to randomly crop an image.
Specify if we want to randomly mirror data.
The rand_skip variable tells the data layer to skip a few data points so that asynchronous SGD clients do not all start at the same point. The skip point is set to rand_skip * rand(0,1). Note that rand_skip should not be larger than the number of keys in the leveldb.
Whether or not ImageLayer should shuffle the list of files at every epoch.
It will also resize images if new_height or new_width are not zero.
By default assumes images are in color
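A hedged prototxt sketch of an image data layer (the file-list path and sizes are illustrative):

```protobuf
layers {
  name: "data"
  type: IMAGE_DATA
  top: "data"
  top: "label"
  image_data_param {
    source: "file_list.txt"  # illustrative path to the image list
    batch_size: 32
    shuffle: true            # reshuffle the file list every epoch
    new_height: 256          # non-zero, so images are resized
    new_width: 256
  }
}
```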
Message that stores parameters used by InfogainLossLayer
Specify the infogain matrix source.
Message that stores parameters used by InnerProductLayer
The number of outputs for the layer
whether to have bias terms
The filler for the weight
The filler for the bias
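These fields can be sketched in a hypothetical fully-connected layer definition (names and values are illustrative):

```protobuf
layers {
  name: "fc6"
  type: INNER_PRODUCT
  bottom: "pool5"
  top: "fc6"
  inner_product_param {
    num_output: 4096   # number of outputs for the layer
    bias_term: true
    weight_filler { type: "gaussian" std: 0.005 }
    bias_filler { type: "constant" value: 0.1 }
  }
}
```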
Message that stores parameters used by LRNLayer
NOTE Update the next available ID when you add a new LayerParameter field. LayerParameter next available ID: 27 (last added: dummy_data_param)
the name of the bottom blobs
the name of the top blobs
the layer name
the layer type from the enum above
The blobs containing the numeric parameters of the layer
The ratio by which the global learning rate is multiplied. If you want to set the learning ratio for one blob, you need to set it for all blobs.
The multiplier applied to the global weight decay.
DEPRECATED: The layer parameters specified as a V0LayerParameter. This should never be used by any code except to upgrade to the new LayerParameter specification.
NOTE Add new LayerTypes to the enum below in lexicographical order (other than starting with NONE), starting with the next available ID in the comment line above the enum. Update the next available ID when you add a new LayerType. LayerType next available ID: 33 (last added: DUMMY_DATA)
"NONE" layer type is 0th enum element so that we don't cause confusion by defaulting to an existent LayerType (instead, should usually error if the type is unspecified).
Message that stores parameters used by MemoryDataLayer
consider giving the network a name
a bunch of layers.
The input blobs to the network.
The dim of the input blobs. For each input blob there should be four values specifying the num, channels, height and width of the input blob. Thus, there should be a total of (4 * #input) numbers.
Whether the network will force every layer to carry out backward operation. If set to False, whether to carry out backward is determined automatically according to the net structure and learning rates.
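A hedged sketch of the top-level net fields in prototxt (the name and dimensions are illustrative; note the 4 * #input rule, four input_dim values for the one input blob):

```protobuf
name: "ExampleNet"
input: "data"
input_dim: 1    # num
input_dim: 3    # channels
input_dim: 224  # height
input_dim: 224  # width
force_backward: false
layers {
  # ... layer definitions go here ...
}
```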
A near-duplicate of NetParameter with fields re-numbered to beautify automatic prototext dumps. The main practical purpose is to print inputs before layers, because having inputs at the end looks weird. NetParameterPrettyPrint should never be used in code except for conversion FROM NetParameter and subsequent dumping to proto text file.
Message that stores parameters used by PoolingLayer
The pooling method
The kernel size
The stride
The padding size -- currently implemented only for average and max pooling. average pooling zero pads. max pooling -inf pads.
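A sketch of a max-pooling layer in prototxt (names and values are illustrative):

```protobuf
layers {
  name: "pool1"
  type: POOLING
  bottom: "conv1"
  top: "pool1"
  pooling_param {
    pool: MAX
    kernel_size: 3
    stride: 2
    pad: 0   # max pooling pads with -inf
  }
}
```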
Message that stores parameters used by PowerLayer
PowerLayer computes outputs y = (shift + scale * x) ^ power.
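For example, y = x^2 can be expressed with shift = 0, scale = 1, power = 2. A hypothetical prototxt sketch (names are illustrative):

```protobuf
layers {
  name: "square"
  type: POWER
  bottom: "in"
  top: "out"
  power_param {
    power: 2.0
    scale: 1.0
    shift: 0.0   # y = (0 + 1 * x) ^ 2
  }
}
```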
{train,test}_net specify a path to a file containing the {train,test} net parameters; {train,test}_net_param specify the net parameters directly inside the SolverParameter. Only either train_net or train_net_param (not both) should be specified. You may specify 0 or more test_net and/or test_net_param. All nets specified using test_net_param will be tested first, followed by all nets specified using test_net (each processed in the order specified in the prototxt).
The proto filename for the train net.
The proto filenames for the test nets.
Full params for the train net.
Full params for the test nets.
The number of iterations for each testing phase.
The number of iterations between two testing phases.
The base learning rate
the number of iterations between displaying info. If display = 0, no info will be displayed.
the maximum number of iterations
The learning rate decay policy.
The parameter to compute the learning rate.
The parameter to compute the learning rate.
The momentum value.
The weight decay.
the stepsize for learning rate policy "step"
The snapshot interval
The prefix for the snapshot.
whether to snapshot diff in the results or not. Snapshotting diff will help debugging but the final protocol buffer size will be much larger.
the device_id that will be used in GPU mode. Defaults to device_id = 0.
If non-negative, the seed with which the Solver will initialize the Caffe random number generator -- useful for reproducible results. Otherwise, (and by default) initialize using a seed derived from the system clock.
the mode the solver will use: 0 for CPU and 1 for GPU. GPU is the default.
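The solver fields above can be sketched as a hypothetical solver prototxt (all paths and values are illustrative, roughly in the style of the classic "step" policy):

```protobuf
train_net: "train.prototxt"   # illustrative path
test_net: "test.prototxt"     # illustrative path
test_iter: 100                # iterations per testing phase
test_interval: 500            # iterations between testing phases
base_lr: 0.01
lr_policy: "step"
gamma: 0.1                    # lr multiplier applied every stepsize iters
stepsize: 10000
momentum: 0.9
weight_decay: 0.0005
display: 100
max_iter: 45000
snapshot: 5000
snapshot_prefix: "snapshots/net"   # illustrative prefix
solver_mode: 1                # 1 = GPU, 0 = CPU
```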
A message that stores the solver snapshots
The current iteration
The file that stores the learned net.
The history for sgd solvers
Message that stores parameters used by ThresholdLayer
Strictly positive values
DEPRECATED: V0LayerParameter is the old way of specifying layer parameters in Caffe. We keep this message type around for legacy support.
the layer name
the string to specify the layer type
Parameters to specify layers with inner products.
The number of outputs for the layer
whether to have bias terms
The filler for the weight
The filler for the bias
The padding size
The kernel size
The group size for group conv
The stride
The pooling method
dropout ratio
for local response norm
for local response norm
for local response norm
For data layers, specify the data source
For data pre-processing, we can do simple scaling and subtracting the data mean, if provided. Note that the mean subtraction is always carried out before scaling.
For data layers, specify the batch size.
For data layers, specify if we would like to randomly crop an image.
For data layers, specify if we want to randomly mirror data.
The blobs containing the numeric parameters of the layer
The ratio by which the global learning rate is multiplied. If you want to set the learning ratio for one blob, you need to set it for all blobs.
The multiplier applied to the global weight decay.
The rand_skip variable tells the data layer to skip a few data points so that asynchronous SGD clients do not all start at the same point. The skip point is set to rand_skip * rand(0,1). Note that rand_skip should not be larger than the number of keys in the leveldb.
Fields related to detection (det_*): foreground (object) overlap threshold
background (non-object) overlap threshold
Fraction of batch that should be foreground objects
Amount of contextual padding to add around a window (used only by the window_data_layer)
Mode for cropping out a detection window. warp: cropped window is warped to a fixed size and aspect ratio. square: the tightest square around the window is cropped.
For ReshapeLayer, one needs to specify the new dimensions.
Whether or not ImageLayer should shuffle the list of files at every epoch. It will also resize images if new_height or new_width are not zero.
For ConcatLayer, one needs to specify the dimension for concatenation, and the other dimensions must be the same for all the bottom blobs. By default it will concatenate blobs along the channels dimension.
Message that stores parameters used by WindowDataLayer
Specify the data source.
For data pre-processing, we can do simple scaling and subtracting the data mean, if provided. Note that the mean subtraction is always carried out before scaling.
Specify the batch size.
Specify if we would like to randomly crop an image.
Specify if we want to randomly mirror data.
Foreground (object) overlap threshold
Background (non-object) overlap threshold
Fraction of batch that should be foreground objects
Amount of contextual padding to add around a window (used only by the window_data_layer)
Mode for cropping out a detection window. warp: cropped window is warped to a fixed size and aspect ratio. square: the tightest square around the window is cropped.