Used in:
The BlobProtoVector is simply a way to pass multiple BlobProto instances around.
the actual image data, in bytes
Optionally, the datum could also hold float data.
Used in:
The filler type.
the value in constant filler
the min value in uniform filler
the max value in uniform filler
the mean value in gaussian filler
the std value in gaussian filler
the snapshot file path in from_other filler
the layer name in from_other filler
the index of blob in the layer in from_other filler
the binary file to load weights from
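The constant, uniform, and gaussian strategies above can be sketched in NumPy. The `fill` helper and its keyword names are hypothetical, standing in for the value/min/max/mean/std fields listed here:

```python
import numpy as np

def fill(shape, filler_type, value=0.0, min_val=0.0, max_val=1.0,
         mean=0.0, std=1.0, seed=0):
    """Hypothetical sketch of the filler strategies described above."""
    rng = np.random.default_rng(seed)
    if filler_type == "constant":
        # every element gets the constant `value`
        return np.full(shape, value, dtype=np.float32)
    if filler_type == "uniform":
        # draw from U[min_val, max_val)
        return rng.uniform(min_val, max_val, size=shape).astype(np.float32)
    if filler_type == "gaussian":
        # draw from N(mean, std^2)
        return rng.normal(mean, std, size=shape).astype(np.float32)
    raise ValueError(f"unknown filler type: {filler_type}")
```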
Used in:
the type of the info
how often the info is reported
Used in:
the layer parameter
the name of the bottom blobs
the name of the top blobs
Used in:
the layer name
the string to specify the layer type
Parameters to specify layers with inner products.
The number of outputs for the layer
whether to have bias terms
The filler for the weight
The filler for the bias
The padding size
The kernel size
The group size for grouped convolution
The stride
The pooling method
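Given the pad, kernel, and stride fields above, the spatial output size of a convolution or pooling layer follows the standard formula. A minimal sketch (`conv_output_size` is an illustrative name, not part of the proto):

```python
def conv_output_size(input_size, kernel, pad=0, stride=1):
    # (input + 2*pad - kernel) / stride + 1, with integer (floor) division
    return (input_size + 2 * pad - kernel) // stride + 1
```

For example, a 227-pixel input with an 11-pixel kernel, no padding, and stride 4 yields a 55-pixel output.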
dropout ratio
for local response norm
for local response norm
for local response norm
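The three fields above correspond to the window size and the two shape parameters of local response normalization. A NumPy sketch of the across-channel variant, assuming the common form b_c = a_c / (1 + (alpha/n) * sum of squares over the window)^beta; the function and parameter names are illustrative:

```python
import numpy as np

def lrn_across_channels(x, local_size=5, alpha=1e-4, beta=0.75):
    """Across-channel local response norm over an input of shape
    (channels, height, width), summing squares over a window of
    `local_size` channels centred on each channel."""
    c = x.shape[0]
    half = local_size // 2
    sq = x ** 2
    out = np.empty_like(x)
    for i in range(c):
        lo, hi = max(0, i - half), min(c, i + half + 1)
        scale = (1.0 + (alpha / local_size) * sq[lo:hi].sum(axis=0)) ** beta
        out[i] = x[i] / scale
    return out
```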
For data layers, specify the data source
For data pre-processing, we can do simple scaling and subtracting the data mean, if provided. Note that the mean subtraction is always carried out before scaling.
For data layers, specify the batch size.
For data layers, specify if we would like to randomly crop an image.
For data layers, specify if we want to randomly mirror data.
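A sketch of this preprocessing pipeline in NumPy, following the note above that mean subtraction happens before scaling; the helper names are hypothetical:

```python
import numpy as np

def preprocess(image, mean, scale=1.0):
    # Mean subtraction is always carried out before scaling.
    return (image.astype(np.float32) - mean) * scale

def random_crop_mirror(image, crop_size, rng):
    # image is (channels, height, width); take a random square crop,
    # then mirror it horizontally with probability 0.5.
    _, h, w = image.shape
    top = rng.integers(0, h - crop_size + 1)
    left = rng.integers(0, w - crop_size + 1)
    patch = image[:, top:top + crop_size, left:left + crop_size]
    if rng.random() < 0.5:
        patch = patch[:, :, ::-1]
    return patch
```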
The blobs containing the numeric parameters of the layer
The multiplier on the global learning rate for this blob. If you want to set the learning rate multiplier for one blob, you need to set it for all blobs.
The multiplier on the global weight decay for this blob.
Learning rate local policy
The rand_skip variable is for the data layer to skip a few data points, to prevent all asynchronous SGD clients from starting at the same point. The skip point is set as rand_skip * rand(0,1). Note that rand_skip should not be larger than the number of keys in the leveldb.
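The skip computation described above amounts to a single line; `data_layer_skip` is an illustrative name:

```python
import random

def data_layer_skip(rand_skip):
    # Each client skips rand_skip * rand(0,1) data points, so
    # asynchronous SGD clients start at different offsets.
    return int(rand_skip * random.random())
```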
mean on train and mean on test for the dropout layer; for debugging purposes
The Concat layer needs to specify the dimension along which the concatenation will happen; the other dimensions must be the same for all the bottom blobs. By default it concatenates blobs along the channels dimension.
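The constraint above can be illustrated with NumPy arrays in the (num, channels, height, width) layout: concatenating along the channels axis requires the other three dimensions to match.

```python
import numpy as np

a = np.zeros((2, 3, 4, 4))  # (num, channels, height, width)
b = np.zeros((2, 5, 4, 4))  # same num/height/width, different channels

# Default behaviour described above: concatenate along channels (axis 1).
c = np.concatenate([a, b], axis=1)
```

The result has shape (2, 8, 4, 4).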
Used in:
consider giving the network a name
a bunch of layers.
The input blobs to the network.
The dim of the input blobs. For each input blob there should be four values specifying the num, channels, height and width of the input blob. Thus, there should be a total of (4 * #input) numbers.
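Unpacking that flat list of dims into per-blob shapes can be sketched as follows (`input_shapes` is an illustrative helper, not part of the proto):

```python
def input_shapes(input_dims):
    # Four values per input blob: (num, channels, height, width).
    assert len(input_dims) % 4 == 0, "expect 4 values per input blob"
    return [tuple(input_dims[i:i + 4]) for i in range(0, len(input_dims), 4)]
```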
Whether the network will force every layer to carry out the backward operation. If set to false, whether to carry out backward is determined automatically according to the net structure and learning rates.
The proto file for the training net.
The proto file for the testing net.
The number of iterations for each testing phase.
The number of iterations between two testing phases.
The base learning rate
the number of iterations between displaying info. If display = 0, no info will be displayed.
the maximum number of iterations
The learning rate decay policy.
The parameter to compute the learning rate.
The parameter to compute the learning rate.
The momentum value.
The weight decay.
the stepsize for learning rate policy "step"
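Assuming the decay parameter mentioned above is the usual gamma, the "step" policy drops the learning rate by a factor of gamma every stepsize iterations; a minimal sketch:

```python
def step_lr(base_lr, gamma, stepsize, iteration):
    # "step" policy: lr = base_lr * gamma ^ floor(iteration / stepsize)
    return base_lr * gamma ** (iteration // stepsize)
```

With base_lr = 0.01, gamma = 0.1 and stepsize = 100000, the rate is 0.01 for the first 100000 iterations, then 0.001, and so on.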
The snapshot interval
The prefix for the snapshot.
whether to snapshot diff in the results or not. Snapshotting diff will help debugging but the final protocol buffer size will be much larger.
the mode the solver will use: 0 for CPU and 1 for GPU. GPU is used by default.
the device_id that will be used in GPU mode. device_id = 0 is used by default.
The information printer used to print information about the net
A message that stores the solver snapshots
The current iteration
The file that stores the learned net.
The history for sgd solvers