Proto commits in alexgkendall/caffe-segnet

These are the commits in which the Protocol Buffers files changed (only the last 100 relevant commits are shown):

Commit:1a2cfea
Author:Alex Kendall

Added basic data augmentation (horizontal mirroring and random cropping) to the data loading layer.

The documentation is generated from this commit.

Commit:a226039
Author:Alex Kendall

Added support for Bayesian SegNet

Commit:3234883
Author:Alex Kendall

Add argmax axis param

Commit:6ddc802
Author:Alex Kendall

Added user-specified class balancing in softmax loss layer
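For context, a hedged prototxt sketch of how class balancing might be enabled in a SegNet-style loss layer; the weight_by_label_freqs and class_weighting field names (and the values shown) are assumptions drawn from SegNet example models, not verified against this exact commit:

    layer {
      name: "loss"
      type: "SoftmaxWithLoss"
      bottom: "score"
      bottom: "label"
      top: "loss"
      loss_param {
        weight_by_label_freqs: true   # assumed field: enable per-class weighting
        class_weighting: 0.25         # one weight per class; values are illustrative
        class_weighting: 4.0
        ignore_label: 11              # illustrative void-class label
      }
    }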

Commit:1f22662
Author:Kesar Breen

Remove broken support for BN layer in V1LayerParameter (fixes test failure)

Commit:03d1e9f
Author:Kesar Breen
Committer:Kesar Breen

Add SegNet implementation Copyright (c) 2015, Kesar Breen and Alex Kendall. Subject to Attribution-NonCommercial 4.0 International License (http://creativecommons.org/licenses/by-nc/4.0/legalcode)

Commit:823d055
Author:Jeff Donahue
Committer:Jeff Donahue

Add ReductionLayer to reduce any number of "tail" axes to a scalar value. Currently implements operations SUM, MEAN, ASUM (sum of absolute values), and SUMSQ (sum of squares).
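For illustration, a minimal prototxt sketch of the new layer (layer and blob names are placeholders):

    layer {
      name: "data_sum"
      type: "Reduction"
      bottom: "data"
      top: "data_sum"
      reduction_param {
        operation: SUM   # other choices: MEAN, ASUM, SUMSQ
        axis: 1          # axes 1..last are reduced to a scalar per example
      }
    }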

Commit:eb442b9
Author:Jeff Donahue
Committer:Jeff Donahue

FlattenLayer gets a FlattenParameter with an axis, end_axis
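A minimal sketch of the new parameter, assuming the usual defaults (axis 1, end_axis -1; names are placeholders):

    layer {
      name: "flat"
      type: "Flatten"
      bottom: "conv"
      top: "flat"
      flatten_param {
        axis: 1       # first axis to flatten
        end_axis: -1  # last axis to flatten
      }
    }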

Commit:8c72fe3
Author:Jeff Donahue
Committer:Jeff Donahue

Add LogLayer

Commit:aeef453
Author:Evan Shelhamer

Merge pull request #1977 from shelhamer/accum-grad: Decouple the computational batch size and minibatch size by accumulating gradients

Commit:8b05a02
Author:Jeff Donahue

Merge pull request #2410 from sguada/datum_transform: Datum transform

Commit:41cf06c
Author:Jonathan L Long
Committer:Evan Shelhamer

zero-init param diffs and accumulate gradients. With layers whose backward accumulates gradients, this effectively decouples the computational batch from the SGD minibatch: each iteration accumulates gradients over iter_size batches, then the parameters are updated.
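A sketch of the corresponding solver setting; the file name is hypothetical and the numbers are illustrative. The effective SGD minibatch is iter_size times the data layer's batch size:

    # solver.prototxt (sketch)
    net: "train_val.prototxt"   # hypothetical net definition
    base_lr: 0.01
    iter_size: 4                # accumulate gradients over 4 forward/backward passes per update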

Commit:c255709
Author:Evan Shelhamer

Merge pull request #1946 from nickcarlevaris/msra_init: Add MSRAFiller, an Xavier-like filler designed for use with ReLUs

Commit:65af68d
Author:Nick Carlevaris-Bianco
Committer:Evan Shelhamer

Added MSRAFiller, an Xavier-like filler designed for use with ReLUs instead of tanh. Based on the paper: He et al., "Delving Deep into Rectifiers: Surpassing Human-Level Performance on ImageNet Classification," 2015.
- add VarianceNorm option to FillerParameters which allows one to normalize by fan_in, fan_out or their average
- update XavierFiller to use the VarianceNorm option (default behavior unchanged)
- add tests for MSRAFiller and XavierFiller
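As an example, a hedged sketch of selecting the new filler in a convolution layer (values are illustrative):

    convolution_param {
      num_output: 64
      kernel_size: 3
      weight_filler {
        type: "msra"            # MSRAFiller
        variance_norm: FAN_IN   # or FAN_OUT, or AVERAGE
      }
    }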

Commit:dbd8319
Author:Jonathan L Long

clean up redundant message comments

Commit:352aef4
Author:Jeff Donahue

Merge pull request #2466 from ducha-aiki/mvn-less: Remove unnecessary variance computation from backward in MVN layer

Commit:e8d93cb
Author:Jeff Donahue

Merge pull request #2095 from mtamburrano/skip_propagate_down_param: Added param skip_propagate_down to LayerParameter

Commit:b866d14
Author:Dmytro Mishkin

Remove unnecessary variance computation from backward in MVN layer

Commit:c7c4c64
Author:manuele

Added "propagate_down" param to LayerParameter

Commit:4fb3c9e
Author:Simon Safar
Committer:Jeff Donahue

Added a Reshape layer for copying-free modification of blob dimensions.

Commit:fa6169e
Author:Jeff Donahue
Committer:Jeff Donahue

ReshapeLayer fixups for ND blobs

Commit:21032b2
Author:Jeff Donahue
Committer:Jeff Donahue

Add ReshapeParameter axis and num_axes to reshape only a particular span of the input shape
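A minimal sketch combining the two Reshape commits above; names and dimensions are placeholders:

    layer {
      name: "reshape"
      type: "Reshape"
      bottom: "in"
      top: "out"
      reshape_param {
        shape { dim: 0 dim: -1 }   # 0 copies the input dim, -1 is inferred
        axis: 1                    # start of the span to reshape
        num_axes: -1               # reshape through the last axis
      }
    }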

Commit:35a5df5
Author:Jeff Donahue

Merge pull request #2177 from pgao/spp_layer: Spatial Pyramid Pooling Layer

Commit:438cf0e
Author:PETER_GAO
Committer:PETER_GAO

Spatial Pyramid Pooling Layer

Commit:ca673fd
Author:Nick Carlevaris-Bianco

Added support for the original contrastive loss formulation, using (margin - d^2), through the legacy_version parameter.
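A hedged sketch of the parameter in a contrastive loss layer (blob names and the margin value are placeholders):

    layer {
      name: "loss"
      type: "ContrastiveLoss"
      bottom: "feat_a"
      bottom: "feat_b"
      bottom: "sim"
      top: "loss"
      contrastive_loss_param {
        margin: 1.0
        legacy_version: true   # use (margin - d^2) instead of the default (margin - d)^2
      }
    }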

Commit:b963008
Author:Sergio Guadarrama
Committer:Sergio Guadarrama

Allow transform of an encoded datum. Allow initializing transformed_blob from datum or transform params. Allow force_color and force_gray as transform params.

Commit:6fe2b04
Author:Jeff Donahue
Committer:Jeff Donahue

HDF5DataLayer shuffle: minor cleanup; clarification in HDF5DataParameter

Commit:249aba4
Author:wieschol
Committer:Jeff Donahue

shuffle data

Commit:bb5bf43
Author:Takuya Narihira
Committer:Takuya Narihira

PReLU Layer and its tests, as described in Kaiming He et al., "Delving Deep into Rectifiers: Surpassing Human-Level Performance on ImageNet Classification", arXiv 2015. Below are the commit message histories from development:
- PReLULayer takes FillerParameter for init
- PReLU testing consistency with ReLU
- Fix: PReLU test consistency check
- PReLU tests in-place computation, and it failed in GPU
- Fix: PReLU in-place backward in GPU
- PReLULayer called an incorrect API for copying data (caffe_gpu_memcpy). The first argument of `caffe_gpu_memcpy` should be the size of the memory region in bytes; modified to use the `caffe_copy` function.
- Fix: style errors
- Fix: number of axes of input blob must be >= 2
- Use 1D blob, zero-D blob
- Rename: hw -> dim
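A minimal prototxt sketch of the layer as described above (the initial slope value is illustrative):

    layer {
      name: "prelu1"
      type: "PReLU"
      bottom: "conv1"
      top: "conv1"
      prelu_param {
        filler { type: "constant" value: 0.25 }  # initializer for the learned slopes
        channel_shared: false                    # one slope per channel
      }
    }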

Commit:6ea7a66
Author:max argus
Committer:Jeff Donahue

AccuracyLayer: add ignore_label param
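A minimal sketch of the new parameter; the label value is illustrative:

    layer {
      name: "accuracy"
      type: "Accuracy"
      bottom: "score"
      bottom: "label"
      top: "accuracy"
      accuracy_param {
        ignore_label: 255   # instances with this label are excluded from the accuracy count
      }
    }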

Commit:7a40f74
Author:Jeff Donahue
Committer:Jeff Donahue

Fixup AccuracyLayer like SoftmaxLossLayer in #1970 -- fixes #2063

Commit:7462c84
Author:Jeff Donahue
Committer:Jeff Donahue

DummyDataLayer outputs blobs of arbitrary shape

Commit:b868916
Author:Jeff Donahue
Committer:Jeff Donahue

SliceLayer: generalized Blob axes

Commit:abec302
Author:Jeff Donahue
Committer:Jeff Donahue

SoftmaxLayer: generalized Blob axes

Commit:8afdcd0
Author:Jeff Donahue
Committer:Jeff Donahue

ConcatLayer: generalized Blob axes

Commit:29581e6
Author:Jeff Donahue
Committer:Jeff Donahue

InnerProductLayer can multiply along any axis

Commit:1434e87
Author:Jeff Donahue
Committer:Jeff Donahue

Blobs are ND arrays (for N not necessarily equal to 4): vector<int> shape_ instead of (num, channels, height, width).

Commit:5407f82
Author:Jeff Donahue
Committer:Jeff Donahue

Add BlobShape message; use for Net input shapes
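For example, net input shapes can then be written with the BlobShape message (dimensions here are illustrative):

    input: "data"
    input_shape { dim: 1 dim: 3 dim: 360 dim: 480 }   # N x C x H x W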

Commit:9114424
Author:Evan Shelhamer

Merge pull request #1910 from philkr/encoded: add force_encoded_color flag to the data layer and warn about mixed encoding

Commit:a2f7f47
Author:philkr
Committer:philkr

Added a force_encoded_color flag to the data layer. Prints a warning if images of different channel dimensions are encoded together.

Commit:6eb0931
Author:Evan Shelhamer
Committer:Evan Shelhamer

give phase to Net and Layer. Give the responsibility for phase to Net and Layer, making phase an immutable choice at instantiation and dropping it from the Caffe singleton.

Commit:d94f107
Author:Jonathan L Long
Committer:Jonathan L Long

[pycaffe] allow Layer to be extended from Python. This is done by adding PythonLayer as a boost::python HeldType.

Commit:f38ddef
Author:Jeff Donahue
Committer:Jeff Donahue

Add gradient clipping -- limit L2 norm of parameter gradients
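A sketch of the corresponding solver field; the threshold is illustrative:

    # solver.prototxt (sketch)
    clip_gradients: 35.0   # rescale gradients whenever their global L2 norm exceeds this value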

Commit:6a22697
Author:Jeff Donahue
Committer:Jeff Donahue

fix for layer-type-str: loss_param and DECONVOLUTION type should have been included in V1LayerParameter, get upgraded

Commit:11a4c16
Author:Jeff Donahue
Committer:Jeff Donahue

start layer parameter field IDs at 100 (always want them printed at the end, and want to allow more fields to be added in the future, so reserve fields 10-99 for that purpose)

Commit:2e6a82c
Author:Jeff Donahue
Committer:Jeff Donahue

automagic upgrade for v1->v2

Commit:af37eac
Author:Jeff Donahue
Committer:Jeff Donahue

'layers' -> 'layer'

Commit:bb5ba1b
Author:Jeff Donahue
Committer:Jeff Donahue

restore upgrade_proto

Commit:78b02e5
Author:Jeff Donahue
Committer:Jeff Donahue

add message ParamSpec to replace param name, blobs_lr, weight_decay, ...
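A hedged sketch of the resulting per-parameter syntax (layer and parameter names are placeholders):

    layer {
      name: "fc6"
      type: "InnerProduct"
      bottom: "pool5"
      top: "fc6"
      param { name: "fc6_w" lr_mult: 1 decay_mult: 1 }  # replaces the blobs_lr / weight_decay lists
      param { name: "fc6_b" lr_mult: 2 decay_mult: 0 }
      inner_product_param { num_output: 4096 }
    }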

Commit:62d1d3a
Author:Jeff Donahue
Committer:Jeff Donahue

get rid of NetParameterPrettyPrint as layer is now after inputs (whoohoo)

Commit:3b13846
Author:Jeff Donahue
Committer:Jeff Donahue

Layer type is a string
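Together with the 'layers' -> 'layer' rename above, definitions move from an enum type to a string type; a sketch of the before and after:

    # old V1 format (enum type)
    layers { name: "relu1" type: RELU bottom: "fc1" top: "fc1" }
    # new format (string type)
    layer { name: "relu1" type: "ReLU" bottom: "fc1" top: "fc1" }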

Commit:9767b99
Author:Evan Shelhamer

Merge pull request #1615 from longjon/deconv-layer: Add deconvolution layer with refactoring of convolution layer to share code

Commit:cff3007
Author:Evan Shelhamer

Merge pull request #1654 from longjon/softmax-missing-values: Add missing value support to SoftmaxLossLayer

Commit:3519d05
Author:Jeff Donahue
Committer:Jeff Donahue

debug_info in NetParameter so it can be enabled outside training

Commit:1304173
Author:Jeff Donahue
Committer:Jeff Donahue

Make comments for sparse GaussianFiller match actual behavior (Fixes #1497 reported by @denizyuret)

Commit:3617352
Author:Jonathan L Long
Committer:Jonathan L Long

add DeconvolutionLayer, using BaseConvolutionLayer

Commit:34321e4
Author:Jonathan L Long
Committer:Jonathan L Long

add spatial normalization option to SoftmaxLossLayer. With missing values (and batches of varying spatial dimension), normalizing each batch across instances can inappropriately give different instances different weights, so we give the option of simply normalizing by the batch size instead.
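Together with the missing-value commit below, this behavior is controlled through LossParameter; a hedged sketch (the label value is illustrative):

    loss_param {
      ignore_label: 255   # instances with this label are skipped
      normalize: false    # divide by batch size instead of the number of valid instances
    }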

Commit:5843b52
Author:Jonathan L Long
Committer:Jonathan L Long

add missing value support to SoftmaxLossLayer

Commit:18749f8
Author:Sergio
Committer:Sergio

Added Multistep, Poly and Sigmoid learning rate decay policies.
Conflicts: include/caffe/solver.hpp, src/caffe/proto/caffe.proto, src/caffe/solver.cpp
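A sketch of the "multistep" policy in a solver file (values are illustrative); in upstream Caffe, "poly" decays roughly as base_lr * (1 - iter/max_iter)^power and "sigmoid" as base_lr / (1 + exp(-gamma * (iter - stepsize))):

    # solver.prototxt (sketch)
    lr_policy: "multistep"
    gamma: 0.1
    stepvalue: 10000
    stepvalue: 20000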

Commit:9e756bf
Author:qipeng
Committer:Sergio

Display averaged loss over the last several iterations

Commit:bdd0a00
Author:Sergio Guadarrama

Merge pull request #190 from sguada/new_lr_policies: New lr policies, MultiStep and StepEarly

Commit:e9d6e5a
Author:Sergio
Committer:Sergio

Add root_folder to ImageDataLayer

Commit:14f548d
Author:Sergio
Committer:Sergio

Added cache_images to WindowDataLayer. Added root_folder to WindowDataLayer to locate images.

Commit:9fc7f36
Author:Sergio
Committer:Sergio

Added encoded datum to io

Commit:6ad4f95
Author:Kevin James Matzen
Committer:Kevin James Matzen

Refactored leveldb and lmdb code.

Commit:b025da7
Author:Sergio
Committer:Sergio

Added Multistep, Poly and Sigmoid learning rate decay policies.
Conflicts: include/caffe/solver.hpp, src/caffe/proto/caffe.proto, src/caffe/solver.cpp

Commit:914da95
Author:Jonathan L Long
Committer:Jonathan L Long

correct naming in comment and message about average_loss

Commit:0ba046b
Author:Sergio Guadarrama

Merge pull request #1070 from sguada/move_data_mean: Refactor data_transform to allow datum, cv::Mat and Blob transformation

Commit:a9572b1
Author:Sergio
Committer:Sergio

Added mean_value to specify mean channel subtraction. Added an example of use to models/bvlc_reference_caffenet/train_val_mean_value.prototxt.
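mean_value is a repeated field, one value per channel (or a single value applied to all channels); the BGR means below are illustrative:

    transform_param {
      mean_value: 104   # B
      mean_value: 117   # G
      mean_value: 123   # R
    }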

Commit:760ffaa
Author:Sergio
Committer:Sergio

Added global_pooling to set the kernel size equal to the bottom size. Added check for padding and stride with global_pooling.
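A minimal sketch of the new option:

    pooling_param {
      pool: AVE
      global_pooling: true   # kernel size is set to the full bottom height and width
    }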

Commit:4602439
Author:Sergio
Committer:Sergio

Initial cv::Mat transformation. Added cv::Mat transformation to ImageDataLayer.
Conflicts: src/caffe/layers/image_data_layer.cpp
Added transform Datum to Blob.
Conflicts: src/caffe/layers/base_data_layer.cpp, src/caffe/layers/base_data_layer.cu
Added transform cv::Mat to Blob. Added transform Vector<Datum> to Blob.
Conflicts: src/caffe/data_transformer.cpp

Commit:7995a38
Author:Jeff Donahue
Committer:Jeff Donahue

Add ExpLayer to calculate y = base ^ (scale * x + shift)
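A sketch of the corresponding parameter message; with the default base of -1, the natural base e is used:

    exp_param {
      base: -1.0   # -1 means base e, i.e. y = exp(scale * x + shift)
      scale: 1.0
      shift: 0.0
    }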

Commit:e6ba910
Author:Jeff Donahue

caffe.proto: do some minor cleanup (fix comments, alphabetization)

Commit:c76ba28
Author:Jeff Donahue

Merge pull request #1096 from qipeng/smoothed-cost: Display averaged loss over the last several iterations

Commit:502141d
Author:Karen Simonyan
Committer:Karen Simonyan

adds a parameter to the LRN layer (denoted as "k" in [Krizhevsky et al., NIPS 2012])
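A sketch of the parameter in LRNParameter; the other values are common AlexNet-style settings and are illustrative:

    lrn_param {
      local_size: 5
      alpha: 0.0001
      beta: 0.75
      k: 2.0   # the additive constant "k" from Krizhevsky et al.
    }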

Commit:aeb0e98
Author:Karen Simonyan
Committer:Karen Simonyan

added support for "k" LRN parameter to upgrade_proto

Commit:7c3c089
Author:Evan Shelhamer

Merge pull request #959 from nickcarlevaris/contrastive_loss: Add contrastive loss layer, tests, and a siamese network example

Commit:03e0e01
Author:qipeng

Display averaged loss over the last several iterations

Commit:e294f6a
Author:Jonathan L Long

fix spelling error in caffe.proto

Commit:d54846c
Author:Jonathan L Long

fix out-of-date next ID comment for SolverParameter

Commit:d149c9a
Author:Nick Carlevaris-Bianco
Committer:Nick Carlevaris-Bianco

Added contrastive loss layer, associated tests, and a siamese network example using shared weights and the contrastive loss.

Commit:761c815
Author:to3i
Committer:Jeff Donahue

Implemented elementwise max layer

Commit:77d9124
Author:Evan Shelhamer
Committer:Evan Shelhamer

add cuDNN to build

Commit:a3dcca2
Author:Evan Shelhamer
Committer:Evan Shelhamer

add engine parameter for multiple computational strategies. Add `engine` switch to layers for selecting a computational backend when there is a choice. Currently the standard Caffe implementation is the only backend.

Commit:cd52392
Author:Evan Shelhamer
Committer:Evan Shelhamer

groom proto: sort layer type parameters, put loss_weight after basics

Commit:50d9d0d
Author:Evan Shelhamer

Merge pull request #1036 from longjon/test-initialization-param: Add test_initialization option to allow skipping initial test

Commit:d8f56fb
Author:Jeff Donahue
Committer:Jonathan L Long

add SILENCE layer -- takes one or more inputs and produces no output. This is useful for suppressing undesired outputs.

Commit:2bdf516
Author:Jonathan L Long
Committer:Jonathan L Long

add test_initialization option to allow skipping initial test
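A sketch of the solver fields involved (values are illustrative):

    # solver.prototxt (sketch)
    test_iter: 100
    test_interval: 1000
    test_initialization: false   # skip the test pass before the first training iteration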

Commit:4c35ad2
Author:Kai Li
Committer:Kai Li

Add transformer to the memory data layer

Commit:3c9a13c
Author:Kai Li
Committer:Kai Li

Move transform param one level up in the proto to reduce redundancy

Commit:dbb9296
Author:Jeff Donahue
Committer:Jeff Donahue

cleanup caffe.proto

Commit:a683c40
Author:qipeng
Committer:Jeff Donahue

Added L1 regularization support for the weights

Commit:b0ec531
Author:qipeng
Committer:Jeff Donahue

fixed caffe.proto after a mistaken rebase

Commit:23d4430
Author:qipeng
Committer:Jeff Donahue

fixes after rebase

Commit:29b3b24
Author:qipeng
Committer:Jeff Donahue

proto conflict, lint, and math_functions (compiler complaint)

Commit:910db97
Author:Jeff Donahue
Committer:Jeff Donahue

Add "stable_prod_grad" option (on by default) to ELTWISE layer to compute the eltwise product gradient using a slower but stabler formula.

Commit:3141e71
Author:Evan Shelhamer
Committer:Evan Shelhamer

restore old data transformation parameters for compatibility

Commit:a446097
Author:TANGUY Arnaud

Refactor ImageDataLayer to use DataTransformer

Commit:f6ffd8e
Author:TANGUY Arnaud
Committer:TANGUY Arnaud

Refactor DataLayer using a new DataTransformer. Start the refactoring of the data layers to avoid data transformation code duplication. So far, only DataLayer has been done.

Commit:ececfc0
Author:Adam Kosiorek
Committer:Jeff Donahue

cmake build system