Proto commits in MVIG-SJTU/RMPE

These are the commits in which the Protocol Buffers files changed (only the most recent 100 relevant commits are shown):

Commit:a6001f0
Author:Fang-Haoshu

add shuffle option for data_heatmap layer

The documentation is generated from this commit.

Commit:a731a8c
Author:Fang-Haoshu

merge file filler @daerduoCarey

Commit:dbf0b64
Author:liuxhy237

update stn, add de_transform

Commit:8616f16
Author:Fang-Haoshu

fix bug in prediction, add support for bbox scale

Commit:6743c0c
Author:liuxhy237

update files

Commit:eee56c8
Author:liuxhy237

add prediction heatmap layer

Commit:7a9b6ab
Author:liuxhy237

add detection heatmap layer

Commit:3476ff4
Author:liuxhy237

final model, add UpsampleNearest filler, merge Eltwise Affine layer @ducha-aiki

Commit:0d449e8
Author:liuxhy237

SHG runnable

Commit:c8f1fbc
Author:liuxhy237

add flip switch to heatmap_data_param

Commit:e57b2ad
Author:liuxhy237

fix bug, add HG, pass simple test

Commit:0963339
Author:liuxhy237

merge caffe-heatmap@tpfister

Commit:b351807
Author:Fred Fang
Committer:GitHub

Merge pull request #1 from weiliu89/ssd Merge ssd

Commit:1427713
Author:Wei Liu

add demo for processing video file

Commit:89eace1
Author:Wei Liu

merge master and fix conflict

Commit:bdb9457
Author:Alican Bozkurt

add default value for rms_decay
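
For context, a minimal RMSProp solver sketch (assuming upstream Caffe's string solver type and SolverParameter field names; the values are placeholders):

    # solver.prototxt (sketch; values are illustrative)
    net: "train_val.prototxt"
    type: "RMSProp"
    base_lr: 0.001
    rms_decay: 0.98   # decay rate of the squared-gradient moving average
    lr_policy: "fixed"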

Commit:5f2d845
Author:Jeff Donahue
Committer:Jeff Donahue

Add RecurrentLayer: an abstract superclass for other recurrent layer types

Commit:c419f85
Author:Jonathan L Long
Committer:Jonathan L Long

add parameter layer for learning any bottom

Commit:859cf6e
Author:Kun Wang

Fix an error in the example of ReshapeParameter; this small mistake may confuse newcomers.
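
A minimal sketch of the ReshapeParameter usage that example documents (upstream Caffe syntax; blob names are placeholders):

    layer {
      name: "reshape"
      type: "Reshape"
      bottom: "input"
      top: "output"
      reshape_param {
        # 0 copies the corresponding dimension from the bottom blob;
        # -1 infers the remaining dimension from the blob's total count
        shape { dim: 0 dim: -1 }
      }
    }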

Commit:f87f9ac
Author:Wei Liu

rebase master

Commit:77cde9c
Author:Jeff Donahue
Committer:Jeff Donahue

Net: setting `propagate_down: true` forces backprop
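
propagate_down is a repeated field on LayerParameter, one entry per bottom. A hedged sketch (layer and blob names are placeholders):

    layer {
      name: "loss"
      type: "SoftmaxWithLoss"
      bottom: "pred"
      bottom: "label"
      top: "loss"
      propagate_down: true   # force backprop to "pred"
      propagate_down: false  # never backprop to "label"
    }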

Commit:251ed5a
Author:Wei Liu

add webcam demo

Commit:3dd74cd
Author:Wei Liu

speed up nms and generate output for COCO

Commit:519a320
Author:Wei Liu

add map_object_to_agnostic to enable learning object proposal

Commit:952fd17
Author:max argus
Committer:max argus

Extend Crop to N-D, changed CropParameter.

Commit:ca9fa49
Author:max argus
Committer:max argus

Crop: fixes, tests and negative axis indexing.

Commit:64e78bd
Author:Jonathan L Long
Committer:max argus

add CropLayer: crop blob to another blob's dimensions with offsets; configure offset(s) through proto definition.
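
A sketch of the resulting CropParameter usage after the N-D generalization above (upstream Caffe field names; blob names are placeholders):

    layer {
      name: "crop"
      type: "Crop"
      bottom: "to_crop"     # cropped to the shape of the second bottom
      bottom: "reference"
      top: "cropped"
      crop_param {
        axis: 2     # crop from this axis onward (the spatial dims for NCHW)
        offset: 4   # one offset for all cropped axes, or one per axis
      }
    }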

Commit:31a9640
Author:Wei Liu

enable different choice of encoding the prior variance

Commit:d2ffca7
Author:Wei Liu

make variance repeated in PriorBoxLayer
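
A sketch of PriorBoxParameter with the now-repeated variance field (SSD-branch field names; values are placeholders):

    layer {
      name: "priorbox"
      type: "PriorBox"
      bottom: "conv_feat"
      bottom: "data"
      top: "priors"
      prior_box_param {
        min_size: 30.0
        max_size: 60.0
        aspect_ratio: 2.0
        flip: true
        clip: true
        variance: 0.1   # repeated: one value per coordinate (x, y, w, h)
        variance: 0.1
        variance: 0.2
        variance: 0.2
      }
    }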

Commit:984e302
Author:Wei Liu

merge master

Commit:bddd04b
Author:Evan Shelhamer
Committer:Evan Shelhamer

deprecate input fields and upgrade automagically

Commit:00598ca
Author:Evan Shelhamer
Committer:Evan Shelhamer

add InputLayer for Net input. Create an input layer to replace oddball Net `input` fields.
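
A minimal sketch of the replacement (upstream Caffe syntax; the shape is a placeholder):

    layer {
      name: "data"
      type: "Input"
      top: "data"
      # replaces the deprecated net-level input/input_shape fields
      input_param { shape { dim: 1 dim: 3 dim: 224 dim: 224 } }
    }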

Commit:06cb2f4
Author:Wei Liu

add more normalization to MultiBoxLoss

Commit:86f9e78
Author:Wei Liu

add logistic conf loss type

Commit:4e2173c
Author:Wei Liu

add keep_top_k in DetectionOutputLayer

Commit:8f847fa
Author:Youssef Kashef
Committer:Youssef Kashef

transpose parameter added to IP layer to support tied weights in an autoencoder. Arguments to the matrix multiplication function are conditioned on this parameter; no actual transposing takes place. Test IP gradient computation with transpose on.
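
A sketch of a tied-weight decoder using the new transpose flag (upstream Caffe's InnerProductParameter; names and sizes are placeholders):

    layer {
      name: "decode"
      type: "InnerProduct"
      bottom: "code"
      top: "reconstruction"
      param { name: "encoder_weights" }  # share weights with the encoder by name
      inner_product_param {
        num_output: 784
        transpose: true   # interpret the shared weight matrix as transposed
      }
    }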

Commit:a729366
Author:Wei Liu

add sampler and related functions

Commit:ba9e3f7
Author:Wei Liu

add support when evaluating on partial test data

Commit:a336f3e
Author:Wei Liu

add code_type

Commit:3b7f1a7
Author:Wei Liu

add variance and make max_size optional

Commit:c2a8dc8
Author:Wei Liu

add neg_overlap for selecting hard negatives

Commit:48ad4cc
Author:Wei Liu

do negative mining based on scores instead of overlap

Commit:3342f99
Author:Wei Liu

change type of size from int to float

Commit:3afdc26
Author:Wei Liu

add do_neg_mining in MultiBoxLossLayer

Commit:32ae638
Author:Wei Liu

add name_size_file in DetectionEvaluationLayer

Commit:70cd366
Author:Wei Liu

add SmoothL1LossLayer from Ross Girshick's Fast R-CNN

Commit:5f0643d
Author:Wei Liu

add MaxIntegral ap_version to match VOC2012/ILSVRC AP

Commit:2426d4e
Author:Wei Liu

add difficult property for bbox annotation

Commit:89380f1
Author:Wei Liu

set lr_mult to 0 instead of using fix_scale in NormalizeLayer to not learn scale parameter

Commit:a24f832
Author:Wei Liu

add num_classes in DetectionEvaluateLayer

Commit:b5419e3
Author:Wei Liu

add SaveOutputParameter in DetectionOutputLayer

Commit:900dee1
Author:Wei Liu

add NormalizeLayer from fcn branch

Commit:1ae883b
Author:Wei Liu

add change in proto for normalize option

Commit:e8415b1
Author:Wei Liu

add TestDetection

Commit:288493d
Author:Wei Liu

add DetectionEvaluateLayer with test

Commit:4427dac
Author:Wei Liu

add DetectionOutputLayer with test

Commit:8c488b6
Author:Wei Liu

add ApplyNMS and GetConfidenceScores to bbox_util

Commit:608d0aa
Author:Wei Liu

fix merge upstream conflict

Commit:a894b40
Author:Wei Liu

fix several bugs in MultiBoxLossLayer

Commit:b68695d
Author:Wei Liu

add PermuteLayer

Commit:2b762b0
Author:Wei Liu

fix merge upstream conflict

Commit:0816907
Author:Jeff Donahue
Committer:Jeff Donahue

Separation and generalization of ChannelwiseAffineLayer into BiasLayer and ScaleLayer. The behavior of ChannelwiseAffineLayer can be reproduced by a ScaleLayer with `scale_param { bias_term: true }`. BiasLayer and ScaleLayer each take 1 or 2 bottoms, with the output having the same shape as the first. The second input -- either another bottom or a learned parameter -- will have its axes (virtually) broadcast and tiled to have the same shape as the first, after which elementwise addition (Bias) or multiplication (Scale) is performed.
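
A sketch of the ScaleLayer configuration that reproduces ChannelwiseAffineLayer, as described above (blob names are placeholders):

    layer {
      name: "scale"
      type: "Scale"
      bottom: "x"
      top: "x_scaled"
      scale_param {
        axis: 1           # broadcast the learned scale along channels
        bias_term: true   # fold in a Bias, reproducing ChannelwiseAffine
      }
    }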

Commit:ec04197
Author:Dmytro Mishkin
Committer:Jeff Donahue

Add ChannelwiseAffine for batch norm

Commit:91676b3
Author:Wei Liu

put label_map_file in AnnotatedDataParameter

Commit:a7ac8bc
Author:Evan Shelhamer

Merge pull request #3388 from mohomran/exponential_linear_units Exponential Linear Units

Commit:bc15f86
Author:Wei Liu

add MultiBoxLossLayer and bbox_util

Commit:4a0c8a1
Author:Wei Liu

add PriorBoxLayer which generates priors from a layer

Commit:016c460
Author:Wei Liu

add LabelMap and tools for create DB to store AnnotatedDatum

Commit:de1342f
Author:Wei Liu

Add AnnotatedDataLayer

Commit:3e3e9ce
Author:Jonathan L Long
Committer:Jonathan L Long

add short description of dilation to caffe.proto

Commit:93bfcb5
Author:Fisher Yu
Committer:Jonathan L Long

add support for 2D dilated convolution
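
A sketch of a 2D dilated convolution (upstream Caffe's ConvolutionParameter; values are placeholders):

    layer {
      name: "conv_dilated"
      type: "Convolution"
      bottom: "x"
      top: "y"
      convolution_param {
        num_output: 64
        kernel_size: 3
        dilation: 2   # 3x3 kernel sampled with holes: 5x5 receptive field
      }
    }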

Commit:a668194
Author:Mohamed Omran
Committer:Mohamed Omran

ELU layer with basic tests
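
A minimal ELU sketch (upstream Caffe's ELUParameter; alpha shown at its conventional value):

    layer {
      name: "elu"
      type: "ELU"
      bottom: "x"
      top: "x"
      elu_param { alpha: 1.0 }  # y = x if x > 0, else alpha * (exp(x) - 1)
    }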

Commit:8b2aa70
Author:Carl Doersch
Committer:Carl Doersch

Better normalization options for SoftmaxWithLoss layer.
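
A sketch of the new LossParameter normalization options (upstream Caffe enum values; names are placeholders):

    layer {
      name: "loss"
      type: "SoftmaxWithLoss"
      bottom: "score"
      bottom: "label"
      top: "loss"
      loss_param {
        ignore_label: 255
        # normalize by the count of non-ignored labels;
        # other modes: FULL, BATCH_SIZE, NONE
        normalization: VALID
      }
    }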

Commit:39f69fb
Author:Jeff Donahue

Merge pull request #3229 from cdoersch/batchnorm2 Yet another batch normalization PR

Commit:a52ee65
Author:Carl Doersch
Committer:Carl Doersch

Cleanup batch norm layer, include global stats computation
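
A sketch of the cleaned-up BatchNorm layer with global statistics enabled (upstream Caffe field names; blob names are placeholders):

    layer {
      name: "bn"
      type: "BatchNorm"
      bottom: "x"
      top: "x"
      batch_norm_param {
        use_global_stats: true  # use accumulated mean/variance, not batch stats
      }
    }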

Commit:0eea815
Author:Ronghang Hu
Committer:Ronghang Hu

Change solver type to string and provide solver registry
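
After this change a solver prototxt names its type as a string resolved through the registry; a sketch:

    # old enum form (deprecated by this change):
    # solver_type: SGD
    # new string form, looked up in the solver registry:
    type: "SGD"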

Commit:321720d
Author:Evan Shelhamer

Merge pull request #3160 from shelhamer/cudnnV3 Basic cuDNN v3 support

Commit:ecac7ff
Author:Simon Layton
Committer:Evan Shelhamer

Initial cuDNN v3 support

Commit:6c02c8b
Author:Tim Meinhardt
Committer:Tim Meinhardt

Add argmax_param axis
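
A sketch of ArgMax with the new axis field (upstream Caffe syntax; names are placeholders):

    layer {
      name: "argmax"
      type: "ArgMax"
      bottom: "prob"
      top: "predicted_label"
      argmax_param { axis: 1 }  # argmax over channels at each location
    }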

Commit:9d8206e
Author:Jeff Donahue
Committer:Jeff Donahue

Im2col and Convolution layers support N spatial axes

Commit:4c2ff16
Author:Jeff Donahue
Committer:Jeff Donahue

caffe.proto: generalize ConvolutionParameter to N spatial axes

Commit:251e67a
Author:Jeff Donahue
Committer:Jeff Donahue

Add TileLayer

Commit:80579b8
Author:Evan Shelhamer

Merge pull request #2032 from jeffdonahue/embed-layer Embed layer for lookup table of one hot encodings

Commit:4e4c89b
Author:PatWie
Committer:Ronghang Hu

Adam solver. This commit implements the Adam solver by Kingma et al. for CPU and GPU. All solver parameters are defined in caffe.proto. This also adds an example for the MNIST dataset.
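
A minimal Adam solver sketch (assuming upstream Caffe's parameter names; the values mirror the paper's suggested defaults):

    # solver.prototxt (sketch)
    net: "train_val.prototxt"
    type: "Adam"
    base_lr: 0.001
    momentum: 0.9      # beta1
    momentum2: 0.999   # beta2
    delta: 1e-8        # epsilon
    lr_policy: "fixed"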

Commit:bb0a90e
Author:Ronghang Hu

Merge pull request #2903 from ronghanghu/multi_gpu Multi-GPU Data Parallelism

Commit:0d34d5b
Author:Ronghang Hu
Committer:Ronghang Hu

Data Layers Parallel for Multi-GPU. Allow data layers (and also PythonLayer when used as a data layer) to be shared among worker solvers' training nets, and also the test net, for future-proofing if one wants to do multi-GPU testing. Data layers are locked during forward to ensure sequential forward.

Commit:1ce3380
Author:Mohamed Omran
Committer:Matthias Plappert

Implement AdaDelta; add test cases; add mnist examples

Commit:bcc8f50
Author:Cyprien Noel
Committer:Evan Shelhamer

Add DataReader for parallel training with one DB session
- Make sure each solver accesses a different subset of the data
- Sequential reading of DB for performance
- Prefetch a configurable amount of data to host memory
- Distribute data to solvers in round-robin way for determinism

Commit:abe99e8
Author:Eren Golge
Committer:Ronghang Hu

Implement RMSProp solver; clean it up to fit the new solver interface that uses accumulated gradients and refactored regularization.

Commit:4d299c3
Author:Jeff Donahue
Committer:Jeff Donahue

Add EmbedLayer for inner products with sparse input (one-hot vectors), with unit tests
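
A sketch of EmbedLayer configuration (upstream Caffe's EmbedParameter; dimensions and names are placeholders):

    layer {
      name: "embed"
      type: "Embed"
      bottom: "word_ids"   # integer indices, treated as one-hot vectors
      top: "word_vecs"
      embed_param {
        input_dim: 10000   # vocabulary size (number of one-hot dimensions)
        num_output: 128    # embedding dimension
        bias_term: false
      }
    }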

Commit:4227828
Author:Jeff Donahue
Committer:Jeff Donahue

temporarily switch the snapshot_format default back to BINARYPROTO in anticipation of user issues due to issue #2885, which causes Caffe to crash when it attempts to snapshot nets with duplicate layer names

Commit:ada055b
Author:Eric Tzeng
Committer:Eric Tzeng

Snapshot model weights/solver state to HDF5 files. Summary of changes:
- HDF5 helper functions were moved into a separate file util/hdf5.cpp
- hdf5_save_nd_dataset now saves n-d blobs, can save diffs instead of data
- Minor fix for memory leak in HDF5 functions (delete instead of delete[])
- Extra methods have been added to both Net/Solver enabling snapshotting and restoring from HDF5 files
- snapshot_format was added to SolverParameter, with possible values HDF5 or BINARYPROTO (default HDF5)
- kMaxBlobAxes was reduced to 32 to match the limitations of HDF5
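
A sketch of the new solver fields (per the summary above; the prefix is a placeholder):

    snapshot: 5000
    snapshot_prefix: "snapshots/net"
    snapshot_format: HDF5   # or BINARYPROTO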

Commit:f973819
Author:Jeff Donahue
Committer:Eric Tzeng

add double_data, double_diff to BlobProto for weights/snapshots saved when using Dtype == double

Commit:a756cfe
Author:Takuya Narihira
Committer:Evan Shelhamer

PythonLayer takes parameters by string
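
A sketch of passing parameters to a PythonLayer as a string (module and class names are hypothetical; the layer parses param_str itself, e.g. as Python or JSON):

    layer {
      name: "py_data"
      type: "Python"
      top: "data"
      python_param {
        module: "my_data_layer"          # hypothetical Python module
        layer: "MyDataLayer"             # hypothetical layer class
        param_str: "{'batch_size': 32}"  # opaque string, parsed by the layer
      }
    }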

Commit:e7b2b4e
Author:philkr

ImageData layer default batch size of 1, and check for zero batch size

Commit:823d055
Author:Jeff Donahue
Committer:Jeff Donahue

Add ReductionLayer to reduce any number of "tail" axes to a scalar value. Currently implements operations SUM, MEAN, ASUM (sum of absolute values), and SUMSQ (sum of squares).
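
A sketch of ReductionLayer (upstream Caffe enum values; blob names are placeholders):

    layer {
      name: "reduce"
      type: "Reduction"
      bottom: "x"
      top: "x_sum"
      reduction_param {
        operation: SUM  # or MEAN, ASUM, SUMSQ
        axis: 1         # reduce all "tail" axes from here to one scalar per item
      }
    }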

Commit:eb442b9
Author:Jeff Donahue
Committer:Jeff Donahue

FlattenLayer gets a FlattenParameter with an axis, end_axis
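
A sketch of the new FlattenParameter (upstream Caffe field names; blob names are placeholders):

    layer {
      name: "flatten"
      type: "Flatten"
      bottom: "x"
      top: "x_flat"
      flatten_param {
        axis: 1       # first axis to flatten
        end_axis: -1  # last axis to flatten (inclusive)
      }
    }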

Commit:8c72fe3
Author:Jeff Donahue
Committer:Jeff Donahue

Add LogLayer

Commit:aeef453
Author:Evan Shelhamer

Merge pull request #1977 from shelhamer/accum-grad Decouple the computational batch size and minibatch size by accumulating gradients

Commit:8b05a02
Author:Jeff Donahue

Merge pull request #2410 from sguada/datum_transform Datum transform