Proto commits in happynear/caffe-windows

These are the commits in which the Protocol Buffers files changed (only the last 100 relevant commits are shown):

Commit:474eb44
Author:happynear

anneal add

Commit:0f127f2
Author:happynear

margin determined by radial removal.

Commit:2662861
Author:happynear

label specific add annealing

The documentation is generated from this commit.

Commit:364a43d
Author:happynear

label specific add

Commit:2e9ade3
Author:happynear

layers for AM-Softmax
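
For reference, the AM-Softmax loss that these layers implement (Wang et al., 2018) has the published form, with \theta_{y_i} the angle between the feature and the target-class weight, scale s, and additive margin m:

    L_{AMS} = -\frac{1}{n}\sum_{i}\log\frac{e^{s(\cos\theta_{y_i}-m)}}{e^{s(\cos\theta_{y_i}-m)}+\sum_{j\neq y_i}e^{s\cos\theta_j}}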

Commit:73e61dc
Author:happynear

softmax margin

Commit:c7e8ebe
Author:happynear

auto-tune

Commit:c4c0acc
Author:happynear

pass bp

Commit:34a8d8b
Author:happynear

scale for angle

Commit:f1c016e
Author:happynear

add some experimental layers

Commit:b04d284
Author:happynear

feature decay loss

Commit:efa9825
Author:happynear

label smoothing
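
Label smoothing (Szegedy et al., 2016) replaces the one-hot target with a softened distribution over K classes; for smoothing weight \epsilon, the target becomes:

    q_k = (1-\epsilon)\,\delta_{k,y} + \frac{\epsilon}{K}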

Commit:cbccf18
Author:happynear

weighted mean

Commit:ba9558c
Author:happynear

power

Commit:a506faa
Author:happynear

label specific affine

Commit:3936bb0
Author:happynear

split forward and backward in channel scale layer.

Commit:4d176ed
Author:happynear

random erasing

Commit:35132c0
Author:happynear

soft margin

Commit:69b2b88
Author:happynear

auto_tune

Commit:eff73fe
Author:happynear

label specific margin

Commit:b6732d0
Author:happynear

normalize layer: fix gradient (don't use this parameter).

Commit:2ee617d
Author:happynear

x / norm(x) * norm(x)
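
Read literally, this is a forward-pass identity, y = \frac{x}{\lVert x\rVert_2}\cdot\lVert x\rVert_2 = x. One plausible reading, not confirmed by the diff, is that the trailing norm is treated as a constant during backprop, so the gradient is computed as if the layer were the normalization x/\lVert x\rVert_2 rescaled by a fixed factor.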

Commit:bef2ce5
Author:happynear

focal loss for softmax
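
Focal loss (Lin et al., 2017) down-weights well-classified examples; with p_t the predicted probability of the true class and focusing parameter \gamma:

    FL(p_t) = -(1-p_t)^{\gamma}\log(p_t)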

Commit:e9cfdf7
Author:happynear

label specific power

Commit:b43f45a
Author:happynear

focal loss from https://github.com/sciencefans/Focal-Loss

Commit:fedea42
Author:happynear

label smoothing

Commit:96b7a46
Author:happynear

change rescale strategy

Commit:31199e2
Author:happynear

feature incay: add force incay.

Commit:86eaadb
Author:happynear

base should be 0

Commit:fc1c285
Author:happynear

add scaling lambda to label specified rescale layer

Commit:cee7837
Author:happynear

label specific rescale made compatible with innerproduct.

Commit:9cbf265
Author:happynear

SphereFace
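
SphereFace's A-Softmax (Liu et al., 2017) normalizes the classifier weights (\lVert W_j\rVert = 1, zero bias) and applies a multiplicative angular margin m to the target angle via the monotonic extension \psi of \cos(m\theta):

    L = -\frac{1}{n}\sum_{i}\log\frac{e^{\lVert x_i\rVert\,\psi(\theta_{y_i})}}{e^{\lVert x_i\rVert\,\psi(\theta_{y_i})}+\sum_{j\neq y_i}e^{\lVert x_i\rVert\cos\theta_j}}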

Commit:3939a1f
Author:Feng Wang
Committer:GitHub

Revert "merge ms to hog"

Commit:7612850
Author:happynear

resize layer

Commit:4872a48
Author:happynear

Merge branch 'master' of https://github.com/bvlc/caffe into ms

# Conflicts:
# python/caffe/_caffe.cpp
# src/caffe/layers/batch_norm_layer.cpp
# src/caffe/layers/batch_norm_layer.cu
# src/caffe/layers/cudnn_conv_layer.cpp
# src/caffe/layers/cudnn_relu_layer.cpp
# src/caffe/proto/caffe.proto

Commit:cca872c
Author:happynear

ordinal regression

Commit:d18b2ee
Author:happynear

bn layer update

Commit:f71bcf0
Author:happynear

bn layer

Commit:6b62263
Author:happynear

mae for contrastive loss

Commit:2cbc1bb
Author:Evan Shelhamer
Committer:GitHub

Merge pull request #3855 from shaibagon/upgrade_infogain

InfogainLoss layer can normalize, ignore, and more

Commit:850ffd8
Author:Cyprien Noel

Remove missed legacy parallel code

Commit:11930f1
Author:Jonathan R. Williford

Clarify batch norm parameter documentation.

Commit:8d328d8
Author:happynear

second order operation for eltwise layer

Commit:5e4452d
Author:happynear

infimum loss layer, label specific rescale layer

Commit:2876e8c
Author:happynear

large margin softmax
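
Large-margin (L-)Softmax (Liu et al., 2016) replaces the target logit \lVert W_{y_i}\rVert\lVert x_i\rVert\cos\theta_{y_i} with \lVert W_{y_i}\rVert\lVert x_i\rVert\,\psi(\theta_{y_i}), where for integer margin m:

    \psi(\theta) = (-1)^k\cos(m\theta) - 2k,\qquad \theta\in\left[\tfrac{k\pi}{m},\tfrac{(k+1)\pi}{m}\right],\; k\in\{0,\dots,m-1\}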

Commit:3a9a7eb
Author:happynear

Merge branch 'master' of https://github.com/bvlc/caffe into ms

# Conflicts:
# matlab/+caffe/private/caffe_.cpp
# python/caffe/_caffe.cpp

Commit:efee447
Author:happynear

make dropout's scale consistent with normalize layer.

Commit:3470a99
Author:happynear

get cudnn_batch_norm layer from nvidia-caffe

Commit:c2096a4
Author:happynear

default positive margin set to 0

Commit:3d913ce
Author:happynear

add exp weight to contrastive_loss_layer

Commit:0f0352c
Author:happynear

add min_negative param to nca

Commit:1f2babc
Author:happynear

truncation layer for regression

Commit:929135b
Author:Evan Shelhamer
Committer:GitHub

Merge pull request #5210 from ftokarev/patches

Obsolete reference to `bool solver` in caffe.proto

Commit:ef0df67
Author:happynear

crop_h crop_w

Commit:94d6417
Author:happynear

clip weight

Commit:de83ef7
Author:happynear

permute_layer from SSD

Commit:240149d
Author:happynear

add cut_label parameter to multi_label_image_data for temporal training.

Commit:60fa6b3
Author:happynear

multi label image data layer add balance

Commit:d6d4eba
Author:happynear

deconv_layer add parameter shape_offset

Commit:3c8f4d3
Author:happynear

add clamp weights to solver

Commit:299eb96
Author:happynear

batch_contrastive_loss_layer

Commit:a05b451
Author:happynear

add turn point to smooth L1 layer
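
Smooth L1 (Girshick, 2015) is quadratic near zero and linear beyond a fixed threshold of 1; the "turn point" presumably makes that threshold configurable:

    \text{smooth}_{L1}(x) = \begin{cases} 0.5\,x^2 & |x| < 1 \\ |x| - 0.5 & \text{otherwise} \end{cases}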

Commit:3ee9db9
Author:happynear

batch_norm add parameters: disable_mean, disable_variance.

Commit:4067d20
Author:happynear

add positive_first parameter to general contrastive layer.

Commit:a430a64
Author:happynear

if the positive distance is beyond an upper bound, optimize the positive distance only.

Commit:a64246f
Author:happynear

pairwise layer.

Commit:1770adf
Author:happynear

general_triplet_layer

Commit:3a0b6c6
Author:Fyodor Tokarev
Committer:Fyodor Tokarev

Update a comment in caffe.proto

Commit:aad3bc9
Author:happynear

add filler to parameter layer.

Commit:b906f48
Author:happynear

proposal and psroi layer from RFCN

Commit:0ad0b70
Author:happynear

Merge branch 'master' of https://github.com/bvlc/caffe into ms

# Conflicts:
# matlab/+caffe/private/caffe_.cpp
# python/caffe/_caffe.cpp
# src/caffe/layers/base_data_layer.cpp
# src/caffe/layers/base_data_layer.cu

Commit:451bee4
Author:happynear

hard positive rename.

Commit:4c18ad9
Author:happynear

only optimize the hardest sample.

Commit:c522c00
Author:happynear

ignore outlier.

Commit:3ba2054
Author:Cyprien Noel
Committer:Cyprien Noel

Switched multi-GPU to NCCL

Commit:7833bbf
Author:happynear

balance classes by over-sampling.

Commit:7c1b211
Author:happynear

general triplet loss(not finished)

Commit:09715fe
Author:happynear

general contrastive loss add positive/negative weights

Commit:565a187
Author:happynear

add normalization parameter to innerproduct layer.

Commit:7da4afc
Author:happynear

add min_is_better param to accuracy layer.

Commit:8bcb7c7
Author:happynear

refactor normalize

Commit:ecf6bd5
Author:happynear

write gpu_kernels for normalize layer.

Commit:8bdd91e
Author:happynear

Merge branch 'ms' of https://github.com/happynear/caffe-windows into ms

Commit:838e875
Author:happynear

initial inner_distance_layer

Commit:db66432
Author:Zhou Mo

fix many typos by using codespell

Commit:3d62e3c
Author:Evan Shelhamer
Committer:Evan Shelhamer

sigmoid cross-entropy loss: normalize loss by different schemes

Sig-ce loss handles all the same normalizations as the softmax loss; refer to #3296 for more detail. This preserves the default normalization for sig-ce loss: batch size.
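
For context, these schemes are selected through LossParameter in caffe.proto; in BVLC Caffe of this era the relevant enum reads approximately as follows (comments abridged):

    message LossParameter {
      optional int32 ignore_label = 1;
      enum NormalizationMode {
        FULL = 0;        // divide by the total number of outputs
        VALID = 1;       // divide by outputs not carrying ignore_label
        BATCH_SIZE = 2;  // divide by the batch size
        NONE = 3;        // no normalization
      }
      optional NormalizationMode normalization = 3 [default = VALID];
    }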

Commit:2214433
Author:happynear

add L1 distance to center loss

Commit:434b0fa
Author:happynear

flip layer

Commit:b62bb5e
Author:happynear

Merge remote-tracking branch 'bvlc/master' into ms

Commit:ed8c125
Author:happynear

fix transpose param in memory data layer; trying multi-threaded prediction.

Commit:01fec4b
Author:happynear

center loss
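
Center loss (Wen et al., 2016) penalizes the distance between each feature x_i and its class center c_{y_i}, trained jointly with the softmax loss as L = L_S + \lambda L_C:

    L_C = \frac{1}{2}\sum_{i}\lVert x_i - c_{y_i}\rVert_2^2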

Commit:f021dec
Author:happynear

cascade cnn

Commit:3a11dc3
Author:happynear

some fixes to fcn data & accuracy layer

Commit:4eefc27
Author:happynear

fcn data change, fcn accuracy, softmax cutting point.

Commit:e21147d
Author:Jeff Donahue
Committer:haoran

Add RecurrentLayer: an abstract superclass for other recurrent layer types

Commit:bdb9457
Author:Alican Bozkurt

add default value for rms_decay

Commit:2d29e50
Author:happynear

Merge branch 'master' of https://github.com/BVLC/caffe into ms

Conflicts:
    scripts/travis/travis_install.sh
    src/caffe/layers/memory_data_layer.cpp

Commit:5f2d845
Author:Jeff Donahue
Committer:Jeff Donahue

Add RecurrentLayer: an abstract superclass for other recurrent layer types

Commit:d6ca8e7
Author:happynear

matcaffe output folder, def pool, io size fix for VGG model.

Commit:679bb5d
Author:Sasa Galic
Committer:Sasa Galic

Merge bvlc/windows@{2016-05-10} into master