The following commits are those in which the Protocol Buffers files changed (only the 100 most recent relevant commits are shown):
Commit: | 6f4eda1 | |
---|---|---|
Author: | Hyeonwoo Noh |
seg_ctrl_cls layer is implemented
Commit: | 83fdd2a | |
---|---|---|
Author: | Hyeonwoo Noh |
seg binary ctrl layer is implemented
Commit: | 2c8eb0c | |
---|---|---|
Author: | HyeonwooNoh |
caffe proto is added
The documentation is generated from this commit.
Commit: | f666592 | |
---|---|---|
Author: | HyeonwooNoh |
tile unpooling is added
Commit: | 9ef9e8e | |
---|---|---|
Author: | HyeonwooNoh |
average unpooling is added
Commit: | b2beea0 | |
---|---|---|
Author: | HyeonwooNoh |
cls_label_base param is added
Commit: | 8ce2261 | |
---|---|---|
Author: | HyeonwooNoh |
SELECT_SEG_BINARY caffe.proto is added
Commit: | 5e2fc82 | |
---|---|---|
Author: | hyeonwoonoh |
SEG_BINARY_LAYER is added
Commit: | 7573ac3 | |
---|---|---|
Author: | HyeonwooNoh |
window cls data parameter proto is added
Commit: | 0b9f3ad | |
---|---|---|
Author: | HyeonwooNoh |
WINDOW_CLS_DATA proto is added
Commit: | e211e00 | |
---|---|---|
Author: | Hyeonwoo Noh |
binary accuracy option is added to proto
Commit: | 12e58ce | |
---|---|---|
Author: | HyeonwooNoh |
BIN_ACCURACY proto is added
Commit: | db6a4f8 | |
---|---|---|
Author: | HyeonwooNoh |
window_inst_seg is implemented
Commit: | 8090694 | |
---|---|---|
Author: | HyeonwooNoh |
instance seg data proto is added
Commit: | da53fca | |
---|---|---|
Author: | HyeonwooNoh |
window_seg_data is added
Commit: | 50fd103 | |
---|---|---|
Author: | HyeonwooNoh |
proto for IMAGE_SEG_DATA is added
Commit: | 822d544 | |
---|---|---|
Author: | HyeonwooNoh |
internal softmax is added to red accuracy layer
Commit: | eeffda0 | |
---|---|---|
Author: | HyeonwooNoh |
red accuracy layer proto is defined
Commit: | c9e8619 | |
---|---|---|
Author: | HyeonwooNoh |
proto error is fixed
Commit: | 603d4d1 | |
---|---|---|
Author: | HyeonwooNoh |
red softmax loss param is added
Commit: | 4119f43 | |
---|---|---|
Author: | HyeonwooNoh |
RED_SOFTMAX_LOSS proto is defined
Commit: | 46afdb5 | |
---|---|---|
Author: | HyeonwooNoh |
BNMode is added [LEARN/INFERENCE]
Commit: | 242f023 | |
---|---|---|
Author: | HyeonwooNoh |
proto file for BN layer is added
Commit: | 83bb13a | |
---|---|---|
Author: | HyeonwooNoh |
unpooling registry in layer_factory.cpp
Commit: | 738eed1 | |
---|---|---|
Author: | HyeonwooNoh |
unpooling layer definition is edited
Commit: | 43cfe1d | |
---|---|---|
Author: | HyeonwooNoh |
unpooling proto definition added
Commit: | c974e8d | |
---|---|---|
Author: | hyeonwoonoh |
implement ignore label in eltwise_accuracy_layer
Commit: | d420b6b | |
---|---|---|
Author: | hyeonwoonoh |
eltwise_accuracy proto is added
Commit: | 3f27be2 | |
---|---|---|
Author: | Jonathan L Long |
Merge pull request #1663 from longjon/accum-grad: Decouple the computational batch size and minibatch size by accumulating gradients
Commit: | 0ed9883 | |
---|---|---|
Author: | Jonathan L Long |
Merge pull request #1654 from longjon/softmax-missing-values: Add missing value support to SoftmaxLossLayer
Commit: | 66712cd | |
---|---|---|
Author: | Jonathan L Long | |
Committer: | Jonathan L Long |
zero-init param diffs and accumulate gradients. With layers whose backward passes accumulate gradients, this effectively decouples the computational batch from the SGD minibatch: each iteration accumulates gradients over iter_size batches, then the parameters are updated.
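A minimal solver sketch of the resulting gradient accumulation (file name and values below are assumptions):

```
# solver.prototxt sketch: each parameter update uses gradients accumulated
# over iter_size forward/backward passes, so the effective SGD minibatch is
# iter_size * batch_size (e.g. 4 * 16 = 64 here).
net: "train_val.prototxt"   # assumed file name
base_lr: 0.01
momentum: 0.9
iter_size: 4                # accumulate gradients over 4 computational batches
```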
Commit: | 36ebe60 | |
---|---|---|
Author: | Jonathan L Long | |
Committer: | Jonathan L Long |
add CropLayer for cropping one blob to another using induced coordinates
Commit: | 816c6db | |
---|---|---|
Author: | Jonathan L Long | |
Committer: | Jonathan L Long |
add DeconvolutionLayer, using BaseConvolutionLayer
Commit: | 34321e4 | |
---|---|---|
Author: | Jonathan L Long | |
Committer: | Jonathan L Long |
add spatial normalization option to SoftmaxLossLayer. With missing values (and batches of varying spatial dimension), normalizing each batch across instances can inappropriately give different instances different weights, so we give the option of simply normalizing by the batch size instead.
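A hedged sketch of how this option might be set in a prototxt, assuming it is exposed as a `normalize` flag under `loss_param` (newer layer syntax; blob names and the ignore label are assumptions):

```
layer {
  name: "loss"
  type: "SoftmaxWithLoss"
  bottom: "score"
  bottom: "label"
  top: "loss"
  loss_param {
    ignore_label: 255   # assumed missing-value label to skip
    normalize: false    # divide by batch size instead of the number of valid instances
  }
}
```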
Commit: | 5843b52 | |
---|---|---|
Author: | Jonathan L Long | |
Committer: | Jonathan L Long |
add missing value support to SoftmaxLossLayer
Commit: | bdd0a00 | |
---|---|---|
Author: | Sergio Guadarrama |
Merge pull request #190 from sguada/new_lr_policies: New lr policies, MultiStep and StepEarly
Commit: | 14f548d | |
---|---|---|
Author: | Sergio | |
Committer: | Sergio |
Added cache_images to WindowDataLayer. Added root_folder to WindowDataLayer to locate images.
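A sketch of how these two fields could be used, assuming the upstream WindowDataParameter field names (paths and values are placeholders):

```
layer {
  name: "windows"
  type: "WindowData"
  top: "data"
  top: "label"
  window_data_param {
    source: "window_file_train.txt"   # assumed window file
    root_folder: "/data/images/"      # prepended to image paths in the window file
    cache_images: true                # read each image once and keep it in memory
    batch_size: 128
    crop_size: 227
  }
}
```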
Commit: | e9d6e5a | |
---|---|---|
Author: | Sergio | |
Committer: | Sergio |
Add root_folder to ImageDataLayer
Commit: | 9fc7f36 | |
---|---|---|
Author: | Sergio | |
Committer: | Sergio |
Added encoded datum to io
Commit: | 6ad4f95 | |
---|---|---|
Author: | Kevin James Matzen | |
Committer: | Kevin James Matzen |
Refactored leveldb and lmdb code.
Commit: | b025da7 | |
---|---|---|
Author: | Sergio | |
Committer: | Sergio |
Added Multistep, Poly and Sigmoid learning rate decay policies. Conflicts: include/caffe/solver.hpp, src/caffe/proto/caffe.proto, src/caffe/solver.cpp
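A solver sketch showing one of the new policies; "multistep" multiplies the learning rate by gamma at each listed stepvalue (all values here are assumptions):

```
base_lr: 0.01
lr_policy: "multistep"
gamma: 0.1
stepvalue: 100000   # drop the learning rate at these iterations
stepvalue: 150000
max_iter: 200000
# "poly" instead decays as base_lr * (1 - iter/max_iter)^power
```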
Commit: | 914da95 | |
---|---|---|
Author: | Jonathan L Long | |
Committer: | Jonathan L Long |
correct naming in comment and message about average_loss
Commit: | 0ba046b | |
---|---|---|
Author: | Sergio Guadarrama |
Merge pull request #1070 from sguada/move_data_mean: Refactor data_transform to allow datum, cv::Mat and Blob transformation
Commit: | a9572b1 | |
---|---|---|
Author: | Sergio | |
Committer: | Sergio |
Added mean_value to specify per-channel mean subtraction. Added an example of use in models/bvlc_reference_caffenet/train_val_mean_value.prototxt.
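A sketch of per-channel mean subtraction via transform_param (newer layer syntax; the BGR means and data source are assumptions):

```
layer {
  name: "data"
  type: "Data"
  top: "data"
  top: "label"
  transform_param {
    crop_size: 227
    mirror: true
    mean_value: 104   # one mean_value per channel, BGR order (assumed values)
    mean_value: 117
    mean_value: 123
  }
  data_param {
    source: "train_lmdb"   # assumed path
    backend: LMDB
    batch_size: 64
  }
}
```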
Commit: | 760ffaa | |
---|---|---|
Author: | Sergio | |
Committer: | Sergio |
Added global_pooling to set the kernel size equal to the bottom size. Added a check for padding and stride with global_pooling.
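A sketch of global pooling (newer layer syntax; blob names assumed); with global_pooling set, kernel_size, pad, and stride are left unset:

```
layer {
  name: "pool_global"
  type: "Pooling"
  bottom: "conv_final"   # assumed bottom blob
  top: "pool_global"
  pooling_param {
    pool: AVE
    global_pooling: true   # kernel covers the entire bottom spatial extent
  }
}
```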
Commit: | 4602439 | |
---|---|---|
Author: | Sergio | |
Committer: | Sergio |
Initial cv::Mat transformation. Added cv::Mat transformation to ImageDataLayer. Added transform Datum to Blob. Added transform cv::Mat to Blob. Added transform Vector<Datum> to Blob. Conflicts: src/caffe/layers/image_data_layer.cpp, src/caffe/layers/base_data_layer.cpp, src/caffe/layers/base_data_layer.cu, src/caffe/data_transformer.cpp
Commit: | 7995a38 | |
---|---|---|
Author: | Jeff Donahue | |
Committer: | Jeff Donahue |
Add ExpLayer to calculate y = base ^ (scale * x + shift)
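A sketch of the corresponding exp_param (newer layer syntax; blob names assumed; a base of -1 conventionally selects the natural base e):

```
layer {
  name: "exp"
  type: "Exp"
  bottom: "in"
  top: "out"
  exp_param {
    base: -1.0    # -1 means use e as the base
    scale: 1.0
    shift: 0.0    # computes y = base ^ (scale * x + shift)
  }
}
```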
Commit: | e6ba910 | |
---|---|---|
Author: | Jeff Donahue |
caffe.proto: do some minor cleanup (fix comments, alphabetization)
Commit: | c76ba28 | |
---|---|---|
Author: | Jeff Donahue |
Merge pull request #1096 from qipeng/smoothed-cost: Display averaged loss over the last several iterations
Commit: | aeb0e98 | |
---|---|---|
Author: | Karen Simonyan | |
Committer: | Karen Simonyan |
added support for "k" LRN parameter to upgrade_proto
Commit: | 502141d | |
---|---|---|
Author: | Karen Simonyan | |
Committer: | Karen Simonyan |
adds a parameter to the LRN layer (denoted as "k" in [Krizhevsky et al., NIPS 2012])
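A sketch of an LRN layer setting the new k constant (newer layer syntax; values are the AlexNet-style defaults plus an assumed k):

```
layer {
  name: "norm1"
  type: "LRN"
  bottom: "conv1"
  top: "norm1"
  lrn_param {
    local_size: 5
    alpha: 0.0001
    beta: 0.75
    k: 2.0   # the "k" from Krizhevsky et al., NIPS 2012; previously fixed at 1
  }
}
```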
Commit: | 7c3c089 | |
---|---|---|
Author: | Evan Shelhamer |
Merge pull request #959 from nickcarlevaris/contrastive_loss: Add contrastive loss layer, tests, and a siamese network example
Commit: | 03e0e01 | |
---|---|---|
Author: | qipeng |
Display averaged loss over the last several iterations
Commit: | e294f6a | |
---|---|---|
Author: | Jonathan L Long |
fix spelling error in caffe.proto
Commit: | d54846c | |
---|---|---|
Author: | Jonathan L Long |
fix out-of-date next ID comment for SolverParameter
Commit: | d149c9a | |
---|---|---|
Author: | Nick Carlevaris-Bianco | |
Committer: | Nick Carlevaris-Bianco |
Added contrastive loss layer, associated tests, and a siamese network example using shared weights and the contrastive loss.
Commit: | 761c815 | |
---|---|---|
Author: | to3i | |
Committer: | Jeff Donahue |
Implemented elementwise max layer
Commit: | 77d9124 | |
---|---|---|
Author: | Evan Shelhamer | |
Committer: | Evan Shelhamer |
add cuDNN to build
Commit: | cd52392 | |
---|---|---|
Author: | Evan Shelhamer | |
Committer: | Evan Shelhamer |
groom proto: sort layer type parameters, put loss_weight after basics
Commit: | a3dcca2 | |
---|---|---|
Author: | Evan Shelhamer | |
Committer: | Evan Shelhamer |
add engine parameter for multiple computational strategies. Add an `engine` switch to layers for selecting a computational backend when there is a choice. Currently the standard Caffe implementation is the only backend.
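A sketch of the engine switch on a convolution layer (newer layer syntax; sizes assumed; at this commit only the CAFFE backend exists, with CUDNN becoming available once cuDNN support is built in, per a separate commit):

```
layer {
  name: "conv1"
  type: "Convolution"
  bottom: "data"
  top: "conv1"
  convolution_param {
    num_output: 96
    kernel_size: 7
    stride: 2
    engine: CAFFE   # DEFAULT / CAFFE; CUDNN is a later option
  }
}
```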
Commit: | 50d9d0d | |
---|---|---|
Author: | Evan Shelhamer |
Merge pull request #1036 from longjon/test-initialization-param: Add test_initialization option to allow skipping initial test
Commit: | d8f56fb | |
---|---|---|
Author: | Jeff Donahue | |
Committer: | Jonathan L Long |
add SILENCE layer -- takes one or more inputs and produces no output. This is useful for suppressing undesired outputs.
Commit: | 2bdf516 | |
---|---|---|
Author: | Jonathan L Long | |
Committer: | Jonathan L Long |
add test_initialization option to allow skipping initial test
Commit: | 3c9a13c | |
---|---|---|
Author: | Kai Li | |
Committer: | Kai Li |
Move transform param one level up in the proto to reduce redundancy
Commit: | 4c35ad2 | |
---|---|---|
Author: | Kai Li | |
Committer: | Kai Li |
Add transformer to the memory data layer
Commit: | dbb9296 | |
---|---|---|
Author: | Jeff Donahue | |
Committer: | Jeff Donahue |
cleanup caffe.proto
Commit: | 29b3b24 | |
---|---|---|
Author: | qipeng | |
Committer: | Jeff Donahue |
proto conflict, lint, and math_functions (compiler complaint)
Commit: | a683c40 | |
---|---|---|
Author: | qipeng | |
Committer: | Jeff Donahue |
Added L1 regularization support for the weights
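A solver sketch switching the weight-decay penalty to L1 (values assumed):

```
# regularization_type defaults to "L2"; "L1" applies an L1 penalty scaled by weight_decay
regularization_type: "L1"
weight_decay: 0.0005
```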
Commit: | b0ec531 | |
---|---|---|
Author: | qipeng | |
Committer: | Jeff Donahue |
fixed caffe.proto after a mistaken rebase
Commit: | 23d4430 | |
---|---|---|
Author: | qipeng | |
Committer: | Jeff Donahue |
fixes after rebase
Commit: | 910db97 | |
---|---|---|
Author: | Jeff Donahue | |
Committer: | Jeff Donahue |
Add "stable_prod_grad" option (on by default) to ELTWISE layer to compute the eltwise product gradient using a slower but stabler formula.
Commit: | 3141e71 | |
---|---|---|
Author: | Evan Shelhamer | |
Committer: | Evan Shelhamer |
restore old data transformation parameters for compatibility
Commit: | a446097 | |
---|---|---|
Author: | TANGUY Arnaud |
Refactor ImageDataLayer to use DataTransformer
Commit: | f6ffd8e | |
---|---|---|
Author: | TANGUY Arnaud | |
Committer: | TANGUY Arnaud |
Refactor DataLayer using a new DataTransformer. Start the refactoring of the data layers to avoid data transformation code duplication. So far, only DataLayer has been done.
Commit: | ececfc0 | |
---|---|---|
Author: | Adam Kosiorek | |
Committer: | Jeff Donahue |
cmake build system
Commit: | c6e9c59 | |
---|---|---|
Author: | Jeff Donahue |
Add "not_stage" to NetStateRule to exclude NetStates with certain stages.
Commit: | 1991826 | |
---|---|---|
Author: | Alireza Shafaei | |
Committer: | Alireza Shafaei |
Added absolute value layer, useful for implementation of siamese networks! This commit also replaces the default caffe_fabs with MKL/non-MKL implementation of Abs.
Commit: | d0cae53 | |
---|---|---|
Author: | Jeff Donahue | |
Committer: | Jeff Donahue |
Add loss_weight to proto, specifying coefficients for each top blob in the objective function.
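A sketch of a loss layer with an explicit loss_weight coefficient (newer layer syntax; names and the 0.3 weight are assumptions):

```
layer {
  name: "aux_loss"
  type: "SoftmaxWithLoss"
  bottom: "aux_score"
  bottom: "label"
  top: "aux_loss"
  loss_weight: 0.3   # this top blob contributes with coefficient 0.3 to the objective
}
```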
Commit: | 9e903ef | |
---|---|---|
Author: | qipeng |
added cross-channel MVN and mean-only normalization; added to layer factory; moved to common_layers
Commit: | b04aa00 | |
---|---|---|
Author: | qipeng | |
Committer: | qipeng |
mean-variance normalization layer
Commit: | b97b88f | |
---|---|---|
Author: | Evan Shelhamer | |
Committer: | Evan Shelhamer |
LICENSE governs the whole project so strip file headers
Commit: | 36fd64c | |
---|---|---|
Author: | Jeff Donahue | |
Committer: | Jeff Donahue |
Add 'snapshot_after_train' to SolverParameter to override the final snapshot.
Commit: | c2b74c3 | |
---|---|---|
Author: | Jeff Donahue | |
Committer: | Jeff Donahue |
Add NetState message with phase, level, stage; NetStateRule message with filtering rules for Layers.
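A sketch of NetStateRule filtering with include rules (newer layer syntax; the "val" stage is an assumption). The related not_stage field, listed in the c6e9c59 entry above, similarly excludes states that carry a given stage:

```
layer {
  name: "data"
  type: "Data"
  top: "data"
  top: "label"
  include { phase: TRAIN }                # instantiated only when the net state is TRAIN
}
layer {
  name: "accuracy"
  type: "Accuracy"
  bottom: "score"
  bottom: "label"
  top: "accuracy"
  include { phase: TEST stage: "val" }    # stages are arbitrary user-defined strings
}
```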
Commit: | edf438a | |
---|---|---|
Author: | Evan Shelhamer | |
Committer: | Evan Shelhamer |
add h/w kernel size, stride, and pad for non-square filtering while keeping everything working as-is.
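A sketch of a rectangular convolution using the new h/w fields (newer layer syntax; all sizes assumed):

```
layer {
  name: "conv_rect"
  type: "Convolution"
  bottom: "data"
  top: "conv_rect"
  convolution_param {
    num_output: 64
    kernel_h: 3    # rectangular filter instead of a single square kernel_size
    kernel_w: 7
    stride_h: 1
    stride_w: 2
    pad_h: 1
    pad_w: 3
  }
}
```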
Commit: | 149a176 | |
---|---|---|
Author: | Jeff Donahue | |
Committer: | Jeff Donahue |
Print blob L1 norms during forward/backward passes and updates if new "debug_info" field in SolverParameter is set.
Commit: | 5db5b31 | |
---|---|---|
Author: | Jeff Donahue | |
Committer: | Jeff Donahue |
SliceLayer: post-rebase fixes, cleanup, etc. (some from changes suggested by @sguada). Test for both num & channels in forward & backward; use GaussianFiller so that tests are non-trivial.
Commit: | 324973a | |
---|---|---|
Author: | bhack | |
Committer: | Jeff Donahue |
Add split dim layer. Differentiate top test blob vector size. Rename to SplitLayer. Add slicing points.
Commit: | 0193012 | |
---|---|---|
Author: | qipeng | |
Committer: | qipeng |
leaky relu + unit test
Commit: | 7722514 | |
---|---|---|
Author: | Kai Li | |
Committer: | Kai Li |
Extend the ArgMaxLayer to output top k results
Commit: | fa6397e | |
---|---|---|
Author: | Yangqing Jia |
cosmetics: add syntax = proto2
Commit: | f74979e | |
---|---|---|
Author: | Ronghang Hu |
add tests for rectangular pooling regions
Commit: | 4e5ef95 | |
---|---|---|
Author: | Ronghang Hu |
Update caffe.proto: add pad_h, pad_w, kernel_size_h, kernel_size_w, stride_h, stride_w to support pooling on rectangular regions.
Commit: | 4a57e72 | |
---|---|---|
Author: | Rob Hess | |
Committer: | Rob Hess |
Update name of last added param.
Commit: | cca6500 | |
---|---|---|
Author: | cypof | |
Committer: | Rob Hess |
Next LayerParameter proto id
Commit: | 1c640c9 | |
---|---|---|
Author: | Rob Hess | |
Committer: | Rob Hess |
Incorporate top_k param into AccuracyLayer and check its value.
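A sketch of the top_k accuracy parameter (newer layer syntax; blob names and k assumed):

```
layer {
  name: "accuracy_top5"
  type: "Accuracy"
  bottom: "score"
  bottom: "label"
  top: "accuracy_top5"
  accuracy_param { top_k: 5 }   # count a hit if the true label is among the 5 highest scores
}
```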
Commit: | 5890a35 | |
---|---|---|
Author: | Rob Hess | |
Committer: | Rob Hess |
Add parameter for AccuracyLayer in proto.
Commit: | 26e022a | |
---|---|---|
Author: | Evan Shelhamer |
change weight blob field name to param
Commit: | 41685ac | |
---|---|---|
Author: | Jeff Donahue | |
Committer: | Evan Shelhamer |
weight sharing
Commit: | 909fb39 | |
---|---|---|
Author: | Sergio |
Remove C_ mentions, extra spaces and change hinge_norm to norm
Commit: | f25687e | |
---|---|---|
Author: | Sergio |
Removed L2HingeLoss class; it is now a case within the HingeLoss class. Conflicts: include/caffe/vision_layers.hpp, src/caffe/layers/loss_layer.cpp, src/caffe/proto/caffe.proto, src/caffe/test/test_l2_hinge_loss_layer.cpp
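A sketch of the consolidated hinge loss with an L2 norm, which reproduces the removed L2HingeLoss behavior (newer layer syntax; blob names assumed):

```
layer {
  name: "loss"
  type: "HingeLoss"
  bottom: "score"
  bottom: "label"
  top: "loss"
  hinge_loss_param { norm: L2 }   # L1 is the default; L2 squares the hinge margins
}
```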