The following commits are those in which the Protocol Buffers files changed (only the last 100 relevant commits are shown).
| Commit: | 4c7926f | |
|---|---|---|
| Author: | Chuck Cho | |
WIP to merge upstream changes as of Dec 2018. Merge remote-tracking branch 'upstream/master' into merge-upstream-99bd997. Conflicts: python/caffe/io.py, src/caffe/layer_factory.cpp, src/caffe/proto/caffe.proto
The documentation is generated from this commit.
| Commit: | 828dd10 | |
|---|---|---|
| Author: | Przemysław Dolata | |
| Committer: | GitHub | |
Merge branch 'master' into patch_1
| Commit: | 7f4f5d2 | |
|---|---|---|
| Author: | Harm Berntsen | |
| Committer: | Przemysław Dolata | |
Add clip layer
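For context, a minimal prototxt sketch of how a Clip layer is typically declared; layer and blob names are illustrative, and the `min`/`max` field names are assumed from the upstream BVLC ClipParameter rather than taken from this fork's caffe.proto:

```prototxt
layer {
  name: "clip1"
  type: "Clip"
  bottom: "conv1"
  top: "clip1"
  # clamp activations to the range [0, 6]
  clip_param { min: 0 max: 6 }
}
```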
| Commit: | 24b0905 | |
|---|---|---|
| Author: | Przemysław Dolata | |
| Committer: | GitHub | |
Merge pull request #6282 from Noiredd/pooling-mode: PoolingLayer customizable output shape rounding mode
| Commit: | f019d0d | |
|---|---|---|
| Author: | Kuang Fangjun | |
| Committer: | Kuang Fangjun | |
Fix typos and make some minor fixes.
| Commit: | dabbc91 | |
|---|---|---|
| Author: | Mikhail Antonenka | |
| Committer: | Przemysław Dolata | |
Added Swish layer (#6002): added swish layer (CPU); added tests; optimized backpropagation; added CUDA implementation; added beta parameter; incorporated sigmoid layer; fixed comment of last added parameter; added REGISTER_LAYER_CLASS.
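As a rough illustration (the `beta` field name is assumed from the upstream SwishParameter, not verified against this fork), beta scales the sigmoid in swish(x) = x * sigmoid(beta * x):

```prototxt
layer {
  name: "swish1"
  type: "Swish"
  bottom: "conv1"
  top: "swish1"
  # beta = 1.0 recovers the original x * sigmoid(x) formulation
  swish_param { beta: 1.0 }
}
```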
| Commit: | d7da092 | |
|---|---|---|
| Author: | Noiredd | |
PoolingLayer customizable output shape rounding mode
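A hedged sketch of the new option; the `round_mode` field and its CEIL/FLOOR values are assumed from the upstream PoolingParameter introduced by #6282:

```prototxt
layer {
  name: "pool1"
  type: "Pooling"
  bottom: "conv1"
  top: "pool1"
  pooling_param {
    pool: MAX
    kernel_size: 3
    stride: 2
    # FLOOR matches the output-shape convention of most other frameworks;
    # the default (CEIL) preserves Caffe's legacy behavior
    round_mode: FLOOR
  }
}
```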
| Commit: | c326294 | |
|---|---|---|
| Author: | iovodov | |
| Committer: | iovodov | |
The weights parameter in the solver is used by caffe.exe. Loading weights is moved from caffe.exe to the solver class, so the new "weights" solver parameter works not only from the command line but also when Caffe is used as a library (including Python). Corrected formatting; fixed line length; more formatting corrections.
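A sketch of the resulting solver usage; the `weights` field name is taken from the commit message above, and the file names are placeholders:

```prototxt
# solver.prototxt
net: "train_val.prototxt"
# initialize from a pretrained model without passing --weights on the command line
weights: "pretrained.caffemodel"
base_lr: 0.01
```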
| Commit: | 6fa4c62 | |
|---|---|---|
| Author: | iovodov | |
| Committer: | iovodov | |
Automatic replacement of the snapshot_prefix parameter if it is empty or points to a directory. See issue #6110, proposed improvement no. 2.
| Commit: | 363a92d | |
|---|---|---|
| Author: | Chuck Cho | |
Merge remote-tracking branch 'upstream/master'. Conflicts: README.md, scripts/travis/install-deps.sh, src/caffe/test/test_hdf5_output_layer.cpp
| Commit: | 2cbc1bb | |
|---|---|---|
| Author: | Evan Shelhamer | |
| Committer: | GitHub | |
Merge pull request #3855 from shaibagon/upgrade_infogain: InfogainLoss layer can normalize, ignore, and more
| Commit: | 850ffd8 | |
|---|---|---|
| Author: | Cyprien Noel | |
Remove missed legacy parallel code
| Commit: | 11930f1 | |
|---|---|---|
| Author: | Jonathan R. Williford | |
Clarify batch norm parameter documentation.
| Commit: | 929135b | |
|---|---|---|
| Author: | Evan Shelhamer | |
| Committer: | GitHub | |
Merge pull request #5210 from ftokarev/patches: Obsolete reference to `bool solver` in caffe.proto
| Commit: | 3a0b6c6 | |
|---|---|---|
| Author: | Fyodor Tokarev | |
| Committer: | Fyodor Tokarev | |
Update a comment in caffe.proto
| Commit: | 3ba2054 | |
|---|---|---|
| Author: | Cyprien Noel | |
| Committer: | Cyprien Noel | |
Switched multi-GPU to NCCL
| Commit: | 99f6d79 | |
|---|---|---|
| Author: | Chuck Cho | |
Merge remote-tracking branch 'bvlc/master'
| Commit: | e5a04b2 | |
|---|---|---|
| Author: | Chuck Cho | |
Merge remote-tracking branch 'bvlc/master' into refactor
| Commit: | db66432 | |
|---|---|---|
| Author: | Zhou Mo | |
fix many typos by using codespell
| Commit: | 3d62e3c | |
|---|---|---|
| Author: | Evan Shelhamer | |
| Committer: | Evan Shelhamer | |
Sigmoid cross-entropy loss: normalize loss by different schemes. Sig-ce loss handles all the same normalizations as the softmax loss; refer to #3296 for more detail. This preserves the default normalization for sig-ce loss: batch size.
| Commit: | 84f00b4 | |
|---|---|---|
| Author: | Chuck Cho | |
Video-related changes: video reader / IO, test samples, tests, etc.
| Commit: | a100aad | |
|---|---|---|
| Author: | Chuck Cho | |
Merge remote-tracking branch 'christianpayer/nd-cudnn' into refactor2
| Commit: | 583a965 | |
|---|---|---|
| Author: | Chuck Cho | |
| Committer: | Chuck Cho | |
Merge remote-tracking branch 'blvc/master' (merging latest BVLC caffe up to 7f8f9e146d90172e457678866961b86ae4218824 (2016/09/10))
| Commit: | 5e1f04e | |
|---|---|---|
| Author: | Christian Payer | |
| Committer: | Christian Payer | |
Change interface of pool to support n-dimensions; support n-dimensional pooling for cuDNN. Caffe CPU and GPU pooling implementations do not work in this revision!
| Commit: | cc357bd | |
|---|---|---|
| Author: | Christian Payer | |
| Committer: | Christian Payer | |
Change interface of pool to support n-dimensions; support n-dimensional pooling for cuDNN. Caffe CPU and GPU pooling implementations do not work in this revision!
| Commit: | bdb9457 | |
|---|---|---|
| Author: | Alican Bozkurt | |
add default value for rms_decay
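For reference, rms_decay is the RMSProp history decay factor in SolverParameter; a minimal solver sketch with illustrative values:

```prototxt
# solver.prototxt
type: "RMSProp"
rms_decay: 0.98    # decay of the squared-gradient moving average
base_lr: 0.001
lr_policy: "fixed"
```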
| Commit: | 2a3e7da | |
|---|---|---|
| Author: | Chuck Cho | |
Merge the latest BVLC/caffe as of 2016/06/02. Notable updates are the addition of RNN/LSTM layers (yay).
| Commit: | 5f2d845 | |
|---|---|---|
| Author: | Jeff Donahue | |
| Committer: | Jeff Donahue | |
Add RecurrentLayer: an abstract superclass for other recurrent layer types
| Commit: | c419f85 | |
|---|---|---|
| Author: | Jonathan L Long | |
| Committer: | Jonathan L Long | |
add parameter layer for learning any bottom
| Commit: | 859cf6e | |
|---|---|---|
| Author: | Kun Wang | |
Fix an error in the example of ReshapeParameter; this small mistake may confuse newcomers.
| Commit: | f154509 | |
|---|---|---|
| Author: | Chuck Cho | |
| Committer: | Chuck Cho | |
Merging latest upstream 8c66fa (https://github.com/BVLC/caffe/commit/8c66fa5f3c04e36bdba11653c41d27ab638571ff)
| Commit: | 74eec0f | |
|---|---|---|
| Author: | Chuck Cho | |
| Committer: | Chuck Cho | |
Minor changes / clean-ups
| Commit: | 77cde9c | |
|---|---|---|
| Author: | Jeff Donahue | |
| Committer: | Jeff Donahue | |
Net: setting `propagate_down: true` forces backprop
| Commit: | 337b075 | |
|---|---|---|
| Author: | shai | |
Upgrading InfogainLoss layer: (1) incorporating a Softmax layer to make the gradient computation robust, much like the SoftmaxWithLoss layer (see http://stackoverflow.com/a/34917052/1714410 for more information); (2) supporting loss along an axis.
| Commit: | b531b6c | |
|---|---|---|
| Author: | Chuck Cho | |
initial commit -- a near completion of video-friendly caffe
| Commit: | 64e78bd | |
|---|---|---|
| Author: | Jonathan L Long | |
| Committer: | max argus | |
add CropLayer: crop blob to another blob's dimensions with offsets; configure offset(s) through proto definition.
| Commit: | 952fd17 | |
|---|---|---|
| Author: | max argus | |
| Committer: | max argus | |
Extend Crop to N-D, changed CropParameter.
| Commit: | ca9fa49 | |
|---|---|---|
| Author: | max argus | |
| Committer: | max argus | |
Crop: fixes, tests and negative axis indexing.
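A minimal sketch of the resulting CropLayer usage, assuming the upstream CropParameter with `axis` and repeated `offset` fields; blob names are illustrative (the second bottom only supplies the target shape):

```prototxt
layer {
  name: "crop1"
  type: "Crop"
  bottom: "upscore"   # blob to be cropped
  bottom: "data"      # reference blob defining the output shape
  top: "crop1"
  crop_param {
    axis: 2       # crop spatial axes only; axes before 2 are left untouched
    offset: 19    # same offset applied to every cropped axis
  }
}
```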
| Commit: | bddd04b | |
|---|---|---|
| Author: | Evan Shelhamer | |
| Committer: | Evan Shelhamer | |
deprecate input fields and upgrade automagically
| Commit: | 00598ca | |
|---|---|---|
| Author: | Evan Shelhamer | |
| Committer: | Evan Shelhamer | |
add InputLayer for Net input. Create an input layer to replace oddball Net `input` fields.
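A minimal sketch of the replacement: instead of top-level `input:`/`input_shape:` fields, the net declares an Input layer (shape values are illustrative):

```prototxt
layer {
  name: "data"
  type: "Input"
  top: "data"
  # one 3-channel 224x224 image per forward pass
  input_param { shape { dim: 1 dim: 3 dim: 224 dim: 224 } }
}
```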
| Commit: | 8f847fa | |
|---|---|---|
| Author: | Youssef Kashef | |
| Committer: | Youssef Kashef | |
Transpose parameter added to IP layer to support tied weights in an autoencoder. Arguments to the matrix multiplication function are conditioned on this parameter; no actual transposing takes place. Test IP gradient computation with transpose on.
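A hedged sketch of how the option might look in a tied-weight decoder, assuming the field is named `transpose` inside InnerProductParameter:

```prototxt
layer {
  name: "decode1"
  type: "InnerProduct"
  bottom: "encode1"
  top: "decode1"
  inner_product_param {
    num_output: 784
    # interpret the weight matrix as transposed, so the decoder can share
    # (tie) weights with the corresponding encoder layer
    transpose: true
  }
}
```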
| Commit: | 0816907 | |
|---|---|---|
| Author: | Jeff Donahue | |
| Committer: | Jeff Donahue | |
Separation and generalization of ChannelwiseAffineLayer into BiasLayer and ScaleLayer. The behavior of ChannelwiseAffineLayer can be reproduced by a ScaleLayer with `scale_param { bias_term: true }`. BiasLayer and ScaleLayer each take 1 or 2 bottoms, with the output having the same shape as the first. The second input -- either another bottom or a learned parameter -- will have its axes (virtually) broadcast and tiled to have the same shape as the first, after which elementwise addition (Bias) or multiplication (Scale) is performed.
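The commit message above already gives the key usage; a minimal prototxt sketch of a ScaleLayer reproducing ChannelwiseAffineLayer (blob names illustrative):

```prototxt
layer {
  name: "scale1"
  type: "Scale"
  bottom: "bn1"
  top: "scale1"
  # learned per-channel scale plus a learned per-channel bias
  scale_param { bias_term: true }
}
```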
| Commit: | ec04197 | |
|---|---|---|
| Author: | Dmytro Mishkin | |
| Committer: | Jeff Donahue | |
Add ChannelwiseAffine for batch norm
| Commit: | a7ac8bc | |
|---|---|---|
| Author: | Evan Shelhamer | |
Merge pull request #3388 from mohomran/exponential_linear_units: Exponential Linear Units
| Commit: | 3e3e9ce | |
|---|---|---|
| Author: | Jonathan L Long | |
| Committer: | Jonathan L Long | |
add short description of dilation to caffe.proto
| Commit: | 93bfcb5 | |
|---|---|---|
| Author: | Fisher Yu | |
| Committer: | Jonathan L Long | |
add support for 2D dilated convolution
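A minimal sketch of a dilated convolution, assuming the `dilation` field in ConvolutionParameter (values illustrative):

```prototxt
layer {
  name: "conv_dil"
  type: "Convolution"
  bottom: "pool1"
  top: "conv_dil"
  convolution_param {
    num_output: 256
    kernel_size: 3
    # dilation 2 gives a 5x5 effective receptive field with 3x3 weights
    dilation: 2
  }
}
```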
| Commit: | a668194 | |
|---|---|---|
| Author: | Mohamed Omran | |
| Committer: | Mohamed Omran | |
ELU layer with basic tests
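A minimal sketch, assuming the upstream ELUParameter with an `alpha` field controlling the negative saturation value:

```prototxt
layer {
  name: "elu1"
  type: "ELU"
  bottom: "conv1"
  top: "elu1"
  # f(x) = x for x > 0, alpha * (exp(x) - 1) otherwise
  elu_param { alpha: 1.0 }
}
```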
| Commit: | 8b2aa70 | |
|---|---|---|
| Author: | Carl Doersch | |
| Committer: | Carl Doersch | |
Better normalization options for SoftmaxWithLoss layer.
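A hedged sketch of the resulting options; the `normalization` enum values (e.g. VALID, BATCH_SIZE, NONE) and `ignore_label` are assumed from the upstream LossParameter:

```prototxt
layer {
  name: "loss"
  type: "SoftmaxWithLoss"
  bottom: "score"
  bottom: "label"
  top: "loss"
  loss_param {
    ignore_label: 255
    # normalize by the count of non-ignored labels instead of the batch size
    normalization: VALID
  }
}
```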
| Commit: | 39f69fb | |
|---|---|---|
| Author: | Jeff Donahue | |
Merge pull request #3229 from cdoersch/batchnorm2: Yet another batch normalization PR
| Commit: | a52ee65 | |
|---|---|---|
| Author: | Carl Doersch | |
| Committer: | Carl Doersch | |
Cleanup batch norm layer, include global stats computation
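A minimal deploy-time sketch, assuming the upstream BatchNormParameter fields; use_global_stats switches from minibatch statistics to the accumulated global statistics:

```prototxt
layer {
  name: "bn1"
  type: "BatchNorm"
  bottom: "conv1"
  top: "bn1"
  batch_norm_param {
    use_global_stats: true          # test/deploy: use accumulated mean/variance
    moving_average_fraction: 0.999
    eps: 1e-5
  }
}
```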
| Commit: | 0eea815 | |
|---|---|---|
| Author: | Ronghang Hu | |
| Committer: | Ronghang Hu | |
Change solver type to string and provide solver registry
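A short sketch of the change in solver prototxt syntax; the old enum field name (`solver_type`) and the registered string names are assumed from upstream Caffe:

```prototxt
# solver.prototxt
# old style (enum):  solver_type: NESTEROV
# new style (string, resolved through the solver registry):
type: "Nesterov"
base_lr: 0.01
momentum: 0.9
```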
| Commit: | 321720d | |
|---|---|---|
| Author: | Evan Shelhamer | |
Merge pull request #3160 from shelhamer/cudnnV3: Basic cuDNN v3 support
| Commit: | ecac7ff | |
|---|---|---|
| Author: | Simon Layton | |
| Committer: | Evan Shelhamer | |
Initial cuDNN v3 support
| Commit: | 6c02c8b | |
|---|---|---|
| Author: | Tim Meinhardt | |
| Committer: | Tim Meinhardt | |
Add argmax_param axis
| Commit: | 9d8206e | |
|---|---|---|
| Author: | Jeff Donahue | |
| Committer: | Jeff Donahue | |
Im2col and Convolution layers support N spatial axes
| Commit: | 4c2ff16 | |
|---|---|---|
| Author: | Jeff Donahue | |
| Committer: | Jeff Donahue | |
caffe.proto: generalize ConvolutionParameter to N spatial axes
| Commit: | 251e67a | |
|---|---|---|
| Author: | Jeff Donahue | |
| Committer: | Jeff Donahue | |
Add TileLayer
| Commit: | 80579b8 | |
|---|---|---|
| Author: | Evan Shelhamer | |
Merge pull request #2032 from jeffdonahue/embed-layer: Embed layer for lookup table of one hot encodings
| Commit: | 4e4c89b | |
|---|---|---|
| Author: | PatWie | |
| Committer: | Ronghang Hu | |
Adam solver. This commit implements the Adam solver by Kingma et al. for CPU and GPU. All solver parameters are defined in caffe.proto. This also adds an example for the MNIST dataset.
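A hedged solver sketch; the Adam hyper-parameter field names (momentum, momentum2, delta) are assumed from the upstream SolverParameter, and the values mirror the defaults suggested by Kingma et al.:

```prototxt
# solver.prototxt
net: "lenet_train_test.prototxt"
type: "Adam"
base_lr: 0.001
momentum: 0.9      # beta1
momentum2: 0.999   # beta2
delta: 1e-8        # epsilon for numerical stability
lr_policy: "fixed"
```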
| Commit: | bb0a90e | |
|---|---|---|
| Author: | Ronghang Hu | |
Merge pull request #2903 from ronghanghu/multi_gpu: Multi-GPU Data Parallelism
| Commit: | 0d34d5b | |
|---|---|---|
| Author: | Ronghang Hu | |
| Committer: | Ronghang Hu | |
Data layers parallel for multi-GPU. Allow data layers (and also PythonLayer when used as a data layer) to be shared among worker solvers' training nets, and also test nets, to be future-proof if one wants to do multi-GPU testing. Data layers are locked during forward to ensure sequential forward.
| Commit: | 1ce3380 | |
|---|---|---|
| Author: | Mohamed Omran | |
| Committer: | Matthias Plappert | |
Implement AdaDelta; add test cases; add mnist examples
| Commit: | bcc8f50 | |
|---|---|---|
| Author: | Cyprien Noel | |
| Committer: | Evan Shelhamer | |
Add DataReader for parallel training with one DB session: make sure each solver accesses a different subset of the data; sequential reading of the DB for performance; prefetch a configurable amount of data to host memory; distribute data to solvers in a round-robin way for determinism.
| Commit: | abe99e8 | |
|---|---|---|
| Author: | Eren Golge | |
| Committer: | Ronghang Hu | |
Implement RMSProp solver. Implement the RMSProp solver and clean up to adjust to the new solver interface that uses accumulated gradients and refactored regularization.
| Commit: | 4d299c3 | |
|---|---|---|
| Author: | Jeff Donahue | |
| Committer: | Jeff Donahue | |
Add EmbedLayer for inner products with sparse input (one-hot vectors), with unit tests
| Commit: | 4227828 | |
|---|---|---|
| Author: | Jeff Donahue | |
| Committer: | Jeff Donahue | |
Temporarily switch the snapshot_format default back to BINARYPROTO in anticipation of user issues due to issue #2885, which causes Caffe to crash when it attempts to snapshot nets with duplicate layer names.
| Commit: | ada055b | |
|---|---|---|
| Author: | Eric Tzeng | |
| Committer: | Eric Tzeng | |
Snapshot model weights/solver state to HDF5 files. Summary of changes: - HDF5 helper functions were moved into a separate file util/hdf5.cpp - hdf5_save_nd_dataset now saves n-d blobs, can save diffs instead of data - Minor fix for memory leak in HDF5 functions (delete instead of delete[]) - Extra methods have been added to both Net/Solver enabling snapshotting and restoring from HDF5 files - snapshot_format was added to SolverParameters, with possible values HDF5 or BINARYPROTO (default HDF5) - kMaxBlobAxes was reduced to 32 to match the limitations of HDF5
| Commit: | f973819 | |
|---|---|---|
| Author: | Jeff Donahue | |
| Committer: | Eric Tzeng | |
add double_data, double_diff to BlobProto for weights/snapshots saved when using Dtype == double
| Commit: | a756cfe | |
|---|---|---|
| Author: | Takuya Narihira | |
| Committer: | Evan Shelhamer | |
PythonLayer takes parameters by string
| Commit: | e7b2b4e | |
|---|---|---|
| Author: | philkr | |
ImageData layer default batch size of 1, and check for zero batch size
| Commit: | 823d055 | |
|---|---|---|
| Author: | Jeff Donahue | |
| Committer: | Jeff Donahue | |
Add ReductionLayer to reduce any number of "tail" axes to a scalar value. Currently implements operations SUM, MEAN, ASUM (sum of absolute values), and SUMSQ (sum of squares).
| Commit: | eb442b9 | |
|---|---|---|
| Author: | Jeff Donahue | |
| Committer: | Jeff Donahue | |
FlattenLayer gets a FlattenParameter with an axis, end_axis
| Commit: | 8c72fe3 | |
|---|---|---|
| Author: | Jeff Donahue | |
| Committer: | Jeff Donahue | |
Add LogLayer
| Commit: | aeef453 | |
|---|---|---|
| Author: | Evan Shelhamer | |
Merge pull request #1977 from shelhamer/accum-grad: Decouple the computational batch size and minibatch size by accumulating gradients
| Commit: | 8b05a02 | |
|---|---|---|
| Author: | Jeff Donahue | |
Merge pull request #2410 from sguada/datum_transform: Datum transform
| Commit: | 41cf06c | |
|---|---|---|
| Author: | Jonathan L Long | |
| Committer: | Evan Shelhamer | |
Zero-init param diffs and accumulate gradients. With layers whose backward accumulates gradients, this effectively decouples the computational batch from the SGD minibatch. Each iteration accumulates gradients over iter_size batches, then parameters are updated.
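A short sketch of the decoupling; `iter_size` lives in SolverParameter, and the effective SGD minibatch becomes iter_size times the data layer's batch_size (values illustrative):

```prototxt
# solver.prototxt
# with batch_size: 32 in the data layer, the effective minibatch is 4 * 32 = 128
iter_size: 4
base_lr: 0.01
```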
| Commit: | c255709 | |
|---|---|---|
| Author: | Evan Shelhamer | |
Merge pull request #1946 from nickcarlevaris/msra_init: Add MSRAFiller, an Xavier-like filler designed for use with ReLUs
| Commit: | 65af68d | |
|---|---|---|
| Author: | Nick Carlevaris-Bianco | |
| Committer: | Evan Shelhamer | |
Added MSRAFiller, an Xavier-like filler designed for use with ReLUs instead of tanh. Based on the paper: He et al., "Delving Deep into Rectifiers: Surpassing Human-Level Performance on ImageNet Classification," 2015. Add a VarianceNorm option to FillerParameters which allows one to normalize by fan_in, fan_out or their average; update XavierFiller to use the VarianceNorm option (default behavior unchanged); add tests for MSRAFiller and XavierFiller.
| Commit: | dbd8319 | |
|---|---|---|
| Author: | Jonathan L Long | |
clean up redundant message comments
| Commit: | 352aef4 | |
|---|---|---|
| Author: | Jeff Donahue | |
Merge pull request #2466 from ducha-aiki/mvn-less: Remove unnecessary variance computation from backward in MVN layer
| Commit: | e8d93cb | |
|---|---|---|
| Author: | Jeff Donahue | |
Merge pull request #2095 from mtamburrano/skip_propagate_down_param: Added param skip_propagate_down to LayerParameter
| Commit: | b866d14 | |
|---|---|---|
| Author: | Dmytro Mishkin | |
Remove unnecessary variance computation from backward in MVN layer
| Commit: | c7c4c64 | |
|---|---|---|
| Author: | manuele | |
Added "propagate_down" param to LayerParameter
| Commit: | 21032b2 | |
|---|---|---|
| Author: | Jeff Donahue | |
| Committer: | Jeff Donahue | |
Add ReshapeParameter axis and num_axes to reshape only a particular span of the input shape
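A minimal sketch of the span-limited reshape; the axis and num_axes field names follow the commit message and the upstream ReshapeParameter, with illustrative dimensions:

```prototxt
layer {
  name: "reshape"
  type: "Reshape"
  bottom: "data"
  top: "reshaped"
  reshape_param {
    axis: 1        # start the reshape at axis 1
    num_axes: 2    # only axes 1 and 2 are replaced; the rest are untouched
    shape { dim: -1 dim: 4 }   # -1 lets Caffe infer that dimension
  }
}
```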
| Commit: | 4fb3c9e | |
|---|---|---|
| Author: | Simon Safar | |
| Committer: | Jeff Donahue | |
Added a Reshape layer for copying-free modification of blob dimensions.
| Commit: | fa6169e | |
|---|---|---|
| Author: | Jeff Donahue | |
| Committer: | Jeff Donahue | |
ReshapeLayer fixups for ND blobs
| Commit: | 35a5df5 | |
|---|---|---|
| Author: | Jeff Donahue | |
Merge pull request #2177 from pgao/spp_layer: Spatial Pyramid Pooling Layer
| Commit: | 438cf0e | |
|---|---|---|
| Author: | PETER_GAO | |
| Committer: | PETER_GAO | |
Spatial Pyramid Pooling Layer
| Commit: | ca673fd | |
|---|---|---|
| Author: | Nick Carlevaris-Bianco | |
Added support for the original implementation, using (margin - d^2), through the legacy_version parameter.
| Commit: | b963008 | |
|---|---|---|
| Author: | Sergio Guadarrama | |
| Committer: | Sergio Guadarrama | |
Allow Transform of encoded datum. Allow initializing transformed_blob from datum or transform params. Allow force_color and force_gray as transform params.
| Commit: | 6fe2b04 | |
|---|---|---|
| Author: | Jeff Donahue | |
| Committer: | Jeff Donahue | |
HDF5DataLayer shuffle: minor cleanup; clarification in HDF5DataParameter
| Commit: | 249aba4 | |
|---|---|---|
| Author: | wieschol | |
| Committer: | Jeff Donahue | |
shuffle data
| Commit: | bb5bf43 | |
|---|---|---|
| Author: | Takuya Narihira | |
| Committer: | Takuya Narihira | |
PReLU Layer and its tests, described in Kaiming He et al., "Delving Deep into Rectifiers: Surpassing Human-Level Performance on ImageNet Classification", arXiv 2015. Below are the commit message histories that I had while developing: PReLULayer takes FillerParameter for init. PReLU testing consistency with ReLU. Fix: PReLU test consistency check. PReLU tests in-place computation, and it failed on GPU. Fix: PReLU in-place backward on GPU. PReLULayer called an incorrect API for copying data (caffe_gpu_memcpy); the first argument of `caffe_gpu_memcpy` should be the size of the memory region in bytes, so I modified it to use the `caffe_copy` function. Fix: style errors. Fix: number of axes of input blob must be >= 2. Use 1D blob, zero-D blob. Rename: hw -> dim
| Commit: | 6ea7a66 | |
|---|---|---|
| Author: | max argus | |
| Committer: | Jeff Donahue | |
AccuracyLayer: add ignore_label param
| Commit: | 7a40f74 | |
|---|---|---|
| Author: | Jeff Donahue | |
| Committer: | Jeff Donahue | |
Fixup AccuracyLayer like SoftmaxLossLayer in #1970 -- fixes #2063
| Commit: | 7462c84 | |
|---|---|---|
| Author: | Jeff Donahue | |
| Committer: | Jeff Donahue | |
DummyDataLayer outputs blobs of arbitrary shape
| Commit: | 8afdcd0 | |
|---|---|---|
| Author: | Jeff Donahue | |
| Committer: | Jeff Donahue | |
ConcatLayer: generalized Blob axes
| Commit: | b868916 | |
|---|---|---|
| Author: | Jeff Donahue | |
| Committer: | Jeff Donahue | |
SliceLayer: generalized Blob axes
| Commit: | abec302 | |
|---|---|---|
| Author: | Jeff Donahue | |
| Committer: | Jeff Donahue | |
SoftmaxLayer: generalized Blob axes
| Commit: | 1434e87 | |
|---|---|---|
| Author: | Jeff Donahue | |
| Committer: | Jeff Donahue | |
Blobs are ND arrays (for N not necessarily equal to 4): vector<int> shape_ instead of (num, channels, height, width).