These are the commits in which the Protocol Buffers files changed (only the 100 most recent relevant commits are shown):

Commit: 364eacf
Author: dicecco1
Merging with latest version of caffe
The documentation is generated from this commit.

Commit: c709827
Author: dicecco1
Removing older convolution reference functions, small tweaks to support winograd convolution

Commit: a27cb34
Author: Roberto DiCecco
half->cpfp renaming

Commit: a9c1917
Author: Roberto DiCecco
Adding scale parameter for softmax

Commit: 9112c19
Author: Roberto DiCecco
Updated cr and inner product layer implementations to take advantage of higher bandwidth implementation

Commit: 963b815
Author: Roberto DiCecco
Fixed bug in conv and inner product where pooling was getting set

Commit: fb8359e
Author: Roberto DiCecco
Committing layers used to implement cr hwcn, updating cr_hwcn_layer to support pooling

Commit: 7b512b5
Author: Roberto DiCecco
Updated half.hpp to reduce area, added backward pass for ocl_inner_product_layer, fixed some small functionality issues in ocl_cr_layer, added another padding option to the pad layer

Commit: c1ff4fe
Author: Roberto DiCecco
Updated ocl_cr_layer with backward support, moved half precision files around, adding ocl_inner_product_layer, adding some small changes to make it possible to train using FPGAs (SGD only for now), updated ocl_conv_layer, need to update testing still for FPGA direct and winograd implementations

Commit: 2cbc1bb
Author: Evan Shelhamer
Committer: GitHub
Merge pull request #3855 from shaibagon/upgrade_infogain: InfogainLoss layer can normalize, ignore, and more

Commit: 1604dd6
Author: Roberto DiCecco
Updated cr layer with flag for relu enable, fixed bug that made launching multiple batches hang

Commit: 78205f5
Author: Roberto DiCecco
Added pad layer and fused conv-relu layer implementation

Commit: 02be539
Author: Roberto DiCecco
Added half conversion layer, modified some ocl specific variables, deleted ocl layers that are no longer used

Commit: 850ffd8
Author: Cyprien Noel
Remove missed legacy parallel code

Commit: 11930f1
Author: Jonathan R. Williford
Clarify batch norm parameter documentation.

Commit: 929135b
Author: Evan Shelhamer
Committer: GitHub
Merge pull request #5210 from ftokarev/patches: Obsolete reference to `bool solver` in caffe.proto

Commit: 3a0b6c6
Author: Fyodor Tokarev
Committer: Fyodor Tokarev
Update a comment in caffe.proto

Commit: 3ba2054
Author: Cyprien Noel
Committer: Cyprien Noel
Switched multi-GPU to NCCL

Commit: db66432
Author: Zhou Mo
fix many typos by using codespell

Commit: 3d62e3c
Author: Evan Shelhamer
Committer: Evan Shelhamer
sigmoid cross-entropy loss: normalize loss by different schemes. Sig-ce loss handles all the same normalizations as the softmax loss; refer to #3296 for more detail. This preserves the default normalization for sig-ce loss: batch size.

Commit: 6d48d98
Author: dicecco1
Added an updated direct convolution kernel and updated the direct conv layer implementations, removed matmul based hooks for now

Commit: 5f152eb
Author: dicecco1
Updated to most recent caffe branch

Commit: 363cee8
Author: dicecco1
Changed ocl layer structure to match cudnn approach

Commit: bdb9457
Author: Alican Bozkurt
add default value for rms_decay

Commit: 29429a4
Author: Cloud User
Added support for winograd conv engine and support for other engines

Commit: 5f2d845
Author: Jeff Donahue
Committer: Jeff Donahue
Add RecurrentLayer: an abstract superclass for other recurrent layer types
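
A minimal prototxt sketch of how a concrete recurrent layer built on this superclass is configured through `recurrent_param`; the LSTM subclass and the blob names here are illustrative, not taken from this commit:

```
layer {
  name: "lstm"
  type: "LSTM"
  bottom: "data"   # T x N x ... input sequence
  bottom: "clip"   # T x N sequence-continuation indicators
  top: "lstm_out"
  recurrent_param {
    num_output: 256   # dimension of the hidden state
    weight_filler { type: "uniform" min: -0.08 max: 0.08 }
    bias_filler { type: "constant" value: 0 }
  }
}
```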

Commit: c419f85
Author: Jonathan L Long
Committer: Jonathan L Long
add parameter layer for learning any bottom
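
A sketch of the Parameter layer in prototxt, assuming a `ParameterParameter` message with a single `shape` field; the layer name and dimensions are hypothetical:

```
layer {
  name: "learned_blob"
  type: "Parameter"
  top: "learned_blob"
  parameter_param {
    shape { dim: 1 dim: 64 }   # shape of the blob to be learned
  }
}
```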

Commit: 859cf6e
Author: Kun Wang
Fix an error in the example of ReshapeParameter; this small mistake may confuse newcomers.

Commit: 003c274
Author: Griffin Lacey
added pipeline layers

Commit: 77cde9c
Author: Jeff Donahue
Committer: Jeff Donahue
Net: setting `propagate_down: true` forces backprop
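
Since `propagate_down` is a repeated per-bottom field of `LayerParameter`, forcing (or suppressing) backprop looks roughly like this; the layer and blob names are made up:

```
layer {
  name: "fusion"
  type: "Concat"
  bottom: "labels"   # no gradient needed for this input
  bottom: "feats"    # force backprop to this input
  top: "fused"
  propagate_down: false   # applies to "labels"
  propagate_down: true    # applies to "feats"
}
```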

Commit: 337b075
Author: shai
upgrading InfogainLoss layer: (1) incorporating Softmax layer to make the gradient computation robust, much like SoftmaxWithLoss layer (see http://stackoverflow.com/a/34917052/1714410 for more information), (2) supporting loss along axis
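
A hedged prototxt sketch of the upgraded layer; the H-matrix path is hypothetical, and the `axis` field name is assumed from this commit's description:

```
layer {
  name: "loss"
  type: "InfogainLoss"
  bottom: "score"
  bottom: "label"
  top: "loss"
  infogain_loss_param {
    source: "infogain_H.binaryproto"   # hypothetical path to the infogain matrix H
    axis: 1                            # axis along which the probabilities lie
  }
}
```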

Commit: 952fd17
Author: max argus
Committer: max argus
Extend Crop to N-D, changed CropParameter.

Commit: 64e78bd
Author: Jonathan L Long
Committer: max argus
add CropLayer: crop blob to another blob's dimensions with offsets; configure offset(s) through the proto definition.
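
A minimal sketch of the layer in prototxt, assuming the upstream `CropParameter` fields `axis` and `offset`; blob names are invented:

```
layer {
  name: "crop"
  type: "Crop"
  bottom: "big"    # blob to be cropped
  bottom: "ref"    # reference blob supplying the target dimensions
  top: "cropped"
  crop_param {
    axis: 2      # crop this axis and all following axes
    offset: 4    # offset applied to each cropped axis
  }
}
```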

Commit: ca9fa49
Author: max argus
Committer: max argus
Crop: fixes, tests and negative axis indexing.

Commit: bddd04b
Author: Evan Shelhamer
Committer: Evan Shelhamer
deprecate input fields and upgrade automagically

Commit: 00598ca
Author: Evan Shelhamer
Committer: Evan Shelhamer
add InputLayer for Net input. Create an input layer to replace oddball Net `input` fields.
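
The replacement for the old net-level `input`/`input_shape` fields looks roughly like this; the shape values are an example only:

```
layer {
  name: "input"
  type: "Input"
  top: "data"
  input_param {
    shape { dim: 1 dim: 3 dim: 224 dim: 224 }
  }
}
```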

Commit: 8f847fa
Author: Youssef Kashef
Committer: Youssef Kashef
transpose parameter added to IP layer to support tied weights in an autoencoder. Arguments to the matrix multiplication function are conditioned on this parameter; no actual transposing takes place. Test IP gradient computation with transpose on.
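
A sketch of a tied-weight decoder using the new flag; the names and sizes are hypothetical:

```
layer {
  name: "decode"
  type: "InnerProduct"
  bottom: "code"
  top: "recon"
  inner_product_param {
    num_output: 784
    transpose: true   # treat the weight matrix as transposed (tied weights)
  }
}
```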

Commit: 0816907
Author: Jeff Donahue
Committer: Jeff Donahue
Separation and generalization of ChannelwiseAffineLayer into BiasLayer and ScaleLayer. The behavior of ChannelwiseAffineLayer can be reproduced by a ScaleLayer with `scale_param { bias_term: true }`. BiasLayer and ScaleLayer each take 1 or 2 bottoms, with the output having the same shape as the first. The second input -- either another bottom or a learned parameter -- will have its axes (virtually) broadcast and tiled to have the same shape as the first, after which elementwise addition (Bias) or multiplication (Scale) is performed.
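
For concreteness, the single-bottom learned-parameter form with the `bias_term` quoted above; the blob names are illustrative:

```
layer {
  name: "scale"
  type: "Scale"
  bottom: "data"
  top: "scaled"
  scale_param {
    axis: 1           # broadcast the learned scale along the channel axis
    bias_term: true   # reproduces the old ChannelwiseAffineLayer behavior
  }
}
```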

Commit: ec04197
Author: Dmytro Mishkin
Committer: Jeff Donahue
Add ChannelwiseAffine for batch norm

Commit: a7ac8bc
Author: Evan Shelhamer
Merge pull request #3388 from mohomran/exponential_linear_units: Exponential Linear Units

Commit: 3e3e9ce
Author: Jonathan L Long
Committer: Jonathan L Long
add short description of dilation to caffe.proto

Commit: 93bfcb5
Author: Fisher Yu
Committer: Jonathan L Long
add support for 2D dilated convolution
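
A minimal dilated-convolution sketch; the numbers are an example, not from the commit:

```
layer {
  name: "conv_dil"
  type: "Convolution"
  bottom: "data"
  top: "conv_dil"
  convolution_param {
    num_output: 64
    kernel_size: 3
    dilation: 2   # a 3x3 kernel then covers an effective 5x5 window
  }
}
```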

Commit: a668194
Author: Mohamed Omran
Committer: Mohamed Omran
ELU layer with basic tests
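
Usage is a one-field prototxt parameter; `alpha` scales the exponential branch for negative inputs (the value shown is the usual default):

```
layer {
  name: "elu1"
  type: "ELU"
  bottom: "conv1"
  top: "conv1"
  elu_param { alpha: 1.0 }
}
```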

Commit: 6145779
Author: dicecco1
Merge branch 'master' of https://github.com/BVLC/caffe

Commit: 8b2aa70
Author: Carl Doersch
Committer: Carl Doersch
Better normalization options for SoftmaxWithLoss layer.
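
A sketch of the normalization options through `loss_param`, assuming the upstream `NormalizationMode` values (FULL, VALID, BATCH_SIZE, NONE); the ignore value is illustrative:

```
layer {
  name: "loss"
  type: "SoftmaxWithLoss"
  bottom: "score"
  bottom: "label"
  top: "loss"
  loss_param {
    ignore_label: 255     # labels to leave out of the loss
    normalization: VALID  # normalize by the count of non-ignored labels
  }
}
```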

Commit: ab35841
Author: dicecco1
Rebasing to latest caffe version, some errors in make runtest, but might be due to CentOS installation

Commit: 39f69fb
Author: Jeff Donahue
Merge pull request #3229 from cdoersch/batchnorm2: Yet another batch normalization PR

Commit: a52ee65
Author: Carl Doersch
Committer: Carl Doersch
Cleanup batch norm layer, include global stats computation
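
The global-stats switch in prototxt; typically left unset so training uses minibatch statistics and testing uses the accumulated ones:

```
layer {
  name: "bn1"
  type: "BatchNorm"
  bottom: "conv1"
  top: "conv1"
  batch_norm_param {
    use_global_stats: true   # use the stored running mean/variance (inference)
  }
}
```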

Commit: 0eea815
Author: Ronghang Hu
Committer: Ronghang Hu
Change solver type to string and provide solver registry
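
After this change a solver prototxt selects its solver by registered name rather than the old `solver_type` enum; a minimal sketch:

```
# Old style (deprecated): solver_type: NESTEROV
type: "Nesterov"
base_lr: 0.01
momentum: 0.9
```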

Commit: 321720d
Author: Evan Shelhamer
Merge pull request #3160 from shelhamer/cudnnV3: Basic cuDNN v3 support

Commit: ecac7ff
Author: Simon Layton
Committer: Evan Shelhamer
Initial cuDNN v3 support

Commit: 6c02c8b
Author: Tim Meinhardt
Committer: Tim Meinhardt
Add argmax_param axis
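
With the new field, ArgMax can reduce along one axis instead of flattening; a small sketch with invented names:

```
layer {
  name: "pred"
  type: "ArgMax"
  bottom: "prob"
  top: "pred"
  argmax_param { axis: 1 }   # argmax along the channel axis
}
```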

Commit: 9d8206e
Author: Jeff Donahue
Committer: Jeff Donahue
Im2col and Convolution layers support N spatial axes

Commit: 4c2ff16
Author: Jeff Donahue
Committer: Jeff Donahue
caffe.proto: generalize ConvolutionParameter to N spatial axes
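
A sketch of an N-D (here 3-D) convolution under the generalized message, assuming repeated spatial fields that broadcast when a single value is given:

```
layer {
  name: "conv3d"
  type: "Convolution"
  bottom: "data"   # e.g. N x C x D x H x W
  top: "conv3d"
  convolution_param {
    num_output: 16
    kernel_size: 3   # repeated field; one value applies to all spatial axes
    stride: 1
    axis: 1          # channel axis; every following axis is spatial
  }
}
```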

Commit: 251e67a
Author: Jeff Donahue
Committer: Jeff Donahue
Add TileLayer
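
TileLayer repeats a blob along one axis; a minimal sketch (names invented):

```
layer {
  name: "tile"
  type: "Tile"
  bottom: "data"
  top: "tiled"
  tile_param {
    axis: 1    # axis to tile along
    tiles: 4   # number of copies
  }
}
```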

Commit: 80579b8
Author: Evan Shelhamer
Merge pull request #2032 from jeffdonahue/embed-layer: Embed layer for lookup table of one hot encodings

Commit: 4e4c89b
Author: PatWie
Committer: Ronghang Hu
Adam solver. This commit implements the Adam solver by Kingma et al. for CPU and GPU. All solver parameters are defined in the caffe.proto. This also adds an example for the MNIST dataset.
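
A solver prototxt fragment for Adam; the mapping of `momentum`/`momentum2`/`delta` onto the paper's beta1/beta2/epsilon follows the upstream convention, and the values shown are the customary choices rather than anything mandated by this commit:

```
type: "Adam"
base_lr: 0.001
momentum: 0.9      # beta1
momentum2: 0.999   # beta2
delta: 1e-8        # epsilon
```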

Commit: bb0a90e
Author: Ronghang Hu
Merge pull request #2903 from ronghanghu/multi_gpu: Multi-GPU Data Parallelism

Commit: 0d34d5b
Author: Ronghang Hu
Committer: Ronghang Hu
Data Layers Parallel for Multi-GPU. Allow data layers (and also PythonLayer when used as a data layer) to be shared among worker solvers' training nets, and also the test net, to be future-proof if one wants to do multi-GPU testing. Data layers are locked during forward to ensure sequential forward.

Commit: 1ce3380
Author: Mohamed Omran
Committer: Matthias Plappert
Implement AdaDelta; add test cases; add mnist examples

Commit: bcc8f50
Author: Cyprien Noel
Committer: Evan Shelhamer
Add DataReader for parallel training with one DB session:
- Make sure each solver accesses a different subset of the data
- Sequential reading of DB for performance
- Prefetch a configurable amount of data to host memory
- Distribute data to solvers in a round-robin way for determinism

Commit: abe99e8
Author: Eren Golge
Committer: Ronghang Hu
Implement RMSProp Solver. Implement the RMSProp solver and clean it up to fit the new solver interface that uses accumulated gradients and refactored regularization.
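
A solver fragment for RMSProp; `rms_decay` is the decay of the squared-gradient moving average (the later commit bdb9457 above gives the field a default, 0.99 in upstream caffe.proto):

```
type: "RMSProp"
base_lr: 0.01
rms_decay: 0.98
```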

Commit: 4d299c3
Author: Jeff Donahue
Committer: Jeff Donahue
Add EmbedLayer for inner products with sparse input (one-hot vectors), with unit tests
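
EmbedLayer is configured like an inner product whose input is an index; a sketch with hypothetical sizes:

```
layer {
  name: "embed"
  type: "Embed"
  bottom: "word_ids"   # integer indices, i.e. implicit one-hot vectors
  top: "vectors"
  embed_param {
    input_dim: 10000   # vocabulary size (number of possible indices)
    num_output: 128    # embedding dimension
    bias_term: false
  }
}
```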

Commit: 4227828
Author: Jeff Donahue
Committer: Jeff Donahue
temporarily switch the snapshot_format default back to BINARYPROTO out of anticipation for user issues due to issue #2885, which causes Caffe to crash when it attempts to snapshot nets with duplicate layer names

Commit: ada055b
Author: Eric Tzeng
Committer: Eric Tzeng
Snapshot model weights/solver state to HDF5 files. Summary of changes:
- HDF5 helper functions were moved into a separate file util/hdf5.cpp
- hdf5_save_nd_dataset now saves n-d blobs, can save diffs instead of data
- Minor fix for memory leak in HDF5 functions (delete instead of delete[])
- Extra methods have been added to both Net/Solver enabling snapshotting and restoring from HDF5 files
- snapshot_format was added to SolverParameters, with possible values HDF5 or BINARYPROTO (default HDF5)
- kMaxBlobAxes was reduced to 32 to match the limitations of HDF5
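
The corresponding solver prototxt fields (note the preceding commit later flipped the default back to BINARYPROTO); the prefix and interval are illustrative:

```
snapshot: 5000
snapshot_prefix: "models/mynet"
snapshot_format: HDF5   # or BINARYPROTO
```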

Commit: f973819
Author: Jeff Donahue
Committer: Eric Tzeng
add double_data, double_diff to BlobProto for weights/snapshots saved when using Dtype == double

Commit: a756cfe
Author: Takuya Narihira
Committer: Evan Shelhamer
PythonLayer takes parameters by string
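
The string is passed verbatim to the Python class, which parses it however it likes; the module, class and payload here are hypothetical:

```
layer {
  name: "py"
  type: "Python"
  bottom: "data"
  top: "out"
  python_param {
    module: "my_layers"           # hypothetical Python module on PYTHONPATH
    layer: "MyLayer"              # hypothetical class implementing the layer
    param_str: "{'scale': 0.5}"   # free-form string parsed by the layer
  }
}
```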

Commit: e7b2b4e
Author: philkr
ImageData layer default batch size of 1, and check for zero batch size

Commit: a8ca6a1
Author: dicecco1
Updated lrn, relu, pooling layers to be functional using sdaccel, updated proto to handle xclbin and kernel names

Commit: 069a14a
Author: Roberto DiCecco
Adding updates for opencl

Commit: 823d055
Author: Jeff Donahue
Committer: Jeff Donahue
Add ReductionLayer to reduce any number of "tail" axes to a scalar value. Currently implements operations SUM, MEAN, ASUM (sum of absolute values), and SUMSQ (sum of squares)
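
A minimal sketch; everything from `axis` onward is reduced to a single value per remaining index:

```
layer {
  name: "sum"
  type: "Reduction"
  bottom: "feats"
  top: "sum"
  reduction_param {
    operation: SUM   # or MEAN, ASUM, SUMSQ
    axis: 1          # reduce this axis and all that follow
    coeff: 1.0       # optional scale applied to the output
  }
}
```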

Commit: eb442b9
Author: Jeff Donahue
Committer: Jeff Donahue
FlattenLayer gets a FlattenParameter with an axis, end_axis
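
With the new parameter, only the span from `axis` to `end_axis` is collapsed; the values shown are the upstream defaults:

```
layer {
  name: "flat"
  type: "Flatten"
  bottom: "conv"
  top: "flat"
  flatten_param {
    axis: 1        # first axis to flatten
    end_axis: -1   # last axis to flatten
  }
}
```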

Commit: 8c72fe3
Author: Jeff Donahue
Committer: Jeff Donahue
Add LogLayer
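
Assuming LogLayer follows the same parameterization as ExpLayer, i.e. y = log_base(shift + scale * x), a log(1 + x) sketch:

```
layer {
  name: "log"
  type: "Log"
  bottom: "x"
  top: "y"
  log_param {
    base: -1.0   # -1 selects the natural logarithm
    scale: 1.0
    shift: 1.0   # computes ln(1 + x)
  }
}
```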

Commit: aeef453
Author: Evan Shelhamer
Merge pull request #1977 from shelhamer/accum-grad: Decouple the computational batch size and minibatch size by accumulating gradients

Commit: 8b05a02
Author: Jeff Donahue
Merge pull request #2410 from sguada/datum_transform: Datum transform

Commit: 41cf06c
Author: Jonathan L Long
Committer: Evan Shelhamer
zero-init param diffs and accumulate gradients. With layers whose backward accumulates gradients, this effectively decouples the computational batch from the SGD minibatch: each iteration accumulates gradients over iter_size batches, then parameters are updated.
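
In solver prototxt terms: with a data-layer `batch_size` of 32 and `iter_size: 4`, the effective SGD minibatch is 128. A sketch:

```
# data layer batch_size: 32  ->  effective minibatch: 32 * 4 = 128
iter_size: 4
```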

Commit: c255709
Author: Evan Shelhamer
Merge pull request #1946 from nickcarlevaris/msra_init: Add MSRAFiller, an Xavier-like filler designed for use with ReLUs

Commit: 65af68d
Author: Nick Carlevaris-Bianco
Committer: Evan Shelhamer
Added MSRAFiller, an Xavier-like filler designed for use with ReLUs instead of tanh. Based on the paper: He et al., "Delving Deep into Rectifiers: Surpassing Human-Level Performance on ImageNet Classification," 2015.
- add VarianceNorm option to FillerParameters which allows one to normalize by fan_in, fan_out or their average
- update XavierFiller to use the VarianceNorm option (default behavior unchanged)
- add tests for MSRAFiller and XavierFiller
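
A weight-filler sketch using the new options; the surrounding convolution settings are arbitrary:

```
layer {
  name: "conv1"
  type: "Convolution"
  bottom: "data"
  top: "conv1"
  convolution_param {
    num_output: 64
    kernel_size: 3
    weight_filler {
      type: "msra"
      variance_norm: FAN_IN   # or FAN_OUT, AVERAGE
    }
  }
}
```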

Commit: dbd8319
Author: Jonathan L Long
clean up redundant message comments

Commit: 352aef4
Author: Jeff Donahue
Merge pull request #2466 from ducha-aiki/mvn-less: Remove unnecessary variance computation from backward in MVN layer

Commit: e8d93cb
Author: Jeff Donahue
Merge pull request #2095 from mtamburrano/skip_propagate_down_param: Added param skip_propagate_down to LayerParameter

Commit: b866d14
Author: Dmytro Mishkin
Remove unnecessary variance computation from backward in MVN layer

Commit: c7c4c64
Author: manuele
Added "propagate_down" param to LayerParameter

Commit: 4fb3c9e
Author: Simon Safar
Committer: Jeff Donahue
Added a Reshape layer for copying-free modification of blob dimensions.

Commit: fa6169e
Author: Jeff Donahue
Committer: Jeff Donahue
ReshapeLayer fixups for ND blobs

Commit: 21032b2
Author: Jeff Donahue
Committer: Jeff Donahue
Add ReshapeParameter axis and num_axes to reshape only a particular span of the input shape
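
With `axis` and `num_axes`, only a span of the input shape is rewritten; for an N x C x H x W bottom the sketch below collapses C and H into one axis, leaving N and W untouched:

```
layer {
  name: "reshape"
  type: "Reshape"
  bottom: "data"       # e.g. N x C x H x W
  top: "reshaped"      # becomes N x (C*H) x W
  reshape_param {
    shape { dim: -1 }  # infer the single collapsed dimension
    axis: 1            # start of the span to reshape
    num_axes: 2        # span covers axes 1 and 2 only
  }
}
```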

Commit: 35a5df5
Author: Jeff Donahue
Merge pull request #2177 from pgao/spp_layer: Spatial Pyramid Pooling Layer

Commit: 438cf0e
Author: PETER_GAO
Committer: PETER_GAO
Spatial Pyramid Pooling Layer
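
A sketch of the layer, assuming `SPPParameter` exposes a pyramid height and the usual pooling-method enum; with height 3 the pyramid pools over 1x1, 2x2 and 4x4 grids:

```
layer {
  name: "spp"
  type: "SPP"
  bottom: "conv5"
  top: "spp"
  spp_param {
    pyramid_height: 3
    pool: MAX
  }
}
```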

Commit: ca673fd
Author: Nick Carlevaris-Bianco
Added support for the original implementation, using (margin - d^2), through the legacy_version parameter.
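
This refers to the ContrastiveLoss layer; a sketch of the new flag (bottom names invented):

```
layer {
  name: "loss"
  type: "ContrastiveLoss"
  bottom: "feat_a"
  bottom: "feat_b"
  bottom: "pair_sim"
  top: "loss"
  contrastive_loss_param {
    margin: 1.0
    legacy_version: true   # use (margin - d^2) as in the original formulation
  }
}
```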

Commit: b963008
Author: Sergio Guadarrama
Committer: Sergio Guadarrama
Allow transform of encoded Datum. Allow initializing transformed_blob from a Datum or transform params. Allow force_color and force_gray as transform params.

Commit: 6fe2b04
Author: Jeff Donahue
Committer: Jeff Donahue
HDF5DataLayer shuffle: minor cleanup; clarification in HDF5DataParameter
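
A data-layer sketch with shuffling enabled; the source list filename is hypothetical:

```
layer {
  name: "data"
  type: "HDF5Data"
  top: "data"
  top: "label"
  hdf5_data_param {
    source: "train_h5_list.txt"   # hypothetical text file listing HDF5 files
    batch_size: 64
    shuffle: true
  }
}
```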

Commit: 249aba4
Author: wieschol
Committer: Jeff Donahue
shuffle data

Commit: bb5bf43
Author: Takuya Narihira
Committer: Takuya Narihira
PReLU Layer and its tests, described in Kaiming He et al., "Delving Deep into Rectifiers: Surpassing Human-Level Performance on ImageNet Classification," arXiv 2015. Below are the commit message histories from development:
- PReLULayer takes FillerParameter for init
- PReLU testing consistency with ReLU
- Fix: PReLU test consistency check
- PReLU tests in-place computation, and it failed in GPU
- Fix: PReLU in-place backward in GPU
- PReLULayer called an incorrect API for copying data (caffe_gpu_memcpy); the first argument of `caffe_gpu_memcpy` should be the size of the memory region in bytes. Modified to use the `caffe_copy` function.
- Fix: style errors
- Fix: number of axes of input blob must be >= 2
- Use 1D blob, zero-D blob
- Rename: hw -> dim
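
A sketch of the layer's two parameters; 0.25 is the paper's initial slope, not necessarily the proto default:

```
layer {
  name: "prelu1"
  type: "PReLU"
  bottom: "conv1"
  top: "conv1"
  prelu_param {
    filler { type: "constant" value: 0.25 }   # initial negative slope
    channel_shared: false                     # learn one slope per channel
  }
}
```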

Commit: 6ea7a66
Author: max argus
Committer: Jeff Donahue
AccuracyLayer: add ignore_label param
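
This mirrors the loss-side option: labels equal to `ignore_label` are excluded from the accuracy count (the value shown is illustrative):

```
layer {
  name: "acc"
  type: "Accuracy"
  bottom: "score"
  bottom: "label"
  top: "acc"
  accuracy_param {
    top_k: 1
    ignore_label: 255
  }
}
```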

Commit: 7a40f74
Author: Jeff Donahue
Committer: Jeff Donahue
Fixup AccuracyLayer like SoftmaxLossLayer in #1970 -- fixes #2063

Commit: 7462c84
Author: Jeff Donahue
Committer: Jeff Donahue
DummyDataLayer outputs blobs of arbitrary shape
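
After this change a dummy source can emit any N-D shape, which is handy for quick prototxt smoke tests; the shape and filler below are arbitrary:

```
layer {
  name: "dummy"
  type: "DummyData"
  top: "data"
  dummy_data_param {
    shape { dim: 8 dim: 3 dim: 32 dim: 32 }
    data_filler { type: "gaussian" std: 1.0 }
  }
}
```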

Commit: abec302
Author: Jeff Donahue
Committer: Jeff Donahue
SoftmaxLayer: generalized Blob axes

Commit: 8afdcd0
Author: Jeff Donahue
Committer: Jeff Donahue
ConcatLayer: generalized Blob axes

Commit: b868916
Author: Jeff Donahue
Committer: Jeff Donahue
SliceLayer: generalized Blob axes

Commit: 29581e6
Author: Jeff Donahue
Committer: Jeff Donahue
InnerProductLayer can multiply along any axis
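
A sketch of the generalized inner product; with a T x N x D bottom and `axis: 2`, the multiplication runs along D and the leading T x N axes are preserved:

```
layer {
  name: "ip"
  type: "InnerProduct"
  bottom: "seq"   # e.g. T x N x D
  top: "ip_out"   # T x N x num_output
  inner_product_param {
    num_output: 100
    axis: 2
  }
}
```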